Fostering User Trust in AI: Insights from a UX Perspective
Amanda Kassick, June 6, 2024 (updated June 17, 2024)

You know when you read something so impactful that it keeps coming back to you over and over again? That was me with “Humans and Automation: Use, Misuse, Disuse and Abuse” (Parasuraman & Riley, 1997). I’ve lost count of how many times I’ve cited that study. The article’s main goal is to define the four ways in which we interact with automation, but what hooked me is its point that our trust in those systems is one of the main drivers of how willing we are to rely on them.

As AI becomes more present in our lives, from recommending products to diagnosing diseases, some people hesitate to trust it. We can’t blame them when we still see strange images of hands that just aren’t realistic, or systems that can’t correctly work out how long ago 1919 was. Hiccups aside, let’s dig into what makes people trust AI, how UX design can make it more trustworthy, why transparency matters, how to tackle bias and fairness issues, how people perceive its accuracy, and how to handle mistakes without losing that trust.

What does “trust” entail?

Trust in AI involves a few key things: reliability, competence, integrity, and benevolence. Reliability means the AI consistently does what it’s supposed to. Competence is about how accurately it performs its tasks. Integrity refers to whether it behaves ethically, and benevolence is all about whether it’s looking out for the user’s best interests.

UX design plays a vital role in earning people’s trust in AI. By emphasizing clarity, simplicity, and predictability in user interactions, designers can strengthen users’ confidence in AI systems. Providing clear feedback, guiding users through the process, and minimizing cognitive load are all things to keep in mind as we create those designs.

Transparency is key to building trust in AI. Parasuraman and Riley (1997) note that what differentiates the (correct) “use” of automation from the less desirable “misuse”, “disuse”, and “abuse” is the user’s ability to monitor that system. To monitor something, we must understand how it works. So, in this context, users need to grasp how AI works: its algorithms, where it gets its data, and how it makes decisions. Showing confidence levels or the relevant data inputs, for instance, helps users judge how reliable and accurate the AI is (I’ll sketch this idea in code toward the end of the post).

Another relevant, and valid, concern is that bias in AI can wreck trust and make outcomes unfair. Designers need to tackle bias in how data is collected, how algorithms are designed, and how models are trained to ensure fairness. Techniques such as algorithmic auditing, diverse dataset curation, and regular bias assessments can mitigate bias and promote fairness in AI systems.

What if the system fails? Can we still build trust?

Users’ perceptions of AI accuracy shape their trust in these systems. Some research suggests that users tend to overestimate AI capabilities in some contexts, leading to unrealistic expectations. Educating users, promoting AI literacy, and being transparent about AI’s limits and uncertainties are vital to keeping expectations realistic. Offering channels for feedback, and continuously improving based on it, can also boost both perceived accuracy and trust.

As with any other system, errors will inevitably occur with AI, so how they are handled can make or break user trust. Error detection mechanisms should be implemented, informative error messages provided, and clear recovery paths offered to users. Maintaining open communication about errors and acknowledging AI’s fallibility can reassure users and preserve trust.
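To make that concrete, here is a minimal sketch of what graceful error handling around an AI feature might look like. It’s written in TypeScript, and the function names (fetchAiSummary, retryOrFallBackToManualEditor) are hypothetical stand-ins, not any real API:

```typescript
// Stubs so the sketch is self-contained; a real app would implement these.
declare function fetchAiSummary(text: string): Promise<string>;
declare function retryOrFallBackToManualEditor(text: string): Promise<void>;

type AiResult =
  | { status: "ok"; summary: string }
  | { status: "error"; userMessage: string; recover: () => Promise<void> };

async function summarizeWithRecovery(documentText: string): Promise<AiResult> {
  try {
    // Happy path: the (assumed) AI service returns a summary.
    const summary = await fetchAiSummary(documentText);
    return { status: "ok", summary };
  } catch (err) {
    // Error detection: keep the technical details for the team...
    console.error("AI summarization failed:", err);
    // ...while the user gets an informative, non-technical message
    // and a clear recovery path instead of a dead end.
    return {
      status: "error",
      userMessage:
        "We couldn't generate a summary this time. You can retry, or " +
        "continue and write one yourself.",
      recover: () => retryOrFallBackToManualEditor(documentText),
    };
  }
}
```

The design choice worth noticing is that the failure case is a first-class result with its own message and recovery action, so the interface never has to show a raw stack trace or a silent blank state.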
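And while we’re sketching, here is the transparency idea from earlier: surfacing confidence levels so users can judge how much to rely on a result. The thresholds and wording below are assumptions I made up for illustration, not guidance from the cited study:

```typescript
// Hypothetical sketch: pairing a model's prediction with its confidence
// so users can calibrate their reliance on it.

interface Prediction {
  label: string;
  confidence: number; // 0..1, as reported by the (assumed) model
}

function describePrediction(p: Prediction): string {
  const percent = Math.round(p.confidence * 100);
  if (p.confidence >= 0.9) {
    return `${p.label} (high confidence: ${percent}%)`;
  }
  if (p.confidence >= 0.6) {
    return `${p.label} (moderate confidence: ${percent}%, worth a second look)`;
  }
  // Low confidence: be explicit about uncertainty rather than hiding it.
  return `Possibly ${p.label} (${percent}%: please verify before relying on this)`;
}

// A low-confidence result is flagged as uncertain, not presented as fact.
console.log(describePrediction({ label: "Invoice", confidence: 0.55 }));
// -> "Possibly Invoice (55%: please verify before relying on this)"
```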
Trust is essential for the successful adoption and use of AI technologies. By prioritizing transparency, fairness, accuracy, and effective error handling in UX design, we can cultivate trust and confidence in AI systems. Remember, trust is fragile, but it’s also a powerful driver of user engagement. With thoughtful UX research, we can build AI systems that users trust and rely on with confidence.

Parasuraman, R., & Riley, V. (1997). Humans and Automation: Use, Misuse, Disuse, Abuse. Human Factors, 39(2), 230-253. https://doi.org/10.1518/001872097778543886