Trust in technology is waning, and skepticism surrounding AI systems is even more pronounced. Despite the promises of innovation and efficiency, recent research by Sonata Insights and the IAB indicates that even the most tech-savvy, young consumers are wary of messaging created by AI. Consumer skepticism about AI isn't arbitrary. According to the Edelman Trust Barometer's Insights for the Tech Sector report (2024), there is a significant 26-point gap between trust in the tech industry (76%) and trust in AI (50%). This gap is further exacerbated by key concerns such as privacy (39%), the devaluation of what it means to be human (36%), harm to individuals (35%), and job loss/security (22%). These findings raise a crucial question: How can an innovative organization seeking to enhance processes and services with AI earn the trust of consumers?
For organizations aiming to leverage AI, understanding the barriers to AI adoption and addressing consumer concerns is paramount. When evaluating a new technology, individuals weigh the benefits against the risks, and the perceived benefits must outweigh the risks for adoption to occur. As Bingley et al. (2023) note in their study of user experiences with AI, seamless functionality and ease of adoption are critical for end users: four in 10 (39%) of those who have had negative experiences with AI cite poor functionality as a top barrier to adoption. The key to AI adoption, then, is ensuring that people both experience and hear about the positive outcomes of this technology. In fact, the same study shows that four in 10 (42%) consumers who have had good experiences with AI list social impact as the leading factor shaping their positive attitude toward AI.
Human-Centered AI: Bridging the Gap Between Builders and Communicators
Human-centered AI (HCAI) emphasizes shifting the focus from technology to people. According to Bingley et al., functionality and social impact are the defining features of positive user experiences with AI. While developers certainly prioritize functionality, they also focus on ethics, privacy, and security -- important concerns, but not always aligned with users' priorities or knowledge. For AI to be truly human-centered, it must not only work well but also show users how it benefits them and the larger society. This is where developers need to partner with communication professionals to demonstrate how AI-powered technologies and experiences can positively impact humanity.
Path to Building Trust in AI
Building trust in AI requires a multifaceted and balanced approach to communications, striking just the right chords to appeal to consumers' sensitivities: transparency, personalization and social impact.
Conclusion
The path to earning trust in AI is challenging but achievable. By prioritizing transparency, personalization and social impact in their communications, organizations can foster a more positive perception of AI. As trust in technology continues to evolve, organizations will need to go beyond functionality in their AI applications to increase reach and adoption. Innovative organizations that align their AI strategies with human-centered principles will be better positioned to succeed.