Despite AI’s rapid integration into our daily lives and its extraordinary capabilities, only 32% of people in the U.S. trust it. This striking statistic from the 2025 Edelman Trust Barometer reveals a critical paradox: as artificial intelligence grows more impactful, public confidence in it remains alarmingly low.

Left unaddressed, this confidence gap could create a self-reinforcing cycle where skepticism limits AI adoption, reducing opportunities to demonstrate its value and further entrenching public mistrust. What’s driving this skepticism, and how can we bridge this trust gap before it hinders AI’s ability to transform our society for the better?

Background on the Edelman Trust Barometer

“Trust takes years to build, seconds to break, and forever to repair.” – Anonymous

Edelman, a global communications firm, has studied trust for the past 25 years. Edelman defines trust as the “willingness to rely on an entity to do what is right,” built on two dimensions:

  • Competence: delivering on promises effectively
  • Ethical Behavior: acting with integrity and fairness

Edelman views trust as the ultimate currency in the relationship that all institutions—spanning business, governments, NGOs, and media—build with their stakeholders. Trust is fundamental in every aspect of life because it forms the foundation for cooperation, stability, and growth.

Key Facts from The 2025 Trust Barometer

A few of the facts that stood out to me from Richard Edelman’s keynote and the global report:

  • Global trust is flat at 56% compared to 2024 (U.S. at 47% vs. 46% in 2024)
  • Fears about globalization, the economy, and technology are worsening job insecurity for employees worldwide:
    • 58% fear automation (+5 points from 2024)
    • 58% fear a lack of training (+2 points from 2024)
  • 61% have a moderate or high sense of grievance against business, government, and the rich. Grievance is defined as the belief that government and business make their lives harder and serve narrow interests, while wealthy people benefit unfairly from the system.
  • As illustrated in Figure 1 below, a greater sense of grievance is eroding trust across all institutions and depressing trust in AI and its use by business.


Figure 1. Correlation between grievance levels and trust in AI (Source: 2025 Edelman Trust Barometer).
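To make the relationship in Figure 1 concrete, here is a minimal sketch of how such a correlation is computed. The numbers below are hypothetical, illustrative values, not data from the Edelman report; a negative Pearson coefficient mirrors the pattern the report describes, where higher grievance tracks with lower trust in AI.

```python
# Illustrative only: hypothetical values, NOT Edelman data.
from statistics import correlation  # Pearson's r; requires Python 3.10+

# Hypothetical respondents: grievance index (0-10) and trust-in-AI score (0-100)
grievance = [2, 3, 4, 5, 6, 7, 8, 9]
trust_in_ai = [68, 62, 57, 51, 44, 40, 33, 28]

r = correlation(grievance, trust_in_ai)
print(f"Pearson r = {r:.2f}")  # negative: higher grievance, lower trust
```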

The technology sector insights highlight:

  • Global trust in AI is 49% vs. 50% in 2024 (U.S. at 32%, up 2 points from 2024)
  • 44% are comfortable with business using AI (U.S. at 32%)
  • 27% embrace the growing use of AI, down 3 points vs. 2024 (U.S. at 19%, flat vs. 2024)

The Promise of AI: Why Trust Matters

One of the key factors driving low trust in AI is the fear of job displacement. It is hard not to run into articles with dramatic headlines, like this Forbes article stating that “Goldman Sachs Predicts 300 Million Jobs Will Be Lost or Degraded by Artificial Intelligence.”

Historically, other general-purpose technologies have sparked similar fears. When Gutenberg introduced the printing press, scribes feared displacement, yet an entire publishing industry emerged. Likewise, AI may not simply eliminate jobs but transform industries in ways that create new opportunities. McKinsey Digital reports that generative AI and other technologies could automate 60 to 70% of work activities, shifting job roles rather than fully replacing them. However, concerns persist: how will jobs shift, and will employees receive the training needed to transition successfully into these new roles?

The challenge, or opportunity (depending on where you fall on the trust spectrum), is that AI is advancing at a rapid pace and demonstrating benefits across many sectors. In healthcare, AI systems are helping detect diseases earlier and with greater accuracy than human physicians alone. In technology, AI is automating the generation of code.

AI is here to stay, even if we experience a bubble. Business and government have an important opportunity to help people build trust in AI, not only to ease fears but, more importantly, to ensure our society can fully benefit from its advancements.

Business Impact on Trust in AI

Businesses play a crucial role in shaping how AI is perceived. To foster trust, they can help in a few key areas:

  • Increase AI Literacy: Help people understand what AI does and how it works, and help them cut through the marketing and media hype to see what merits consideration.

  • Empower Employees: Provide training on AI tools and help employees understand how those tools can benefit their roles. Breaking down barriers to technology access will be key.

  • Ensure Transparency: Clearly communicate where AI is being piloted, what is being learned, and how it impacts products and operations. If investments are changing how work is done, share how the business is supporting people through the change.

  • Facilitate Open Dialogue: Create channels for employees and customers to voice concerns and receive honest responses that shape products and capabilities.

Safety is Critical

While businesses can take significant steps to build trust through transparency and education, these efforts will be undermined without addressing the fundamental safety concerns that make many people wary of AI. The global safety risk can’t be overlooked given AI’s dual use as both a beneficial and a harmful technology. As progress is made toward Artificial General Intelligence and, eventually, Artificial Superintelligence, AI’s lower barrier to entry raises the risk that rogue actors could launch global cyberattacks at the click of a button, crippling our infrastructure.

In “The Coming Wave,” Mustafa Suleyman argues for an urgent containment strategy to mitigate the risk of harmful outcomes. Suleyman defines containment as a set of interlinked and mutually reinforcing technical, cultural, legal, and political mechanisms for maintaining societal control of AI during this time of exponential change. Through the process of “thinking about the unthinkable,” Suleyman breaks containment down into a set of steps which, if acted on, can help ensure that AI is adapted from the start to people, their lives, and their hopes:

  • Safety Programs: Require frontier companies to invest in AI safety programs and share findings with government so results can be tracked.

  • Government Audits: Conduct regular, independent reviews to verify that AI systems function as intended, supported by red teams that stress-test those systems.

  • Development Choke Points: Implement regulatory mechanisms, such as restricting the sale of certain Nvidia chips to China, to control AI’s proliferation, much as aviation and pharmaceuticals are regulated.

  • Aligning Profit with Social Purpose: Create legal frameworks that incentivize companies to properly balance financial returns with responsible AI development.

  • Stronger Government Capabilities: Establish in-house AI expertise to regulate effectively, rather than relying solely on private sector influence.

  • Global Alliances: Create a multinational approach to AI oversight, akin to nuclear non-proliferation treaties, ensuring responsible AI development across borders.

  • A Culture of Learning from Failures: Apply the aviation industry’s rigorous post-mortem approach to AI mistakes, fostering accountability and improvement.

The key point is that we can neither keep passively riding this existing “wave” of technology nor stop it. We have to create the change that allows it to be sculpted, ensuring safety and, in turn, building trust.

Conclusions

The trust gap in AI presents both a challenge and an opportunity. By prioritizing education, transparency, and containment, we can build AI systems that are not only powerful but worthy of public trust built on a strong foundation of safety. We have the ability to transform AI from a technology people fear into one they actively embrace and that benefits our society.

And by “we,” I mean each of us. The responsibility for creating trustworthy AI systems extends beyond businesses and governments to every individual. Anyone, anywhere, can lead change, whether through advocacy, education, or thoughtful adoption. As an example, I learned about the Future of Life Institute, whose mission is to “steer transformative technology towards benefiting life and away from extreme large-scale risks.” AI is one of their focus areas, and they provide options for taking action, from supporting informed policy development to participating in public discourse on AI safety.

As I continue to explore these possibilities on Intelligence Reimagined, I invite you to share your thoughts: how can we collectively build AI systems worthy of trust?

