Author: Matthew Renze
Published: 2025-08-01

What are the biggest risks of AI — today, tomorrow, and beyond?

As we move further into the AI revolution, we’re beginning to see its benefits.

However, we are also starting to see several new risks emerge.

Unfortunately, it’s hard to separate the hype from reality.

So, what are the real risks of AI that we should be concerned about now and in the future?

To answer that question, here are the three biggest risks of AI that I see in the short, medium, and long term.

Misinformation

In the short term, the biggest risk that AI poses to our society is AI-generated misinformation.

We’ve been dealing with propaganda and misinformation since the dawn of human civilization. However, with AI, the volume, quality, and pervasiveness of misinformation could reach a scale we’ve never seen before. The average person already struggles to identify reliable sources of news, information, and knowledge. However, we’re entering an era where even experts may struggle to tell what’s real and what’s fake.

Today, we can generate entire misinformation pipelines with AI — AI-generated social media posts, created by AI-generated avatars, linking to AI-generated articles, citing AI-generated sources, hosted on AI-generated websites. An army of AI agents can flood the internet with a web of misinformation and populate your social media feed with millions of convincing posts.

In addition, nefarious actors can now generate fake images, audio, and video — called “deep fakes” — that are nearly indistinguishable from reality. Anyone already stuck in an information bubble or echo chamber will be pulled even deeper into the misinformation rabbit hole.

Fortunately, there are several strategies we can use to fight back. As individuals, we can educate ourselves — learning to think critically, verify sources, and recognize the hallmarks of AI-generated content.

However, education alone is not enough. As a society, we also need to:

  • develop better technology for detecting deep fakes
  • create sensible regulations to prevent the spread of AI-generated misinformation
  • enforce strong penalties for the malicious use of AI-generated content
  • build better systems to verify the factual accuracy of information
  • establish immutable audit trails for digital information (e.g., using a blockchain)
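The last bullet above can be illustrated with a minimal sketch. This is a simple hash chain in plain Python — the core idea behind blockchain-style audit trails, not any particular system: each record’s hash covers the previous record’s hash, so silently altering any past record invalidates every record after it.

```python
import hashlib
import json

def add_record(chain, content):
    """Append a record whose hash covers the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = {"content": content, "prev_hash": prev_hash}
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    chain.append({**payload, "hash": digest})

def verify(chain):
    """Re-derive every hash in order; any tampering breaks the chain."""
    prev_hash = "0" * 64
    for record in chain:
        payload = {"content": record["content"], "prev_hash": prev_hash}
        expected = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != expected or record["prev_hash"] != prev_hash:
            return False
        prev_hash = record["hash"]
    return True

chain = []
add_record(chain, "Article published: 2025-08-01")
add_record(chain, "Headline corrected")
assert verify(chain)

chain[0]["content"] = "Article published: 2024-01-01"  # rewrite history
assert not verify(chain)  # the tampering is detected
```

Real audit-trail systems add distribution and consensus on top of this, so no single party can quietly regenerate the whole chain — but the tamper-evidence itself comes from this simple chained-hash structure.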

Unemployment

In the medium term, the biggest risk of AI is unemployment due to automation.

Over the next decade, AI is expected to automate a large number of economically valuable jobs. We saw similar patterns during previous technological revolutions. The Industrial Revolution reduced demand for artisans and craftsmen. The Information Revolution reduced demand for paralegals and clerks. Likewise, the AI Revolution will impact knowledge workers and, eventually, physical laborers.

While some jobs are more resistant to automation (e.g., nurses, teachers, etc.), many jobs are unlikely to exist a decade from now. These include jobs that are simple, repetitive, costly, dangerous, or error-prone. However, many of these jobs won’t completely disappear. Rather, the people in these roles will become managers of teams of software agents (or robots) that perform the line-level tasks they once did themselves. Essentially, everyone becomes a manager.

To prepare for this wave of AI-enabled automation, we as individuals can:

  • learn new AI skills to remain relevant in a rapidly evolving job market
  • practice using AI tools to automate the routine tasks in our jobs
  • improve our uniquely human skills — to offer value in areas where machines struggle
  • embrace working with the machines rather than fighting against them

However, we as a society also need to prepare for what’s rapidly approaching. We need to:

  • retrain employees to become managers of teams of AI agents and robots
  • provide social safety nets to help those displaced by automation
  • learn how to redistribute wealth in an economy where machines create most of the value

This could involve a Universal Basic Income (UBI), a Negative Income Tax, a social stipend, or some form of universal basic ownership.

Misalignment

In the long term, the biggest risk of AI is an existential threat due to misalignment of goals and values.

How do we keep the goals and values of humanity and AI in alignment in this decade and beyond? AI is on a trajectory to become more intelligent than the average human. Eventually, it will surpass the smartest humans and even the collective intelligence of all humanity. So, how do we ensure that we remain relevant — and safe — in a world where machines are vastly more powerful and capable than we are?

Right now, we don’t have a practical solution to the alignment problem. In fact, we don’t even know if a complete solution is possible. However, many AI researchers are actively working on technical approaches to this problem — for example, constitutional AI, reinforcement learning from human feedback (RLHF), interpretability tools, and scalable oversight.

In the meantime, there are a few things we can do to increase the likelihood that humanity and AI stay aligned. We can:

  • learn how to use AI responsibly and ethically
  • encourage our governments to enact sensible AI regulations
  • build safeguards against malicious actors using AI for harmful purposes

However, solving the alignment problem isn’t just about keeping AI aligned with our human goals and values. We, as a society, also need to learn how to align humanity with AI. To do this, we can:

  • learn how to co-evolve with AI in mutually beneficial ways
  • use this new technology to benefit all of humanity and life on Earth
  • become better stewards of our shared environment — both organic and synthetic


While the risks of AI are significant, the potential benefits are enormous. We can’t run from this technology; we need to embrace it. Humanity needs to learn how to live in harmony with our technology.

To learn more, please check out my series of articles on Artificial General Intelligence.