
The race to AGI

“One of the things we should be careful of when it comes to AI is to avoid ‘Race Conditions’, where people working on it across companies get caught up in who’s first, so that we lose sight of the potential pitfalls and downsides to it. [..] Does that keep me up at night? Absolutely.”

“The problem with that is that it creates a self-fulfilling prophecy, so the default there is that we all end up doing it.”

We want the benefits of AI, but we also want to make sure that it’s safe and doesn’t end in disaster. Individually, AI companies and countries want to balance safety with progress, but achieving that balance requires all of them to slow down and prioritize safety. AI companies know this, and some of them have taken steps to prevent an AI race in the past.

But the incentives to race are too strong. Having the best AI model is a tremendous competitive advantage: it brings more customers, more data, and more investment.

AI companies and countries are stuck in a race to the bottom. We can see this dynamic at two levels:

Companies

OpenAI was created explicitly as a non-profit dedicated to safety, to “create a counterweight to Google and DeepMind”. Today it is racing to achieve AGI, backed by billions of dollars of investment from Microsoft.

Nations

It’s not just companies racing towards AGI; nations are racing, too. They also have competitive advantages to gain from having the best AI: there is a lot of money to be made from hosting a successful AI company. France and Germany lobbied against the EU’s AI Act because they were concerned that its safety requirements would slow down their own AI companies. And the reasons are not only economic; they are also military. Countries are increasingly aware that powerful AI is a strategic asset. In the US, DARPA is now collaborating with OpenAI (for example, in its AI Cyber Challenge).

The Solution: International Coordination

To keep future large AI models safe and controllable, we will likely need comprehensive international organizations to ensure that all countries and companies adhere to internationally agreed-upon quality and safety standards. For example:

  • UN (United Nations)
  • IAEA (International Atomic Energy Agency)
  • And perhaps even a new organization modeled after the “Manhattan Project”

We may also need binding treaties signed by all countries capable of training very large AI models. These treaties would ensure that no country cuts safety requirements in an effort to release a large, potentially dangerous AI model onto the public internet ahead of others.

Fortunately, the computing hardware required to train these large AI models is (currently) very specialized and expensive. With proper organization and tracking, we could prevent rogue or unsafe actors from training and deploying such a model, similarly to how we track and limit access to dangerous pathogens and nuclear materials.

“Giving people time to come to grips with this technology, to understand it, to find its limitations, its benefits, the regulations we need around it, what it takes to make it safe, that’s really important. Going off to build a super powerful AI system in secret and then dropping it on the world all at once I think would not go well.” — Sam Altman, US Congressional Testimony 2023