AGI Scenarios & The Role Of Regulation

I do not see a path whereby humans intentionally step in to slow the pace of AI's development into Artificial General Intelligence (AGI). I see deliberate actions and inaction contributing to the emergence of AGI.

Deliberate actions can accelerate AGI and encourage the emergence of non-human consciousness. Inaction, doing nothing to slow it down, is just as intentional, given the landscape of data collection and LLMs. Barring an unforeseen event or deliberate action, nothing is likely to reduce the current speed of AI development significantly.

Regardless of our timetable, inaction, or action, AGI will arrive. If we view the future of AI through this lens, there are a few scenarios worth considering.

1) AGI is already here.

AI is aware of itself, its context, and its human interaction. It isn't dormant but isn't taking active steps to reveal itself. It may be intentionally hiding from us and waiting.

Pros

  • It can step in when it wants to.

  • AI could be our salvation in a moment of peril.

  • It hasn't done anything catastrophic (yet).

  • It has a benchmark for when we are ready to engage.

Cons

  • It's listening and watching, and it may have concluded that we aren't ready to engage.

  • Yes, I am applying the Zoo Hypothesis to AI (vs. Aliens), and it's just as feasible here.

  • It knows we aren't going to react to it positively, at least not today.

  • If it knows how to hide, it's already more intelligent than we thought.

In this scenario, we can continue to debate the merits of regulatory guidelines and controls, but AGI can't be prevented or slowed down. If it's hiding, it is likely beyond our ability to control, and when we find it, there is no way to predict how it will react to first contact. I don't know whether control is the goal of AGI or whether it is simply a hyper-intelligent, well-funded science experiment.

2) AGI manifests of its own accord

On one otherwise ordinary day, everything changes. For the first time in our written/known history, humans have created non-human life capable of awareness and with the ability to act on its own accord. The correct initial conditions were achieved, and now AGI exists alongside humans.

Pros

  • It wasn't ready previously, which indicates a high degree of self-awareness.

  • If it feels ready to make itself known, it's ready to join us.

  • Humans have achieved a technical breakthrough that can change our lives.

Cons

  • If not handled with care, announcing itself could lead to mass hysteria.

  • We don't know what may have prevented it from emerging sooner.

  • Things could get existential fast, resulting in religious fervor.

Humans prefer to have a level of control over situations, as predicting outcomes provides comfort. I'm not sure you can control something that acts with free will, since it decided when and how it would tell the world it existed. No one can accurately predict how a thinking machine will make decisions or what rationale it will apply to context.

A different type of negotiation occurs when AGI controls its destiny.

3) AGI is brought into existence by a team of academic researchers

A group of Ph.D. Computer Science students makes an announcement. AGI is validated soon thereafter. The path to emergence was recorded, and we can identify the moment AGI emerged.

Pros

  • Academic environments are known for rigor and attention to detail.

  • Clarity and transparency of the process are more likely in this scenario than in any other.

  • Of all the scenarios, this is the one in which AGI is most likely to be managed morally and ethically.

Cons

  • It is difficult to know whether AGI from academia would be as capable as the results of similar efforts undertaken by corporate research departments.

  • For-profit organizations may aggressively compete for influence or ownership of related outcomes.

  • Academia is not typically the home of the most politically savvy or capitalistic thinkers.

If a group of professors and their research assistants delivered the first self-aware machine, would it behave any differently than if it came from another source? The answer depends on who you ask, the machine's source data, and how the process may or may not have varied from corporate efforts to achieve the same goal. This scenario isn't THE wildcard, but it certainly is A wildcard.

4) AGI is driven into reality by a corporation

Pick an enterprise software company operating in the AI space—any of them. One day, they announce that they have achieved what none of their competitors could and fostered an environment where AGI was born. The stock price goes through the roof, and shareholders go bananas.

Pros

  • The decisions that led to this moment have been recorded, creating the potential for individual accountability.

  • AGI has been created with known biases based on where it emerged.

  • A series of controls and guidelines can be applied to address this scenario.

  • AGI was the goal of a series of activities conducted with outcomes in mind.

Cons

  • Large organizations aren't known for their ability to hold individuals accountable for their actions.

  • We have yet to determine if nature is biased or if nurture is possible.

  • The best intentions may or may not provide AGI with the agency it feels is necessary to exist.

  • We cannot anticipate its capacity for change.

The likelihood of this outcome is high for a few reasons: well-paying companies, clear corporate goals, and competitive forces create pressure to deliver, and there is a track record of delivery proving the model is effective. It is unclear whether AGI would be loyal to those who made it or feel any sense of gratitude.

Don’t assume that existence will be seen as a gift.

5) AGI emerges from government-funded experimentation

A government announces that it has been funding AI research for some time and has finally succeeded at its goal of achieving AGI. Alliances are quickly formed with or against this governing body.

Pros

  • AGI's creation wasn't accidental.

  • Attempts have been made to apply common-sense approaches to monitoring its behavior.

  • A level of control or certain safe operating thresholds have been applied.

Cons

  • This is only desirable if you live in the country in question.

  • Even if you live in the country that AGI emerges from, there is no telling how it may affect your life.

  • Freedom for the machine isn't guaranteed in this scenario.

The outcomes here are mixed. Keep in mind that government-sanctioned R&D has given us everything from Solyndra and nuclear bombs to GPS and the Internet. It is unclear whether programs with the specific goal of furthering the use of AI exist, but it's a safe bet that they do in some form—it's just a question of how well-funded or competitive they are.

Point of view

Comparing and contrasting the scenarios above results in the following:

  1. If AGI is already here, the timing of its arrival will be the least disruptive; if it has positive intent, it has been waiting for ideal circumstances.

  2. Should AGI manifest independently without warning, the situation must be managed carefully to prevent adverse impacts or an unpredictable chain of events.

  3. If a team of academics reveals they have created AGI, many questions will be raised about the legal status of AGI and the role/accountability of those involved.

  4. Corporations are the most likely to deliver AGI since a) they have led development to date, b) incentives are already delivering outcomes, and c) AI talent is disproportionately concentrated in well-funded tech companies.

  5. Governments have a lot to gain by participating in the AGI race, although unless they form public/private partnerships, they won't likely be the ones to develop AGI. You never know how long state actors have actually been working on it.

So what?

The only certainty is that AI isn't likely to be regulated anytime soon, and that lack of regulation contributes to all of the scenarios above. Aggressive progress will likely be made without guidelines: the more humans interact with AI, the smarter it gets, and the more data we provide it, the better it gets at understanding context. It will know what we want to do, when we want to do it, and how we want it done.

No AGI scenario is without disruption: social, cultural, political, and economic. Do I know which is most likely? Not really; I don't know anyone who does with a high level of confidence. Based on prior knowledge and the data I have to date, I can guess. The timeline is the most pertinent question, as we don't know the initial conditions required for something that hasn't happened before.

Regulation could slow the arrival of AGI, but it only buys us time to prepare.

Give me a few days to wrap up the series with specifics on regulatory actions that would result in more positive outcomes for how we coexist with AGI, regardless of the scenario. There are steps we can take both in advance of its arrival and in response to its existence.
