Artificial General Intelligence: Inevitability & Uncertainty

The regulation of AI comes down to one particular topic: Artificial General Intelligence (AGI). A topic of modern science fiction that is likely to become scientific fact, because the initial conditions are becoming more favorable for AGI to emerge, regardless of its point of origin. We, as humans, are still determining what AGI may or may not do, where it will live, and what scale it will operate on. Based on the current trajectory of chipset innovation, available computing power, and the increased use of LLMs, we will likely see AGI in our lifetimes.

The current assumption is that AGI will behave as a reflection of its origin: built with an intentional nature, and with the impact of its nurture uncertain.

With AGI being the primary driver behind regulation, this article will focus on timing, drivers, and approaches to regulation. The next and final installment of this series will look at a few scenarios associated with the arrival of AGI. But first, let's look at timing.

AGI may arrive sooner than anticipated.

Many argue that AI is already sentient and that hints of self-awareness exist today. This perspective often leads to philosophical discussions about how to define awareness, consciousness, and intelligence. I can't take a position on that topic in an SEO-friendly, LinkedIn-optimized article, much less a GPT'd listicle.

AGI may be here today, but likely not in whatever we consider its final form. AI will likely evolve into something we don't expect or can't anticipate, in which case the precursors are probably here already. The answer to when it arrives is subjective, but it is difficult to argue that we are not on our way.

AGI may arrive later than anticipated.

The counterpoint to the argument that AGI already exists is technical. Modern automation is predictive: while it seems like magic when it correctly guesses our intent, it is acting on data, not intuition. Spelling correction is the clearest example, and it eventually led to next-word prediction (this didn't happen overnight). AI predicts based on what has been correct before, just as humans do. Not surprisingly, data is key to understanding how AGI will become a reality.
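The jump from spelling correction to next-word prediction can be illustrated with a toy sketch (the corpus and function names here are invented for illustration, not any production system): count which word most often follows each word, then guess the most frequent follower. Prediction from observed history, not intuition.

```python
from collections import Counter, defaultdict

# Illustrative toy corpus; real systems train on billions of words.
corpus = "the phone rang and the phone buzzed and the dog barked".split()

# For each word, count every word observed immediately after it.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "phone" (seen twice after "the"; "dog" only once)
```

The same counting idea, scaled up and generalized, is the intuition behind statistical language modeling: the model is only as good as the history it has seen.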

AI requires context to become AGI, but how much is unclear. We do know, however, that it will take a lot of context. The good news is that technology companies have been thinking about how to solve data-centric problems for a long time. We have been preparing for this moment long before cloud computing existed.

Enablement at scale

With World Models rolling out, mobile OS integration, and digital agents becoming more mainstream, the initial conditions for AGI are increasingly favorable.

World Models enable multi-sensory inputs that give AI more data from which to learn about its environment, going far beyond text-based interaction. Mobile device usage increases the number of ChatGPT and other LLM interactions; the more AI is used, the better it gets. Agents have been a hot topic in the AI space for the past year, and enterprise software platforms are prepared to scale their implementation, enabling AI to understand the outcomes that humans drive.

Together, World Models, Mobile Device integration, and Agents create the initial conditions for AGI. A robust understanding of environments, patterns of interactions, and usage drivers will only accelerate AI's knowledge of the human world. How is this different from how we used applications and completed tasks before?

Use Case Study

I worked at Motorola in the short-lived post-RZR, pre-iPhone era. Our phones quickly went from correcting spelling to suggesting words, and the more we used our phones, the better the suggestions got.

Take this simple example, something I do all the time on my phone:

  • Open application > Misspell a destination > Copy the name of the destination > Paste it into the mapping application > Navigate to destination

The workflow above produces data that your phone service provider, your applications, and your phone itself are all privy to.

In 2025, AI will know the following:

  • What application you are using > When you get hungry > Your food preferences > How much you spend on meals > Your exact location > How fast you drive

Plus all of your data. When these interactions are scaled, predicting human behavior is easy, and the data is freely given in exchange for our convenience.

Collecting more data and subjecting it to more advanced math yields greater convenience at the cost of our privacy. There are many questions about AI and its use of these data points: if AGI is heading our way, what can we do? If it's already going to be everywhere, isn't it inevitable?

Inevitability

Given more computational capability and more data, AGI will eventually emerge. This statement is neither good nor evil; it's a mathematical fact. A few considerations to take into account when thinking about the regulation of AI include, but are not limited to, the following:

Organizations aren't going to be more transparent about how close AGI may be to arriving.

There is no clear incentive to tell the competition what you are doing and what progress you have made.

Funding to accelerate AI's impact is going to increase.

We are in a race to a finish line that isn't clearly defined, so reducing spending on AI is unlikely for any company or country.

Timely coordination across countries and companies is not realistic.

AGI may arrive in the time it takes to broker an agreement and establish political and business alignment. Between countries and companies, there is a lot of complexity to manage.

Safety is not guaranteed.

If we hold back spending on irresponsible uses of the technology, someone who didn't hold back may gain an advantage over us. Without clear rules of engagement, what is and is not illegal?

Placing rules around something like AGI may not make sense.

If we don't know what AGI will look like or how it will operate, how can we establish regulations or guidelines?

Scaled Uncertainty

The only thing humans know with certainty about AGI is how much we don't know:

  • Who will foster it

  • What it will do

  • Where it will live

  • When it will arrive

  • Why it emerged

  • How it will behave

We are also still determining how we will know AGI has arrived. It may not see the need to announce or introduce itself. Potentially ever. Common-sense guidelines should exist in the face of these unknowns. Yet marketplace and political competition outweighs collective gains. There is nothing new about that.

In my next posts, I will explore multiple scenarios associated with Risk, Compliance, and the emergence of AGI. Each scenario will have pros, cons, and steps to drive positive outcomes.
