Constraints Drive Innovation: The Case for AI Regulation

Regulation is fundamental. With too little of it, we risk repeating the mistakes of the past; with too much, we invite rebellion and social backlash. What matters is getting the amount right.

Laws are meant to protect us and keep us from harming one another, and they apply to all citizens equally. Regulations uphold laws by implementing and enforcing them: a regulation is not itself a law, but it spells out how a law is put into practice.

Regulations vary from city to city and state to state in the US. Neighboring countries tend to coordinate with one another to achieve clarity and parity wherever possible. The United Nations does what it can, but keeping everyone playing by the same rules globally is a challenge.

Regulation became a trending topic in late 2024, as it does in most election cycles. This time, however, AI has kept interest in the subject unusually high.

In the first post of this series, I addressed human nature, regulation, and the cost of enforcement through the imperfect metaphor of driving. After this article, I will share examples of regulation applied too little, too late, and too much. The series will conclude with potential scenarios and Gyroscope's point of view on AI, Regulation, and Business Value.

Today I will share my perspective on objections to regulating AI, the cost of taking action, and how to frame the potential impacts of AI.

Modern Pragmatism

Allow me to take on the biggest objection to regulating AI. I hear it all the time, but it sounds a little different each time. It goes something like this: If no one else is playing by the rules the US establishes to regulate AI, then isn’t the US preventing innovation by applying regulation?

The answer is no. Regulations are put in place to protect consumers and encourage competition. Progress for progress’ sake is a reckless approach to emerging technology. Advancement without a clear goal increases risk. Technology without constraints can quickly become more destructive than constructive.

A few examples should reinforce my point. Experiments with AI at scale, conducted without clarity of purpose, include but are not limited to:

  • Microsoft’s Tay, which within 24 hours began echoing the offensive and racist comments users fed it.

  • Google’s early implementations of image search reflected multiple biases in training data.

  • Uber’s self-driving car, which failed to classify a pedestrian correctly, resulting in a fatal collision.

  • Amazon’s AI hiring tool, which penalized resumes from women because of bias in its training data.
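The hiring example illustrates a general mechanism: a model trained on skewed historical data reproduces the skew. The toy scorer below is purely hypothetical (the data and scoring rule are invented and do not reflect Amazon's actual system); it only shows how a gender-correlated term can drag down an otherwise identical resume.

```python
from collections import Counter

# Toy "historical" outcomes: the successful resumes happen not to contain
# the word "women's", so the model treats that word as a negative signal.
hired = ["java backend chess club", "python infra golf", "java systems"]
rejected = ["python women's chess captain", "java women's coding club"]

def train(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

hired_counts, rejected_counts = train(hired), train(rejected)

def score(resume):
    # Each word votes: positive if seen more among hires,
    # negative if seen more among rejections.
    return sum(hired_counts[w] - rejected_counts[w] for w in resume.split())

# Identical qualifications; only the gender-correlated term differs.
print(score("java chess captain"))
print(score("java women's chess captain"))  # lower, purely from biased data
```

Note that the scorer never sees gender explicitly; the bias rides in on a correlated word, which is part of why such systems are hard to audit.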

The fatalities involving Uber (and Tesla) vehicles aside, no one was physically harmed in the examples above. But the damage was real: eroded public trust in AI, and bruised egos for the brands that overstated their capabilities. Aggregated, these impacts require significant resources to repair.

Just as there is no one-size-fits-all solution to regulating AI, there is no easy fix for its mistakes.

Initial Conditions

Constraints drive innovation. Easy problems don't demand creative solutions. Disruption occurs when competition is stale and the reward for solving problems, or addressing their root causes, is high. Opportunities don't manifest out of thin air.

Regulation limits what can be implemented, not what can be explored. Research and Development will continue but must be conducted with oversight and well-defined constraints. Similarly, making capabilities available to the public/clients may involve technical scrutiny.

Cost of Regulation

The cost of not regulating may be the best motivator for regulating AI. That opportunity cost isn't known and may be difficult to quantify, but judging by how often it appears in fiction, you can be sure it's a topic of concern for humanity.

Many contemporary examples of dystopian science fiction signal adverse outcomes of self-aware AI. These range from adorable unintended consequences (WALL-E) to a future Governor of California depicted as a terrifying naked robot (The Terminator). But that's science fiction, not science fact.

Predictions are funny like that. I’m not surprised that people imagined the cell phone or the internet long before they became a reality. We often forget that there were hundreds or thousands of incorrect predictions for each correct one. I both want and fear a future with flying cars, which are decades late on arrival. Technology, after all, comes at a cost.

The cost of not regulating AI is unknown because the technology's upper limits exceed our ability to measure them.

Undefined Risks and Rewards

We simply cannot estimate what AI can accomplish. If humans can't predict its impact, it makes sense to slow things down until we know what the risk/reward looks like.

Decision-making requires an understanding of risk versus reward. Without an objective understanding of what a decision stands to gain or lose (the gravity of the choice), we increase the likelihood of error. Humans, and machines designed to think like humans, learn by making mistakes.
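The risk-versus-reward framing can be sketched as a simple expected-value comparison. The numbers below are invented for illustration; the point is that the arithmetic is only as good as the probabilities behind it, and for AI we don't yet have those probabilities.

```python
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs for one decision."""
    return sum(p * payoff for p, payoff in outcomes)

# Hypothetical odds and payoffs for two choices about a risky launch:
ship_now = expected_value([(0.7, 100), (0.3, -50)])       # 55.0
wait_and_test = expected_value([(0.9, 80), (0.1, -10)])   # 71.0
print(ship_now, wait_and_test)
```

With known probabilities, the comparison is trivial; the central problem this section describes is that for AI's societal impacts, neither the probabilities nor the payoffs are known.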

The central issue with regulation and AI is that we don't know the cost of making a mistake. Without data converted into knowledge for informed decisions, we are driven by intuition, usually a blend of past experiences and assumptions. Intuition and hunches are far from certainties.

I don't know about you, but I try to avoid making uninformed decisions. I try to learn from my mistakes so I don’t repeat them (yet it still happens). Prior data isn’t required for all of my choices, which is good because I don’t always have access to the necessary information. That said, I don’t make many decisions that involve broad societal or economic impacts. But I have a lot of examples coming in the next post.
