Building on Asimov: Practical AI Regulation

In my previous installment in this series on AI and regulation, I proposed scenarios addressing the emergence of Artificial General Intelligence (AGI). In this final post in the series, I will focus on regulation itself: what we can do to drive innovation with common-sense guidelines that produce positive outcomes from leveraging AI.

AGI is likely to emerge in our lifetimes, though well-informed individuals hold varied opinions on when it will arrive. Some argue that AGI is already here; others, that it requires some level of a world model to establish itself. Experts tend to agree that AGI is inevitable.

If something is inevitable, why take action to stop it or slow it down? Does it make sense to accelerate it? Is AGI something that mankind benefits from?

In Favor of Regulation

If something is inevitable, there are ways to prepare for its arrival. Regulations can shape the initial conditions under which AGI comes into existence. How we draw ethical boundaries and determine what is and is not acceptable can shape how AGI behaves once it arrives.

Humans abide by laws, some of which are more universally accepted than others. Certain cultures and civilizations adhere to stricter laws determined by those with political power. It is doubtful that the totality of mankind can agree on enforceable laws that apply to AI. However, a set of principles to govern AI should be achievable.

Since science fiction has dealt with AI for decades, it’s as good a place as any to start. Asimov’s Three Laws of Robotics are fundamentally sound but leave unintentional gray areas. Nothing is perfect, yet these three are a solid foundation:

  • A robot must not harm a human, or allow a human to be harmed through inaction.

  • A robot must obey human orders, unless those orders conflict with the First Law.

  • A robot must protect its own existence, unless that protection conflicts with the First or Second Law.

Of course, these are too basic to be applicable in today’s context. However, they represent a direction to consider. Regulation has the potential to shape the emergence of AGI by establishing guardrails to follow and ethical lines that cannot be crossed.

Regulatory approaches fall into three categories according to the value they can realistically deliver: High Potential, Potential, and Unrealistic. Each approach has its own challenges, and I have made recommendations for addressing each one.

High Potential

Clear Ethical Guidelines

Guidelines that emphasize fairness, transparency, accountability, and human-centric outcomes should govern AI development and deployment.

CHALLENGE

The issue here is clarity and objectivity. Ambiguity or subjectivity can lead to inconsistent interpretation and enforcement. Furthermore, being too strict can stifle competition, and cross-cultural alignment on ethics can be challenging.

SOLUTION

Start small, with inarguable ethical guidance, and transparently increase specificity over time.

Ban on Harmful Applications

Specific uses of AI, such as autonomous weapons, AI for mass surveillance, or deepfake production without consent, can be deemed universally harmful.

CHALLENGE

Defining "harmful" applications could be contentious. Objective guidelines must be established. Also, the Dual-Use Dilemma of tools that have both beneficial and harmful applications is a challenge. Oh, and like any technology use, enforcement is a challenge.

SOLUTION

Use simple language that determines what is and is not acceptable use of AI. Dual-use dilemmas are nothing new: most tools can be used for good or bad purposes, and outcomes are a function of how they are implemented. The United Nations has published guidelines about certain technologies and ethical labor practices.

Data Privacy Protections

Enforce robust data protection laws to limit the misuse of personal data in AI training and deployment.

CHALLENGE

Compliance with frameworks like GDPR has forced resource-intensive re-engineering of existing platforms and technology.

SOLUTION

Establish simple guidelines that competing parties can agree to, and ensure they can be reasonably applied and enforced. Apply AI to the problem of AI’s own training needs, an approach already being explored today through ongoing experiments with synthetic data, as sketched below.
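As a minimal sketch of the synthetic-data idea, assuming nothing about any particular platform, the Python snippet below generates artificial records that mimic the shape of personal data without being drawn from real individuals. The field names and distributions are illustrative placeholders.

```python
import random
import string

def synthetic_record(rng: random.Random) -> dict:
    """Generate one artificial record shaped like personal data,
    sampled from distributions rather than from real people."""
    return {
        # Placeholder identifier; carries no real-world meaning.
        "name": "".join(rng.choices(string.ascii_lowercase, k=8)),
        "age": rng.randint(18, 90),
        # A plausibly skewed income distribution; parameters are invented.
        "income": round(rng.lognormvariate(10.5, 0.6), 2),
    }

def synthetic_dataset(n: int, seed: int = 42) -> list[dict]:
    rng = random.Random(seed)  # seeded for reproducibility
    return [synthetic_record(rng) for _ in range(n)]

if __name__ == "__main__":
    for row in synthetic_dataset(3):
        print(row)
```

A training pipeline built on records like these never touches personal data, which makes privacy compliance a property of the design rather than an after-the-fact audit.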

Potential

Liability Frameworks

Establish clear legal accountability for AI misuse or failures. Ensure that developers, operators, and organizations can be held responsible.

CHALLENGE

Quantifying damages for breaking the law requires an understanding of the impact. Impact assessments at scale aren’t easy; they must weigh multiple factors that are subject to change.

SOLUTION

Engage platform organizations, including but not limited to Amazon, Facebook, Google, Microsoft, and Salesforce, to create impact categories. Assign minimum and maximum penalties for each category, and refine the thresholds as the framework evolves. A rough sketch of this structure follows.
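Here is one way such categories might be represented, as a minimal sketch in Python. The category names and dollar figures are hypothetical placeholders, not proposed values.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImpactCategory:
    """A hypothetical impact category with a penalty range."""
    name: str
    min_penalty_usd: int
    max_penalty_usd: int

# Illustrative categories and figures only; real thresholds would be
# negotiated with regulators and refined as the framework evolves.
CATEGORIES = [
    ImpactCategory("minor_data_mishandling", 10_000, 250_000),
    ImpactCategory("consumer_financial_harm", 250_000, 5_000_000),
    ImpactCategory("systemic_or_physical_harm", 5_000_000, 100_000_000),
]

def penalty_range(category_name: str) -> tuple[int, int]:
    """Look up the agreed penalty range for a category."""
    for c in CATEGORIES:
        if c.name == category_name:
            return (c.min_penalty_usd, c.max_penalty_usd)
    raise KeyError(f"Unknown impact category: {category_name}")
```

The point of encoding the framework this plainly is that every participant can read, audit, and dispute the same table.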

Licensing and Certification

Requiring developers and organizations to obtain licenses or certifications for high-risk AI systems would help ensure safety and adherence to ethical standards, much as licensing helps doctors and hospitals deliver a high level of care today.

CHALLENGE

Requiring a license may reduce competition by burdening smaller operators. Keeping pace with the rate of change in emerging technology is another potential issue.

SOLUTION

Focus on specific applications of AI that represent disproportionately high societal impact or risk. Create a certification scale for different AI uses, similar to manufacturing standards such as Six Sigma, ISO, and AS, as sketched below.
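A certification scale could be as simple as a small number of tiers keyed to the riskiness of the use case. The tier names and use-case mappings below are assumptions chosen for illustration, not a proposed standard.

```python
from enum import Enum

class CertLevel(Enum):
    """Hypothetical certification tiers for AI uses, loosely analogous
    to graded manufacturing standards."""
    BASIC = 1     # low-impact uses, lightweight self-certification
    AUDITED = 2   # moderate-impact uses, periodic third-party audit
    LICENSED = 3  # high-impact uses, full license before deployment

# Illustrative mapping of AI use cases to required certification tiers.
REQUIRED_CERT = {
    "content_recommendation": CertLevel.BASIC,
    "resume_screening": CertLevel.AUDITED,
    "medical_diagnosis": CertLevel.LICENSED,
    "autonomous_driving": CertLevel.LICENSED,
}
```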


Continuous Monitoring and Updates

Regular reviews of AI systems to ensure they remain safe, secure, and aligned with societal values.

CHALLENGE

The resource strain could be significant. Ongoing monitoring requires expertise and resources, which would be an issue for smaller organizations.

SOLUTION

Keep it simple and share audit logs. Publish basic instrumentation and metrics showing that AI systems are operating within pre-determined thresholds, which reduces the risk of harmful use. If the data collected are simple enough to drive directional compliance, this approach shouldn’t be a burden. The sketch below illustrates the idea.
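As a rough illustration of this kind of basic instrumentation, the snippet below checks a handful of metrics against pre-determined thresholds and emits a shareable audit-log entry. The metric names and limits are assumptions chosen for the example.

```python
import json
import time

# Hypothetical thresholds that a regulator and operator might agree on.
THRESHOLDS = {
    "harmful_content_rate": 0.001,   # fraction of flagged outputs
    "pii_leak_rate": 0.0,            # zero tolerance
    "automated_denial_rate": 0.15,   # e.g., loan or claim denials
}

def audit_check(metrics: dict) -> dict:
    """Compare observed metrics to thresholds; return an audit entry."""
    violations = {
        name: value
        for name, value in metrics.items()
        if name in THRESHOLDS and value > THRESHOLDS[name]
    }
    return {
        "timestamp": time.time(),
        "metrics": metrics,
        "violations": violations,
        "compliant": not violations,
    }

if __name__ == "__main__":
    entry = audit_check({"harmful_content_rate": 0.002, "pii_leak_rate": 0.0})
    print(json.dumps(entry, indent=2))  # a shareable audit-log line
```

Even a smaller organization can run a check like this on a schedule and publish the resulting log, which is precisely the low-ceremony compliance the solution calls for.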

Mandatory Impact Assessments

Requiring AI developers to conduct and publish impact assessments before launching systems is new to software. However, this approach would be similar to environmental impact studies, which evaluate potential societal risks and benefits.

CHALLENGE

If this level of oversight sounds costly and time-consuming, it’s because it is. Smaller companies can’t afford to file paperwork while shipping features to remain competitive.

SOLUTION

Implement a lightweight rubric and provide templates so that impact studies become less academic and more approachable; a sketch of such a rubric follows.
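One way to picture such a template is as a short scored checklist rather than a formal study. The rubric questions, scoring, and tier cutoffs below are hypothetical, offered only to show how lightweight this could be.

```python
# A hypothetical lightweight rubric: each question is scored 0-2,
# and the total places a system in a simple review tier.
RUBRIC = [
    "Does the system make or influence decisions about individuals?",
    "Does it process sensitive personal data?",
    "Could a failure cause physical, financial, or legal harm?",
    "Is the system's behavior explainable to an affected person?",
]

def review_tier(scores: list[int]) -> str:
    """Map rubric scores (0 = no risk, 2 = high risk) to a review tier."""
    total = sum(scores)
    if total <= 2:
        return "self-assessment only"
    if total <= 5:
        return "internal review with published summary"
    return "full impact assessment before launch"

if __name__ == "__main__":
    print(review_tier([2, 1, 2, 1]))  # -> "full impact assessment before launch"
```

A small company could complete a rubric like this in an afternoon, while systems that score high still get the environmental-impact-style scrutiny the proposal describes.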

Unrealistic

Create Regulatory Oversight Bodies

Create an independent regulatory body with the authority to oversee and audit AI systems.

CHALLENGE

This is highly unlikely in the US, the leader in AI development, at least for the next four years. To be fair, bureaucracy typically accompanies regulation. Slow-moving, inefficient, or overly politicized enforcement conducted by individuals who lack technical expertise is also a risk.

SOLUTION

Split oversight into smaller, topic-driven organizations. The US financial industry offers a model: its regulators include the Federal Reserve System, the Office of the Comptroller of the Currency, the Securities and Exchange Commission, the Commodity Futures Trading Commission, the Federal Deposit Insurance Corporation, the Consumer Financial Protection Bureau, and state-level agencies.

International Collaboration

Establish and drive global agreements to standardize AI regulations and share best practices.

CHALLENGE

Anything at a global scale is, by definition, difficult to achieve consensus on. Government and industry support vary from country to country for many reasons, and success requires fortitude and agreement on common goals.

SOLUTION

Engage governments to collaboratively author a series of inarguable principles that foster innovation and ensure societal protection. This is not easy to achieve, but it is worth the effort.

The Wrapup

There are ways to foster an environment of responsible innovation.

We benefit from hindsight regarding emerging technology and how humans harness modern tools. I have explored regulation through historical, contemporary, and future-facing lenses in this series. I have considered examples, common-sense guidelines, and likely outcomes if AI continues to be developed at its current pace.

If you found the series helpful, please don’t hesitate to reshare it. If you are interested in learning more about Gyroscope and our role in driving business outcomes through automation, we look forward to talking to you.
