AI: Balancing Regulation and Innovation

Tanvi Ghate
23rd January 2024

AI policy and regulation have jumped from a niche, nerdy topic to ‘must-have’ front-page news, partly because of OpenAI’s ChatGPT, which made us question the scope, capabilities and implications of AI. If 2023 was the year lawmakers agreed on the need for regulation, 2024 will be the year policies start to morph into concrete legislation.

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” - Eliezer Yudkowsky, Computer Scientist and AI Researcher

Yudkowsky is not wrong. Artificial Intelligence (AI) has proven to be pretty good at doing things most humans can’t even comprehend. While its all-round transformative impact has given rise to an unprecedented era of innovation and efficiency, the surge in enthusiasm is accompanied by a parallel concern about establishing effective controls and governance. 

Undoubtedly, the AI space needs to be regulated. But the impact of regulation on innovation is always subject to debate, depending on who you ask. Governments find themselves in a dilemma as they seek to develop comprehensive regulatory frameworks that strike a balance between fostering innovation and addressing potential risks. Traditional regulatory paradigms don’t work in the dynamic landscape presented by AI.

“We have a legal, regulatory framework built on the basis of mail, paper, words, versus a new world order which is digital, continuous, 24/7, and built on bits and bytes. Somehow we need to square these two worlds.” - Aaron Klein, policy director, Center on Regulation and Markets, Brookings Institution

Regulators have always struggled to keep pace with technology. More so with AI, which presents a unique set of challenges given its nature.

The Pacing Problem. Existing regulatory structures are slow to adapt to the pace at which technology evolves, and regulatory agencies are generally risk-averse. The policy cycle often takes anywhere between five and 20 years, whereas a unicorn startup can develop into a company with global reach in a matter of months.

Disruptive Business Models. Industry boundaries become blurred as innovative products and services hop across regulatory sectors. The interconnected nature of these business models makes it difficult to assign liability for consumer harm. For example, if a self-driving car crashes, who is liable – the software developer, the automobile manufacturer, or the occupant?

The ‘Black Box’ Problem. Algorithms today make scores of consequential decisions, from recommending what we watch on OTT platforms to estimating heart-attack risk. The rationale, source or basis for these decisions is often unknown: algorithms are closely held by the organisations that create them, or are so complex that even their creators can’t fully explain how they work. This is AI’s ‘black box’ — the inability to see what’s inside an algorithm. That may change, as some experts believe companies should make their algorithms public. In May 2018, the GDPR went into effect, requiring companies to explain how algorithms that use customers’ personal data work and make decisions.
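To make the contrast concrete, here is a minimal, hypothetical sketch (the features and weights are invented for illustration, not drawn from any real system) of what an ‘explainable’ decision looks like: every feature’s contribution to the output is inspectable, whereas a black-box model exposes only the final answer.

```python
# Hypothetical example: a transparent risk score whose every input
# contribution is visible, unlike a black-box model that returns
# only a final number. Features and weights are invented.

WEIGHTS = {"age": 0.03, "smoker": 0.40, "bmi": 0.02}

def risk_score(patient):
    """Return a score plus a per-feature breakdown (the 'explanation')."""
    contributions = {f: w * patient[f] for f, w in WEIGHTS.items()}
    return sum(contributions.values()), contributions

score, explanation = risk_score({"age": 55, "smoker": 1, "bmi": 27})
# 'explanation' shows exactly how each input moved the score,
# the kind of transparency that explainability rules gesture at.
```

A linear model like this is trivially explainable; the regulatory difficulty arises precisely because modern systems are not, and their decision logic cannot be decomposed this way.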

Last year, the New York Times sued OpenAI and Microsoft for copyright infringement, claiming unauthorised use of published work to train artificial intelligence technologies. The lawsuit could test the emerging legal contours of copyright laws and generative AI technologies.

But above all, the primary question is who do you govern?

Should regulation focus on governing AI systems (i.e. their developers and deployers), or should it govern the users? While developers and deployers build and control the algorithms, users exercise their own agency and discretion in using and taking advantage of them. Assigning responsibilities between developers and deployers is crucial, especially since developers may release general-purpose AI without specific intended uses, leaving deployers to apply it across a broad spectrum of downstream uses.

If AI is able to do most things better than humans, can it and should it explain its decisions? Regulators are certainly moving in that direction.

Regulatory bodies all over the world have taken their time with regulating the space. Deloitte calls it the ‘Understand-Grow-Shape’ sequence in AI regulations:

  1. Understand. When confronted with an unfamiliar and fast-moving technology such as AI, governments first try to understand it, often via collaborative mechanisms such as advisory bodies, to gauge its likely impact.
  2. Grow. Most countries next create national strategies that deploy funding, education programmes, and other tools designed to spur the growth of the AI industry.
  3. Shape. As the AI industry continues to grow, governments look to shape the development and use of AI through instruments like voluntary standards or regulations.

The shift accelerated in 2023, driven partly by ChatGPT. We saw the first sweeping AI law emerge in the European Union, the risk-based AI Act; US President Joe Biden’s Executive Order, which largely leaves AI companies to police themselves; and specific rules and bans in China for models that endanger the public interest.


Where is India?

India has gone through its own Understand-Grow-Shape sequence. In June 2023, a couple of months after saying it had no plans to introduce a specific law to govern the growth of AI, the Ministry of Electronics and Information Technology (MeitY) clarified that the government would regulate the space – at least to protect digital users from harm, likely through the proposed Digital India Act (DIA).

The future of AI regulation

The canvas of regulation must be broad and innovative. It should concern itself largely with four things: direct oversight of AI innovation, to mould the development and application of algorithms; influencing the contextual landscape of AI development and adoption, and thereby the trajectory of innovation; responding nimbly to emerging outcomes and impacts; and reshaping the design and implementation of policies to match the dynamism of AI.

In a nutshell, regulatory artistry is the key to fostering responsible AI advancement.

Nimble regulation. Replace the static “regulate and forget” mindset with an adaptable, iterative approach. AI will need adaptive regulation that relies more on trial and error and faster feedback loops than on the typical, long-drawn-out regulatory cycle. Soft-law mechanisms, such as informal guidance and a push for industry self-regulation, can create substantive expectations that are not directly enforceable, allowing regulators to adapt quickly to changes in technology and business models. Deep engagement with affected stakeholders will also help regulators understand the nuances of the technology.

Regulatory sandboxes. A regulatory sandbox allows live testing of new products or services in a controlled regulatory environment, with regulators permitting certain relaxations for the limited purpose of the test. The Reserve Bank of India, the Securities and Exchange Board of India, and the Insurance Regulatory and Development Authority of India have all used the mechanism. Recently, the Telangana government launched a Web 3.0 regulatory sandbox to address issues and challenges faced by Web3 startups.

Collaborative regulation. In the AI context especially, it is critical to align regulation nationally and internationally by engaging a broader set of players across the ecosystem. A recent global survey indicated that regulatory divergence (inconsistent regulations across nations) costs financial institutions about 5-10% of their annual revenue; the patchwork of international financial regulations costs the global economy $780 billion annually.

As the digital economy expands, the risks of AI are inherently international in nature. Therefore, collaborative approaches with multiple regulators from different nations and with those being regulated would encourage innovation while protecting consumers.

The Bletchley Declaration was a landmark effort to establish shared agreement and responsibility on the risks and opportunities of frontier AI, and a forward process for international collaboration on its safety and research. Twenty-nine countries, including India, teamed up to prevent “catastrophic harm, either deliberate or unintentional” that could arise from the ever-increasing use of AI.

Technically sound regulation. AI regulation requires technical and socio-technical expertise within government. Crafting effective audits, standards, disclosures and enforcement will not be possible without government expertise in AI systems, including engineering, machine learning and data ethics. Such expertise is currently lacking in the public sector.

In conclusion, a fresh approach to regulation will help induce positive conversation among stakeholders who might otherwise be less amenable to compromise. To prepare for the AI regulations on the horizon, companies will need new processes and tools such as system audits, documentation and data protocols (for traceability), AI monitoring, and diversity-awareness training.

Let’s not forget that for any technology to be sustained over the long term and become mainstream, regulation is inevitable. Timely, well-deliberated regulation can and will accelerate innovation, provided it is flexible enough to adapt to tomorrow’s technology.