Regulation Of Artificial Intelligence

Author: Ashwin Singh, Student, Symbiosis Law School, Pune (Symbiosis International Deemed University)


Life has evolved from humans performing every task themselves to machines performing those tasks for them; the emphasis has shifted from hard work to smart work. This is the new era of technology. With the introduction of computers, the internet and broadband, mobile phones, printers and other devices, human life has become smoother.

Artificial intelligence is one such invention. John McCarthy and Alan Turing are regarded as the founding fathers of artificial intelligence. Artificial intelligence (AI) is a branch of computer science that focuses on building machines capable of performing tasks that generally require human intelligence. AI mimics human reasoning and executes tasks with the help of machine learning, deep learning and various other algorithms.


Because diverse kinds of data can be collected and processed, AI is being used in various sectors such as healthcare, e-commerce, defence, autonomous vehicles, the legal profession and education. As the saying goes, with great power comes great responsibility: AI is a power that needs continuous regulation to ensure it is not misused.

With these many benefits and comforts, however, come drawbacks that challenge their use.

The foremost concern with AI is data protection and safety. AI relies heavily on big data, and since data is so readily available, the risks associated with it must be accounted for. A user's sensitive information can be exposed in the process and become a threat. Since AI is itself a human creation, and humans may attempt to misuse it, stricter laws are needed to reduce this risk.

Secondly, there is a lack of accountability for the actions of an AI. The question arises: who is to be blamed for errors made by AI that cause users a particular loss? At present, no one can be held responsible for such mistakes.

Thirdly, autonomous weapons are designed to operate on AI and can be highly attractive for waging war. If such technology falls into the wrong hands, a large number of casualties may occur, and AI-driven wars could break out that would be difficult to contain.


Given the risks associated with using AI, it is the need of the hour to bring in new laws that create a protective environment. It is important to minimise exposure to risk through constant regulation.

Regulation of artificial intelligence is the development of public-sector policies and laws that modulate and nurture AI. This is an issue that demands both national and international laws to control high-risk situations. Although it has emerged as a global concern, AI laws are not yet a prime concern for the United Nations.

The European Union is one of the regulatory bodies working on a framework of laws and amendments related to AI. One of the EU's regulations is the General Data Protection Regulation [GDPR], which protects personal data and provides privacy and safety to users. It lays down various rights and duties, provisions and principles, and remedies for the issues that arise.


On April 21, 2021, the European Commission proposed a Regulation on AI that aims to introduce a global regulatory substructure providing a legal basis for upcoming advances in the field of AI. It focuses on preserving users' fundamental rights when AI is used. The prime focus lies on the following provisions:

  • Obligatory rules for users, providers and distributors of AI, regardless of where they are based.
  • Substantive rules for high-risk AI systems.
  • Bans and prohibitions on certain AI systems and practices, such as manipulative "dark patterns", to prevent abuse.
  • Heavy fines of up to EUR 30 million.
  • The creation of national supervisory agencies for governance.
  • Market surveillance of AI systems by specified authorities.

Analysing the high degree of risk in today's world of technology, experts say it would be very tough to control AI systems around the globe. A legal framework with the potential to safeguard human interests therefore needs to be executed and implemented worldwide.


With the accelerated progress and boundless use of AI, the dangers it poses have alarmed the Government. The Government has all the more cause to worry because India does not have any specific law on AI. Intervention is needed, as this has become an issue of national importance.

Currently, AI is being adopted and encouraged in India at a faster pace than expected, and this rapid pace of advancement has created the need for regulation. The Government is now engaged in framing new laws, guidelines and policies regarding AI.

One of the steps taken to safeguard the people came in 2017, when the Right to Privacy was recognised as a fundamental right protected under the Indian Constitution. The Justice Srikrishna Committee recommended that the Government introduce privacy legislation. A Personal Data Protection Bill was drafted in 2019; once it is passed by both Houses of Parliament, it will become law.

The Government of India has prioritised building a Digital India and has launched various schemes related to AI. NITI Aayog has adopted a three-pronged approach:

  1. Initiating proof-of-concept AI projects.
  2. Building an AI atmosphere and ecosystem in India.
  3. Collaborating with contributors and professionals.

In 2018, NITI Aayog, the Government of India's policy think tank and successor to the Planning Commission, introduced the National Strategy on Artificial Intelligence [NSAI], which discusses various provisions regarding the application of AI. The NITI Aayog report suggests the following:

  • Setting up a panel comprising the Ministry of Corporate Affairs and the Department of Industrial Policy and Promotion to review the regulations needed in intellectual property law.
  • Formation of appealing IP regimes for AI developments.
  • Introduction of legal frameworks for data protection, security and privacy.
  • Creation of sector-specific ethics guidelines.

Four committees were set in motion by the Ministry of Electronics and Information Technology to analyse various ethical issues. The Bureau of Indian Standards has launched a new committee for the standardisation of AI. The Government is working on various safety parameters to limit the risks associated with interacting with AI.

Another initiative taken by NITI Aayog is the establishment of AIRAWAT – the AI Research, Analytics and Knowledge Assimilation platform. It is an approach paper authored by Senior Adviser Anna Roy recommending AI-specific cloud compute infrastructure. As India has relied on cloud-based AI, AIRAWAT sets out the requirements for its ideal use.

  • Building a specialised AI infrastructure and ecosystem will satisfy the computing needs of Centres of Research Excellence, International Centres of Transformational AI, startups, researchers, students, Innovation Hubs and others.
  • An inter-ministerial task force with cross-sectoral representation will execute AIRAWAT as prescribed.
  • The task force will oversee funding and programme the approach.
  • Funding for AIRAWAT will come from the National Supercomputing Mission.
  • It will cover the equipment, arrangements, staff, maintenance and upgrades required in the process.


In 2020, NITI Aayog drafted documents on launching an oversight body and enforcing responsible AI principles, covering the following aspects:

  • Inspecting and operationalising principles of responsible AI.
  • Transparent design, structure and processes to set particular standards.
  • Formation of the necessary legal and technical frameworks.
  • Imparting education and raising awareness about responsible AI.
  • Creation of new techniques and tools for responsible AI.
  • Representation of India in global standard-setting.

These drafts are intended to keep evolving with the field and will never be final.


Many new AI-enabled weapons are being developed, including armed drones, cyber-attack software, combat robots, slaughterbots and robotic weapons. Lethal Autonomous Weapons (LAWs) are a class of autonomous military weapons that operate independently using AI. Such weapons are not banned, but their use is continually regulated by different countries. If they are not supervised and controlled by law, robot wars could emerge, causing devastation at large. Efforts to govern and regulate LAWs can be made through international treaties and by leveraging legal substructures to meet these challenges.


We see vigorous growth around the world in the field of AI, its scope and its applications. Both the private and public sectors have adapted to the use of AI. Different countries and cultures have diverse perspectives on the application of AI and define it in their own way. As anything grows, it grows more complicated, so a set of norms must be laid down for AI too. AI is present all around us in the form of code and smart technology, and such code is difficult for a layman to understand.

Furthermore, while regulating the code itself might be hard, the usage of AI can be regulated from time to time. For every setting there is an appropriate use of AI, which must be understood for its proper execution. For instance, face recognition can be used to identify persons, but the same cannot be done for buildings.

Regulating AI internationally might be a tricky job, so we need governance at the national level that entails and understands the systematic use of AI.

As AI is a smart machine imitating human intelligence and behaviour, it needs to be regulated to safeguard a balanced approach in the world. Adapting to regulation would maximise the benefits and narrow down the associated risks. It is foremost to recognise that AI itself is not the problem; it is its atrocious misuse that distresses people.

Thus, unless we govern AI with laws, it may remain a persistent threat to mankind.