AI AND THE LAW: THE NEED FOR A REGULATORY FRAMEWORK
BY MANSI RANI
Artificial intelligence refers to computer systems that can perform complex tasks that normally require human reasoning, decision-making, or creativity. AI tools are capable of a wide range of tasks and outputs, but NASA follows the definition of AI found within EO 13960, which references section 238(g) of the National Defense Authorization Act of 2019.[1]
That definition covers any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.

The legal profession in India, known for its complexity and voluminous documentation, is undergoing a significant transformation with the advent of artificial intelligence (AI). AI technologies are being increasingly integrated into legal practice to enhance efficiency, accuracy, and decision-making processes.

AI in legal document review and drafting[2]: Document review and drafting are critical components of legal practice that demand precision and attention to detail. AI tools assist lawyers in reviewing large volumes of documents, identifying key information, and drafting legal documents.

AI in predictive analytics: Predictive analytics powered by AI is transforming how lawyers approach litigation. By analyzing historical data, AI can predict case outcomes, helping lawyers develop strategy more effectively.
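To make the predictive-analytics idea concrete, the following is a minimal, purely illustrative sketch in Python, not the method of any particular legal-tech product: a simple classifier trained on hypothetical historical case features to estimate the likelihood of a favourable outcome. Every feature, figure and label below is an assumption chosen for illustration only.

```python
# Illustrative sketch only: a toy "predictive analytics" model for case outcomes.
# The features, data, and labels are invented for demonstration and do not come
# from any real legal dataset or commercial legal-tech product.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical cases: [claim amount (INR lakh), precedents cited,
# years pending, 1 if a similar past case was won else 0]
X_history = np.array([
    [5.0, 12, 1.5, 1],
    [50.0, 3, 6.0, 0],
    [12.0, 8, 2.0, 1],
    [80.0, 2, 7.5, 0],
    [20.0, 10, 3.0, 1],
    [65.0, 4, 5.0, 0],
])
# 1 = favourable outcome for the client, 0 = unfavourable (toy labels)
y_history = np.array([1, 0, 1, 0, 1, 0])

# Fit a simple classifier on the historical matters
model = LogisticRegression().fit(X_history, y_history)

# Estimate the probability of a favourable outcome for a new matter
new_case = np.array([[30.0, 6, 4.0, 1]])
probability = model.predict_proba(new_case)[0, 1]
print(f"Estimated chance of a favourable outcome: {probability:.0%}")
```

In practice, commercial tools rely on far richer data and models, and their outputs are probabilistic aids to strategy rather than predictions of what a court will actually decide.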
Embracing AI: Challenges and Considerations[3]: While the integration of AI in the legal profession offers numerous benefits, it also presents challenges and considerations that must be addressed to fully harness its potential, including data privacy and security, ethical considerations, accuracy and reliability, training and adoption, and cost and accessibility.

The growing use of artificial intelligence (AI) systems across the world has raised reasonable concerns about the risk of undermining human rights, democracy and the rule of law. There are currently no specific laws regulating AI in India. However, a clutch of new regulations and agreements has emerged at national and multilateral forums to oversee AI tools, including the G7 pact on AI (October 2023), the Bletchley Declaration (November 2023), and the EU’s AI Act (March 2024).

AI policy around the world[4]: The G7 pact on AI is called the International Code of Conduct for Organizations Developing Advanced AI Systems. The 11-point code aims to ‘promote safe, secure, and trustworthy AI worldwide’ through ‘voluntary guidance for actions by organizations developing the most advanced systems’. These include generative AI applications like ChatGPT and the foundation models they are built on.
The US issued an Executive Order on AI on 30 October 2023, which largely relies on industry self-regulation. Developers will be directed to put powerful AI models through safety tests and submit the results to the government before their public release. The initiative also creates infrastructure for watermarking standards for AI-generated content, such as audio or images, often referred to as ‘deepfakes’ (a simplified content-labelling sketch appears below). A total of 15 companies have signed on to the commitment.

In November 2023, India joined 27 other countries, including the US and China, in adopting the Bletchley Declaration, a commitment from the recent UK-led AI Safety Summit that acknowledges the need for a global alliance to combat AI-related risks such as disinformation. The signatories vowed to ‘work together in an inclusive manner to ensure human centric, trustworthy, and responsible AI that is safe and supports the good of all’.

China has risen to the fore in the AI sector, establishing itself as a leading global power in AI. The country’s goal of becoming the leading AI innovation center by 2030 is well on the path to realization, marking a decade-long drive toward technological dominance. Even as the government seeks to remodel every part of its technology sector through AI, it appears highly aware of AI’s ethical and security dimensions. The Chinese administration has, as a result, developed frameworks to control AI’s growth and operation. Furthermore, China’s extensive regulations concerning AI and cybersecurity encompass most of the guiding principles applied to AI.
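As a simplified illustration of the content-labelling idea behind such watermarking initiatives, and not the actual standards still under development, the sketch below records a provenance tag in an image’s metadata using the Pillow library. The file name and model name are hypothetical, and real watermarking schemes are designed to be far more robust and tamper-resistant than a metadata label.

```python
# Illustrative sketch only: tagging an AI-generated image with provenance metadata.
# Real watermarking standards (cryptographic or pixel-level) are far more robust;
# this simply records a readable label in the PNG metadata using Pillow.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for an AI-generated image (a real workflow would open the generated file)
image = Image.new("RGB", (256, 256), color="gray")

metadata = PngInfo()
metadata.add_text("Provenance", "AI-generated")     # simple provenance label
metadata.add_text("Generator", "example-model-v1")  # hypothetical model name

image.save("labeled.png", pnginfo=metadata)

# Anyone receiving the file can read the label back from its text chunks:
print(Image.open("labeled.png").text)
```

The policy goal behind such labelling is traceability: downstream platforms and users can check whether content declares itself as AI-generated before it circulates further.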
“China and Singapore are leading with comprehensive measures and strategic updates to manage AI’s impact and innovation. Hong Kong focuses on data protection, while Vietnam and Indonesia establish legal and ethical guidelines for AI applications, particularly in emerging sectors like fintech”. – Rohit Nayak, Principal Solution Designer, APAC region, Diligent.
Canada has been proactive in its approach to AI regulation, striking a delicate balance between promoting innovation and preserving ethical standards and societal interests. The country has introduced key government-led programs, such as the Pan-Canadian AI Strategy and the Canadian AI Ethics Council, to advocate for the responsible development of AI and address relevant ethical issues in the AI sector. These initiatives play a fundamental role in helping stakeholders collaborate to develop policies aligned with respect for ethical values and the advancement of technology.
Misinformation, Disinformation and Fake News[5]: While the world is struggling to regulate AI through varying interventions, the current legal landscape is mostly occupied with issues such as violation of privacy rights, the spread of biased or false information, ownership of AI-generated content and the role of such interventions in checking crime. Italian Prime Minister Giorgia Meloni demanded one lakh euros as compensation for derogatory videos of her made using deepfake technology. Hollywood actress Scarlett Johansson accused OpenAI of cloning her voice without permission and violating her personality rights. In India, in May 2024, the Delhi High Court protected the personality and publicity rights of actor Jackie Shroff while restraining various e-commerce stores, AI chatbots and others from misusing the actor’s name, image, voice and likeness without his consent.

India is rapidly expanding AI-powered surveillance infrastructure, deploying facial recognition systems and AI technologies across law enforcement without comprehensive legal safeguards. The current regulatory landscape, exemplified by the Digital Personal Data Protection Act, 2023, grants broad government exemptions that potentially compromise individual privacy rights. Unlike the European Union’s risk-based approach to AI regulation, India lacks clear legislative frameworks to govern these technologies, leaving citizens vulnerable to unchecked data collection and potential civil liberties infringements.

Information Technology Act, 2000[6]: It provides legal recognition for electronic transactions and includes rules to protect electronic data, information, and records from unauthorized or unlawful use.
The Act is set to be replaced by the Digital India Act, 2023, which is currently in draft form and is expected to include key provisions related to AI. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 provide a framework for oversight of social media, OTT platforms, and digital news media.
National Artificial Intelligence Strategy (2018): Launched by NITI Aayog under the tagline #AIforAll, it mainly focuses on areas like healthcare, education, agriculture, smart cities, and transportation.
Recommendations implemented[7]: high-quality dataset creation and legislative frameworks for data protection and cybersecurity. The strategy serves as a foundational document for future AI regulation in the country.

Draft National Data Governance Framework Policy (2022): Modernizes government data collection and management, and aims to support AI-driven research and startups through a comprehensive dataset repository.

AI law emerges as a specialized field addressing these unique challenges. It encompasses a range of legal domains, including intellectual property, privacy law, contract law, and tort law, all of which are being reshaped in the context of AI. Understanding AI legal issues, therefore, is not just about applying old laws to new technologies but also about developing new legal paradigms that can accommodate the nuances of AI. Key legal issues in AI include intellectual property rights, privacy and data protection, liability and accountability, transparency and explainability, and bias and discrimination.
Government investment in AI: India’s AI sector is expected to grow rapidly, thanks to significant government investment. In 2024, the government sanctioned INR 103 billion (approximately USD 1.25 billion) for AI projects over five years. This investment will be used to develop computing infrastructure, support AI startups, and establish a National Data Management Office to improve data quality and availability for AI projects. These investments are designed to position India as a global leader in AI while ensuring that AI technologies are developed responsibly, with appropriate oversight.

India is at the forefront of AI development, with significant investment and policy frameworks in place to drive innovation. Artificial intelligence is unavoidably being integrated into governance, business, healthcare, the judiciary, and education as India speeds towards becoming a digitally empowered country. However, the country still faces challenges in creating a robust legal framework for AI. Existing laws like the IT Act, the Digital Personal Data Protection Act, and the IT Rules provide a foundation for AI regulation, but there is a clear need for AI-specific legislation to address the complexities and ethical concerns of AI technologies. As AI continues to transform industries and societies, India must strike a balance between promoting innovation and ensuring responsible, ethical AI practices. The future of AI regulation in India will likely include comprehensive laws that address bias, discrimination, accountability, and privacy concerns while fostering AI’s immense potential to drive economic growth and societal progress. Such a framework must balance innovation with public interest, safeguarding fundamental rights without stifling technological progress.
As the legal philosopher Roscoe Pound once said: “The law must be stable, but it must not stand still.”[8]
India’s regulatory approach to AI must reflect this principle – stable in protecting rights, yet dynamic enough to evolve with technology.
[1] https://www.nasa.gov
[2] https://en.wikipedia.org
[3] https://epiloguesystems.com/blog/ai-legal-challenges
[4] https://www.drishtiias.com/
[5] https://www.natstrat.org
[6] https://www.lexology.com
[7] https://www.unesco.org
[8] https://www.azquotes.com/author/44160-Roscoe_Pound