Writing the New Rules for AI


The launch of ChatGPT in November 2022 heralded a new era in the democratization of artificial intelligence (AI). Since then, the use of AI has expanded rapidly across many sectors, including healthcare, education, financial services and public safety. However, the rapidly advancing capabilities of AI have also brought to the fore the criticality of the safe and ethical use of these technologies. At the Global Partnership on Artificial Intelligence (GPAI) Summit held in New Delhi in 2023, the Hon’ble Prime Minister stressed the importance of creating a global framework for the ethical use of AI, including a protocol for testing and deploying high-risk and frontier AI tools. Earlier, at the first global AI Safety Summit at Bletchley Park in 2023, 28 countries called for international cooperation to manage the challenges and risks of AI.

How can a global framework for the safe and ethical use of AI be developed? Several countries have initiated efforts to regulate and govern AI. The US government issued an executive order in October 2023 focusing on the safe, secure and trustworthy development and use of AI. It seeks to address several critical areas, including national security, consumer protection and privacy, and requires AI developers to share safety test results with the US government. The EU’s AI Act adopts a risk-based regulatory approach, with stricter oversight for higher-risk AI systems.

At a fundamental level, a global framework for the governance of AI must address the key concerns regarding the development, deployment and use of AI. These include dealing with machine learning biases and potential discrimination, misinformation, deepfakes, concerns over privacy and access to personal data, copyright protection and potential job losses, while ensuring the safety, transparency and explainability of AI algorithms.

The goal of AI governance should be to promote innovation and ensure safe, fair and ethical applications of the technology in promising sectors. To address the concerns noted above, the framework for the governance of AI must be based on certain core principles, enumerated below.

Innovation: The governance framework must promote innovation and competition in AI technologies to continuously improve them. This would require, for example, facilitating access to large anonymized datasets for startups developing and training AI applications in various domains. The National Data Governance Policy of the Government of India is an excellent initiative in this direction.

Infrastructure: The framework must also support expanding access to compute infrastructure and AI models to promote competition and encourage innovation. This would particularly be helpful to startups in this domain.

Capacity Building and Engagement: A sustained focus on capacity building holds the key to involving and engaging more stakeholders in the development and deployment of AI across multiple sectors. This can significantly help in managing and reducing the risks. Engaging with stakeholders would also help in addressing any potential job losses and worker displacement due to the deployment of AI.

Safety and Risk Management: This would involve development of standards and ensuring that AI models are tested and assessed for safety and risk. Appropriate risk management strategies must be put in place to address any likely harms that may be caused. This would include ensuring transparency, fairness and explainability in the AI development lifecycle through selection of proper training data sets, removing any biases and ensuring that cybersecurity issues have been addressed.

Privacy Protection: AI models must focus on privacy preserving technologies to ensure protection of privacy. This would help in creating trust in these models and enhancing their beneficial impact.

International Cooperation: For any global framework to succeed, international collaboration and partnerships built on a shared vision and common goals are essential. A global framework on AI must build on evidence in this rapidly evolving technology and promote collaboration across all countries to become effective.

India, being a global leader in technology, can play a proactive role in developing a global framework for the governance of AI based on the key principles enumerated above. With its huge technology talent base and a rapidly growing economy, India enjoys a unique advantage in the global technology ecosystem, which it can leverage in this direction. We also need to focus on developing AI applications trained on Indian datasets in domains such as agriculture, education, healthcare, transportation and public safety, which can revolutionise the entire citizen-centric service delivery paradigm and bring efficiency gains at a systemic level across multiple sectors.

(The above article appeared in The Economic Times on January 28, 2024. It is available here: https://economictimes.indiatimes.com/tech/catalysts/writing-the-new-rules-for-ai/articleshow/107192031.cms?from=mdr. The views are personal.)

Traceability vs Privacy: The Real Issue is of Collective Security

Societies have long realised the need to provide collective security for all in order to ensure sustainable development and prosperity. Providing collective security has involved imposing some form of social control to regulate individual and group behaviour through gathering information about individuals. In the modern information age, a good government can ensure collective security through the efficient use of information for law enforcement without necessarily encroaching upon individual privacy.

Countries around the world have enacted laws to ensure that such information can be collected easily from various sources to help achieve the wider societal goal of collective security. The US enacted the Stored Communications Act (SCA) in 1986 to require internet service providers (ISPs) to provide content and metadata on stored emails to government agencies under certain conditions. As this law soon became outdated due to rapid technological advances, the US passed the Communications Assistance for Law Enforcement Act (CALEA) in 1994, which required telecom companies to redesign their networks to facilitate wiretapping by government agencies. In 2005, it was expanded to cover ISPs and VoIP services such as Skype.

The UK and Australia have gone even further, enacting laws that require device makers and software developers to provide access to encrypted data. The Investigatory Powers Act 2016 and the Investigatory Powers Regulations 2018 in the UK give sweeping powers to intelligence and law enforcement agencies to carry out both targeted and bulk interception of internet communications and to hack into devices to access data. Australia’s Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018 gives government agencies broad powers to require communication service providers (CSPs) to assist in decrypting communications.

The raging debate in India over the ‘traceability’ provision in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 must be understood in the context of the need to ensure collective security as a social good. The rules require significant social media intermediaries to identify the first originator of a message in India for the investigation of grave offences, such as those relating to the sovereignty and integrity of the country and crimes against women and children, that are punishable with a minimum prison term of five years.

Critics have claimed that this provision would seriously undermine privacy and force intermediaries to break end-to-end encryption. However, the rules make it very clear that intermediaries are required to provide only the metadata about the first originator of the offending message, not its contents. The message itself is provided by the law enforcement agencies to the intermediaries. There is no attempt to make them break any encryption. With such safeguards built into the rules, the provision cannot be termed as harming privacy. In fact, the rules place far less onerous obligations on intermediaries for sharing information than what several other countries have mandated, as noted earlier.

The law and the evolving jurisprudence in this domain in India provide strong safeguards for freedom of expression and privacy. The upcoming Personal Data Protection Bill aims to further strengthen this legal framework for the protection of personal data and online privacy, subject to reasonable checks in the interest of collective and national security. John Locke, the famous 17th-century philosopher and “Father of Liberalism”, argued in his Second Treatise of Civil Government that individuals need a strong government in order to exercise their individual rights and liberties.

There need not be a trade-off between privacy and collective security. Collective security is just as essential to make people feel safe, allowing them to enjoy their privacy protections and function effectively as individuals. The new IT Rules seek to achieve that larger social good.

(The above article appeared in The Economic Times on 10th October 2021. It is available at: https://economictimes.indiatimes.com/tech/catalysts/traceability-vs-privacy-the-real-issue-is-of-collective-security/articleshow/86721078.cms?from=mdr. The views are personal.)

Unduly Worried Over New Information Technology Rules

In a communication dated June 11, three UN Special Rapporteurs raised serious concerns over provisions of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. They claim that these provisions do not meet the standards of the rights to privacy and freedom of expression under Articles 17 and 19 of the International Covenant on Civil and Political Rights (ICCPR), and that some of the due diligence obligations placed on intermediaries may infringe upon a “wide range of human rights”.

They claim that terms such as “ethnically or racially objectionable”, “harmful to child” and “impersonates another person” are broad, lack clear definitions and may lead to arbitrary application. Nothing could be further from the truth. These terms are well defined and understood in both Indian and international law and jurisprudence. Rule 3(1)(b) of the IT Rules specifies these terms clearly as part of a user agreement that intermediaries must publish. They are aimed at bringing more transparency to how intermediaries deal with user content and do not violate the UN’s Joint Declaration on Freedom of Expression and “Fake News”, Disinformation and Propaganda.

It must also be mentioned that Rule 3(1)(d) allows for the removal of unlawful content relating to the sovereignty and integrity of India, security of the state, friendly relations with foreign states, public order, etc. only upon an order by a competent court or by the Appropriate Government. This is as per the due process specified by the Supreme Court in Shreya Singhal vs Union of India in 2015. Given the immense harm that such unlawful content can cause if freely available online, the time limit of 36 hours for its removal after due process is reasonable. Similarly, the time limit of 72 hours for providing information for investigation in response to lawful written requests from government agencies is entirely reasonable. Rule 3(2) also provides for intermediaries to establish a grievance redressal mechanism and resolve user complaints within 15 days. However, content in the nature of ‘revenge porn’ must be removed within 24 hours. Again, given the immense personal damage that such acts can cause to the dignity of women and children, this time limit is reasonable.

The liability under Rule 4(1) of the Chief Compliance Officer of a significant social media intermediary is not arbitrary. He or she can be held liable in any proceeding only after due process of law, as clearly specified in the rule itself.

The apprehensions about the Rules harming privacy are also misplaced. Rule 4(2) requires significant social media intermediaries to provide only the metadata about the first originator of a viral message, and only where it is required for the investigation of a serious offence relating to the sovereignty and integrity of India, public order, rape, child sexual abuse, etc. that is punishable with a minimum term of five years. This, again, is only after a lawful order passed by a court or a competent authority, and where there is no less intrusive means of obtaining such information. There is no provision asking the intermediary to break any encryption to obtain the contents of the message; in fact, the content is provided by the law enforcement agencies to the intermediary. The lawful investigation of crimes cannot be termed harmful to privacy. Several countries, such as the US, UK and Australia, have enacted laws that allow for far more intrusive interception of encrypted messages, including their decryption.

The concerns regarding media freedom are also misplaced. Section 5 of the UN’s Joint Declaration on Freedom of Expression and “Fake News” specifically enjoins media outlets to provide for self-regulation at the level of the individual outlet and/or the media sector. The IT Rules provide for a three-tier system of regulation, in which the government oversight mechanism comes in at the third level, only after the first two tiers of self-regulation have failed to produce a resolution. The rules clearly specify the due process for the government oversight mechanism.

India is a vibrant democracy with a long tradition of the rule of law and respect for freedom of expression and privacy. The IT Rules aim at empowering users to exercise their right to freedom of expression responsibly and at preventing the misuse of these platforms for unlawful purposes. The selective interpretation of the provisions of the IT Rules by the UN Rapporteurs is, at best, disingenuous.

(The above article appeared in The Economic Times on July 11, 2021 and is available at https://economictimes.indiatimes.com/opinion/et-commentary/unduly-worried-over-new-rules/articleshow/84323812.cms?from=mdr. The views expressed by the author are personal.)