The Ministry of Electronics and Information Technology Advisory on AI


The Ministry of Electronics and Information Technology (“MeitY”) issued an advisory on 1st March 2024, highlighting the failure of intermediaries and their platforms to exercise the due diligence required under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Rules”), and directed intermediaries to undertake the following measures with respect to their AI platforms/models:

  1. The AI platform must not generate results that contravene Indian law. MeitY now requires all intermediaries to ensure that their AI platforms/models, including but not limited to generative AI, software(s), LLMs or algorithm(s), on or through the computer resource of the intermediary, do not allow users to host, display, upload, modify, publish, transmit, store, update or share any information or content that is unlawful under Rule 3(1)(b) of the Rules or violates any other provision of the Information Technology Act, 2000.
  2. It is now the responsibility of intermediaries to ensure that their computer resources, including their AI platforms/models, do not provide or permit results that are biased or that may threaten the integrity of the electoral process in India.
  3. The government has now restricted intermediaries from making available to the Indian public, without the prior permission of the government, AI platforms/models that are at the testing stage or are unreliable for users. In addition to seeking government approval, a disclaimer-cum-consent popup needs to be added to all such platforms, informing users that the output generated by the AI platform/model may be unreliable or inaccurate.
  4. Intermediaries shall, through their terms of service and user agreements, clearly inform users of an AI platform/model at the testing stage that the information it provides could be unlawful, and that the possible consequences of dealing with such unlawful information may include disabling of access to or removal of the non-compliant information, or suspension or termination of the user's access or usage rights to their account, as the case may be, in addition to punishment under applicable laws.
  5. To counter the rising incidence of deepfake videos, the government has, through the present advisory, directed intermediaries that where their software or other resources are used to create information in the form of text, audio or audio-video that could potentially be used as misinformation or a deepfake, the information so created shall be labelled or embedded with permanent unique metadata or an identifier of the intermediary. This would make it possible to establish that such misinformation was generated using the intermediary's resources or software, to recognize the user of the software and, in turn, to identify the creator or first originator of the misinformation or deepfake.
  6. MeitY requires all intermediaries to comply with the advisory and has sought an action-taken-cum-status report by 16th March 2024, i.e., within 15 days of the release of the advisory.
  7. After the release of the present advisory, it has been clarified that it is directed at AI models developed by large platforms and not by startups. It has further been clarified that the process of seeking permission, labelling AI platforms and obtaining consent-based disclosure from users, as discussed above, serves more as an insurance policy for intermediaries, protecting them from being sued by customers.

These guidelines lay the foundation for a critical imperative in our rapidly advancing technological landscape. While AI holds immense potential to revolutionize industries and improve lives, its unchecked proliferation poses significant ethical, societal, and economic risks. Effective regulation must strike a delicate balance, fostering innovation while safeguarding against potential harms such as bias, privacy breaches, and job displacement. Collaborative efforts between policymakers, technologists, ethicists, and stakeholders are essential to establish comprehensive frameworks that promote responsible AI development and deployment. The present advisory lays the groundwork for legislation on the rising circulation of misinformation and deepfake incidents, and places the onus of care upon intermediaries to ensure that users of their platforms or AI tools are aware of the potential issues and dangers associated therewith, thereby making the AI ecosystem safer for the public and society in general. By embracing transparent, accountable, and inclusive regulatory approaches, we can harness the transformative power of AI to build a more equitable, secure, and sustainable future for all, and this advisory appears to be a small step in the right direction. However, it remains to be seen whether the present restrictions will hinder intermediaries from testing and developing their AI platforms, thereby holding back innovation in India's still-nascent AI ecosystem.

This news flash has been written for the general interest of our clients and professional colleagues and is subject to change. This news flash is not to be construed as any form of solicitation. It is not intended to be exhaustive or a substitute for legal advice. We cannot assume legal liability for any errors or omissions. Specific advice must be sought before taking any action pursuant to this news flash. For further clarification and details on the above, you may write to [email protected]