Date of Ban: March 31, 2023
The Italian Data Protection Authority (an Italian government agency tasked with protecting the privacy of Italian internet users) has temporarily banned ChatGPT in Italy, citing a lack of data controls. In particular, the regulator cites several issues:
- ChatGPT suffered a data breach on March 20, 2023 that, according to the regulator, violated the EU’s GDPR rules. The incident was caused by a bug in OpenAI’s code that allowed some users to see the titles of other users’ chat conversations.
- The regulator claims OpenAI had “no legal basis” for using the data that it used “to train the algorithms that power the platform.”
- ChatGPT does not verify the age of users and does not stop children under 13 from using the chatbot. The Italian regulator claims this exposes children to “responses that are absolutely unsuitable to their degree of development and self-awareness.”
- The regulator claims that because ChatGPT does not consistently provide accurate information, the personal information the chatbot discusses can be mishandled.
In addition to demanding that OpenAI stop operating ChatGPT within Italy, the regulator has given OpenAI 20 days to show proof that it has taken steps to correct the issues enumerated above, or else it will impose fines.
At minimum, OpenAI would need to add age checks, update its privacy policies and data usage disclosures, and request more explicit user permissions in order to comply with Italy’s demands. However, OpenAI may face a more difficult challenge if the regulator insists that all identifiable Italian people and their information be removed from GPT’s training data.
If OpenAI doesn’t satisfy the regulator’s demands, the company could be hit with a GDPR fine of up to €20 million (approximately $22 million) or 4% of its global revenue, whichever is greater.
This isn’t the first time the Italian regulator has banned an AI chatbot. In February, the regulator banned Replika.ai, an emotional-companion and erotic-discussion chatbot.
Italy is the first European country to make such bold moves against AI companies, but the rest of Europe is not far behind. The EU is currently in the late stages of preparing a new regulatory framework that goes beyond GDPR to more specifically regulate how AI is used in Europe. If passed, the new regulations would (among other things) explicitly regulate the types of data that may be used to train AI systems. The fine for violating the AI law would also be higher than the fine for violating GDPR: €30 million or 6% of global revenue, whichever is greater.
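The "whichever is greater" fine structure under both regimes can be sketched in a few lines; the revenue figure below is purely hypothetical and is only there to show how the two caps compare for a large company:

```python
def max_fine(global_revenue: float, flat_cap: float, revenue_pct: float) -> float:
    """Return the maximum possible fine: the greater of a flat cap
    or a percentage of global annual revenue."""
    return max(flat_cap, revenue_pct * global_revenue)

# Hypothetical annual global revenue, for illustration only.
revenue = 1_000_000_000  # €1 billion

gdpr_fine = max_fine(revenue, 20_000_000, 0.04)    # GDPR: €20M or 4%
ai_act_fine = max_fine(revenue, 30_000_000, 0.06)  # proposed AI law: €30M or 6%

print(gdpr_fine)    # 40000000.0 (4% of €1B exceeds the €20M cap)
print(ai_act_fine)  # 60000000.0 (6% of €1B exceeds the €30M cap)
```

Note that for companies with global revenue below €500 million, the flat €20M/€30M caps dominate; above that threshold, the revenue percentage becomes the binding figure.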
For entrepreneurs and investors, there are three interesting consequences of these developments:
- From a financial standpoint, AI startups should prioritize building products for the U.S. market over the European market so they can move faster. Once they have solidified enough market share, they can then spend the money required to expand into Europe compliantly.
- As AI gains wider adoption in the U.S. than in Europe, the economic productivity gap will widen, helping U.S. companies outperform their European counterparts.
- If the EU’s new AI legislation passes, it will create an opportunity for entrepreneurs to create AI compliance startups.