Italy has become the first country to ban OpenAI’s language model ChatGPT over privacy concerns. The Italian data protection authority has accused OpenAI of unlawfully collecting users’ personal data and has also raised concerns about the lack of an age-verification system to prevent minors from being exposed to inappropriate material.
According to a report by the New York Times, Italian regulators cited a data breach on March 20 that exposed the conversations and payment details of dozens of ChatGPT users. The authority could impose a fine of up to 20 million euros (about $22 million) or 4% of OpenAI’s worldwide annual revenue, whichever is greater.
In response to the ban, OpenAI CEO Sam Altman said, “We of course defer to the Italian government and have ceased offering ChatGPT in Italy (though we think we are following all privacy laws). Italy is one of my favorite countries, and I look forward to visiting again soon!”
OpenAI has been given 20 days to provide additional information and possible remedies before a final decision is made about the future of ChatGPT in Italy. The company said it has disabled ChatGPT in Italy and remains committed to protecting people’s privacy.
“We actively work to reduce personal data in training our AI systems like ChatGPT because we want our AI to learn about the world, not about private individuals,” OpenAI’s statement said. “We also believe that AI regulation is necessary.”
While OpenAI has deliberately chosen not to make ChatGPT available in China, Russia, North Korea, and Iran, the ban in Italy is the first known instance of a government blocking an artificial intelligence tool.
ChatGPT has gained significant popularity in recent months, with Microsoft co-founder Bill Gates calling it the most “revolutionary” technology in 40 years. However, the ban highlights growing concerns about data privacy and the need for robust regulation to protect personal data.