
Moving Towards AI Regulations

June 17, 2023 | Expert Insights

The EU has asked major tech platforms such as Google, Facebook, YouTube, and TikTok to identify content generated by AI (artificial intelligence) and label it for users. The move is part of the EU's efforts to tackle misinformation, which has become a rampant problem.

The EU has asked the 44 signatories of its code of practice against disinformation to detect AI-generated content and make it easy for users to identify.

Background

Influence campaigns have swept across the globe for the past decade or so, but they have become especially prominent during the ongoing Russia-Ukraine conflict. Both sides have actively pushed their narratives with considerable embellishment, making it difficult for impartial observers to form an objective opinion. Unidentified state and non-state entities have also allegedly run disinformation campaigns during the last two American presidential elections to influence public opinion.

Social media platforms and their algorithms already enable such campaigns, and the growth of AI-generated content makes them all the more prevalent. Spreading false information and propaganda during a war is historically an instrument of warfare; using AI and machine-learning models to do so, however, is a relatively new development.

AI-generated photos, videos, and text can accelerate the spread of fake news. Advanced AI tools and chatbots can create lifelike images depicting events that never occurred, and voice-generation software can closely mimic a living person's voice.

Machine learning can exploit human psychology with considerable accuracy. The vast amount of information available on the internet allows a model to learn, through feedback loops, what will reinforce or counter particular opinions, and to do so for a specific demographic cohort, making its impact more targeted and effective. The same capabilities underpin social media itself, and private individuals or governments can misuse them to sway the opinions of large numbers of people.

Recent advances in AI have made this risk all the more potent. Transformer networks can both produce messages and assess their impact; repeated at scale, as sketched below, this loop rapidly learns how to influence large sections of the public.
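By way of illustration only, the feedback loop described above can be reduced to a generate-score-select cycle. The sketch below is entirely hypothetical: generate_variants and measure_engagement are placeholders for a transformer-based generator and for platform engagement signals, not real systems.

    import random

    # Placeholder for a transformer-based text generator (hypothetical).
    def generate_variants(topic: str, n: int) -> list[str]:
        return [f"Message {i} about {topic}" for i in range(n)]

    # Placeholder for engagement signals (clicks, shares) fed back
    # from a platform (hypothetical).
    def measure_engagement(message: str) -> float:
        return random.random()

    def influence_loop(topic: str, rounds: int = 3, n: int = 5) -> str:
        """Generate candidate messages, score them, and iterate around
        the best performer -- the feedback loop that lets such systems
        learn what persuades a given audience."""
        best = ""
        for _ in range(rounds):
            candidates = generate_variants(topic, n)
            _, best = max((measure_engagement(m), m) for m in candidates)
            # A real campaign would condition the next round of generation
            # on the winning message and the target demographic here.
        return best

    print(influence_loop("energy prices"))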

While both the West and its adversaries (Russia, China, Iran, and North Korea) conduct extensive social media misinformation campaigns to further their agendas, the techniques and themes used by highly innovative Western agencies are far more creative, realistic, and convincing. Russia, by contrast, has been handicapped on this account, and its efforts appear amateurish at best. The Ukrainians have been winning this war of perception with the active support of the West. Popular social media platforms like TikTok provide a ready arena for the contest, with Russian and Ukrainian bloggers vying in cyberspace to showcase their enemies' brutality and highlight their own supremacy on the battlefield.

The EU is also considering a broader law to regulate AI, the AI Act. The act would ban certain AI practices, such as social scoring, in which private and public entities use personal information to assess, categorise, and score people. It would also restrict facial recognition in public places and limit AI in areas such as recruitment, where it could cause discrimination. These restrictions, however, must pass through a long legislative process before they become law.


Analysis

The issue of misinformation goes beyond the business interests of tech companies and has political ramifications. The EU is especially keen to take measures against alleged Russian disinformation because it undermines support for Ukraine in the war and affects public opinion in Europe. 

The step is also a move toward gaining more control over AI, whose capabilities have developed rapidly. Since the code of practice is voluntary, tech companies are not obligated to comply with the EU's request and will face no sanctions if they fail to do so. However, the EU has adopted a new regulation, the Digital Services Act, which will require tech companies to be more transparent about their algorithms, to take steps to prevent the spread of harmful content, and to stop targeted advertising that relies on sensitive data. Major online platforms will have to comply with these content-moderation requirements or face penalties.

Twitter has opted out of the EU's code of practice against disinformation, bringing it under closer scrutiny from the EU, which was not pleased with what it perceived as a confrontational move. Twitter will nonetheless be obligated to comply with the content-moderation rules that come into force in August.

Assessment

  • Given the recent and rapid developments in AI technology, the EU is keen to move towards AI regulation. While this first step may not be binding on tech companies, it is a precursor to the more stringent obligations that will follow.
  • The EU’s crackdown on AI-generated misinformation is also part of an effort to combat the so-called Russian disinformation campaign, which undermines support for Ukraine in the war.
  • Technical limitations may make it difficult for tech companies to reliably detect AI-generated content. They may, however, undertake to do so on a best-efforts basis; a minimal sketch of what such labelling could look like follows.
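To make the last point concrete, here is a minimal sketch of best-efforts labelling under stated assumptions: ai_likelihood stands in for whatever detector a platform might deploy, and the 0.8 threshold is an arbitrary illustrative cut-off, not an EU-mandated value.

    from dataclasses import dataclass
    from typing import Optional

    AI_LABEL_THRESHOLD = 0.8  # illustrative cut-off, not an EU-mandated value

    @dataclass
    class Post:
        post_id: str
        text: str
        label: Optional[str] = None

    def ai_likelihood(text: str) -> float:
        """Hypothetical stand-in for an AI-content detector. Real
        detectors are unreliable, which is why platforms can only
        commit to labelling on a best-efforts basis."""
        return 0.9 if "as an ai language model" in text.lower() else 0.1

    def label_if_ai_generated(post: Post) -> Post:
        # Attach a user-visible label only when the detector is confident.
        if ai_likelihood(post.text) >= AI_LABEL_THRESHOLD:
            post.label = "AI-generated content"
        return post

    post = label_if_ai_generated(Post("1", "As an AI language model, I cannot..."))
    print(post.post_id, post.label)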