
Fast AI, Big Problems: How Misinformation Spreads!

AI tools like ChatGPT gained fame but also raised worries in 2023. The EU plans new rules for generative AI in 2024, focusing on fairness, privacy, and accountability. Challenges remain around data and responsibility. Efforts aim for ethical AI worldwide, but uncertainties persist.

Generative AI, like ChatGPT and similar technologies, had a huge launch in late 2022 but caused a bit of a scare in 2023. People got worried about how these AI systems were being used, and governments got involved: the US and the European Union started looking into rules for AI in areas like political ads and how companies train these systems.

The European Union (EU) is planning rules specifically for generative AI in 2024. It already has rules protecting people’s data (the GDPR), and now it is making new ones just for AI. These rules will tackle things like how AI companies get people’s permission to use their information, making sure AI isn’t biased, and deciding who is responsible for what in the AI world.

The EU’s existing data protection rules (the GDPR) are crucial for AI that uses personal information. But there’s a clash: AI needs lots of data to learn, and getting permission from everyone for that much data is tough. The EU’s new rules will also make sure AI doesn’t retain wrong information, respects people’s privacy, and doesn’t show biases based on things like gender or race.

These new rules are expected to affect how AI works not just in Europe but around the world. They’re trying to make sure AI follows the rules, doesn’t cause problems, and respects everyone’s rights.


But a lot of the problems with AI aren’t new. They’re like the issues we’ve seen with social media for a long time. Companies building AI today are facing the same problems that big social media companies faced before: misleading information, bad working conditions for workers, and privacy issues.

These AI companies often hire low-paid workers in other countries to do the toughest jobs, just like social media companies did with content moderation. This makes it hard for outsiders to know how these AI systems, or the social networks before them, are really being run.

There’s also the problem of figuring out who’s really doing the work – is it the AI or a human worker? And when there’s a mistake or something goes wrong, it’s tough to tell whose fault it is.

These AI companies respond to problems much the way social media companies did before them. They talk about having rules and protections in place, but those can be easy to get around.

For example, when Google released a chatbot called Bard, people found out it could spread false info about things like Covid-19 and the war in Ukraine. Google said it was just an early test and they’d fix it, but it’s tough to make sure these AI systems always do the right thing.


AI is making it easier and faster to create fake stuff, like videos of politicians saying things they never actually said. This can mess with real news and information.

Some big companies are trying to put rules in place for AI-generated political ads, but that doesn’t cover all the ways fake stuff can be made and shared.

To make things worse, these big tech companies are cutting back on the teams that check for harmful content. They’re laying off lots of workers, which makes it harder to keep an eye on how these technologies are being used in bad ways.

It seems like these AI companies are rushing to put out new tech without thinking enough about the consequences. And even though governments are trying to make rules for AI, they’re a bit behind, so companies don’t really have a reason to slow down.

So, it’s not just about the technology itself – it’s also about how companies are using it and the way our society deals with these issues. There’s a concern that tech companies are focused on making money without really thinking about the problems they’re causing for everyone else.


The GDPR, in effect since 2018, focuses on data privacy rights and consent, presenting potential challenges for generative AI that relies on large-scale data. Obtaining individual consent for AI training on vast data sets may be impractical. However, “legitimate interest” might suffice as a lawful basis for processing, provided there is a compelling reason for the data usage.

Additionally, the GDPR’s tenets of rectification, erasure, and data minimization raise hard questions for generative AI. Balancing individuals’ rights to rectify or delete personal data with the need for extensive training data in AI models is a challenge.

Looking forward, the EU Artificial Intelligence Act (AIA), expected to be finalized in early 2024, will further define regulations specific to generative AI. It introduces obligations around AI model registration, testing, documentation, and risk reduction. It also addresses bias mitigation and copyright usage, and distinguishes between different types of AI entities such as Foundation Models (FMs), General Purpose AI (GPAI), and generative AI.

The AIA is a regulation proposed by the European Union to set rules and guidelines specifically for AI systems, including generative AI. It aims to define and regulate how AI technologies are developed, deployed, and used within the EU. It covers requirements for AI system providers, issues of bias and transparency, obligations for high-risk AI applications, and rules on AI training data and governance. The goal is to ensure ethical and trustworthy AI while fostering innovation and protecting individuals’ rights within the EU.

The AIA aims to address issues related to bias in training data, copyright usage in AI models, and delineation of systemic Foundation Models (SFMs) that might pose higher risks. It anticipates categorizing AI models based on their impact and resource requirements for training.

The evolving regulations aim to strike a balance between regulatory compliance and fostering innovation in generative AI. The impact of these regulations extends beyond the EU, potentially influencing global AI governance and serving as a template for other regions’ regulations.

Despite these efforts, the future direction of generative AI regulation remains uncertain, considering its emerging nature and ongoing legal challenges. The collaboration between regulators and industry stakeholders is crucial to finding a regulatory framework that benefits consumers, enterprises, and society while enabling continued innovation in generative AI.

