
Can You Really Trust an AI If You Don’t Know How It Works?

Many of the most powerful AI models today are based on large neural networks with millions or billions of parameters.
Trusting AI Without Understanding How It Works / Illustration / aidigitalx

Understanding how an AI works may not always be feasible or necessary.

We generally don’t need to understand the inner workings of the technologies we use every day, such as smartphones, cars, or even our own brains. As long as a system behaves reliably and doesn’t exhibit serious flaws or unintended behaviors, some would argue, how it achieves its results is not crucial.

AI systems like the ones being deployed to evaluate loan applications or analyze medical scans are typically “black boxes” – computer programs whose precise operating mechanics and training data are closely guarded secrets. While we may know the inputs and outputs, the reasoning behind how the AI transforms the former into the latter remains opaque.
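To make that "known inputs, known outputs, opaque reasoning" point concrete, here is a minimal, purely illustrative Python sketch of what such a system looks like from the outside. The class name, feature names, and weights are invented for this example and are not drawn from any real lending product: the caller supplies applicant data and receives a decision, with no visibility into how one became the other.

```python
# Hypothetical sketch of a "black box" loan-screening model as described above.
# The outside world sees only inputs and outputs; the parameters and the data
# used to choose them stay hidden behind the interface.

class BlackBoxLoanModel:
    """Stand-in for a proprietary model whose internals are not exposed."""

    def __init__(self):
        # In a real deployment these values would come from undisclosed
        # training data; here they are arbitrary placeholders.
        self._weights = {"income": 0.4, "debt_ratio": -0.7, "credit_years": 0.2}
        self._bias = -0.1

    def predict(self, applicant: dict) -> str:
        # The only contract visible to the caller: features in, decision out.
        score = self._bias + sum(self._weights[k] * applicant[k] for k in self._weights)
        return "approve" if score > 0 else "reject"


applicant = {"income": 0.8, "debt_ratio": 0.5, "credit_years": 0.3}
print(BlackBoxLoanModel().predict(applicant))  # a decision arrives with no explanation attached
```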


“There is a fundamental tension between AI’s incredible power and accuracy, and the fact that we often have no idea what inner procedures and data distributions led it to arrive at a particular decision,” said Eliza Cavendish, a professor of computer science and ethics. “It’s like an advanced calculator that always gives you the right answer but never shows its work.”

This lack of transparency has critics sounding the alarm about potential biases and errors being baked into influential AI systems without anyone realizing it. If an AI rejects someone’s loan application, for example, how can we be sure it didn’t discriminate against them based on race or gender? If an AI diagnostic tool misses a tumor, how do we identify and fix the flaw?

“You wouldn’t want to fly on an aircraft whose aerodynamics were a complete mystery,” said Raj Patel, a researcher who studies ethical AI. “Yet that’s essentially what we’re being asked to do when we put blind trust in these black box systems with societal effects.”


The companies developing AI systems argue that shielding their proprietary code and data is essential to maintaining competitive advantages over rivals. There are also legitimate security concerns around publicly exposing an AI’s inner workings in a way that could potentially allow bad actors to game or attack the system.

But the debate speaks to an emerging societal reckoning about what level of transparency and accountability we should demand from advanced AI systems, especially those being used to inform high-stakes decisions that affect human lives.

“Just as we ultimately decided that the potential dangers of human pilots required implementing robust certification and oversight processes, the same will likely need to happen for AI,” said Patel. “The technology is too important to society to allow it to remain a black box indefinitely.”


Kevin Land

Kevin Land is an AI entrepreneur and writer who explores the entrepreneurial side of AI development. His writing focuses on the challenges and rewards of AI startups.