

AI powers most of the action we see on Facebook or YouTube and, in particular, the recommendation systems that line up which posts go into your newsfeed, or what videos you should watch next - all to keep you scrolling. Millions of pieces of data are used to train AI software, allowing it to make predictions loosely similar to humans'. The hard part, for engineers, is understanding how AI makes a decision in the first place.

You can probably tell within a few milliseconds which animal is the fox and which is the dog. But can you explain how you know? Most people would find it hard to articulate what it is about the nose, ears or shape of the head that tells them which is which. But they know for sure which picture shows the fox. A similar paradox affects machine-learning models: they will often give the right answer, but their designers often can't explain how. That doesn't make them completely inscrutable.

A small but growing industry is emerging that monitors how these systems work. Their most popular task: improving an AI model's performance. Companies that use these startups also want to make sure their AI isn't making biased decisions when, for example, sifting through job applications or granting loans. Here's an example of how one of these startups works.
