Hannah Fry: “AI can do some superhuman things – but so can forklifts”



BBC/Curious Films/Rory Langdon

Chances are you think a lot more about artificial intelligence today than you did five years ago. Since ChatGPT launched in November 2022, we’ve grown accustomed to interacting with AIs in most spheres of life, from chatbots and smart home technology to banking and healthcare.

But such rapid changes cause unexpected problems – as mathematician and broadcaster Hannah Fry shows in AI Confidential with Hannah Fry, a new three-part BBC documentary in which she talks to people whose lives have been transformed by the technology. She spoke to New Scientist about how we should think about AI, its role in modern mathematics – and why it will upend the global economy.

Bethan Ackerley: In the show you explore what AI is doing to our relationships and our sense of reality. Some of this stems from “AI sycophancy” – the idea that these tools are giving us what we want to hear, not what we need to hear. How does this happen?

Hannah Fry: Previous models were extremely sycophantic. Everything you’d write, they’d be like, “Oh my god, you’re so amazing, you’re the best writer I’ve ever come across”. They’re a little better now, but there’s a fundamental contradiction. We want them to be helpful and encouraging, and to make us feel important, which are the things you get from a really good human relationship.

At the same time, a really good human relationship will say the difficult things out loud. If you put too much of that into the AI, it stops being useful and starts arguing and being no fun to be around. There are also huge numbers of people who have broken up with partners because they used it as a therapist and the AI said, “Get rid of him”.

There are people who have given up their jobs. There are people who tried to use AI to make money and lost fortunes because they overestimated its capabilities. When you start including all these people, this is a very large group. I think we all know someone who has been affected by social media bubbles and radicalisation. I think this is the new version of it.

Has witnessing these issues changed the way you use artificial intelligence?

What it has changed is the way I prompt it. So now I regularly ask it to, say, tell me the thing I don’t see, find my biases. Don’t be sycophantic, tell me the hard stuff.

If we don’t want AI to be like this, how do we want it to be?

The answer probably depends on the situation. In the scientific space, there are amazing examples – I think of AlphaFold (an artificial intelligence that predicts protein structures). Incredible progress is being made in mathematics, where algorithms have an intelligence that is not like that of humans. But I don’t think you can have a good reasoning model unless it has conceptual overlap with how humans understand the world. So I think it must be more human-like.


There are certain situations where AI can do superhuman things, but so can forklifts

It seems like every day there is a news story about a maths problem that went unsolved for years but has now been cracked with the help of AI. Does that excite you?

I like to think of it as this great map of mathematics, with human mathematicians occupying a certain territory and circling around it. They don’t always see connections to things nearby. Occasionally, amazing mathematicians have found bridges between two areas of the map – the Taniyama-Shimura conjecture, for instance, where Japanese mathematicians connected two otherwise separate areas of mathematics. Then everything we knew over here applied over there, and vice versa.

I think AI is very good at saying, “Take a little look here, it looks like fertile territory that’s been underexplored,” and that’s very, very exciting. What AI is not so good at is pushing the boundaries further. And what it’s not really good at… is full abstraction, of having broader, grander theories. What people always say is, if you gave AI everything up to 1900, it wouldn’t come up with general relativity. So I’m still excited that we’re in this very sweet spot where AI will make human math faster, more efficient, more exciting, but it still needs us.

There are many misconceptions about AI. Which one would you dispel if you could?

People imagine it to be all-knowing, almost omnipotent. “The AI said this; the AI told me to buy these stocks.” There are certain situations where AI can do superhuman things, but so can forklifts. We have built tools that can do things humans cannot. That does not mean they are godlike or possess untouchable knowledge.

You’re not going to give a forklift access to your bank account…

No! Exactly. I think it’s the framing of these things. Because they speak in language and speak to us, they feel like a creature. We don’t have that problem with Wikipedia. It would be better to think of these things as a really capable Excel spreadsheet rather than a creature.

Why do we tend to anthropomorphize AI?

Our brains are tuned for cognitive social relationships. We are the smart, social species. And this is a seemingly smart, seemingly social device. Of course we anthropomorphise it. There is nothing in our past, in our design, that would lead us to do otherwise.

Is there any way to protect ourselves from that anthropomorphic urge?

I think it’s unfair to put that on individuals, really. It’s a bit like saying junk food is freely available and it’s your responsibility to make sure you don’t have too much of it. The way these interfaces are designed, the conversations they have with you – we now have really good evidence that all of this draws people further and further into this trap. And I think it’s only in the design of these systems that you’ll ever be able to stop people falling down these rabbit holes.

There are many social problems that AI highlights, such as people being very isolated and lonely. But couldn’t AI help with these problems?

If you say “OK, you can’t talk to any chatbots if you feel lonely, let’s ban that”, then you still have lonely people. And of course it would be amazing if there were a lot of human relationships for everyone, but that doesn’t happen. So given that this is the world we live in, I think there are some situations where talking to a chatbot can alleviate some of the worst issues surrounding loneliness. But these are delicate topics. When you start using technology to solve really human questions, there’s an incredible fragility to it all.

Let’s talk about the distant future. We often think about extreme scenarios with AI – say a super-intelligent AI designed to make paper clips turns us all into paper clips. How useful is it to think about such doomsday scenarios?

There was a point where I thought these crazy, far-fetched scenarios were a distraction from what really mattered, which is the decisions being made by algorithms that affect people’s lives. I’ve changed my mind in recent years, because I think it’s only by worrying about such things that you can build in the technical safety mechanisms to prevent them from happening.

So worrying is not pointless – worrying really has power. There are genuinely bad potential outcomes from AI, and the more honest we are about that, the more likely we are to mitigate them. I want this to be like Y2K, you know? I want this to be the thing we worried and worried about, and so we did the work to stop it from happening.

Do you think we will ever reach artificial general intelligence?

We don’t really have a clear definition of what AGI is. But if we take AGI to mean at least as good as most humans at any task involving a computer, then we’re almost there, really. Some people take AGI to mean beyond human ability at any possible task. I don’t know about that. But I think AGI is really not far off at all. I truly believe that in the next five to ten years we are going to see seismic changes.

What kind of changes?

I think there are going to be profound changes in the economic models that we have been accustomed to for the whole of human history. I think there will be really big leaps forward in science, which I’m very excited about, and in drug design. The whole fabric of our society is built on the idea that you exchange labour, knowledge and human intelligence for money, which you then use to buy things – I think there’s a fragility to that.

AI will almost certainly change our relationship with work. What do we need to do to ensure that AI leads to all of us working less, rather than someone being out of work altogether?

I have an answer for this – I can just see how much trouble I’ll get into if I say it out loud. OK, I’ll give you a version of it. There are just some undeniable facts, right? So far, society has been based on exchanging labour for money. Our tax system is based on taxing income, not wealth. I think those two things need to change.
