Do you remember Watson? That’s right, the IBM algorithm that won the quiz show Jeopardy! in 2011. Once again, machines were smarter than humans. Earlier, in 1997, a chess computer had beaten Garry Kasparov; in the years that followed, chess computers taught themselves to play chess and many other games. But now a computer had beaten humans at a more human game: Watson recognized a spoken question, understood what was being asked, and knew the answer.
Robots were the future. There would be a new “superfluous class,” predicted Yuval Harari, best-selling author and visionary popular in many intellectual circles, in his book Homo Deus. We also had to think carefully about the safety of the onrushing superintelligence, which, if so instructed, could destroy us all without batting an eyelid.
Do you remember how Watson fared? No, because that wasn’t big news. IBM thought it could revolutionize cancer care, bought companies with large databases (“the new gold”) for five billion dollars and put seven thousand people to work. But according to the oncologists who tested Watson, the miracle machine mostly produced truisms or irrelevant suggestions: a course of chemotherapy that doctors had always prescribed anyway, for example. I learned a new English word: boondoggle, a project like a bottomless pit of money and manpower that never yields anything.
The difference between winning Jeopardy! and curing cancer? The answers in a quiz are existing knowledge, while oncologists wonder whether the right questions are even being asked in cancer biology. Chess and quizzes are well-organized games of pattern recognition with fixed rules. There, algorithms work wonders. But as soon as things get more complex, the machines often stumble. Meanwhile, such models are being used for complex matters, like nitrogen deposition. The Aerius model is repeatedly misused to judge, down to the square kilometer, which farmers should stop. In this way, administrators can shift responsibility.
I expected so much more from all of it. From artificial intelligence, for example: it is 2022 and I still drive my car myself. We still die of stupid things like air pollution, traffic accidents and too much smoking, drinking or eating.
I also expected more from the underlying data science. Open source and rock-solid reliable methods: if 3 plus 3 always equals 6 and an algorithm is no more than a million of those kinds of sums, you should be able to rely on it. But it turns out that even machine learning is just human work. It suffers from the same reproducibility problems as social psychology and cancer biology, where experiments in one lab yield different results than in another. Moreover, transparency is hard to find.
It is not the first scientific hype. Genetics could one day cure any disease. Microbiome research would show exactly what someone should eat. Even chemistry was once fashionable. Chemical substances were found for color, taste, nutrition – life itself turned out to consist of chemistry. People optimistically searched the brain for the substance that gave us consciousness. After all, everything could be reduced to one molecule, right?
I am also disappointed in my own ability to appraise all those promises. My bullshit radar sometimes falters. I have now written several columns in a row about experts’ geopolitical bullshit predictions, about missed climate targets, about artificial intelligence that flops. The common thread: we humans are constantly carried away by stories. That is our deepest nature. Our ability to see reality, to know what is true, is constantly clouded by belief, zeitgeist, herd behavior, peer pressure, culture and vain self-overestimation. I investigate those delusions, because one day I may be able to recognize them, and that will bring me closer to what I want most: to know what is true.
Rosanne Hertzberger is a microbiologist.
A version of this article also appeared in the newspaper of November 19, 2022