When algorithms surprise us

Machine learning algorithms are not like other computer programs. In the usual sort of programming, a human programmer tells the computer exactly what to do. In machine learning, the human programmer merely gives the algorithm the problem to be solved, and through trial-and-error the algorithm has to figure out how to solve it.

This often works really well - machine learning algorithms are widely used for facial recognition, language translation, financial modeling, image recognition, and ad delivery. If you’ve been online today, you’ve probably interacted with a machine learning algorithm.

But it doesn’t always work well. Sometimes the programmer will think the algorithm is doing really well, only to look closer and discover it’s solved an entirely different problem from the one the programmer intended. For example, I looked earlier at an image recognition algorithm that was supposed to recognize sheep but learned to recognize grass instead, and kept labeling empty green fields as containing sheep.

When machine learning algorithms solve problems in unexpected ways, programmers find them, okay yes, annoying sometimes, but often purely delightful.

So delightful, in fact, that in 2018 a group of researchers wrote a fascinating paper that collected dozens of anecdotes that “elicited surprise and wonder from the researchers studying them”. The paper is well worth reading, as are the original references, but here are several of my favorite examples.

Bending the rules to win

First, there’s a long tradition of using simulated creatures to study how different forms of locomotion might have evolved, or to come up with new ways for robots to walk.

Why walk when you can flop? In one example, a simulated robot was supposed to evolve to travel as quickly as possible. But rather than evolve legs, it simply assembled itself into a tall tower, then fell over. Some of these robots even learned to turn their falling motion into a somersault, adding extra distance.

[Image: Robot is simply a tower that falls over.]

Why jump when you can can-can? Another set of simulated robots were supposed to evolve into a form that could jump. But the programmer had originally defined jumping height as the height of the tallest block so - once again - the robots evolved to be very tall. The programmer tried to solve this by defining jumping height as the height of the block that was originally the *lowest*. In response, the robot developed a long skinny leg that it could kick high into the air in a sort of robot can-can.

[Image: Tall robot flinging a leg into the air instead of jumping]

Hacking the Matrix for superpowers

Potential energy is not the only energy source these simulated robots learned to exploit. It turns out that, like in real life, if an energy source is available, something will evolve to use it.

Floating-point rounding errors as an energy source: In one simulation, robots learned that small rounding errors in the math that calculated forces meant that they got a tiny bit of extra energy with motion. They learned to twitch rapidly, generating lots of free energy that they could harness. The programmer noticed the problem when the robots started swimming extraordinarily fast.

Harvesting energy from crashing into the floor: Another simulation had some problems with its collision detection math that robots learned to use. If they managed to glitch themselves into the floor (they first learned to manipulate time to make this possible), the collision detection would realize they weren’t supposed to be in the floor and would shoot them upward. The robots learned to vibrate rapidly against the floor, colliding repeatedly with it to generate extra energy.

[Image: robot moving by vibrating into the floor]

Clap to fly: In another simulation, jumping bots learned to harness a different collision-detection bug that would propel them high into the air every time they crashed two of their own body parts together. Commercial flight would look a lot different if this worked in real life.

Discovering secret moves: Computer game-playing algorithms are really good at discovering the kind of Matrix glitches that humans usually learn to exploit for speed-running. An algorithm playing the old Atari game Q*bert discovered a previously-unknown bug where it could perform a very specific series of moves at the end of one level and instead of moving to the next level, all the platforms would begin blinking rapidly and the player would start accumulating huge numbers of points.

A Doom-playing algorithm also figured out a special combination of movements that would stop enemies from firing fireballs - but it only works in the algorithm’s hallucinated dream-version of Doom. Delightfully, you can play the dream-version here

[Image: Q*bert player is accumulating a suspicious number of points, considering that it’s not doing much of anything]

Shooting the moon: In one of the more chilling examples, there was an algorithm that was supposed to figure out how to apply a minimum force to a plane landing on an aircraft carrier. Instead, it discovered that if it applied a *huge* force, it would overflow the program’s memory and would register instead as a very *small* force. The pilot would die but, hey, perfect score.

Destructive problem-solving

Something as apparently benign as a list-sorting algorithm could also solve problems in rather innocently sinister ways.

Well, it’s not unsorted: For example, there was an algorithm that was supposed to sort a list of numbers. Instead, it learned to delete the list, so that it was no longer technically unsorted.

Solving the Kobayashi Maru test: Another algorithm was supposed to minimize the difference between its own answers and the correct answers. It found where the answers were stored and deleted them, so it would get a perfect score.

How to win at tic-tac-toe: In another beautiful example, in 1997 some programmers built algorithms that could play tic-tac-toe remotely against each other on an infinitely large board. One programmer, rather than designing their algorithm’s strategy, let it evolve its own approach. Surprisingly, the algorithm suddenly began winning all its games. It turned out that the algorithm’s strategy was to place its move very, very far away, so that when its opponent’s computer tried to simulate the new greatly-expanded board, the huge gameboard would cause it to run out of memory and crash, forfeiting the game.

In conclusion

When machine learning solves problems, it can come up with solutions that range from clever to downright uncanny.

Biological evolution works this way, too - as any biologist will tell you, living organisms find the strangest solutions to problems, and the strangest energy sources to exploit. Sometimes I think the surest sign that we’re not living in a computer simulation is that if we were, some microbe would have learned to exploit its flaws.

So as programmers we have to be very very careful that our algorithms are solving the problems that we meant for them to solve, not exploiting shortcuts. If there’s another, easier route toward solving a given problem, machine learning will likely find it.

Fortunately for us, “kill all humans” is really really hard. If “bake an unbelievably delicious cake” also solves the problem and is easier than “kill all humans”, then machine learning will go with cake.

Mailing list plug

If you become an AI Weirdness supporter, there will be cake!

Subscribe now