Why did the neural network cross the road?
Can a machine learning algorithm learn to tell a joke? I’ve experimented with neural networks and jokes before, teaching them to tell knock-knock jokes, or to generate April Fools pranks. In each case, the results were… underwhelming. However, that could have been because the algorithm didn’t have much data to work with, just a couple of hundred examples of each type of joke. What happens when I give a neural network a LOT of examples to copy?
I trained textgenrnn on a dataset of >43,000 jokes (I actually started with this dataset of over 230,000 jokes, then removed the entries that weren’t really question jokes, along with every offensive keyword I could think of). What remained was all sorts of short question jokes: “What do you call a…” or “How do you…” and so on.
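The filtering step was roughly this shape (a minimal sketch, not my actual script — the file names, the question-word list, and the placeholder blocklist are all stand-ins):

```python
import re

# Stand-ins for the real filters: the actual blocklist was much longer,
# and the real dataset came from a 230k-joke corpus.
QUESTION_STARTS = ("what ", "why ", "how ", "who ", "when ", "where ")
BLOCKLIST = {"exampleword"}  # placeholder for the offensive-keyword list

def keep_joke(joke: str) -> bool:
    """Keep only question-style jokes containing no blocked words."""
    text = joke.strip().lower()
    if not text.startswith(QUESTION_STARTS):
        return False
    words = set(re.findall(r"[a-z']+", text))
    return not (words & BLOCKLIST)

jokes = [
    "Why did the chicken cross the road? To get to the other side.",
    "I accidentally told a one-liner instead of a question joke.",
]
filtered = [j for j in jokes if keep_joke(j)]  # keeps only the first joke
```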
Here are some jokes the neural network generated:
What do you call a cat does it take to screw in a light bulb?
They could worry the banana.
What did the new ants say after a dog?
It was a pirate.
Why did the monsters change a lightbulb?
And a cow the cough.
What do you call a pastor cross the road?
He take the chicken.
What do you call a farts of tea?
He was calling the game of the dry.
As you can see, it had… difficulties. It struggled to generate a question and answer that were grammatical, let alone a pair that went together and made sense. They’re only jokes if you’re paying attention to the surface appearance and not the meaning. Joke-shaped objects that contain no actual joke.
I tried retraining the algorithm just with “What do you get when you cross a <blank> with a <blank>?” jokes. It helped… maybe a little?
What do you get when you cross a pirate and a little butter?
What do you get when you cross a dinosaur?
They get a lawyers.
What do you get when you cross a police at the polming light bulb?
One Because Electrons
So I went back to the more general question dataset, and this time decided to see if the neural net could at least try to come up with the punchlines to common jokes. For example, “Why is six afraid of seven?” was in the dataset twenty times with slight variations. Could it answer this one correctly?
Why was six afraid of seven?
Because he doesn’t have a birthday?
The answer is no. In fact, it couldn’t answer ANY of the most common jokes.
Why did the chicken cross the road?
To screw in a light bulb.
What’s black and white and red all over?
A confuse on the bull!
What’s brown and sticky?
A potato, on the space.
Why did the scarecrow get an award?
Because they have the one cheese.
Why should you never trust an atom?
It doesn’t scroom.
Why do seagulls fly over the sea?
Because they know …………..
I mean, it COULD answer them. Just rather… creatively. In a way, this is good news - it means the neural network didn’t simply memorize its input data (a phenomenon called overfitting, the bane of machine learning programmers). It should be prepared to answer not just jokes it’s seen, but questions of all sorts.
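A quick way to check for that kind of memorization is to see whether a generated joke appears verbatim in the training set. Here’s a toy sketch (the tiny in-memory training set is hypothetical, not my real data):

```python
# Toy memorization check: is a generated joke identical
# (ignoring case and extra whitespace) to a training joke?
def normalize(s: str) -> str:
    return " ".join(s.lower().split())

training_jokes = {
    normalize("Why did the chicken cross the road? To get to the other side."),
    normalize("What's brown and sticky? A stick."),
}

def is_memorized(generated: str) -> bool:
    return normalize(generated) in training_jokes

# The punchlines above would score as novel (for better or worse):
print(is_memorized("What's brown and sticky? A potato, on the space."))  # False
```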
How many engineers does it take to change a light bulb?
A star an alligator and because they are bees.
What is a neural network’s favorite pastime?
A bacon on a book with a rooster.
Why did the neural network crash?
A jacks when he was the stop get a cheese problems? Because they have to kill? Because they have dogs.
What did the neural network say to the programmer?