Bots have been getting some extra love on the internet these days, particularly in the form of the I Forced A Bot meme. So I thought it would be a good time to revisit all things bot, neural network and AI.
First, let’s take a moment to discuss and debunk the I Forced A Bot meme. The gist is that a bot sucks in 1,000 hours of X, trains a neural network, and then recreates X, usually to hilarious effect, like the example below.
I forced a bot to watch over 1,000 hours of Olive Garden commercials and then asked it to write an Olive Garden commercial of its own. Here is the first page. pic.twitter.com/CKiDQTmLeH
— Keaton Patti (@KeatonPatti) June 13, 2018
The meme is said to have started with a piece of Harry Potter fanfic that was written using a predictive text keyboard trained on the Harry Potter books — plus a little bit of human intervention — and released on Twitter by @BotnikStudios.
Hilarious, sure, but this meme really started to pique my interest when Janelle Shane, AI researcher and chief blogger at AIweirdness, chimed in to explain what makes these memes so different from actual content generated by a machine learning algorithm. One point Shane makes is that it’s really hard for text-generating neural nets to write things like scripts and recipes because they have very short memories. A bot that writes a recipe, for example, will have forgotten the ingredients by the time it gets to the preparation instructions. In stories or commercials, that means great difficulty recalling characters or sustaining any kind of meaningful narrative arc.
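Shane’s point about short memories can be made concrete with a toy sketch, which is far cruder than a real neural net: a generator that conditions only on a fixed window of recent characters literally cannot see anything older. The recipe text and the 40-character window below are invented for illustration.

```python
# Toy illustration (not Shane's actual models): a fixed-window
# text generator only "sees" the last `window` characters.
# Everything earlier is invisible to it.

def model_context(text: str, position: int, window: int = 40) -> str:
    """Return the only text a fixed-window model can condition on."""
    return text[max(0, position - window):position]

recipe = (
    "Ingredients: 2 eggs, 1 cup flour, butter. "
    "Preparation: beat the eggs, then fold in the"
)

# By the time the model reaches the preparation steps, the
# ingredient list has fallen outside its 40-character window:
view = model_context(recipe, len(recipe))
print("eggs" in view)   # still visible, it was mentioned recently
print("flour" in view)  # gone; the model has "forgotten" it
```

Real recurrent networks fade gradually rather than cutting off at a hard boundary, but the effect is the same: by the preparation steps, the flour is gone.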
These bots — and the deep learning algorithms behind them — typically work by processing training data over and over again to improve their understanding of the data, essentially by performing increasingly better pattern matching. But as Shane and many others have long pointed out, these types of algorithms are typically very bad at learning common sense. We’ve all heard the one about how changing a single pixel can cause a neural net to confuse a dog with a stealth bomber. For a great down-to-earth overview of how deep learning works and why it’s in fact essential that deep learning algorithms have such selective memories, check out this piece by Natalie Wolchover for Quanta.
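To make the “over and over again” part concrete, here’s a minimal sketch of that training loop using a single perceptron rather than a deep network (the learning rate and epoch count are arbitrary choices for the toy): each pass over the same handful of examples nudges the weights a little, and the pattern matching improves with repetition.

```python
# Minimal sketch of training by repetition: a perceptron adjusts its
# weights slightly on every pass (epoch) over the same data. This is
# a single artificial neuron, not a deep network, but the loop has
# the same shape.

def train_perceptron(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights start at zero
    b = 0.0         # bias
    for _ in range(epochs):            # same data, many passes
        for (x1, x2), target in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred        # 0 once the pattern is matched
            w[0] += lr * err * x1      # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the logical AND pattern from four examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After a handful of epochs the weights stop changing and the neuron classifies all four cases correctly; the “understanding” is nothing more than a set of numbers tuned by repetition.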
Shane has blogged about lots of strangely funny constructions that can come out of trained neural nets. For example, what sort of Halloween costumes might a neural network come up with if it were fed the inventories of costume warehouses? Or what nail polish colors do you get when you feed existing polish colors into a neural network trained on heavy metal band names?
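Shane trains character-level neural networks for these experiments; as a much rougher stand-in, here’s a Markov-chain sketch of the same basic idea: learn which letter tends to follow each two-letter context in a word list, then sample new words one letter at a time. The little costume list is invented for illustration.

```python
# A crude stand-in for a character-level generator: a Markov chain
# that records which letter follows each two-letter context in the
# training words, then samples new words from those statistics.
import random
from collections import defaultdict

def build_chain(names, order=2):
    chain = defaultdict(list)
    for name in names:
        # "^" pads the start of each word, "$" marks the end.
        padded = "^" * order + name.lower() + "$"
        for i in range(len(padded) - order):
            chain[padded[i:i + order]].append(padded[i + order])
    return chain

def generate(chain, order=2, max_len=20):
    context = "^" * order
    out = []
    while len(out) < max_len:
        nxt = random.choice(chain[context])
        if nxt == "$":          # the chain decided the word is over
            break
        out.append(nxt)
        context = context[1:] + nxt  # slide the context window
    return "".join(out)

# Hypothetical training data standing in for a costume inventory.
names = ["vampire", "pirate", "zombie", "witch", "werewolf"]
chain = build_chain(names)
print(generate(chain))  # e.g. a mashup like "wirate" or "zombire"
```

With only two letters of context, the output is exactly the kind of plausible-at-a-glance nonsense Shane’s posts celebrate, for the same underlying reason: the generator knows local letter patterns, not meanings.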
Machine learning made another appearance in entertainment news this summer when the pop singer Taryn Southern was called out for using AI to write her music. To me it sounds more like she wrote the music collaboratively with some AI software, which I think is pretty cool.