The Algorithm is Training Me
2023-01-12
I was listening to a CD this morning: yes, physical media on a non-internet-connected device. When I went to stop mid-song so I could take a call, a thought surfaced in the back of my mind, “but I like this song, and I don’t want the player to think I’m stopping it because I don’t like it and never play it again.” Remember, I wasn’t online. There wasn’t any algorithm, but my subconscious behavior has been altered by the knowledge that, when I am online, my choices for how I interact with a bit of media will affect my future opportunities.
I’m not going to pretend I know how the algorithms all work, especially when a lot of the people who write and manage them can’t really explain what’s going on inside the model. I’m also not going to pretend I understand how our behaviors are changing because we know every decision we make online is tracked and fed into some sort of decision engine.
And that makes our decisions change.
I still haven’t added the obligatory phrase, “make sure you like and subscribe…” to my YouTube videos. I haven’t done a great job of encouraging comments on my postings. Part of me hates having to feed the algorithm; it feels kind of… well, like being trained to do something by what we (incorrectly) call AI.
But I see, and hear, my favorite YouTubers and podcasters using phrases like that all the time. They know that with more subscribers the algorithm is more likely to offer up their content. They know that with more comments the algorithm builds a better understanding of what their content is about, and again, impressions and clicks rise.
They have been trained to train the algorithm.
There’s no “Intelligence” in promotion algorithms. There are biases, yes, but those biases have nothing to do with my goals. The biases are “we want people to remain on our digital properties longer and consume more content,” or “we want people to buy stuff from our advertisers,” or “we want to find more people like us.” These biases aren’t generated by the algorithms but by real human beings with animal hindbrains who might not really understand their own motivations (but we can leave that for another day).
These tools are written to ingest a huge amount of data, search for patterns, and produce an outcome. That single outcome is what the data analysis is after: the probability that, because I watched Weird Al’s masterpiece “Bob,” I’m going to want to watch things from College Humor. The script comes to that conclusion because it tracked what everyone else who watched “Bob” was watching and averaged out a bunch of common “related” videos. It also keeps track of what I’m likely to click on.
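To make that concrete, here’s a minimal sketch of the kind of co-occurrence counting I’m describing. The watch histories and video titles are made up for illustration, and the related_videos helper is hypothetical; real recommenders weigh watch time, clicks, and much more, but the core “people who watched X also watched Y” idea looks something like this:

```python
from collections import Counter

# Hypothetical watch histories: each set is one viewer's watched videos.
# Titles are made up for illustration.
histories = [
    {"Bob", "CH: Photoshop", "Amish Paradise"},
    {"Bob", "CH: Photoshop", "White & Nerdy"},
    {"Bob", "White & Nerdy"},
    {"Hardware Store", "CH: Photoshop"},
]

def related_videos(video, histories, top_n=3):
    """Rank other videos by how often they co-occur with `video`
    in viewers' histories -- the "people who watched X also
    watched Y" signal."""
    counts = Counter()
    for history in histories:
        if video in history:
            counts.update(history - {video})
    return counts.most_common(top_n)

print(related_videos("Bob", histories))
# [('CH: Photoshop', 2), ('White & Nerdy', 2), ('Amish Paradise', 1)]
```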
My problem is that I’m likely to click on a lot of stuff. I like to understand what people are doing online. I like to understand the cultural references people make… but I don’t really want to be flooded with that content. There’s nothing worse than trying to find something you like and failing because the “AI” is trying to “help” by offering up a bunch of the garbage you were glancing at instead of the stuff you really like to watch or listen to.
So I find myself trying to train the algorithm by listening longer to the things I actually like or using Incognito to watch political content I don’t actually agree with. Part of my brain says, “Look how clever I am, training the algorithm to do my bidding…” but what I’ve come to realize is… the algorithm is training me to fit into the way it works, which, I guess, makes us all part of the algorithm.