Artificially Speaking

I watch a lot of YouTube. Anyone else do that? My habits tend towards rock and roll, stand-up comedy, and a lot of lectures, interviews, talks, and other things I can find from philosophers, writers, historians, anthropologists, artists, and generally people with knowledge about things I'm interested in. But, okay, you caught me, I also watch some right-wing content to see what's going on in crazy town.

The right-wing propaganda is a very small amount of what I engage with, but the YouTube algorithm has made it the majority of what it decides to show me. Mainly, and this is just my theory, because it thinks engagement should mostly be combative. It doesn't really think, of course, not like we do, but it analyzes, makes assumptions, creates tests, retools, learns, and then makes new assumptions, all in an attempt to meet some sort of defined metric. That's basic modern advertising. Its assumptions, in theory, are not like ours because there's no emotion involved. So given that I don't engage with the right-wing crap it presents me, but that it still delivers it disproportionately, I have decided the reason is a man-made one: that I, someone clearly considered liberal based on everything else I engage with, would naturally be tempted to engage with what pisses me off. It's actually a pretty widely accepted theory.
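That analyze-assume-test-retool loop is, at its core, just engagement optimization. Here's a toy sketch of the idea, nothing like YouTube's actual system; every category, score, and number is invented for illustration. The point is that the loop has no opinion about the content, only about what scored:

```python
import random

# Toy engagement optimizer: the system doesn't "think", it just
# nudges recommendations toward whatever category scored best.
# All categories and numbers here are invented for illustration.

def update_weights(weights, category, engaged, lr=0.1):
    """Retool: shift a category's weight up or down after one observation."""
    target = 1.0 if engaged else 0.0
    weights[category] += lr * (target - weights[category])
    return weights

def pick_category(weights, explore=0.1, rng=random):
    """Mostly exploit the top-weighted category, occasionally explore."""
    if rng.random() < explore:
        return rng.choice(list(weights))
    return max(weights, key=weights.get)

weights = {"science": 0.5, "comedy": 0.5, "outrage": 0.5}
# Simulate a viewer whose hate-clicks on outrage content count as engagement.
for _ in range(100):
    cat = pick_category(weights)
    engaged = (cat == "outrage")  # the metric can't tell anger from interest
    update_weights(weights, cat, engaged)

print(max(weights, key=weights.get))  # prints "outrage"
```

Notice the feed drifts toward outrage even though the viewer never "liked" any of it; the only signal the loop sees is the engagement metric.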

If the metric were just to keep me interested I should get a lot more Carl Sagan and a lot less Tomi Lahren. I've never searched for Tomi Lahren, but I've searched plenty of times for Carl Sagan and religion, or Carl Sagan and the military budget (and you should too, by the way). So it seems to me that the algorithms are slanted by human assumptions. And that's how the socials get used to sell advertising, but also to spread propaganda.

But before I get to that, let's talk about advertisements. If you're like me, you generally have a Pavlovian impulse to click "skip ads" as soon as it becomes available, or to scroll faster so that any perceived screen engagement won't be detected. But because I worked in marketing/publishing/advertising for a while, I think about the ads that I quickly blow past. On social media, and on most modern websites and apps, the ads that are presented would never have been allowed to run on television or in print a couple of decades ago. There were rules back then against selling snake oil and making outlandish and dangerous claims.

Still, just like the organic feeds, the ads aren't really very effective with me. I don't engage, mainly because I'm not interested in that one weird ice trick that cures diabetes. Or what body type I am. I'm not going to fall for some scam that claims the government will give me six hundred dollars a month. I like the soap I have and don't want to see ads about men shaving their private parts. Also, when it comes to the five foods I should avoid to stay healthy, I'm pretty sure that's not how things work. I also don't need a special edition from the Epoch Times about how January 6th was a hoax. And finally, the least worrisome, the guitar guy that played with Eddie Money once. Dude, it helps to know scales when playing guitar. Okay? So just stop.

So while currently all of this seems stupid and frivolous, we should recognize that even in its current and clumsy form all of this junk is having an effect on some people. Like storming the Capitol, or shooting up nightclubs, or millions of teenagers being depressed. Oh, right, it's actually already a plague. Oops, I forgot. I'm so used to seeing all of this crap it just seems normal. It's become the water we swim in online, and unless you spend time thinking about your own mental health, you can really get screwed up.

Well, as messed up as it is we haven’t done much to regulate it and that means we ain’t seen nothing yet.

Sorry about the long setup, so I'll wrap the rest up quickly. In a nutshell, all of this rather primitive tech, which already has an effect and whose general workings we've already discussed, is about to get extremely smart, very quickly. And worse, the army of lame content creators whipping up this deceptive advertising and destructive propaganda will be replaced by this same technology. For the first time, we have a tool that can be told to target an individual with a specific success metric, track that individual, make ads and content for that individual, and then deliver it wherever that individual happens to be.

Right now there’s a ton of data collected on anyone who spends time online. But there hasn’t been a great way to really harness all of that data. Until now.  Now we have the tools to analyze tons of data, share the results instantly with other systems, and create ads and content, images, stories, music, videos, you name it.  And then tailor-make that content to try to achieve a desired effect not on just a demographic, but on a sole individual human being.  You and me. Until recently both the tech and the computing power have kept that from being possible.  But that’s rapidly changing.

Very soon, all those goofy ads I described above, which you may or may not have any familiarity with, will take on new characteristics. All advertising online will soon become tailored to you, specifically. It's a pretty easy concept to get, but one that we're most likely not prepared for nevertheless. And neither are the people who are about to unleash it. The largest advertising agency in the world, WPP, is going all in on AI advertising. And so will most of the others. I don't mean to be cynical, but they probably aren't really going through the due diligence to, oh, whatever, never mind. No one is, and you get the idea.

Very soon marketers will have the ability to segment a population, determine some metrics for success, and then tell an AI tool to try to achieve them. They won't have to know what's being delivered anymore. There are few ethical guidelines to adhere to. No time-consuming sign-offs on creative being developed. In fact, there can be so many iterations and adaptations developed in real time and delivered to specific individuals that only one person will ever see that particular ad or piece of content. These advertising AI tools, designed to make you eat more Cool Ranch Doritos, for example, will be able to track you across platforms, make assumptions about your behavior, create and deliver ads and content, analyze, retool, make new assumptions, and so on and so on, without anyone but you ever seeing what they come up with. Believe me, the people selling things would love nothing more than to put in data and watch successful data come back without having to deal with everything in between. No stupid creative types to contend with. No production houses. No writers. This is a dream come true.
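The pipeline I just described, set a metric, generate a one-off ad per person, measure, retool, is a closed loop with no human in it. Here's a hypothetical sketch of that shape; no real ad platform works exactly this way, and every class name, theme, and number is my own invention:

```python
import random

# Hypothetical closed loop: per-individual ad generation and retooling.
# Every name here is invented; no real ad-tech API is being used.

class PersonalizedAdLoop:
    def __init__(self, themes):
        # One running score per creative theme, for one individual.
        self.scores = {t: 0.5 for t in themes}

    def generate_ad(self, theme):
        """Stand-in for a generative model: a unique variant only one person ever sees."""
        return f"[{theme} ad variant #{random.randint(1000, 9999)}]"

    def record(self, theme, converted, lr=0.2):
        """Retool: shift toward whatever moved the metric (say, Doritos sales)."""
        self.scores[theme] += lr * ((1.0 if converted else 0.0) - self.scores[theme])

    def next_theme(self):
        """Exploit whichever theme has worked best on this individual so far."""
        return max(self.scores, key=self.scores.get)

loop = PersonalizedAdLoop(["nostalgia", "fear-of-missing-out", "humor"])
# Simulate one person who only ever responds to humor.
for _ in range(50):
    theme = random.choice(list(loop.scores)) if random.random() < 0.2 else loop.next_theme()
    ad = loop.generate_ad(theme)   # delivered; no human ever reviews it
    loop.record(theme, converted=(theme == "humor"))

print(loop.next_theme())  # prints "humor"
```

The marketer only sees the conversion numbers coming back. The fifty unique ad variants in between, which lever got pulled and how, exist for an audience of exactly one.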

You’ll basically have an army of entities constantly analyzing your behavior and sharing what they learn with each other to get a variety of outcomes from your future behavior. Or attitude. Or outlook. Depends on what the desired effect is. Depends on what’s being sold.

Sure, someone with ethics in the pipeline may say that the tool should adhere to brand guidelines, be wholesome and fun, trustworthy and reliable, and all that jazz. Or maybe not. So who knows what dark turns, playing on insecurities, might occur. What knowledge about someone's habits may be exploited? These agencies will mostly be concerned with the success rate based on the desired metrics. Are people indeed eating more Cool Ranch Doritos?

Or buying electric vehicles? Or switching cable companies? Watching more Disney movies? Or voting for a political candidate? Buying into a particular agenda? Believing that this person or that person is the enemy of their freedom?

Yep, the same tech used to sell Orville's Lightly Salted Popcorn will also be generating most of the advertising in the 2024 U.S. presidential campaign cycle. And it will also be deployed, by god knows who else, to develop reams and reams of disinformation. Very convincing misinformation. Fake news, fake pictures, fake videos, depicting who knows what. We've already seen what humans can do with simple editing techniques and some Photoshop. What can an army of tireless, undistracted supercomputers produce? We're about to find out.

So you might think, as I do, that these algorithms don't really have much of an effect on your decision-making. But they already do for a lot of people. We've seen the studies, and we've done almost nothing to prepare ourselves for the next wave of tech that's about to wash over us. Yuval Harari, someone you might want to check out if you haven't already, thinks our best defense is to get to know ourselves better. The machines will know more about our online behavior, which includes location data, than you or I could possibly remember about ourselves. But they won't really know us in any sense we can relate to. It's hard for us to perceive a world without emotion. All they'll know is whether they were able to affect you, because that's all they will be doing. All day, all night, without needing to sleep, or eat, or stop, or get distracted. Producing endless amounts of ads, misinformation, disinformation, and outright fabrication, all designed to get into our heads to make us do or think something defined by only god knows who.

It’s sort of like we’ll all be in a total war with an army of emotionless supercomputers only loosely monitored by humans whose intentions we may never fully know. And who probably don’t care what the methods of success are, just as long as there is success, whatever that means to them.  Whether it’s selling you Doritos or getting people to think that the last election was stolen.   Oh wait a minute, did I say sort of like? I meant it’s going to be exactly like that.