Live AI monitoring: the naturalization of the police state

Imagine that you’re having a conversation with someone on the internet. It could be on any given platform: Snapchat, Messenger, Discord, WhatsApp, Skype, Zoom, Meet, and so on. Some of them, of course, are taken more seriously than others. Although it’s not my job to report on this, I’m starting to think nobody else will. But the scenario here is nothing like what you might expect: instead of talking about the taboo of online sex (I give up, fuck off), we could address a job interview.

If you’re interviewing at a bank for a strategic position (let’s say, UX Designer), I assume they might ask you a question about your beliefs. “If there were a person asking for money on the street while you were on your way to buy chocolate, despite trying to stick to a diet, would you give money to that person and maybe have a conversation with them, or would you follow your instincts and get what you need?” I don’t know about you, but I tend to think about what we, as a collective, need. And if someone is begging on the street, I empathize (and the only reason I don’t help more is that paper money has been all but abolished, unless it’s for drugs).

A lot to unpack there, I know. But consider for a second that, if you think like me, skip the chocolate, and help out a stranger, you’re very likely going to hear: “Okay, I think we’re done here. We wish you good luck.” How would you feel? It’s worth noting that this is a fictional example, but also that it’s the company representative asking this personally, not making an assessment based on a social profile compiled from multiple sources on the web. We haven’t even really discussed what companies do with our data, but the example illustrates the problem. Guess what? You’re not fucking getting the job.

Now imagine a situation like this: you have conflicts in your family. The internet is your safe space, where you can follow people you agree with and feel a sense of belonging for the first time in your life. Thanks, social media! You connected with people because it was possible to do so. Before, there were no platforms for you to connect on. Maybe you’d have had to email. But how would you get a person’s email address? Do you think there were hackers back then who traded information like that? Whoa, that’s a little paranoid. But you never know! Anyway, you’re on a video call with your partner. And suddenly you want to comment on taxing the rich. “It’s insane how these people can accumulate so much money that they could finance God knows how many generations of their future families, while we have literally nothing.” Uh-oh. The video stopped. You didn’t press a button. You check your internet. Everything’s stable. Maybe something happened with the router. You get up. The lights are all normal. You go back. You minimize and maximize the page. You press Ctrl+Alt+Del to open the Task Manager. Nothing unusual. You wait. And then the call drops.

You’re back to texting now. “Honey, did your thumb slip?” you ask. “No, I think we had a connection problem, but my internet is normal,” your partner says. “That’s weird, I checked mine and everything was fine too!” you add, noticing how strange the situation is getting. What could possibly have caused it? Without turning this into an episode of Black Mirror, I want everybody to know this is happening as we speak, potentially everywhere. AI is monitoring everything, and companies are pouring more and more money into its tools. It’s considered the future of business. But what it’s doing to the regular population, especially the marginalized, is staggering. As it turns out, we can’t do activism anymore, or even have private conversations about socially relevant subjects. We need to think about how social listening works, but also about how its tools can be abused, especially in violation of privacy. Companies need to rewrite their privacy policies, rethinking the power AI has and how to tackle their most pressing issues (and if I can just try one last time: monitor the people showcasing guns on camera and create better experiences for those showing their bodies, along with a lot of cool things you can do if you just open up your mind a little and have the guts to stand by your policy).

It’s scary enough to sign up for a website and see that they’re asking for your full name, address, date of birth, an ID photo, a biometric scan of your face, your IP, your bank account details, your email, and permission to run cookies on your computer, all while they pretend they’re doing nothing wrong and make you bend to the rules of the free market. Some people have started to call it the “marketplace of ideas” (in my opinion, very far from the “digital town square”); but it’s a trap: once a tech company has your data and a functioning algorithm, intellectual property doesn’t matter for you nearly as much as it matters for them, because you’re using their product, which is patented. You see, it’s a big scam.

Maybe live AI monitoring has some interesting applications. But if we let it fall into the wrong hands, people could be persecuted for their politics or out of personal vendettas, when we should have seen what was coming and stopped it before the worst could happen. This post touches on many subjects, but I hope it makes you think about why people are willing to spend trillions on AI, whether it’s because of national security, enterprise reputation, or the sheer purpose of making money, without forgetting what bad actors can do against the good people on the web who happen to give a fuck.