What is AI and why is it so controversial?

The number of people who have debated AI in recent months is immense, and so is the number of questions raised in those debates by specialists in basically every affected field (which, they say, means everyone in business). But while the philosophical debates tend to reflect on the replacement of human tasks and the consequent erosion of hard-won skills, and the more business-centered camp nearly always engages in vociferous talk about the importance of raising productivity in increasingly competitive markets, something is missing: what are we dealing with, after all?

It seems almost ridiculous to get caught up in a discussion without knowing where it started, giving your opinion based on a comment you’ve read by someone you don’t know, but ladies and gentlemen, that’s the current state of things. And I’m not even talking about the Reddit IPO. Still, the number of people from tech promising a brighter future full of opportunity, creative output and joy is just too big to ignore, and those people always seem to be surrounded by cameras, issuing statements that lay out the promise in official terms, which the markets then absorb in all their complex nature.

But I have something radical to say: the markets can be ridiculous. While people were celebrating the Apple Vision Pro instead of listening to what Western students had to say about conflicts in Asia, ugly things were happening. In a comment I made on Threads praising the journalist Christiane Amanpour, as she described her editorial process to comedian and journalist Jon Stewart, I couldn’t write “this woman is a legend” without receiving a reply from an account dedicated exclusively to making pro-Israel remarks, which said, at the time: “she hates Jews”. It was very unsettling, but I assume they found me through search mechanisms. And those mechanisms are at the center of AI.

AI was described by workaholic journalist Kara Swisher as “a better search”. That is awfully accurate but, on closer inspection, potentially misleading. What AI does is draw on a vast body of data (“large language model” is the term for the technology behind the generative capabilities) and hand you, in a split second, little more than a digest of the pages and pages of results Google had already been compiling. AI uses a “relevance” mechanism to cite reliable sources as much as possible, and it answers difficult questions with phrases like “it’s hard to point out exactly what the numbers are, but recent estimates show”, for example, that reading has become less common. The real question is how that relevance was decided, and, knowing a little about how AI firms work, it took humans to assess it one search result at a time.
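To make the “relevance” idea above concrete, here is a minimal sketch in the spirit of classic search ranking, using a simple TF-IDF score over a made-up toy corpus. Real systems are far more elaborate, and as noted above, humans are involved in assessing results; the documents, queries, and the `tf_idf_scores` function here are purely illustrative assumptions.

```python
import math
from collections import Counter

# A toy corpus standing in for indexed pages (illustrative data, not real sources).
docs = {
    "page_a": "reading habits survey shows reading has become less common",
    "page_b": "recipe for sourdough bread with rye flour",
    "page_c": "estimates show fewer people report reading books every year",
}

def tf_idf_scores(query, docs):
    """Rank documents by a simple TF-IDF relevance score for the query."""
    tokenized = {name: text.split() for name, text in docs.items()}
    n_docs = len(docs)
    scores = {}
    for name, words in tokenized.items():
        counts = Counter(words)
        score = 0.0
        for term in query.split():
            tf = counts[term] / len(words)                      # term frequency
            df = sum(1 for w in tokenized.values() if term in w)  # document frequency
            idf = math.log((n_docs + 1) / (df + 1)) + 1           # smoothed inverse df
            score += tf * idf
        scores[name] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = tf_idf_scores("reading less common", docs)
print(ranking[0][0])  # the page about reading habits ranks first
```

The point of the sketch is only that “relevance” is a computed score whose weighting someone had to choose, which is exactly where the human judgment described above comes in.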

The question of copyright is important to stress. AI is helpful and does cite sources, but most people rely on the answer directly without looking at where it was taken from, and that varies from language to language. This alone creates a huge problem in establishing common ground between media narratives and traditions, one that AI has yet to solve. Most notably, generative AI can successfully imitate models taken from internet posts and use them to create a similar version that goes very quickly from funny to creepy, disrespectful and unlawful. It has become common to see people post: “I want AI to do my laundry and make my bed, so I can focus on my art; not the other way around”. And when the art world faces such an existential reckoning, it has to fight for its rights to be protected, which happened with the strikes of the Writers Guild of America and in other instances too, including the challenge to a proposal to establish voice-recognition patterns so performers could license their voices for commercial use.

If we are to imagine a world where AI-powered software makes our tasks easier to accomplish on time, then good procedures are being put in place for certain areas, like logistics and finance. The ability to handle large amounts of data and organize it, by labeling and predicting, would bring safety measures more in line with what the population expects. On the other hand, this aspect alone raises the problem of how data is managed, which depends on user permission. Luckily for tech companies, that was the first thing they agreed upon; sadly for internet users in general, we don’t know where that’s going. And we have examples like the education sector suffering greatly under the “need for innovation” while the most basic tasks remain unaccomplished, like getting students to pay attention by prohibiting cell phones in class. Sometimes the “relevance” lies outside of the tech world, not increasingly within it, which creates bubbles that only segregate a society offering innovation for the rich and mass media for the poor, when even that is possible.
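The “labeling” step mentioned above can be pictured with a tiny sketch. Everything here is hypothetical (the shipment records, field names, and thresholds are invented for illustration); the point is only that organizing data means attaching labels that downstream systems can act on.

```python
# Made-up shipment records; all field names and thresholds are hypothetical.
shipments = [
    {"id": 1, "weight_kg": 3.2, "days_in_transit": 2},
    {"id": 2, "weight_kg": 120.0, "days_in_transit": 9},
    {"id": 3, "weight_kg": 140.0, "days_in_transit": 3},
]

def label_shipment(record):
    """Attach a coarse label so downstream systems can triage the record."""
    if record["days_in_transit"] > 7:
        return "delayed"
    if record["weight_kg"] > 100:
        return "heavy"
    return "normal"

labeled = [{**r, "label": label_shipment(r)} for r in shipments]
print([r["label"] for r in labeled])  # ['normal', 'delayed', 'heavy']
```

In practice the labels would be learned from data rather than hard-coded, but even this trivial version shows why data access and user permission matter: the records have to come from somewhere.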
