I’m gonna be honest: I received a copy of an entire book, AI for Good by Rahul Dodhia, and I merely read the introduction. It’s a topic so many people are talking about that it has become evident by now that there are essential problems with the technology, from copyright and impersonation to misinformation and incoherence. But most strikingly, the tone in which AI answers come makes us question how exactly this language was modelled, and we’re forced back to linguistic concepts and areas such as rhetoric and bias.
One thing is different from another: it really depends on whether you want advice (a number of people have reportedly been seeking that from ChatGPT and the like), instructions, information, or task automation. These four categories seem to be the most impactful, the last being advocated by many as the productivity leap we were all waiting for; yet it has generated a vibrant meme culture which argues, essentially, that we still need to do the manual work while our creative capacities are supposed to be exercised naturally, because we like it that way (such as writing music, or even blogs like this one, the effect on journalism being a big factor to consider).
People who look for advice are often desperate. Queries such as “how do I know if my girlfriend is cheating on me?” or “what can I do if people at work think I smell bad?” are popular. The concept was tested, and as wide adoption took place, companies felt the need to create an ecosystem of chatbots to attend to these more personal themes.
The process of searching for instructions took a bite out of YouTube’s famous “how to” videos, where you could go from learning a cooking recipe to fixing a problem with your WiFi configuration. Results for these searches came summarized as bullet points, sometimes straight from Google’s home page (whether people used the address bar on desktop or opened the Search app that comes with nearly every smartphone). Drawing on information from various blogs, and replacing their clicks, Google, as well as Copilot, ChatGPT and all the others, redirected us to a sort of dummies’ manual for how to live. And so we learned, in the most positive scenario, that the support channel of a website was no longer needed, because we could find the feature or tool we wanted straight from the AI answer. More and more, chatbots, which had been developing for some years before the AI boom, became intelligent: fewer people pressing numbers for specific service options on their phones, not to mention that websites took customer support a lot more seriously as they scaled up.
Automated tasks proved to be the real time saver of AI’s power. Sometimes a summary was all you had space for in your agenda, and it filled the knowledge gap; other times, multiple tasks were reduced to a single prompt: images were created, with their particulars and an inescapable artificial vibe, along with action plans; calculations and projections were made; analyses took place. The role of consultants, illustrators, even managers became a little more obsolete, to use a word people still resist when talking about AI, but it’s arguably the case with any technological advancement: who would want to post on Facebook knowing that Instagram reels can make you popular, reaching many different audiences? Of course, the details of how social media does or does not work remain shady, and that part (as well as a basic consultation of a marketing strategy’s potential on an AI platform) has still not been addressed.
But what really makes us sound the alarm is information seeking: in the information society a lot of us have read about, data brokers and all, something’s a little off, or maybe very much so. There are nuances in the modes of asking that distinguish opining from informing. In Portuguese, the expression for when a person is supposed to “pick a side” roughly translates as “party taking”. And so we’re naturally reminded that, while on LinkedIn, we should avoid debating politics, religion, sexuality and so on. But this only feels natural now because there has been a process of normalization, which happens when something is reinforced at every turn by interest groups. For sure, the business class doesn’t want strong opinions getting in the way of profits: imagine making a sale to a person who doesn’t share your political views. You want the money, not to start a community and a campaign. But in many cases businesses do seek communities and campaigns, and they have to be careful to strike the right tone, one reason why copywriting is among the most skill-demanding professions of all.
Sure, AI can help with copywriting. Imagine writing a prompt and visualizing a word cloud, together with suggestions of illustrations and templates for slide transitions, simulating a campaign. If we go further, video can be built by an AI, even with human-like voices; in case you haven’t noticed, these have been dominating YouTube, including in the format of news. I’ve seen a job offer made by an AI recruiter on my personal Facebook account. I don’t remember if I even had the option to report it appropriately, and it wouldn’t matter, because Meta informs you, in the end: “we haven’t taken the content down”.
When it comes to reporting facts, though, the journalist is the specialist. And there are styles of reporting, just as there is a variety of styles in film and books; but most of all, we are looking for information. A tendency of journalism, however, is that more than simply reporting, we need to explain, and in order to do that well, we have to be reminded of our values. How can we accurately depict a situation if we don’t give our personal take on how we feel about it, attributing a moral assessment to what’s being dealt with? AI is not moral. AI is computational. And technology has been driven in a certain direction, computers and mobile devices being sources of great distress, whether because the user is exposed to bad content or because they produce it.
It remains a role of society to judge what’s happening and state opinions. The logic of massive AI adoption relies on the principle that we have to remove bias. But everyone’s got an opinion about everything, as any cultural study will show. If you don’t care about culture, think about products: they have a reputation and associated concepts that consumers attribute to them, reacting to a company’s efforts in propaganda, to perceived quality, or even to pricing. Stating your opinion costs, apparently. But this, too, is a naturalized ideology. If everyone gave their opinion, as long as we kept some basic rules, we’d have a clearer picture of something’s impact. And since companies can grow enough to be valued in the trillions of dollars, let’s just say it’s natural that opinions be strong. What we cannot allow is a transition into a world where giving an opinion is taboo and the very democratic process of voting is taken out of the picture. We already vote for select people, and others whom we didn’t vote for make decisions that will affect the entire country, city, world. With less critical thinking (which implies fully and extensively exercising the right to give an opinion), we would succumb to a system where the only thing that matters is who paid for it. And there is more to life than that, and more to business than money.