1. Artificial intelligence: from robots to data

On robots

“Artificial intelligence could wipe out humanity when it gets too clever, as humans will be like ants.” Superimposed on top of a photo of a giant robot in The Independent, this quote from physicist Stephen Hawking made its way around the world. Experts, however, attached little importance to it. For Héctor Geffner, ICREA professor at Pompeu Fabra University, robots are still clumsy and “human-level general intelligence in machines only exists in films for now. They can react but aren’t very flexible and don’t have any mental life.”

Carme Torras, a researcher at the CSIC Institut de Robòtica i Informàtica Industrial, believes that “today humans are more dangerous than robots”, but defends their current development. Her group is working on care robots that can help dependent people dress themselves, and she explains that there are already robots working in logistics or as receptionists, robots that sort what can go into the trash or the dishwasher, and even pet robots, developed to display and provoke emotion. The latter opens up a whole ethical field, known as “robot ethics”, which Torras says is motivated in part by the humanization of robots: “the perception that there is a human inside.” For Torras, the big question is: “How will human nature evolve as a result of increased interaction between humans and robots? And, by extension, can this evolution be predicted?”

One of the problems in studying this evolution is the limited language available for describing the future. In Heidegger’s words, “It is through technique that we perceive the sea as navigable.” This is why many of the dilemmas are found in science fiction, one of the roles of which, according to Torras, is to “anticipate possible scenarios.” These can be seen in books like those by Asimov, Philip K. Dick and Ray Bradbury, in films like “Eva” or series like “Black Mirror”. Torras herself has written a science fiction novel entitled “La mutación sentimental”, through which she is developing an educational project.

On data

The growing power of computers and, above all, the advent of the Internet and the exponential increase in available data have allowed artificial intelligence to permeate many aspects of our lives. Algorithms are behind much of the news we read and the ads we see, but they can also be behind our credit rating or loan approval, the first screening of our resumes, or studies of our health records. And it isn’t outlandish to think that they will also be guiding our vehicles, which in the future will surely be fully driverless.

One of the most widely studied areas is social media and access to information, which has noteworthy implications and dangers: for example, in influencing elections, as it has been suggested happened when artificial intelligence was used to personalize the messages of Donald Trump’s campaign.

Beyond one-off campaigns, according to Cornelius Puschmann, a researcher at the Hans Bredow Institute for Media Research in Hamburg, algorithms, “a confusing way people sometimes speak about artificial intelligence,” are responsible for “choosing,” based on our profile and prior preferences, which news articles appear on our social media walls. Algorithms and networks have brought two trends to the forefront, neither of them free of danger. The first, as Camilo Cristancho, a researcher at the Autonomous University of Barcelona, notes, is the creation of “echo chambers,” in which people with similar ideas tend to stick together. The other is “filter bubbles”: ideological bubbles created because the content each person is shown is customized based on their past preferences, which threatens to over-promote established beliefs, minimizing nuance and plurality.
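To make that personalization loop concrete, here is a minimal sketch of preference-based feed ranking. It is an illustrative toy, not any platform’s actual system: the articles, the topic-count profile, and the scoring rule are all assumptions made for the example.

```python
from collections import Counter

# Toy model of preference-based feed ranking (all data hypothetical).
# Each article is tagged with topics; the user's profile counts the
# topics of items they have clicked on before.

def rank_feed(articles, profile):
    """Order articles by how much they overlap with past preferences."""
    def score(article):
        return sum(profile[topic] for topic in article["topics"])
    return sorted(articles, key=score, reverse=True)

articles = [
    {"title": "Vaccine myths debunked", "topics": ["health", "science"]},
    {"title": "Celebrity diet secrets", "topics": ["health", "gossip"]},
    {"title": "New exoplanet discovered", "topics": ["science", "space"]},
]

# A user whose history is dominated by one kind of content...
profile = Counter({"gossip": 5, "health": 2})

for article in rank_feed(articles, profile):
    print(article["title"])

# Output order: gossip-heavy items come first. Each new click feeds
# back into the profile, so the feed keeps narrowing: a "filter bubble".
```

Even in this caricature, the feedback loop is visible: whatever the user engaged with yesterday is ranked higher today, which in turn shapes what they can engage with tomorrow.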

Should we be worried about filter bubbles? Not too much, says Puschmann. “From a journalistic point of view, this is nothing new. We’ve been studying it for 50 years.” For Walter Quattrociocchi, head of the Laboratory of Computational Social Science at IMT Lucca in Italy, “It is still up for debate whether these bubbles are created by the algorithm or are just human nature.”

According to Puschmann, what seems clear is that the fear comes from a feeling of losing control, the fact that something non-human has decision-making power. In an experiment with Facebook users, more than half were unaware that an algorithm decides what they see on their walls, and their first reaction on finding out was shock and anger. However, several months later, after being shown how the algorithm works, their satisfaction levels were similar to those recorded before “discovering” this fact. And they used Facebook even more than before.

Quattrociocchi mainly researches how misinformation spreads online, which he ties to a word that came to prominence in 2016: post-truth. The concept “denotes circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief,” and is closely linked to what is known as confirmation bias: “the tendency to search for information that confirms pre-existing beliefs.” Even the World Economic Forum has pointed to mass digital misinformation as “one of the main risks for our society.”

Quattrociocchi’s research confirms that echo chambers clearly exist, and that the dynamics of mutual reinforcement within them are similar regardless of the motivation that brings people together. Perhaps most concerning is that the most active and “committed” users in these chambers tend to be those defending conspiracy theories (such as believers in UFOs or in a link between vaccines and autism). Even more troubling is the following paradox: publishing rigorous information aimed at debunking the theories circulating in these echo chambers not only doesn’t work, but is counterproductive, as it only strengthens believers’ initial positions.