2. Limitations and ethics of artificial intelligence

Despite the achievements and promises of artificial intelligence, it still faces serious limitations. In essence, programs can perform very well in specific applications, but they are far from doing so in the unpredictable real world. For Francisco Martín, president of BigML, the underlying techniques have hardly changed in recent years; they have merely scaled up the amount of data they can work with. According to Martín, “a lot of people are reinventing the wheel at the same time,” without making any real advances. In fact, the processes “still require a lot of human experience and are done in a very manual way.”

One of these limitations is the opacity of the process by which programs reach their conclusions. Many of them are based on what are known as neural networks: mathematical constructions loosely modeled on the human brain, in which information moves between layers and is consolidated in a diffuse manner. They are very powerful models, but opaque by nature, and they have led people to speak of “the black box of artificial intelligence.” Some experts, like Pierre Baldi, attach no importance to this opacity. As he puts it, “You use your brain all the time. You trust your brain all the time. And you have no idea how your brain works.” Others, like Marcello Pelillo, professor of Computer Science at the University of Venice, recognize that there are situations in which “it is important to be able to explain decisions”: for example, when a system informs the ruling of a judge or a social institution, and, more generally, to ensure respect for personal autonomy and dignity.
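To make the image of “layers” concrete, here is a minimal sketch of a tiny neural network in Python (using NumPy; the dimensions, weights, and input are illustrative assumptions, not taken from any system mentioned in this article):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    """Squash each value into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# Two layers of weights. Here they are random for illustration; in a real
# network they are tuned automatically during training, not set by a person.
W1 = rng.normal(size=(4, 3))  # input layer (4 features) -> hidden layer (3 units)
W2 = rng.normal(size=(3, 1))  # hidden layer -> one output score

def forward(x):
    """Pass an input through the layers of the network."""
    hidden = sigmoid(x @ W1)     # information moves to the hidden layer...
    return sigmoid(hidden @ W2)  # ...and is consolidated into a single score

x = np.array([0.2, -1.0, 0.5, 0.8])  # an arbitrary example input
print(forward(x))  # a value in (0, 1); no single weight "explains" it
```

Even in this toy network the answer emerges from many entangled weighted sums; scaled up to millions of weights, it becomes the “black box” that Baldi and Pelillo are debating.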

The use of artificial intelligence also raises numerous ethical challenges and dilemmas. These are some posed by Francesca Rossi, a researcher at IBM and professor of Computer Science at the University of Padua:

  • Will artificial intelligence replace human work?
  • How will our interactions with other people, with society and with education change?
  • What will happen if autonomous weapons can be developed in the future?
  • How will artificial intelligence develop its own ethics?

For Rossi, systems must be developed that can weigh their results ethically. This would obviously have to be the case for autonomous systems, but it should apply to others too, such as tools that advise doctors based on data analysis. “If these suggestions don’t follow a code of ethics, doctors won’t be able to trust the system.” The key question is how to develop that code when, as we have seen, a program’s decision-making processes are in many cases opaque by their very nature.

It isn’t simple, partly because of the flexibility it requires: what seems like common sense to us isn’t necessarily so for a machine. “A food processor shouldn’t cook the cat if there isn’t anything in the fridge, even though it may think that would be an acceptable meal for us,” explains Rossi.

One solution would be to program machines according to professional codes of conduct. “But these have many gaps that we fill with common sense. This is difficult to instill in a machine.”
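A toy sketch in Python of what such a rule-based approach looks like, and where it breaks down (the rule list, the function name, and the reuse of Rossi’s “cook the cat” scenario are illustrative assumptions, not an implementation she describes):

```python
# A toy rule-based filter. FORBIDDEN_INGREDIENTS and suggest_meal are
# hypothetical names invented for this sketch.
FORBIDDEN_INGREDIENTS = {"cat", "dog"}  # an explicit, hand-written rule

def suggest_meal(available_items):
    """Suggest the first item that no explicit rule vetoes."""
    for item in available_items:
        if item in FORBIDDEN_INGREDIENTS:
            continue  # the written rule catches this case
        return item
    return None

print(suggest_meal(["cat", "eggs"]))  # -> 'eggs': the rule works
print(suggest_meal(["houseplant"]))   # -> 'houseplant': a gap nobody
                                      #    anticipated; common sense would
                                      #    catch it, but the rule list does not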

Can it be done, though? According to Rossi, yes: “There are several approaches that we’re studying now.” What’s more, the process has a hidden advantage: “Doing so could help us be more aware and behave more ethically ourselves.”