HR prevents AI-based processes from interfering with ethics: an interview with Javier Moscoso

Javier Moscoso, a philosopher at the CSIC and member of the IA+Igual Advisory Board, is no stranger to the fears raised by the implementation of AI in HR, but he is optimistic: there is genuine interest among HR managers in ensuring that the efficiency of new AI-based processes does not interfere with the company's ethical values.

Madrid, April 17. According to a survey conducted by Bitdefender, 80% of Spaniards are concerned about the security and privacy of their data in the face of the growing implementation of artificial intelligence (AI). Javier Moscoso, CSIC philosopher and member of the IA+Igual Advisory Board, is no stranger to these fears, but in this interview he offers an optimistic message.

IA+Igual.- The implementation of artificial intelligence arouses opposing passions: hope and fear. Can philosophy contribute to a more balanced and realistic view?

Javier Moscoso.- Yes, not only philosophy, but the humanities in general. The public perception of AI is not very different from what we have seen throughout history with other technological developments, from railroads to computers.

Before AI, we saw fears and hopes in equal measure around nuclear energy or genetics, for example. In every case, ways of preventing abuse and of placing ethical limits on their exploitation have emerged. Control of AI is only one aspect to consider.

Perhaps more important than knowing who is in charge, and whom we do not want to be in charge, is clarifying the worthiness of the ends for which this new tool can serve as a means.

IA+Igual.- A certain fear of the unknown is inevitable, as you say, especially when the change affects not only our work but also our daily lives. As workers or citizens, what can we do, individually, to make AI an ally rather than an enemy?

J.M.- Like any other technological development, AI places us before a mirror that asks questions about ourselves. Much more than the fear of the unknown, of the machine that thinks like a human being, what disturbs us is the suspicion that we ourselves have been thinking like machines, recycling the same ideas in different forms, for too long.

The concern, perhaps legitimate, that AI is coming to supplant humans in their creative tasks is simply unfounded at the moment. Only those who traffic in borrowed ideas need worry about what ChatGPT writes. Perhaps the problem is that many "creators" of content, whether artists or academics, are not really creators at all.

IA+Igual.- The implementation of AI in HR processes can have discriminatory effects if the historical data used, and the biases it contains, are not "curated". Do you think companies are sufficiently aware and prepared to avoid such biases?

J.M.- The use of algorithms in HR departments can, of course, carry all the biases of those who designed them. At the same time, the use of new technologies, whatever they may be, must in no way lead to a progressive dehumanization of companies in particular, or of society as a whole.

Again, it is helpful to look at history. Reading, for example, would seem to be a solitary and therefore isolating activity. Nothing could be further from the truth. The same goes for social networks or new platforms. They too are cultural products that are not ethically marked in themselves: they do, however, carry the interests of those who created them, and consequently it is necessary to ensure that those interests are ethically acceptable.

As far as I know, there is a growing awareness in large companies of the dangers involved in the use of new technologies. My impression is that, as in many other sectors, from research to politics, there is a genuine and sincere interest in ensuring that the efficiency of new processes (for example, selection processes) does not interfere with their ethical values.

IA+Igual.- Like any other breakthrough, AI is expected to improve productivity and company profits. Do you think that, in general, Spanish managers have the necessary ethical training to face this paradigm shift?

J.M.- I do not have enough data to know to what extent Spanish managers have sufficient ethical training. I do see a genuine interest in reconciling moral reasons with economic interests. When I say "genuine", I mean that the managers of the companies I know, as well as many other HR managers, understand, to begin with, that applying ethical values to their business management is profitable from an economic point of view.

But even if this were not the case, I believe there is sufficient social awareness for these same entrepreneurs to consider that their business objectives cannot be achieved at any price. The growing interest in studying the possible biases that the new algorithms introduce into HR departments is, in my opinion, proof of this.

IA+Igual.- The IA+Igual Advisory Board emphasizes that algorithms make decision-making easier for the individual, but that it will always be the individual who bears ultimate responsibility for their use. Will the responsibility of professionals who use AI grow as they come to rely on the great potential it offers?

J.M.- Indeed, the final decision, and the responsibility, always lies with the user of the system. But the designer bears responsibility as well. The decision-maker's responsibility remains the same, although it is now a shared responsibility that must also take on the possible shortcomings of the tool being used.
