Ana Valera: in the face of AI, jobs are like energy: they are not destroyed, they are transformed

In this interview, Ana Valera outlines a series of concrete measures that companies should adopt to use artificial intelligence fairly. Her conclusion is clear: in the face of AI, jobs are like energy: they are not destroyed, they are transformed.

Madrid, May 23rd. Ana Valera is an expert in People Analytics and Human Resources and a member of the Advisory Board of IA+Igual. In this interview she explains how companies should act to avoid bias when using AI in Human Resources.

IA+Igual.- Until recently, the role of People Analytics or HR Analytics manager seemed exclusive to large companies. How is the need to incorporate data analysis experts evolving with the implementation of AI in Human Resources?

Ana Valera.- Certainly, large companies and consulting firms were the first to hire People Analytics profiles and set up specific teams. The analysis projects were tailor-made and required technical knowledge of analysis and visualization tools. This was not an option for SMEs: for an HR area made up of 1 or 2 people, which outsourced some functions (payroll management, training, etc.), it was unfeasible for one of them to be exclusively dedicated to numbers.

We are experiencing a democratization of AI and the arrival of affordable software that offers more and more analytics functionality; today virtually all HR professionals use tools with built-in AI, metrics, and metrics-monitoring features.

According to a study by SD Worx, 66% of Spanish companies are adopting HR analytics tools. So we are seeing not only more companies incorporating technicians who spend 100% of their time on pure HR data analysis, but also the need for every profile in the Talent area to have data analysis skills, to a greater or lesser extent, as a key tool in their day-to-day work.

IA+Igual.- At a conference you presented a concrete case of the use of AI in a selection process. Given CVs that were exactly the same, in which only the name changed, the artificial intelligence ranked the man's profile above the woman's. Who should act against conscious and unconscious biases in the area of Human Capital?

A.V.- In that simulation, the AI (specifically ChatGPT) ended up committing that bias because I trained it with biased data (résumés of successful candidates, all of whom were men) and gave it explicit instructions to include sex as one more variable in the CV ranking. This brings us to an important point: who was to blame for the bias, ChatGPT or me? What usually happens is that the HR professional introduces bias into the selection process and the AI perpetuates it. Several players therefore have a mission to act against these biases:

  • Senior management should promote and publicly commit to a culture based on diversity and inclusion.
  • HR professionals are responsible for analyzing their policies and historical data to see whether these types of biases are being committed, and for taking steps to avoid them.
  • The organization's legal team plays an important role in ensuring regulatory compliance - with and without the use of AI in the selection process.
  • The different software vendors in this area are key players, acting at the point where information is collected and promoting measures that help avoid or mitigate these biases.
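The mechanism Ana Valera describes (a model trained on all-male "successful" hires, with sex fed in as a ranking variable) can be illustrated with a toy sketch. This is not her actual ChatGPT simulation; the scorer, field names, and data below are all hypothetical, chosen only to show how biased history plus a sex feature yields a biased ranking.

```python
# Toy illustration (hypothetical, not the actual simulation from the interview):
# a naive scorer trained on biased historical data learns to prefer one sex.
from collections import Counter

def train(profiles):
    """Learn, per (field, value) pair, how often it appears among past hires."""
    counts = Counter()
    for p in profiles:
        counts.update(p.items())
    total = len(profiles)
    return {item: n / total for item, n in counts.items()}

def score(model, candidate):
    """Score = sum of learned frequencies for the candidate's field values."""
    return sum(model.get(item, 0.0) for item in candidate.items())

# Historical "successful" hires: every single one is a man (the biased data).
past_hires = [
    {"degree": "engineering", "experience": "5y", "sex": "M"},
    {"degree": "engineering", "experience": "7y", "sex": "M"},
    {"degree": "engineering", "experience": "6y", "sex": "M"},
]
model = train(past_hires)

# Two identical CVs, differing only in the sex field.
cv_man   = {"degree": "engineering", "experience": "5y", "sex": "M"}
cv_woman = {"degree": "engineering", "experience": "5y", "sex": "F"}

print(score(model, cv_man) > score(model, cv_woman))  # True: the man ranks higher
```

Note that the model itself is trivial; the bias comes entirely from the training data and from letting sex enter the ranking at all, which is exactly the point made in the interview.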

The mission to mitigate bias

IA+Igual.- How should all the people who have the mission to mitigate biases in the selection process act?

A.V.- We can recommend different measures to mitigate unconscious biases:

  • Use of blind CVs, eliminating personal information and photographs to avoid triggering bias based on name, nationality, age, gender, appearance, etc.
  • Unconscious-bias training for everyone involved in the selection process, not only recruiters (e.g. hiring managers, technical interviewers...).
  • Correct use of AI: although it can perpetuate unconscious biases inherited from historical data, when used correctly it can help to avoid them.
  • Diversity in selection teams: their different perspectives can help neutralize individual prejudices or biases.
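The first measure above, blind CVs, is straightforward to sketch in code. The snippet below is a minimal, hypothetical illustration (the field names are not from any specific applicant-tracking system): it simply drops the fields that can trigger bias based on name, nationality, age, gender, or appearance before a reviewer sees the CV.

```python
# Minimal "blind CV" sketch: remove bias-triggering fields before review.
# The field names below are hypothetical examples, not a real ATS schema.
SENSITIVE_FIELDS = {"name", "photo", "birth_date", "nationality", "sex", "address"}

def blind(cv: dict) -> dict:
    """Return a copy of the CV with personal, bias-triggering fields removed."""
    return {k: v for k, v in cv.items() if k not in SENSITIVE_FIELDS}

cv = {
    "name": "Ana García",
    "photo": "ana.jpg",
    "nationality": "ES",
    "degree": "engineering",
    "experience": "5y",
}
print(blind(cv))  # {'degree': 'engineering', 'experience': '5y'}
```

In practice this anonymization step would sit between CV intake and reviewer screening, so that only job-relevant fields reach the people (or models) doing the ranking.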

IA+Igual.- The EU AI Regulation, finally approved last Tuesday, states that requirements related to data governance can be fulfilled by using third parties offering certified services. Is this merely a recommendation, or is it essential for building trust in the use of data by an AI model that affects workers? Would self-certification suffice?

A.V.- An example makes it clearer: when we contract any product or service, what gives us more confidence, the website where the provider itself tells us how well it works, or an OCU-style review (from Spain's consumer organization) where a series of variables are compared neutrally and a score is given? The same goes for AI self-certification: as a People Analytics manager, I would always prefer to hire a vendor whose certification is endorsed by a third party.

IA+Igual.- In this fourth industrial revolution, humanist profiles must work side by side with statisticians, mathematicians, technicians, etc., to guarantee developments that are not only legal but also ethical. Is IA+Igual's claim that the person must be at the center of the algorithm utopian?

A.V.- I think we are moving from the "how" boom to the "what for" era. AI is here and it is not going away, and until now it has been in the hands of technical profiles focused on "how" to make it work. To focus on the "what for", we must incorporate other perspectives: lawyers, psychologists, philosophers, sociologists... must join this working group to make AI a useful tool that complements the human being. This position defended by IA+Igual is also mine, and that is why I am happy to be part of this project.

Can we expect job destruction?

IA+Igual.- In the face of widespread fears about job destruction and the misuse of AI, how do you see the job outlook in the short and medium term?

A.V.- The Organization for Economic Cooperation and Development presented in its Employment Outlook 2023 report the results of a survey of 2,000 business leaders and 5,300 employees in the financial sector and manufacturing industry in seven member countries. The data are striking: 60% of respondents fear losing their jobs to AI in the next 10 years, while 63% believe artificial intelligence allows them to develop more fully as professionals. In other words, many employees live with the ambivalent experience of feeling that AI is helping them while remaining aware of the danger that it could replace them in the long term.

The report itself proposes the solution: training in the use of AI, at different levels of complexity and adoption, is essential and must be carried out both in educational institutions and by employers. It also needs to be accompanied by legislation that looks after the interests of all people and guarantees inclusion. In short, the phrase "jobs are like energy: they are not destroyed, they are transformed" is becoming more and more meaningful. If we human beings have been able to transform ourselves in previous industrial revolutions, why do we think that this time will be different?
