
AI in HR

Human Resources should only use AI if the objective is clear

Elena Martín, researcher in Machine Learning Ethics at DATAI, and Carlos Cabañas, Talent & HR Solutions at Aggity, participated in the IA+Igual webinar 'Practical examples of how and why to apply AI in HR'. The keys are to establish clearly what the company needs AI for, to clean the data thoroughly, and to have an expert monitor the results.

Madrid, March 20. During the session held yesterday, Carlos Cabañas, Talent & HR Solutions at Aggity, explained the success story of a multinational in the pharma sector that has developed an algorithm to prevent talent drain and unwanted turnover in Spain.
Before explaining this case, whose algorithm is being audited by IA+Igual, Cabañas warned professionals against the temptation to use generative Artificial Intelligence (GenAI) just because it is fashionable: "HR should only use it if it has identified a problem and has a clear objective." He also set out another clear premise: "The value of data as a starting point is critical, but it must be taken into account throughout the process, including the results, to avoid possible anomalies. Human intervention decreases as technology gains weight, but we must always take it into account."

DATAI's Machine Learning Ethics researcher, Elena Martín, reinforced this core idea, highlighting that "predictive performance goes beyond GenAI and the data used; it is the responsibility of the people behind the technology. Transparency with employees is essential so that they understand how the data is interpreted and how it affects them."

How to avoid talent drain

Following the COVID-19 pandemic and the pharma sector's subsequent growth, talent retention became a priority for the industry, given the difficulty of replacing employees who leave the company: these are profiles with a steep learning curve. For this reason, Aggity developed an algorithm for the aforementioned multinational based on 103 variables, 33 of which were correlated with the risk of leaving (age range, number of children, length of service in the company); others, such as gender or performance, were discarded.
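The article does not detail how Aggity screened its 103 variables, but a common first pass is to keep only those whose correlation with attrition clears a threshold. A minimal, hypothetical sketch with invented variable names and synthetic data:

```python
# Hypothetical correlation screening: keep candidate variables whose
# association with the "left the company" label passes a threshold.
# All names, thresholds and data here are invented for illustration.
import random

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def screen_features(rows, label, threshold=0.2):
    """Return variables whose |correlation| with the label meets the threshold."""
    labels = [row[label] for row in rows]
    kept = {}
    for col in rows[0]:
        if col == label:
            continue
        r = pearson([row[col] for row in rows], labels)
        if abs(r) >= threshold:
            kept[col] = round(r, 2)
    return kept

random.seed(0)
# Synthetic employees: longer tenure loosely reduces the chance of leaving,
# while "noise" is unrelated and should be screened out.
rows = []
for _ in range(500):
    tenure = random.randint(0, 20)
    left = 1 if random.random() < (0.6 - 0.025 * tenure) else 0
    rows.append({"tenure_years": tenure, "noise": random.random(), "left": left})

selected = screen_features(rows, "left")
print(selected)  # tenure_years should survive; noise should not
```

In a real audit the threshold, the correlation measure and the handling of protected attributes (such as gender, discarded in the case above) would all need explicit justification.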

The project followed these steps:

  1. Data identification, analysis and integration. The first step was to define the project's objective, starting from the preprocessing of the data provided by HR, followed by a diagnosis of the possible reasons for unwanted turnover. The design of a loyalty scorecard, a descriptive piece of work, made it possible to reduce talent turnover from 30% to 10%.
  2. Predictive model of employee disengagement. The focus then moved to predictive analysis, with the raw information coded and homogenised to anticipate what would happen over the next 12 months.
  3. Results and roadmap. From the last, prescriptive step, Aggity proposed engagement initiatives, levers to improve the predictive model, a follow-up scorecard and interdepartmental workshops to identify analytics use cases.
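The predictive step above (estimating who might leave within 12 months from coded, homogenised features) could be sketched, under invented features and data, with something as simple as a logistic regression:

```python
# A minimal, hypothetical sketch of the predictive step: estimate the
# probability that an employee leaves within 12 months. Pure-Python
# logistic regression; features and data are invented for illustration.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Fit weights and bias by batch gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        grad_w = [0.0] * len(w)
        grad_b = 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            for j, xj in enumerate(xi):
                grad_w[j] += err * xj
            grad_b += err
        w = [wj - lr * gj / n for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / n
    return w, b

def leave_probability(x, w, b):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

random.seed(1)
# Synthetic, normalised features: [tenure, salary_vs_market].
# Short tenure and below-market pay raise the (invented) risk of leaving.
X, y = [], []
for _ in range(300):
    tenure = random.random()   # 0 = new hire, 1 = veteran
    pay = random.random()      # 0 = far below market, 1 = above market
    p_leave = 0.9 - 0.4 * tenure - 0.4 * pay
    X.append([tenure, pay])
    y.append(1 if random.random() < p_leave else 0)

w, b = train_logreg(X, y)
new_hire_underpaid = leave_probability([0.1, 0.1], w, b)
veteran_well_paid = leave_probability([0.9, 0.9], w, b)
print(round(new_hire_underpaid, 2), round(veteran_well_paid, 2))
```

The prescriptive step would then act on these scores, for example by prioritising engagement initiatives for the highest-risk profiles, as the roadmap in the article suggests.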

For Carlos Cabañas, an algorithm is "a cocktail shaker of data that must be refined to extract hypotheses without running the risk of being fooled by the results". For this reason, the pharma company is currently working with the areas involved - HR, technology, etc. - to develop a culture that uses AI ethically and intelligently. For Cabañas, "the transversal impact of AI tools will leave out those who do not know how to use them."

Martín started with a definition of AI: "A multidisciplinary field in which different areas such as mathematics, statistics and computer science have historically converged and in which, more recently, ethics and the social sciences have gained ground". From there, according to the DATAI researcher, machine learning must meet four keys to be responsible: Fairness - historical biases must be mitigated at three main points (preprocessing of the data, the model that processes it, and the result it offers); Robustness - it must be assessed, for example, whether a tool designed for Spain, such as the one presented by Cabañas, can be extrapolated to other countries; Transparency or explainability - why the algorithm arrives at a given prediction; and Reliability - the focus is not on a single number but on its level of confidence.
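The fairness key described above can be checked post hoc, for instance by comparing how often a model flags members of different groups (demographic parity). A small illustration with invented group names and numbers:

```python
# Hedged illustration of a post-hoc fairness check: compare the model's
# positive-prediction rate across groups (demographic parity gap).
# The groups and the flags below are invented for the example.

def selection_rate(predictions):
    """Fraction of cases flagged positive (here: 'high flight risk')."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [selection_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Binary "high flight risk" flags from a hypothetical model, split by an
# attribute the audit wants to monitor.
preds = {
    "group_a": [1, 0, 1, 1, 0, 1, 0, 1],  # 5/8 flagged
    "group_b": [0, 0, 1, 0, 0, 1, 0, 0],  # 2/8 flagged
}
gap = demographic_parity_gap(preds)
print(gap)  # 0.625 - 0.25 = 0.375
```

A large gap does not prove discrimination by itself, but it signals where the preprocessing, the model or the results stage named by Martín should be examined more closely.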

"It is very important to start from an ethical approach because of the impact that the machine learning model can have on a population," says the ethics expert.

She used the example of the OCEAN model, a black-box AI model that scores five dimensions of personality from 0 to 100 after watching a one-minute video. The results change depending on whether the subject wears glasses or places a bookshelf in the background of the computer desktop, among other variables that have nothing to do with his or her personality.
