
Pablo Castillon: “In the industry, prejudiced AI models are a possibility that must be avoided.”

Written by Staff | 2021

Reforma’s Aylin Rios talks with our Founder and CEO about AI facial recognition models, ethics, and the risk of bias.

Misusing facial recognition, a technology now in increasingly widespread use, opens the door to discrimination and human rights violations, and may even compromise the presumption of innocence, experts warned.

Some facial recognition deep learning models may have been trained on biased databases, making the resulting algorithms discriminatory.

“Prejudiced artificial intelligence models are a danger that should be identified and avoided,” Castillon concluded.

Source: Reforma (the original digital version of this article can be read here).