
Behind facial recognition: knowledge of the inner self.

Knowledge of the inner self, of intimacy, is the other issue raised by facial recognition, beyond the identification of individuals. When the human mind becomes accessible to artificial intelligence through emotion recognition, biometric classification or commercial manipulation, what remains of its autonomy? On this sensitive subject, two authors warn of the ambiguities of a text published last April by the European Commission on the regulation of AI. While praising the quality of the text, "the most advanced and comprehensive attempt to regulate AI in the world", Gianclaudio Malgieri and Marcello Ienca stress that the protections proposed for the extraction and use of data characterizing a person's mental state remain insufficient at this stage: "In our view, this insufficiently strict regulation of AI applications for mental data processing opens room for risk scenarios in which the only safeguard for an individual against having their mental information (e.g., emotions) automatically processed would be a mere compliance duty, such as notifying the individual, but without giving that individual any possibility to opt-out. Several ethically tainted uses of AI would benefit from this loophole. For example, human rights activists recently revealed that Uyghurs, a Turkic ethnic group native to the Xinjiang Uyghur Autonomous Region in Northwest China, were forced to be subject to emotion recognition software experiments. Further, methodologically ambiguous studies claimed to reveal sensitive characteristics of individuals relating to their mental domain, such as their sexual orientation, intelligence, or criminal proclivities, from face recognition AI. These practices are currently not considered high risk per se.
Neither are other 'mind-mining' practices such as social media analyses of emotions aimed at customizing the newsfeed of individual users and covertly influencing their behaviour through microtargeted advertising (except in the very rare cases where it produces physical or psychological harms or exploits vulnerabilities related to age or disability)."



The EU regulates AI but forgets to protect our mind
