This article was produced by NetEase Smart Studio (public account: smartman163), which focuses on AI and the next big era.
[NetEase Smart News, June 3] Facial recognition is controversial, to say the least. Last week, Amazon made headlines for providing facial-scanning technology to law enforcement agencies, and research has shown that some face recognition algorithms carry inherent biases against certain races.
Concerns about this kind of AI-driven surveillance prompted researchers in Toronto to develop a countermeasure. Parham Aarabi, a professor at the University of Toronto, and Avishek Bose, a graduate student, have invented an algorithm that dynamically disrupts face recognition systems by applying subtle, light transformations to images.
"With the advancement of facial recognition technology, personal privacy has become a real problem," Aarabi said in a statement. "This is where anti-facial recognition systems come in."
Products and software designed to undermine facial recognition are nothing new. In a November 2016 study, researchers at Carnegie Mellon University designed spectacle frames that could mislead facial recognition systems into misidentifying the wearer. In November 2017, researchers at MIT showed that a 3D-printed turtle could be classified as a rifle, while a team at Kyushu University in Japan fooled image classifiers by changing a single pixel in a photograph.
Figure: The researchers' anti-facial recognition system at work (Source: University of Toronto)
But according to Bose and Aarabi, theirs is one of the first such solutions to use artificial intelligence. Their algorithm was trained on a dataset of 600 faces and produces a real-time filter that can be applied to any image. Because it targets specific, individual pixels in the image, the change is almost invisible to the naked eye.
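As a rough illustration (not the authors' code), such a filter can be thought of as an additive perturbation whose per-pixel magnitude is clamped so the edit stays imperceptible. The sketch below assumes a PyTorch-style image tensor in [0, 1] and a hypothetical perturbation produced elsewhere; the epsilon value is an illustrative choice.

```python
# Hypothetical sketch of applying a near-invisible "privacy filter" to an image.
# Not the authors' implementation; names and values are illustrative.
import torch

def apply_privacy_filter(image: torch.Tensor, perturbation: torch.Tensor,
                         epsilon: float = 2.0 / 255.0) -> torch.Tensor:
    """Add a perturbation to an image, clamped to +/- epsilon per pixel.

    image:        float tensor in [0, 1], shape (C, H, W)
    perturbation: raw output of a perturbation network, same shape
    epsilon:      maximum per-pixel change; a small value keeps the edit
                  nearly invisible to the naked eye
    """
    delta = torch.clamp(perturbation, -epsilon, epsilon)
    return torch.clamp(image + delta, 0.0, 1.0)
```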
The two researchers used adversarial training, a technique that pits two neural networks against each other: one network learns a task from the data, while the other tries to disrupt it. Aarabi and Bose's system uses the first neural network to identify faces and the second to disturb the face recognition performed by the first.
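A minimal sketch of that two-network setup is shown below, assuming a pretrained face detector that returns a per-image detection confidence and a small convolutional generator that outputs a perturbation. The network structure, loss terms, and weights here are illustrative assumptions, not those of the paper.

```python
# Illustrative adversarial training step (assumed structure, not the paper's code):
# a generator learns perturbations that lower a face detector's confidence,
# while a similarity penalty keeps the perturbed image close to the original.
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Tiny convolutional network that maps an image to a small perturbation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x):
        return 0.01 * self.net(x)  # scale down so the change stays subtle

def train_step(generator, face_detector, images, optimizer):
    """One adversarial step: push the detector's face confidence toward zero."""
    optimizer.zero_grad()
    perturbed = torch.clamp(images + generator(images), 0.0, 1.0)
    confidence = face_detector(perturbed)            # assumed: returns P(face) per image
    adv_loss = confidence.mean()                     # minimize detection confidence
    similarity_loss = (perturbed - images).abs().mean()  # keep images nearly unchanged
    loss = adv_loss + 10.0 * similarity_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this formulation only the generator is updated; the face detector stays fixed and simply supplies the gradient signal that tells the generator which pixel changes most effectively suppress detection.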
Their research will be presented at the 2018 IEEE International Workshop on Multimedia Signal Processing. Bose and Aarabi claim that their algorithm reduced the proportion of faces detected by face recognition systems to 0.5 percent. They hope to make the neural network available as an app or website.
"Ten years ago, these algorithms had to be human-defined, but now neural networks can learn on their own - you don't need to provide anything other than training data," Aarabi said. "In the end, they can do some really amazing things. This is a very interesting area with great potential."
(Source: VentureBeat; compiled by NetEase Intelligence; contributor: Li Qing)
Follow the NetEase Smart public account (smartman163) for coverage of major events, new ideas, and new applications in the AI field.