August 07, 2018


Pixelisation has long been a standard technology for censoring, and it has proved its worth in blurring faces wherever anonymity is needed. However, because it completely distorts the face, it has always been somewhat ineffective at relaying the emotions of the story being told. We are, as human beings, quite visual creatures: just as smell shapes taste, visual cues shape how we process information, and audio alone is rarely sufficient. Yet, for lack of a better technology, pixelisation became the standard means of hiding the faces of anonymous interviewees.
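For context, classic pixelisation is simply block averaging: the face region is divided into tiles and each tile is replaced by its mean value, which is exactly what destroys the expressive detail the researchers below try to preserve. A minimal sketch in Python (greyscale image, NumPy only; the function name and block size are illustrative, not from the research):

```python
import numpy as np

def pixelate(image: np.ndarray, block: int) -> np.ndarray:
    """Pixelate a 2-D greyscale image by averaging over block x block tiles."""
    h, w = image.shape
    # Crop to a multiple of the block size, for simplicity.
    h2, w2 = h - h % block, w - w % block
    img = image[:h2, :w2].astype(float)
    # Split into tiles, average each tile...
    tiles = img.reshape(h2 // block, block, w2 // block, block).mean(axis=(1, 3))
    # ...then stretch each average back out to full resolution.
    return np.repeat(np.repeat(tiles, block, axis=0), block, axis=1)
```

Every pixel inside a tile ends up identical, so eyes and mouth are flattened into uniform squares along with everything else.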

Times are changing, however, and AI is steadily improving many slow human mechanical and technological processes. From enabling us to see through walls to probing the universe's biggest mysteries, artificial intelligence has reshaped what we consider the limits of exploration relative to human capabilities. Now a research team from Simon Fraser University's (SFU) School of Interactive Arts and Technology (SIAT) has made another case for AI by improving 'face-blurring' technology for better expression of information.


The researchers received a grant from the Google/Knight Foundation and demonstrated their work at the Journalism 360 event on July 24 at the New York Times headquarters. In a subsequent interview, professor Steve DiPaola of the university told the host of CBC's The Early Edition, "It would look, pretty much, like a painting." The resulting image looks more like an oil painting than anything else; it clearly shows the expressive parts of the interviewee's face, such as the eyes and mouth, while keeping their identity safe. The difference between the existing and the proposed technologies is easily visible in the image above, and you can see the technology in motion in this demonstration video.

It is remarkable how the algorithm hides all identifiable marks while retaining the person's emotions, so that the actual message comes through. To train the system for such results, the researchers drew on the knowledge of portrait painters. As Professor DiPaola himself put it:

“We’re using artificial intelligence to take the hundreds of years of knowledge that portrait painters use; that they get your outer and, in some ways, your inner resemblance.” He added, “We have taught the system to lower the outer resemblance and keep as high as possible the subject’s inner resemblance — in other words, what they are conveying and how they are feeling”.

As the Professor puts it, "At every level, there is control," meaning that while the technology is fully automated, an artist can add manual input to further strengthen the protection. For example, the artist can alter the more identifiable parts of a face: if a person has noticeably large eyes, the artist can shrink them to better protect the subject's identity.

Once again, AI never fails to amaze us by expanding the horizon of what we can actually achieve with it.
