If you’re a homeowner, you might rely on a smart-home security system for peace of mind—but what if a third party was intercepting and viewing the activities without your knowledge?
“When you upload a video or your smart-home system streams to your security provider, the image itself is visible, and the server that holds the image’s data can identify you,” says Saeed Ranjbar Alvar, an SFU engineering science PhD student. “We see this when we upload to Google or Facebook and it suggests who’s in a photo using automatic facial recognition software. Smart-home security systems do this too, to protect us from intruders.”
Ranjbar Alvar and fellow PhD student Hyomin Choi are proposing a privacy-shielding solution using artificial intelligence (AI) that could protect innocent subjects in a streamed video—without compromising the video’s usefulness to security providers and law enforcement.
In recent privacy breaches involving Google’s Nest and Amazon’s Ring, unauthorized parties viewed the home surveillance video of thousands of unsuspecting customers, exposing the risks of unprotected smart surveillance systems. A similar breach just days ago, this time at a subcontractor for U.S. Customs and Border Protection, showed that the fallout from this kind of hack isn’t just personal—it could have implications for national and international security.
To create their solution, Ranjbar Alvar and Choi developed a computer-vision algorithm that scrambles the images as the video is uploaded to a storage platform like the ones used in smart-home systems, or even a user’s personal Google Drive.
“What we’ve created is a way to scramble the data so that we humans can’t identify who is in the video without having the right permissions to unlock it, but the software can still distinguish them,” says Choi.
They published the research in the proceedings of two conferences on multimedia and signal processing of the Institute of Electrical and Electronics Engineers (IEEE). Professor Ivan Bajić supervised the project in the Multimedia Lab at SFU’s School of Engineering Science.
Ranjbar Alvar and Choi modified a video compression algorithm to blur and scramble the content to the human eye. Then, they applied machine-learning techniques to train an artificial intelligence model to recognize parts of the video’s data stream (called its bitstream) that correspond to human faces. The resulting video looks largely nonsensical to humans, but the artificial intelligence system can find human faces within it and track their activity across the video.
As a result, a human moderator, aided by the AI model, can still confirm that people are present in the video but cannot see what they look like. In the case of devices like Nest and Ring, a homeowner’s privacy would remain intact while the security provider could use the software to tell whether the occupants are residents or intruders. Hackers or would-be voyeurs would see only scrambled nonsense.
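The researchers’ actual method operates on the compressed bitstream with a trained model, but the core idea of a keyed, reversible scramble can be illustrated with a much simpler sketch. The code below is a minimal stand-in, not the authors’ algorithm: it shuffles fixed-size pixel blocks of a frame using a permutation derived from a secret key, so the frame looks like nonsense to a viewer without the key but can be exactly restored by anyone who holds it. The function names and the block-shuffle scheme are illustrative assumptions.

```python
import numpy as np

def scramble(frame, key, block=8):
    """Shuffle fixed-size blocks of a 2D frame with a key-derived permutation.
    Illustrative stand-in for the compression-domain scrambling in the article."""
    h, w = frame.shape[:2]
    bh, bw = h // block, w // block
    # Cut the frame into a flat list of block-sized tiles, row by row.
    tiles = [frame[r*block:(r+1)*block, c*block:(c+1)*block].copy()
             for r in range(bh) for c in range(bw)]
    # The key seeds the RNG, so the same key always yields the same permutation.
    perm = np.random.default_rng(key).permutation(len(tiles))
    out = frame.copy()
    for i, p in enumerate(perm):
        r, c = divmod(i, bw)
        out[r*block:(r+1)*block, c*block:(c+1)*block] = tiles[p]
    return out

def unscramble(frame, key, block=8):
    """Invert scramble() by regenerating the permutation from the same key."""
    h, w = frame.shape[:2]
    bh, bw = h // block, w // block
    perm = np.random.default_rng(key).permutation(bh * bw)
    tiles = [frame[r*block:(r+1)*block, c*block:(c+1)*block].copy()
             for r in range(bh) for c in range(bw)]
    out = frame.copy()
    for i, p in enumerate(perm):
        # Scrambled tile i came from original position p, so send it back there.
        r, c = divmod(p, bw)
        out[r*block:(r+1)*block, c*block:(c+1)*block] = tiles[i]
    return out
```

In this toy version the key plays the role of the “right permissions to unlock it” that Choi describes: without it, the permutation cannot be regenerated and the frame stays garbled. The real system goes further by training a model to find faces directly in the scrambled representation, which a simple block shuffle like this does not provide.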
The students’ model is currently able to identify subjects’ faces in the same way as a typical facial recognition scanner. In the future, they hope to advance its abilities to identify motions and gestures—such as a person falling or a group fist-fighting—for a variety of public safety and personal security applications. In the meantime, the current model could also be used to protect the privacy of users whose personal photos have been stolen from hacked devices.
“We could see this technology also being used in taxi cabs, assisted-living facilities and many other places where the video needs to be useful enough to analyze but the subjects should have privacy,” says Bajić, who researches signal processing and machine learning for multimedia.
“Until recently, most people were pretty lax about their privacy,” he says. “But that’s changing and people are demanding that companies take it more seriously in the technologies we access every day.”
Ranjbar Alvar agrees: “Privacy is a big issue in the digital world. I’m hopeful this research can be part of a solution so we can protect people from these kinds of intrusions that they might not even realize are happening.”
Source: Simon Fraser University