Did you ever see the Black Mirror episode called “Arkangel”? Basically, it tells the story of an overly cautious mother who has a chip implanted in her daughter’s brain so she can track her every movement. But she also upgrades it with a couple of features, like the ability to see everything her daughter sees and to block out images of anything deemed “shocking.” Needless to say, things don’t turn out well for anyone. Regardless, there is technology in the works today that could actually be used to automatically censor images in real time.
In this clip from TEDx Talks, computer interaction scientist Lonni Besançon introduces us to a technology that could do just that. The system works a bit differently from the version seen in Black Mirror, with the goal of preserving more information about the image being obscured. Rather than simply pixelating the “offensive” imagery, the processing applies filters that make the image less shocking. The use case explained here is one in which a surgical image could be made less repulsive while still preserving enough detail to understand what is going on.
The core of this particular technology is more about reducing the shocking nature of specific images or video footage than about deciding what counts as offensive or shocking. That said, Besançon’s team has built a prototype Chrome extension that can automatically identify violence, nudity, or medical imagery and apply visual filters.
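To make the filtering idea concrete, here is a minimal sketch of one detail-preserving approach: instead of pixelating a region, blend each pixel toward its grayscale luminance so graphic color (like blood-red) is muted while edges and structure survive. This is purely illustrative, assuming images as nested lists of RGB tuples; it is not the actual filter used in Besançon’s prototype.

```python
# Illustrative sketch of "detail-preserving" censoring: rather than
# pixelating a region, desaturate it toward grayscale so saturated
# color (e.g. blood-red in surgical footage) is muted while shapes
# and edges remain visible. Pixels are (R, G, B) tuples.

def desaturate_region(pixels, strength=0.8):
    """Blend each pixel toward its luminance; strength in [0, 1]."""
    out = []
    for row in pixels:
        new_row = []
        for r, g, b in row:
            # Standard luma weights approximate perceived brightness.
            luma = 0.299 * r + 0.587 * g + 0.114 * b
            new_row.append(tuple(
                round(c + (luma - c) * strength) for c in (r, g, b)
            ))
        out.append(new_row)
    return out

# A tiny 1x2 "image": a saturated red pixel next to a neutral gray one.
image = [[(200, 20, 20), (120, 120, 120)]]
muted = desaturate_region(image, strength=1.0)
```

At full strength the red pixel collapses to its gray luminance value while the already-neutral pixel is unchanged, which is the key property: relative brightness (and thus visible structure) is preserved even as the shocking color is removed.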
While there are legitimate uses for this kind of AI-powered censorship tech, like sparing social media moderators or police detectives from having to view disturbing imagery, it could also impose unwanted censorship if misused or forced into consumer technology.