Everyone agrees that child exploitation material should be off the Web. The trouble is actually getting rid of it. The New York Times writes that technology companies last year reported 45 million online photos and videos of minors, including infants and toddlers, being sexually assaulted and in some cases even tortured.
The startling surge results in part from technologies that make such images easy to disseminate and hard to detect. This is society’s chronic Internet-induced headache: We’ve opened up the world, and that comes with horrors as well as wonders. But the spike also reflects an increase in successful policing by the platforms, which is better news. Still, there’s a long way to go.
The first challenge is technical. The most common technology for catching child exploitation imagery was designed to identify illicit photographs, not videos. That roadblock is dwarfed by another: Today’s tried-and-true tools match uploads against known violating material; they do not catch new content as it comes in, yet catching new content is essential to stopping ongoing abuse. Firms such as Facebook are developing algorithms to do this trickier job, as well as to anticipate abuse before it happens by detecting exploitative relationships as they’re forming.
That trickier job also involves trickier trade-offs. The Times found that even companies that do have access to top-tier technology for scanning content for abuse often choose not to use it. The cynical interpretation of this negligence is that platforms fear the negative press and the added expense that could come from acknowledging there’s a problem. But it’s also the case that any system that monitors users’ communications at all poses a privacy risk, and the risk is heightened when the system relies on still-untested algorithms attempting to flag anything that resembles exploitation material. There’s more to consider: Scanning public content such as search engine results or posts on Twitter, for example, may be less of an infringement than combing through private messages or uploads to the cloud that have not been shared with others.
The complexity of the problem, not to mention its magnitude, makes a compelling case for platforms to coordinate more closely on how they police their own sites and how they share what they find with one another and with the underfunded, overwhelmed federal clearinghouse, the National Center for Missing and Exploited Children. A legal wrinkle strengthens the case for private companies to take the initiative: More leadership from the government might also mean more Fourth Amendment challenges in the court cases meant to take down perpetrators.
The responsibility to eradicate this scourge is one of many challenges being at least partially outsourced from Washington to Silicon Valley. Others, from fighting terrorism to monitoring misinformation, invite far more controversy. The photographed and filmed abuse of children is unquestionably reprehensible, and it’s also unquestionably illegal. Platforms’ ability to take on this task will say much about their ability to take on all the others.