Every day, a British team of analysts faces a seemingly endless stream of horrors. The team of 21 people working at the Internet Watch Foundation's office in Cambridgeshire spends hours trawling through images and videos containing child sexual abuse, and every photo or clip they find must be assessed and labeled. Last year alone, the team identified 153,383 web pages with links to child sexual abuse imagery. The work builds a vast database that can then be shared internationally to stem the flow of abuse. The problem? Different countries classify images and videos in different ways.
Until now, analysts at the UK-based child protection charity have checked whether the material they find falls into one of three categories: A, B, or C. These groupings are based on UK law and sentencing guidelines for child sexual abuse and broadly set out the types of abuse involved. Images in category A, the most serious, include the worst crimes against children. The classifications are then used to work out how long a convicted offender should be sentenced. But other countries use different classifications.
The IWF now believes a breakthrough in its data can remove some of these differences. The organization has rebuilt its hashing software, called Intelligrade, to automatically match images and videos to the rules and laws of Australia, Canada, New Zealand, the United States, and the United Kingdom (together known as the Five Eyes countries). The change should reduce duplicated analysis work and make it easier for technology companies to prioritize the most serious abuse images and videos.
“We believe that we are now able to share data so that it can be used in meaningful ways by more people, rather than all of us working on our own little islands,” says Chris Hughes, director of the IWF's reporting hotline. “Currently, when we share data, it is very difficult to make any meaningful comparisons, because none of it meshes properly.”
Countries weight images differently based on what happens in them and the age of the children involved. Some countries classify images according to whether a child is prepubescent or pubescent, as well as the crime taking place. The UK's most serious category, A, covers penetrative sexual activity, bestiality, and sadism. It does not necessarily include masturbation, Hughes says, whereas in the United States that falls into a higher category. “At present, under US category A requirements, IWF category A images would miss that level of content,” Hughes says.
Every photo and video the IWF looks at is assigned a hash, essentially a digital fingerprint in the form of a code, which is shared with technology companies and law enforcement agencies around the world. These hashes are used to detect and block known abuse content from being uploaded to the web again. The hashing system has had a significant impact on the spread of child sexual abuse material online, and the IWF's latest tool adds significantly more information to each hash.
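The hash-matching workflow described above can be sketched in a few lines. The IWF's production systems use specialized image-hashing techniques rather than the plain cryptographic hash shown here, and the names and blocklist below are purely illustrative, but the principle is the same: fingerprint an upload and check it against a shared list of known fingerprints.

```python
import hashlib

# Illustrative blocklist of SHA-256 hex digests of known content.
# In practice such lists are distributed by organizations like the IWF.
KNOWN_HASHES = {
    # This happens to be the SHA-256 digest of the bytes b"test",
    # standing in here for a known image's fingerprint.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Return the hex SHA-256 digest of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known(data: bytes) -> bool:
    """Check an upload against the blocklist before accepting it."""
    return fingerprint(data) in KNOWN_HASHES

print(is_known(b"test"))   # True: its digest is on the list
print(is_known(b"other"))  # False: unknown content passes through
```

A cryptographic hash only matches byte-identical files; real deployments typically use perceptual hashes so that resized or re-encoded copies of the same image still match.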
The IWF's secret weapon is metadata. This is data about data: it can describe what an image contains, who is in it, and how and when it was made. Metadata is a powerful tool for investigators because it lets them spot patterns in people's behavior and analyze trends. Among metadata's biggest proponents are spies, who say it can be more revealing than the actual content of people's messages.
Hughes says the IWF has increased the amount of metadata it creates for each image and video it adds to its hash list. Every new image or video it assesses is evaluated in more detail than ever before. As well as determining whether sexual abuse content falls into one of the UK's three groups, its analysts now add up to 20 different pieces of information to their reports. These fields match what is needed to determine an image's classification in the other Five Eyes countries; the charity's policy staff compared each country's laws and worked out what metadata was required. “We decided to provide a high level of granularity around describing age, describing what's happening in the image, and confirming gender,” Hughes says.
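The idea of recording rich metadata once and deriving each jurisdiction's category from it might look like the following sketch. The field names, record structure, and classification rule here are invented for illustration; the IWF's actual Intelligrade schema and the real legal rules are far more detailed. The one grounded fact, per the article, is that at least one act the UK grades below category A is treated as the top category in the US.

```python
# Hypothetical metadata record for a single hashed item.
record = {
    "uk_category": "B",        # grade under UK guidelines
    "age_band": "prepubescent",
    "acts": ["masturbation"],  # acts depicted, as tagged by analysts
}

def us_category(rec: dict) -> str:
    """Invented mapping rule: mirror the UK grade, except that the
    article's US example is promoted to the top category."""
    if rec["uk_category"] == "A" or "masturbation" in rec["acts"]:
        return "A"
    return rec["uk_category"]

print(us_category(record))  # "A": graded higher in the US than the UK
```

Storing the underlying facts (age band, acts depicted) rather than just a final grade is what lets one assessment be re-expressed under each country's rules without anyone re-viewing the material.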