New AI technique to block online child grooming launched
A new technique which uses artificial intelligence (AI) to identify and block child grooming conversations online has been launched by the Home Office and Microsoft in Seattle.
The technique, which began development at a hackathon co-hosted by Microsoft and the Home Office in November 2018, will automatically flag conversations which could be taking place between groomers and children, and pass on details of the flagged conversation to the relevant law enforcement agency.
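The details of the technique itself have not been published, but the flag-and-escalate pipeline described above can be illustrated with a deliberately simplified sketch. Everything here is an assumption for illustration: the pattern list, weights, threshold, and function names are invented, and a production system would use a trained classifier rather than keyword matching.

```python
import re

# Hypothetical illustration only: the actual technique is not public.
# This sketch shows the general shape of such a pipeline: score a
# conversation, compare the score against a threshold, and flag the
# conversation for escalation to human review.

# Illustrative patterns and weights; a real classifier would learn
# features from labelled conversation data.
RISK_PATTERNS = {
    r"\bhow old are you\b": 0.2,
    r"\bdon'?t tell (your|anyone)\b": 0.4,
    r"\bour secret\b": 0.4,
    r"\bsend (me )?a (photo|pic)\b": 0.5,
}

FLAG_THRESHOLD = 0.7  # assumed cut-off for escalation


def score_conversation(messages: list[str]) -> float:
    """Return a risk score in [0, 1] for a list of message strings."""
    score = 0.0
    for msg in messages:
        for pattern, weight in RISK_PATTERNS.items():
            if re.search(pattern, msg.lower()):
                score += weight
    return min(score, 1.0)


def flag_for_review(messages: list[str]) -> bool:
    """True when the conversation should be escalated for human review."""
    return score_conversation(messages) >= FLAG_THRESHOLD
```

In practice, flagged conversations would be routed to trained human moderators, who decide whether to report to law enforcement; no automated system would hand conversations directly to police without review.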
The technique will be licensed from today (9 January), free of charge, to small and medium-sized technology companies to help them stamp out child grooming on their platforms.
Home Secretary Priti Patel said:
Predators must get the message loud and clear that there is no safe space to groom children for abuse.
We are committed to stamping out this vile crime and this technique is just one part of that. Through collaboration with international partners and industry we are leading a worldwide effort to keep children safe from abuse.
Minister for Safeguarding and Vulnerability Victoria Atkins said:
Online grooming of children is utterly sickening, which is why it’s vital to drive innovation to tackle this appalling crime.
The launch of this technology represents the culmination of months of hard work by those committed to keeping our children safe online.
Microsoft Chief Digital Safety Officer Courtney Gregoire said:
At Microsoft, we embrace a multi-stakeholder model for combating online child exploitation that includes survivors and their advocates, government, tech companies, and civil society working together.
Today, we share a new technique – code name Project Artemis – to help prevent the online grooming of children for sexual purposes.
We invite other collaborators to embrace this technique, join the fight, and support continuous improvement.
The prototype of the technique was developed in Seattle in 2018. Engineers from Microsoft, Facebook, Google, Snap and Twitter worked for two days analysing thousands of conversations to understand patterns used by predators.
Since then, engineers have worked through technical, legal and policy aspects, analysing thousands more instances of grooming conversations to develop the technique. The work was led by a cross-industry group made up of Microsoft, The Meet Group, Roblox, Kik, Thorn and others.
This group was spearheaded by leading academic Dr Hany Farid, who had previously worked to develop a tool which assisted in the detection, disruption and reporting of child exploitation images.
Licensing and adoption of the technique will be handled by Thorn, a charity that focuses on harnessing the power of technology to protect children online.
This tool is part of the UK government’s work to tackle child sexual exploitation in all its forms. In July, the Home Secretary brought together counterparts from the US, Canada, Australia and New Zealand at the Five Country Ministerial meeting in London to discuss how to tackle online child sexual abuse. Ministers agreed to collaborate on designing a set of voluntary principles that will ensure online platforms have the systems needed to stop the viewing and sharing of child sexual abuse material, the grooming of children online, and the livestreaming of child sexual abuse.
Other measures to stamp out online child abuse include £30m funding to target the most dangerous and sophisticated offenders on the dark web, and upgrades to the Child Abuse Image Database. These include new tools to improve the capabilities of the police, enabling them to rapidly analyse seized devices and identify victims.
The Home Office is also working with the Joint Security and Resilience Centre (JSaRC) to develop tools to identify and block livestreamed child sexual abuse, and in May pledged £300,000 to further develop these capabilities.