New ‘extremism tool’ to fight terrorist content online
13 Feb 2018

Amber Rudd: 'We are engaging with internet companies to make sure that their platforms are not being abused'
The Home Secretary is due to meet her US counterparts today to discuss the development of new technology that automatically detects terrorist content on any online platform.

Amber Rudd is visiting Silicon Valley to hold a series of meetings with the main communication service providers in the US as the UK government unveiled a tool on Tuesday (February 13) that blocks extremist material from being viewed.

She said she had not ruled out forcing technology companies to use it by law.

Tests have shown this new tool – created by London-based artificial intelligence company ASI Data Science with the help of £600,000 of government funding – can automatically detect 94 per cent of Islamic State (Daesh) propaganda.

A Home Office statement said: “It has an extremely high degree of accuracy, for instance, if it analyses one million randomly selected videos, only 50 would require additional human review.

“The tool can be used by any platform, and integrated into the upload process, so that the majority of video propaganda is stopped before it ever reaches the internet.”

Thousands of hours of content posted by Daesh were run through the tool to "train" it to automatically spot extremist material. It has proved accurate to 99.995 per cent.

Anything the software was not sure about would then be flagged up for a human decision to be taken.
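As a rough illustration of how such a triage step might work in practice – this is a minimal sketch, not ASI Data Science's actual model, and the scoring function and threshold values below are assumptions chosen only to mirror the figures quoted above – a platform could route each video to block, human review or allow based on a classifier score:

# Illustrative sketch only: the article does not describe the real model,
# so score_video() and the thresholds here are invented stand-ins.

def score_video(video_path: str) -> float:
    """Hypothetical classifier returning a propaganda likelihood in [0, 1]."""
    raise NotImplementedError("stand-in for the trained model")

BLOCK_THRESHOLD = 0.99   # above this: block automatically
REVIEW_THRESHOLD = 0.90  # between the two: flag for a human decision

def triage(video_path: str) -> str:
    """Route a video to block / human_review / allow, as the article describes."""
    score = score_video(video_path)
    if score >= BLOCK_THRESHOLD:
        return "block"          # confident detections (the reported 94 per cent)
    if score >= REVIEW_THRESHOLD:
        return "human_review"   # uncertain cases passed to a person
    return "allow"

# The quoted 99.995 per cent figure is consistent with roughly 50 flagged
# videos per 1,000,000 benign ones: 1 - 50 / 1_000_000 = 0.99995.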

The Home Office and ASI will be sharing the methodology behind the new model with smaller companies to help combat the abuse of their platforms by terrorists and their supporters.

Many of the major tech companies have developed technology specific to their own platforms and have publicly reported on the difference this is making in their fight against terrorist content.

Smaller platforms, however, are increasingly targeted by Daesh and its supporters and they often do not have the resources to develop such technology.

The model, which has been trained using over 1,000 Daesh videos, is not specific to one platform so can be used to support the detection of terrorist propaganda across a range of video-streaming and download sites in real-time.
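A cross-platform tool of this kind would typically be called from each site's own upload handler, as in the earlier description of it being "integrated into the upload process". The sketch below is purely hypothetical – the classifier interface and moderation queue are invented names, not ASI Data Science's API – and only illustrates that integration point:

# Hypothetical upload hook; nothing here reflects the real tool's interface.
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str          # "allow", "block" or "human_review"
    confidence: float

class PropagandaClassifier:
    def classify(self, video_bytes: bytes) -> Verdict:
        """Stand-in for the shared detection model."""
        raise NotImplementedError

def queue_for_moderation(video_bytes: bytes) -> None:
    """Placeholder for a platform's own human-review queue."""
    pass

def handle_upload(video_bytes: bytes, classifier: PropagandaClassifier) -> bool:
    """Return True if the upload may proceed to publication."""
    verdict = classifier.classify(video_bytes)
    if verdict.label == "block":
        return False                       # stopped before it reaches the internet
    if verdict.label == "human_review":
        queue_for_moderation(video_bytes)  # held back pending a human decision
        return False
    return True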

Similar tools have in the past been heavily criticised by advocates of an "open" internet, who say such efforts can produce false positives – meaning content that is not particularly problematic ends up being taken down or blocked.

But in welcoming the new technology, Ms Rudd said: “Over the last year we have been engaging with internet companies to make sure that their platforms are not being abused by terrorists and their supporters.

“I have been impressed with their work so far following the launch of the Global Internet Forum to Counter-Terrorism, although there is still more to do, and I hope this new technology the Home Office has helped develop can support others to go further and faster.

“The purpose of these videos is to incite violence in our communities, recruit people to their cause, and attempt to spread fear in our society. We know that automatic technology like this can heavily disrupt the terrorists’ actions, as well as prevent people from ever being exposed to these horrific images.”

In Silicon Valley, Ms Rudd said the tool was made as a way to demonstrate that the Government's demand for a clampdown on extremist activity was not unreasonable.

"It's a very convincing example of the fact that you can have the information you need to make sure this material doesn't go online in the first place," she said.

"The technology is there. There are tools out there that can do exactly what we're asking for. For smaller companies, this could be ideal."

Silicon Valley giants such as Facebook and Google are pouring their own resources into solving this problem, but the tool is initially intended for smaller companies, which may one day be compelled to use it.

"We're not going to rule out taking legislative action if we need to do it," she added.

"But I remain convinced that the best way to take real action, to have the best outcomes, is to have an industry-led forum like the one we've got."

As part of her two-day visit to San Francisco, the Home Secretary will also meet Secretary of Homeland Security Kirstjen Nielsen to discuss how the UK and US can work together to tackle terrorist content online.

Ms Rudd will also meet with the Global Internet Forum to Counter Terrorism, which was launched last year following a roundtable convened at the Home Office in the aftermath of the Westminster Bridge attack.

The Home Secretary and Secretary Nielsen will agree that the internet itself is not the problem, but that those who commit terrorist attacks on our streets are increasingly influenced by what they read and what they see online.

She is expected to praise the hard work that industry, civil society and researchers are undertaking to tackle the threat head on.

Home Office analysis shows that Daesh supporters used more than 400 unique online platforms to push out their poisonous material in 2017, highlighting the importance of technology that can be applied across different platforms.

Previous research has found the majority of links to Daesh propaganda are disseminated within two hours of release, while a third of all links are disseminated within the first hour.

The Home Office estimates that between July and the end of 2017, extremist material appeared on 145 web services that had not been used for such propaganda before.

Technology expert Ian McLoughlin hails the ASI Data Science creation as a “great result” and predicts that the government’s new AI tool will only get better over time.

But the welcoming words of Mr McLoughlin, Professor of Computing at the University of Kent, come with a ‘performance’ warning.

He says that of the 1,000 Daesh videos used in the project, “almost all” would have been used for training.

“For any machine learning system, final performance is related to the inherent ability of the analysis and processing technique, plus the quality and quantity of the training material. As time goes by, performance is clearly likely to improve,” added Mr McLoughlin.

“On the topic of performance, if 94 per cent of videos were correctly recognised with 99.995 per cent accuracy, the big question is what happened to the six per cent that were not mentioned?

“Were those actual terrorist content that would be missed (false negative), or legitimate content that was incorrectly flagged (false positive)?

“The cost of the former is that something dangerous slips through; the cost of the latter is that a human – who would need to review any flagged content anyway – is loaded with additional work.

“It is important to analyse the errors in any AI system, and this is no exception. However, revealing the characteristics of this performance – i.e. which 60 videos are not captured – and especially revealing what kinds of videos are correctly and incorrectly recognised, would give too many secrets to those who are producing such material.

“In summary, a great result from ASI Data Science. Technology doesn’t stand still, and this will need to be improved as terrorists evolve their approaches, but let’s try to keep the exact technology and performance secret in the meantime.”
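To make the professor's distinction concrete, the worked example below assumes an arbitrary split of the 60 unrecognised test videos between the two error types; the split itself is invented, since the article does not reveal how those videos break down:

# Purely illustrative arithmetic; the 40/20 split is an assumption, not reported data.
total_videos = 1_000
detected = int(total_videos * 0.94)      # 940 correctly recognised
unrecognised = total_videos - detected   # 60 videos whose fate is the open question

false_negatives = 40   # assumed: terrorist content that slipped through
false_positives = 20   # assumed: legitimate content wrongly flagged

print(f"Missed propaganda (false negatives): {false_negatives}")
print(f"Legitimate content flagged for review (false positives): {false_positives}")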

Meanwhile, Facebook executives have pledged to tackle a toxic online environment in which propaganda, hate speech and disturbing content that exploits children thrive, after Unilever warned technology firms that it will pull lucrative advertising contracts from companies that allow their platforms to “breed division”.

Adam Mosseri, the social network’s head of news feed, told an audience in California that Unilever’s commitment to tackling issues such as online racism, sexism and terrorism is a “great thing”.

Unilever chief marketing officer Keith Weed called for better policing from Silicon Valley companies, adding: "Fake news, racism, sexism, terrorists spreading messages of hate, toxic content directed at children – parts of the Internet we have ended up with is a million miles from where we thought it would take us."

