(ISC)²’s two-day UK Secure Summit brings multi-subject sessions, ranging from hands-on practical workshops to keynotes and panel discussions, featuring local and international industry experts to maximise the learning experience and CPE opportunities.
Serving the entire (ISC)² EMEA professional community, the Summit offers a wealth of educational value, networking opportunities, and a community forum for like-minded professionals, all of which are FREE to (ISC)² members and (ISC)² Chapter members. Read on for insights from one of our popular Secure Summit UK sessions:
The (ISC)² 2018 Secure Summit UK saw Richard Hudson, Principal IT Consultant at msg systems, reveal how dramatic advances in Artificial Intelligence could mean machines can aid everything from regulatory compliance to the fight against terrorism.
Mr Hudson explained how msg systems is a pioneering, fast-growing company with 7,000 employees and a turnover approaching €1 billion this year. The company’s advances in Artificial Intelligence arose from a recent project in which it developed machine-learning tools that now have implications for AI-powered data privacy and even counter-terrorism.
The company had begun working with a business in the EU which had a joint-venture partner outside the EU. This meant the business was legally obliged to share large volumes of internal company data with a non-EU country, which could put it at risk of breaching data-privacy laws or leaking IP. In order to protect corporate IP, private company information and personally identifiable data, staff had to scour 50-page documents for any errant words revealing anything from the document author and unpatented innovations to internal growth plans.
Staff were so frightened of being responsible for a legal violation that they often withheld vital documents from the joint-venture partner, which in turn put the business in breach of its contractual obligations. This highlighted the significant tension between data privacy and efficiency, and between obligations to business customers or partners and to regulators.
Mr Hudson outlined how msg developed a solution called a ‘censor robot’ that automatically analyses documents and redacts sensitive words or sentences. The system was trained partly by user feedback on what constitutes sensitive information and partly by language-processing technology. This meant it could analyse and identify both individual offending words and entire sentences that it had not encountered before. It was trained to spot topics common to legally sensitive sentences, and even to recognise when a word is a synonym of another word relating to a confidential topic (such as ‘acquisition’ being used instead of ‘takeover’). The AI received ongoing user feedback on its mistakes and successes, forming a virtuous circle of continuous improvement.
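To make the idea concrete, the core redaction pass can be sketched in a few lines. This is a deliberately minimal illustration, not msg’s actual system: the flagged terms, the synonym groups, and the `redact` function are all hypothetical, and a real censor robot would use a learned language model plus user feedback rather than a hand-written word list.

```python
import re

# Hypothetical synonym groups: 'acquisition' should trigger the same
# redaction as 'takeover' (a real system would learn these relations).
SYNONYMS = {
    "takeover": {"takeover", "acquisition", "buyout"},
}

# Hypothetical flagged vocabulary, expanded with all synonyms.
FLAGGED = {"unpatented", "confidential"} | set().union(*SYNONYMS.values())

def redact(text: str, mask: str = "[REDACTED]") -> str:
    """Redact every sentence that contains a flagged term."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    out = []
    for sentence in sentences:
        words = {w.lower() for w in re.findall(r"[A-Za-z']+", sentence)}
        # Redact the whole sentence, not just the word, so context
        # around the sensitive term cannot leak either.
        out.append(mask if words & FLAGGED else sentence)
    return " ".join(out)
```

For example, `redact("We plan an acquisition next year. The weather is fine.")` returns `"[REDACTED] The weather is fine."` because ‘acquisition’ is caught via the synonym group, while the harmless sentence passes through untouched. The feedback loop described above would correspond to users correcting false positives and misses, with those corrections folded back into the model.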
Initially used on Word documents, the system could now be extended to scan everything from PowerPoint files to images for sensitive information. The technology also has potential applications in other kinds of valuable information retrieval, from patent searches to the scanning of real-time email traffic for sensitive data. It could even be scaled up and used by intelligence services to perform ‘bulk scanning’ of large volumes of internet traffic for words or sentences that could indicate a terrorist plot or nation-state attack.
Yet the breakthrough also carries valuable lessons. A machine cannot bear legal responsibility for decisions because it cannot understand nuance or context and has no ethical framework. For example, the machine can advise on potentially confidential words, but it cannot autonomously decide whether to withhold an email from a partner because it cannot weigh competing factors. This has implications for other fields of AI: a driverless car’s camera can identify obstacles, but it cannot make a moral choice between two types of accident, such as whether to swerve to avoid a dog even if this might injure the passengers. As a result, the AI was used to inform and guide human decisions on which documents to send, rather than taking the human out of the loop.
The project demonstrates how technology can augment human decisions instead of replacing human jobs. Crucially, it shows that the true role of AI in cyber security and elsewhere is to make human decisions even more intelligent.
The post Censor Robots: Using AI to Redact Confidential Information appeared first on Cybersecurity Insiders.
October 06, 2018 at 09:08AM