The UK's data protection authority hopes that advances in artificial intelligence will incorporate data protection by design, and it is seeking the public's opinion on how to achieve that.
The organization in charge of upholding information rights in the UK is asking the public how it should interpret, and eventually regulate, artificial intelligence, or AI.
In particular, the Information Commissioner's Office, the UK's data protection regulator, wants insight from the public on how organizations should inform individuals that their personal data may be processed by AI.
The ICO said this week that while there are clear benefits to AI, which it notes has improved life in the health care, recruitment, and commerce sectors, the technology also carries inherent risks.
The regulatory office hints that legislation akin to the General Data Protection Regulation, last year’s landmark data protection and privacy regulation, may be needed to achieve harmony around AI and data protection. The ICO is hoping that GDPR, which helped bake data protection risks into how organizations collect information and innovate, can serve as a compass here.
"Its considerable focus on new technologies reflects the concerns of legislators here in the UK and throughout Europe about the personal and societal effect of powerful data-processing technology like profiling and automated decision-making," the ICO said Monday.
The ICO, an executive non-departmental public body, hopes the project will culminate in an auditing framework for AI: a way to measure AI applications and ensure they are transparent, fair, and equipped with the proper mechanisms to assess and manage data protection risks. The office expects the framework to inform future guidance for organizations hoping to implement AI.
Simon McDougall, who joined the ICO last summer as its first-ever Executive Director for Technology Policy and Innovation after serving on the Board of Directors and the European Advisory Board of the International Association of Privacy Professionals, is heading up the initiative.
The team slated to develop the framework, comprising members of the ICO's Technology Policy and Innovation Directorate, will be spearheaded by Dr Reuben Binns, a Postdoctoral Research Fellow in Artificial Intelligence whom the ICO appointed last November.
AI is one of the ICO's three "technology priorities" for 2018-19. The office also wants to examine (PDF) how risks around cybersecurity, big data and machine learning, and web and cross-device tracking can affect how personal data is collected, stored, and transmitted.
The ICO says it plans to use the feedback it receives, whether from data scientists, app developers, or anyone else who works with AI, in a formal consultation paper it plans to publish by next January. The paper will factor into the office's final AI auditing framework, scheduled for publication next spring.