
Google’s British service users’ data to get US oversight

Amid perceived (or professed) uncertainty around the UK’s future GDPR adequacy status, Google executives have opted to transfer oversight of their UK data subjects from their EU subsidiary Google Ireland Limited to their American HQ Google LLC.

Quoted by outlets including Reuters and The Guardian, Lea Kissner, Google’s former lead for global privacy technology, stated:

“There’s a bunch of noise about the UK government possibly trading away enough data protection to lose adequacy under GDPR, at which point having them in Google Ireland’s scope sounds super-messy. […] Never discount the desire of tech companies not to be caught in between two different governments.”

It’s important to remember that the UK has not yet been granted GDPR adequacy status, which remains subject to the current Brexit negotiations. Officially, the Government is seeking adequacy status, and data protection regulations are not expected to constitute an obstacle to any UK-EU departure deal.

Harry Smithson, 28th February 2020

European Commission publishes “White Paper on Artificial Intelligence”

19th February saw the release of the European Commission’s white paper on AI, which remains open to public consultation until May. While extolling the virtues of AI, such as its much-anticipated roles in fine-tuning medical diagnostics and mitigating climate breakdown, the white paper ranks intrusion and privacy risks among the four main issues facing policy-making around AI. The other three are opaque decision-making, discriminatory outcomes, and use for criminal purposes.

The expected impact on governance that AI uptake could have, and the resulting conspicuous contrast with governance systems lacking cutting edge AI capacity, leads the Commission to go so far as to note that a common European framework for policy on AI is necessary to avoid “the fragmentation of the single market.”

The paper outlines a largely theoretical “European approach to excellence and trust,” emphasising the requirement for global competitiveness in AI innovation. It states, however, that “trustworthiness is a prerequisite for [AI] uptake.” For instance, safeguards on law enforcement’s expanded capacities due to AI technology are recommended, though not yet detailed. Much of this trust is purportedly to be garnered by taking the “human-centric approach” to AI application. This approach was explicated in a paper entitled “Communication on Building Trust in Human-Centric Artificial Intelligence,” released by the Commission last year, in which privacy and data governance was among seven “key requirements that AI applications should respect.”

Concrete, technical policies for regulation are somewhat more elusive. Both papers reiterate the accuracy requirement for any datasets that AI systems draw upon, i.e. the necessity of data integrity. But the requirement for stored data to be accurate is already enforced by the General Data Protection Regulation (GDPR), a framework which will remain in UK law after Brexit under the Data Protection Act 2018, and which is being emulated across the world. Quite how the Commission’s value system of human-centric ethics will manifest in AI development remains unclear.

Where the white paper on AI is most outspoken is on the perceived limitations of current EU legislation to regulate, or even conceptualise, AI. Changes to the legal concept of ‘safety’ raised by AI risk and predictive analysis are anticipated; ambiguity concerning responsibility between economic agents in the supply chain may pose judicial quandaries; and there is even a chapter dedicated to the problem of AI indecipherability: if human officials cannot ascertain how an AI programme reached a decision, how can they know whether that decision was skewed by bias in a dataset? Human oversight of AI development is therefore recommended at each stage of the industrial chain.

Harry Smithson, 21st February 2020

Government expands Ofcom’s role to combat ‘Online Harms’

Online harms can take a variety of forms, privacy violations being among the most notorious. However we categorise negative internet user experiences, we know from a recent Ofcom study that 61% of adults and 79% of 12- to 15-year-olds have reported at least one potentially harmful online experience in the last 12 months.

As part of the government’s response to public consultation on the Online Harms White Paper, the DCMS announced on 12th February that the UK’s telecoms and broadcasting regulator will also be the new online harms regulator. The Home Office and DCMS have been working with the charity Barnardo’s to provide greater protection for vulnerable internet users, particularly children, building upon growing institutional and regulatory oversight of digital services.

Unlike the General Data Protection Regulation (GDPR), which has a far-reaching purview, the new regulation will likely apply to fewer than 5% of UK businesses, as Ofcom will only be responsible for monitoring organisations that host user-generated content (comments, forums etc.).

But from a data protection perspective, it’s interesting to see how GDPR terminology and values have shaped this initiative – consider, for instance, former secretary of state Nicky Morgan’s statement on the government’s response to the white paper:

“We will give the regulator the powers it needs to lead the fight for an internet that remains vibrant and open but with the protections, accountability and transparency people deserve.”

We can expect Ofcom’s new role to be formally conferred, and to come into force, under Nicky Morgan’s recent successor, Oliver Dowden.

In the meantime, the Information Commissioner, Elizabeth Denham, head of the UK’s GDPR enforcement authority, has welcomed this expanded Ofcom as “an important step forward in addressing people’s growing mistrust of social media and online services.”

She continues, in an ICO press release on the heels of the DCMS announcement, “the scales are falling from our eyes as we begin to question who has control over what we see and how our personal data is used.”

If you have any questions about data protection, please contact us by email at team@datacompliant.co.uk or call 01787 277742.

Harry Smithson, 14th February 2020