
Countdown to Brexit… 69 days to go

The new Parliamentary session starts on 3rd September. Inevitably the session will be, once again, dominated by Brexit. With so little time between the start of the session and the Brexit deadline of Hallowe’en (31st October) there will be little Parliamentary time given over to any issues other than the terms of the UK’s exit from the EU. Parliamentary time is limited further by the Party Conference season with a further recess between 14th September and 9th October.

The Conservative Party Conference runs from 29th September to 2nd October in Manchester. Members of Cabinet will be expected to attend, and no doubt their speeches from the platform and on the fringe will be scrutinised for new policy initiatives and especially the direction of policy post-Brexit.

Over the summer the political agenda was dominated by the possibility of a “No Deal” Brexit, with MPs from all parties floating a variety of plans for how such an eventuality could be prevented. Prime Minister Johnson has been resolute in his belief that the No Deal option cannot be removed from the table.

Data Protection Implications

The new Prime Minister wasted no time in assembling his new Cabinet, making his intentions very clear by appointing, with few exceptions, long-standing Brexit supporters. Notable among the exceptions were Amber Rudd, who retains the Work & Pensions brief she has held since November 2018, and Nicky Morgan, who assumes a Cabinet role as Secretary of State for Digital, Culture, Media and Sport. This is of particular interest because the brief includes Data Protection regulation and writing the “UK GDPR” into UK law.

When the UK exits the EU, as is planned, the EU GDPR will no longer be applicable in the UK (although the Data Protection Act 2018, which references the GDPR, will still apply). The UK government intends to write the GDPR into UK law, with changes to tailor it for the UK. The government has already published a ‘Keeling Schedule’ for the GDPR, which shows the planned amendments. It can be found here: http://bit.ly/2Nsy9sw

The amendments primarily relate to references to the European Parliament, EU Member States, and the EU Commission.

What Next?

Deal or No Deal on the exit date, the UK will become a ‘third country’ (to use the jargon).  It has been suggested that there will be a period of at least 2 years of negotiations to finalise the full terms of the divorce arrangements.  During this time the UK Government will continue to allow transfers to the EU.  This will be kept under review by the new Secretary of State.  Watch this space!

Gareth Evans 23.08.2019

Facebook’s cryptocurrency Libra under scrutiny amid concerns of ‘data handling practices’

It would be giving the burgeoning cryptocurrency Libra short shrift to call it ambitious. Its aims as stated in the Libra Association’s white paper are lofty even by the rhetorical standards of Silicon Valley. If defining Libra as ‘the internet of money’ isn’t enough to convince you of the level of its aspiration, the paper boasts the currency’s ability to financially enfranchise the world’s 1.7 billion adults without access to traditional banking networks or the global financial system.

Like its crypto predecessors, Libra uses blockchain technology to remain decentralised and inclusive, enabling anyone with the ability to pick up a smartphone to participate in global financial networks. Distinguishing itself, however, from existing cryptocurrencies, Libra promises stability thanks to the backing of a reserve of ‘real assets,’ held by the Libra Reserve. There is also the added benefit, hypothetically, of Libra proving to be more energy efficient than cryptocurrencies such as Bitcoin because there will be no ‘proof of work’ mechanism such as Bitcoin mining, which requires more and more electricity as the currency inflates.

So far, so Zuckerberg. It may seem unsurprising then, that global data protection regulators have seen the need to release a joint statement raising concerns over the ‘privacy risks posed by the Libra digital currency and infrastructure.’ While risks to financial privacy and related concerns have been raised by Western policymakers and other authorities, this is the first official international statement relating specifically to personal privacy.

The joint statement, reported on the UK’s Information Commissioner’s Office (ICO) website on 5th August, has signatories from Albania, Australia, Canada, Burkina Faso, the European Union, the United Kingdom and the United States. The primary concern is that there is essentially no information from Facebook, or its participating subsidiary Calibra, on how personal information will be handled or protected. The implementation of Libra is rapidly forthcoming – the target launch is in the first half of next year. Its uptake is anticipated to be similarly rapid and widescale thanks to Facebook’s goliath global status. It is likely, therefore, that the Libra Association (nominally independent, but of which Facebook, among other tech and communications giants, is a founding member) will become the custodian of millions of people’s data – many of whom will reside in countries that have no data protection laws – in a matter of months.

The statement poses six main questions (a ‘non-exhaustive’ conversation-starter) with a view to getting at least some information on how Libra will actually function, both at user level and across the network, how the Libra Network will ensure compliance with relevant data protection regulations, how privacy protections will be incorporated into the infrastructure, and so on. All of these questions are asked to get some idea of how Facebook, Calibra and their partners have approached personal data considerations.

Profiling, Algorithms and ‘Dark Patterns’

The joint statement asks how algorithms and profiling involving personal data will be used, and how this will be made clear to data subjects to meet the standards for legal consent. These are important questions relating to the design of user-level access to the currency, about which prospective stakeholders remain ill-informed. The Libra website does state that the Libra blockchain is pseudonymous, allowing users to hold addresses not linked to their real-world identity. How these privacy designs will manifest remains unclear, however, and there is as yet no guarantee that de-identified information cannot be re-identified through nefarious means, either internally or by third parties.

The regulators also bring up the use of nudges and dark patterns (sometimes known as dark UX) – methods of manipulating user behaviour that can rapidly become unethical or illegal. Nudges may be incorporated into a site (they may sometimes be useful, such as a ‘friendly reminder’ that Mother’s Day is coming up on a card website) in order to prompt commercial activity that may not have happened otherwise. The line between a reasonable nudge and a dubious one is not always clear. Consider the example of Facebook asking a user ‘What’s on your mind?’, prompting the expression of a feeling or an attitude. We already know that Facebook has plans to scan information on emotional states, ostensibly for the purposes of identifying suicidal ideation and preventing tragic mistakes. The benefits of this data to unscrupulous agents, however, could prove, and indeed have proved, incalculable.

The Libra Network envisions a ‘vibrant ecosystem’ (what else?) of app-developers and other pioneers to ‘spur the global use of Libra.’ Questions surrounding the Network’s proposals to limit data protection liabilities in these apps are highly pertinent considering the lightspeed pace with which the currency is being designed and implemented.

Will Libra be able to convince regulators that it can adequately distance itself from these practices, which take place constantly and perennially online? Has there been any evidence of the Data Protection Impact Assessments (DPIAs) that the European Union’s General Data Protection Regulation (GDPR) unequivocally demands for data sharing on this scale?

Hopefully, Facebook or one of its subsidiaries or partners will partake in the conversation started by the joint statement, showing data protection authorities the same level of cooperation and diligence that they have shown to financial authorities. More updates to come.

Harry Smithson, 9th August 2019

Framework for EU-US data flows under scrutiny as ‘Schrems II’ case takes place at the CJEU

For those unfamiliar with the Schrems saga, a brief catch-up may be required. The original case, now known as ‘Schrems I,’ involved an Austrian activist, Max Schrems, filing a complaint with the Irish data protection authority (DPA) against Facebook. The complaint was that Facebook had allowed US authorities to access his personal data on social media in violation of EU data protection law. This case ultimately found its way to the Court of Justice of the European Union (CJEU) and resulted in the invalidation of the ‘Safe Harbor Framework,’ which companies had relied on to transfer data from the EU to the US, largely because US legislation does not place adequate limits on what data authorities may access.

With the Safe Harbor Framework invalidated, the Irish DPA asked Max Schrems to reformulate the case. On 9th July, ‘Schrems II’ was heard at the CJEU in Luxembourg. This case took aim at the EU Standard Contractual Clauses (SCCs), which Facebook has been relying on to legitimise its international data flows. Advocates for Schrems also called for invalidation of the EU-US Privacy Shield, arguing that it provides inadequate protection and privacy to data subjects.

The hearing included many supporters of SCCs, who emphasised the role of DPAs in enforcing them and suspending data flows where necessary and appropriate. The CJEU will likely not reach a decision until early 2020, but with the two remaining frameworks for legitimate EU-US data flows under such heavy scrutiny, data protection practitioners should be preparing for the impact these potential invalidations would have on their clients’ or their companies’ data flows.

Harry Smithson, July 2019

Two high-profile GDPR fines for British Airways and Marriott International, Inc

The Information Commissioner’s Office (ICO) has released two statements this week declaring intention to fine British Airways and Marriott International, Inc £183.39m and £99m respectively for breaches of the General Data Protection Regulation (GDPR). In both cases, which affect data subjects from countries across the world, the ICO was the lead supervisory authority acting on behalf of other EU Member State data protection authorities.

These punitive measures are provided under the GDPR and are the largest fines issued by the ICO to date. Both break the former record: the £500,000 fine issued to Facebook last year for the social media giant’s role in the Cambridge Analytica scandal (which was the maximum fine possible under the previous, much more lenient legislation, since much of the activity had taken place prior to the GDPR’s implementation).

These two warning shots are fines amounting to 1.5% of the respective companies’ global turnover, out of a possible 4% provided for by the GDPR. This relative leniency reflects the companies’ willingness to cooperate with the authority and make immediate improvements where possible. However, it is expected that the companies will appeal the decisions.

Failure to protect their customers’ data due to negligent digital security was at the heart of the decisions. The ICO discovered that from June to September 2018, users of BA’s website were being diverted to a fraudulent site used to harvest data. Roughly 500,000 customers had their personal information compromised in this way. Arguably on an even greater scale, the hotel giant Marriott was found to be presiding over a system exposing 339 million guest records to the internet.

Due diligence is an important aspect of these decisions, linked to the principle of ‘accountability’ defined in the GDPR. In the case of BA, poor security arrangements on the website allowed the cyber attackers to harvest personal data relating to log-in details, payment cards, travel bookings, names and addresses. Similarly, Marriott had failed to pursue due diligence when it acquired Starwood (a hotel chain), which maintained a vulnerability in its guest reservation database dating back to 2014.

Marriott’s CEO has emphasised that its subsidiary was the victim of a cyberattack, and indeed the company itself notified data protection authorities of the breach. But as the Information Commissioner, Elizabeth Denham, has stated, “the GDPR makes it clear that organisations must be accountable for the personal data they hold. This can include carrying out proper due diligence when making a corporate acquisition, and putting in place proper accountability measures to assess not only what personal data has been acquired, but also how it is protected.”

These decisions set a strong precedent, and will hopefully encourage companies to take greater responsibility for the personal data they hold. Being the victim of a cyberattack is not in itself an excuse: companies and organisations must demonstrate that they have taken appropriate and robust security measures. The accountability principle as explained in the GDPR is very clear on this.

Harry Smithson, July 2019

University data protection policies under scrutiny as report finds threats of cyber attacks

A report published by the Higher Education Policy Institute and conducted by Jisc, a digital infrastructure provider for higher education (HE), has emphasised the expanding risk of cyberattacks facing UK universities and academic institutions in general. Last year saw a 17% increase in attacks and breaches on the year before, and the trend is likely to continue: cyberattacks are expected to increase not only in frequency, but also in sophistication.

It is common knowledge that the higher education sector is expanding massively as more and more young people at home and abroad become students in the UK. On top of this, universities have become increasingly involved in cyber security research, making these institutions ever more desirable targets for, in the report’s words, “organised criminals and some unscrupulous nation states.” According to separate research conducted by VMware, 36% of universities believe that a successful cyberattack on their research data would pose a risk to national security.

The report (titled “How safe is your data? Cyber-security in higher education”) begins by relating a couple of everyday scenarios in academia in which cyberattacks can easily occur. These scenarios include a Distributed Denial of Service (DDoS) attack on a student using a Virtual Learning Environment (VLE); and a ransomware infection affecting a university’s digital infrastructure after a member of staff visits a website containing malicious code.

Threats such as these compound the sector’s somewhat underreported history of data protection challenges (to put it lightly). Thousands of records, many containing special category data (prior to the GDPR, ‘sensitive personal data’), have been breached across a host of institutions throughout 2017 and 2018. A whistle-stop tour of these incidents might include the University of East Anglia’s email scandal in which a spreadsheet containing health records connected to essay extensions was leaked to hundreds of students; the University of Greenwich receiving a £120,000 fine for holding data on an unsecured server; and Oxford and Cambridge research papers being stolen and sold on Farsi language websites.

To understand the extent of the vulnerability that the HE sector’s data protection policies and practices have demonstrated, one need only look at Jisc’s penetration tests of an array of institutions’ resilience to ‘spear-phishing,’ an attack in which a specific individual is targeted with requests for information (often an email using the name of a senior member of staff, requesting, for example, gift voucher purchases or the review of an attached document containing malware). 100% of Jisc’s attempts to use spear-phishing to gain access to data or find cyber vulnerabilities were successful.

Data protection policies come hand in hand with cyber security. Vast amounts of information are stored and used in university research projects, containing data relating not only to students and faculty, but to many external individuals and third parties. Robust data protection policy, including appropriate training for staff and regular risk assessments that analyse cybersecurity penetrability, is vital to reduce the risk of phishing and vulnerability to breaches and hackers.

As the report concludes, “It is imperative that those in higher education continually assess and improve their security capability and for higher education leaders to take the lead in managing cyber risk to protect students, staff and valuable research data from the growing risk of attack.”

Harry Smithson, June 2019

European Commission reports awareness throughout Europe of data rights and data protection

The Special Eurobarometer 487a report on the GDPR, conducted by survey and data insight consultancy Kantar at the request of the European Commission, was published this month. Where relevant, the report’s findings are compared with findings from the Special Eurobarometer 431 on Data Protection conducted in 2015.

The salient finding is that two-thirds of Europeans have heard of the General Data Protection Regulation (GDPR). Moreover, a clear majority are aware of most of the rights guaranteed by GDPR and nearly six out of ten Europeans know of a national authority tasked with protecting their data and responding to breaches.

The level of general awareness of GDPR varies across the EU, ranging from nine in ten respondents in Sweden to just over four in ten in France (44%).

The sixty-eight-page report contains detailed comparative data on European attitudes toward the Internet, social media, online purchasing, data security and other GDPR-related phenomena.

Social Media

Despite a general increase in data awareness, the proportion of social network users in Europe who responded affirmatively to the question ‘have you ever tried to change the privacy settings of your personal profile from the default settings on an online social network?’ has decreased by four percentage points (from 60% in 2015 to 56% in 2019). Trust in social media giants among Europeans, therefore, seems to remain stable.

Interestingly, while UK internet-users are by some way the most likely in Europe to regularly purchase online (64%, followed by the Dutch and Swedish on 50%), they are also among the most likely to ‘never’ use social networks (one in five), following only the Czech Republic (21%) and France (28%). Might this not place under scrutiny the common assumption of a significantly positive correlation between marketing on social media and online sales? While online purchasing has remained stable since 2015, use of social media has expanded significantly, by 15%.

Privacy Statements

For anyone working on privacy statements, or considering reworking the ones they have, the report’s findings on this subject may be useful. Only 13% of EU28 internet users say they ‘fully read’ privacy statements; 47% of respondents said they would read them ‘partially,’ while 37% would not read them at all. These figures are fairly consistent across all demographics and member states.

Perhaps unsurprisingly, it is the length of privacy statements that is the main reason respondents give for not fully reading them (at 66% of respondents, who could choose multiple reasons in the survey). In the UK this is higher than average at 75%, in line with the finding that high rates of internet usage correlate with people finding things too long to read.

Length of privacy statements is followed by finding them unclear or difficult to understand (31%), the sufficiency of a privacy statement existing on the website at all (17%), the belief that the law will protect them in any case (15%), the statement ‘isn’t important’ (11%), distrust in the website honouring the statement (10%, although this has fallen by 5% since 2015), and finally ‘other’ and not knowing where to find them (5% each).

Websites do seem to have improved the clarity or wording of their privacy statements slightly over the last four years, given the mild reduction (7%) in Europeans claiming the statements are difficult to read. Respondents in the UK are among the least likely in Europe (at 19%) to find privacy statements unclear, on a par with Croatians and just below Latvians (at 15%).

Concern over control of data

The report shows there is still more that can be done by organisations and even data protection authorities wanting to build confidence among people providing information online. More than six in ten Europeans are concerned about not having complete control over the information they provide online. Indeed, 16% responded that they were ‘very concerned’. The British and the Irish are among the most concerned, either ‘very’ or ‘fairly’, at 73% and 75% respectively.

Overall, there has been a mild decrease across Europe of respondents expressing concern over control of their data, with significant decreases of up to 20% in Eastern Europe. Five countries show minor increases of concern, the highest being France and Cyprus with 5%.

Conclusions

Respondents are not only broadly aware of the rights guaranteed under the GDPR, but many have begun to exercise them. Nearly a quarter of Europeans (24%) have invoked the right not to receive direct marketing when taking action against such an infringement. While awareness of the right to have a say when decisions are automated remains relatively low (41%), this proportion is likely to increase.

As the report states, ‘the GDPR regulation is now more important than ever – almost all respondents use the Internet (84%), with three quarters doing so daily.’ Organisations have the opportunity to pave the way for greater confidence and trust in online activities involving consumers’ data.

Harry Smithson, 28th June 2019

Belgian Data Protection Authority’s first GDPR fine imposed on public official 

The Belgian DPA delivered a strong message on 28th May 2019, that data protection is “everyone’s concern” and everyone’s responsibility, by premiering the GDPR’s sanctioning provision in Belgium with a fine of €2,000 imposed on a mayor (‘bourgmestre’) for the illegal utilisation of personal data. 

Purpose Limitation was Breached 

The mayor in question used personal data obtained for the purposes of mayoral operations in an election campaign, in breach of GDPR, particularly the purpose limitation principle, which states that data controllers and/or processors must only collect personal data for a specific, explicit and legitimate purpose. Given the fairly moderate fine, the data the mayor obtained would not have contained special category data (formerly known as sensitive personal data in the UK). The Belgian DPA also looked at other factors when deciding on the severity of the sanction, including the limited number of affected data subjects; nature and gravity of the infringement; and duration.  

‘Not Compatible with Initial Purpose’ 

The Belgian DPA received a complaint from the affected data subjects themselves, whose consent to the processing of their data was based on the assumption that it would be used appropriately, in this case for administrative mayoral duties. The plaintiffs and the defendant were heard by the DPA’s Litigation Chamber, which concluded along GDPR lines that ‘the personal data initially collected was not compatible with the purpose for which the data was further used by the mayor.’

The decision was signed off by the relatively new Belgian commissioner, David Stevens, as well as the Director of the Litigation Chamber, a Data Protection Authority chamber independent from the rest of the Belgian judicial system. 

Harry Smithson, 3rd June 2019

Personal Data Protection Act (PDPA) comes into effect in Thailand after royal endorsement

On 27th May, the Kingdom of Thailand’s first personal data protection law was published in the Government Gazette and made official. This comes three months after the National Legislative Assembly passed the Personal Data Protection Act in late February and submitted the act for royal endorsement. 

While the law is now technically in effect, its main ‘operative provisions,’ such as data subjects’ rights to consent and access requests, civil liabilities and penalties, will not come into proper effect until a year after publication, i.e. May 2020. This was planned to give data controllers and processors a one-year grace period to prepare for PDPA compliance, the requirements and obligations of which were designed to be demonstrably equivalent to the EU’s internationally pioneering General Data Protection Regulation (GDPR). As such, within a year, Thai public bodies, businesses and other organisations may qualify for ‘adequate’ third-country status regarding data protection provisions, which would allow market activity and various other transactions involving proportionate data processing between EU member states and Thailand. The GDPR places special restrictions on the transfer of data outside the EEA, but the PDPA may prompt the EU to make a data protection adequacy decision regarding Thailand.

 It is uncommon in Thai law for provisions to have an ‘extraterritorial’ purview, but the new PDPA was designed to have this scope to protect Thai data subjects from the risks of data processing, particularly marketing or ‘targeting’ conducted by non-Thai agents offering goods or services in Thailand. 

 The PDPA contains many word-for-word translations of the GDPR, including the special category of ‘sensitive personal data’ and Subject Access Requests (SARs). Personal data itself is defined, along GDPR lines, as “information relating to a person which is identifiable, directly or indirectly, excluding the information of a dead person.” 

The European Commission has so far recognised Andorra, Argentina, Canada (commercial organisations), the Faroe Islands, Guernsey, Israel, the Isle of Man, Jersey, New Zealand, Switzerland, Uruguay and the United States of America (limited to the Privacy Shield framework), and most recently Japan, as third countries providing adequate data protection. An adequacy agreement is currently being sought with South Korea, and with Thailand’s latest measures, it may be that the southeast Asian nation will be brought into the fold.

Irrespective of Brexit

The Data Protection Act passed by Theresa May’s administration last year implements the GDPR within the UK’s regulatory framework. However, on exit the UK will become a so-called third country and will have to seek an adequacy decision. The ICO will continue to be the UK Supervisory Authority.

 Harry Smithson, 30 May 2019 

General Data Protection Regulation reaches its first birthday

This blogpost arrives as the General Data Protection Regulation (GDPR) reaches its first birthday, and a week after a report from the Washington-based Center for Data Innovation (CDI) suggested amendments to the GDPR.

The report argues that regulatory relaxations would help foster Europe’s ‘Algorithmic Economy,’ purporting that GDPR’s restrictions of data sharing herald setbacks for European competitiveness in the AI technology field. 

Citing the European Commission’s ambition “for Europe to become the world-leading region for developing and deploying cutting-edge, ethical and secure AI,” the report then proceeds to its central claim that “the GDPR, in its current form, puts Europe’s future competitiveness at risk.”  

That being said, the report notes with approval France’s pro-AI strategy within the GDPR framework, in particular the country’s use of the clause that “grants them the authority to repurpose and share personal data in sectors that are strategic to the public interest—including health care, defense, the environment, and transport.”

Research is still being conducted into the legal and ethical dimensions of AI and the potential ramifications of automated decision-making for data subjects. In the UK, the ICO and the government’s recent advisory board, the Centre for Data Ethics and Innovation (CDEI – not to be confused with the aforementioned CDI), are opening discussions and issuing calls for evidence regarding individuals’ or organisations’ experiences with AI. There are of course responsible ways of using AI, and organisations hoping to make the best of this new technology have the opportunity to shape the future of Europe’s innovative but ethical use of data.

The Information Commissioner’s Office (ICO) research fellows and technology policy advisors release short brief for combatting Artificial Intelligence (AI) security risks 

The ICO’s relatively young AI auditing framework blog (set up in March this year) discusses security risks in its latest post, using comparative examples of data protection threats in traditional technology and in AI systems. The post focuses “on the way AI can adversely affect security by making known risks worse and more challenging to control.”

From a data protection perspective, the main issue with AI is its complexity and the volume not only of data, but of externalities or ‘external dependencies’ that AI requires to function, particularly in the AI subfield of Machine Learning (ML). Externalities take the form of third-party or open-source software used for building ML systems, or third-party consultants or suppliers who use their own or a partially externally dependent ML system. The ML systems themselves may have over a hundred external dependencies, including code libraries, software or even hardware, and their effectiveness will be determined by their interaction with multiple data sets from a huge variety of sources.

 Organisations will not have contracts with these third parties, making data flows throughout the supply chain difficult to track and data security hard to keep on top of. AI developers come from a wide array of backgrounds, and there is no unified or coherent policy for data protection within the AI engineering community. 

The ICO’s AI auditing blog uses the example of an organisation hiring a recruitment company that uses Machine Learning to match candidate CVs to job vacancies. Previously, a certain amount of personal data would have been transferred between the organisation and the recruitment agency by manual methods. However, the additional steps in the ML system mean that data will be stored and transferred in different formats across different servers and systems. They conclude: “for both the recruitment firm and employers, this will increase the risk of a data breach, including unauthorised processing, loss, destruction and damage.”

 For example, they write: 

  • The employer may need to copy HR and recruitment data into a separate database system to interrogate and select the data relevant to the vacancies the recruitment firm is working on. 
  • The selected data subsets will need to be saved and exported into files, and then transferred to the recruitment firm in compressed form. 
  • Upon receipt the recruitment firm could upload the files to a remote location, e.g. the cloud. 
  • Once in the cloud, the files may be loaded into a programming environment to be cleaned and used in building the AI system. 
  • Once ready, the data is likely to be saved into a new file to be used at a later time. 
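
To make the proliferation of copies concrete, here is a minimal, purely illustrative sketch of those hops in Python. None of the file names, paths or libraries below come from the ICO post, and a local folder simply stands in for the cloud; the point is only that each step leaves another copy of the same personal data in a new location and format.

```python
# Hypothetical sketch of the hops listed above. Every step leaves a further copy
# of the candidates' personal data in a new place and format.
import shutil
import sqlite3
import zipfile
from pathlib import Path

work = Path("ml_recruitment_demo")
work.mkdir(exist_ok=True)

# 1. The employer copies HR and recruitment data into a separate database
#    so the records relevant to the vacancy can be selected.
db = sqlite3.connect(str(work / "hr_extract.db"))
db.execute("CREATE TABLE IF NOT EXISTS candidates (name TEXT, cv TEXT, vacancy TEXT)")
db.execute("INSERT INTO candidates VALUES ('A. Example', 'cv text...', 'Data Analyst')")
db.commit()

# 2. The selected subset is saved and exported to a file, then compressed for transfer.
export = work / "candidates_subset.csv"
rows = db.execute("SELECT name, cv FROM candidates WHERE vacancy = 'Data Analyst'").fetchall()
export.write_text("\n".join(",".join(row) for row in rows))
archive = work / "candidates_subset.zip"
with zipfile.ZipFile(archive, "w") as zf:
    zf.write(export, arcname=export.name)

# 3. On receipt, the recruitment firm uploads the archive to a remote location
#    (here a local folder stands in for the cloud).
cloud = work / "cloud_bucket"
cloud.mkdir(exist_ok=True)
shutil.copy(archive, cloud / archive.name)

# 4./5. In the cloud, the files are loaded into a programming environment,
#       cleaned, and saved again as a new file for use in building the AI system.
with zipfile.ZipFile(cloud / archive.name) as zf:
    zf.extractall(cloud)
cleaned = cloud / "training_data_cleaned.csv"
cleaned.write_text((cloud / export.name).read_text().strip().lower())

# The same personal data now exists in at least five places: the extract database,
# the CSV export, the zip archive, the uploaded copy and the cleaned training file.
```

Each of those copies is something the organisation remains accountable for.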

This example will be relevant to all organisations contracting external ML services, which is the predominant method for UK businesses hoping to harness the benefits of AI. The blog provides three main pieces of advice based on ongoing research into this new, wide (and widening) area of data security; a minimal illustrative sketch of the first suggestion follows the list below. They suggest that organisations should:

  • Record and document the movement and storing of personal data, noting when the transfer took place, the sender and recipient, and the respective locations and formats. This will help monitor risks to security and data breaches; 
  • Delete intermediate files, such as compressed versions, when they are no longer required, as per best-practice data protection guidelines; and 
  • Use de-identification and anonymisation techniques and technologies before data are taken from the source and shared either internally or externally. 
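
The first of those suggestions lends itself to a simple mechanism. The sketch below is my own illustration rather than code from the ICO, and the field names, file paths and helper functions are assumptions; it shows one way an organisation might record each movement of a personal-data file, capturing when the transfer took place, the sender and recipient, and the respective locations and formats, so that data flows through the supply chain can be audited later.

```python
# Hypothetical sketch: a minimal transfer log for personal-data files.
# Field names and helpers are illustrative assumptions, not ICO guidance.
import csv
import hashlib
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from pathlib import Path


@dataclass
class TransferRecord:
    """One movement of a personal-data file between systems."""
    file_name: str
    sender: str                 # e.g. "Employer HR system"
    recipient: str              # e.g. "Recruitment firm"
    source_location: str
    destination_location: str
    data_format: str            # e.g. "ZIP-compressed CSV"
    sha256: str                 # fingerprint identifying the exact version transferred
    transferred_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def fingerprint(path: Path) -> str:
    """Hash the file so the exact version that moved can be identified later."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def log_transfer(record: TransferRecord, log_file: Path) -> None:
    """Append the record to a CSV audit log, writing a header on first use."""
    is_new = not log_file.exists()
    with log_file.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(record).keys()))
        if is_new:
            writer.writeheader()
        writer.writerow(asdict(record))


if __name__ == "__main__":
    export = Path("candidate_cvs_subset.zip")          # illustrative intermediate file
    export.write_bytes(b"example compressed export")   # stand-in content for the demo

    log_transfer(
        TransferRecord(
            file_name=export.name,
            sender="Employer HR system",
            recipient="Recruitment firm",
            source_location="on-premises file server",
            destination_location="recruitment firm cloud storage",
            data_format="ZIP-compressed CSV",
            sha256=fingerprint(export),
        ),
        log_file=Path("personal_data_transfer_log.csv"),
    )

    # Per the second suggestion, delete the intermediate file once it is no longer required.
    export.unlink()
```

A real system would also record the lawful basis for the transfer and a retention period, and would keep the log somewhere tamper-evident, but the shape of the record is the important part.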

Harry Smithson,  May 2019

What is a Data Protection Officer (DPO), and do you need one?

A DPO (Data Protection Officer) is an individual responsible for ensuring that their organisation processes the data of its staff, customers, providers and any other individuals (i.e. data subjects) in compliance with data protection regulations. Under the EU-wide General Data Protection Regulation (GDPR), a DPO is mandatory for:

  1. Public authorities; and
  2. Organisations that process data
  • On a large scale; 
  • With regular and systematic monitoring; 
  • As a core activity; or 
  • Involving large volumes of ‘special category data,’ formerly known as ‘sensitive personal data,’ i.e. information related to a living individual’s racial or ethnic origin, political opinions, religious beliefs, trade union membership, physical or mental health condition, sex life or sexual orientation, or biometric data.

It may not be immediately obvious whether an organisation must have a designated DPO under the GDPR. Where that is the case, it is necessary to make a formal evaluation, recording the decision and the reasons behind it. The WP29 Guidelines on Data Protection Officers (‘DPOs’), endorsed by the European Data Protection Board (EDPB), recommend that organisations conduct and document an internal analysis to determine whether or not a DPO should be appointed. Ultimately, such decision-making should always take into account the organisation’s obligation to fulfil the rights of the data subject, the primary concern of the GDPR: does the scale, volume or type of data processing in your organisation risk adversely affecting an individual or the wider public?

When a DPO is not legally required

Organisations may benefit from voluntarily appointing an internal DPO or hiring an advisor – this will support best-practice data protection policies and practices, improving cyber security, staff and consumer trust, and bringing other business benefits. When a DPO is designated voluntarily, the appointment is treated as if it were mandatory under the GDPR – i.e. the voluntarily appointed DPO’s responsibilities as defined in articles 37 and 39 of the GDPR will correspond to those of a legally mandated DPO (in other words, the GDPR does not recognise a quasi-DPO with reduced responsibility). As the guidance explains, “if an organisation is not legally required to designate a DPO, and does not wish to designate a DPO on a voluntary basis, that organisation is quite at liberty to employ staff or outside consultants to provide information and advice relating to the protection of personal data.”

However, it is important to ensure that there is no confusion regarding this person’s title, status, position and tasks. It should therefore be made clear, in any communications within the company, as well as with data protection authorities, data subjects and the public at large, that this individual or consultant does not hold the title of data protection officer (DPO).

But how are the conditions that make a DPO mandatory defined under GDPR?

Large-scale processing: there is no absolute definition under the GDPR, but there are evaluative guidelines. The WP29 guidance suggests data controllers should consider:

  • The number of data subjects concerned;
  • The volume of data processed;
  • The range of data items being processed;
  • The duration or permanence of the data processing activity; and
  • The geographical extent.

Regular and systematic monitoring: as with ‘large-scale processing,’ there is no definition as such, but WP29 guidance clarifies that monitoring involves any form of tracking or profiling on the internet, including for the purposes of behavioural advertising. Here are a number of examples of regular and systematic monitoring:

  • Data-driven marketing activities;
  • Profiling and scoring for purposes of risk assessment;
  • Email retargeting;
  • Location tracking (e.g. by mobile apps); or
  • Loyalty programmes.

 What does a Data Protection Officer do?

Article 39 of the GDPR, ‘Tasks of the data protection officer,’ lists the DPO’s obligations. It states that, as a minimum, a DPO’s responsibilities include the tasks summarised below:

  1. Inform and advise the controller or the processor, and the employees who carry out processing, of their obligations under the Regulation
  2. Monitor compliance with the Regulation, with other Union or Member State data protection provisions and with the policies of the controller or processor in relation to the protection of personal data, including the assignment of responsibilities, awareness-raising and training of staff involved in processing operations, and the related audits
  3. Provide advice, where requested, as regards data protection impact assessments and monitor their performance
  4. Cooperate with the supervisory authority
  5. Act as the contact point for the supervisory authority on issues relating to data processing

 

 Harry Smithson 2019