Monthly Archives: May 2019

Personal Data Protection Act (PDPA) comes into effect in Thailand after royal endorsement 

On 27th May, the Kingdom of Thailand’s first personal data protection law was published in the Government Gazette and made official. This comes three months after the National Legislative Assembly passed the Personal Data Protection Act in late February and submitted the act for royal endorsement.

While the law is now technically in effect, its main operative provisions, such as data subjects’ rights to consent and access, civil liabilities and penalties, will not take effect until a year after publication, i.e. May 2020. This gives data controllers and processors a one-year grace period to prepare for PDPA compliance, whose requirements and obligations were designed to be broadly equivalent to those of the EU’s General Data Protection Regulation (GDPR). The intention is that, within a year, Thai public bodies, businesses and other organisations will be able to demonstrate a level of data protection consistent with ‘adequate’ third-country status, supporting market activity and other transactions involving personal data between EU member states and Thailand. The GDPR places special restrictions on transfers of personal data outside the EEA, but the PDPA may prompt the European Commission to make an adequacy decision regarding Thailand.

It is uncommon in Thai law for provisions to have an ‘extraterritorial’ purview, but the new PDPA was designed to have this scope to protect Thai data subjects from the risks of data processing, particularly marketing or ‘targeting’ conducted by non-Thai agents offering goods or services in Thailand.

The PDPA contains many word-for-word translations of the GDPR, including the special category of ‘sensitive personal data’ and Subject Access Requests (SARs). Personal data itself is defined, along GDPR lines, as “information relating to a person which is identifiable, directly or indirectly, excluding the information of a dead person.”

The European Commission has so far recognised Andorra, Argentina, Canada (commercial organisations), the Faroe Islands, Guernsey, Israel, the Isle of Man, Jersey, New Zealand, Switzerland, Uruguay, the United States of America (limited to the Privacy Shield framework) and, most recently, Japan as third countries providing adequate data protection. Adequacy talks are currently under way with South Korea, and with Thailand’s latest measures, the southeast Asian nation may yet be brought into the fold.

Irrespective of Brexit, the Data Protection Act passed by Theresa May’s administration last year incorporates the GDPR into the UK’s regulatory framework. However, on exit the UK will become a so-called third country and will have to seek an adequacy decision of its own. The ICO will continue to be the UK’s supervisory authority.

Harry Smithson, 30 May 2019 

 

GDPR’s 1st Birthday

General Data Protection Regulation reaches its first birthday

This blogpost arrives as the General Data Protection Regulation (GDPR) reaches its first birthday, and a week after a report from the Washington-based Center for Data Innovation (CDI) suggested amendments to the GDPR.

The report argues that regulatory relaxations would help foster Europe’s ‘Algorithmic Economy,’ purporting that the GDPR’s restrictions on data sharing will set back European competitiveness in the AI technology field.

Citing the European Commission’s ambition “for Europe to become the world-leading region for developing and deploying cutting-edge, ethical and secure AI,” the report then proceeds to its central claim that “the GDPR, in its current form, puts Europe’s future competitiveness at risk.”

That being said, the report notes with approval France’s pro-AI strategy within the GDPR framework, in particular the country’s use of the clause that “grants them the authority to repurpose and share personal data in sectors that are strategic to the public interest—including health care, defense, the environment, and transport.”

Research is still being conducted into the legal and ethical dimensions of AI and the potential ramifications of automated decision-making for data subjects. In the UK, the ICO and the government’s recently established advisory body, the Centre for Data Ethics and Innovation (CDEI – not to be confused with the aforementioned CDI), are opening discussions and issuing calls for evidence regarding individuals’ and organisations’ experiences with AI. There are of course responsible ways of using AI, and organisations hoping to make the best of this new technology have the opportunity to shape the future of Europe’s innovative but ethical use of data.

The Information Commissioner’s Office (ICO) research fellows and technology policy advisors release a short brief on combating Artificial Intelligence (AI) security risks

The ICO’s relatively young AI auditing framework blog (set up in March this year) discusses security risks in its latest post, using comparative examples of data protection threats in traditional technology and in AI systems. The post focuses “on the way AI can adversely affect security by making known risks worse and more challenging to control.”

From a data protection perspective, the main issue with AI is its complexity and the volume not only of data but of externalities, or ‘external dependencies,’ that AI requires to function, particularly in the subfield of Machine Learning (ML). These externalities take the form of third-party or open-source software used to build ML systems, or of third-party consultants or suppliers who use their own or a partially externally dependent ML system. The ML systems themselves may have over a hundred external dependencies, including code libraries, software or even hardware, and their effectiveness will be determined by their interaction with multiple data sets from a huge variety of sources.

Organisations will typically not have contracts with all of these third parties, making data flows throughout the supply chain difficult to track and data security hard to keep on top of. AI developers come from a wide array of backgrounds, and there is no unified or coherent data protection policy within the AI engineering community.

The ICO’s AI auditing blog uses the example of an organisation hiring a recruitment company that uses Machine Learning to match candidate CVs to job vacancies. Previously, a certain amount of personal data would have been transferred between the organisation and the recruitment agency using manual methods. The additional steps in the ML system, however, mean that data will be stored and transferred in different formats across different servers and systems. The ICO concludes that, “for both the recruitment firm and employers, this will increase the risk of a data breach, including unauthorised processing, loss, destruction and damage.”

For example, they write: 

  • The employer may need to copy HR and recruitment data into a separate database system to interrogate and select the data relevant to the vacancies the recruitment firm is working on.
  • The selected data subsets will need to be saved and exported into files, and then transferred to the recruitment firm in compressed form.
  • Upon receipt the recruitment firm could upload the files to a remote location, eg the cloud.
  • Once in the cloud, the files may be loaded into a programming environment to be cleaned and used in building the AI system.
  • Once ready, the data is likely to be saved into a new file to be used at a later time.

This example will be relevant to all organisations contracting external ML services, which is the predominant route for UK businesses hoping to harness the benefits of AI. The blog provides three main pieces of advice based on ongoing research into this new, wide (and widening) area of data security. It suggests that organisations should:

  • Record and document the movement and storing of personal data, noting when the transfer took place, the sender and recipient, and the respective locations and formats. This will help monitor risks to security and data breaches;
  • Delete intermediate files, such as compressed versions, when they are no longer required – as per best-practice data protection guidelines; and
  • Use de-identification and anonymisation techniques and technologies before data are taken from the source and shared either internally or externally (a minimal illustration follows this list).
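
To make the first and third of these recommendations more concrete, below is a minimal, purely illustrative Python sketch. It pseudonymises direct identifiers with a keyed hash before a candidate extract is shared with an external supplier, and appends a record of the transfer (timestamp, sender, recipient, locations and format) to a log file. The column names, file paths and HMAC-based scheme are assumptions made for the example; they are not drawn from the ICO’s guidance.

# Illustrative sketch only: pseudonymise candidate records and log the transfer
# before sharing an extract with an external ML supplier. Column names, file
# paths and the keyed-hash scheme are assumptions for this example.
import csv
import hashlib
import hmac
import json
from datetime import datetime, timezone

SECRET_KEY = b"replace-with-a-securely-stored-key"  # held by the data controller only


def pseudonymise(value: str) -> str:
    # Replace a direct identifier with a keyed hash (HMAC-SHA256) so the
    # recipient cannot reverse it without the controller's key.
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()


def export_candidates(in_path: str, out_path: str, log_path: str, recipient: str) -> None:
    # Write a reduced, pseudonymised extract: direct identifiers are dropped
    # or hashed, and only the fields needed for matching are kept.
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=["candidate_id", "skills", "years_experience"])
        writer.writeheader()
        for row in reader:
            writer.writerow({
                "candidate_id": pseudonymise(row["email"]),
                "skills": row["skills"],
                "years_experience": row["years_experience"],
            })

    # Record the movement of personal data: when the transfer took place, the
    # sender and recipient, and the respective locations and formats.
    transfer_record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sender": "HR department",
        "recipient": recipient,
        "source_location": in_path,
        "destination_location": out_path,
        "format": "CSV (pseudonymised extract)",
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(transfer_record) + "\n")


if __name__ == "__main__":
    export_candidates("candidates.csv", "candidates_pseudonymised.csv",
                      "transfer_log.jsonl", recipient="Example recruitment firm")

Because the secret key stays with the data controller, the recipient cannot re-identify individuals from the extract alone, while the append-only transfer log supports the kind of monitoring for security risks and data breaches that the ICO recommends.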

Harry Smithson,  May 2019

What is a Data Protection Officer (DPO), and do you need one?

A DPO (Data Protection Officer) is an individual responsible for ensuring that their organisation processes the data of its staff, customers, providers and any other individuals (i.e. data subjects) in compliance with data protection regulations. Under the EU-wide General Data Protection Regulation (GDPR), a DPO is mandatory for:

  1. Public authorities; and
  2. Organisations that process data:
  • On a large scale;
  • With regular and systematic monitoring;
  • As a core activity; or
  • Involving large volumes of ‘special category data,’ formerly known as ‘sensitive personal data,’ i.e. information relating to a living individual’s racial or ethnic origin, political opinions, religious beliefs, trade union membership, physical or mental health condition, sex life or sexual orientation, or biometric data.

It may not be immediately obvious whether an organisation must have a designated DPO under GDPR. In that case, it is necessary to make a formal evaluation, recording the decision and the reasons behind it. The WP29 Guidelines on Data Protection Officers (‘DPOs’), endorsed by the European Data Protection Board (EDPB), recommend that organisations conduct and document an internal analysis to determine whether or not a DPO should be appointed. Ultimately, such decision-making should always take into account the organisation’s obligation to fulfil the rights of the data subject, the primary concern of the GDPR: does the scale, volume or type of data processing in your organisation risk adversely affecting an individual or the wider public?

Even if a DPO is not legally required, organisations may benefit from voluntarily appointing an internal DPO or hiring an advisor – this will ensure best-practice data protection policies and practices, improving cyber security, staff and consumer trust, and bringing other business benefits. When a DPO is designated voluntarily, the appointment is treated as mandatory under GDPR – i.e. the voluntarily appointed DPO’s responsibilities, as defined in articles 37 to 39 of the GDPR, correspond to those of a legally mandated DPO (in other words, GDPR does not recognise a quasi-DPO with reduced responsibility). As the WP29 guidance explains, if an organisation is not legally required to designate a DPO, and does not wish to designate a DPO on a voluntary basis, that organisation is quite at liberty to employ staff or outside consultants to provide information and advice relating to the protection of personal data.

However, it is important to ensure that there is no confusion regarding their title, status, position and tasks. Therefore, it should be made clear, in any communications within the company, as well as with data protection authorities, data subjects, and the public at large, that the title of this individual or consultant is not a data protection officer (DPO).

But how are the conditions that make a DPO mandatory defined under GDPR?

Large-scale processing: there is no absolute definition under GDPR, but there are evaluative guidelines. WP29 guidance suggests data controllers should consider:

  • The number of data subjects concerned;
  • The volume of data processed;
  • The range of data items being processed;
  • The duration or permanence of the data processing activity; and
  • The geographical extent.

Regular and systematic monitoring: as with ‘large-scale processing,’ there is no definition as such, but WP29 guidance clarifies that monitoring involves any form of tracking or profiling on the internet, including for the purposes of behavioural advertising. Here are a number of examples of regular and systematic monitoring:

  • Data-driven marketing activities;
  • Profiling and scoring for purposes of risk assessment;
  • Email retargeting;
  • Location tracking (e.g. by mobile apps); or
  • Loyalty programmes.

 What does a Data Protection Officer do?

Article 39 of the GDPR, ‘Tasks of the data protection officer,’ lists and explains the DPO’s obligations. At a minimum, a DPO’s responsibilities comprise the tasks summarised below:

  1. Inform and advise the controller or the processor, and the employees who carry out processing, of their data protection obligations;
  2. Monitor compliance with the Regulation, with other Union or Member State data protection provisions and with the policies of the controller or processor in relation to the protection of personal data, including the assignment of responsibilities, awareness-raising and training of staff involved in processing operations, and the related audits;
  3. Provide advice as regards data protection impact assessments and monitor their performance;
  4. Cooperate with the supervisory authority; and
  5. Act as the contact point for the supervisory authority on issues relating to data processing.

 

 Harry Smithson 2019

 

HMRC’s 28 days to delete unlawfully obtained biometric data

In a statement released on 3rd May, the Information Commissioner’s Office reiterated its decision to issue HMRC a preliminary enforcement notice in early April. This initial notice was based on an investigation conducted by the ICO after a complaint from Big Brother Watch concerning HMRC’s Voice ID service, which has run on a number of the department’s helplines since January 2017.


The voice authentication used for customer verification relies on a type of biometric data considered special category information under the GDPR, and is therefore subject to stricter conditions. The ICO’s investigation found that HMRC did “not give customers sufficient information about how their biometric data would be processed and failed to give them the chance to give or withhold consent.” HMRC was therefore in breach of GDPR.

The preliminary enforcement notice issued by the ICO on April 4th stated that HMRC must delete all data within the Voice ID system for which the department was never given explicit consent to have or use. According to Big Brother Watch, this data amounted to approximately five million records of customers’ voices. These records would have been obtained on HMRC’s helplines, but due to poor data security policy for the Voice ID system, the customers had no means of explicitly consenting to HMRC’s processing of this data.

Steve Wood, Deputy Commissioner at the ICO, stated, “We welcome HMRC’s prompt action to begin deleting personal data that it obtained unlawfully. Our investigation exposed a significant breach of data protection law – HMRC appears to have given little or no consideration to it with regard to its Voice ID service.”

The final enforcement notice is expected on 10th May. This will give HMRC a twenty-eight-day timeframe to complete the deletion of this large compilation of biometric data.

The director of Big Brother Watch, Silkie Carlo, was encouraged by the ICO’s actions:

“To our knowledge, this is the biggest ever deletion of biometric IDs from a state-held database. This sets a vital precedent for biometrics collection and the database state, showing that campaigners and the ICO have real teeth and no government department is above the law.”

 Harry Smithson, May 2019. 

Be Data Aware: the ICO’s campaign to improve data awareness

As the Information Commissioner’s Office’s ongoing investigation into the political weaponisation of data analytics and harvesting sheds more and more light on the reckless use of ‘algorithms, analysis, data matching and profiling’ involving personal information, consumers are becoming more data conscious. On 8th May, the ICO launched an awareness campaign featuring a video, legal factsheets reminding citizens of their rights under GDPR, and advice on online behaviour. The campaign is currently circulating on Twitter under the hashtag #BeDataAware.


While the public is broadly aware of targeted marketing, and fairly accustomed to companies attempting to reach certain demographics, the political manipulation of data is considered, if not a novel threat, then a problem compounded by the new frontier of online data analytics. Ipsos MORI’s UK Cyber Survey, conducted on behalf of the DCMS, found that 80% of respondents considered cyber security to be a ‘high priority,’ but that many of these people were not in groups likely to take much personal action to prevent cybercrime. This could indicate that while consumers may be concerned about cybercrime being used against them, they are also aware of the broader social, economic and political dangers posed by the inappropriate or illegal use of personal information.

ICO’s video (currently on Vimeo, but not YouTube), titled ‘Your Data Matters,’ asks at the beginning, “When you search for a holiday, do you notice online adverts become much more specific?” Proceeding to graphics detailing this relatively well-known phenomenon, the video then draws a parallel with political targeting: “Did you know political campaigners use these same targeting techniques, personalising their campaign messaging to you, trying to influence your vote?” Importantly, the video concludes, “You have the right to know who is targeting you and how your data is used.”

To take a major example of an organisation trying to facilitate this right, Facebook allows users to see why they may have been targeted by an advert with a clickable, dropdown option called ‘Why am I seeing this?’ Typically, the answer will read ‘[Company] is trying to reach [gender] between the ages X – Y in [Country].’ But the question remains as to whether this will be sufficiently detailed in the future. With growing pressure on organisations to pursue best practice when it comes to data security, and with the public’s growing perception of the political ramifications of data security policies, will consumers and concerned parties demand more information on, for instance, which of their online behaviours have caused them to be targeted?

A statement from the Information Commissioner Elizabeth Denham as part of the Be Data Aware campaign has placed the ICO’s data security purview firmly in the context of upholding democratic values.

“Our goal is to effect change and ensure confidence in our democratic system. And that can only happen if people are fully aware of how organisations are using their data, particularly if it happens behind the scenes.

“New technologies and data analytics provide persuasive tools that allow campaigners to connect with voters and target messages directly at them based on their likes, swipes and posts. But this cannot be at the expense of transparency, fairness and compliance with the law.”

Uproar surrounding the data analytics scandal, epitomised by Cambridge Analytica’s data breach beginning in 2014, highlights the public’s increasing impatience with the reckless use of data. The politicisation of cybercrime, and greater knowledge and understanding of data misuse, means that consumers will be far less forgiving of companies that are not seen to be taking information security seriously.

Harry Smithson 9 May 2019