Category Archives: Uncategorized

Lessons from Darts: Team Dynamics in Data Protection

Teams are an essential part of life – from school to adulthood, from sports to business.  A well-functioning team leads to extraordinary achievements, whether in a local darts league or a data governance team.

The Darts Team Triumph

Consider my local darts team, which recently won the team title, along with individual singles titles. This victory wasn't just about individual knowledge and talent; it was the result of shared goals, a strategy to achieve them, collaboration, strong mentoring, and mutual support. Our players' unique skills, camaraderie and collective effort all contributed to the team's overall success.

Transferring Team Dynamics to Data Governance

The same principles apply to data protection governance teams. Every member of the team must understand its overall objectives and be responsible and accountable for data management and governance. The team needs a framework for success, including communication and collaboration, and must create and maintain policies and procedures around data collection, privacy, compliance, integrity and security. And it must provide regular reports to senior management, who are ultimately accountable.

Roles, Goals and Data Stewardship

Individuals within the team will take on data stewardship roles. In essence, they will oversee the entire lifecycle of personal data from collection to deletion, and be accountable for compliance and security at every stage. All team members will support each other, sharing knowledge and expertise to help manage challenges and foster a culture of continuous improvement. And each will have their own areas of responsibility, including embedding data protection throughout their own part of the business.

Education and Continuous Improvement

As in darts, governance team members learn from each other's techniques and share knowledge, best practice and insights. This knowledge is then used to build awareness throughout the organisation about data protection and data security, and to educate employees about crucial data protection principles.

Risk Management

Sports and business both carry risks, and the team must take responsibility for identifying, assessing and mitigating them – in data governance, for example, through Data Protection Impact Assessments (DPIAs). The team must also develop and maintain response plans so that it knows how to act if there is a data breach or security incident.

Enabling Team Leaders

Team Leaders are crucial. They are pivotal in cascading information down to their specific areas of the business – in data governance, for example, it helps to have leaders from IT, HR, Marketing, Operations, Payroll and so on. It is those Team Leaders who ensure that everyone in their team understands their roles and responsibilities, and who provide the resources and training that let every individual in an organisation thrive and contribute effectively.

Conclusion

Effective teams enable the individuals in your organisation to achieve more together than they ever could alone. With a data governance team that fosters collaboration, shared problem-solving and continuous education, your organisation will benefit from strong and highly successful outcomes.

Data Compliant International

If you would like help or assistance with any of your data protection obligations, please email dc@datacompliant.co.uk or call 01787 277742.  And for more information about how to meet your Accountability and Governance obligations, please see here. 

Data Protection and Privacy Impacts of the New UK Data (Use and Access) Bill

Background

On Wednesday 23 October 2024, the UK Government published its Data (Use and Access) Bill ("DUA"). It promised to "harness the enormous power of data to boost the UK economy by £10 billion" and "unlock the secure and effective use of data for the public interest".

The DUA mirrors many of the concepts and provisions of the previous Government's abandoned Data Protection and Digital Information Bill ("DPDI"), though there are subtle changes. The DUA appears to place a greater focus on data sharing and digital services.

It is worth noting that the EU is set to review the UK's data transfer adequacy status in mid-2025, and maintaining adequacy status is vital to the UK. Possibly as a result, some of the more contentious provisions of the discarded DPDI have been dropped from the DUA.

With the mid-2025 adequacy review date in mind, the government will undoubtedly try to get the Bill through as quickly as possible. After two readings in the House of Lords, it is now at Committee Stage.

DUA – Key Points for organisations

The key points of the DUA are:

  • UK Adequacy Status: As stated above, the EU is reviewing the UK's adequacy status in mid-2025.
  • Accountability requirements: In the DPDI, there were plans to amend and simplify the accountability obligations required under GDPR. These have NOT been carried over into the DUA. Specifically, there are to be no changes to:
    • the requirements for a DPO
    • requirements for Records of Processing Activities
    • requirements for Data Protection Impact Assessments.
  • ICO Reform: The Information Commissioner's Office will be replaced by a new corporate body called the Information Commission. Executive members will be appointed and scrutinised by the Chair and non-executive members. The Commission will be required to consider public interest factors around data protection – for example, the desirability of promoting innovation and competition. There is also an emphasis on protecting children in relation to data processing.
  • Special Category Data: the Secretary of State has the power to add new special categories of data (and to remove any so added), but the categories that already exist in Article 9 may not be removed.
  • Data Subject Access Requests (DSARs): The discarded DPDI included the concept of an exception for "vexatious" requests. This has NOT been included in the DUA. However, proportionality is a key consideration in the DUA, which makes responding to DSARs more straightforward, including by confirming that a DSAR search for personal data need only be "reasonable and proportionate".
    • The 30-day time period to complete a DSAR begins only after the organisation has confirmed the individual's identity.
    • The DUA also helps businesses by turning common DSAR practices, based on ICO guidance, into law, which offers certainty for organisations. For example:
      • If an organisation holds large amounts of information about the data subject, it may ask the subject to narrow down the information requested.
      • While it waits for that clarification, it may pause the response clock (a worked timing sketch follows this list).
  • Legitimate Interests: there is a new concept of recognised legitimate interests, where certain data processing activities will not require a full Legitimate Interest Assessment (LIA) – for example:
    • safeguarding national security or public safety
    • responding to an emergency
    • crime prevention / investigation
    • public health
    • exercising data subject rights, regulatory functions or civil law claims. 
  • This list can be updated over time, subject to parliamentary approval.
  • It is worth noting that the European Court of Justice has consistently ruled that any lawful interest may be a legitimate interest – i.e. that a purely commercial interest can be a legitimate interest.
  • In addition, when conducting an LIA, it is acceptable to take into account not only the benefits to individuals, but also benefits to the environment (e.g. paper reduction) and the economy (e.g. generating growth and spending budgets in a targeted manner).
  • Privacy and Electronic Communications Regulations: PECR is included in the DUA, and fines for PECR breaches are therefore aligned with the levels available for GDPR breaches – a massive increase from the current £500,000 maximum. In addition, the DPDI's email soft opt-in for non-commercial organisations (such as charities) is NOT currently included (though lobbying is ongoing).
  • Cookie Consent Exemptions: The aim is to reduce the number of cookie consent banners. The DUA allows the use of cookies without consent in specific circumstances, such as ensuring security or preventing fraud, collecting statistical information for the organisation's own use, improving website functionality and appearance for the user, and providing emergency assistance. This is particularly beneficial to parties who do not use advertising cookies – for example, B2B websites.
  • Digital Verification Services: DUA aims to create a framework for trusted online identity verification services, moving away from paper-based and in-person tasks (e.g. registering births and deaths online). Companies providing digital verification tools must be certified against government standards and will receive a ‘trust mark’.
  • Smart Data Schemes: The introduction of smart data schemes will require businesses in sectors like financial services and public utilities to enable data interoperability and secure data sharing. This aims to enhance consumer confidence and drive innovation.
  • Data Access Provisions: The DUA introduces data access standards similar to the EU’s Data Governance Act, enabling controlled data sharing between businesses and public authorities. 
  • Automated Decision Making: The DUA will make it easier for organisations to adopt a broader use of automated decision-making for low-risk, beneficial processing – for example when using artificial intelligence (AI) systems. It limits the scope of the UK’s GDPR Article 22 to cover only “significant” decisions, and those based either in part or entirely on special category data. 
  • Data Transfers: the DUA replaces Chapter 5 of the UK GDPR with a new “data protection test” for the Secretary of State to consider international data transfers, in which the objective is to ensure standards are not materially lower than in the UK.  This differs from the EU approach which looks for equivalence.
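To illustrate the DSAR timing point above, here is a minimal sketch in Python of how the response deadline might be computed under the approach the DUA codifies. It assumes the 30-day period described above; the function and parameter names (dsar_deadline, clarification_sent and so on) are hypothetical, chosen purely for illustration.

```python
from datetime import date, timedelta
from typing import Optional

def dsar_deadline(identity_confirmed: date,
                  clarification_sent: Optional[date] = None,
                  clarification_received: Optional[date] = None,
                  period_days: int = 30) -> date:
    """Estimate the deadline for completing a DSAR.

    The clock starts once the individual's identity is confirmed, and is
    paused between asking the data subject to narrow the request and
    receiving their reply.
    """
    deadline = identity_confirmed + timedelta(days=period_days)
    if clarification_sent and clarification_received:
        # The pause extends the deadline by the length of the wait.
        deadline += clarification_received - clarification_sent
    return deadline

# Identity confirmed 1 March 2025; the organisation asks the data subject to
# narrow the request on 5 March and receives the answer on 12 March.
print(dsar_deadline(date(2025, 3, 1), date(2025, 3, 5), date(2025, 3, 12)))
# -> 2025-04-07 (the 7-day pause extends the original 31 March deadline)
```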

Timetable

With the upcoming adequacy review in mind, it seems likely that the government will try to get the Bill through as quickly as possible – it has already had two readings in the House of Lords and is currently at Committee Stage there.

Data Compliant

If you would like help or assistance with any of your data protection obligations, please email dc@datacompliant.co.uk or call 01787 277742. And for more information about how to meet your accountability and governance obligations – both before and after the DUA comes into force – please see here.

Victoria Tuffill

18th December 2024

Massive €290 Million Uber fine for EU-US data transfers

Last week the Dutch Data Protection Authority fined Uber €290 million for transferring personal data from EU to US servers without adequate protections – one of the largest fines seen to date under GDPR.

According to the Dutch DPA, Uber collected sensitive information (e.g. account details, taxi licences, location data, photos, identity documents and some criminal and medical data) from its EU drivers and stored it on servers in the US for two years without protective transfer tools. The case arose from 170 complaints by French drivers; although the complaints were French, the Dutch DPA issued the fine because Uber's European head office is in the Netherlands.

When did the breach happen?

The two-year period fell between the invalidation of the Privacy Shield and the entry into force of the Data Privacy Framework. According to the DPA, Uber stopped using SCCs in August 2021, so it found that the data of EU drivers had not been adequately protected from that point. Of course, now that the Data Privacy Framework is in force, there is no ongoing breach.

How could it have been avoided?

Uber could have used Standard Contractual Clauses to transfer its personal data to the US.

What does this mean for others?

This shows how significant data transfer mechanisms and risk assessments are. Uber intends to appeal the fine, and the outcome of the appeal will be of interest to many businesses. The 2020 EU Court ruling that invalidated Privacy Shield left a great deal of legal uncertainty over how to continue the data flows that were already in place, and there was very limited help or guidance after the invalidation. It took until 2023 – three years – for the Data Privacy Framework to be established. Many companies will have been slow to build – or failed to build – Standard Contractual Clauses into their contracts, and will be concerned by the retroactive nature of this fine.

Victoria Tuffill – 2nd September 2024

If you have any questions or concerns about data protection, please call 01787 277742 or email dc@datacompliant.co.uk

And please take a look at our services.

Phishing / Vishing – if you’re tricked … report it fast

In today’s digital age, cybersecurity is critical to every organisation’s operations. One of the most common causes of data breaches is when attackers trick you into giving them personal information they should not have.

Vishing and Phishing tricks

For example, through vishing attacks, where somebody calls you pretending to be from a company you work with and asks for what seems perfectly reasonable information. You provide it.

Or you click on a link in a phishing email because it looks as though it has come from someone you know.  It takes you to a web page asking for information. You provide it. 

In both cases, you have provided the attacker with information that should never have been given to them.

What to do next?

These social engineering attempts are becoming increasingly sophisticated, making them hard to detect.  And it’s too easy to fall into their trap.

Remember, if you DO click on a malicious link, or enter your password details on a site that looks suspicious … don't just pretend it didn't happen. It is crucial that the error is reported immediately to your line manager or IT department.

Why should I own up to causing a data breach?

Because if you do so immediately, the breach can be prevented.  Here’s an example from one of our clients. 

One of its employees received an email from a contact he had previously communicated with, apparently via a cloud-sharing platform he would normally access, and at first glance it all looked fine – until he clicked on a malicious link, entered his credentials and authenticated using Multi-Factor Authentication (MFA). He realised his mistake and reported it to IT immediately.


How did reporting it help?

Thanks to the swift reporting and decisive actions taken by the IT team, any potential security breach was thwarted before it could escalate into a more serious incident.  There was no breach of confidentiality, inappropriate access, or unavailability of the company’s personal data.

Organisations can prevent potential security breaches and protect their valuable assets by acting swiftly and decisively.  Fast action can make all the difference between a minor incident and a significant data breach.

What if I don’t own up?

Well, let’s imagine it. Those login attempts would, without doubt, have succeeded.  All the personal data to which that employee had access would be in the hands of the attacker. The company would be facing a severe data breach situation.  They’d be using time and resources to investigate.  They’d probably be calling in expert help to mitigate the damage.  The cost, both in time and money, would be substantial, as they considered:

  • How much damage have I done to my customers?
  • How can I contain the damage?
  • Will I be sued?
  • How many customers will I lose?
  • Who can I use – internally and/or externally –  to help me work through this?
  • How much will I have to spend on expert advice and support through the process?
  • How will I pay for it?
  • Is the breach severe enough to warrant reporting to the supervisory authority – if so, how can I do so within 72 hours?
  • How can I save us from the corresponding reputational damage?
  • How much will I be fined, and what can I do to try to reduce the amount?

But because this issue was reported without delay … none of the above was necessary.

Employers must encourage employees to report errors

Employers play a crucial role in fostering a culture of reporting errors. It is important to create an environment where employees feel safe to admit their mistakes and are encouraged to report any suspicious activity.

So, as an employer, it is your job to make sure your staff know that they must report any errors of this kind.  Instead of making them scared to report their mistake, offer them help – for example, by providing additional or repeat training to help them recognise such tricks in future. 

Train your workers to stay vigilant, stay informed, and promptly report any suspicious activity to your IT team.  This will also have the advantage of fostering a culture of vigilance, awareness and honesty.  It helps everyone in the organisation understand the importance of information security and how to safeguard personal data.

If you're in any doubt, re-read the story above, and stop to think about the potential damage that would have ensued if the individual concerned had not reported his error. The attackers would – without doubt – have logged in successfully and met their aims, to the detriment of your company and your customers.

Victoria Tuffill

20th August 2024

If you have any questions or concerns with data security and increasing staff awareness, please call 01787 277742 or email dc@datacompliant.co.uk

And please take a look at our services.

EU AI Act 2024: Key Obligations for Providers and Deployers

Summary

The EU AI Act, which came into force on August 1st, 2024, introduces a risk-based legal framework for AI systems, categorising them into different risk levels with corresponding levels of obligation. Some uses of AI are banned outright, for example social scoring by governments or subliminal manipulation of people.

Similar to the GDPR, the Act also emphasises transparency and accountability, and highlights the importance of data governance, data quality, and data protection.

This document summarises the key obligations of those subject to the EU AI Act who provide or deploy AI. AI systems and models already on the market may be exempt or have longer compliance deadlines.

Territorial Scope of the AI Act

Territorially, the AI Act applies to:

  • deployers that have their place of establishment, or are located, within the EU;
  • providers – whether or not they are established or located within the EU – that place AI systems on the market or put them into service in the EU;
  • those who import or distribute AI systems in the EU;
  • product manufacturers who place or put into service AI systems in the EU under their own name or trademark;
  • providers and deployers of AI systems where the output of the AI system is used in the EU, regardless of where they are established or located; and
  • persons located in the EU who are affected by the use of an AI system.

Risk-Based Classification

The AI Act introduces a risk-based legal framework based on the specific AI system’s use and the associated risks. It establishes four risk-based types of AI systems:

  • Unacceptable risk – prohibited AI systems
  • High-risk AI systems
  • Limited risk AI systems
  • Minimal risk AI systems

The obligations of the AI Act are mainly focused on providers and deployers of AI systems, and a summary of the risk ratings and obligations of providers and deployers when using high, limited/minimal risk and general-purpose AI systems is shown below:

HIGH RISK: For example: critical infrastructure, hiring processes, employee management, educational and vocational training opportunities and outcomes, credit scoring, emotion recognition, prioritisation of emergency first response services, risk assessment and pricing of life and health insurance, some areas of law enforcement, migration, asylum/border management, and the administration of justice and democratic processes.

Providers must:

  • register the system in an EU database
  • establish, document and maintain an appropriate AI risk management system
  • implement effective data governance for training, testing and validation data – including bias mitigation
  • draft and maintain AI system technical documentation
  • ensure record keeping, logging and tracing obligations are met
  • put in place robust cyber security controls
  • ensure the system is fair and transparent
  • enable effective human oversight over the system
  • in some instances, complete a Conformity Assessment before releasing the system
  • affix the CE marking to the AI system and include the provider’s contact information on the AI system, packaging or accompanying documentation

Deployers must:

  • ensure a competent person with appropriate training, authority and support provides oversight of the system
  • ensure that data input is relevant and sufficiently representative to meet the purpose
  • inform impacted individuals where a system will make or assist in making decisions about them
  • where AI will impact workers, inform them and their representatives that they are subject to a high-risk AI system
  • conduct a fundamental rights impact assessment – e.g. where credit scoring or pricing of life or health insurance is involved
  • where AI results in a legal (or similar) effect, provide a clear, comprehensible explanation of the role of the AI system in the decision-making process, along with the main elements of the decision
  • develop an ethical AI Code of Conduct

LIMITED OR MINIMAL RISK: for example, AI systems with specific transparency risks or an ability to mislead; AI systems intended to interact directly with individuals; AI systems generating synthetic image, audio, video or text content; emotion recognition systems; biometric categorisation systems; deep fake systems; and systems that generate or manipulate public-interest information texts made available to the public.

Providers and Deployers must:

  • Be transparent – users must be aware that they are interacting with a machine
  • Ensure that any AI-generated content is detectable as artificially generated and / or manipulated
  • Comply with any labelling obligations
  • Develop an ethical AI Code of Conduct (recommended)

GENERAL PURPOSE AI SYSTEMS, including AI systems that can perform general functions such as image/speech recognition, audio/visual generation, pattern detection, question answering, translation, chatbots, spam filters, and so on.

Providers must:

  • draft and maintain AI system technical documentation
  • provide information to those who intend to integrate the AI model into their own AI systems
  • implement a policy to comply with EU law on copyright and related rights
  • document and make public a meaningful summary of the content used for training the general-purpose AI model

Fines

Fines for non-compliance range from €7.5 million (or 1% of global annual turnover) up to €35 million (or 7% of global annual turnover) – in each case whichever is higher – depending on the severity of the infringement. A worked example follows the table below.

Penalty / fine – Compliance issue

  • €35 million or 7% of total worldwide annual turnover for the preceding year, whichever is higher – non-compliance with the rules on prohibited AI systems
  • €15 million or 3% of total worldwide annual turnover for the preceding year, whichever is higher – non-compliance with most other obligations under the AI Act, including non-compliance by providers of general-purpose AI models
  • €7.5 million or 1% of total worldwide annual turnover for the preceding year, whichever is higher – supply of incorrect, incomplete or misleading information
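As a worked illustration of the "whichever is higher" rule above, here is a minimal sketch in Python (the function name max_fine is ours, purely for illustration):

```python
def max_fine(flat_cap_eur: int, pct: float, worldwide_turnover_eur: int) -> float:
    """AI Act penalties take the flat cap or the turnover percentage, whichever is higher."""
    return max(flat_cap_eur, pct * worldwide_turnover_eur)

# A provider of a prohibited AI system with €1 billion worldwide annual turnover:
print(max_fine(35_000_000, 0.07, 1_000_000_000))  # -> 70000000.0 (7% exceeds €35m)
```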

If you have any questions or concerns with how you can use AI in compliance with GDPR or other data protection legislation, please call +44 1787 277742 or email dc@datacompliant.co.uk

And please take a look at our services.

Victoria Tuffill – 12th August 2024

Belgian Data Protection Authority’s first GDPR fine imposed on public official

The Belgian DPA delivered a strong message on 28th May 2019 that data protection is "everyone's concern" and everyone's responsibility, applying the GDPR's sanctioning provisions in Belgium for the first time with a fine of €2,000 imposed on a mayor ('bourgmestre') for the unlawful use of personal data.

Purpose Limitation was Breached 

The mayor in question used personal data obtained for the purposes of mayoral operations in an election campaign, in breach of GDPR – particularly the purpose limitation principle, which states that data controllers and/or processors must only collect personal data for a specific, explicit and legitimate purpose. Given the fairly moderate fine, the data the mayor obtained is unlikely to have contained special category data (formerly known in the UK as sensitive personal data). The Belgian DPA also looked at other factors when deciding on the severity of the sanction, including the limited number of affected data subjects, and the nature, gravity and duration of the infringement.

‘Not Compatible with Initial Purpose’ 

The Belgian DPA received a complaint from the affected data subjects themselves, whose consent to the processing of their data was based on the assumption that it would be used appropriately – in this case, for administrative mayoral duties. The plaintiffs and the defendant were heard by the DPA's Litigation Chamber, which concluded, along GDPR lines, that 'the personal data initially collected was not compatible with the purpose for which the data was further used by the mayor.'

The decision was signed off by the relatively new Belgian commissioner, David Stevens, as well as the Director of the Litigation Chamber, a Data Protection Authority chamber independent from the rest of the Belgian judicial system. 

Harry Smithson, 3rd June 2019

Personal Data Protection Act (PDPA) comes into effect in Thailand after royal endorsement 

On 27th May, the Kingdom of Thailand’s first personal data protection law was published in the Government Gazette and made official. This comes three months after the National Legislative Assembly passed the Personal Data Protection Act in late February and submitted the act for royal endorsement.

While the law is now technically in effect, its main 'operative provisions' – such as data subjects' rights to consent and access requests, civil liabilities and penalties – will not come into proper effect until a year after publication, i.e. May 2020. This gives data controllers and processors a grace period of one year to prepare for PDPA compliance, whose requirements and obligations were designed to be demonstrably equivalent to the EU's General Data Protection Regulation (GDPR). The GDPR places special restrictions on the transfer of data outside the EEA, but within a year Thai public bodies, businesses and other organisations may be well placed for 'adequate' third-country status, and the PDPA may prompt the EU to make an adequacy decision regarding Thailand, allowing market activity and various other transactions involving proportionate data processing between EU member states and Thailand.

It is uncommon in Thai law for provisions to have an ‘extraterritorial’ purview, but the new PDPA was designed to have this scope to protect Thai data subjects from the risks of data processing, particularly marketing or ‘targeting’ conducted by non-Thai agents offering goods or services in Thailand.

The PDPA contains many word-for-word translations of the GDPR, including the special category of ‘sensitive personal data’ and Subject Access Requests (SARs). Personal data itself is defined, along GDPR lines, as “information relating to a person which is identifiable, directly or indirectly, excluding the information of a dead person.”

The European Commission has so far recognised Andorra, Argentina, Canada (commercial organisations), the Faroe Islands, Guernsey, Israel, the Isle of Man, Jersey, New Zealand, Switzerland, Uruguay, the United States of America (limited to the Privacy Shield framework) and, most recently, Japan as third countries providing adequate data protection. An adequacy agreement is currently being sought with South Korea, and with Thailand's latest measures it may be that the southeast Asian nation will also be brought into the fold.

Irrespective of Brexit, the Data Protection Act passed by Theresa May's administration last year enshrines the GDPR within the UK's regulatory framework. However, on exit the UK will become a so-called third country and will have to seek an adequacy decision of its own. The ICO will continue to be the UK Supervisory Authority.

Harry Smithson, 30 May 2019 


GDPR’s 1st Birthday

General Data Protection Regulation reaches its first birthday

This blogpost arrives as the General Data Protection Regulation (GDPR) reaches its first birthday, and a week after a report from the Washington-based Center for Data Innovation (CDI) suggested amendments to the GDPR.

The report argues that regulatory relaxations would help foster Europe's 'Algorithmic Economy', claiming that GDPR's restrictions on data sharing will set back European competitiveness in the AI technology field.

Citing the European Commission’s ambition “for Europe to become the world-leading region for developing and deploying cutting-edge, ethical and secure AI,” the report then proceeds to its central claim that “the GDPR, in its current form, puts Europe’s future competitiveness at risk.”

That being said, the report notes with approval France’s pro-AI strategy within the GDPR framework, in particular the country’s use of the clause that “grants them the authority to repurpose and share personal data in sectors that are strategic to the public interest—including health care, defense, the environment, and transport.”

Research is still being conducted into the legal and ethical dimensions of AI and the potential ramifications of automated decision-making on data subjects. In the UK, the ICO and the government’s recent advisory board, the Centre for Data Ethics and Innovation (CDEI – not to be confused with aforementioned CDI), are opening discussions and conducting call-outs for evidence regarding individuals’ or organisations’ experiences with AI. There are of course responsible ways of using AI, and organisations hoping to make the best of this new technology have the opportunity to shape the future of Europe’s innovative, but ethical use of data.

The Information Commissioner’s Office (ICO) research fellows and technology policy advisors release short brief for combatting Artificial Intelligence (AI) security risks

The ICO's relatively young AI auditing framework blog (set up in March this year) discusses security risks in its latest post, using comparative examples of data protection threats in traditional technology and in AI systems. The post focuses "on the way AI can adversely affect security by making known risks worse and more challenging to control."

From a data protection perspective, the main issue with AI is its complexity and the volume not only of data, but of externalities or 'external dependencies' that AI requires to function, particularly in the AI subfield of Machine Learning (ML). Externalities take the form of third-party or open-source software used for building ML systems, or third-party consultants or suppliers who use their own or a partially externally dependent ML system. The ML systems themselves may have over a hundred external dependencies, including code libraries, software or even hardware, and their effectiveness will be determined by their interaction with multiple data sets from a huge variety of sources.

Organisations will not have contracts with these third parties, making data flows throughout the supply chain difficult to track and data security hard to keep on top of. AI developers come from a wide array of backgrounds, and there is no unified or coherent policy for data protection within the AI engineering community.

The ICO's AI auditing blog uses the example of an organisation hiring a recruitment company that uses Machine Learning to match candidate CVs to job vacancies. A certain amount of personal data would always have been transferred between the organisation and the recruitment agency, even using manual methods. However, the additional steps in the ML system mean that data will be stored and transferred in different formats across different servers and systems. The blog concludes that, "for both the recruitment firm and employers, this will increase the risk of a data breach, including unauthorised processing, loss, destruction and damage."

For example, they write: 

  • The employer may need to copy HR and recruitment data into a separate database system to interrogate and select the data relevant to the vacancies the recruitment firm is working on.
  • The selected data subsets will need to be saved and exported into files, and then transferred to the recruitment firm in compressed form.
  • Upon receipt the recruitment firm could upload the files to a remote location, eg the cloud.
  • Once in the cloud, the files may be loaded into a programming environment to be cleaned and used in building the AI system.
  • Once ready, the data is likely to be saved into a new file to be used at a later time.

This example will be relevant to all organisations contracting external ML services – the predominant route for UK businesses hoping to harness the benefits of AI. The blog provides three main pieces of advice based on ongoing research into this new, wide (and widening) area of data security. It suggests that organisations should:

  • Record and document the movement and storing of personal data, noting when the transfer took place, the sender and recipient, and the respective locations and formats – this will help monitor risks to security and data breaches;
  • Delete intermediate files, such as compressed versions, as soon as they are no longer required – as per best-practice data protection guidelines; and
  • Use de-identification and anonymisation techniques and technologies before data are taken from the source and shared, either internally or externally.
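To make the first of those suggestions concrete, here is a minimal sketch in Python of the kind of append-only transfer log an organisation might keep. The structure and field names (TransferRecord, log_transfer and so on) are hypothetical – an illustration of the ICO's advice, not a prescribed format.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class TransferRecord:
    """One movement of personal data, recorded as the ICO blog suggests."""
    sender: str                 # originating party, e.g. "employer HR database"
    recipient: str              # receiving party, e.g. "recruitment firm"
    source_location: str        # where the data came from
    destination_location: str   # where the data ended up
    data_format: str            # e.g. "CSV (compressed)"
    timestamp: str = ""         # when the transfer took place (UTC, ISO 8601)

    def __post_init__(self) -> None:
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def log_transfer(record: TransferRecord, logfile: str = "transfer_log.jsonl") -> None:
    """Append the record to an append-only JSON Lines audit log."""
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: the employer exports selected CVs to the recruitment firm.
log_transfer(TransferRecord(
    sender="employer HR database",
    recipient="recruitment firm",
    source_location="on-premises HR system",
    destination_location="recruitment firm SFTP server",
    data_format="CSV (compressed)",
))
```

Even a simple log like this answers the questions the ICO raises – when a transfer took place, who sent and received it, and in what format and location the data now sits.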

Harry Smithson,  May 2019

What is a Data Protection Officer (DPO), and do you need one?

A DPO (Data Protection Officer) is an individual responsible for ensuring that their organisation processes the data of its staff, customers, providers and any other individuals (i.e. data subjects) in compliance with data protection regulations. Under the EU-wide General Data Protection Regulation (GDPR), a DPO is mandatory for:

  1. Public authorities; and
  2. Organisations that process data
  • On a large scale; 
  • With regular and systematic monitoring; 
  • As a core activity; or 
  • In large volumes of 'special category data' – formerly known as 'sensitive personal data' – i.e. information related to a living individual's racial or ethnic origin, political opinions, religious beliefs, trade union membership, physical or mental health condition, sex life or sexual orientation, or biometric data.

It may not be immediately obvious whether an organisation must have a designated DPO under GDPR. Where it is unclear, it is necessary to make a formal evaluation, recording the decision and the reasons behind it. The WP29 Guidelines on Data Protection Officers ('DPO'), endorsed by the European Data Protection Board (EDPB), recommend that organisations conduct and document an internal analysis to determine whether or not a DPO should be appointed. Ultimately, such decision-making should always take into account the organisation's obligation to fulfil the rights of the data subject, the primary concern of the GDPR: does the scale, volume or type of data processing in your organisation risk adversely affecting an individual or the wider public?

Even if a DPO is not legally required, organisations may benefit from voluntarily appointing an internal DPO or hiring an advisor – this will help ensure best-practice data protection policies and practices, improving cyber security, staff and consumer trust, and delivering other business benefits. When a DPO is designated voluntarily, the appointment is treated as if it were mandatory under GDPR – i.e. the voluntarily appointed DPO's responsibilities as defined in Articles 37 to 39 of the GDPR correspond to those of a legally mandated DPO (in other words, GDPR does not recognise a quasi-DPO with reduced responsibility). As the WP29 guidance explains, "if an organisation is not legally required to designate a DPO, and does not wish to designate a DPO on a voluntary basis, that organisation is quite at liberty to employ staff or outside consultants to provide information and advice relating to the protection of personal data.

However, it is important to ensure that there is no confusion regarding their title, status, position and tasks. Therefore, it should be made clear, in any communications within the company, as well as with data protection authorities, data subjects, and the public at large, that the title of this individual or consultant is not a data protection officer (DPO)."

But how are the conditions that make a DPO mandatory defined under GDPR?

Large-scale processing: there is no absolute definition under GDPR, but there are evaluative guidelines. WP29 guidance suggests data controllers should consider:

  • The number of data subjects concerned;
  • The volume of data processed;
  • The range of data items being processed;
  • The duration or permanence of the data processing activity; and
  • The geographical extent.

Regular and systematic monitoring: as with ‘large-scale processing,’ there is no definition as such, but WP29 guidance clarifies that monitoring involves any form of tracking or profiling on the internet, including for the purposes of behavioural advertising. Here are a number of examples of regular and systematic monitoring:

  • Data-driven marketing activities;
  • Profiling and scoring for purposes of risk assessment;
  • Email retargeting;
  • Location tracking (e.g. by mobile apps); or
  • Loyalty programmes.

 What does a Data Protection Officer do?

Article 39 of the GDPR, 'Tasks of the data protection officer', lists and explains the DPO's obligations. As a minimum, the responsibilities of a DPO are those summarised below:

  1. Inform and advise the controller or processor, and its employees, of their data protection obligations
  2. Monitor compliance with the Regulation, with other Union or Member State data protection provisions and with the policies of the controller or processor in relation to the protection of personal data, including the assignment of responsibilities, awareness-raising and training of staff involved in processing operations, and the related audits
  3. Provide advice as regards data protection impact assessments and monitor their performance
  4. Cooperate with the supervisory authority
  5. Act as the contact point for the supervisory authority on issues relating to data processing


Harry Smithson, 2019



GDPR – panic … or not?


GDPR – don’t get bogged down by fear-mongering and myth

GDPR is beset with myth, rumour and so-called experts. The amount of confusion and misinformation in circulation is incredibly detrimental – largely because many organisations and individuals trying to promote their services are using fear tactics to do so.

But they’re missing the point.

We have a Data Protection Act currently in place, and the Privacy and Electronic Communications Regulations to support it. Any organisation which is ignoring the current data protection legislation has every reason to panic about GDPR. Ignorance is no excuse. And they won't be able to get away with wilfully ignoring GDPR just because they consider data protection an inconvenient restriction preventing them from taking unethical actions to make more money.

On the other hand, organisations which conform to the current legislation have a head start when addressing how to comply with the new regulation.

GDPR – a simple summary

At its simplest, GDPR is a long-overdue evolution which is primarily about all organisations (whether data controllers or data processors):

  1. putting the individual first
  2. being held accountable for protecting that individual’s data

At the same time, GDPR addresses the vast changes to the data landscape since the original data protection legislation of the 1990s:

  • it takes account of technological advances – bear in mind, there was barely an internet in the early '90s!
  • it seeks to protect EU citizens from misuse of their personal data wherever that data is processed
  • it addresses (at least in part) the disparity in data protection legislation across EU member states

GDPR increases both compliance obligations on the part of organisations, and enforcement powers on the part of the regulator.

Compliance Obligations:  The principle of Accountability puts a heavy administrative burden on data controllers and data processors.  Robust record-keeping in relation to all data processing is essential; evidenced decisions around data processing will be critical.

Enforcement Powers: Yes, there are massive fines for non-compliance. And yes, they will go up to €20,000,000 or 4% of global turnover, whichever is higher. But is that really the key headline?

GDPR’s Key Message:  Put the Individual First


As GDPR comes closer, individuals are going to become increasingly aware of their rights – new and old

All organisations which process personal data need to understand that individuals must be treated fairly and that, under GDPR, they have greater rights than before. This means that organisations need to be transparent about their data processing activity, and take full responsibility for protecting the personal or personally identifiable data they process.

What does that mean in practice?

  • Tell the individuals what you intend to do with their data – and make it absolutely plain what you mean
  • Explain that there's a value exchange – by all means help them understand the benefits of providing the data and allowing the processing – but don't tell lies, and don't mislead them
  • If you don’t want to tell them what you’re doing … you probably shouldn’t be doing it
  • If you need their consent, make sure you obtain it fairly, with simple messaging and utter clarity around precisely what it is to which they are consenting
  • Tell them about all their rights (including the right to withdraw consent; to object to processing where relevant; to be provided with all the information you hold about them; to be forgotten; etc.)
  • Always balance your rights as an organisation against their rights as an individual

Look out for your Reputation


Never underestimate the reputational damage caused by a data breach

The Information Commissioner, Elizabeth Denham, states clearly that, while the ICO has heavy-weight power to levy massive fines, “we intend to use those powers proportionately and judiciously”.  So the ICO may issue warnings, reprimands, corrective orders and fines, but that could be the least of your worries.

Something that tends to be overlooked when talking about the penalties for non-compliance is reputational damage. All the ICO's sanctions (from warnings to fines) are published on the ICO website. And the press loves nothing more than a nice, juicy data breach.

So even if no fine is levied, reputations will suffer.  At worst, customers will be lost.  Shareholders will lose confidence.  Revenues will decline.  Board members will lose their jobs.  And, to quote Denham again, “You can’t insure against that.”

Victoria Tuffill     18th August 2017

Data Compliant advises on GDPR compliance – if you’d like more information, please call 01787 277742 or email dc@datacompliant.co.uk