
AI: Balancing Innovation, Ethics, Privacy & Governance  

After last week’s AI Action Summit in Paris, AI ethics and safety legislation has become a hot topic globally.  Various regions are taking different approaches. U.S. Vice President J.D. Vance made it very clear that the Trump administration was firmly opposed to “excessive regulation” of AI, and argued that it would stifle innovation and hinder the growth of the AI industry.

Global Divide in AI Regulation

With different regions of the world taking different approaches, the landscape is complex. Even within the US, approaches are divided. In the absence of federal guidance, some states are actively implementing their own state-level AI governance laws to address ethical and safety concerns. These, of course, will now conflict with the current federal stance, which leans towards minimal regulation in favour of rapid AI development.

Global AI race risks safety, privacy and ethics

Globally, it’s a race, with China and the US at the forefront of AI development. China’s AI strategy focuses on becoming the world leader by 2030, with significant investments in research and development. The US has a similar goal and is doubling its AI research investment. Britain’s Prime Minister Keir Starmer also has ambitions for rapid development. But the global competitive race is clearly in danger of compromising ethical, safety and sustainability considerations in favour of innovation and rapid development.

Trustworthy AI governance

So it is somewhat reassuring that the data protection authorities of the UK, South Korea, France, Ireland and Australia have issued a joint statement on “building trustworthy data governance frameworks to encourage development of innovative and privacy-protective AI”. It does at least show that these countries are making a concerted effort to balance innovation with ethical, privacy and safety considerations.

In summary, the joint statement:

  • States that AI must be developed and deployed in accordance with data protection and privacy rules, with robust data governance frameworks and privacy-by-design embedded in AI systems from the start of the planning process
  • Aims to provide legal certainty and safeguards, including transparency and fundamental rights
  • Commits to clarifying the legal bases for processing personal data in the context of AI
  • Commits the countries to exchanging and establishing a shared understanding of proportionate security measures, to be updated as AI data processing activities evolve
  • Commits to monitoring the technical and societal impacts of AI, leveraging the expertise and experience of Data Protection Authorities and other relevant entities
  • Aims to reduce legal uncertainty while creating opportunities for innovation in a compliant environment
  • Commits to strengthening interaction with other authorities to improve consistency between the various regulatory frameworks for AI systems, tools and applications

It does not, however, address other concerning issues such as:

  • Bias and fairness (for example in areas such as hiring, lending and law enforcement). However, the EU’s AI Act works towards mitigating these biases
  • Environmental impact (including significant electricity demand, massive drinking-water consumption, and the raw-material extraction and electronic waste involved in producing and transporting high-performance computing hardware). The Artificial Intelligence Environmental Impacts Act of 2024 in the US (if Trump doesn’t repeal it) and UNEP’s guidelines are steps towards addressing these concerns.

Data Protection Legislation Applies

In essence, regardless of specific AI legislation and guidelines, the fundamentals of data protection legislation do not change just because the processing involves AI. All AI personal data processing must abide by the prevailing data protection legislation – wherever in the world you are.

Data Compliant

If you would like help or assistance with any of your data protection obligations, please email dc@datacompliant.co.uk or call 01787 277742. And for more information about how to meet your AI obligations, please see here.

Victoria Tuffill

17th February 2025

European Commission publishes “White Paper on Artificial Intelligence”

19th February saw the release of the European Commission’s white paper on AI, which remains open to public consultation until May. While extolling the virtues of AI, such as its much-anticipated roles in fine-tuning medical diagnostics and mitigating climate breakdown, the white paper ranks intrusion and privacy risks among the four main issues facing policy-making around AI. The other three are opaque decision-making, discriminatory decision-making and criminal application.

The expected impact of AI uptake on governance, and the resulting conspicuous contrast with governance systems lacking cutting-edge AI capacity, leads the Commission to go so far as to note that a common European framework for AI policy is necessary to avoid “the fragmentation of the single market.”

The paper outlines a largely theoretical “European approach to excellence and trust,” emphasising the requirement for global competitiveness in AI innovation. It states however that “trustworthiness is a prerequisite for [AI] uptake.” For instance, safeguards on law enforcement’s expanded capacities due to AI technology are recommended, though currently not detailed. Much of this trust is purportedly to be garnered by taking the “human-centric approach” to AI application. This approach was explicated in a paper called “Communication on Building Trust in Human-Centric Artificial Intelligence” released by the Commission last year, in which privacy and data governance was among seven “key requirements that AI applications should respect.”

Concrete, technical policies for regulation are somewhat more elusive. Both papers reiterate the accuracy requirement for any datasets that AI may be using as fuel for thought, i.e. the necessity for data integrity, but the requirement for stored data to be accurate is enforced by the General Data Protection Regulation (GDPR), a framework which will remain in the UK after Brexit due to the Data Protection Act 2018 and is seeing emulation across the world. Quite how the Commission’s value system of human-centric ethics will manifest in AI development remains unclear.

Where the white paper on AI is most outspoken is on the perceived limitations of current EU legislation to regulate, or even conceptualise, AI. Changes to the legal concept of ‘safety’ invoked by AI risk and predictive analysis are anticipated; ambiguity concerning responsibility between economic agents in the supply chain may pose judicial quandaries; and there is even a chapter dedicated to the problem of AI indecipherability: if human officials cannot ascertain how an AI programme reached a decision, how can they know whether that decision was skewed by bias in a dataset? Human oversight of AI development is therefore recommended at each stage of the industrial chain.

Harry Smithson, 21st February 2020