Artificial Intelligence, Health Care and Privacy

Artificial intelligence (AI) is fast becoming a critical component in any innovative solution. I have noticed this in my work with start-up and scale-up companies. AI is no longer the stuff of science fiction. AI will soon be mainstream, and I predict that it will be a dominant feature of all health information systems within the next five years.

What is AI/Machine Learning (ML)?
AI is the concept used to describe computer systems that are able to learn from their own experiences and solve complex problems in different situations. ML is an application of AI that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.

How does AI/ML work?
In its simplest form, ML starts with training data containing patterns or similarities. The training data are prepared by a researcher or data scientist and fed into an algorithm that identifies the patterns found in the information. A model is generated that can recognize these patterns. Once generated, the model can receive data from other sources and decide which pattern the new data most resembles. The model then processes the new data and produces an estimated result. Models refine themselves over time as more new data is processed.
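The steps above can be sketched in code. This is a minimal, illustrative toy (a nearest-centroid classifier written in plain Python, with made-up labels and numbers); real health applications use far more sophisticated algorithms and libraries, but the workflow — prepare training data, generate a model, classify new data, refine over time — is the same.

```python
from statistics import mean

# 1. Training data prepared by a researcher: labelled feature vectors.
training_data = {
    "healthy": [[1.0, 2.0], [1.2, 1.8], [0.9, 2.1]],
    "at_risk": [[5.0, 6.0], [5.2, 5.8], [4.9, 6.1]],
}

def fit(data):
    """2. Generate a 'model': the centroid (average point) of each pattern."""
    return {
        label: [mean(dim) for dim in zip(*points)]
        for label, points in data.items()
    }

def predict(model, point):
    """3. Decide which learned pattern the new data most resembles."""
    def distance(centroid):
        return sum((a - b) ** 2 for a, b in zip(point, centroid))
    return min(model, key=lambda label: distance(model[label]))

def update(data, point, label):
    """4. Refine over time: fold new data back in and re-fit the model."""
    data[label].append(point)
    return fit(data)

model = fit(training_data)
print(predict(model, [1.1, 1.9]))  # resembles the "healthy" pattern
print(predict(model, [5.1, 6.2]))  # resembles the "at_risk" pattern
```

The point of the sketch is that the "model" is nothing mysterious: it is a summary of the patterns in the training data, which is why the quality of that data matters so much (a theme picked up below).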

AI/ML in Health Care
Healthcare is poised to become a hotbed of AI innovation. A good example is precision medicine. Precision medicine is an emerging approach for disease management and prevention that takes into account individual variability in genes, environment and lifestyle for each person. It requires significant computing power, algorithms that can learn by themselves and at unprecedented rates, and an approach that uses the cognitive capabilities of physicians on a new scale. There is no precision medicine without AI!1

AI/ML and Privacy
AI/ML are taking us to new places in the health privacy debate. We are now dealing with technologies that can act independently of human intervention. There are two main aspects of AI that are of particular relevance for privacy. The first is that the software itself can make decisions, and the second is that the system develops by learning from experience.2 AI is introducing brand-new privacy challenges that must be addressed. These include:

  • AI-based discrimination – in a high-stakes game of garbage-in/garbage-out, AI is only as good as the training data used to seed the machine learning process. There is a risk that biased training data will skew the results of AI, which could have serious consequences for automated decision-making or decision-support applications.
  • Black-box processing – as massive amounts of data are used to generate AI models and outputs, it will become increasingly difficult to know or understand how a given result is produced. The complexity of AI renders traditional methods of monitoring and audit obsolete. It also runs counter to the transparency that is essential for accountability.
  • Automated decision-making – I don’t think we have reached the point where we have surrendered clinical decision-making to AI applications. At this stage AI is seen as an aid or support to clinical decision-making, where the clinician has the final say on diagnosis and treatment. However, as human resources become increasingly constrained, and AI becomes increasingly available, there will be a temptation to rely more and more on the AI option. This might not be a bad thing. We may realize better outcomes with AI. But we will need to preserve our accountability mechanisms to ensure that humans are in charge, not the machine.
  • Re-identification of data – many AI applications work effectively using de-identified data for training and output purposes. This has been a good thing for many applications because it removes a significant privacy barrier. However, the power of AI is such that in many instances the re-identification of data is a trivial exercise.
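To see how little it can take to re-identify a record, consider a toy linkage attack (all data here is invented for illustration): a "de-identified" health dataset is joined to a public registry on a handful of quasi-identifiers such as postal code, birth year and sex. When that combination is unique, the identity falls out.

```python
# Hypothetical, invented data for illustration only.
deidentified_health_records = [
    {"postal": "M5V", "birth_year": 1980, "sex": "F", "diagnosis": "diabetes"},
    {"postal": "K1A", "birth_year": 1975, "sex": "M", "diagnosis": "asthma"},
]

public_registry = [
    {"name": "Alice Smith", "postal": "M5V", "birth_year": 1980, "sex": "F"},
    {"name": "Bob Jones", "postal": "K1A", "birth_year": 1975, "sex": "M"},
]

QUASI_IDENTIFIERS = ("postal", "birth_year", "sex")

def reidentify(health_records, registry):
    """Link records whose quasi-identifiers match a unique registry entry."""
    matches = []
    for record in health_records:
        key = tuple(record[q] for q in QUASI_IDENTIFIERS)
        candidates = [p for p in registry
                      if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(candidates) == 1:  # unique match -> identity recovered
            matches.append((candidates[0]["name"], record["diagnosis"]))
    return matches

print(reidentify(deidentified_health_records, public_registry))
```

No AI is even needed for this simple join; AI makes matters worse by finding subtler statistical fingerprints across many more attributes.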

AI Meets the GDPR
Many privacy legislative regimes around the world, including those in Canada, are ill-equipped to deal with the challenges of AI/ML. Products of the last century, these laws are not up to the task. However, we can look to Europe’s General Data Protection Regulation (GDPR) for some guidance down this rocky but fast-moving road.

The GDPR counters algorithmic bias with the fairness principle which requires all processing of personal information to be conducted with respect for the data subject’s interests. The black box issue is addressed by the principle of transparent processing. Data subjects have the right not to be subject to a decision based solely on automated processing, including profiling.

Some pundits have suggested that the GDPR will be the death of AI. Nothing could be further from the truth. The GDPR codifies Privacy by Design (PbD). PbD’s fourth principle is “Full Functionality – Positive Sum, not Zero Sum”. This means it’s not a question of AI OR privacy, but rather how we achieve AI AND privacy.

We Have the Tools
The good news is that we have the tools needed to address privacy in the AI world. We are at an early enough stage in the evolution of AI that we can effectively apply the principles of PbD. Using tools such as Privacy Impact Assessments will enable us to address privacy and other data protection issues on a project-by-project basis.

New privacy issues beget new privacy solutions. That’s how the world works!

Brendan Seaton is the Chief Creative Officer for Privacy Horizon Inc.

1Bertalan Mesko (2017) The role of artificial intelligence in precision medicine, Expert Review of Precision Medicine and Drug Development, 2:5, 239-241, DOI: 10.1080/23808993.2017.1380516
2Norwegian Data Protection Authority, Artificial intelligence and privacy, Report, January 2018, p. 7
