4 Ways To Reduce Risks From Deepfakes In Healthcare

Deepfakes were hardly in the news a few years ago. Today, the story is entirely different. With the rise of generative AI, digital replicas are being created and abused at an unprecedented scale to target people from all walks of life, from high-profile celebrities like Taylor Swift and Gal Gadot to politicians like Joe Biden. All of them have been portrayed doing or saying things they would never do in real life.

But here’s the worrying bit: the story of deepfakes is just getting started. As the technology behind them continues to evolve and become more accessible, we’ll see more digital counterfeits hitting critical sectors. Banking and finance have already been targeted, and other industries are beginning to face the heat, especially healthcare.

Despite being one of the most regulated industries, rooted in the principles of trust and accuracy, healthcare remains largely unprepared for the risks of deepfakes. Digital counterfeits, in their current form, can easily lead patients to follow unverified and unproven treatments that could cost lives. It is, therefore, urgent to understand and address the problems stemming from the use of deepfakes in healthcare to prevent confusion, mistrust, and compromised health outcomes.

Understanding The Rise Of Deepfakes

At its core, a deepfake is a synthetic image, video, or audio recording of an individual created with the help of deep learning technology. Initially, deepfakes mainly involved face swapping, where one person’s likeness was superimposed onto existing videos or images. In the past year or so, however, the advent and proliferation of generative AI tools driven by text prompts have transformed this process. Now, anyone can create near-realistic manipulated content from scratch, portraying doctors, actors, and politicians in misleading ways to deceive internet users.

More importantly, advances in the technology have made it difficult for people to distinguish authentic content from manipulated, deepfaked content. Even experts sometimes struggle to discern the differences. This makes it easier for malicious actors to operate quickly and stay under the radar for longer.

How Deepfakes Can Harm The Healthcare Industry

Effective healthcare rests on trust and on the accuracy of information. When a person visits a doctor, they entrust their medical history to that professional and follow the prescription in the belief that the professional knows what they are doing. Deepfakes can easily disrupt this ecosystem, deceiving patients and eroding their trust in the information their doctors provide.

The problem is straightforward: what if a deepfake video of a well-known doctor or celebrity goes viral? The manipulated clip may portray the doctor or celebrity endorsing an illegal treatment or product for a particular medical condition, nudging hundreds or thousands of people to follow those instructions. Similar attacks can also be executed by faking one-on-one interactions with patients, such as impersonating a clinician on a video telehealth call.

Such content may look fake to more educated and tech-savvy individuals, but unsuspecting people suffering from a particular condition can fall for it. This can easily influence their decisions and put lives at stake. In fact, according to a 2023 study by the Deepfakes & Disinformation Working Group, fabricated celebrity endorsements of unproven treatments are already exploiting public trust.

This makes it more important than ever for stakeholders to take stock of this whole situation and come up with ways to keep the foundation of trust in healthcare intact.

How To Address The Deepfake Menace In Healthcare

To deal with deepfakes in healthcare, stakeholders at all levels must come together and take action with steps such as:

Improved Detection and Removal

The sooner a deepfake is identified, the smaller its impact will be. This idea should drive the policies of every social media platform. Platforms should seek out and adopt technologies that can detect and automatically remove deepfake images and videos that could mislead users. Deep learning methods, such as convolutional neural networks (CNNs) trained to identify deepfake content in medical images, text, audio, and video, can significantly help here. These models can first be trained with the help of community feedback and then used autonomously to flag malicious content, much as AI was initially used to flag hate speech. Further, platforms should collaborate with industry peers and AI vendors to use tools (like Google’s SynthID) designed to flag content generated by those vendors’ respective platforms.
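To make the detect-then-flag idea concrete, here is a minimal sketch of the triage logic a platform might wrap around a detector’s output. The `ModerationQueue` class, the threshold values, and the content IDs are all hypothetical illustrations, not any real platform’s API; the detector score is assumed to come from a separately trained model such as a CNN classifier.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationQueue:
    """Routes content by a detector's deepfake-probability score.

    Thresholds are illustrative: very confident detections are removed
    automatically, borderline ones go to human review, and reviewer
    labels are collected to retrain the detector (the "community
    feedback" loop described above).
    """
    auto_remove_threshold: float = 0.95
    review_threshold: float = 0.60
    review_queue: list = field(default_factory=list)
    removed: list = field(default_factory=list)
    feedback: list = field(default_factory=list)  # (content_id, is_deepfake) labels

    def triage(self, content_id: str, deepfake_score: float) -> str:
        if deepfake_score >= self.auto_remove_threshold:
            self.removed.append(content_id)       # auto-remove high-confidence fakes
            return "removed"
        if deepfake_score >= self.review_threshold:
            self.review_queue.append(content_id)  # borderline: human review
            return "review"
        return "allowed"

    def record_feedback(self, content_id: str, is_deepfake: bool) -> None:
        # Reviewer/community labels become training data for the next model round.
        self.feedback.append((content_id, is_deepfake))
```

For example, a clip scoring 0.97 would be removed outright, one scoring 0.70 would be queued for review, and a reviewer’s verdict on the queued clip would be recorded for retraining. The point of the two thresholds is to keep humans in the loop exactly where the model is least certain.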

Education and Awareness

Education and awareness campaigns are vital. Healthcare professionals and the general public must understand what deepfakes are and how to avoid them. Information is key, and spreading awareness about the dangers and detection of deepfakes can help mitigate their impact.

Ethical Standards and International Consensus

Healthcare professionals should also adhere to international ethical AI standards, such as the Montreal Declaration for Responsible AI Development. This can help them use AI internally with a focus on fairness, transparency, and societal well-being. It also pushes the industry as a whole to ensure ethical AI use and reduce the risk of deepfake misuse.

Regulatory Measures

Finally, governments should step in to mitigate the spread of deepfakes in healthcare. Policymakers should establish strict guidelines for disseminating medical content and impose penalties for those using deepfakes for malicious purposes. Effective regulation can help maintain the integrity of healthcare information.

In a nutshell, deepfakes pose a significant and urgent threat to healthcare. While stakeholders remain committed to addressing the negative consequences of this technology, coordinated efforts are necessary to prevent the problem from growing.

By working together, healthcare professionals, tech developers, regulators and the public can protect the integrity of the healthcare system, ensuring the well-being of patients everywhere.

