Deepfake and the Risk to Healthcare
On the threat of deepfakes: how they could endanger medical integrity and patient safety, and potential mitigation approaches.
Deepfake represents the dark side of AI.
Deepfakes are incredibly realistic-looking videos, images, or audio recordings, made using artificial intelligence, that can make it appear that people said or did things they never actually did.
Deepfakes are not new. But they are becoming so realistic it is scary.
Deepfake technology makes it possible to create highly realistic fake media that can be abused to spread false information, manipulate public opinion, drive scams, sow chaos, and damage reputations.
One of the key concerns around deepfakes is that they could threaten the integrity of elections by creating fake content designed to influence voters. For example, a deepfake video could associate a candidate with corruption or misconduct, biasing public opinion based on fabricated information. Deepfakes could mislead voters and undermine the democratic process by spreading disinformation.
And perhaps the most daunting of them all are deepfake images and videos involving nonconsensual explicit adult content (aka fakeporn). Recent research shows that this type of content represents 98% of all deepfake videos online. In most cases these feature individuals from the entertainment industry, and 99% of them target women, inflicting abuse and humiliation.
So bad.
Video Generation Technology
A few months ago, in my blog post about Generative AI and the Hollywood strike, I wrote that video generation technology was not there yet, and that those videos reminded me of the animated images in the Daily Prophet newspaper from the Harry Potter films. Well, time to take that back.
Video generation technology is evolving fast. The recent videos created by Sora, the new video generation model from OpenAI, are nothing less than amazing, and demonstrate how Generative AI is becoming more and more sophisticated in producing videos.
This week, researchers published an impressive demo that generates, in real time, highly realistic talking-face videos with precise lip-audio sync and natural facial behavior, based on a single portrait photo plus speech audio. To their credit, they included a responsible AI disclaimer in their publication, clarifying that they are exploring visual effects generation for virtual interactive characters, not impersonation of any real person, that this is only a research demo, and that there is no product or API release plan.
Cool.
But while video generation technology, by itself, could be used for good or legitimate purposes, deepfakes are an abuse of video generation, representing the dark side of AI.
We often hear concerns about deepfakes being abused in the context of politics or explicit adult content. But deepfakes also pose unique risks in the context of healthcare & life sciences.
How Deepfake Technology Works
Deepfake technology uses machine learning algorithms to analyze, map, and imitate a person's voice or facial expressions captured in source media, such as a video or image. The software detects facial features and replaces the face in an image or in video frames, but it is not as simple as swapping one face for another. The process involves analyzing and mapping the facial expressions and features of the face to be replaced, recreating those expressions on the new face, and adjusting the result to make it look real.
Generative Adversarial Networks (GANs) play a significant role in deepfakes. A GAN sets two neural networks in direct competition with one another: a Generator and a Discriminator. The Generator produces a new image based on what it has learned. The Discriminator classifies whether the image is real or fake. The two components constantly interact: the Generator creates images, the Discriminator flags their flaws, and the Generator learns from its mistakes and corrects them. This technique results in high-quality fakes.
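To make the Generator-Discriminator interplay concrete, here is a minimal training-loop sketch in PyTorch. This is a toy illustration rather than an actual deepfake pipeline; real systems add face detection, alignment, and far larger convolutional networks, and the layer sizes here are arbitrary assumptions.

```python
# A toy GAN sketch: the Generator learns to produce images that the
# Discriminator can no longer tell apart from real ones.
import torch
import torch.nn as nn

LATENT = 100          # size of the random noise vector
IMG = 64 * 64 * 3     # flattened 64x64 RGB image (arbitrary choice)

generator = nn.Sequential(        # noise -> fake image
    nn.Linear(LATENT, 512), nn.ReLU(),
    nn.Linear(512, IMG), nn.Tanh(),
)
discriminator = nn.Sequential(    # image -> real/fake logit
    nn.Linear(IMG, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, LATENT))

    # 1. The Discriminator learns to label real images 1 and fakes 0.
    d_loss = loss_fn(discriminator(real_images), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. The Generator learns to make the Discriminator say "real".
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each call to train_step nudges both networks: the Discriminator gets better at spotting fakes, which in turn forces the Generator to produce more convincing ones.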
The Risk of Deepfakes in Healthcare & Life Sciences
We should be thinking about what deepfakes would mean in other areas too, areas like healthcare. Deepfakes could pose unique risks in the context of healthcare & life sciences. Here are a few examples of highly concerning scenarios:
Disinformation and Public Health Risks
Deepfake videos could spread false health information by impersonating public health officials or fabricating medical news, misleading the public and driving potentially dangerous health behaviors. Deepfakes could be abused to spread false health advice or misinformation about diseases and treatments, posing direct risks to public health. Imagine a fake video of a leader recommending something insane, like drinking detergent, as a way to prevent COVID. Oh wait, that one really happened... but seriously, deepfakes could pose a real threat to public safety.
Manipulated Medical Records and Imaging
Deepfake technology could be abused to alter medical images or records for fraudulent purposes, such as insurance fraud, affecting diagnoses and patient care. Such manipulation could lead to incorrect treatments, unnecessary procedures, or failure to provide necessary care.
Undermining Trust in Healthcare Professionals
What if fake videos or audio recordings of doctors providing false or harmful medical advice were circulated? Deepfakes could undermine trust in healthcare professionals and damage their reputations.
Deepfakes could also be abused by imposters pretending to be doctors, generating fake accreditations, fake publications, fake license certificates, and so on.
Damaging Reputations and Risking Privacy
Deepfakes could threaten patients by fabricating statements from patients or doctors, potentially violating patient privacy or damaging reputations. Just imagine the impact of a fake video of a world leader announcing they are terminally ill, or of a fake video of a doctor claiming their patient had died.
Deepfakes could be abused in sophisticated phishing scams, targeting individuals with highly personalized and convincing messages, potentially leading to theft of personal and medical information. Just this week, we heard of an employee who was targeted in a phishing attack involving a deepfake of his company's CEO.
Misrepresentation of Scientific Findings
Deepfake technology poses risks to research integrity and the credibility of scientific information. Deepfakes could be used to fabricate or alter interviews or discussions involving scientists, making it seem as if they are presenting misleading research findings.
Deepfake technology could also be applied to manipulate images and videos used in scientific publications, such as microscopy images or experimental results, leading to false conclusions and undermining the scientific process.
Deepfake videos or audio clips of scientists discussing fabricated findings could erode public trust in the validity of scientific discoveries. Deepfakes could likewise be abused to create unethical or harmful content around clinical trials, for example fake videos of animal testing, potentially biasing the public against legitimate research.
Really, really bad.
While the above might seem like theoretical horror scenarios, these are just a few examples of what could happen if we don't put an end to it.
Mitigating the Risks
Addressing the risks introduced by deepfakes requires a combined approach: technology solutions, legal frameworks to govern the creation and distribution of deepfakes, and public education to raise awareness about disinformation.
Technology Solutions
It is critical to promote the development and deployment of detection tools that can identify deepfake content with high accuracy. At the time of writing, there are several emerging tools and technologies, such as those highlighted in this recent article, and this area continues to evolve.
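As a rough sketch of how such detection tools can be wired into a workflow, the snippet below samples frames from a video and averages a binary real/fake score over them. The model file name is a placeholder, not a real product, and production detectors involve much more than frame-level classification (face tracking, temporal consistency, audio analysis).

```python
# A minimal sketch of frame-level deepfake scoring, assuming a
# hypothetical pretrained binary classifier saved as TorchScript.
import cv2
import torch
import torchvision.transforms as T

transform = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])
model = torch.jit.load("deepfake_detector.pt")  # placeholder model file
model.eval()

def fake_probability(video_path: str, sample_every: int = 30) -> float:
    """Average the classifier's 'fake' probability over sampled frames."""
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            with torch.no_grad():
                logit = model(transform(rgb).unsqueeze(0))
            scores.append(torch.sigmoid(logit).item())
        idx += 1
    cap.release()
    return sum(scores) / max(len(scores), 1)
```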
Another approach is watermarking of any artifact generated by AI. Watermarking of AI-generated content is a technique used to identify and authenticate content produced by AI, by embedding a signal or pattern into the content that can later be used to verify its origin. The primary purpose of watermarking is to help users distinguish between real and AI-generated content.
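To make the idea concrete, here is a toy sketch that embeds a short "AI-GENERATED" marker into an image's least-significant bits and reads it back. The LSB scheme is my own simplified assumption for illustration; real watermarking systems are designed to survive compression, cropping, and re-encoding, which this one would not.

```python
# A toy illustration of watermarking AI-generated images via
# least-significant-bit (LSB) embedding. Not robust; illustration only.
import numpy as np
from PIL import Image

MARK = "AI-GENERATED"

def embed_watermark(img: Image.Image, mark: str = MARK) -> Image.Image:
    # Convert the marker to a bit string and write it into pixel LSBs.
    bits = ''.join(f"{b:08b}" for b in mark.encode())
    arr = np.array(img.convert("RGB"))
    flat = arr.flatten()
    if len(bits) > len(flat):
        raise ValueError("image too small to hold the watermark")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(bit)  # overwrite least-significant bit
    return Image.fromarray(flat.reshape(arr.shape))

def read_watermark(img: Image.Image, length: int = len(MARK)) -> str:
    # Recover the marker by reading back the same LSBs.
    flat = np.array(img.convert("RGB")).flatten()
    bits = ''.join(str(p & 1) for p in flat[:length * 8])
    return bytes(int(bits[i:i + 8], 2)
                 for i in range(0, len(bits), 8)).decode(errors="replace")

# Usage: marked = embed_watermark(Image.open("generated.png"))
#        print(read_watermark(marked))  # -> "AI-GENERATED"
```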
Social media platforms have a unique role to play when it comes to deepfakes. Surfacing watermarks on AI-generated content and deploying deepfake detectors across social media platforms, news sites, and the like would help the battle against deepfakes.
Guardrails within content generation models are an important part too. For example, built-in filters that block the use of images of real people, or that prevent the creation of certain types of content.
The troubling thought here is that many AI models are published as open source, which makes it hard to mandate such filtering or watermarking functionality, and allows ungoverned malicious entities to keep generating deepfakes based on a victim's existing online data, potentially openly available on social media.
Regulatory and Legal Frameworks
Per research published by the Responsible AI Institute last year, some countries already have regulations in place to address the abuse of deepfakes. Fast forward: the EU has taken a proactive approach to deepfake regulation, with increased requirements around deepfake detection and prevention and clear labeling of artificially generated content. Recently we heard of the US drafting new laws to protect against AI-generated deepfakes, and some states have passed laws governing deepfakes, primarily focused on deepfake pornography. China seems to be the most advanced, imposing a strict ban on certain uses of deepfakes.
Establishing laws and regulations that specifically address the creation and distribution of deepfake content is essential here. Just because a technology exists does not mean all its use cases are legitimate.
Governments are starting to talk about penalties not only for creators of deepfakes, but also for people who share or distribute them. This could make people think twice before sharing iffy content, and question its origin.
Regulators could also enforce the removal of deepfake media from social platforms. And to continue the watermarking point: at a minimum, regulators could require social media applications and news sites to surface watermarks as a disclaimer on posted content. It is not sufficient to enforce watermarking on the AI systems that generate the content. What is watermarking good for, if the end user is not made aware of it?
How can regulation be enforced on apps and platforms? Well, regulators could consider shutting down apps and platforms that do not comply, or forcing their removal from the app stores.
And while regulation is critical, regulation alone is not enough, because enforcing it on ungoverned malicious entities is hard.
Collaboration and Awareness
We need to encourage collaboration between tech companies, policymakers, and civil society to develop standards and best practices for managing deepfake risks.
And we need to increase public awareness and promote education in this area. Recently, jaw-dropping deepfake videos featuring Barack Obama and Mark Zuckerberg were published as a means of raising awareness of deepfakes.
Educating the public about the existence of deepfakes and about the need to assess the authenticity of digital content is critical in this battle against disinformation.
What It All Means
Deepfakes pose a significant risk to human rights by enabling abuse and manipulation. The technology can be used to create fake evidence, falsely tying individuals to illegal activities or socially unacceptable behaviors and harming their reputations. Deepfakes contribute to gender-based violence by enabling nonconsensual explicit material, which violates victims' rights to dignity and privacy. And deepfakes could endanger democratic processes, bias elections, and threaten individuals.
With deepfake technology now so widely accessible, the inevitable conclusion is that deepfakes are a threat to our society.
But deepfake is not just a human rights issue, it’s also a public safety issue. And it is becoming an urgent problem.