Funded by Innosuisse and Innovate UK
How are deepfakes currently being used to perpetrate harm?
How can we responsibly develop tech to tackle deepfake facilitated online harms?
Is the current legal landscape fit for protecting citizens from deepfake facilitated harms?
Iain Corby from the AVPA presented on Project DefAI at the Trust and Safety Professionals Association EMEA Trust and Safety Summit. You can view his PowerPoint presentation below.
The "Introduction to euCONSENT and the DefAI Project" YouTube video discusses the euCONSENT project, which aimed to create interoperable age verification and parental consent solutions within the EU. The European Trade Association became involved due to the need for GDPR-compliant methods, and the project included tech companies, age verification providers, and experts. The project, which ran from 2021 to 2022, used tokens or cookies for age verification and piloted the system in five European countries. Notable achievements included the development of a network connecting independent age verification providers and a commitment to data minimisation and privacy by design. The DefAI Project, a new industry collaboration, was also introduced to ensure the robustness of age and ID verification systems against deepfake attacks, with funding from the Swiss innovation agency and Innovate UK.
In the "Workshop - Ethical & Inclusion Implications of Deepfake/Video Injection Attack Test Protocols," George Billinge and other speakers discussed the ethical implications of age estimation technology, particularly in relation to deepfake and video injection attacks. The workshop aimed to understand the use of generative AI in these attacks and build tools to detect and defend against them. The speakers emphasised the importance of accuracy, robustness, reliability, and fairness in age estimation technology and raised concerns about the potential harm caused by inaccurate or biased technology. They also discussed the ethical dilemma of generating attacks to train against deepfakes and the need for transparency, informed consent, and involvement of diverse stakeholders in the development and implementation of testing protocols. The potential negative impacts on marginalised communities and individuals were also highlighted, and the importance of ethical considerations and inclusive development was emphasised.
In the "Workshop: Evaluating Deepfake & Video Injection Attack Resilience in Age Assurance Systems", speakers discuss evaluating age assurance systems against deepfake and video injection attacks. The project benchmarks systems using false acceptance and rejection rates and includes a deepfake detection system. Testing for age bias and evaluating error rates for different ages is highlighted. The speakers distinguish between presentation and injection attacks and stress the need for effective presentation attack detection before synthetic data detection. The use of deepfakes to bypass age estimation, especially in challenge-response systems, and the impact of makeup on models are discussed. They also address challenges with age recognition on iPhones and potential deepfake detection solutions, such as imaging standards with watermarks or infrared markers, system delays, and challenge-response systems. The discussion underscores the importance of tackling deepfakes and video injection attacks in age verification systems.
In the "Workshop - Threat Vectors for Age Assurance Systems" video, speakers discuss the ongoing challenges of defending against deepfake and AI attacks in age assurance systems. The DefAI project, is working to address these issues with innovation grants from the UK and Switzerland. They plan to survey the market to identify and prioritise attacks, focusing on those that are accessible, widely used, and effective. The project also aims to audit and certify suppliers to demonstrate their ability to defend against these attacks, with the certification scheme becoming part of the audit process. The speakers also mention the emergence of new techniques for creating deep fakes and AI-generated videos, making it an evolving challenge. Human detection in identifying fake identities was discussed, with speakers acknowledging the limitations and sharing examples of systems that can detect fake identities that human eyes cannot. However, concerns were raised about a large US identity provider experiencing a high volume of attacks but not implementing effective measures to combat them.
In the "Workshop - Computer Vision & Modelling of Deepfake and Video Injection" YouTube video, Pavel Korshunov discusses the challenges of creating and detecting deepfakes, focusing on the manipulation of voices and faces. He demonstrates deepfakes of himself and his child, noting the struggles with children's data and ethical concerns. The speaker also discusses the potential harm of deepfakes, particularly towards children, and the underground industry creating and selling them. The workshop covers the process of converting one person's image into another using a pre-trained model, as well as the limitations of text-to-video technology. The speaker also touches upon the challenges of creating realistic deepfakes due to visual artifacts and the use of makeup as a means to create deepfakes. The workshop highlights the importance of understanding these challenges to effectively detect and combat deepfakes.
Copyright © 2024 Project DefAI - All Rights Reserved.