Funded by Innosuisse and Innovate UK
Project DefAI's Objective:
To investigate the dangers of presentation and injection attacks on age verification, the defences that can be built against them, and the evaluation and standardisation approaches the industry requires to assess its readiness to withstand such attacks.
Launched in 2023, Project DefAI started as a joint UK/Swiss grant-funded project. Our core focus is safeguarding children online. The pervasive threat of deepfake technology undermines age verification measures, placing children at risk. We are resolute in our determination to counter this threat and uphold the safety of young people in the digital sphere.
This is a global-first project. At present, deepfake and video-injection AI/ML test assets do not exist for independently testing Age Assurance Systems. International standards address ID checking (1:1 & 1:N) but not the capability to spot Age Assurance spoofing attempts. The project is directly aligned with worldwide policy development around Online Safety, Child Protection and Age Appropriate Design. It harnesses advanced scientific approaches to AI/ML to defeat the negative effects of AI/ML in use, and it develops what will become part of critical test infrastructure for ensuring the reliability of protective systems deployed in an online environment.
Deepfake technology is an advanced form of AI that produces highly realistic, manipulated videos and audio recordings. It uses deep learning algorithms to superimpose one person's likeness onto another's, creating convincing but fake content. This poses a significant risk to age verification technology, potentially allowing children access to online content intended only for adults.
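For readers who want a concrete picture of the mechanism described above, the sketch below illustrates the classic shared-encoder / per-identity-decoder autoencoder design behind early face-swap deepfakes. It assumes PyTorch; the layer sizes and names are purely illustrative, and it is not code from the project.

import torch
import torch.nn as nn

IMG = 64 * 64 * 3   # flattened 64x64 RGB face crop
LATENT = 256        # shared latent representation

class FaceSwapAE(nn.Module):
    def __init__(self):
        super().__init__()
        # One encoder learns identity-agnostic facial structure...
        self.encoder = nn.Sequential(nn.Linear(IMG, LATENT), nn.ReLU())
        # ...while each identity gets its own decoder.
        self.decoder_a = nn.Sequential(nn.Linear(LATENT, IMG), nn.Sigmoid())
        self.decoder_b = nn.Sequential(nn.Linear(LATENT, IMG), nn.Sigmoid())

    def forward(self, x, identity):
        z = self.encoder(x)
        return self.decoder_a(z) if identity == "a" else self.decoder_b(z)

# Training reconstructs each person through their own decoder; at inference,
# encoding a frame of person A and decoding with decoder_b "superimposes"
# person B's likeness onto A's pose and expression.
model = FaceSwapAE()
fake_frame = model(torch.rand(1, IMG), identity="b")
print(fake_frame.shape)  # torch.Size([1, 12288])

An injection attack feeds synthetic frames like these into a verification system's camera stream in place of a live capture, which is why independent test assets are needed for both generating and detecting such content.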
Copyright © 2024 Project DefAI - All Rights Reserved.