iiDENTIFii cautions financial institutions on deepfake threats
August 30, 2023
Given the pace of technological advancement, iiDENTIFii, a remote face authentication and automated onboarding technology platform, has warned financial institutions of the need to guard against deepfake cyberattacks.
A deepfake is a video, image, or audio recording that has been distorted, manipulated, or synthetically created using deep learning techniques to present an individual, or a hybrid of several people, saying or doing something that they did not say or do.
Deepfake technology makes it possible to replicate a person’s facial and vocal likeness with alarming accuracy. These deepfakes are often used in digital injection attacks: sophisticated, highly scalable, and replicable cyberattacks in which the fabricated imagery bypasses the camera on a device or is injected directly into a data stream.
According to iiDENTIFii, financial crime and cybercrime have become more inextricably linked than ever before, as more people set up digital accounts and do their banking online.
Speaking on the sharp rise in deepfake attacks, Murray Collyer, chief operating officer of iiDENTIFii, said, “Digital injection attacks present the highest threat to financial services, as the AI technology behind them is affordable, and the attacks are rapidly scalable.
“In fact, a recent digital security report by our technology partner, iProov, illustrates how, in an indiscriminate attempt to bypass an organisation’s security systems, some 200-300 attacks were launched globally from the same location within a 24-hour period,” he added.
Collyer noted that deepfake technology is one of the most rapidly growing threats within financial services, yet not all verification technologies are resilient to it, adding that password-based systems are highly susceptible to fraud.
The chief operating officer pointed out that technology and processes exist to safeguard financial services companies against deepfake-enabled fraud.
“A growing percentage of face biometric technology incorporates some form of liveness check – such as wink and blink prompts – to verify and authenticate customers. Liveness detection uses biometric technology to determine whether the individual presenting is a real human being, not a presented artefact. Therefore, this technology can detect a deepfake if it were to be played on a device and presented to the camera,” Collyer said.
However, he noted that while many liveness detection technologies can determine if someone is conducting fraud by holding up a physical artefact (for example, a printed picture or mask of the person transacting) to the camera, many solutions cannot detect digital injection attacks.
“Specialised technology is required to combat deepfakes. At iiDENTIFii, we have seen success with the use of sophisticated yet accessible 4D liveness technology, which includes a timestamp and is further verified through a three-step process in which the user’s selfie and ID document data are checked against the relevant government databases. This enables us to accurately authenticate someone’s identity,” he said.
“With the right technology, it is not only possible to protect consumers and businesses against deepfake financial crimes but also to create a user experience that is simple, accessible and safe for all,” Collyer added.
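For readers who want a concrete picture of the kind of three-step verification flow Collyer describes, the sketch below outlines one in Python. It is a minimal illustration only, not iiDENTIFii’s implementation: the names OnboardingRequest, liveness_check, face_match and government_db_check are hypothetical placeholders standing in for real biometric and government-database services.

```python
from dataclasses import dataclass

# Illustrative sketch only: each check is a placeholder for a real
# biometric or database service, not iiDENTIFii's actual product.

@dataclass
class OnboardingRequest:
    selfie_frames: list   # short selfie capture from the user's device
    id_document: dict     # extracted ID document data (name, ID number, photo)

def liveness_check(frames) -> bool:
    """Step 1: confirm a live human is present and that the capture
    was not digitally injected (placeholder decision)."""
    return len(frames) > 0

def face_match(frames, document_photo) -> bool:
    """Step 2: compare the live selfie against the photo on the ID
    document (placeholder decision)."""
    return document_photo is not None

def government_db_check(document: dict) -> bool:
    """Step 3: verify the ID document details against the issuing
    government database (placeholder decision)."""
    return bool(document.get("id_number"))

def verify_identity(request: OnboardingRequest) -> bool:
    """Run the three checks in sequence; all must pass to authenticate."""
    return (
        liveness_check(request.selfie_frames)
        and face_match(request.selfie_frames, request.id_document.get("photo"))
        and government_db_check(request.id_document)
    )

if __name__ == "__main__":
    request = OnboardingRequest(
        selfie_frames=["frame-1", "frame-2"],
        id_document={"id_number": "1234567890", "photo": "doc-photo"},
    )
    print("verified" if verify_identity(request) else "rejected")
```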