Protecting payments in an era of deepfakes and advanced AI – TechRepublic

Posted: May 11, 2022 at 11:36 am


In the midst of unprecedented volumes of e-commerce since 2020, the number of digital payments made every day around the planet has exploded, hitting about $6.6 trillion in value last year, a 40 percent jump in two years. With all that money flowing through the world's payments rails, there's even more reason for cybercriminals to innovate ways to nab it.

Ensuring payments security today requires advanced game-theory skills to outthink and outmaneuver highly sophisticated criminal networks on track to inflict up to $10.5 trillion in cybercrime damages, according to a recent Argus Research report. Payment processors around the globe are constantly playing against fraudsters and improving their game to protect customers' money. The target invariably moves, and scammers become ever more sophisticated. Staying ahead of fraud means companies must keep shifting security models and techniques, and there's never an endgame.


The truth of the matter remains: There is no foolproof way to bring fraud down to zero, short of halting online business altogether. Nevertheless, the key to reducing fraud lies in maintaining a careful balance between applying intelligent business rules, supplementing them with machine learning, defining and refining the data models, and recruiting an intellectually curious staff that consistently questions the efficacy of current security measures.

As new, powerful computer-based methods evolve and iterate on more advanced tools, such as deep learning and neural networks, so does the range of their uses, both benevolent and malicious. One concept making its way across recent mass-media headlines is the deepfake, a portmanteau of "deep learning" and "fake." Its implications for potential security breaches and losses in both the banking and payments industries have become a hot topic. Deepfakes, which can be hard to detect, now rank as the most dangerous crime of the future, according to researchers at University College London.

Deepfakes are artificially manipulated images, videos and audio in which the subject is convincingly replaced with someone else's likeness, leading to a high potential to deceive.

These deepfakes terrify some with their near-perfect replication of the subject.

Two stunning deepfakes that have been broadly covered include a deepfake of Tom Cruise, birthed into the world by Chris Ume (VFX and AI artist) and Miles Fisher (famed Tom Cruise impersonator), and a deepfake of a young Luke Skywalker, created by Shamook (deepfake artist and YouTuber) and Graham Hamilton (actor), in a recent episode of The Book of Boba Fett.

While these examples mimic the intended subject with alarming accuracy, it's important to note that with current technology, a skilled impersonator, trained in the subject's inflections and mannerisms, is still required to pull off a convincing fake.

Without a similar bone structure and the subject's trademark movements and turns of phrase, even today's most advanced AI would be hard-pressed to make the deepfake perform credibly.

For example, in the case of Luke Skywalker, Respeecher, the AI used to replicate Luke's 1980s voice, was trained on hours of recordings of original actor Mark Hamill's voice from the era when the film was shot, and fans still found the speech a hollow, Siri-like recreation of the sort that should inspire fear.

On the other hand, without prior knowledge of these important nuances of the person being replicated, most humans would find it difficult to distinguish these deepfakes from a real person.

Luckily, machine learning and modern AI work on both sides of this game and are powerful tools in the fight against fraud.

While deepfakes pose a significant threat to authentication technologies, including facial recognition, from a payments-processing standpoint there are fewer opportunities for fraudsters to pull off a scam today. Because payment processors have their own implementations of machine learning, business rules and models to protect customers from fraud, cybercriminals must work hard to find potential gaps in payment rails' defenses, and these gaps shrink as each merchant builds more relationship history with its customers.

The ability of financial companies and platforms to know their customers has become even more paramount in the wake of cybercrime's rise. The more a payments processor knows about past transactions and behaviors, the easier it is for automated systems to validate that the next transaction fits an appropriate pattern and is likely authentic.

Automatically identifying fraud in these cases keys off a large number of variables, including transaction history, transaction value, location and past chargebacks, and it doesn't examine the person's identity in a way that would bring deepfakes into play.
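The kind of variable-based scoring described above can be sketched in a few lines. This is a toy illustration only: the weights, thresholds and the `Transaction` structure are invented for the example and do not reflect any real processor's rules.

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    amount: float               # value of the current transaction
    country: str                # where the transaction originates
    past_chargebacks: int       # chargebacks previously filed by this customer
    prior_amounts: list = field(default_factory=list)  # transaction history

def fraud_risk_score(tx: Transaction, home_country: str) -> float:
    """Combine the variables named above into a 0.0-1.0 risk score.

    All weights and cutoffs here are illustrative placeholders.
    """
    score = 0.0
    # Unusually large value relative to the customer's own history
    if tx.prior_amounts:
        avg = sum(tx.prior_amounts) / len(tx.prior_amounts)
        if tx.amount > 3 * avg:
            score += 0.4
    else:
        score += 0.2  # no history at all is itself a mild risk signal
    # Transaction from an unexpected location
    if tx.country != home_country:
        score += 0.3
    # Each past chargeback raises risk, capped so it can't dominate
    score += min(0.3, 0.1 * tx.past_chargebacks)
    return min(score, 1.0)
```

In practice a processor would feed features like these into trained models rather than hand-tuned weights, but the point stands: none of these inputs involve the person's face or voice, so deepfakes gain no purchase here.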

The highest risk of fraud from deepfakes for payments processors rests in the operation of manual review, particularly in cases where the transaction value is high.

In manual review, fraudsters can take advantage of the chance to use social-engineering techniques to dupe the human reviewers into believing, by way of digitally manipulated media, that the transactor has the authority to make the transaction.

And, as covered by The Wall Street Journal, these types of attacks can be unfortunately very effective, with fraudsters even using deepfaked audio to impersonate a CEO to scam one U.K.-based company out of nearly a quarter-million dollars.

As the stakes are high, there are several ways to limit the gaps for fraud in general and stay ahead of fraudsters' attempts at deepfake hacks at the same time.

Sophisticated methods of debunking deepfakes exist, utilizing a number of varied checks to identify mistakes.

For example, since the average person doesn't keep photos of themselves with their eyes closed, selection bias in the source imagery used to train the AI creating a deepfake might cause the fabricated subject either not to blink, to blink at an abnormal rate or simply to get the composite facial expression of a blink wrong. This bias can affect other aspects of deepfakes, such as negative expressions, because people tend not to post those emotions on social media, a common source of AI training material.
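The blink-rate check can be illustrated with a minimal sketch. It assumes some upstream face-landmark detector has already classified each video frame as eyes open or closed (that detector is not shown), and the "normal" bounds are rough assumed values, not a clinical standard.

```python
def blink_rate_per_minute(eyes_open: list[bool], fps: float) -> float:
    """Count open-to-closed transitions as blinks and normalize per minute.

    eyes_open: per-frame eye state from an assumed upstream landmark detector.
    fps: frame rate of the analyzed clip.
    """
    blinks = sum(1 for prev, cur in zip(eyes_open, eyes_open[1:])
                 if prev and not cur)
    duration_min = len(eyes_open) / fps / 60
    return blinks / duration_min if duration_min else 0.0

def blink_rate_suspicious(rate: float, low: float = 8.0, high: float = 30.0) -> bool:
    # Resting adults blink very roughly 8-30 times a minute; anything far
    # outside that band (assumed bounds) is worth flagging for review.
    return not (low <= rate <= high)
```

A subject who never blinks over a ten-second clip would score a rate of zero and be flagged, which is exactly the selection-bias failure described above.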

Other ways to identify today's deepfakes include spotting lighting problems, differences between the weather outside and the subject's supposed location, mismatched timecodes in the media in question, or variances between the artifacts introduced by the filming, recording or encoding of the video or audio and those expected from the claimed camera, recording equipment or codecs.
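The last of those checks, comparing observed encoding artifacts against what the claimed equipment actually produces, amounts to a profile lookup. The sketch below is hypothetical: the device names, codec sets and frame rates are invented for illustration, and a real system would extract the observed values with a tool such as ffprobe.

```python
# Hypothetical profiles of what each claimed capture device actually emits.
# Values here are illustrative, not authoritative hardware specs.
DEVICE_PROFILES = {
    "iPhone 13": {"codecs": {"hevc", "h264"}, "frame_rates": {24.0, 30.0, 60.0}},
}

def metadata_consistent(claimed_device: str, observed_codec: str,
                        observed_fps: float):
    """Return True/False if the media's properties match the claimed device,
    or None when the device is unknown and no judgment is possible."""
    profile = DEVICE_PROFILES.get(claimed_device)
    if profile is None:
        return None  # unknown device: cannot validate either way
    return (observed_codec in profile["codecs"]
            and observed_fps in profile["frame_rates"])
```

A clip claiming to come from a phone but encoded with a codec that phone never produces is a cheap, fast red flag, though, as the next paragraph notes, generators are learning to forge these traces too.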

While these techniques work now, deepfake technology and techniques are quickly approaching a point where they may fool even these types of validation.

Until deepfakes can fool other AIs, the best current options to fight them are to:

In addition to these methods, several security practices should help immediately:

The fight against deepfake fraud today is primarily won by limiting the circumstances under which manipulated media can play a role in validating a transaction. This is accomplished by evolving fraud-fighting tools to curtail manual reviews and by constantly testing and refining toolsets to stay ahead of well-funded, global cybercriminal syndicates, one day at a time.

Rahm Rajaram, VP of operations and data at EBANX, is an experienced financial services professional with extensive expertise in security and analytics, following executive roles at companies including American Express, Grab and Klarna.
