OFFICIAL PUBLICATION OF THE NEBRASKA SOCIETY OF CERTIFIED PUBLIC ACCOUNTANTS

Pub. 6 2024 Issue 3

AI Can Be a Weapon for Hackers

What Businesses Should Do

Artificial intelligence has transformed the way we live and work, offering immense potential for innovation and progress. However, as with any technology, AI also has its drawbacks. Deepfakes, AI-generated synthetic media that can convincingly manipulate or fabricate audio, video, and images, have rapidly gained popularity among cyber criminals as a potent tool for cyberattacks. By leveraging deepfakes, attackers can easily manipulate information, deceive individuals, and exploit vulnerabilities within organizations. The consequences of these attacks can be severe, ranging from financial losses to reputational damage.

As technology continues to advance at a rapid pace, both the opportunities and the risks surrounding detection of and defense against deepfakes are expanding. Many businesses now have access to detection technology and can potentially use it to defend themselves against an attack. Implementing these tools can be challenging, however, due to external regulations and internal barriers such as skill gaps within the workforce and financial constraints—giving the advantage to malicious actors who may exploit the opportunity first.

A May 2024 KPMG survey found that 76% of security leaders are concerned about the increasing sophistication of new cyber threats and attacks. Hackers have found ingenious ways to use deepfakes to make their existing cyberattack strategies more credible, including business email compromise (BEC) scams, insider threats, and market manipulation.

BEC scams involve impersonating high-ranking executives or business partners to deceive employees into transferring funds or sharing sensitive information, while phishing attacks trick individuals into revealing sensitive information. Deepfakes make these scams even more convincing, as hackers can manipulate audio or video to mimic the voice and appearance of the targeted individual. This increases the likelihood of victims falling for the scam, leading to data breaches, financial fraud, and identity theft.

As for insider threats, deepfakes can be used to create fake videos or audio recordings of employees, which can then be used to blackmail or manipulate them. Hackers exploit these deepfakes to gain unauthorized access to sensitive information or to compromise the integrity of a business or financial entity. Insider threats pose a significant risk, as they can cause substantial financial and reputational damage.

Deepfakes can also be employed to spread false information or manipulate stock prices, resulting in financial gains for hackers. By creating realistic videos or audio recordings of influential figures, hackers can create panic or generate hype, causing significant fluctuations in the market. This can lead to investors making uninformed decisions and suffering financial losses.

As the threat continues to rise, acquiring the funding needed to detect advanced deepfake technology—which requires maintaining the necessary computing power, forensic algorithms, and audit processes—has been a major challenge.

Additionally, while businesses look to implement effective countermeasures, the rate of digital transformation only continues to pick up. This speed can come at the expense of security. As organizations innovate and embrace digital acceleration, the attack surface expands, and the number of assets requiring advanced security increases, putting them at risk.

To protect their organizations from deepfake-related attacks, chief information security officers (CISOs) must have conversations with senior decision makers to ensure cybersecurity budgets account for the costs of implementing new processes, tools, and strategies. Improving organizational risk intelligence can help build a stronger case for that funding by quantifying the financial impact of the security risks and threats posed by manipulated content.
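One simple way to quantify that impact is the classic annualized loss expectancy (ALE) model, where ALE equals single loss expectancy (the expected cost of one incident) times the annualized rate of occurrence. The Python sketch below is a minimal illustration; the scenario names and dollar figures are hypothetical, invented for demonstration, and are not drawn from the article or any survey data.

    # Minimal sketch: quantifying security risk with annualized loss
    # expectancy (ALE). ALE = SLE x ARO, where SLE is the single loss
    # expectancy and ARO is the annualized rate of occurrence.
    # All scenarios and figures below are hypothetical.

    scenarios = {
        # name: (single_loss_expectancy_usd, annualized_rate_of_occurrence)
        "deepfake BEC wire-fraud attempt": (250_000, 0.4),
        "deepfake-driven insider blackmail": (500_000, 0.1),
        "manipulated-media reputational event": (1_000_000, 0.05),
    }

    for name, (sle, aro) in scenarios.items():
        ale = sle * aro  # expected annualized loss for this scenario
        print(f"{name}: ALE = ${ale:,.0f} per year")

    total = sum(sle * aro for sle, aro in scenarios.values())
    print(f"Total expected annualized loss: ${total:,.0f}")

Figures like these give a CISO a concrete dollar baseline to weigh against the cost of new detection tools and processes.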

Once sufficient funding has been acquired, several measures may be taken to address the cybersecurity threat posed by deepfakes. First, it is important to develop a strong cybersecurity culture and promote good technology hygiene practices among employees. This includes educating employees about deepfakes, their potential risks, and how to identify them. By training employees to be cautious when interacting with media content, for example, businesses can reduce the likelihood of falling victim to deepfake attacks. Implementing robust authentication measures to ensure only authorized individuals have access to sensitive information or systems is critical. This can involve using multifactor authentication and biometrics to strengthen security.
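As one concrete example of a second authentication factor, the sketch below implements a standard RFC 6238 time-based one-time password (TOTP) check using only the Python standard library. It is a minimal illustration of how an MFA code is generated and verified, not a production authentication system, and the shared secret shown is a made-up demo value.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
        """Generate an RFC 6238 time-based one-time password (HMAC-SHA1)."""
        key = base64.b32decode(secret_b32, casefold=True)
        # Counter = number of completed time steps since the Unix epoch.
        counter = struct.pack(">Q", int(time.time()) // step)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        # Dynamic truncation per RFC 4226.
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Hypothetical shared secret (base32), as provisioned to an authenticator app.
    SECRET = "JBSWY3DPEHPK3PXP"
    submitted = totp(SECRET)  # stand-in for the code a user would type
    # Constant-time comparison avoids leaking information through timing.
    print("MFA check passed:", hmac.compare_digest(submitted, totp(SECRET)))

Because the code changes every 30 seconds and is derived from a secret the attacker does not hold, a convincing deepfake voice alone cannot satisfy this second factor.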

Leveraging a zero-trust approach also provides a comprehensive framework for mitigating deepfake cyberattacks by prioritizing strong authentication, access control, continuous monitoring, segmentation, and data protection. Organizations can implement granular access controls, restricting access to specific resources based on user roles, privileges, and other contextual factors. This helps prevent unauthorized users from gaining access to critical systems and data that could be used to propagate deepfakes.
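The sketch below shows, in simplified form, the kind of per-request decision a zero-trust policy engine makes: every request is checked against role, device posture, MFA status, and network context, with nothing trusted by default. The roles, resources, and rules are hypothetical examples, not a real product's policy model.

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user_role: str          # e.g., "treasury-analyst"
        resource: str           # e.g., "wire-transfer-approval"
        device_compliant: bool  # endpoint passed posture checks (patched, encrypted)
        mfa_verified: bool      # fresh multifactor authentication
        network_zone: str       # e.g., "corporate", "vpn", "unknown"

    # Hypothetical policy: which roles may touch which resources.
    ROLE_GRANTS = {
        "treasury-analyst": {"wire-transfer-approval", "payment-reports"},
        "hr-generalist": {"employee-records"},
    }

    def authorize(req: AccessRequest) -> bool:
        """Zero trust: every request must satisfy role, device, MFA,
        and network checks; nothing is trusted by default."""
        if req.resource not in ROLE_GRANTS.get(req.user_role, set()):
            return False  # least privilege: role not granted this resource
        if not (req.device_compliant and req.mfa_verified):
            return False  # unhealthy device or stale authentication
        return req.network_zone in {"corporate", "vpn"}

    req = AccessRequest("treasury-analyst", "wire-transfer-approval",
                        device_compliant=True, mfa_verified=True,
                        network_zone="vpn")
    print("Access granted:", authorize(req))

In practice these checks run on every request rather than once at login, so an attacker who deceives one employee with a deepfake still cannot move freely through the environment.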

Furthermore, zero trust encourages continuous monitoring of user behavior and network activity and promotes network segmentation and isolation. By actively monitoring for suspicious behavior or anomalies, organizations can detect and respond to potential attacks in real time, minimizing the damage caused. By separating critical systems and data from less secure areas, organizations can limit the spread of deepfake content and prevent it from infiltrating sensitive areas. Lastly, zero trust protects data at all stages, both in transit and at rest. By implementing strong encryption and data protection measures, organizations can safeguard their data from being manipulated or tampered with to create deepfakes.
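As a toy illustration of what continuous monitoring means in practice, the sketch below flags activity that deviates sharply from a historical baseline using a simple z-score. The baseline data, the observed spike, and the alert threshold are all invented for demonstration; real monitoring platforms use far richer signals and tuned models.

    import statistics

    # Hypothetical baseline: failed login attempts per hour over the past week.
    baseline = [2, 3, 1, 4, 2, 3, 2, 5, 3, 2, 4, 3]
    current_hour = 19  # observed failed logins this hour (made-up spike)

    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    z = (current_hour - mean) / stdev  # how unusual is this observation?

    THRESHOLD = 3.0  # arbitrary cutoff; real systems tune this per signal
    if z > THRESHOLD:
        print(f"ALERT: failed logins at z={z:.1f} above baseline; investigate")
    else:
        print(f"Activity within normal range (z={z:.1f})")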

In addition, businesses should proactively employ advanced monitoring and detection technologies, such as AI-based tools and algorithms, to identify anomalies in audio, video, or image files that may indicate the presence of deepfakes. In fact, according to the same KPMG survey, 50% of cybersecurity leaders are already using AI and advanced analytics to predict potential threats. Other proactive measures include collaborating and sharing information with regulatory agencies to leverage their expertise and resources, which are critical for developing effective policies that help safeguard critical systems against emerging threats.
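To make the detection pipeline concrete, the sketch below samples frames from an incoming video with OpenCV and passes each to a scoring function. The score_frame stub and the quarantine handler are placeholders for an actual trained detection model and downstream workflow, which the article does not specify; only the frame-sampling plumbing is shown.

    import cv2  # OpenCV: pip install opencv-python

    def score_frame(frame) -> float:
        """Placeholder for a real deepfake-detection model. A production
        system would run a trained classifier (e.g., a CNN scoring facial
        blending artifacts) here; this stub returns 0.0 for illustration."""
        return 0.0

    def screen_video(path: str, sample_every_s: float = 1.0,
                     threshold: float = 0.8) -> bool:
        """Sample frames from a video and flag it if any sampled frame's
        manipulation score exceeds the threshold."""
        cap = cv2.VideoCapture(path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
        step = max(1, int(fps * sample_every_s))
        index, flagged = 0, False
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % step == 0 and score_frame(frame) > threshold:
                flagged = True
                break
            index += 1
        cap.release()
        return flagged

    # Hypothetical usage on an incoming attachment:
    # if screen_video("incoming_message.mp4"):
    #     quarantine_and_alert()  # made-up downstream handler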

Leaders should also prioritize developing an incident response plan that specifically outlines the steps to take if a deepfake attack occurs, including communication protocols, legal considerations, and technical countermeasures.

Finally, regular organization-wide system updates and patching are crucial to maintaining a strong defense against deepfakes. Keeping all software, applications, and systems up to date with the latest security patches helps protect against known vulnerabilities that cyber criminals could exploit.

Overall, the rise of new technology such as deepfakes presents a significant threat to businesses and financial entities. The scale at which deepfake attacks can cause financial harm, reputational damage, and data breaches should not be underestimated. As AI technology continues to advance, so, too, will the capabilities of deepfakes. Hackers will undoubtedly find new and innovative ways to exploit this technology for their malicious purposes. Businesses and financial entities need to be vigilant by collaborating with cybersecurity experts, researchers, and law enforcement agencies to stay updated on the latest deepfake techniques and countermeasures.

In this ever-evolving landscape of AI and cybersecurity, it is essential to remain proactive and adaptive. By staying informed, implementing best practices, and leveraging the power of AI for defense, businesses and financial entities can mitigate the risks posed by deepfake attacks and safeguard their operations, reputation, and stakeholders’ trust.

Matthew P. “Matt” Miller is a principal in the New York office of KPMG LLP’s Advisory Services practice and is the U.S. Cyber Security Services banking industry lead. With 20-plus years of experience, Miller’s focus areas include insider threat and internal fraud, third-party risk, quantitative and qualitative risk assessment, and incident management. He holds a Bachelor of Science in Computer Science and Business from the University of Puget Sound.

Reprinted with permission of The Georgia Society of CPAs.
