The Dark Side of AI: Previewing Criminal Uses

Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development

Threats Include Social Engineering, Insider Trading, Face-Seeking Assassin Drones


November 20, 2020    

Advertisement for a real-time voice cloning tool (Source: “Malicious Uses and Abuses of Artificial Intelligence”)

“Has anyone witnessed any examples of criminals abusing artificial intelligence?”



That’s a question security firms have been raising lately. And a new public/private report into AI and ML identifies likely ways in which such attacks might occur – and offers examples of threats that are already emerging.


“Criminals are likely to make use of AI to facilitate and improve their attacks.” 

The most likely criminal use cases will involve “AI as a service” offerings, as well as AI-enabled or AI-supported offerings, as part of the broader cybercrime-as-a-service ecosystem. That’s according to the EU’s law enforcement intelligence agency, Europol; the United Nations Interregional Crime and Justice Research Institute, or UNICRI; and Tokyo-based security firm Trend Micro, which prepared the joint report: “Malicious Uses and Abuses of Artificial Intelligence.”


AI refers to finding ways to make computers do things that would otherwise require human intelligence – such as speech and facial recognition or language translation. A subfield of AI, called machine learning, involves applying algorithms to help systems continually refine their success rate.



Defined: AI and ML (Source: “Malicious Uses and Abuses of Artificial Intelligence”)

Criminals’ Top Goal: Profit


If that’s the high level, the applied level is that criminals have never shied away from finding innovative ways to earn an illicit profit, be it via social engineering refinements, new business models or adopting new types of technology (see: Cybercrime: 12 Top Tactics and Trends).


And AI is no exception. “Criminals are likely to make use of AI to facilitate and improve their attacks by maximizing opportunities for profit within a shorter period, exploiting more victims and creating new, innovative criminal business models – all the while reducing their chances of being caught,” according to the report.


Thankfully, it’s not all doom and gloom. “AI promises the world greater efficiency, automation and autonomy,” says Edvardas Šileris, who heads Europol’s European Cybercrime Center, aka EC3. “At a time where the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology.”


Emerging Concerns


The new report describes some emerging law enforcement and cybersecurity concerns about AI and ML, including:





  • AI-supported hacking: Already, Russian-language cybercrime forums are advertising a rentable tool called XEvil 4, which uses neural networks to bypass CAPTCHA security checks. Another tool, Pwnagotchi 1.0.0, uses a neural network model to improve its Wi-Fi hacking performance. “When the system successfully de-authenticates Wi-Fi credentials, it gets rewarded and learns to autonomously improve its operation,” according to Trend Micro.

  • AI-assisted password guessing: For credential stuffing, Trend Micro says it found a GitHub repository earlier this year with an AI-based tool “that can analyze a large dataset of passwords retrieved from data leaks” and predict how users will alter and update their passwords in the future, such as changing “hello123” to “h@llo123,” and then to “h@llo!23.” Such capabilities could increase the effectiveness of password-guessing tools, such as John the Ripper and HashCat.

  • Small assassination drones: AI-powered facial recognition drones carrying a gram of explosives are now being developed, the report warns. “These drones are specifically for micro-targeted or single-person bombings. They are also usually operated via cellular internet and designed to look like insects or small birds. It is safe to assume that this technology will be used by criminals in the near future.”

  • Insider trading: Criminals already attempt to profit from insider knowledge. But banking insiders, in particular, could create shadow AI models that cash in, based on internal knowledge about big trades planned or executed by their organization, all while keeping the illicit trades small enough to avoid controls designed to detect money laundering, terrorism financing or insider trades.

  • Human impersonation on social networks: AI can be used to create bots that resemble actual humans. One AI-enhanced bot being advertised on the Null cybercrime forum claims to be able to “mimic several Spotify users simultaneously” while using proxies to avoid detection, Trend Micro says. “This bot increases streaming counts – and subsequently, monetization – for specific songs. To further evade detection, it also creates playlists with other songs that follow human-like musical tastes rather than playlists with random songs, as the latter might hint at bot-like behavior.”



  • Deepfakes: In 2018, Reddit banned images and videos in which a celebrity’s face was superimposed over explicit content. Since then, however, a variety of tools have made it easier to generate such content. Although a number of social media platforms have banned deepfakes and pledged to maintain defenses to spot and block them, concerns remain. Election security experts, for example, have warned that they could be used as part of disinformation campaigns.
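The password-prediction idea described above – users tweaking an old password in small, predictable ways – can be sketched with a handful of rules, in the spirit of the “mangling rules” long used by tools such as John the Ripper and HashCat. The rules and substitution table below are illustrative assumptions, not the logic of any real tool:

```python
# Illustrative sketch of rule-based password mutation. The substitution
# table and rules are assumptions for illustration only.

LEET = {"a": "@", "e": "3", "i": "!", "o": "0", "s": "$"}

def mutations(password: str):
    """Yield simple variants a user might pick when forced to update a password."""
    yield password + "!"                  # append a common symbol
    yield password.capitalize()           # capitalize the first letter
    for i, ch in enumerate(password):     # single-character leetspeak swaps
        if ch in LEET:
            yield password[:i] + LEET[ch] + password[i + 1:]

for candidate in mutations("hello123"):
    print(candidate)
```

An ML-based tool differs from these fixed rules in that it learns which mutations real users actually favor from leaked password datasets, letting it rank candidates by likelihood instead of trying them all.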


Criminals Keep Seeking Small Improvements


The attacks described in the paper are largely theoretical. Recently, Philipp Amann, head of strategy for Europol’s EC3, told me that there are as yet few known criminal cases involving AI and ML.


In one case, “criminals allegedly used an online tool to emulate the voice of the CEO,” says Europol’s Philipp Amann



Even criminal uptake of deepfakes has been scant. “The main use of deepfakes still overwhelmingly appears to be for non-consensual pornographic purposes,” according to the report. It cites research from last year by the Amsterdam-based AI firm Deeptrace, which “found 15,000 deepfake videos online, of which 96% were pornographic and 99% of which used mapped faces of female celebrities onto pornographic actors.”


Maybe that’s because criminals are still seeking good use cases?


For example, Amann told me that one known case allegedly involved “an online tool to emulate the voice of the CEO” at an organization. A fraudster appears to have phoned a senior financial officer based in the U.K. The officer reported that the voice on the other end sounded like a native German speaker who self-identified as the CEO and was seeking an urgent money transfer.



Access to such tools makes it easier for criminals to potentially increase the success of their attacks by making their social engineering more effective. “It’s just another way of convincing you that you actually are talking to your counterpart,” Amann said. “So the social engineering is something that we need to be aware of and which requires training, awareness and education, on an ongoing basis.”


‘Malicious Innovations’


Criminals rarely reinvent the wheel. Ransomware, for example, is just the latest variation on the old kidnapping-and-ransom racket (see: Ransomware: Old Racket, New Look).


Expect criminals to use anything that makes the latest attacks more automated and easier to execute at scale, less expensive, and more reliable and effective.


“Cybercriminals have always been early adopters of the latest technology and AI is no different,” says Martin Roesler, head of forward-looking threat research at Trend Micro. “It is already being used for password guessing, CAPTCHA-breaking and voice cloning, and there are many more malicious innovations in the works.”
