More online criminals are weaponising artificial intelligence to steal from Australian businesses, including using the technology to create deepfakes of employees’ voices and appearances.
Small and medium-sized businesses were at highest risk from the emerging trend, but almost all Australian organisations had encountered AI-based online attacks over the past year, a report has found.
Security firm SoSafe has released the findings in its Cybercrime Trends report, which also found Australia is one of the nations most often targeted by AI-generated attacks.
The warnings come one month after some of the nation’s biggest superannuation firms were hit with a co-ordinated online attack that saw $750,000 stolen from personal accounts.

Sydney animal vaccine firm Virbac has been regularly targeted by hackers who use AI to create realistic invoices.
Chris Mousley, a supply chain analytics specialist at the firm, said Virbac had been forced to educate staff and change its process for paying suppliers to avoid being robbed by cyber criminals.
“I get at least five to 10 of these a month and they’re extremely convincing commercial documents that look like pro-forma invoices,” Mr Mousley said.
“These are very specific documents and they’re AI-generated to look like companies we would deal with.”
The fake invoices were often for specific raw materials, he said, which indicated criminals were specifically targeting the firm and its industry.
AI software was not only being used to improve the grammar and apparent legitimacy of email scams, but to craft targeted and sophisticated attacks across different platforms, SoSafe human-centric security advocate Jacqueline Jayne said.
“We’ve had deepfakes using people’s voices to pretend to be someone on the phone and it is incredibly difficult, unless you have a code word, to be able to tell are we talking to (a colleague) or is this someone pretending to be her,” she said.
“It’s getting harder and harder to pick the difference.”

The Cybercrime Trends report, released on Friday, was prepared by research firm Censuswide and surveyed 500 IT workers across nine countries.
Despite the prevalence of these attacks, only one in four IT workers rated their ability to detect AI-based attacks as “high”.
Most Australian organisations had experienced attacks delivered to workers’ personal devices such as phones and laptops, the report found.
Companies were also targeted by “multi-channel attacks” that used their email and social media accounts, messaging apps and voice calls.
Educating employees in how to detect deepfake scams would be vital to shutting down the attacks, particularly in small and medium-sized businesses that often did not deploy the same level of cybersecurity, Ms Jayne said.
“We’re going to see more AI-assisted and driven attacks in Australia and globally,” she told AAP.
“One way to address it is to think about what humans are doing, how they’re responding to (attacks), and how we can help them to think before they do anything.”
Companies needed to educate staff in how to scrutinise incoming communication carefully, Mr Mousley said.
This included looking for hints such as misspellings and different payment methods, and running credit checks on local firms.
“You can’t be complacent,” he said.
“We didn’t get any of these 12 months ago.”
Australian Associated Press is the beating heart of Australian news. AAP is Australia’s only independent national newswire and has been delivering accurate, reliable and fast news content to the media industry, government and corporate sector for 85 years. We keep Australia informed.