Criminals may start using artificial intelligence (AI) technologies such as ChatGPT for fraud and other cybercrime, Europol has warned.


[Compiled by Lin Yuxuan / Comprehensive Report] Europol warned on the 27th that the rapidly evolving capabilities of chatbots such as ChatGPT are not only beneficial to human life but may also be exploited by bad actors. Criminals may begin to use artificial intelligence (AI) technology for scams and other cybercrimes, employing techniques such as phishing, disinformation, and malware.

ChatGPT, launched by U.S. start-up OpenAI in November last year, quickly attracted the attention of consumers, who were amazed by its ability to answer difficult questions clearly, compose poems and computer programs, and even pass exams.

However, Europol, based in The Hague, the Netherlands, said the prospect of criminals using such AI systems to commit crimes was worrying.


Some members of the U.S. Congress have publicly declared that they are "frightened by AI, especially unrestricted and unregulated AI," and proposed a draft resolution calling for stronger regulation of AI.

Europol's new Innovation Lab has examined the use of chatbots as a whole, but in a series of workshops it focused specifically on ChatGPT, because it is the most prominent and widely used.

The workshops found that criminals may use ChatGPT to greatly accelerate research into unfamiliar areas, from drafting fraud schemes to obtaining information on how to break into houses, carry out terrorist attacks, commit cybercrimes, or perpetrate child sexual abuse.

Chatbots' ability to produce convincingly human-like text is especially useful for phishing attacks, in which perpetrators lure users into clicking fraudulent email links in an attempt to steal their data.

In addition, ChatGPT's ability to quickly produce realistic-sounding text makes it an ideal tool for propaganda and the spread of disinformation, allowing criminals to generate and disseminate messages pushing a particular narrative with little effort.

ChatGPT can also be used to write computer code, which is especially useful for criminals who know little or nothing about programming and software development.

Europol pointed out that preliminary research by the U.S.-Israeli cyber threat intelligence firm Check Point Research showed that chatbots can be used to craft phishing emails and help infiltrate network systems.

Although ChatGPT has safeguards such as content moderation and will refuse to answer questions classified as harmful or biased, many hackers still evade these protections with carefully crafted prompts.

Not long ago, the cybersecurity platform GBHackers disclosed a fraud scheme in which hackers used ChatGPT to generate a complete scam script in a short time, quickly creating a "virtual persona": from the self-introduction and chat messages to elaborate "love letters" produced with a single click, the material led victims to believe they had "fallen in love," until they were ultimately defrauded of their money.

According to reports, this kind of deception is not actually new. The problem is that with AI's assistance, it is now difficult for people to tell whether the party on the other end of the screen is a human or a machine.