ChatGPT: Russians seek to access it for malicious purposes

While access to OpenAI's AI services is blocked in Russia, hackers in the country are trading tips on forums about how to reach them anyway.


As Futura mentioned yesterday, hackers are currently testing ChatGPT's ability to generate malicious code. Experts from Check Point have already discovered several types of AI-generated scripts on dark web hacker forums. For now, these scripts are rudimentary and still far from the arsenal used in the most sophisticated cyberattacks. But Check Point has raised a quite different discovery around ChatGPT: combing through the forums, its researchers identified exchanges led by Russian hackers whose questions revolved around accessing the chatbot from Russian territory.


TO OPEN AN ACCOUNT ON CHATGPT, YOU MUST RECEIVE A CODE BY SMS. BUT RUSSIAN MOBILE NUMBERS ARE BLOCKED BY OPENAI. TO GET AROUND THIS BARRIER, RUSSIAN SERVICES GENERATE TEMPORARY PHONE NUMBERS FOR ONLY A FEW RUBLES. © CHECK POINT
Hackers seek to circumvent blocking in Russia
In a discussion thread about using ChatGPT to generate malware, hackers shared information on how to circumvent the geoblocking so they could use the AI to create malicious code. By the admission of a Check Point spokesperson, bypassing this geofencing is not complicated. Hackers explained that they could access the service by buying existing user accounts, paid for, unsurprisingly, with stolen bank card data. But creating an account also requires a phone number, so the participants pointed to a Russian temporary SMS number service that can get past the various blocking measures.

ChatGPT: hackers are already using it to create malware

Hackers were quick to use ChatGPT for malicious purposes, as experts from cybersecurity firm Check Point have just discovered.
Article by Sylvain Biget, published on 01/17/2023

Since it opened to the general public at the beginning of December, ChatGPT has been touted as the next internet revolution. The chatbot amuses, impresses and is regularly pushed to its limits. We already know that it can lie with aplomb; what we are now discovering is that it can also work on the dark side.

Cybersecurity firm Check Point Research has discovered that OpenAI's AI is being used by hackers to craft malicious code. The researchers had previously tested ChatGPT themselves to build an entire attack chain, starting with a convincing phishing email and ending with the injection of malicious code. The team reasoned that if they had had this idea, cybercriminals would have had it too.
Extending their analysis to the large hacker communities, they were able to confirm that the first cases of malicious use of ChatGPT are already underway. The silver lining is that those involved are not the most experienced hackers, but cybercriminals with no particular development skills. Suffice to say, these are not sophisticated malware creations; but given the potential, AI may well soon be used in the development of advanced hacking tools.
On hacker forums, participants are actively testing ChatGPT to recreate known malware. The lab's experts dissected one script created by ChatGPT that searches a hard disk for common file formats, then copies, compresses and sends the files to a server controlled by the attackers.

ON THIS DARK WEB FORUM, A WELL-KNOWN HACKER CALLED USDOD PUBLISHED A MULTI-LAYER ENCRYPTION TOOL. HE ADMITTED HAVING BEEN HELPED BY CHATGPT'S AI TO COMPLETE ITS DEVELOPMENT. © CHECK POINT


Hackers hijack ChatGPT

In another example, Java code was used to download a network client and force it to run via the Windows administration console. The script could then fetch any malware. And there is more: a Python script capable of performing complex encryption operations. In principle, the creation of such code is neither good nor bad, but when it appears on a hacker forum, one can legitimately assume that this encryption system is intended to be integrated into ransomware.
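For illustration only, here is a minimal sketch of what a generic Python file-encryption routine of this dual-use kind might look like, using the widely available cryptography library. This is not the script found by Check Point; the function names and the example file are hypothetical assumptions.

# Minimal, benign sketch of a Python encryption routine, assuming the
# "cryptography" library is installed (pip install cryptography).
# File names and key handling here are illustrative only.
from cryptography.fernet import Fernet

def encrypt_file(path: str, key: bytes) -> None:
    """Encrypt the contents of `path` into `path + '.enc'` using Fernet
    (symmetric, AES-based authenticated encryption)."""
    fernet = Fernet(key)
    with open(path, "rb") as f:
        plaintext = f.read()
    with open(path + ".enc", "wb") as f:
        f.write(fernet.encrypt(plaintext))

if __name__ == "__main__":
    key = Fernet.generate_key()      # in legitimate use, store this key safely
    encrypt_file("report.txt", key)  # hypothetical example file

As the article notes, code like this is perfectly ordinary on its own; what matters is the context in which it is deployed.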
Finally, the researchers also saw discussions and tests hijacking ChatGPT, not to generate malicious code, but to create an automated illicit trading platform for the dark web. Overall, what emerges from these investigations is that cybercriminals are doing with ChatGPT much what everyone else is doing in every other field: they discuss and test the AI to see how it could help them and whether its performance is useful for their nefarious work.
For its part, like any computer program, the AI only does what it is asked to do. And it does not necessarily do it well, as a study from Stanford University in the United States shows. Its findings indicate that when developers use AIs to write code, they tend to produce flaws that would not necessarily exist if a human had written the code.
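As a purely hypothetical illustration of the kind of flaw such studies describe (the snippet below is not taken from the Stanford paper), an assistant might suggest Python's general-purpose random module to generate a session token, where a security-aware developer would reach for the secrets module:

import random
import string
import secrets

# Flawed pattern: random is a predictable pseudo-random generator,
# unsuitable for security-sensitive values such as session tokens.
def weak_token(n: int = 16) -> str:
    return "".join(random.choice(string.ascii_letters) for _ in range(n))

# Safer pattern: secrets draws from the operating system's
# cryptographically secure random source.
def strong_token(n: int = 16) -> str:
    return secrets.token_urlsafe(n)

Both functions run and look superficially similar, which is precisely why such flaws slip through when generated code is accepted without review.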
