Since OpenAI’s ChatGPT debuted in November 2022, users have touted its wondrous, ultra-impressive know-how spanning a nearly endless range of subjects. In just a few short weeks, however, it has become clear that ChatGPT’s seemingly massive power is a double-edged sword. Cybercriminals can harness its capabilities for nefarious activities such as assembling malware to hijack apps, writing realistic impersonation emails (often used for phishing attacks), or compiling custom code that can be used to hack into organizations. Perhaps even more troubling, it now enables non-coders – people who were previously incapable of hacking – to jump right into the business of becoming cyber bad guys. Cheap and easy hacking tools such as phishing-as-a-service have been around for years, but ChatGPT takes simplicity for would-be threat actors to a whole new level.
What is ChatGPT?
Some call it conversational search (a potentially huge threat to Google). Others hail ChatGPT as the dawn of a new era of creativity – a personal AI assistant for anyone in the world who wants to enhance their writing, sharpen their creative thinking, or speed up their output by harnessing the power of collective knowledge.
According to Wikipedia, “ChatGPT is a chatbot built on top of OpenAI’s GPT-3 family of large language models, and is fine-tuned with both supervised and reinforcement learning techniques.” ChatGPT stands for “chat generative pre-trained transformer,” and it can be queried on everything from high school math problems to investing advice to developing automotive diagnostic checklists.
There are other AI tools on the market as well, such as Midjourney, an AI image-generating tool that can produce realistic images from text queries. Most AI chatbot tools are still in the early stages of real-world testing. In the not-too-distant future, mature AI tools could well become a disruptive industry, displacing search – or even entire categories of workers. Time will tell!
Is it ethical to tap into AI chatbots to perform work for you without letting others know what you are doing behind the scenes? Do users need to give credit to a chat service like ChatGPT if they use part or all of the results in a finished work, such as a patent filing, a homework assignment turned in to a teacher, or a novel submitted to a publishing house? Despite efforts to put moral and ethical limits in place, it’s been reported that skirting those hurdles can be relatively easy. Some people are very open about tapping into AI chatbots to create new works, such as the children’s book author who faced fierce backlash from the creative community after “writing” and “illustrating” a book using AI. Others, like hackers, tap into the power of AI without letting anyone know what they are doing.
How easy is it to use tools like ChatGPT?
You just have to know the correct questions to ask as well as HOW to ask them. It turns out the art of “chatting” with a chatbot is the deciding factor in the quality of the outcomes and information. Users can also refine an initial result with follow-up queries like, “Refine your answer by writing the content for an eighth-grade reading level, and create a simple to-do checklist so readers know the five most important topics to focus on.”
But what about the bad guys who might want to utilize ChatGPT to create mayhem in cybersecurity? ChatGPT can write code. Here is a brief example:
Query:
Please concisely suggest the first 1000 characters of code for a Zork-like text-based adventure game. Please refrain from leaving comment-outs.
Response:
I’m sorry, but creating a full text-based adventure game with only 1000 characters of code would be a challenging task and would not provide a complete or enjoyable experience for the player. However, I can provide you with an example of how you might start a text-based adventure game using p5.js, but keep in mind that this is not a complete game and additional code would be required to continue the game.
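To make the exchange above concrete, here is a minimal sketch of the kind of starter code a chatbot might produce for such a request. This is not an actual ChatGPT transcript, and it uses plain JavaScript rather than p5.js; the room names and descriptions are illustrative assumptions.

```javascript
// A minimal text-adventure starter: a map of rooms, a current location,
// and a function that moves the player between rooms.
const rooms = {
  clearing: {
    description: "You are in a forest clearing. Paths lead north and east.",
    exits: { north: "cave", east: "river" },
  },
  cave: {
    description: "You are in a dark cave. Daylight glimmers to the south.",
    exits: { south: "clearing" },
  },
  river: {
    description: "You stand at a rushing river. The clearing lies west.",
    exits: { west: "clearing" },
  },
};

let currentRoom = "clearing";

// Move in a direction if the current room has a matching exit.
function go(direction) {
  const exit = rooms[currentRoom].exits[direction];
  if (exit) {
    currentRoom = exit;
    return rooms[currentRoom].description;
  }
  return "You can't go that way.";
}

console.log(rooms[currentRoom].description);
console.log(go("north")); // moves the player into the cave
console.log(go("west"));  // no such exit from the cave
```

Even a fragment like this shows why the tool is attractive to non-coders: a working scaffold appears in seconds, ready to be extended by further prompts.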
ChatGPT can also debug code and make it safer and better than before. Plus, the tool only gets smarter and better the more people who use it. That translates not only into self-healing, but self-teaching for ChatGPT – meaning that both the good and the bad that comes along with such smart AI will grow.
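As a hypothetical illustration of that debugging ability (not an actual ChatGPT transcript), consider the classic off-by-one bug below. A buggy version such as `for (let i = 0; i <= items.length; i++)` reads one element past the end of the array; the corrected function a chatbot might suggest bounds the loop properly and guards against bad input. The function name and data shape are assumptions made for this example.

```javascript
// Corrected version: loop condition uses < (not <=) so the last index
// read is items.length - 1, and non-array input is rejected defensively.
function totalPrice(items) {
  if (!Array.isArray(items)) return 0;
  let total = 0;
  for (let i = 0; i < items.length; i++) {
    total += items[i].price;
  }
  return total;
}

console.log(totalPrice([{ price: 2 }, { price: 3 }])); // 5
console.log(totalPrice(null));                         // 0
```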
A hacker could use ChatGPT to probe for weaknesses in existing code. I won’t show that here because I don’t want to promote this type of behavior, but I am sure you can imagine how this might work.
So, how should organizations frame this wonderful new advancement in AI chat – and this newly evolving threat? And are there any new steps to take to mitigate risks?
Perhaps surprisingly, the short answer is probably no – at least for now. All the news surrounding ChatGPT’s potentially degenerate angles is simply an opportunity to underscore the need to implement the basics of cybersecurity well. Embrace a foundational approach by properly securing code and content from the outset – making security a core tenet of your business. Organizations need to protect their devices and applications before they fall victim and before an attack impacts the bottom line.
With approximately 75 percent of apps remaining mostly unprotected against attacks, the ever-smarter AI in today’s world is just another powerful tool for cybercriminals looking to take advantage of vulnerable organizations.
A few of the foundational approaches to take when building and deploying mobile apps to defend against cyber attacks include:
- Protect your mobile app during the development process (build security into your CI/CD mobile app development process)
- Make your app unappealing and too costly for criminals to attempt reverse engineering
- Ensure threat defenses extend from the app to the endpoint devices the app is deployed on (not just managed devices, but unmanaged consumer devices as well)
- Continually monitor your extended threat matrix to gain the visibility needed to properly respond
The fastest-growing enterprise security threat today comes from mobile apps and the billions of devices that connect to them. Verimatrix is one of the only cybersecurity vendors in the world that empowers its customers to defend against threats from unmanaged devices such as consumer smartphones, tablets, TVs, POS systems and other connected devices. Verimatrix XTD helps CISOs, SOC teams, engineers and mobile app developers prevent, detect, respond to and predict cyber attacks that originate from almost any connected device powered by an app. Verimatrix XTD enables customers to mitigate cybersecurity risks, safeguard consumer data, prevent compliance fines, and protect their valuable reputations.
See how we can help protect your business:
- Mobile applications and APIs
- Video content
- Digital payments