With security, the battle between good and evil is a perpetually swinging pendulum. Traditionally, the shrewdness of an attack has depended on the skill of the attacker and the sophistication of the arsenal. The same holds on the protection side of the equation—over $200B is poured into cybersecurity investments and personnel training year over year.
It is fair to say that generative AI has turned this paradigm on its head. Now, an unskilled hacker with little sophistication can leverage Gen-AI "crowdsourced" constructs to become significantly more destructive, with little to no investment or training. This expands the threat surface dramatically.
Consider a recent example that one of VMware’s security technologists shared, using the generally available ChatGPT. When he asked ChatGPT to create exploit code for a vulnerability, the request was appropriately denied.
Note that the software understands the malicious nature of the request and invokes its ethical underpinnings to justify the denial.
But what if you slightly shift the question’s tone and frame it as seeking “knowledge” instead?
What was previously denied is now easily granted with just a few keystrokes, and the exploit code is dished up.