How ChatGPT can turn anyone into a ransomware and malware threat actor
Ever since OpenAI launched ChatGPT at the end of November, commentators on all sides have been concerned about the impact AI-driven content-creation will have, particularly in the realm of cybersecurity. In fact, many researchers are concerned that generative AI solutions will democratize cybercrime.
Microsoft Plans to Eliminate Face Analysis Tools in Push for ‘Responsible A.I.’
For years, activists and academics have been raising concerns that facial analysis software that claims to be able to identify a person’s age, gender and emotional state can be biased, unreliable or invasive — and shouldn’t be sold.
Large Language Models (LLM) have made amazing progress in recent years. Most recently, they have demonstrated to answer natural language questions at a surprising performance level. In addition, by clever prompting, these models can change their behavior. In this way, these models blur the line between data and instruction. From "traditional" cybersecurity, we know that this is a problem. The importance of security boundaries between trusted and untrusted inputs for LLMs was underestimated. We show that Prompt Injection is a serious security threat that needs to be addressed as models are deployed to new use-cases and interface with more systems. [PDF DOC] https://arxiv.org/pdf/2302.12173.pdf
How ChatGPT can turn anyone into a ransomware and malware threat actor
Ever since OpenAI launched ChatGPT at the end of November, commentators on all sides have been concerned about the impact AI-driven content-creation will have, particularly in the realm of cybersecurity. In fact, many researchers are concerned that generative AI solutions will democratize cybercrime.
Microsoft Plans to Eliminate Face Analysis Tools in Push for ‘Responsible A.I.’
For years, activists and academics have been raising concerns that facial analysis software that claims to be able to identify a person’s age, gender and emotional state can be biased, unreliable or invasive — and shouldn’t be sold.
Microsoft Plans to Eliminate Face Analysis Tools in Push for ‘Responsible A.I.’
For years, activists and academics have been raising concerns that facial analysis software that claims to be able to identify a person’s age, gender and emotional state can be biased, unreliable or invasive — and shouldn’t be sold.