Found 80 bookmarks
Custom sorting
Large Language Models and Elections
Large Language Models and Elections
Earlier this week, the Republican National Committee released a video that it claims was “built entirely with AI imagery.” The content of the ad isn’t especially novel—a dystopian vision of America under a second term with President Joe Biden—but the deliberate emphasis on the technology used to create it stands out: It’s a “Daisy” moment for the 2020s.
·schneier.com·
Large Language Models and Elections
Prompt Injections are bad, mkay?
Prompt Injections are bad, mkay?
Large Language Models (LLM) have made amazing progress in recent years. Most recently, they have demonstrated to answer natural language questions at a surprising performance level. In addition, by clever prompting, these models can change their behavior. In this way, these models blur the line between data and instruction. From "traditional" cybersecurity, we know that this is a problem. The importance of security boundaries between trusted and untrusted inputs for LLMs was underestimated. We show that Prompt Injection is a serious security threat that needs to be addressed as models are deployed to new use-cases and interface with more systems. [PDF DOC] https://arxiv.org/pdf/2302.12173.pdf
·greshake.github.io·
Prompt Injections are bad, mkay?
How ChatGPT can turn anyone into a ransomware and malware threat actor  
How ChatGPT can turn anyone into a ransomware and malware threat actor  
Ever since OpenAI launched ChatGPT at the end of November, commentators on all sides have been concerned about the impact AI-driven content-creation will have, particularly in the realm of cybersecurity. In fact, many researchers are concerned that generative AI solutions will democratize cybercrime.
·venturebeat-com.cdn.ampproject.org·
How ChatGPT can turn anyone into a ransomware and malware threat actor  
Microsoft Plans to Eliminate Face Analysis Tools in Push for ‘Responsible A.I.’
Microsoft Plans to Eliminate Face Analysis Tools in Push for ‘Responsible A.I.’
For years, activists and academics have been raising concerns that facial analysis software that claims to be able to identify a person’s age, gender and emotional state can be biased, unreliable or invasive — and shouldn’t be sold.
·nytimes.com·
Microsoft Plans to Eliminate Face Analysis Tools in Push for ‘Responsible A.I.’
Prompt Injections are bad, mkay?
Prompt Injections are bad, mkay?
Large Language Models (LLM) have made amazing progress in recent years. Most recently, they have demonstrated to answer natural language questions at a surprising performance level. In addition, by clever prompting, these models can change their behavior. In this way, these models blur the line between data and instruction. From "traditional" cybersecurity, we know that this is a problem. The importance of security boundaries between trusted and untrusted inputs for LLMs was underestimated. We show that Prompt Injection is a serious security threat that needs to be addressed as models are deployed to new use-cases and interface with more systems. [PDF DOC] https://arxiv.org/pdf/2302.12173.pdf
·greshake.github.io·
Prompt Injections are bad, mkay?
How ChatGPT can turn anyone into a ransomware and malware threat actor  
How ChatGPT can turn anyone into a ransomware and malware threat actor  
Ever since OpenAI launched ChatGPT at the end of November, commentators on all sides have been concerned about the impact AI-driven content-creation will have, particularly in the realm of cybersecurity. In fact, many researchers are concerned that generative AI solutions will democratize cybercrime.
·venturebeat-com.cdn.ampproject.org·
How ChatGPT can turn anyone into a ransomware and malware threat actor  
Microsoft Plans to Eliminate Face Analysis Tools in Push for ‘Responsible A.I.’
Microsoft Plans to Eliminate Face Analysis Tools in Push for ‘Responsible A.I.’
For years, activists and academics have been raising concerns that facial analysis software that claims to be able to identify a person’s age, gender and emotional state can be biased, unreliable or invasive — and shouldn’t be sold.
·nytimes.com·
Microsoft Plans to Eliminate Face Analysis Tools in Push for ‘Responsible A.I.’
Microsoft Plans to Eliminate Face Analysis Tools in Push for ‘Responsible A.I.’
Microsoft Plans to Eliminate Face Analysis Tools in Push for ‘Responsible A.I.’
For years, activists and academics have been raising concerns that facial analysis software that claims to be able to identify a person’s age, gender and emotional state can be biased, unreliable or invasive — and shouldn’t be sold.
·nytimes.com·
Microsoft Plans to Eliminate Face Analysis Tools in Push for ‘Responsible A.I.’