Google says its AI-based bug hunter found 20 security vulnerabilities
techcrunch.com - Google’s AI-powered bug hunter has just reported its first batch of security vulnerabilities. Heather Adkins, Google’s vice president of security, announced Monday that its LLM-based vulnerability researcher Big Sleep found and reported 20 flaws in popular open source software, mostly in projects such as the audio and video library FFmpeg and the image-editing suite ImageMagick. Big Sleep is developed jointly by DeepMind, the company’s AI division, and Project Zero, its elite team of hackers.

Because the vulnerabilities are not yet fixed, Google is withholding details of their impact and severity, a standard policy while patches are pending. But the simple fact that Big Sleep found these vulnerabilities at all is significant: it shows these tools are starting to get real results, even with a human involved in the process.

“To ensure high quality and actionable reports, we have a human expert in the loop before reporting, but each vulnerability was found and reproduced by the AI agent without human intervention,” Google spokesperson Kimberly Samra told TechCrunch.

Royal Hansen, Google’s vice president of engineering, wrote on X that the findings demonstrate “a new frontier in automated vulnerability discovery.” LLM-powered tools that can find vulnerabilities are already a reality; besides Big Sleep, there are RunSybil and XBOW, among others.
AI slop and fake reports are coming for your bug bounty programs
techcrunch.com 24.07 - “We’re getting a lot of stuff that looks like gold, but it’s actually just crap,” said the founder of one security testing firm. AI-generated security vulnerability reports are already having an effect on bug hunting, for better and worse.

So-called AI slop, meaning low-quality LLM-generated images, videos, and text, has taken over the internet in the last couple of years, polluting websites, social media platforms, at least one newspaper, and even real-world events. The world of cybersecurity is not immune to this problem, either. In the last year, people across the industry have raised concerns about AI slop bug bounty reports: reports that claim to have found vulnerabilities that do not actually exist, because a large language model simply invented the flaw and packaged it into a professional-looking writeup.

“People are receiving reports that sound reasonable, they look technically correct. And then you end up digging into them, trying to figure out, ‘oh no, where is this vulnerability?’,” Vlad Ionescu, the co-founder and CTO of RunSybil, a startup that develops AI-powered bug hunters, told TechCrunch. “It turns out it was just a hallucination all along. The technical details were just made up by the LLM,” said Ionescu.

Ionescu, who used to work on Meta’s red team, tasked with hacking the company from the inside, explained that one of the issues is that LLMs are designed to be helpful and give positive responses. “If you ask it for a report, it’s going to give you a report. And then people will copy and paste these into the bug bounty platforms and overwhelm the platforms themselves, overwhelm the customers, and you get into this frustrating situation,” said Ionescu. “That’s the problem people are running into, is we’re getting a lot of stuff that looks like gold, but it’s actually just crap,” said Ionescu.

Just in the last year, there have been real-world examples of this. Harry Sintonen, a security researcher, revealed that the open source project Curl received a fake report. “The attacker miscalculated badly,” Sintonen wrote in a post on Mastodon. “Curl can smell AI slop from miles away.” In response to Sintonen’s post, Benjamin Piouffle of Open Collective, a tech platform for nonprofits, said they have the same problem: their inbox is “flooded with AI garbage.” One open source developer, who maintains the CycloneDX project on GitHub, pulled their bug bounty entirely earlier this year after receiving “almost entirely AI slop reports.”

The leading bug bounty platforms, which essentially act as intermediaries between bug bounty hackers and the companies willing to pay and reward them for finding flaws in their products and software, are also seeing a spike in AI-generated reports, TechCrunch has learned.