Criminals Use Generative Artificial Intelligence to Facilitate Financial Fraud
The FBI is warning the public that criminals are exploiting generative artificial intelligence (AI) to commit fraud on a larger scale and to increase the believability of their schemes. Generative AI reduces the time and effort criminals must expend to deceive their targets. Generative AI takes what it has learned from examples input by a user and synthesizes something entirely new based on that information. These tools assist with content creation and can correct for human errors that might otherwise serve as warning signs of fraud. The creation or distribution of synthetic content is not inherently illegal; however, synthetic content can be used to facilitate crimes such as fraud and extortion. Since it can be difficult to identify when content is AI-generated, the FBI is providing the following examples of how criminals may use generative AI in their fraud schemes, to increase public recognition and scrutiny.
Exclusive: Chinese researchers develop AI model for military use on back of Meta's Llama
* Papers show China reworked Llama model for military tool
* China's top PLA-linked Academy of Military Science involved
* Meta says PLA 'unauthorised' to use Llama model
* Pentagon says it is monitoring competitors' AI capabilities
MITRE’s AI Incident Sharing initiative helps organizations share and receive data on real-world AI incidents. Non-profit technology and R&D company MITRE has introduced a new mechanism that enables organizations to share intelligence on real-world AI-related incidents. Shaped in collaboration with over 15 companies, the new AI Incident Sharing initiative aims to increase community knowledge of threats and defenses involving AI-enabled systems.
Critical flaw in NVIDIA Container Toolkit allows full host takeover
A critical vulnerability in NVIDIA Container Toolkit impacts all AI applications in a cloud or on-premise environment that rely on it to access GPU resources.
In China, AI transformed Ukrainian YouTuber into a Russian
Olga Loiek, a University of Pennsylvania student, was looking for an audience on the internet – just not like this. Shortly after launching a YouTube channel in November last year, Loiek, a 21-year-old from Ukraine, found her image had been taken and spun through artificial intelligence to create alter egos on Chinese social media platforms. Her digital doppelgangers – like "Natasha" – claimed to be Russian women fluent in Chinese who wanted to thank China for its support of Russia and make a little money on the side selling products such as Russian candies.
Breaking: Meta halts AI rollout in Europe after ‘request’ from Irish data protection authorities
Facebook and Instagram's parent company Meta is pausing its plans to roll out artificial intelligence tools in Europe, following a request from Ireland's Data Protection Commission (DPC), the firm said in a Friday (14 June) blogpost.
Here’s what to know about Adobe’s Terms of Use updates
We recently rolled out a re-acceptance of our Terms of Use, which has led to concerns about what these terms are and what they mean for our customers. This has caused us to reflect on the language we use in our Terms, and the opportunity we have to be clearer and address the concerns raised by the community. Over the next few days, we will speak to our customers, with a plan to roll out updated changes by June 18, 2024.
Private Cloud Compute: A new frontier for AI privacy in the cloud
Secure and private AI processing in the cloud poses a formidable new challenge. To support advanced features of Apple Intelligence with larger foundation models, we created Private Cloud Compute (PCC), a groundbreaking cloud intelligence system designed specifically for private AI processing. Built with custom Apple silicon and a hardened operating system, Private Cloud Compute extends the industry-leading security and privacy of Apple devices into the cloud, making sure that personal user data sent to PCC isn’t accessible to anyone other than the user — not even to Apple. We believe Private Cloud Compute is the most advanced security architecture ever deployed for cloud AI compute at scale.
I don’t think it’s an exaggeration to predict that artificial intelligence will affect every aspect of our society. Not by doing new things. But mostly by doing things that are already being done by humans, perfectly competently. Replacing humans with AIs isn’t necessarily interesting. But when an AI takes over a human task, the task changes.
‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets
Israeli intelligence sources reveal use of ‘Lavender’ system in Gaza war and claim permission given to kill civilians in pursuit of low-ranking militants
AI bots hallucinate software packages and devs download them
Not only that, but someone, having spotted this recurring hallucination, had turned that made-up dependency into a real one, which was subsequently downloaded and installed thousands of times by developers as a result of the AI's bad advice, we've learned. If the package had been laced with actual malware, rather than being a benign test, the results could have been disastrous.
Lasso Security's recent research on AI Package Hallucinations extends the attack technique to GPT-3.5-Turbo, GPT-4, Gemini Pro (Bard), and Coral (Cohere).
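The attack described above works because developers often install whatever dependency name an assistant suggests, without checking that a human has ever vetted it. A minimal defensive sketch is to screen suggested package names against a locally reviewed allowlist before installing; the file name `approved-packages.txt` and the helper functions below are illustrative assumptions, not anything from the articles above.

```python
# Sketch: vet an AI-suggested dependency name against a locally
# maintained allowlist before installing it. The allowlist file name
# and helper names are hypothetical, for illustration only.

def load_allowlist(path="approved-packages.txt"):
    """Read one approved package name per line; skip blanks and comments."""
    try:
        with open(path) as f:
            return {line.strip().lower() for line in f
                    if line.strip() and not line.startswith("#")}
    except FileNotFoundError:
        return set()  # no allowlist means nothing is pre-approved

def is_safe_to_install(name, allowlist):
    """Install only dependencies a human has explicitly reviewed."""
    return name.strip().lower() in allowlist

# Stand-in for a reviewed list loaded via load_allowlist().
allowlist = {"requests", "numpy"}
print(is_safe_to_install("Requests", allowlist))        # True
print(is_safe_to_install("helpful-ai-utils", allowlist)) # False: unreviewed, possibly hallucinated
```

The design choice here is deliberate: a default-deny allowlist fails closed, so a hallucinated name that an attacker later registers on a public index is still rejected until a human adds it.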
Microsoft Copilot for Security: General Availability details
Microsoft Copilot for Security will be generally available on April 1st. Read this blog to learn about new productivity research, product capabilities, …
World’s first major act to regulate AI passed by European lawmakers
The European Union Parliament on Wednesday approved the world's first major set of regulatory ground rules to govern the mediatized artificial intelligence at the forefront of tech investment.
Microsoft AI engineer says Copilot Designer creates disturbing images
Shane Jones, who’s worked at Microsoft for six years, has been testing the company’s AI image generator in his free time and told CNBC he is disturbed by his findings. He’s warned Microsoft of the sexual and violent content that the product, Copilot Designer, is creating, but said the company isn’t taking appropriate action. On Wednesday, Jones escalated the matter, sending letters to FTC Chair Lina Khan and to Microsoft’s board, which were viewed by CNBC.