data privacy/ethics

71 bookmarks
Suno's mission is to make it possible for everyone to make music. We imagine a future where music is a bigger, more valuable, and more meaningful part of people's lives than it even is today. Technology enables a future where the whole world can explore, create, and be active…
— Suno (@suno_ai_)
Suno's mission is to make it possible for everyone to make music. We imagine a future where music is a bigger, more valuable, and more meaningful part of people's lives than it even is today. Technology enables a future where the whole world can explore, create, and be active participants in an art form most have only ever consumed. From professional musicians seeking inspiration to friends and family writing songs for each other, we are exploring new ways to create, listen to, and experience music. So far, more than 12 million people are engaging with music in new ways that wouldn't be possible without Suno. We see this as early but promising progress.

Major record labels see this vision as a threat to their business. Each and every time there's been innovation in music — from the earliest forms of recorded music, to sampling, to drum machines, to remixing, MP3s, and streaming music — the record labels have attempted to limit progress. They have spent decades attempting to control the terms of how we create and enjoy music, and this time is no different. So, it is perhaps not a surprise that on June 24th, members of the Recording Industry Association of America, which represents the major record labels, filed a lawsuit against Suno, alleging that the data used in training our music generation technologies infringed on the copyrights of the major record labels that they represent. This lawsuit is fundamentally flawed on both the facts and the law, and is nothing more than yet another instance where they chose litigation over innovation.

For starters, the major record labels clearly hold misconceptions about how our technology works. Suno helps people create music through a process similar to one humans have used forever: by learning styles, patterns, and forms (in essence, the "grammar" of music), and then inventing new music around them. The major record labels are trying to argue that neural networks are mere parrots — copying and repeating — when in reality model training looks a lot more like a kid learning to write new rock songs by listening religiously to rock music. Like that kid, Suno gets better the more our AI learns. We train our models on medium- and high-quality music we can find on the open internet — just as Google's Gemini, Microsoft's Copilot, Anthropic's Claude, OpenAI's ChatGPT, and even Apple's new Apple Intelligence train their models on the open internet. Much of the open internet indeed contains copyrighted materials, and some of it is owned by major record labels. But, just like the kid writing their own rock songs after listening to the genre — or a teacher or a journalist reviewing existing materials to draw new insights — learning is not infringing. It never has been, and it is not now.

The timing of this lawsuit was somewhat surprising. When it landed, Suno was, in fact, having productive discussions with a number of the RIAA's major record label members to find ways of expanding the pie for music together. We did so not because we had to, but because we believe that the music industry could help us lead this expansion of opportunity for everyone, rather than resisting it. Whether this lawsuit is the result of over-eager lawyers throwing their weight around, or a conscious strategy to gain leverage in our commercial discussions, we believe it is an unnecessary impediment to a larger and more valuable future for music.

This is particularly the case because Suno is a new kind of musical instrument, one that enables a new kind of creative process for everyone and opens new business opportunities for the industry. Suno is designed for original music, and we prize originality, both in how we build our product and in how people use it. People who use Suno are using the product to create their own, original music. They are not trying to recreate an existing song that can be heard somewhere else on the internet for free. But, even if they were trying to copy existing music, we have myriad controls in place to encourage originality and prevent duplicative use cases. We do so more aggressively than any other company in the industry, including other startups. Some of our originality-guarding features include checking for and preventing copyrighted content in audio uploads, and disallowing artist-based descriptions in requests to generate music.

Why do we work to encourage originality? We do this because it makes for a more fun and engaging experience to create entirely original compositions on Suno. We do it because we think it makes Suno incredibly valuable to be a place where new musical talent can shine. AI allows anyone to realize the songs in their head, regardless of the money, equipment, or connections that they have. The future is an explosion of new artists who are creating music in new ways, building fan bases, finding new reasons to smile, and getting famous. We hope that the major record labels realize that we can build a stronger foundation
·x.com·
Court ruling suggests AI systems may be in the clear as long as they don't make exact copies
A California district court has partially dismissed a copyright lawsuit against Microsoft's GitHub Copilot programming tool and its former underlying language model, OpenAI's Codex, rejecting claims that the AI tools infringe copyrights by reproducing source code without adhering to license terms. The court found that the plaintiffs failed to show that Copilot makes identical copies of protected works, which is necessary for Digital Millennium Copyright Act claims, and it dismissed arguments about Copilot's ability to accurately reproduce copyrighted code. While also dismissing claims for unjust enrichment and unfair competition, the court allowed a claim for breach of open-source license agreements to proceed. The decision could set a precedent for AI systems trained on copyrighted data.
·the-decoder.com·
About The Student Privacy Pledge | Pledge to Parents & Students
The Student Privacy Pledge was introduced to safeguard student privacy regarding the collection, maintenance, and use of student personal information.
·studentprivacypledge.org·
4 Types of Gen AI Risk and How to Mitigate Them
Many organizations are understandably hesitant to adopt gen AI applications, citing concerns about privacy and security threats, copyright infringement, the possibility of bias and discrimination in its outputs, and other hazards. Risks around using gen AI can be classified along two dimensions: intent and usage. Accidental misapplication of gen AI is different from deliberate misuse (intent); similarly, using gen AI tools to create content is distinct from consuming content that other parties may have created with gen AI (usage). To mitigate the risk of gen AI content misuse and misapplication, organizations need to develop the capabilities to detect, identify, and prevent the spread of such potentially misleading content.
·hbr.org·
A Framework for Ethical Decision Making
Step by step guidance on ethical decision making, including identifying stakeholders, getting the facts, and applying classic ethical approaches.
·scu.edu·
A.I. Has a Measurement Problem
Which A.I. system writes the best computer code or generates the most realistic image? Right now, there’s no easy way to answer those questions.
·nytimes.com·
AI Ethics: Global Perspectives
AI Ethics: Global Perspectives is a free, online course designed to raise awareness and help institutions work toward a more responsible use of AI. It brings together leading experts in the field of AI from around the world to consider the ethical ramifications of AI and to address initiatives that might be harmful to particular people and groups in society.
·aiethicscourse.org·
Welcome to the Era of BadGPTs
A new crop of nefarious chatbots with names like “BadGPT” and “FraudGPT” is springing up in the darkest corners of the web, as cybercriminals look to tap the same artificial intelligence behind OpenAI’s ChatGPT.
·wsj.com·
AI in Education: Privacy and Security
AI has arrived in K-12, and with it some new questions about student data privacy and security. Here's what you need to know.
·esparklearning.com·
Kriti Sharma: How to keep human bias out of AI
AI algorithms make important decisions about you all the time -- like how much you should pay for car insurance or whether or not you get that job interview. But what happens when these machines are built with human bias coded into their systems? Technologist Kriti Sharma explores how the lack of diversity in tech is creeping into our AI, offering three ways we can start making more ethical algorithms.
·ted.com·
FTC's Newly Proposed Privacy Rules Could Bring "Substantial Changes" to Ed-Tech Industry - Market Brief
The consumer protection agency's proposed rules could limit companies' ability to collect data for one product and use it to develop another one.
Vance, from the Public Interest Privacy Center, said the rule could mean teachers are barred from trying out ed-tech products on their own in their classrooms if a product requires access to students’ personal data and the district doesn’t provide a data custodian or a set process for data sharing.
·marketbrief.edweek.org·
The impact of generative AI on Black communities
Generative AI has the potential to widen the racial economic gap in the United States by $43 billion each year. But deployed thoughtfully, it could actually remove barriers to Black economic mobility.
·mckinsey.com·
AI: A Fork in the Road | Full Documentary
We are about to enter the advent of a great new technology era. While AI has been around for a while, the advances we’ve seen in the sophistication and the ...
·youtube.com·