AI_bookmarks

1491 bookmarks
Kriti Sharma: How to keep human bias out of AI
AI algorithms make important decisions about you all the time -- like how much you should pay for car insurance or whether or not you get that job interview. But what happens when these machines are built with human bias coded into their systems? Technologist Kriti Sharma explores how the lack of diversity in tech is creeping into our AI, offering three ways we can start making more ethical algorithms.
·ted.com·
OpenAI and journalism
We support journalism, partner with news organizations, and believe The New York Times lawsuit is without merit.
Our goal is to develop AI tools that empower people to solve problems that are otherwise out of reach. People worldwide are already using our technology to improve their daily lives. Millions of developers and more than 92% of the Fortune 500 are building on our products today. While we disagree with the claims in The New York Times lawsuit, we view it as an opportunity to clarify our business, our intent, and how we build our technology. Our position can be summed up in these four points, which we flesh out below:

1. We collaborate with news organizations and are creating new opportunities
2. Training is fair use, but we provide an opt-out because it's the right thing to do
3. "Regurgitation" is a rare bug that we are working to drive to zero
4. The New York Times is not telling the full story

1. We collaborate with news organizations and are creating new opportunities

We work hard in our technology design process to support news organizations. We've met with dozens, as well as leading industry organizations like the News/Media Alliance, to explore opportunities, discuss their concerns, and provide solutions. We aim to learn, educate, listen to feedback, and adapt.

Our goals are to support a healthy news ecosystem, be a good partner, and create mutually beneficial opportunities. With this in mind, we have pursued partnerships with news organizations to achieve these objectives:

- Deploy our products to benefit and support reporters and editors, by assisting with time-consuming tasks like analyzing voluminous public records and translating stories.
- Teach our AI models about the world by training on additional historical, non-publicly available content.
- Display real-time content with attribution in ChatGPT, providing new ways for news publishers to connect with readers.

Our early partnerships with the Associated Press, Axel Springer, American Journalism Project and NYU offer a glimpse into our approach.

2. Training is fair use, but we provide an opt-out because it's the right thing to do

Training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness. The principle that training AI models is permitted as a fair use is supported by a wide range of academics, library associations, civil society groups, startups, leading US companies, creators, authors, and others that recently submitted comments to the US Copyright Office. Other regions and countries, including the European Union, Japan, Singapore, and Israel, also have laws that permit training models on copyrighted content, an advantage for AI innovation, advancement, and investment.

That being said, legal right is less important to us than being good citizens. We have led the AI industry in providing a simple opt-out process for publishers (which The New York Times adopted in August 2023) to prevent our tools from accessing their sites.

3. "Regurgitation" is a rare bug that we are working to drive to zero

Our models were designed and trained to learn concepts in order to apply them to new problems. Memorization is a rare failure of the learning process that we are continually making progress on, but it's more common when particular content appears more than once in training data, like if pieces of it appear on lots of different public websites. So we have measures in place to limit inadvertent memorization and prevent regurgitation in model outputs. We also expect our users to act responsibly; intentionally manipulating our models to regurgitate is not an appropriate use of our technology and is against our terms of use.

Just as humans obtain a broad education to learn how to solve new problems, we want our AI models to observe the range of the world's information, including from every language, culture, and industry. Because models learn from the enormous aggregate of human knowledge, any one sector, including news, is a tiny slice of overall training data, and any single data source, including The New York Times, is not significant for the model's intended learning.

4. The New York Times is not telling the full story

Our discussions with The New York Times had appeared to be progressing constructively through our last communication on December 19. The negotiations focused on a high-value partnership around real-time display with attribution in ChatGPT, in which The New York Times would gain a new way to connect with their existing and new readers, and our users would gain access to their reporting. We had explained to The New York Times that, like any single source, their content didn't meaningfully contribute to the training of our existing models and also wouldn't be sufficiently impactful for future training. Their lawsuit on December 27, which we learned about by reading The New York Times, came as a surprise and disappointment to us. Along the way, they had mentioned seeing some regurgitation of their content but repeated
·openai.com·
You Don't Think AI Could Do Your Job. What If You're Wrong? : Consider This from NPR
2023 might go down as the year that artificial intelligence became mainstream. It was a topic of discussion everywhere, from news reports to classrooms to the halls of Congress. ChatGPT made its public debut a little over a year ago. If you'd never thought much about AI before, you're probably thinking, and maybe worrying, about it now. Jobs are an area that will almost certainly be impacted as AI develops. But whether artificial intelligence will free us from drudge work or leave us unemployed depends on who you talk to. Host Ari Shapiro speaks with NPR's Andrea Hsu on how people are adapting to AI in the workplace and ways to approach the technology with a plan instead of panic. This episode also features reporting on AI and Hollywood background actors from NPR's Bobby Allyn. Email us at considerthis@npr.org
·npr.org·
Copilot and Accessibility
Learn how Copilot in Windows (https://blogs.windows.com/windowsexperience/2023/09/26/the-most-personal-windows-11-experience-begins-rolling-out-today/) and M...
·youtube.com·
🌍 Mapping High Schools with AI Advisory Boards
My mission is to spread awareness about the incredible potential of AI and AI advisory boards in education. Through my website, aiadvisoryboards.wordpress.com, I aim to inspire educators, administr…
·aiadvisoryboards.wordpress.com·
FTC's Newly Proposed Privacy Rules Could Bring "Substantial Changes" to Ed-Tech Industry - Market Brief
The consumer protection agency's proposed rules could limit companies' ability to collect data for one product and use it to develop another one.
Vance, from the Public Interest Privacy Center, said the rule could mean that teachers are restricted from trying out ed-tech products individually in their classrooms if using that product requires accessing students’ personal data, and the district doesn’t provide them with a data custodian or set process for data sharing.
·marketbrief.edweek.org·
The Problem of Misinformation in an Era Without Trust
Elon Musk thinks a free market of ideas will self-correct. Liberals want to regulate it. Both are missing a deeper predicament.
"On Disinformation: How to Fight for Truth and Protect Democracy"
·nytimes.com·
ChatGPT Helps, and Worries, Business Consultants, Study Finds
The A.I. tool helped most with creative tasks. With more analytical work, however, the technology led to more mistakes.
Studies this year of ChatGPT in legal analysis and white-collar writing chores have found that the bot helps lower-performing people more than it does the most skilled. Dr. Lakhani and his colleagues found the same effect in their study.

On a task that required reasoning based on evidence, however, ChatGPT was not helpful at all. In this group, volunteers were asked to advise a corporation that had been invented for the study. They needed to interpret data from spreadsheets and relate it to mock transcripts of interviews with executives.

Here, ChatGPT lulled employees into trusting it too much. Unaided humans had the correct answer 85 percent of the time. People who used ChatGPT without training scored just over 70 percent. Those who had been trained did even worse, getting the answer only 60 percent of the time.
·nytimes.com·
Fillout | Make any form in minutes
Create powerful forms, surveys and quizzes your audience will answer. Store responses directly where you need them.
·fillout.com·
Audioread: Read. In Audio.
Listen to articles, PDFs, emails, etc. in your podcast app or browser. Ultra-realistic AI voices. Read while you exercise, cook, commute, and do other things.
·audioread.com·
2023 in AI: The Insane Year That Changed In All!
Here's a recap of the craziest year in the world of AI. Grab Invideo AI at https://apps.apple.com/in/app/invideo-ai/id6471394316 and if you upgrade, use code...
·youtube.com·
GPT-4.5 details may have just leaked
Details about OpenAI's next LLM update, GPT-4.5 have leaked, offering information about the prices and capabilities of ChatGPT's next update.
·bgr.com·
State of AI Report 2023
The State of AI Report analyses the most interesting developments in AI. Read and download here.
·stateof.ai·