What is Multimodal Generative Artificial Intelligence?
The term multimodal generative artificial intelligence is getting thrown around a lot recently – even more so now that the most popular models like GPT have added features like image recognition and generation…
Transduction, on the other hand, is changing meaning across modes, such as from text to image. In image, audio, or video generation, we are changing meaning from one mode to another – or, rather, the algorithm is changing the meaning in response to our prompt.
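To make the idea of transduction concrete, here is a minimal sketch of text-to-image generation using the open-source Hugging Face diffusers library; the model name, prompt, and GPU assumption are illustrative choices, not anything the article specifies.

```python
# A minimal text-to-image "transduction" sketch using Hugging Face diffusers.
# The model ID, prompt, and GPU assumption are illustrative, not prescriptive.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained text-to-image model (weights download on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

# The input is text (one mode); the output is an image (another mode).
prompt = "a watercolor painting of a lighthouse at dawn"
image = pipe(prompt).images[0]
image.save("lighthouse.png")
```

The prompt goes in as language and comes out as pixels, which is exactly the mode-to-mode change of meaning described above.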
Cheating Fears Over Chatbots Were Overblown, New Research Suggests
A.I. tools like ChatGPT did not boost the frequency of cheating in high schools, Stanford researchers say.
Among the high school students who said they had used an A.I. chatbot, about 55 to 77 percent said they had used it to generate an idea for a paper, project or assignment; about 19 to 49 percent said they had used it to edit or complete a portion of a paper; and about 9 to 16 percent said they had used it to write all of a paper or other assignment, the Stanford researchers found.
National ChatGPT Survey: Teachers Even More Accepting of Chatbot Than Students
42% of students use ChatGPT, up from 33% in a prior survey. Their teachers are way ahead of them, with 63% now saying they've used the tool on the job.
Teacher and parent attitudes about ChatGPT, the popular AI chatbot that debuted in late 2022, are shifting slightly, according to new findings out today from the polling firm Impact Research.
The survey is the latest in a series commissioned by the Walton Family Foundation, which is tracking the topic, as well as attitudes about STEM education more broadly.
The researchers say Americans, and teachers especially, are beginning to see the potential of incorporating AI tools like ChatGPT into K-12 education — and that, in their experience, it's already helping students learn.
The new findings come as the U.S. Federal Trade Commission opens an investigation into OpenAI, ChatGPT’s creator, probing whether it put personal reputations and data at risk. The FTC has warned that consumer protection laws apply to AI, even as the Biden administration and Congress push for new regulations on the field.
OpenAI is also a defendant in several recent lawsuits filed by authors — including the comedian Sarah Silverman — who say the technology “ingested” their work, improperly appropriating their copyrighted books without the authors’ consent to train its AI program. The suits each seek nearly $1 billion in damages, the Los Angeles Times reported.
The latest results are based on a national survey of 1,000 K-12 teachers; 1,002 students, ages 12-18; 802 voters; and 916 parents. It was conducted by Impact Research between June 23 and July 6. The margin of error is plus or minus 3 percentage points for the teacher and student results, 3.5 percentage points for the voter results and 3.2 percentage points for the parent responses.
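Those reported margins are consistent with the standard worst-case formula for a simple random sample at a 95% confidence level. The quick check below assumes that formula; the survey write-up does not describe its exact methodology.

```python
# Recompute the reported margins of error, assuming the standard 95%-confidence,
# worst-case (p = 0.5) formula for a simple random sample.
import math

def margin_of_error(n: int, z: float = 1.96, p: float = 0.5) -> float:
    """Return the margin of error in percentage points for a sample of size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

samples = {"teachers": 1000, "students": 1002, "voters": 802, "parents": 916}
for group, n in samples.items():
    print(f"{group}: +/- {margin_of_error(n):.1f} points")
# teachers: +/- 3.1, students: +/- 3.1, voters: +/- 3.5, parents: +/- 3.2
```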
Here are the top five findings:
1. Nearly everyone knows what ChatGPT is
About seven months after it debuted publicly, pretty much everyone knows what ChatGPT is. It's broadly recognized by 80% of registered voters, 71% of parents and 73% of teachers, according to the new survey.
Meanwhile, slightly fewer students — just 67% — tell pollsters they know what it is.
2. Despite the doom-and-gloom headlines about AI taking over the world, lots of people view ChatGPT favorably
Surprisingly, parents now view the chatbot more favorably than teachers: 61% of parents are fine with it, according to the new survey, compared with only 58% of teachers and just 54% of students.
3. Just a fraction of students say they’re using ChatGPT … but lots of teachers admit to using it
In February, a previous survey found that 33% of students said they'd used ChatGPT for school. That figure has now crept up to 42%.
But their teachers are way ahead of them: 63% of teachers say they’ve used the chatbot on the job, up from February, when just 50% of teachers were taking advantage of the tool. Four in 10 (40%) teachers now report using it at least once a week.
4. Teachers … and parents … believe it’s legit
Teachers who use ChatGPT overwhelmingly give it good reviews. Fully 84% say it has positively impacted their classes, with about 6 in 10 (61%) predicting it will have “legitimate educational uses that we cannot ignore.”
Nearly two-thirds (64%) of parents think teachers and schools should allow the use of ChatGPT for schoolwork. That includes 28% who say they should not just tolerate but encourage its use.
5. It’s not just for cheating anymore
While lots of headlines since last winter have touted ChatGPT's superior ability to help students cheat on essays and the like, just 23% of teachers now say cheating is likely to be its only use, down slightly from the spring (24%).
Opinion: Is ChatGPT's Hype Outpacing Its Usefulness?
The history of artificial intelligence is rife with grandiose predictions, and while ChatGPT can help students organize large quantities of data or produce creative insights, it's still quite limited and prone to error.
Infusing AI into edtech will open a new world of teaching and learning opportunities
“The ability to create performance tasks aligned to rubrics and generate multiple examples for students to learn from will be a game changer for assessment.”
Language Matters, and What Matters Has Changed — Conrad Wolfram
LLMs have dramatically changed computer programming. Much of the lower-level coding will be done by AIs. Higher-level languages, like Wolfram Language, will be even more accessible with AI assistance.
AP and IB Programs Disagree Over Whether to Allow ChatGPT
The two organizations, which provide curricula for advanced high school classes, published very different policies on their websites, with one banning the use of generative AI and the other welcoming it.
"Recently, I tried an informal experiment, calling colleagues and asking them if there’s anything specific on which we can all seem to agree. I’ve found that there is a foundation of agreement. We all seem to agree that deepfakes—false but real-seeming images, videos, and so on—should be labelled as such by the programs that create them. Communications coming from artificial people, and automated interactions that are designed to manipulate the thinking or actions of a human being, should be labelled as well. We also agree that these labels should come with actions that can be taken. People should be able to understand what they’re seeing, and should have reasonable choices in return.
How can all this be done? There is also near-unanimity, I find, that the black-box nature of our current A.I. tools must end. The systems must be made more transparent. We need to get better at saying what is going on inside them and why. This won't be easy. The problem is that the large-model A.I. systems we are talking about aren't made of explicit ideas. There is no definite representation of what the system "wants," no label for when it is doing a particular thing, like manipulating a person. There is only a giant ocean of jello—a vast mathematical mixing. A writers'-rights group has proposed that real human authors be paid in full when tools like GPT are used in the scriptwriting process; after all, the system is drawing on scripts that real people have made. But when we use A.I. to produce film clips, and potentially whole movies, there won't necessarily be a screenwriting phase. A movie might be produced that appears to have a script, soundtrack, and so on, but it will have been calculated into existence as a whole. Similarly, no sketch precedes the generation of a painting from an illustration A.I. Attempting to open the black box by making a system spit out otherwise unnecessary items like scripts, sketches, or intentions will involve building another black box to interpret the first—an infinite regress.
At the same time, it's not true that the interior of a big model has to be a trackless wilderness. We may not know what an "idea" is from a formal, computational point of view, but there could be tracks made not of ideas but of people. At some point in the past, a real person created an illustration that was input as data into the model, and, in combination with contributions from other people, this was transformed into a fresh image. Big-model A.I. is made of people—and the way to open the black box is to reveal them.
This concept, which I've contributed to developing, is usually called "data dignity." It appeared, long before the rise of big-model "A.I.," as an alternative to the familiar arrangement in which people give their data for free in exchange for free services, such as internet searches or social networking. Data dignity is sometimes known as "data as labor" or "plurality research." The familiar arrangement has turned out to have a dark side: because of "network effects," a few platforms take over, eliminating smaller players, like local newspapers. Worse, since the immediate online experience is supposed to be free, the only remaining business is the hawking of influence. Users experience what seems to be a communitarian paradise, but they are targeted by stealthy and addictive algorithms that make people vain, irritable, and paranoid.
In a world with data dignity, digital stuff would typically be connected with the humans who want to be known for having made it. In some versions of the idea, people could get paid for what they create, even when it is filtered and recombined through big models, and tech hubs would earn fees for facilitating things that people want to do. Some people are horrified by the idea of capitalism online, but this would be a more honest capitalism. The familiar "free" arrangement has been a disaster.
One of the reasons the tech community worries that A.I. could be an existential threat is that it could be used to toy with people, just as the previous wave of digital technologies have been. Given the power and potential reach of these new systems, it’s not unreasonable to fear extinction as a possible result. Since that danger is widely recognized, the arrival of big-model A.I. could be an occasion to reformat the tech industry for the better.
Implementing data dignity will require technical research and policy innovation. In that sense, the subject excites me as a scientist. Opening the black box will only make the models more interesting. And it might help us understand more about language, which is the human invention that truly impresses, and the one that we are still exploring after all these hundreds of thousands of years.
Could data dignity address the economic worries that are often expressed about A.I.? The main concern is that workers will be devalued or displaced. Publicly, techies will sometimes say that, in the coming years, people who work with A.I. will be more productive and will find new types of jobs in a more productive economy. (A worker might become a prompt engineer for A.I. programs, for instance—someone who collaborates with or controls an A.I.) And yet, in private, the same people will quite often say, "No, A.I. will overtake this idea of collaboration." No more remuneration for today's accountants, radiologists, truck drivers, writers, film directors, or musicians.
A data-dignity approach would trace the most unique and influential contributors when a big model provides a valuable output. For instance, if you ask a model for "an animated movie of my kids in an oil-painting world of talking cats on an adventure," then certain key oil painters, cat portraitists, voice actors, and writers—or their estates—might be calculated to have been uniquely essential to the creation of the new masterpiece. They would be acknowledged and motivated. They might even get paid.
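As a purely illustrative sketch of what that kind of accounting could look like, the toy ledger below splits a fee among contributors in proportion to their estimated influence on an output. Every name, weight, and payout rule here is hypothetical; the essay does not propose a concrete tracing algorithm.

```python
# A hypothetical "data dignity" ledger: split a fee among the contributors
# judged most influential on a generated output. All names and weights are
# invented for illustration only.
from dataclasses import dataclass

@dataclass
class Contribution:
    contributor: str   # person (or estate) to be credited
    influence: float   # estimated share of influence on the output

def settle(contributions: list[Contribution], total_fee: float) -> dict[str, float]:
    """Split total_fee across contributors in proportion to estimated influence."""
    total = sum(c.influence for c in contributions)
    return {
        c.contributor: round(total_fee * c.influence / total, 2)
        for c in contributions
    }

# Hypothetical contributors to the "talking cats" movie example above.
ledger = [
    Contribution("oil painter A", 0.30),
    Contribution("cat portraitist B", 0.25),
    Contribution("voice actor C", 0.25),
    Contribution("screenwriter D", 0.20),
]
print(settle(ledger, total_fee=100.0))
# {'oil painter A': 30.0, 'cat portraitist B': 25.0, 'voice actor C': 25.0, 'screenwriter D': 20.0}
```

The hard part, as the next paragraph notes, is deciding how fine-grained such an accounting should be and who gets counted at all.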
There is a fledgling data-dignity research community, and here is an example of a debate within it: How detailed an accounting should data dignity attempt? Not everyone agrees. The system wouldn’t necessarily account for the billions of people who have made ambient contributions to big models—those who have added to a model’s simulated competence with grammar, for example. At first, data dignity might attend only to the small number of special contributors who emerge in a given situation. Over time, though, more people might be included, as intermediate rights organizations—unions, guilds, professional groups, and so on—start to play a role. People in the data-dignity community sometimes call these anticipated groups mediators of individual data (mids) or data trusts. People need collective-bargaining power to have value in an online world—especially when they might get lost in a giant A.I. model. And when people share responsibility in a group, they self-police, reducing the need, or temptation, for governments and companies to censor or control from above. Acknowledging the human essence of big models might lead to a blossoming of new positive social institutions.
Data dignity is not just for white-collar roles. Consider what might happen if A.I.-driven tree-trimming robots are introduced. Human tree trimmers might find themselves devalued or even out of work. But the robots could eventually allow for a new type of indirect landscaping artistry. Some former workers, or others, might create inventive approaches—holographic topiary, say, that looks different from different angles—that find their way into the tree-trimming models. With data dignity, the models might create new sources of income, distributed through collective organizations. Tree trimming would become more multifunctional and interesting over time; there would be a community motivated to remain valuable. Each new successful introduction of an A.I. or robotic application could involve the inauguration of a new kind of creative work. In ways large and small, this could help ease the transition to an economy into which models are integrated.
Many people in Silicon Valley see universal basic income as a solution to potential economic problems created by A.I. But U.B.I. amounts to putting everyone on the dole in order to preserve the idea of black-box artificial intelligence. This is a scary idea, I think, in part because bad actors will want to seize the centers of power in a universal welfare system, as in every communist experiment. I doubt that data dignity could ever grow enough to sustain all of society, but I doubt that any social or economic principle will ever be complete. Whenever possible, the goal should be to at least establish a new creative class instead of a new dependent class.
There are also non-altruistic reasons for A.I. companies to embrace data dignity. The models are only as good as their inputs. It’s only through a system like data dignity that we can expand the models into new frontiers. Right now, it’s much easier to get an L.L.M. to write an essay than it is to ask the program to generate an interactive virtual-reality world, because there are very few virtual worlds in existence. Why not solve that problem by giving people who add more virtual worlds a chance for prestige and income?
Could data dignity help with any of the human-annihilation scenarios? A big model could make us incompetent, or confuse us so much that our society goes collectively off the rails; a powerful, malevolent person could use A.I. to do us all great harm; and some people also think that the model itself could "jailbreak," taking control of our machines or weapons and using them against us.
We can find precedents for some of these scenarios not just in science fiction but in more ordinary market and technology failures. An example is the 2019 catastrophe related to Boeing's 737 MAX jets. The planes included a flight-path-co...