Hank Green's AI Blunder


45 bookmarks
How Religion Intersects With Americans’ Views on the Environment
Most U.S. adults – including a solid majority of Christians and large numbers of people who identify with other religious traditions – consider the Earth sacred and believe God gave humans a duty to care for it. But highly religious Americans are far less likely than other U.S. adults to express concern about warming temperatures around the globe.
·pewresearch.org·
Shocking number of Americans believe we are living in the end times
Those who think we are living in the end times are less likely to say climate change is an extremely or very serious problem (51 percent) than those who do not believe this (62 percent).
·newsweek.com·
EU proposes softening AI and data privacy regulations
The EU is responding to calls by businesses and member states that have argued the bloc needs to keep up with tech innovation. Meanwhile, cookie consent pop-up banners are also set to be scaled back.
·dw.com·
“The whole thing looks to me like a media stunt, to try to grab the attention of the media, the public, and policymakers and focus everyone on the distraction of scifi scenarios,” @emilymbender “This would seem to serve two purposes: it paints their tech as way more powerful
·x.com·
AI and the threat of "human extinction": What's really going on here?
Commentary: Philosopher Émile P. Torres unpacks the claims that AI threatens the extinction of our species
Understanding this is a two-step process. First, we need to make sense of what’s behind this statement. The short answer concerns a cluster of ideologies that Dr. Timnit Gebru and I have called the “TESCREAL bundle.” The term is admittedly clunky, but the concept couldn’t be more important, because this bundle of overlapping movements and ideologies has become hugely influential among the tech elite.
TESCREALism — meaning the worldview that arises from this bundle — is simple enough: at its heart is a techno-utopian vision of the future in which we re-engineer humanity, colonize space, plunder the cosmos, and establish a sprawling intergalactic civilization full of trillions and trillions of “happy” people, nearly all of them “living” inside enormous computer simulations. In the process, all our problems will be solved, and eternal life will become a real possibility.
·salon.com·
AI doomsayers funded by billionaires ramp up lobbying
Nonprofits backed by tech billionaires and warning of an AI cataclysm are deploying lobbyists in an effort to press Capitol Hill to pass AI safety bills.
The uptick in lobbying work — and the policies CAIP and CAIS are pushing — could directly benefit top AI firms, said Suresh Venkatasubramanian, a professor at Brown University who co-authored a 2022 White House document that focused more on AI’s near-term risks, including its potential to undermine privacy or increase discrimination through biased screening tools.
The similarly named Center for AI Policy and Center for AI Safety both registered their first lobbyists in late 2023, raising the profile of a sprawling influence battle that’s so far been fought largely through think tanks and congressional fellowships.
Each nonprofit spent close to $100,000 on lobbying in the last three months of the year. The groups draw money from organizations with close ties to the AI industry like Open Philanthropy, financed by Facebook co-founder Dustin Moskovitz, and Lightspeed Grants, backed by Skype co-founder Jaan Tallinn.
Their message includes policies like CAIP’s call for legislation that would hold AI developers liable for “severe harms,” require permits to develop “high-risk” systems and empower regulators to “pause AI projects if they identify a clear emergency.”
·politico.com·
AI Doomerism Is a Decoy
Big Tech’s warnings about an AI apocalypse are distracting us from years of actual harms their products have caused.
·theatlantic.com·
The Anti-AI Grift + College Football Surveillance
+ The great Kirkification, Pop the Balloon dating, AI childbirth, apocalypse capitalism, Costco singles night, Vine 2.0, Trump Clinton shippers, flipping the camera, and a Ralph Lauren x-mas
·usermag.co·
There’s an event tonight for a book supposedly critiquing the AI industry called If Anyone Builds It, Everyone Dies. The PR person for the book invited me to the event, I said great, RSVPd. Then they called and said I was DISINVITED for being too critical of the tech industry!
·x.com·
OpenAI’s Sam Altman Regenerates the Gilded Age Playbook - Bloomberg
Regulatory Capture
In 1962, Kolko published his Triumph of Conservatism, which advanced the heretical claim that “business leaders, and not the reformers, inspired the era’s legislation regulating business.” In fact, Kolko argued, “regulatory movements were usually initiated by the dominant business to be regulated.”
Kolko, though, mocked the idea that Sinclair had “brought the packers to their knees.” In fact, he concluded that the biggest meatpackers “were warm friends of regulation” because only the biggest packers, not their smaller, less organized competitors, had the resources to comply with the new restrictions.
Regulation was a way of restraining competition and protecting market share. Kolko observed that Swift and its fellow industry giants celebrated passage of the Meat Inspection Act of 1906 with advertisements declaring: “It is a wise law. Its enforcement must be universal and uniform.”
In this first book and in a more focused study titled Railroads and Regulation, published in 1970, Kolko offered a sweeping indictment of what he believed to be a naïve, simplistic understanding of the relationship between government and business. Kolko argued that regulatory agencies overwhelmingly served the interests of big business, which adeptly used them to fix prices and restrain competition.
Railroads, for example, worked with the Interstate Commerce Commission to mute competition, ensuring higher profits via stable freight rates. For other industries, incumbent firms embraced regulation as a way of rewarding their capital investments in cutting-edge technology. For example, the condiments king Henry J. Heinz and brewer Frederick Pabst benefited greatly when Congress passed food purity laws that enshrined their methods of manufacturing as the new standard, leaving smaller rivals at a distinct disadvantage.
In 1971, Stigler published an article titled “The Theory of Economic Regulation” that provided a theoretical understanding for why regulation almost always ended up serving the interests of big business. This became known as the theory of regulatory capture, which held that large, incumbent firms typically used regulations to thwart competitors. The analysis went on to become one of the most-cited economics articles ever written, helping Stigler win the Nobel Prize in Economic Sciences in 1982.
Not that you would know any of this from watching Congress lap up Sam Altman’s pleas for regulation of the blossoming AI industry. Instead, our senators swallowed the tech titan’s testimony with all the skepticism of a bunch of slack-jawed yokels listening to a carnival barker.
·archive.ph·
Regulatory Capture: Why AI regulation favours the incumbents - DataScienceCentral.com
We are seeing a flurry of regulation. But we should ask ourselves if we are seeing regulatory capture — i.e., letting corporations write lax rules that lead to public harm. Andrew Ng points out some contradictions: “It’s also a mistake to set reporting requirements based on a computation threshold for model training. This will stifle…
·datasciencecentral.com·
The risks of expanding the definition of ‘AI safety’
AI experts who worry about existential threats say including algorithmic bias under AI safety politicizes it and erodes its meaning.
Researcher Eliezer Yudkowsky, who’s been warning about the risks of AI for more than a decade on podcasts and elsewhere, believes that lumping all of those concerns into one bucket is a bad idea. “You want different names for the project of ‘having AIs not kill everyone’ and ‘have AIs used by banks make fair loans,’” he said. “Broadening definitions is usually foolish, because it is usually wiser to think about different problems differently, and AI extinction risk and AI bias risk are different risks.”
Yudkowsky, an influential but controversial figure for his alarmist views on AI (he wrote in Time about the potential necessity of ordering airstrikes on AI data centers), isn’t alone in his views. Others in the AI industry worry that “safety” in AI, which has come to underpin guardrails that companies are implementing, may become politicized as it grows to include hot button social issues like bias and diversity. That could erode its meaning and power, even as it receives huge public and private investment, and unprecedented attention.
·semafor.com·
Despite the AI safety hype, a new study finds little research on the topic
The findings by Georgetown University show a lopsided balance between research that advances AI and studies on how to make it safe.
Moskovitz and the field of AI safety in general are tied to the Effective Altruism movement, which hopes to curb existential risks to humanity, such as runaway AI systems. The topic of AI safety is a hot-button issue in the tech industry and has spawned a counter-movement, called Effective Accelerationism, which believes that focusing on the risks of technology does more harm than good by hindering critical progress.
As the CSET study points out, Google and Microsoft are some of the biggest contributors to published papers on AI safety research.
·semafor.com·
I am Nate Soares, AMA! — EA Forum
Hello Effective Altruism Forum, I am Nate Soares, and I will be here to answer your questions tomorrow, Thursday the 11th of June, 15:00-18:00 US Pac…
·forum.effectivealtruism.org·