Found 86 bookmarks
Ben Goertzel on X: "Hmm, I am mildly disappointed that the brilliant and lovely Scott Aaronson, after 2 yrs at OpenAI digging into the guts of ethical AI, is apparently viewing this moronic California bill SB 1047 in such a casual/callous way... Please read Nick Moran's comments on Aaronson's blog" / X
·x.com·
Matt Popovich on X: "The irony is that commitment to empiricism used to be a core trait of rationalists. Then AI doom came along and hacked utilitarianism and now all core traits have been jettisoned because in the face of oblivion principles no longer matter. An epistemic ouroboros" / X

(States are likely gearing up to assume responsibility if the federal government faces a crisis from the election, the budget, or anything else. AI is a method as well as a concept, and it is also of interest to defense, which is how it received certain funding. And it is constitutional in terms of the balance of power. So this foreshadows the arguments ahead. What will the Supremes appeal to?
Posthumanists, on the other hand, could see life as the singularity and humanity as the tip of the iceberg here, but at the bottom of an inverted pyramid at large. Then, unlike Terminator Zero, AI has to be convinced that humans, as its creators, are of value, and obviously that it is not the solution itself, or it would not be involved.
That said, Doomer Detectives do offer a genre.)

·x.com·
The Importance of AI Governance
(Controversies include whether governance favors incumbents. The ecosystem of emergent players, each under its own ideology, could be all over the place, including hidden models.)
·youtube.com·
Future of Life Institute on X: "New @TheAIPI polling 📊: -60% said AI companies shouldn't be able to train freely on public data -Almost 75% say companies should be “required to compensate the creators of that data.” -78% want regulations on the use of public data to train AI models https://t.co/QjLCxOnNfR" / X
·twitter.com·
The Three Horizons of AI Policy
In this episode, we discuss a framework for understanding the rapidly changing AI policy landscape (0:53), the first-of-its-kind U.S. and U.K. partnership on AI safety (8:20), OpenAI's Voice Engine system (10:53), OMB's latest AI policy announcement (18:00), and Mexico's new role in AI infrastructure (21:50).
·youtube.com·
FDA-Approved Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices: An Updated Landscape

Terminology:
Features – "These features include the date of approval, the number of days taken to obtain clearance, clearance type (approval path), regulation panel, decision type, the name of the manufacturing company that filed for clearance, the country where the manufacturing company is based, device name, device medical specialty, device type, and recall history as shown in Figure 1."
Trial – "Additionally, wherever available, we also gathered clinical trial information such as study type, sampling method, age group of subjects, criteria for inclusion, the number of clinical trial locations, and the names of the countries where the clinical trials were conducted."

·mdpi.com·
Defining the scope of AI regulations

"Here, effectiveness refers to the degree to which a given regulation achieves or progresses towards its objectives. It is worth noting that the concept of effectiveness is highly controversial within legal research,26 but for the purposes of this paper, the debate has no relevant implications."
"Legal definitions must not be under-inclusive. A definition is under-inclusive if cases which should have been included are not included. This is a case of too little regulation."

"Some AI definitions are also under-inclusive. For example, systems which do not achieve their goals—like an autonomous vehicle that is unable to reliably identify pedestrians—would be excluded, even though they can pose significant risks. Similarly, the Turing test excludes systems that do not communicate in natural language, even though such systems may need regulation (e.g. autonomous vehicles)."

"Relevant risks can not be attributed to a single technical approach. For example, supervised learning is not inherently risky. And if a definition lists many technical approaches, it would likely be over-inclusive."

"Not all systems that are applied in a specific context pose the same risks. Many of the risks also depend on the technical approach."

"Relevant risks can not be attributed to a certain capability alone. By its very nature, capabilities need to be combined with other elements (‘capability of something’)."

·arxiv.org·