Digital Ethics

3809 bookmarks
How big tech is force-feeding us AI
Plus, OpenAI's absurd listening tour, top AI scientists say AI is evolving beyond our control, Facebook is putting data centers in tents, and the AI bubble question — answered?
·bloodinthemachine.com·
The true price of AI
Driven by national digitalisation strategies, rapid advances in artificial intelligence (AI) and booming cloud computing, South-east Asia is accelerating its data infrastructure build-out. These facilities – critical for AI training, Big Data processing and digital services – are now central to the region’s economic competitiveness and technological growth.
·businesstimes.com.sg·
Can ChatGPT Diagnose this Car? | Chevy Trax P0171, P1101, P0420
Let's see if an "intelligent" large language model can correctly diagnose a broken 2015 Chevy Trax with a 1.4L turbo.
·youtube.com·
AI-Driven Incident Management Needs Human Empathy
However, as good as AI is at spotting incidents, it can’t put those incidents in a business context; it can’t tell you why they matter.
·forbes.com·
The uselessness of AI ethics
As the awareness of AI’s power and danger has risen, the dominant response has been a turn to ethical principles. A flood of AI guidelines and codes of ethics have been released in both the public and private sector in the last several years. However, these are meaningless principles which are contested or incoherent, making them difficult to apply; they are isolated principles situated in an industry and education system which largely ignores ethics; and they are toothless principles which lack consequences and adhere to corporate agendas. For these reasons, I argue that AI ethical principles are useless, failing to mitigate the racial, social, and environmental damages of AI technologies in any meaningful sense. The result is a gap between high-minded principles and technological practice. Even when this gap is acknowledged and principles seek to be “operationalized,” the translation from complex social concepts to technical rulesets is non-trivial. In a zero-sum world, the dominant turn to AI principles is not just fruitless but a dangerous distraction, diverting immense financial and human resources away from potentially more effective activity. I conclude by highlighting alternative approaches to AI justice that go beyond ethical principles: thinking more broadly about systems of oppression and more narrowly about accuracy and auditing.
·link.springer.com·
This June I attended a summer school on AI governance.
This June I attended a summer school on AI governance. Throughout the lectures, it was strikingly clear, and frustrating, that the lecturers, despite their different backgrounds (law, international relations, AI governance, etc.), pushed only the Big Tech narrative about superintelligence and its dangers, referring only to research done by Big Tech. It was equally surprising and disappointing that the lecturers had never heard of Timnit Gebru, Emily M. Bender or Alex Hanna, Ph.D. In fact, they did not seem to be familiar with critical AI studies at all. They looked at me as if I were crazy when I asked how they could not know these researchers and why their work was not part of their understanding of AI and its capabilities. I could not understand why.

Today, while reading chapter 36 of the "Handbook of Critical Studies of Artificial Intelligence", titled "Barriers to regulating AI: critical observations from a fractured field", by Ashlin Lee, Will Orr, Walter G. Johnson, Jenna Imad Hard and Kathryn Henne, I finally understood. The authors argue that because nation states want to support the growth of AI, they have deferred regulatory responsibilities to external stakeholder groups, including think tanks and corporations (my summer school was organised by a think tank). This process is called hybridising governance: these groups are allowed to define the formal and informal regulations for the state with little direction. The authors explain that "This [type of governance] creates a disorderly regulatory environment that cements power among those already invested in AI while making it difficult for those outside these privileged groups [researchers on critical AI and people harmed by AI] to contribute their knowledge and experience," and that "External stakeholders stand to benefit from hybridising regulation of AI, with the public potentially less well served by this arrangement."

This explains why AI governance, in its current form, focuses so heavily on ethical AI guidelines as a mechanism of self-regulation rather than on enforceable regulation. It also explains why the heads of the AI governance school kept pushing the same narrative that Big Tech companies keep repeating: that we need to regulate a scary, futuristic superintelligence rather than the AI systems causing harm today. I'm uploading this section of the chapter here for you to read; it is very interesting.
·linkedin.com·
The mainstream view of AI for science says AI will rapidly accelerate science, and that we're on track to cure cancer, double the human lifespan, colonize space, and achieve a century of progress in the next decade.
The mainstream view of AI for science says AI will rapidly accelerate science, and that we're on track to cure cancer, double the human lifespan, colonize space, and achieve a century of progress in the next decade. In a new essay, Arvind Narayanan and I argue that AI's impact could be precisely the opposite: AI could *slow* rather than hasten science. Link to essay: https://lnkd.in/e_sD7dzg

1) The production-progress paradox
Scientific papers have increased 500-fold since 1900, with funding and researchers growing exponentially. Yet genuine progress—measured by disruptive discoveries, new scientific terminology, Nobel-worthy breakthroughs, and research productivity—has remained constant or declined. Multiple metascience studies confirm this troubling disconnect between production and progress. AI could worsen this by making it even easier to chase productivity metrics while homogenizing research approaches.

2) Science is not ready for software, let alone AI
Scientists are notoriously poor software engineers, lacking basic practices like testing and version control. Papers rarely share code, and when they do, it's often error-riddled. AI has already led to widespread errors across 600+ papers in 30 fields, with many COVID-19 diagnosis papers proving clinically useless. Science needs to catch up to 50 years of software engineering—fast.

3) AI might prolong the reliance on flawed theories
AI excels at prediction without understanding, like adding epicycles to the geocentric model (which improved predictive accuracy) rather than discovering heliocentrism. Scientific progress requires theoretical advances, not just predictive accuracy. AI might trap fields in intellectual ruts by making flawed theories more useful without revealing their fundamental errors.

4) Human understanding remains essential
Science isn't just about finding solutions—it's about building human understanding. AI risks short-circuiting this process, like using a forklift at the gym. Evidence shows AI-adopting papers focus on known problems rather than generating new ones.

5) Implications for the future of science
Individual researchers should develop software skills and avoid using AI as a crutch. More importantly, institutions must invest in meta-science research, reform publish-or-perish incentives, and rethink AI tools to target actual bottlenecks like error detection rather than flashy discoveries. Evaluation should consider collective impacts, not just individual efficiency.

6) Final thoughts
While we ourselves use AI enthusiastically in our workflows, we warn against conflating individual benefits with institutional impacts. Science lacks the market mechanisms that provide quality control in industry, making rapid AI adoption particularly risky. We're optimistic that scientific norms will eventually adapt, but expect a bumpy ride ahead.
·linkedin.com·
@51north/project-template-nuxt
In this report, through original research, we show how public opinion about AI is changing.
·report2025.seismic.org·
Lessons from a Chimp: AI "Scheming" and the Quest for Ape Language
We examine recent research that asks whether current AI systems may be developing a capacity for "scheming" (covertly and strategically pursuing misaligned goals). We compare current research practices in this field to those adopted in the 1970s to test whether non-human primates could master natural language. We argue that there are lessons to be learned from that historical research endeavour, which was characterised by an overattribution of human traits to other agents, an excessive reliance on anecdote and descriptive analysis, and a failure to articulate a strong theoretical framework for the research. We recommend that research into AI scheming actively seeks to avoid these pitfalls. We outline some concrete steps that can be taken for this research programme to advance in a productive and scientifically rigorous fashion.
·arxiv.org·
The party trick called LLM - blowing away smoke and break some mirrors - De Staat van het Web!
Large language models fool you. They don't produce language; they just place words in a row. It's understandable that you think you're dealing with a clever computer, one that occasionally says something that resembles the truth and sounds pleasant and reliable. You can be forgiven for believing in this 'AI' magic, but not once I've shown you the trick.
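The "trick" the article is alluding to is next-word prediction: the model repeatedly picks a statistically plausible next word given the words so far. A minimal toy sketch of that loop (my own illustration, using a bigram table instead of a neural network; not the article's code and nothing like a real LLM's scale) could look like this:

```python
import random
from collections import defaultdict

# A toy "next-word" generator: like an LLM (at vastly smaller scale),
# it only ever picks a plausible next word given the previous one.
corpus = (
    "the model picks a plausible next word "
    "the model sounds confident and reliable "
    "the trick is just placing words in a row"
).split()

# Count which words follow which in the toy corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 10) -> str:
    """Repeatedly append a randomly chosen plausible next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # fluent-looking output, no understanding involved
```

The output can read as fluent, confident text even though nothing in the loop models meaning or truth, which is exactly the article's point.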
·destaatvanhetweb.nl·