Excerpt from a message I just posted in a #diaspora team internal forum category. The context here is that I recently got pinged about slowness/load spikes on the diaspora* project web infrastructure (Discourse, Wiki, the project website, ...), and looking at the traffic logs made me impressively angry.
In the last 60 days, the diaspora* web assets received 11.3 million requests. That works out to 2.19 req/s - which honestly isn't that much. I mean, it's more than your average personal blog, but nothing that my infrastructure shouldn't be able to handle.
However, here's what's grinding my fucking gears. Looking at the top user agent statistics, here are the leaders:
2.78 million requests - or 24.6% of all traffic - are coming from Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.2; +https://openai.com/gptbot).
1.69 million requests - 14.9% - Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/600.2.5 (KHTML, like Gecko) Version/8.0.2 Safari/600.2.5 (Amazonb...
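For anyone who wants to double-check those figures, the arithmetic is straightforward. A quick sketch (using the rounded totals quoted above, so the last decimal can differ slightly from the post's exact counts):

```python
# Sanity-check the traffic numbers above, using the rounded totals
# from the post; exact request counts would shift the last decimal.
total_requests = 11_300_000
seconds = 60 * 24 * 60 * 60          # 60 days in seconds

req_per_sec = total_requests / seconds
print(f"average rate: {req_per_sec:.2f} req/s")   # ~2.18 req/s from rounded totals

gptbot = 2_780_000       # requests from the GPTBot user agent
second_agent = 1_690_000 # requests from the second (truncated) user agent

print(f"GPTBot share:       {gptbot / total_requests:.1%}")
print(f"second agent share: {second_agent / total_requests:.1%}")
# close to the 24.6% and 14.9% quoted in the post
```

So roughly two out of every five requests to the project infrastructure come from just these two crawlers.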
Don’t Throw the Baby Out With the Generative AI Bullshit Bathwater
If I had wanted to write a column about presidential pardons, I’d find ChatGPT’s assistance a far better starting point than I’d have gotten through any general web search. But to quote Reagan: “Trust, but verify.”
Revealed: bias found in AI system used to detect UK benefits fraud
Exclusive: Age, disability, marital status and nationality influence decisions to investigate claims, prompting fears of ‘hurt first, fix later’ approach
What I Learned Using Private LLMs to Write an Undergraduate History Essay
Contents: TL;DR · Context · Writing A 1996 Essay Again in 2023, This Time With Lots More Transistors · ChatGPT 3 · Gathering the Sources · PrivateGPT · Ollama (and Llama2:70b) · Hallucinations · What I Learned. TL;DR: I used …
The Institute for Technology, Ethics and Culture (ITEC), housed at the Markkula Center for Applied Ethics, is a collaboration between the Center and the Vatican’s Dicastery for Culture and Education. The Institute convenes leaders from business, civil society, academia, government, and all faith and belief traditions, to promote deeper thought on technology’s impact on humanity.
Signal’s Meredith Whittaker: ‘These are the people who could actually pause AI if they wanted to’
The president of the not-for-profit messaging app on how she believes existential warnings about AI allow big tech to entrench their power, and why the online safety bill may be unworkable
Kansas U Researchers Claim 99% Accuracy Detecting ChatGPT Fakes
A new research paper describes an algorithm that can detect research papers written by robots. Scientists from the University of Kansas say they developed a technique that identifies ChatGPT-generated text in scientific papers close to 100% of the time.
Top AI researcher dismisses AI 'extinction' fears, challenges 'hero scientist' narrative | VentureBeat
First of all, I think that there are just too many letters. Generally, I’ve never signed any of these petitions. I always tend to be a bit more careful when I sign my name on something. I don’t know why people are just signing their names so lightly.
Lindsey Graham pointed out the military use of AI. That is actually happening now. But Sam Altman couldn’t even give a single proposal on how the immediate military use of AI should be regulated. At the same time, AI has a potential to optimize healthcare so that we can implement a better, more equitable healthcare system, but none of that was actually discussed.
But now the hero scientist narrative has come back in. There’s a reason why in these letters, they always put Geoff and Yoshua at the top. I think this is actually harmful in a way that I never thought about.
I’m not a fan of Effective Altruism (EA) in general. And I am very aware of the fact that the EA movement is the one that is actually driving the whole thing around AGI and existential risk. I think there are too many people in Silicon Valley with this kind of savior complex. They all want to save us from the inevitable doom that only they see and they think only they can solve.
Realistically, we should all be preparing for a world where AI is not trustworthy. Because AI tools can be so incredibly useful, they will increasingly pervade our lives, whether we trust them or not. Being a digital citizen of the next quarter of the twenty-first century will require learning the basic ins and outs of LLMs so that you can assess their risks and limitations for a given use case. This will better prepare you to take advantage of AI tools, rather than be taken advantage of by them.