This article is part two of a three-part series leading up to the launch of iA Writer 7 on November 30th. In this article, we answer the following five questions: How good is AI for writing? When is AI useful for writing, and when is it not? When is it right and when is it wrong to use AI? What is the problem? What should we design?
As a word, robot is a relative newcomer to the English language. It was the brainchild of a brilliant Czech playwright, novelist and journalist named Karel Čapek (1880-1938), who introduced it in his 1920 hit play, R.U.R., or Rossum’s Universal Robots. Robot is drawn from an old Church Slavonic word, robota, for “servitude,” “forced labor” or “drudgery.” The word, which also has cognates in German, Russian, Polish and Czech, was a product of the central European system of serfdom by which a tenant’s rent was paid for in forced labor or service.
Education Is Casualty of Israel-Hamas War, as Bombs Hit Gaza Universities
Education has come to a halt in the Gaza Strip and much of Palestine as Israeli forces continue to bombard Gaza in response to the deadly surprise attack that Hamas militants launched on Israel on October 7.
HedgeDoc (formerly known as CodiMD) is an open-source, web-based, self-hosted, collaborative markdown editor. You can use it to easily collaborate on notes, graphs and even presentations in real-time. All you need to do is share your note’s link with your co-workers and they’re ready to go.
Arabic Collections Online (ACO) is a publicly available digital library of public domain Arabic language content. ACO currently provides digital access to 17,699 volumes across 10,473 subjects drawn from rich Arabic collections of distinguished research libraries. Established with support from NYU Abu Dhabi, and currently supported by major grants from Arcadia, a charitable fund of Lisbet Rausing and Peter Baldwin, and Carnegie Corporation of New York, this mass digitization project aims to feature up to 23,000 volumes from the library collections of NYU and partner institutions. These institutions are contributing published books in all fields—literature, business, science, and more—from their Arabic language collections.
Stay connected, inspired and engaged with the University of Melbourne’s arts and culture through our collections, virtual tours, videos, catalogues and podcasts. Delve into our events and public programs. Also, one helluva WordPress design!
Artificial Intelligence: a digital heritage leadership briefing | The National Lottery Heritage Fund
We commissioned Dr Mathilde Pavis to produce a snapshot of what innovation in Artificial Intelligence (AI) looks like across the UK heritage sector. It can help you decide whether, when, and how to use AI.
Why does someone’s account page look completely blank? Is it really blank? | Fedi.Tips – An Unofficial Guide to Mastodon and the Fediverse
If a profile looks blank, it may not actually be blank! Fediverse servers work like this: servers only notice accounts from other servers if someone follows or interacts with them. If no one on your server follows a particular account, and that account is on another server, then that account may appear blank to you.
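The practical consequence is that your server’s view of a remote account is only as complete as what has federated to it. Here is a minimal sketch of how you could check this yourself, assuming Python with the `requests` package and a Mastodon-compatible server; the account name and server URL below are hypothetical placeholders, not real addresses.

```python
# Minimal sketch: compare your server's (possibly sparse) cached view of a
# remote account with the account's home server, which is authoritative.
# Assumes the `requests` package and a Mastodon-compatible server; the
# account and server below are hypothetical placeholders.
import requests

ACCOUNT = "someone@example.social"    # hypothetical remote account
YOUR_SERVER = "https://your.server"   # hypothetical server you browse from

user, home_domain = ACCOUNT.split("@")

# 1. Ask your own server. It only holds whatever profile data and posts have
#    federated to it, so an account nobody there follows can look empty.
local = requests.get(
    f"{YOUR_SERVER}/api/v1/accounts/lookup",
    params={"acct": ACCOUNT},
    timeout=10,
)
print("your server's view:", local.json() if local.ok else "account not known here")

# 2. Ask the account's home server via WebFinger, which resolves the account
#    to its canonical (complete) profile.
home = requests.get(
    f"https://{home_domain}/.well-known/webfinger",
    params={"resource": f"acct:{ACCOUNT}"},
    timeout=10,
)
print("home server resolves it:", home.status_code == 200)
```

In short, the home server always has the full picture; your own server only shows what it has already seen.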
HTML First is a set of principles that aims to make building web software easier, faster, more inclusive, and more maintainable by leveraging the default capabilities of modern web browsers, the extreme simplicity of HTML's attribute syntax, and the web's View Source affordance.
Since the introduction of ChatGPT, adding “AI” to everything has become the dominant trend in IT. We considered AI for iA Writer. But obviously, we had to make sure not to destroy everything we had built over the last 15 years. We did not want our app to become an AI feature. Writing is thinking. iA Writer is designed to make thinking enjoyable. A writing app that thinks for you is a robot that does your jogging. After a year of observation, experimentation, and testing, we may have found a careful response to the challenges we face with AI. In fact, we ended up doing the opposite of adding ChatGPT. Now, let’s take one step at a time. First, let’s take a look at where we are. This is the first of three posts leading up to the launch of iA Writer 7, our cautious response to AI. In this post, we’ll review what has happened in the app industry since the introduction of ChatGPT last November.
Two-Eyed Seeing is the Guiding Principle brought into the Integrative Science co-learning journey by Mi'kmaw Elder Albert Marshall in Fall 2004. Etuaptmumk is the Mi'kmaw word for Two-Eyed Seeing. We often explain Etuaptmumk - Two-Eyed Seeing by saying it refers to learning to see from one eye with the strengths of Indigenous knowledges and ways of knowing, and from the other eye with the strengths of Western knowledges and ways of knowing ... and learning to use both these eyes together, for the benefit of all. Elder Albert indicates that Etuaptmumk - Two-Eyed Seeing is the gift of multiple perspectives treasured by many Aboriginal peoples. We believe it is the requisite Guiding Principle for the new consciousness needed to enable Integrative Science work, as well as other integrative or transcultural or transdisciplinary or collaborative work.
The Bras d'Or Lakes Collaborative Environmental Planning Initiative (CEPI) is a very special collaborative effort initiated by the five Mi'kmaq Chiefs of Unama'ki in 2003. We want to keep the Bras d'Or Lakes golden with your help.
This course will take you through the basics of quick source and claim-checking, and introduce you to our "four moves", a series of actions to take when encountering claims and sources on the web. This is the SIFT model: Stop, Investigate the source, Find better coverage, and Trace claims, quotes, and media to the original source. In this course, we show you how to fact and source-check in five easy lessons, taking about 30 minutes apiece. The entire online curriculum is two and a half to three hours and is suitable homework for the first week of a college-level module on disinformation or online information literacy, or the first few weeks of a course if assigned with other discipline-focused homework. Once students have completed the starter course they can move on to any number of additional topical modules we will be rolling out. The topical modules go into more depth on skills, and explore specific social issues around information pollution. This course is built so that it can easily be copied and modified by teachers wishing to customize it. The text and media of this site, where possible, are released under a CC BY license and are free for reuse and revision.
Help Us Investigate Facebook Pixel Tracking – The Markup
With nearly three billion monthly active users, Facebook remains one of the top destinations on the web and makes money by targeting its users with ads based on their behavior. But few people are aware of how expansively Facebook tracks people when they are not on Facebook—whether they’re Facebook users or not. Facebook offers an invisible tracking tool, called a pixel, that websites across the internet can embed to enable much of that tracking. The pixel is a snippet of code that, once installed on a webpage, sends data to Facebook as people visit. When you view content on some pages, type in your payment information, or buy something, pages with the Facebook pixel installed can then transmit that information to Facebook, which can then use the data to target advertisements. Websites add the code to their pages in hopes of better targeting their products and services on Facebook to potentially interested customers.
In 2023, Resolve to Fix Your Organization’s Meta Pixel Problem – The Markup
We all use the internet to complete increasingly sensitive tasks: book doctor’s appointments, file taxes, apply for financial aid. When we do, our data can be tracked from the moment we open our browsers to when we click “book” or “submit.” This type of data tracking is done by analytics software that organizations install on their websites to gather information about visitors. One common use for these trackers is to help companies “retarget” ads toward people who have already shown an interest in their products or services. The Markup has been tracking the trackers, and in 2022 we looked deeper into one in particular—the Meta Pixel—that is present on more than 30 percent of popular websites. The pixel collects data on visitors regardless of whether or not they have a Facebook account. Although Meta has policies against collecting sensitive data, our reporting over the past year found that the pixel often did just that, collecting the identities of people who it knew applied for financial aid, gathering the amount of taxpayers’ refunds, and even seeing users’ prescriptions and their answers to questions about addiction and migraine symptoms. Companies choose to embed the Meta Pixel and can change its settings, and yet, repeatedly, those we found sharing sensitive data through the pixel told us they didn’t know they were collecting it and soon removed the pixel or changed its settings. As we move into 2023, here’s what organizations (and their employees) can do to investigate whether their company has the Meta Pixel installed and, if so, what information it’s passing to Facebook.
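As a first pass at that kind of investigation, here is a minimal sketch, assuming Python with the `requests` package; the URL is a hypothetical placeholder. It only inspects a page's initial HTML for strings the pixel snippet typically leaves behind, so pixels injected later by a tag manager would need a headless browser to catch.

```python
# Minimal sketch: scan a page's initial HTML for signatures commonly left by
# the Meta (Facebook) Pixel snippet. Assumes the `requests` package; the URL
# is a hypothetical placeholder. Pixels added at runtime by tag managers are
# not detected without executing the page's JavaScript.
import requests

URL = "https://example.com"  # hypothetical page to inspect

# Strings that typically appear when the pixel snippet is embedded directly.
SIGNATURES = ("connect.facebook.net", "fbevents.js", "fbq(")

html = requests.get(URL, timeout=10).text
hits = [s for s in SIGNATURES if s in html]

if hits:
    print(f"Possible Meta Pixel on {URL}: matched {hits}")
else:
    print(f"No pixel signatures found in the initial HTML of {URL}")
```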
OER Lebanon is a non-governmental organization devoted to raising awareness and promoting the use of open educational resources (OER) in Lebanon and the region. We raise awareness of the benefits of OER by conducting workshops, participating in conferences, and disseminating information about the open access movement worldwide.
Digital Commons Network | Free full-text scholarly articles
The Digital Commons Network brings together free, full-text scholarly articles from hundreds of universities and colleges worldwide. Curated by university librarians and their supporting institutions, the Network includes a growing collection of peer-reviewed journal articles, book chapters, dissertations, working papers, conference proceedings, and other original scholarly work.
How to create and deliver lessons that work and build a teaching community around them. Grassroots groups have sprung up around the world to teach programming, web design, robotics, and other skills to free-range learners. These groups exist so that people don’t have to learn these things on their own, but ironically, their founders and teachers are often teaching themselves how to teach. There’s a better way. Just as knowing a few basic facts about germs and nutrition can help you stay healthy, knowing a few things about cognitive psychology, instructional design, inclusivity, and community organization can help you be a more effective teacher. This book presents key ideas you can use right now, explains why we believe they are true, and points you at other resources that will help you go further.
The Carpentries teaches foundational coding and data science skills to researchers worldwide. Software Carpentry, Data Carpentry, and Library Carpentry workshops are based on our lessons. Our vision is to be the leading inclusive community teaching data and coding skills. The Carpentries builds global capacity in essential data and computational skills for conducting efficient, open, and reproducible research. We train and foster an active, inclusive, diverse community of learners and instructors that promotes and models the importance of software and data in research. We collaboratively develop openly-available lessons and deliver these lessons using evidence-based teaching practices. We focus on people conducting and supporting research.
How You Will Never Be Able to Trust Generative AI (and Why That's OK)
In this post, I will explain “hallucination” and other memory problems with generative AI. This is one of my longer ones; I will take a deep dive to help you sharpen your intuitions and tune your expectations. But if you’re not up for the whole ride, here’s the short version: Hallucinations and imperfect memory problems are fundamental consequences of the architecture that makes current large language models possible. While these problems can be reduced, they will never go away. AI based on today’s transformer technology will never have the kind of photographic memory a relational database or file system can have. When vendors tout that you can now “talk to your data,” they really mean talk to Steve, who has looked at your data and mostly remembers it. You should also know that the easiest way to mitigate this problem is to throw a lot of carbon-producing energy and microchip-cooling water at it. Microsoft is literally considering building nuclear reactors to power its AI. Their global water consumption post-AI has spiked 34% to 1.7 billion gallons. This brings us back to the coworker analogy. We know how to evaluate and work with our coworkers’ limitations. And sometimes, we decide not to work with someone or hire them for a particular job because the fit is not good. While anthropomorphizing our technology too much can lead us astray, it can also provide us with a robust set of intuitions and tools we already have in our mental toolboxes. As my science geeks say, “All models are wrong, but some are useful.” Combining those models or analogies with an understanding of where they diverge from reality can help you clear away the fear and the hype to make clear-eyed decisions about how to use the technology. I’ll end with some education-specific examples to help you determine how much you trust your synthetic coworkers with various tasks.
I spoke at the 2023 Museum Computer Network conference, in Philadelphia, last week and presented a talk titled Wishful Thinking – A critical discussion of "extended reality" technologies in the cultural heritage sector. These are my notes for the talk. This is what I was planning to say, sticking to these notes closely enough (because the talk was only 15 minutes) that I expected people would notice. I said as much at the beginning of the talk, but by the time I reached the second slide I realized that I had neglected to make sure that the display setup allowed me to see my notes. It did not, and it was too late to do anything about that, so I did the talk from memory. I missed a few points that I wanted to make but, somehow, still managed to capture the gist of it. The first is that the practice of revisiting is the bedrock of the humanities. Revisiting is what distinguishes entertainment from culture. The second is that recall has always been a power dynamic. You see this throughout history: in the right to assembly; in the question of access to basic literacy; in the entire notion of a public library; in the restrictions around intellectual property; in paywalls and the rights to broadcast or re-broadcast. Third, technology is a reflection of worldview. How you think about the role and the function of technology is a pretty good mirror on how you think about the questions of recall and revisiting.
The (open) web is good, actually (13 Nov 2023) – Pluralistic: Daily links from Cory Doctorow
The great irony of the platformization of the internet is that platforms are intermediaries, and the original promise of the internet that got so many of us excited about it was disintermediation – getting rid of the middlemen that act as gatekeepers between community members, creators and audiences, buyers and sellers, etc. The platformized internet is ripe for rent seeking: where the platform captures an ever-larger share of the value generated by its users, making the service worse for both, while lock-in stops people from looking elsewhere. Every sector of the modern economy is less competitive, thanks to monopolistic tactics like mergers and acquisitions and predatory pricing. But with tech, the options for making things worse are infinitely divisible, thanks to the flexibility of digital systems, which means that product managers can keep subdividing the Jenga blocks they are pulling out of the services we rely on. Combine platforms with monopolies with digital flexibility and you get enshittification... But "openness" is a necessary precondition for preservation and access, which are the necessary preconditions for recall and revisiting. Here on the last, melting fragment of the open internet, as tech- and entertainment-barons are seizing control over our attention and charging rent on our ability to talk and think together, openness is our best hope of a new, good internet.
'From the river to the sea' – a Palestinian historian explores the meaning and intent of scrutinized slogan
What does the call “From the river to the sea, Palestine will be free” mean to Palestinians who say it? And why do they keep using the slogan despite the controversy that surrounds its use? As both a scholar of Palestinian history and someone from the Palestinian diaspora, I have observed the decades-old phrase gain new life – and scrutiny – in the massive pro-Palestinian marches in the U.S. and around the world that have occurred during the Israeli bombing campaign in the Gaza Strip in retaliation for Hamas’ Oct. 7 attack on Israel. Pro-Israel groups, including the U.S.-based Anti-Defamation League, have labeled the phrase “antisemitic.” It has even led to a rare censure of House Rep. Rashida Tlaib, the only Palestinian-American member of Congress, for using the phrase. But to Tlaib, and countless others, the phrase isn’t antisemitic at all. Rather, it is, in Tlaib’s words, “an aspirational call for freedom, human rights and peaceful coexistence.”
The average AI criticism has gotten lazy, and that's dangerous - Redeem Tomorrow
Let’s get one thing out of the way: the expert AI ethicists who are doing what they can to educate lawmakers and the public are heroes. While I may not co-sign all of it, I support their efforts to act as a check on the powerful in the strongest possible terms. Unfortunately, a funny thing is happening on the way into popular discourse: most of the AI criticism you’ll hear on any given digital street corner is lazy as hell. We have to up our game if we want a future worth living in.
A collection of assignments and materials inspired by the humanities, for educators curious about how AI affects their students and their syllabi. How do we make informed, intentional decisions about the role of AI in the classroom? How can students develop critical relationships with these tools? How can imaginative applications of AI technologies enhance learning? The AI Pedagogy Project helps educators engage their students in conversations about the capabilities and limitations of AI informed by hands-on experimentation. Educators and education administrators (and parents!) at many levels of education are concerned and curious about AI. High school teachers may want to check out the resources page section for high school educators. We also recommend Part 1: AI Starter for all educators and others, including K12 and higher ed. All assignments in this collection, which will continue to grow over time, were created by educators. Please customize them to your own pedagogical values and classroom needs. If you’re new to these tools (or want to learn more), check out our AI Guide, which has essential information to help you get started. The AI Pedagogy Project was created by the metaLAB (at) Harvard within the Berkman Klein Center for Internet & Society. We have consulted with numerous colleagues, students, and experts in creating this resource. In addition to those who submitted new assignments, we would like to thank the educators who publicly published their materials elsewhere, which permitted us to find them and include them on this site.
This paper focuses on the fact that AI is predominantly portrayed as white—in colour, ethnicity, or both. We first illustrate the prevalent Whiteness of real and imagined intelligent machines in four categories: humanoid robots, chatbots and virtual assistants, stock images of AI, and portrayals of AI in film and television. We then offer three interpretations of the Whiteness of AI, drawing on critical race theory, particularly the idea of the White racial frame. First, we examine the extent to which this Whiteness might simply reflect the predominantly White milieus from which these artefacts arise. Second, we argue that to imagine machines that are intelligent, professional, or powerful is to imagine White machines because the White racial frame ascribes these attributes predominantly to White people. Third, we argue that AI racialised as White allows for a full erasure of people of colour from the White utopian imaginary. Finally, we examine potential consequences of the racialisation of AI, arguing it could exacerbate bias and misdirect concern.
Images of Artificial Intelligence: a Blind Spot in AI Ethics | Philosophy & Technology
This paper argues that AI ethics has generally neglected the issues related to the science communication of AI. In particular, the article focuses on visual communication about AI and, more specifically, on the use of certain stock images characterized by an excessive use of blue and by recurrent subjects such as androgynous faces, half-flesh and half-circuit brains, and variations on Michelangelo’s The Creation of Adam. In the first section, the author refers to a “referentialist” ethics of science communication for an ethical assessment of these images. From this perspective, these images are unethical. While the ethics of science communication generally promotes virtues like modesty and humility, such images are arrogant and overconfident. In the second section, the author uses French philosopher Jacques Rancière’s concepts of “distribution of the sensible,” “disagreement,” and “pensive image.” Rancière’s thought paves the way to a deeper critique of these images of AI. The problem with such images is not their lack of reference to the “things themselves.” It rather lies in the way they stifle any possible form of disagreement about AI. However, the author argues that stock images and other popular images of AI are not a problem per se; they can also be a resource. This depends on the real possibility for these images to support forms of pensiveness. In the conclusion, the author asks whether the kind of ethics or politics of AI images proposed in this article can be applied to AI ethics tout court.