Links to read

66 bookmarks
Adam Curtis on the Dangers of Self-Expression
Adam Curtis, BBC journalist and the documentarian behind HyperNormalisation, discusses art, individualism, power, myth, and the complications of self-expression.
Careful technology
Dear friends, There is a commonplace opinion that technology and the natural world, or that technological pursuits and natural pursuits, are at odds. An example: I think this is a false position. But if this kind of sentiment is so often repeated, it's worth thinking about why it feels true.
Human Scale
Offscreen is an independent print magazine that examines how we shape technology and how technology shapes us.
Shades of co-design — Emma Blomkamp
Three lenses to identify various shades of co-design, finding a balance between purism and pragmatism. Instead of asking, “but is it co-design?”, these lenses are each a way of asking: “how much co-design is there?”
Emerging best practices for disclosing AI-generated content | Kontent.ai
Generative AI is already being used widely in enterprises by employees and small teams, often without the knowledge of executives or content leadership.
A recent article in Business Insider discussed “the hidden wave of employees using AI on the sly,” a practice known as “shadow IT.”
ungoverned use of generative AI carries a host of risks. Given that its use is already becoming the norm, organizations need policies and processes governing it, especially around disclosing that use.
Disclosing AI use enhances the value of the content. It can improve its internal governance as well as its acceptance by readers. It gives visibility to your organization’s values relating to AI use.
The issue is not just how customers feel about AI but how they feel about it being used without their knowledge.
Human-created content may seem “real,” but it is not necessarily safe, while synthetic or machine-created content is not necessarily unsafe or unhelpful. That’s why more transparency is required.
Transparency about AI usage can set more realistic expectations about the content and reduce misunderstanding concerning the helpfulness or accuracy of the content.
“learned trust is a result of system performance characteristics as well as design features that color how performance is interpreted.”
Cringy or inaccurate content damages a brand. Publishers can’t afford to explain their use of AI only after a problem arises. They need to preempt misunderstandings.
A growing number of organizations are embracing the principle of Responsible AI and committing to the “three Hs”: that their outputs are helpful, honest, and harmless.
AI is a black box: what the AI is doing is not obvious to consumers. But they know that the misuse of AI can cause harm.
Generative AI makes it easier to convert content from one medium to another, allowing channel-agnostic omnichannel content to become multi-modal transmedia – content that can be represented in different media formats and accessed via different modalities.
Be careful not to give bots human attributes. The AP, whose style guidance is relied upon by numerous corporations and news outlets, recently issued guidelines about how to refer to ChatGPT and similar tools: “Avoid language that attributes human characteristics to these tools, since they do not have thoughts or feelings but can sometimes respond in ways that give the impression that they do.”
Give viewers hints that the content is AI-generated. Make clear from the representation that the content is bot-created rather than human-created.
steer away from hyperrealism, such as mimicking human language traits like slang. The voice and tone of the generated content will be important. Avoid sounding overly familiar, as if the bot were an old friend.
In most situations, behavioral signals aren’t enough to inform audiences that content is generated by bots. They need more explicit statements.
Should AI be listed as the author of AI-generated content? Most experts agree that it should not be.
Reveal your content development process. Rather than list AI as the author, brands can disclose how AI was involved in the development of the content.
Communicate disclosure using a consistent content structure. Those publishing AI-generated content can draw on a range of disclosure elements, which can be used individually or in combination, depending on the circumstances.
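One way to make that consistent structure concrete is to model disclosure elements as fields on a content type. The sketch below is a hypothetical TypeScript illustration, not Kontent.ai's actual content model or API; every type and field name (AiDisclosure, contribution, readerStatement, and so on) is invented for the example.

```typescript
// Hypothetical content model showing how disclosure elements might be
// combined in a structured content type. Names are illustrative only,
// not drawn from any specific CMS.

/** How AI participated in producing this content item. */
type AiContribution =
  | "none"              // fully human-created
  | "ai-assisted"       // human-written, AI-edited or AI-researched
  | "ai-generated"      // AI-drafted, human-reviewed
  | "fully-automated";  // AI-generated and published without review

interface AiDisclosure {
  contribution: AiContribution;
  /** Plain-language statement shown to readers, e.g. as a byline note. */
  readerStatement?: string;
  /** Model or tool used, kept for internal governance rather than display. */
  toolUsed?: string;
  /** The human who reviewed the output, if any. */
  reviewedBy?: string;
  reviewedAt?: Date;
}

interface Article {
  title: string;
  body: string;
  author: string; // always a person or organization, never the AI tool
  disclosure: AiDisclosure;
}

// Example: an AI-drafted article, reviewed by an editor before publishing.
const post: Article = {
  title: "Quarterly product update",
  body: "…",
  author: "Acme Communications Team",
  disclosure: {
    contribution: "ai-generated",
    readerStatement:
      "This article was drafted with the help of generative AI and reviewed by our editorial team.",
    toolUsed: "internal LLM assistant",
    reviewedBy: "J. Editor",
    reviewedAt: new Date("2024-01-15"),
  },
};
```

Keeping the reader-facing statement separate from the internal governance fields lets one record drive both the on-page disclosure note and an audit trail, and the elements can be used individually or in combination as the article suggests.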
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoor behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoor behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.
The Passion Recipe: Four Steps To Total Fulfillment
Over the past few decades, “passion” has been declared everything from the secret to successful entrepreneurship to the foundation of a meaningful life. It’s the magic pill alright, which is exactly the problem.
Ladder of Citizen Participation – Organizing Engagement
Proposed by Sherry Arnstein in 1969, the Ladder of Citizen Participation is one of the most widely referenced and influential models in the field of democratic public participation.
Arnstein’s penetrating, no-nonsense, even pugnacious analysis advanced a central argument that remains as relevant today as it was in 1969: citizen participation in democratic processes, if it is to be considered “participation” in any genuine or practical sense, requires the redistribution of power.
when the have-nots define participation as redistribution of power, the American consensus on the fundamental principle explodes into many shades of outright racial, ethnic, ideological, and political opposition.
citizen participation is a categorical term for citizen power. It is the redistribution of power that enables the have-not citizens, presently excluded from the political and economic processes, to be deliberately included in the future.
manipulation occurs when public institutions, officials, or administrators mislead citizens into believing they are being given power in a process that has been intentionally manufactured to deny them power.
therapy occurs when public officials and administrators “assume that powerlessness is synonymous with mental illness,” and they create pseudo-participatory programs that attempt to convince citizens that they are the problem when in fact it’s established institutions and policies that are creating the problems for citizens.
informing “citizens of their rights, responsibilities, and options can be the most important first step toward legitimate citizen participation,” she also notes that “too frequently the emphasis is placed on a one-way flow of information—from officials to citizens—with no channel provided for feedback and no power for negotiation…meetings can also be turned into vehicles for one-way communication by the simple device of providing superficial information, discouraging questions, or giving irrelevant answers.”
when consultation is “not combined with other modes of participation, this rung of the ladder is still a sham since it offers no assurance that citizen concerns and ideas will be taken into account. The most frequent methods used for consulting people are attitude surveys, neighborhood meetings, and public hearings.”
placation occurs when citizens are granted a limited degree of influence in a process, but their participation is largely or entirely tokenistic: citizens are involved only to demonstrate that they were involved.
partnership occurs when public institutions, officials, or administrators allow citizens to negotiate better deals, veto decisions, share funding, or put forward requests that are at least partially fulfilled.
delegated power occurs when public institutions, officials, or administrators give up at least some degree of control, management, decision-making authority, or funding to citizens.
citizen control occurs, in Arnstein’s words, when “participants or residents can govern a program or an institution, be in full charge of policy and managerial aspects, and be able to negotiate the conditions under which ‘outsiders’ may change them.”
GitHub - researchops/research-skills: Data, graphics, and insights from the ReOps "Research Skills Framework" project. Also includes workshop package and guide materials from the 2019 Workshop Series.
The ResearchOps Community are people interested in making the work of research work. This is one part of that effort: taking a look at what it is that researchers really do, where the challenges are, and how they'd like to push their practice forward. It's a workshop developed by researchers, for researchers, along with some of the resulting data from our workshops, in an attempt to help connect our local research communities and start a thread of meaningful research-focused conversations.