Technology Commentary

Common Sense Is a Bad Thing
Software Development, API development, Industrial IoT
·kevwe.com·
The Why of Reddit Protests
Photo by Brett Jordan on Unsplash Reddit is going dark. In a big way! The community that makes up the platform is protesting against the company’s decision to impose draconian pricin…
·om.co·
To Save the News, We Need an End-to-End Web
This is part five of an ongoing, five-part series. Part one, the introduction, is here. Part two, about breaking up ad-tech companies, is here. Part three, about banning surveillance ads, is here. Part four, about opening up app stores, is here. Once, news organizations enthusiastically piled into...
·eff.org·
Paper: Accident Report Interpretation
This week's paper is titled Accident Report Interpretation [https://www.mdpi.com/2313-576X/4/4/46/htm] by Derek Heraghty. In it, the author takes a real-life incident review from a construction site, writes two other variations of it using the same factual information that went into the first one, sends each version to a distinct group of people for recommendations, and compares the results. I particularly like it because I once tried something similar in a much less scientific manner, and it's really nice to see someone run that experiment for real.

I've covered other papers that looked at language and interpretation before, and more are cited in this one, such as a study in which describing crime as a "beast" led to enforcement-based solutions while describing it as a "virus" led to proposed solutions focused on social reform.

This paper specifically looks at incident reports and makes the supposition that reports that evoke blame and fault in operators can have dire long-term consequences for organizations:

This leads organisations to believe that it is the operator who is the main contributor to accidents and it is the operator who is the problem within their system which requires rectification. [...] Workforces are much less likely to report mistakes for fear of retribution, creating an organisational culture where a chief executive officer (CEO) only learns of the problems within their organisation after a person is seriously injured. Using punishment to deal with error is likely to create a culture where the workforce resent the very system that was supposed to enable them to work safely because of it using them as the sacrificial lamb to appease society when something goes wrong. The greatest impediment to learning from mistakes is the use of punishment as you cannot have a system which punishes those who make mistakes while also trying to maintain a learning culture which needs people to be open and honest when an error is made.

The stance safety folks aim for nowadays is one where people ask how to fix the issues, not the people, and where people who are part of incidents get help to recover rather than being vilified. Specifically when writing a report, the idea goes that for accident investigators "what you look for is what you find": seek blame and you'll find blame; look for a systemic framing and you'll find systemic recommendations. Readers depend on the report to form their own views, and this paper tries to demonstrate this empirically by writing 3 reports, sending each to a group of 31 people, asking each reader for 3 recommendations, and analyzing the results.

I won't show the reports—most of the paper is the 3 of them put in appendices—but the broad strokes are:

1. Variant 1: the original report written for an actual incident, which tries to be fact-based and come up with an objective, linear telling of the events. In doing so, it ends up centered on the decisions and actions of workers and tends to ignore the underlying conditions that influenced their decision-making.
2. Variant 2: a report that takes a very system-based approach. Many frameworks exist (SCAD, FRAM, STAMP, Accimaps, ...), and the authors settled on a SWOT analysis [https://en.wikipedia.org/wiki/SWOT_analysis] focused on briefings, personnel, tools and equipment, the work environment, and task execution. The goal of this report is to see how different parts of the system dealt with managed and unmanaged risks.
3. Variant 3: follows the multiple-stories approach, where rather than trying to come up with a more objective re-telling, each participant's verbatim re-telling of the event is made accessible, so they are heard and can explain what was going on and what felt significant to each of them. Front-line operatives are seen as victims rather than perpetrators.

Notably, all 3 reports are entirely factual (the author did not invent anything that did not come up in the investigation), but the approach taken for each means some facts are omitted or emphasized differently. The recommendations were then coded for analysis [https://en.wikipedia.org/wiki/Coding_(social_sciences)] and the effect is quite noticeable:

A chart showing the result for each of the 3 variants on two axes: human+blame and system focus. Approximately, the first variant shows a roughly 28%/72% divide, the second variant shows an 8%/92% divide, and the third variant shows a 5%/95% divide between human and system recommendations [https://s3.us-east-2.amazonaws.com/ferd.ca/cohost/interpret-fig1.png]

A table showing all recommendations divided into 9 sub-categories for each report [https://s3.us-east-2.amazonaws.com/ferd.ca/cohost/interpret-table1.png]

Some interesting effects pointed out:

* Only readers of Variant 1 proposed punitive options
* Readers of Variant 1 proposed far more individual-based solutions (training, reinforcement) than readers of other variants
* Both Variants 2 and 3 showed a definite preference for solutions based on a systemic view, Variant 3 a little more so than Variant 2
* Readers of Variant 2 proposed changes to the physical workplace and to work practices more than readers of other variants
* Readers of Variant 3 proposed changes or reinforcement to practices not directly related to the incident in higher proportions than others

One interesting element the author points out is that reports similar to Variant 1 are often favoured because those like Variants 2 and 3 are seen as opinion-based or hearsay rather than factual. But in doing so, people "prove" errors in hindsight, often because it is simply easier to do that than to prove systems are "broken" when each individual part functions as designed. Variants 2 and 3 allow more room for background information and individual factors that would otherwise be omitted.

Once again, these three reports start from the same data sources, but depending on how they are framed, they construct entirely different perceptions in readers, who in turn propose different types of corrective actions, which impact different parts of the organization going forward. In the end, Variant 1 tends to focus on "who" did something, Variant 2 moves the focus to the "what" rather than the "who", and Variant 3 creates a focus on the constraints people were facing and a deeper understanding of the world they were operating in.
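As a rough sketch of what that coded analysis computes, here is a toy tally. The code is mine, not the paper's, and the per-variant counts are hypothetical: they are picked so that 31 readers × 3 recommendations per variant (the paper's setup) line up with the approximate percentages visible in the figure.

```python
from collections import Counter

# Hypothetical coded recommendations, in the spirit of the paper's analysis.
# Each recommendation is coded either "human" (blame/individual focused:
# punishment, retraining, reinforcement) or "system" (process, environment,
# tooling, practices). 93 recommendations per variant = 31 readers x 3 each.
coded = {
    "variant_1": ["human"] * 26 + ["system"] * 67,  # roughly 28% / 72%
    "variant_2": ["human"] * 7 + ["system"] * 86,   # roughly 8% / 92%
    "variant_3": ["human"] * 5 + ["system"] * 88,   # roughly 5% / 95%
}

def split(recommendations):
    """Return the human/system percentages for one report variant."""
    counts = Counter(recommendations)
    total = sum(counts.values())
    return {code: round(100 * n / total) for code, n in counts.items()}

for variant, recs in coded.items():
    print(variant, split(recs))
```

The point of the exercise is only that the same incident, reframed, shifts this split drastically toward the systemic side.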
The author suspects that these framings are what drive people to make different recommendations and changes to the work environment, and concludes:

Our results suggest that the pursuit of a linear report based only on facts determined as important by the author may increase the potential for recommended actions to be blame-focussed and impede the organisation from dealing with more serious issues which have been deemed irrelevant by the author. [...] As with accident reports themselves, readers should take care in under or over interpreting the results from this study. The results are strongly suggestive that accident report style influences accident analysis outcomes—but alternate interpretations can be drawn from the results.
·cohost.org·
Paper: Imaginaries of Omniscience
The last paper I annotated [https://cohost.org/mononcqc/post/1591376-paper-accident-repo] led people I was chatting with to make a statement along the lines of "it's funny how when we aim mainly for facts so it feels more objective, it ends up obscuring a much richer and more useful picture you'd get from more subjective descriptions of experience." Someone then nerd-sniped the conversation with a paper from Lucy Suchman titled Imaginaries of omniscience: Automating intelligence in the US Department of Defense [https://journals.sagepub.com/doi/10.1177/03063127221104938], which I'm covering here.

It's a bit different from most of the papers I read, since it advances a political point of view by weaving together US foreign policy and approaches to unconventional warfare, the desire for AI as an approach to signal processing, concepts from cybernetics and the OODA loop [https://en.wikipedia.org/wiki/OODA_loop], drone killings, and the importance of the press. That's a tall order, but a very interesting paper, albeit a tricky one to annotate.

The paper weaves in the history of US military policy throughout. It's not my forte, but it's also impossible to separate from the paper's more cognitive aspects, so I'll try a quick rundown of the various points made by the author:

* Starting with WWII, US foreign policy became a mandate for global military supremacy
* A metaphor for this stance is one of a "closed world" or "dome of global technological oversight" that started with Truman in 1946 and reinforced itself through Vietnam in the 60s and the cold war arms race in the 1980s
* That vision turned into "building weapons, systems, and strategies whose components could function in a seamless web", which the author describes as a "fantasy of total surveillance and complete control over the battlefield from the safety of a distant, high-tech command center"
* This caused further centralization of operations, and widened a gap between the official discourse of success and the pessimistic assessments of independent observers and soldiers on the ground
* The battlefield progressively turned into a "hunting ground"—special forces, pop-up bases, no lengthy occupation, and precision strikes as a favoured approach
* A shift to counter-terrorism geared resources towards eliminating "imagined but potentially catastrophic" futures, which itself creates the conditions for these to happen

She states:

the closed world and its theaters of operation rest upon an objectivist onto-epistemology that takes as self-evident the independent existence of a world ‘out there’ to which military action is a necessary response.

I had to look up "onto-epistemology" and it wasn't the clearest of things, but I understood it to mean what we know, how we get to know it, and how things come to be; specifically, in this case, a view based on the observation of demonstrable facts.
This puts a lot of the burden on the actual data gathering, an approach coined under the broader term situational awareness ("the perception of environmental elements and events with respect to time or space, the comprehension of their meaning, and the projection of their future status"). In this view, the stimulus is considered external to the actor observing it, and visible responses are considered to be an effect of having observed the stimulus. In traditional cybernetics views, this is the "human in the loop" of weapon systems, which brings us to the cycle known as the "Observe, Orient, Decide and Act" (OODA) loop. A simple view of it looks like this:

a simplified OODA loop, where observe points to orient, orient to decide, decide to act, and act back to observe [https://s3.us-east-2.amazonaws.com/ferd.ca/cohost/ooda-simple.png]

This view, however, over-represents the decision-making aspect; the more classical cybernetics loop looks more like this:

A much more complex loop that expands each of the words and adds feedback mechanisms across most steps [https://s3.us-east-2.amazonaws.com/ferd.ca/cohost/ooda.png]

Many decisions are really more "automated" pattern matches made in the "orientation" part, which contains people's mental models of reality, and those models impact the predictions made about the effects of actions:

Within the context of war fighting, effective operations under the OODA model require that ‘our’ side have a shared Orientation, [...] consistent ‘overall mind time-space scheme’ or a ‘common outlook [that] represents a unifying theme that can be used to simultaneously encourage subordinate initiative yet realize superior intent’. At its imagined ideal, this shared mental model obviates the need for explicit command and control, as the force operates as a single body.

Situational awareness can be framed as covering both the "Observe" and "Orient" steps.
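As a toy illustration (entirely my own, not from the paper), the simple OODA cycle can be sketched as a loop in which orientation, the mental model, both filters what an observation means and does much of the implicit deciding. Every name in this sketch is made up:

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    """Toy OODA-loop actor: orientation (a mental model) interprets what is
    observed, and decisions largely fall out of that interpretation."""
    mental_model: dict = field(default_factory=dict)

    def observe(self, environment):
        # Raw stimulus from the world "out there", external to the actor.
        return environment.get("signal")

    def orient(self, observation):
        # The observation is interpreted through the existing mental model;
        # unknown stimuli get folded into the model as-is (feedback).
        interpretation = self.mental_model.get(observation, "unknown")
        self.mental_model.setdefault(observation, interpretation)
        return interpretation

    def decide(self, interpretation):
        # With a strong orientation, the "decision" is mostly a pattern
        # match already made during the orient step.
        return "respond" if interpretation != "unknown" else "wait"

    def act(self, decision, environment):
        # Acting changes the environment, which feeds the next observation.
        environment["last_action"] = decision
        return environment

# One pass through Observe -> Orient -> Decide -> Act (-> back to Observe).
actor = Actor(mental_model={"radar_blip": "threat"})
env = {"signal": "radar_blip"}
decision = actor.decide(actor.orient(actor.observe(env)))
env = actor.act(decision, env)
print(env["last_action"])  # → respond
```

Note how the outcome hinges entirely on what the mental model already contains, which is exactly the point the paper makes about a shared Orientation doing the real work.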
Being able to observe and know what is going on is what sets the US toward the goal of "information dominance." This, however, leads to issues: more sensors are needed, more processing is needed, and therefore more automation is required. That, in turn, made the DoD buy into AI as a solution. Still paraphrasing a lot of content here:

* A Defense Innovation Advisory Board (DIB) was created, staffed with Silicon Valley people, to fund "start-up" military R&D projects
* They pushed for AI harder and harder, as the promise of better-than-human capabilities is taken as definitively achievable, both in terms of accuracy and in properly identifying who is or isn't a target
* Any risk or demonstrated weakness in AI is seen as a need for even further investment in AI
* Sharing data across departments and historical standards is excessively difficult
* They believed the commercial sector was best equipped to lead development with an enterprise cloud solution
* Most sources are really vague about what the sources for training data would be
* By making the future of security rely on big data, big data becomes its own weak target, requiring data centers in bunkers and various security mechanisms to prevent sabotage

In the end, the author states:

Utopian futures of profit and a conjured specter of disaster are conjoined. The disaster anticipated redoubles itself, as the promised solution to an insufficiency of data becomes a new site of vulnerability.

Circling back to the OODA loop: because orientation is a crucial step, it follows that orienting the now-unavoidable AI is also a key element if we want it to make decisions rapidly and effectively.
A few issues exist there: trying to gather so much noisy data that you can still extract adequate signal (which tends never to pan out well), but also the fact that labelling that data and orienting the AI tend to re-encode existing "imperial impulses", which come with their share of violent patterns, often aligned along racial lines. Technology plays an important role in legitimizing some of these views by putting a sharp focus on some elements and placing unwieldy ones outside of its scope.

For example, target identification demands making life-and-injury decisions based on sensor data, and fundamentally demands a classification on a civilian/combatant axis, an ultimately binary choice. As conflict shifted toward civilian areas, however, the space for someone to be considered a "civilian" has consistently been shrinking, legitimizing more and more "extra-legal state violence". Keeping the idealized image of AI requires suppressing inconvenient truths, and good discrimination of patterns within signal and noise demands having created that pattern in the first place, which relies on existing ideologies.

The author lists various military incidents, drone killings, the tendency of the US military to blow up the evidence (and bodies) of those who could have confirmed whether a decision was actually adequate, and so on. To make it short, the OODA-loop approach with AI in a military context, in its chase for objective facts, tends to ignore its own initial framing and creates a poorer, narrower view of the world that reinforces its own existing patterns.
To counter that effect, the author concludes that rather than improving the effort at data gathering and analysis, situational awareness could be improved by broadening the frame of reference used, if not by outright reversing it:

Expanding situational awareness would require an inversion of current practices so that all of those killed in an operation would be assumed innocent until an administration was able to prove otherwise. [...] The aspiration to closure, integral to the logics of an international order based in military dominance, propels the destructiveness of a US foreign policy that regenerates the insecurities that it ostensibly eradicates. The closed world relies, moreover, on forms of systemic ignorance required to maintain the premise that war fighting can be conducted rationally through a seamless web of technologically generated situational awareness. [...] This premise rests upon the conflation of signals with information, through erasure of the situated knowledges through which information is produced. [...] I have suggested that the most powerful alternative to closed-world knowledge making is investigative journalism and other modes of on-the-ground research and reporting. These accounts convey the radical openness of war, foregrounding its associated injuries, challenging the military’s attempt to make clean demarcations where there are none to be made, and demonstrating knowledge-producing practices that do not fit the military’s imaginaries of omniscience.

Once again, this was a challenging paper for me to review, very much outside my wheelhouse, but given my recent reviews and the current AI context, I found its account of self-reinforcing framing difficult to ignore as a source of generalizable lessons.
·cohost.org·
The Surprising Power of Documentation
I’m a big fan of documentation. I think it’s my favorite boring thing to do after coding. It brings the business so much long-term value that every hour
·vadimkravcenko.com·
Precision in Technical Communication
Quantifying the Costs of Imprecise Communication in Remote Environments
·makeartwithpython.com·
The Long View
This blog has been looking like my personal obituary section, and I suppose it is. While I promise to change that, for this post I’ll stick with the theme a bit, and surface some corresponden…
·blogs.harvard.edu·
When DRY goes wrong
DRY has become a mantra throughout the industry. Any time repetitive code shows up, DRY gets applied as a cure-all. If you even start to question DRYing up a piece of code, you are viewed as a heretic to the entire industry. OK, maybe it's not that bad, but many times DRY gets applied without much thought. This careless application of DRY leads to brittle code, making even simple changes scary because they could have a huge ripple effect.
·hackeryarn.com·
Can DevEx Metrics Drive Developer Productivity?
A new approach to developer experience (DevEx) looks to answer: what should leaders measure and focus on to improve developer productivity?
·thenewstack.io·
Coming Soon: AutoOps
We have DevOps, GitOps, DevSecOps, IaC, CloudOps, AIOps … and the list is still growing. But they all move toward one thing: AutoOps.
·devops.com·
How Will AI Impact Cybersecurity? - HackerRank Blog
An inside look at the effects artificial intelligence is having on cybersecurity, including the benefits, opportunities, and challenges.
·hackerrank.com·