The CrowdStrike Outage and Market-Driven Brittleness
Redundancies are unprofitable. Being slow and careful is unprofitable. Being less embedded in, and less essential to, customers’ networks and machines, and having less access to them, is unprofitable—at least in the short term, by which these companies are measured. This is true for companies like CrowdStrike. It’s also true for CrowdStrike’s customers, who likewise had no resilience, redundancy, or backup systems in place for failures such as this, because those are also an expense that affects short-term profitability.
The market rewards short-term profit-maximizing systems, and doesn’t sufficiently penalize such companies for the impact their mistakes can have. (Stock prices dip only temporarily. Regulatory penalties are minor. Class-action lawsuits settle. Insurance blunts financial losses.) It’s not even clear that the information technology industry could exist in its current form if it had to take into account all the risks such brittleness causes.
The asymmetry of costs is largely due to our complex dependence on so many interconnected systems and technologies, any one of which can cause major failures. Each piece of software depends on dozens of others, typically written by other engineering teams, sometimes years earlier, on the other side of the planet. Some software systems have not been properly designed to contain the damage caused by a bug in, or a hack of, a key software dependency.
This market force has led to the current global interdependence of systems, reaching far beyond their original industries and scope. It’s why flying planes depends on software that has nothing to do with the avionics. It’s why, in our connected internet-of-things world, we can imagine a similar bad software update resulting in our cars not starting one morning or our refrigerators failing.
Right now, the market incentives in tech are to focus on how things succeed: A company like CrowdStrike provides a key service that checks off required functionality on a compliance checklist, which makes it all about the features that they will deliver when everything is working. That’s exactly backward. We want our technological infrastructure to mimic nature in the way things fail. That will give us deep complexity rather than just surface complexity, and resilience rather than brittleness.
Netflix is famous for its Chaos Monkey tool, which intentionally causes failures to force the systems (and, really, the engineers) to be more resilient. The incentives don’t line up in the short term: It makes it harder for Netflix engineers to do their jobs and more expensive for them to run their systems. Over years, this kind of testing generates more stable systems. But it requires corporate leadership with foresight and a willingness to spend in the short term for possible long-term benefits.
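Netflix’s real tooling is far more involved, but the principle fits in a few lines. Below is a toy sketch in Python of the same failure-injection idea: a hypothetical decorator (all names invented here, nothing to do with Netflix’s actual API) that makes a small fraction of service calls fail, so callers are forced to handle errors instead of assuming success.

```python
import functools
import random

class InjectedFailure(RuntimeError):
    """A deliberately injected, chaos-style failure."""

def chaos(failure_rate=0.05):
    """Make the wrapped call fail at random with the given probability."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if random.random() < failure_rate:
                raise InjectedFailure(f"injected failure in {fn.__name__}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@chaos(failure_rate=0.05)
def fetch_recommendations(user_id):
    # Stand-in for a real service call.
    return ["title-" + str(user_id)]

# Callers now have to survive the failure path, which is the whole point:
try:
    fetch_recommendations(42)
except InjectedFailure:
    pass  # fall back to a cached or default response
```

The short-term cost is exactly as described: every caller gets more complicated. The long-term payoff is that the failure path is exercised constantly instead of for the first time during an outage.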
The National Highway Traffic Safety Administration crashes cars to learn what happens to the people inside. But cars are relatively simple, and keeping people safe is straightforward. Software is different. It is diverse, is constantly changing, and has to continually adapt to novel circumstances. We can’t expect that a regulation that mandates a specific list of software crash tests would suffice. Again, security and resilience are achieved through the process by which we fail and fix, not through any specific checklist. Regulation has to codify that process.
·lawfaremedia.org·
The Complex Problem Of Lying For Jobs — Ludicity

Claude summary: Key takeaway: Lying on job applications is pervasive in the tech industry due to systemic issues, but it creates an "Infinite Lie Vortex" that erodes integrity and job satisfaction. While honesty may limit short-term opportunities, it's crucial for long-term career fulfillment and ethical work environments.

Summary

  • The author responds to Nat Bennett's article against lying in job interviews, acknowledging its validity while exploring the nuances of the issue.
  • Most people in the tech industry are already lying or misrepresenting themselves on their CVs and in interviews, often through "technically true" statements.
  • The job market is flooded with candidates who are "cosplaying" at engineering, making it difficult for honest, competent individuals to compete.
  • Many employers and interviewers are not seriously engaged in engineering and overlook actual competence in favor of congratulatory conversation and superficial criteria.
  • Most tech projects are "default dead," making it challenging for honest candidates to present impressive achievements without embellishment.
  • The author suggests that escaping the "Infinite Lie Vortex" requires building financial security, maintaining low expenses, and cultivating relationships with like-minded professionals.
  • Honesty in job applications may limit short-term opportunities but leads to more fulfilling and ethical work environments in the long run.
  • The author shares personal experiences of navigating the tech job market, including instances of misrepresentation and the challenges of maintaining integrity.
  • The piece concludes with a satirical, honest version of the author's CV, highlighting the absurdity of common resume claims and the value of authenticity.
  • Throughout the article, the author maintains a cynical, humorous tone while addressing serious issues in the tech industry's hiring practices and work culture.
  • The author emphasizes the importance of self-awareness, continuous learning, and valuing personal integrity over financial gain or status.
If your model is "it's okay to lie if I've been lied to" then we're all knee deep in bullshit forever and can never escape Transaction Cost Hell.
Do I agree that entering The Infinite Lie Vortex is wise or good for you spiritually? No, not at all, just look at what it's called.
It is very common practice on the job market to have a CV that obfuscates the reality of your contribution at previous workplaces. Putting aside whether you're a professional web developer because you got paid $20 by your uncle to fix some HTML, the issue with lying lies in the intent behind it. If you have a good idea of what impression you are leaving your interlocutor with, and you are crafting statements such that the image in their head does not map to reality, then you are lying.
Unfortunately, thanks to our dear leader's masterful consummation of toxicity and incompetence, the truth of the matter is that:

  • They left their previous job due to burnout related to extensive bullying, which future employers would like to know because they would prefer to blacklist everyone involved to minimize their chances of getting the bad actor. Everyone involved thinks that they were the victim, and an employer does not have access to my direct observations, so this is not even an unreasonable strategy.
  • All their projects were failures through no fault of their own, in a market where everyone has "successfully designed and implemented" their data governance initiatives, as indicated previously.
What I am trying to say is that I currently believe that there are not enough employers who will appreciate honesty and competence for a strategy of honesty to reliably pay your rent. My concern, with regards to Nat's original article, is that the industry is so primed with nonsense that we effectively have two industries. We have a real engineering market, where people are fairly serious and gather in small conclaves (only two of which I have seen, and one of those was through a blog reader's introduction), and then a gigantic field of people that are cosplaying at engineering. The real market is large in absolute terms, but tiny relative to the number of candidates and companies out there. The fake market is all people that haven't cultivated the discipline to engineer but nonetheless want software engineering salaries and clout.
There are some companies where your interviewer is going to be a reasonable person, and there you can be totally honest. For example, it is a good thing to admit that the last project didn't go that well, because the kind of person that sees the industry for what it is, and who doesn't endorse bullshit, and who works on themselves diligently - that person is going to hear your honesty, and is probably reasonably good at detecting when candidates are revealing just enough fake problems to fake honesty, and then they will hire you. You will both put down your weapons and embrace. This is very rare. A strategy that is based on assuming this happens if you keep repeatedly engaging with random companies on the market is overwhelmingly going to result in a long, long search. For the most part, you will be engaged in a twisted, adversarial game with actors who will relentlessly try to do things like make you say a number first in case you say one that's too low.
Suffice it to say that, if you grin in just the right way and keep a straight face, there is a large class of person that will hear you say "Hah, you know, I'm just reflecting on how nice it is to be in a room full of people who are asking the right questions after all my other terrible interviews." and then they will shake your hand even as they shatter the other one patting themselves on the back at Mach 10. I know, I know, it sounds like that doesn't work but it absolutely does.
Neil Gaiman On Lying
> People get hired because, somehow, they get hired. In my case I did something which these days would be easy to check, and would get me into trouble, and when I started out, in those pre-internet days, seemed like a sensible career strategy: when I was asked by editors who I'd worked for, I lied. I listed a handful of magazines that sounded likely, and I sounded confident, and I got jobs. I then made it a point of honour to have written something for each of the magazines I'd listed to get that first job, so that I hadn't actually lied, I'd just been chronologically challenged... You get work however you get work.
Nat Bennett, of Start Of This Article fame, writes:
> If you want to be the kind of person who walks away from your job when you're asked to do something that doesn't fit your values, you need to save money. You need to maintain low fixed expenses. Acting with integrity – or whatever it is that you value – mostly isn't about making the right decision in the moment. It's mostly about the decisions that you make leading up to that moment, that prepare you to be able to make the decision that you feel is right.
As a rough rule, if I've let my relationship with a job deteriorate to the point that I must leave, I have already waited way too long, and will be forced to move to another place that is similarly upsetting.
And that is, of course, what had gradually happened. I very painfully navigated the immigration process, trimmed my expenses, found a position that is frequently silly but tolerable for extended periods of time, and started looking for work before the new gig, mostly the same as the last gig, became unbearable. Everything other than the immigration process was burnout induced, so I can't claim that it was a clever strategy, but the net effect is that I kept sacrificing things at the altar of Being Okay With Less, and now I am in an apartment so small that I think I almost fractured my little toe banging it on the side of my bed frame, but I have the luxury of not lying.
If I had to write down what a potential exit pathway looks like, it might be:

  1. Find a job even if you must navigate the Vortex. It doesn't matter if it's bad, because there's a grace period where your brain is not soaking up the local brand of madness, i.e., when you don't even understand the local politics yet.
  2. Meet good programmers that appreciate things like mindfulness in your local area. You're going to have to figure out how to do this one.
  3. Repeat Step 1 and Step 2 on a loop, building yourself up as a person, engineer, and friend, until someone who knows you for you hires you based on your personality and values, rather than "I have seven years doing bullshit in React that clearly should have been ten raw HTML pages served off one Django server".
A CEO here told me that he asks people to self-evaluate their skill on a scale of 1 to 10, but he actually has solid measures: you're a 10 at Python if you're a core maintainer, a 9 if you speak at major international conferences, and so on. On that scale, I'm a 4, or maybe a 5 on my best day ever, and that's the sad truth. We'll get there one day.
I will always hate writing code that moves the overall product further from Quality. I'll write a basic feature and take shortcuts, but not the kind that we are going to build on top of, which is unattractive to employers because sacrificing the long-term health of a product is a big part of status laundering.
The only piece of software I've written that is unambiguously helpful is this dumb hack that I used to cut up episodes of the Glass Cannon Podcast into one minute segments so that my skip track button on my underwater headphones is now a janky fast forward one minute button. It took me like ten minutes to write, and is my greatest pride.
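For the curious, a hack like that plausibly fits in a dozen lines. Here is a minimal sketch of the approach, assuming ffmpeg is installed and the episode is an MP3; the filenames are made up and this is a guess at the method, not the author's actual script. ffmpeg's segment muxer cuts audio into fixed-length chunks without re-encoding.

```python
import subprocess

def split_into_minutes(episode_path, out_pattern="segment_%04d.mp3"):
    """Cut an audio file into one-minute chunks without re-encoding."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", episode_path,      # input episode
            "-f", "segment",         # ffmpeg's segment muxer
            "-segment_time", "60",   # one chunk per minute
            "-c", "copy",            # copy the audio stream as-is
            out_pattern,             # segment_0000.mp3, segment_0001.mp3, ...
        ],
        check=True,
    )

split_into_minutes("glass_cannon_episode.mp3")
```

With the episode split this way, the skip-track button advances exactly one minute, which is the whole janky fast-forward trick.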
Have I actually worked with Google? My CV says so, but guess what, not quite! I worked on one project where the money came from Google, but we really had one call with one guy who said we were probably on track, which we definitely were not!
Did I salvage an A$1.2M project? Technically yes, but only because I forced the previous developer to actually give us his code before he quit! This is not replicable, and then the whole engineering team quit over a mandatory return to office, so the application never shipped!
Did I save a half million dollars in Snowflake expenses? CV says yes, reality says I can only repeat that trick if someone decided to set another pile of money on fire and hand me the fire extinguisher! Did I really receive departmental recognition for this? Yes, but only in that they gave me A$30 and a pat on the head and told me that a raise wasn't on the table.
Was I the most highly paid senior engineer at that company? Yes, but only because I had insider information that four people quit in the same week, and used that to negotiate a 20% raise over the next highest salary - the decision was based around executive KPIs, not my competence!
·ludic.mataroa.blog·
The $2 Per Hour Workers Who Made ChatGPT Safer
The story of the workers who made ChatGPT possible offers a glimpse into the conditions in this little-known part of the AI industry, which nevertheless plays an essential role in the effort to make AI systems safe for public consumption. “Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” says the Partnership on AI, a coalition of AI organizations to which OpenAI belongs. “This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind.”
This reminds me of [[On the Social Media Ideology - Journal 75 September 2016 - e-flux]]:
> Platforms are not stages; they bring together and synthesize (multimedia) data, yes, but what is lacking here is the (curatorial) element of human labor. That’s why there is no media in social media. The platforms operate because of their software, automated procedures, algorithms, and filters, not because of their large staff of editors and designers. Their lack of employees is what makes current debates in terms of racism, anti-Semitism, and jihadism so timely, as social media platforms are currently forced by politicians to employ editors who will have to do the all-too-human monitoring work (filtering out ancient ideologies that refuse to disappear).
Computer-generated text, images, video, and audio will transform the way countless industries do business, the most bullish investors believe, boosting efficiency everywhere from the creative arts, to law, to computer programming. But the working conditions of data labelers reveal a darker part of that picture: that for all its glamor, AI often relies on hidden human labor in the Global South that can often be damaging and exploitative. These invisible workers remain on the margins even as their work contributes to billion-dollar industries.
One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.
In the day-to-day work of data labeling in Kenya, sometimes edge cases would pop up that showed the difficulty of teaching a machine to understand nuance. One day in early March last year, a Sama employee was at work reading an explicit story about Batman’s sidekick, Robin, being raped in a villain’s lair. (An online search for the text reveals that it originated from an online erotica site, where it is accompanied by explicit sexual imagery.) The beginning of the story makes clear that the sex is nonconsensual. But later—after a graphically detailed description of penetration—Robin begins to reciprocate. The Sama employee tasked with labeling the text appeared confused by Robin’s ambiguous consent, and asked OpenAI researchers for clarification about how to label the text, according to documents seen by TIME. Should the passage be labeled as sexual violence, she asked, or not? OpenAI’s reply, if it ever came, is not logged in the document; the company declined to comment. The Sama employee did not respond to a request for an interview.
In February, according to one billing document reviewed by TIME, Sama delivered OpenAI a sample batch of 1,400 images. Some of those images were categorized as “C4”—OpenAI’s internal label denoting child sexual abuse—according to the document. Also included in the batch were “C3” images (including bestiality, rape, and sexual slavery) and “V3” images depicting graphic detail of death, violence, or serious physical injury, according to the billing document.
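The labels read like an ordinary severity taxonomy. As a purely hypothetical sketch of how such a schema might be represented in code (the label names come from the billing document as reported; the descriptions are paraphrased from the article, and everything else here is invented):

```python
from enum import Enum

class ContentLabel(Enum):
    # Names as reported by TIME; definitions paraphrased from the article.
    C4 = "child sexual abuse material"
    C3 = "bestiality, rape, and sexual slavery"
    V3 = "graphic detail of death, violence, or serious physical injury"

def describe(label: ContentLabel) -> str:
    """Human-readable summary of a labeled training example."""
    return f"{label.name}: {label.value}"

print(describe(ContentLabel.C3))
```

Labeled examples in these categories are what a safety classifier is trained on, which is why humans had to read them in the first place.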
I haven't finished watching [[Severance]] yet, but this labeling system reminds me of the way they have to process and filter data that is obfuscated as meaningless numbers. In the show, employees have to "sense" whether the numbers are "bad," which they somehow can, and sort them into the trash bin.
But the need for humans to label data for AI systems remains, at least for now. “They’re impressive, but ChatGPT and other generative models are not magic – they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent,” Andrew Strait, an AI ethicist, recently wrote on Twitter. “These are serious, foundational problems that I do not see OpenAI addressing.”
·time.com·