Mark Zuckerberg Is Not Done With Politics – Pixel Envy
Journalists do not write the headlines; I hope the editor responsible for this one is soaked with regret. Zuckerberg is not “done with politics”. He is very much playing politics. He supported some more liberal causes when it was both politically acceptable and financially beneficial, something he has continued to do today, albeit by having no discernible principles. Do not mistake this for savviness or diplomacy, either. It is political correctness for the billionaire class.
·pxlnv.com·
The CrowdStrike Outage and Market-Driven Brittleness
Redundancies are unprofitable. Being slow and careful is unprofitable. Being less embedded in and less essential and having less access to the customers’ networks and machines is unprofitable—at least in the short term, by which these companies are measured. This is true for companies like CrowdStrike. It’s also true for CrowdStrike’s customers, who also didn’t have resilience, redundancy, or backup systems in place for failures such as this because they are also an expense that affects short-term profitability.
The market rewards short-term profit-maximizing systems, and doesn’t sufficiently penalize such companies for the impact their mistakes can have. (Stock prices depress only temporarily. Regulatory penalties are minor. Class-action lawsuits settle. Insurance blunts financial losses.) It’s not even clear that the information technology industry could exist in its current form if it had to take into account all the risks such brittleness causes.
The asymmetry of costs is largely due to our complex interdependency on so many systems and technologies, any one of which can cause major failures. Each piece of software depends on dozens of others, typically written by other engineering teams sometimes years earlier on the other side of the planet. Some software systems have not been properly designed to contain the damage caused by a bug or a hack of some key software dependency.
This market force has led to the current global interdependence of systems, far and wide beyond their industry and original scope. It’s why flying planes depends on software that has nothing to do with the avionics. It’s why, in our connected internet-of-things world, we can imagine a similar bad software update resulting in our cars not starting one morning or our refrigerators failing.
Right now, the market incentives in tech are to focus on how things succeed: A company like CrowdStrike provides a key service that checks off required functionality on a compliance checklist, which makes it all about the features that they will deliver when everything is working. That’s exactly backward. We want our technological infrastructure to mimic nature in the way things fail. That will give us deep complexity rather than just surface complexity, and resilience rather than brittleness.
Netflix is famous for its Chaos Monkey tool, which intentionally causes failures to force the systems (and, really, the engineers) to be more resilient. The incentives don’t line up in the short term: It makes it harder for Netflix engineers to do their jobs and more expensive for them to run their systems. Over years, this kind of testing generates more stable systems. But it requires corporate leadership with foresight and a willingness to spend in the short term for possible long-term benefits.
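As an aside, a minimal sketch of the chaos-engineering idea described above (an illustration only; Netflix's actual Chaos Monkey targets cloud instances, and the `Service` registry here is a made-up stand-in):

```python
import random

class Service:
    """A stand-in for a running service instance (hypothetical)."""
    def __init__(self, name: str):
        self.name = name
        self.alive = True

    def kill(self) -> None:
        self.alive = False
        print(f"chaos: terminated {self.name}")

def unleash_chaos(services: list, probability: float = 0.2) -> None:
    """With the given probability, terminate one randomly chosen live
    instance, forcing the rest of the system (and its engineers) to
    prove they can tolerate the failure."""
    live = [s for s in services if s.alive]
    if live and random.random() < probability:
        random.choice(live).kill()

if __name__ == "__main__":
    fleet = [Service(f"api-{i}") for i in range(5)]
    for _ in range(10):  # e.g. once per deployment cycle
        unleash_chaos(fleet)
    print("still alive:", [s.name for s in fleet if s.alive])
```

The point is exactly the one the excerpt makes: the tool adds short-term cost and inconvenience on purpose, so that resilience gets exercised before a real failure does it for you.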
The National Highway Traffic Safety Administration crashes cars to learn what happens to the people inside. But cars are relatively simple, and keeping people safe is straightforward. Software is different. It is diverse, is constantly changing, and has to continually adapt to novel circumstances. We can’t expect that a regulation that mandates a specific list of software crash tests would suffice. Again, security and resilience are achieved through the process by which we fail and fix, not through any specific checklist. Regulation has to codify that process.
·lawfaremedia.org·
The Complex Problem Of Lying For Jobs — Ludicity

Claude summary: Key takeaway: Lying on job applications is pervasive in the tech industry due to systemic issues, but it creates an "Infinite Lie Vortex" that erodes integrity and job satisfaction. While honesty may limit short-term opportunities, it's crucial for long-term career fulfillment and ethical work environments.

Summary

  • The author responds to Nat Bennett's article against lying in job interviews, acknowledging its validity while exploring the nuances of the issue.
  • Most people in the tech industry are already lying or misrepresenting themselves on their CVs and in interviews, often through "technically true" statements.
  • The job market is flooded with candidates who are "cosplaying" at engineering, making it difficult for honest, competent individuals to compete.
  • Many employers and interviewers are not seriously engaged in engineering and overlook actual competence in favor of congratulatory conversation and superficial criteria.
  • Most tech projects are "default dead," making it challenging for honest candidates to present impressive achievements without embellishment.
  • The author suggests that escaping the "Infinite Lie Vortex" requires building financial security, maintaining low expenses, and cultivating relationships with like-minded professionals.
  • Honesty in job applications may limit short-term opportunities but leads to more fulfilling and ethical work environments in the long run.
  • The author shares personal experiences of navigating the tech job market, including instances of misrepresentation and the challenges of maintaining integrity.
  • The piece concludes with a satirical, honest version of the author's CV, highlighting the absurdity of common resume claims and the value of authenticity.
  • Throughout the article, the author maintains a cynical, humorous tone while addressing serious issues in the tech industry's hiring practices and work culture.
  • The author emphasizes the importance of self-awareness, continuous learning, and valuing personal integrity over financial gain or status.
If your model is "it's okay to lie if I've been lied to" then we're all knee deep in bullshit forever and can never escape Transaction Cost Hell.
Do I agree that entering The Infinite Lie Vortex is wise or good for you spiritually? No, not at all, just look at what it's called.
it is very common practice on the job market to have a CV that obfuscates the reality of your contribution at previous workplaces. Putting aside whether you're a professional web developer because you got paid $20 by your uncle to fix some HTML, the issue with lying lies in the intent behind it. If you have a good idea of what impression you are leaving your interlocutor with, and you are crafting statements such that the image in their head does not map to reality, then you are lying.
Unfortunately, thanks to our dear leader's masterful consummation of toxicity and incompetence, the truth of the matter is that:
  • They left their previous job due to burnout related to extensive bullying, which future employers would like to know because they would prefer to blacklist everyone involved to minimize their chances of getting the bad actor. Everyone involved thinks that they were the victim, and an employer does not have access to my direct observations, so this is not even an unreasonable strategy.
  • All their projects were failures through no fault of their own, in a market where everyone has "successfully designed and implemented" their data governance initiatives, as indicated previously.
What I am trying to say is that I currently believe that there are not enough employers who will appreciate honesty and competence for a strategy of honesty to reliably pay your rent. My concern, with regards to Nat's original article, is that the industry is so primed with nonsense that we effectively have two industries. We have a real engineering market, where people are fairly serious and gather in small conclaves (only two of which I have seen, and one of those was through a blog reader's introduction), and then a gigantic field of people that are cosplaying at engineering. The real market is large in absolute terms, but tiny relative to the number of candidates and companies out there. The fake market is all people that haven't cultivated the discipline to engineer but nonetheless want software engineering salaries and clout.
There are some companies where your interviewer is going to be a reasonable person, and there you can be totally honest. For example, it is a good thing to admit that the last project didn't go that well, because the kind of person that sees the industry for what it is, and who doesn't endorse bullshit, and who works on themselves diligently - that person is going to hear your honesty, and is probably reasonably good at detecting when candidates are revealing just enough fake problems to fake honesty, and then they will hire you. You will both put down your weapons and embrace. This is very rare. A strategy that is based on assuming this happens if you keep repeatedly engaging with random companies on the market is overwhelmingly going to result in a long, long search. For the most part, you will be engaged in a twisted, adversarial game with actors who will relentlessly try to do things like make you say a number first in case you say one that's too low.
Suffice it to say that, if you grin in just the right way and keep a straight face, there is a large class of person that will hear you say "Hah, you know, I'm just reflecting on how nice it is to be in a room full of people who are asking the right questions after all my other terrible interviews." and then they will shake your hand even as they shatter the other one patting themselves on the back at Mach 10. I know, I know, it sounds like that doesn't work but it absolutely does.
Neil Gaiman on lying:
People get hired because, somehow, they get hired. In my case I did something which these days would be easy to check, and would get me into trouble, and when I started out, in those pre-internet days, seemed like a sensible career strategy: when I was asked by editors who I'd worked for, I lied. I listed a handful of magazines that sounded likely, and I sounded confident, and I got jobs. I then made it a point of honour to have written something for each of the magazines I'd listed to get that first job, so that I hadn't actually lied, I'd just been chronologically challenged... You get work however you get work.
Nat Bennett, of Start Of This Article fame, writes: If you want to be the kind of person who walks away from your job when you're asked to do something that doesn't fit your values, you need to save money. You need to maintain low fixed expenses. Acting with integrity – or whatever it is that you value – mostly isn't about making the right decision in the moment. It's mostly about the decisions that you make leading up to that moment, that prepare you to be able to make the decision that you feel is right.
As a rough rule, if I've let my relationship with a job deteriorate to the point that I must leave, I have already waited way too long, and will be forced to move to another place that is similarly upsetting.
And that is, of course, what had gradually happened. I very painfully navigated the immigration process, trimmed my expenses, found a position that is frequently silly but tolerable for extended periods of time, and started looking for work before the new gig, mostly the same as the last gig, became unbearable. Everything other than the immigration process was burnout induced, so I can't claim that it was a clever strategy, but the net effect is that I kept sacrificing things at the altar of Being Okay With Less, and now I am in an apartment so small that I think I almost fractured my little toe banging it on the side of my bed frame, but I have the luxury of not lying.
If I had to write down what a potential exit pathway looks like, it might be:
  1. Find a job even if you must navigate the Vortex, and it doesn't matter if it's bad because there's a grace period where your brain is not soaking up the local brand of madness, i.e., when you don't even understand the local politics yet.
  2. Meet good programmers that appreciate things like mindfulness in your local area - you're going to have to figure out how to do this one.
  3. Repeat Step 1 and Step 2 on a loop, building yourself up as a person, engineer, and friend, until someone who knows you for you hires you based on your personality and values, rather than "I have seven years doing bullshit in React that clearly should have been ten raw HTML pages served off one Django server".
A CEO here told me that he asks people to self-evaluate their skill on a scale of 1 to 10, but he actually has solid measures. You're at 10 at Python if you're a core maintainer. 9 if you speak at major international conferences, etc. On that scale, I'm a 4, or maybe a 5 on my best day ever, and that's the sad truth. We'll get there one day.
I will always hate writing code that moves the overall product further from Quality. I'll write a basic feature and take shortcuts, but not the kind that we are going to build on top of, which is unattractive to employers because sacrificing the long-term health of a product is a big part of status laundering.
The only piece of software I've written that is unambiguously helpful is this dumb hack that I used to cut up episodes of the Glass Cannon Podcast into one minute segments so that my skip track button on my underwater headphones is now a janky fast forward one minute button. It took me like ten minutes to write, and is my greatest pride.
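A minimal sketch of what such a ten-minute hack might look like (my assumption; the author's actual script isn't shown): it shells out to ffmpeg's segment muxer, which splits an audio file into fixed-length pieces without re-encoding.

```python
import subprocess

def split_into_minutes(episode_path: str,
                       out_pattern: str = "segment_%03d.mp3") -> None:
    """Split an episode into one-minute files so the skip-track button
    becomes a janky fast-forward-one-minute button."""
    subprocess.run([
        "ffmpeg", "-i", episode_path,
        "-f", "segment",        # use ffmpeg's segment muxer
        "-segment_time", "60",  # one minute per output file
        "-c", "copy",           # stream copy: no re-encode, so it's fast
        out_pattern,
    ], check=True)

if __name__ == "__main__":
    # Hypothetical filename; requires ffmpeg on PATH.
    split_into_minutes("glass_cannon_episode.mp3")
```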
Have I actually worked with Google? My CV says so, but guess what, not quite! I worked on one project where the money came from Google, but we really had one call with one guy who said we were probably on track, which we definitely were not!
Did I salvage an A$1.2M project? Technically yes, but only because I forced the previous developer to actually give us his code before he quit! This is not replicable, and then the whole engineering team quit over a mandatory return to office, so the application never shipped!
Did I save a half million dollars in Snowflake expenses? CV says yes, reality says I can only repeat that trick if someone decided to set another pile of money on fire and hand me the fire extinguisher! Did I really receive departmental recognition for this? Yes, but only in that they gave me A$30 and a pat on the head and told me that a raise wasn't on the table.
Was I the most highly paid senior engineer at that company? Yes, but only because I had insider information that four people quit in the same week, and used that to negotiate a 20% raise over the next highest salary - the decision was based around executive KPIs, not my competence!
·ludic.mataroa.blog·
Toxic Gaslighting: How 3M Executives Convinced a Scientist the Forever Chemicals She Found in Human Blood Were Safe
Johnson asked Hansen to figure out whether the lab had made a mistake. Detecting trace levels of chemicals was her specialty: She had recently written a doctoral dissertation about tiny particles in the atmosphere.
Hansen didn’t want to share her results until she was certain that they were correct, so she and her team spent several weeks analyzing more blood, often in time-consuming overnight tests. All the samples appeared to be contaminated. When Hansen used a more precise method, liquid chromatography, the results left little doubt that the chemical in the Red Cross blood was PFOS. Hansen now felt obligated to update her boss. Johnson was a towering, bearded man, and she liked him: He seemed to trust her expertise, and he found something to laugh about in most conversations. But, when she shared her findings, his response was cryptic. “This changes everything,” he said. Before she could ask him what he meant, he went into his office and closed the door.
In the middle of this testing, Johnson suddenly announced that he would be taking early retirement. After he packed up his office and left, Hansen felt adrift. She was so new to corporate life that her office clothes — pleated pants and dress shirts — still felt like a costume. Johnson had always guided her research, and he hadn’t told Hansen what she should do next. She reminded herself of what he had said — that the chemical wasn’t harmful in factory workers. But she couldn’t be sure that it was harmless.
Hansen’s bosses never told her that PFOS was toxic. In the weeks after Johnson left 3M, however, she felt that she was under a new level of scrutiny. One of her superiors suggested that her equipment might be contaminated, so she cleaned the mass spectrometer and then the entire lab. Her results didn’t change. Another encouraged her to repeatedly analyze her syringes, bags and test tubes, in case they had tainted the blood. (They had not.) Her managers were less concerned about PFOS, it seemed to Hansen, than about the chance that she was wrong.
Hansen doubted herself. She was 28 and had only recently earned her Ph.D. But she continued her experiments, if only to respond to the questions of her managers. 3M bought three additional mass spectrometers, which each cost more than a car, and Hansen used them to test more blood samples. In late 1997, her new boss, Bacon, even had her fly out to the company that manufactured the machines, so that she could repeat her tests there. She studied the blood of hundreds of people from more than a dozen blood banks in various states. Each sample contained PFOS. The chemical seemed to be everywhere.
After the war, 3M hired some Manhattan Project chemists and began mass-producing chains of carbon atoms bonded to fluorine atoms. The resulting chemicals proved to be astonishingly versatile, in part because they resist oil, water and heat. They are also incredibly long-lasting, earning them the moniker “forever chemicals.”
One afternoon in 1998, a trim 3M epidemiologist named Geary Olsen arrived with several vials of blood and asked her to test them. The next morning, she read the results to him and several colleagues — positive for PFOS. As Hansen remembers it, Olsen looked triumphant. “Those samples came from my horse,” he said — and his horse certainly wasn’t eating at McDonald’s or trotting on Scotchgarded carpets. Hansen felt that he was trying to humiliate her. (Olsen did not respond to requests for comment.) What Hansen wanted to know was how PFOS was making its way into animals.
PFOS, a man-made chemical produced by her employer, really was in human blood, practically everywhere. Hansen’s team found it in Swedish blood samples from 1957 and 1971. After that, her lab analyzed blood that had been collected before 3M created PFOS. It tested negative. Apparently, fluorochemicals had entered human blood after the company started selling products that contained them. They had leached out of 3M’s sprays, coatings and factories — and into all of us.
Almost as soon as Hansen placed her first transparency on the projector, the attendees began interrogating her: Why did she do this research? Who directed her to do it? Whom did she inform of the results? The executives seemed to view her diligence as a betrayal: Her data could be damaging to the company. She remembers defending herself, mentioning Newmark’s similar work in the ’70s and trying, unsuccessfully, to direct the conversation back to her research. While the executives talked over her, Hansen noticed that DeSimone’s eyes had closed and that his chin was resting on his dress shirt. The CEO appeared to have fallen asleep. (DeSimone died in 2017. A company spokesperson did not answer my questions about the meeting.)
In 2002, when 3M announced that it would be replacing PFOS with another fluorochemical, PFBS, Hansen knew that it, too, would remain in the environment indefinitely. Still, she decided not to involve herself. She skipped over articles about the chemicals in scientific journals and newspapers, where they were starting to be linked to possible developmental, immune system and liver problems.
In the 2016 book “Secrecy at Work,” two management theorists, Jana Costas and Christopher Grey, argue that there is nothing inherently wrong or harmful about keeping secrets. Trade secrets, for example, are protected by federal and state law on the grounds that they promote innovation and contribute to the economy. The authors draw on a large body of sociological research to illustrate the many ways that information can be concealed. An organization can compartmentalize a secret by slicing it into smaller components, preventing any one person from piecing together the whole. Managers who don’t want to disclose sensitive information may employ “stone-faced silence.” Secret-keepers can form a kind of tribe, dependent on one another’s continued discretion; in this way, even the existence of a secret can be kept secret. Such techniques become pernicious, Costas and Grey write, when a company keeps a dark secret, a secret about wrongdoing.
Hansen’s superiors had given her the same explanation that they gave journalists, she finally said — that factory workers were fine, so people with lower levels would be, too. Her specialty was the detection of chemicals, not their harms. “You’ve got literally the medical director of 3M saying, ‘We studied this, there are no effects,’” she told me. “I wasn’t about to challenge that.” Her income had helped to support a family of five. Perhaps, I wondered aloud, she hadn’t really wanted to know whether her company was poisoning the public.
Jim Johnson, who is now an 81-year-old widower, lives with several dogs in a pale-yellow house in North Dakota. When I first called him, he said that he had begun researching PFOS in the ’70s. “I did a lot of the very original work on it,” he told me. He said that when he saw the chemical’s structure he understood “within 20 minutes” that it would not break down in nature. Shortly thereafter, one of his experiments revealed that PFOS was binding to proteins in the body, causing the chemical to accumulate over time. He told me that he also looked for PFOS in an informal test of blood from the general population, around the late ’70s, and was not surprised when he found it there.
Johnson said that he eventually tired of arguing with the few colleagues with whom he could speak openly about PFOS. “It was time,” he said. So he hired an outside lab to look for the chemical in the blood of 3M workers, knowing that it would also test blood bank samples for comparison — the first domino in a chain that would ultimately take the compound off the market. Oddly, he compared the head of the lab to a vending machine. “He gave me what I paid for,” Johnson said. “I knew what would happen.” Then Johnson tasked Hansen with something that he had long avoided: going beyond his initial experiments and meticulously documenting the chemical’s ubiquity. While Hansen took the heat, he took early retirement. Johnson described Hansen as though she were a vending machine, too. “She did what she was supposed to do with the tools I left her,” he said.
I pointed out that Hansen had suffered professionally and personally, and that she now feels those experiences tainted her career. “I didn’t say I was a nice guy,” Johnson replied, and laughed. After four hours, we were nearing the bottom of our bottomless coffees.
Average levels of PFOS are falling, but nearly all people have at least one forever chemical in their blood, according to the Centers for Disease Control and Prevention. “When you have a contaminated site, you can clean it up,” Elsie Sunderland, an environmental chemist at Harvard University, told me. “When you ubiquitously introduce a toxicant at a global scale, so that it’s detectable in everyone ... we’re reducing public health on an incredibly large scale.” Once everyone’s blood is contaminated, there is no control group with which to compare, making it difficult to establish responsibility.
At least 45% of U.S. tap water is estimated to contain one or more forever chemicals, and one drinking water expert told me that the cost of removing them all would likely reach $100 billion.
In 2022, 3M said that it would stop making PFAS and would “work to discontinue the use of PFAS across its product portfolio,” by the end of 2025 — a pledge that it called “another example of how we are positioning 3M for continued sustainable growth.” But it acknowledged that more than 16,000 of its products still contained PFAS.
·propublica.org·
How McKinsey Destroyed the Middle Class - The Atlantic

The rise of management consulting firms like McKinsey played a pivotal role in disempowering the American middle class by promoting corporate restructuring that concentrated power and wealth in the hands of elite managers while stripping middle managers and workers of their decision-making roles, job security, and opportunities for career advancement.

Key topics:

  • Management consulting's role in reshaping corporate America
  • The decline of the middle class and the rise of corporate elitism
  • McKinsey's influence on corporate restructuring and inequality
  • The shift from lifetime employment to precarious jobs
  • The erosion of corporate social responsibility
  • The role of management consulting in perpetuating economic inequality
what consequences has the rise of management consulting had for the organization of American business and the lives of American workers? The answers to these questions put management consultants at the epicenter of economic inequality and the destruction of the American middle class.
Managers do not produce goods or deliver services. Instead, they plan what goods and services a company will provide, and they coordinate the production workers who make the output. Because complex goods and services require much planning and coordination, management (even though it is only indirectly productive) adds a great deal of value. And managers as a class capture much of this value as pay. This makes the question of who gets to be a manager extremely consequential.
In the middle of the last century, management saturated American corporations. Every worker, from the CEO down to production personnel, served partly as a manager, participating in planning and coordination along an unbroken continuum in which each job closely resembled its nearest neighbor.
Even production workers became, on account of lifetime employment and workplace training, functionally the lowest-level managers. They were charged with planning and coordinating the development of their own skills to serve the long-run interests of their employers.
At McDonald’s, Ed Rensi worked his way up from flipping burgers in the 1960s to become CEO. More broadly, a 1952 report by Fortune magazine found that two-thirds of senior executives had more than 20 years’ service at their current companies.
Top executives enjoyed commensurately less control and captured lower incomes. This democratic approach to management compressed the distribution of income and status. In fact, a mid-century study of General Motors published in the Harvard Business Review—completed, in a portent of what was to come, by McKinsey’s Arch Patton—found that from 1939 to 1950, hourly workers’ wages rose roughly three times faster than elite executives’ pay. The management function’s wide diffusion throughout the workforce substantially built the mid-century middle class.
The earliest consultants were engineers who advised factory owners on measuring and improving efficiency at the complex factories required for industrial production. The then-leading firm, Booz Allen, did not achieve annual revenues of $2 million until after the Second World War. McKinsey, which didn’t hire its first Harvard M.B.A. until 1953, retained a diffident and traditional ethos.
A new ideal of shareholder primacy, powerfully championed by Milton Friedman in a 1970 New York Times Magazine article entitled “The Social Responsibility of Business is to Increase its Profits,” gave the newly ambitious management consultants a guiding purpose. According to this ideal, in language eventually adopted by the Business Roundtable, “the paramount duty of management and of boards of directors is to the corporation’s stockholders.” During the 1970s, and accelerating into the ’80s and ’90s, the upgraded management consultants pursued this duty by expressly and relentlessly taking aim at the middle managers who had dominated mid-century firms, and whose wages weighed down the bottom line.
Management consultants thus implemented and rationalized a transformation in the American corporation. Companies that had long affirmed express “no layoff” policies now took aim at what the corporate raider Carl Icahn, writing in The New York Times in the late 1980s, called “corporate bureaucracies” run by “incompetent” and “inbred” middle managers. They downsized in response not to particular business problems but rather to a new managerial ethos and methods; they downsized when profitable as well as when struggling, and during booms as well as busts.
Downsizing was indeed wrenching. When IBM abandoned lifetime employment in the 1990s, local officials asked gun-shop owners around its headquarters to close their stores while employees absorbed the shock.
In some cases, downsized employees have been hired back as subcontractors, with no long-term claim on the companies and no role in running them. When IBM laid off masses of workers in the 1990s, for example, it hired back one in five as consultants. Other corporations were built from scratch on a subcontracting model. The clothing brand United Colors of Benetton has only 1,500 employees but uses 25,000 workers through subcontractors.
Shift from lifetime employment to reliance on outsourced labor; decline in unions
The shift from permanent to precarious jobs continues apace. Buttigieg’s work at McKinsey included an engagement for Blue Cross Blue Shield of Michigan, during a period when it considered cutting up to 1,000 jobs (or 10 percent of its workforce). And the gig economy is just a high-tech generalization of the sub-contractor model. Uber is a more extreme Benetton; it deprives drivers of any role in planning and coordination, and it has literally no corporate hierarchy through which drivers can rise up to join management.
In effect, management consulting is a tool that allows corporations to replace lifetime employees with short-term, part-time, and even subcontracted workers, hired under ever more tightly controlled arrangements, who sell particular skills and even specified outputs, and who manage nothing at all.
the managerial control stripped from middle managers and production workers has been concentrated in a narrow cadre of executives who monopolize planning and coordination. Mid-century, democratic management empowered ordinary workers and disempowered elite executives, so that a bad CEO could do little to harm a company and a good one little to help it.
Whereas at mid-century a typical large-company CEO made 20 times a production worker’s income, today’s CEOs make nearly 300 times as much. In a recent year, the five highest-paid employees of the S&P 1500 (7,500 elite executives overall) obtained income equal to about 10 percent of the total profits of the entire S&P 1500.
as Kiechel put it dryly, “we are not all in this together; some pigs are smarter than other pigs and deserve more money.” Consultants seek, in this way, to legitimate both the job cuts and the explosion of elite pay. Properly understood, the corporate reorganizations were, then, not merely technocratic but ideological.
corporate reorganizations have deprived companies of an internal supply of managerial workers. When restructurings eradicated workplace training and purged the middle rungs of the corporate ladder, they also forced companies to look beyond their walls for managerial talent—to elite colleges, business schools, and (of course) to management-consulting firms. That is to say: The administrative techniques that management consultants invented created a huge demand for precisely the services that the consultants supply.
Consulting, like law school, is an all-purpose status giver—“low in risk and high in reward,” according to the Harvard Crimson. McKinsey also hopes that its meritocratic excellence will legitimate its activities in the eyes of the broader world. Management consulting, Kiechel observed, acquired its power and authority not from “silver-haired industry experience but rather from the brilliance of its ideas and the obvious candlepower of the people explaining them, even if those people were twenty-eight years old.”
A deeper objection to Buttigieg’s association with McKinsey concerns not whom the firm represents but the central role the consulting revolution has played in fueling the enormous economic inequalities that now threaten to turn the United States into a caste society.
Meritocrats like Buttigieg changed not just corporate strategies but also corporate values.
GM may aspire to build good cars; IBM, to make typewriters, computers, and other business machines; and AT&T, to improve communications. Executives who rose up through these companies, on the mid-century model, were embedded in their firms and embraced these values, so that they might even have come to view profits as a salutary side effect of running their businesses well.
When management consulting untethered executives from particular industries or firms and tied them instead to management in general, it also led them to embrace the one thing common to all corporations: making money for shareholders. Executives raised on the new, untethered model of management aim exclusively and directly at profit: their education, their career arc, and their professional role conspire to isolate them from other workers and train them single-mindedly on the bottom line.
American democracy, the left believes, cannot be rejuvenated by persuading elites to deploy their excessive power somehow more benevolently. Instead, it requires breaking the stranglehold that elites have on our economics and politics, and reempowering everyone else.
·archive.is·
Omegle's Rise and Fall - A Vision for Internet Connection
As much as I wish circumstances were different, the stress and expense of this fight – coupled with the existing stress and expense of operating Omegle, and fighting its misuse – are simply too much. Operating Omegle is no longer sustainable, financially nor psychologically. Frankly, I don’t want to have a heart attack in my 30s. The battle for Omegle has been lost, but the war against the Internet rages on. Virtually every online communication service has been subject to the same kinds of attack as Omegle; and while some of them are much larger companies with much greater resources, they all have their breaking point somewhere. I worry that, unless the tide turns soon, the Internet I fell in love with may cease to exist, and in its place, we will have something closer to a souped-up version of TV – focused largely on passive consumption, with much less opportunity for active participation and genuine human connection.
I’ve done my best to weather the attacks, with the interests of Omegle’s users – and the broader principle – in mind. If something as simple as meeting random new people is forbidden, what’s next? That is far and away removed from anything that could be considered a reasonable compromise of the principle I outlined. Analogies are a limited tool, but a physical-world analogy might be shutting down Central Park because crime occurs there – or perhaps more provocatively, destroying the universe because it contains evil. A healthy, free society cannot endure when we are collectively afraid of each other to this extent.
In recent years, it seems like the whole world has become more ornery. Maybe that has something to do with the pandemic, or with political disagreements. Whatever the reason, people have become faster to attack, and slower to recognize each other’s shared humanity. One aspect of this has been a constant barrage of attacks on communication services, Omegle included, based on the behavior of a malicious subset of users. To an extent, it is reasonable to question the policies and practices of any place where crime has occurred. I have always welcomed constructive feedback; and indeed, Omegle implemented a number of improvements based on such feedback over the years. However, the recent attacks have felt anything but constructive. The only way to please these people is to stop offering the service. Sometimes they say so, explicitly and avowedly; other times, it can be inferred from their act of setting standards that are not humanly achievable. Either way, the net result is the same.
I didn’t really know what to expect when I launched Omegle. Would anyone even care about some Web site that an 18 year old kid made in his bedroom in his parents’ house in Vermont, with no marketing budget? But it became popular almost instantly after launch, and grew organically from there, reaching millions of daily users. I believe this had something to do with meeting new people being a basic human need, and with Omegle being among the best ways to fulfill that need. As the saying goes: “If you build a better mousetrap, the world will beat a path to your door.” Over the years, people have used Omegle to explore foreign cultures; to get advice about their lives from impartial third parties; and to help alleviate feelings of loneliness and isolation. I’ve even heard stories of soulmates meeting on Omegle, and getting married. Those are only some of the highlights. Unfortunately, there are also lowlights. Virtually every tool can be used for good or for evil, and that is especially true of communication tools, due to their innate flexibility. The telephone can be used to wish your grandmother “happy birthday”, but it can also be used to call in a bomb threat. There can be no honest accounting of Omegle without acknowledging that some people misused it, including to commit unspeakably heinous crimes.
As a young teenager, I couldn’t just waltz onto a college campus and tell a student: “Let’s debate moral philosophy!” I couldn’t walk up to a professor and say: “Tell me something interesting about microeconomics!” But online, I was able to meet those people, and have those conversations. I was also an avid Wikipedia editor; I contributed to open source software projects; and I often helped answer computer programming questions posed by people many years older than me. In short, the Internet opened the door to a much larger, more diverse, and more vibrant world than I would have otherwise been able to experience; and enabled me to be an active participant in, and contributor to, that world. All of this helped me to learn, and to grow into a more well-rounded person. Moreover, as a survivor of childhood rape, I was acutely aware that any time I interacted with someone in the physical world, I was risking my physical body. The Internet gave me a refuge from that fear. I was under no illusion that only good people used the Internet; but I knew that, if I said “no” to someone online, they couldn’t physically reach through the screen and hold a weapon to my head, or worse. I saw the miles of copper wires and fiber-optic cables between me and other people as a kind of shield – one that empowered me to be less isolated than my trauma and fear would have otherwise allowed.
·omegle.com·
Grammy Chief Harvey Mason Clarifies New AI Rule: We’re Not Giving an Award to a Computer
The full wording of the ruling follows: The GRAMMY Award recognizes creative excellence. Only human creators are eligible to be submitted for consideration for, nominated for, or win a GRAMMY Award. A work that contains no human authorship is not eligible in any Categories. A work that features elements of A.I. material (i.e., material generated by the use of artificial intelligence technology) is eligible in applicable Categories; however: (1) the human authorship component of the work submitted must be meaningful and more than de minimis; (2) such human authorship component must be relevant to the Category in which such work is entered (e.g., if the work is submitted in a songwriting Category, there must be meaningful and more than de minimis human authorship in respect of the music and/or lyrics; if the work is submitted in a performance Category, there must be meaningful and more than de minimis human authorship in respect of the performance); and (3) the author(s) of any A.I. material incorporated into the work are not eligible to be nominees or GRAMMY recipients insofar as their contribution to the portion of the work that consists of such A.I material is concerned. De minimis is defined as lacking significance or importance; so minor as to merit disregard.
the human portion of the composition, or the performance, is the only portion that can be awarded or considered for a Grammy Award. So if an AI modeling system or app built a track — ‘wrote’ lyrics and a melody — that would not be eligible for a composition award. But if a human writes a track and AI is used to voice-model, or create a new voice, or use somebody else’s voice, the performance would not be eligible, but the writing of the track and the lyric or top line would be absolutely eligible for an award.
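Read as decision logic, the rule reduces to a small per-category predicate. A toy restatement of the three quoted conditions (my paraphrase, not the Recording Academy's actual process):

```python
def grammy_eligible(human_authorship: bool,
                    human_part_more_than_de_minimis: bool,
                    human_part_relevant_to_category: bool) -> bool:
    """Toy restatement of the quoted rule for works containing AI material."""
    if not human_authorship:
        return False  # no human authorship: ineligible in all categories
    # The human component must be meaningful (more than de minimis) AND
    # relevant to the category entered, e.g. the writing for a songwriting
    # category, the performance for a performance category.
    return human_part_more_than_de_minimis and human_part_relevant_to_category

# AI 'wrote' the lyrics and melody, entered in a songwriting category:
assert grammy_eligible(True, True, False) is False
# Human-written track with an AI-modeled voice, same songwriting category:
assert grammy_eligible(True, True, True) is True
```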
·variety.com·
The $2 Per Hour Workers Who Made ChatGPT Safer
The story of the workers who made ChatGPT possible offers a glimpse into the conditions in this little-known part of the AI industry, which nevertheless plays an essential role in the effort to make AI systems safe for public consumption. “Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” says the Partnership on AI, a coalition of AI organizations to which OpenAI belongs. “This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind.”
This reminds me of [[On the Social Media Ideology - Journal 75 September 2016 - e-flux]]:<br>> Platforms are not stages; they bring together and synthesize (multimedia) data, yes, but what is lacking here is the (curatorial) element of human labor. That’s why there is no media in social media. The platforms operate because of their software, automated procedures, algorithms, and filters, not because of their large staff of editors and designers. Their lack of employees is what makes current debates in terms of racism, anti-Semitism, and jihadism so timely, as social media platforms are currently forced by politicians to employ editors who will have to do the all-too-human monitoring work (filtering out ancient ideologies that refuse to disappear).
Computer-generated text, images, video, and audio will transform the way countless industries do business, the most bullish investors believe, boosting efficiency everywhere from the creative arts, to law, to computer programming. But the working conditions of data labelers reveal a darker part of that picture: that for all its glamor, AI often relies on hidden human labor in the Global South that can often be damaging and exploitative. These invisible workers remain on the margins even as their work contributes to billion-dollar industries.
One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.
In the day-to-day work of data labeling in Kenya, sometimes edge cases would pop up that showed the difficulty of teaching a machine to understand nuance. One day in early March last year, a Sama employee was at work reading an explicit story about Batman’s sidekick, Robin, being raped in a villain’s lair. (An online search for the text reveals that it originated from an online erotica site, where it is accompanied by explicit sexual imagery.) The beginning of the story makes clear that the sex is nonconsensual. But later—after a graphically detailed description of penetration—Robin begins to reciprocate. The Sama employee tasked with labeling the text appeared confused by Robin’s ambiguous consent, and asked OpenAI researchers for clarification about how to label the text, according to documents seen by TIME. Should the passage be labeled as sexual violence, she asked, or not? OpenAI’s reply, if it ever came, is not logged in the document; the company declined to comment. The Sama employee did not respond to a request for an interview.
In February, according to one billing document reviewed by TIME, Sama delivered OpenAI a sample batch of 1,400 images. Some of those images were categorized as “C4” — OpenAI’s internal label denoting child sexual abuse — according to the document. Also included in the batch were “C3” images (including bestiality, rape, and sexual slavery) and “V3” images depicting graphic detail of death, violence or serious physical injury, according to the billing document.
I haven't finished watching [[Severance]] yet but this labeling system reminds me of the way they have to process and filter data that is obfuscated as meaningless numbers. In the show, employees have to "sense" whether the numbers are "bad," which they can, somehow, and sort it into the trash bin.
But the need for humans to label data for AI systems remains, at least for now. “They’re impressive, but ChatGPT and other generative models are not magic – they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent,” Andrew Strait, an AI ethicist, recently wrote on Twitter. “These are serious, foundational problems that I do not see OpenAI addressing.”
·time.com·