
In mass media’s death throes


The New York Times et al wish Joe Biden would go gentle into that good night. I wish mass media would instead. Here is a post from a thread:

In this defensive New Yorker reaction to Joe Biden (finally) criticizing the press that has been criticizing him, Jay Caspian Kang shares an important insight about the falling power of the press. But I come to a different conclusion.

Kang says that media are weakened and that’s what makes it easy for Trump and now Biden alike to attack them. I say what it shows instead is that as media realize they have lost the ability to set the agenda, their response is to shout louder and more often. That is what we see every day in The New York Times.

In The Gutenberg Parenthesis, I chronicle — nay, celebrate — the death of mass media and the insult of the idea of the mass. Kang makes me see that I next need to examine mass media’s behavior in their death throes. They are not accustomed to being talked back to, by their subjects or by the public. They respond with resentment. They dig in. 

Journalists have never been good at listening. That is why Carrie Brown and I started a program in Engagement Journalism at CUNY (now moving to Montclair State): to teach journalists to listen. In all the wagon-circling by The Times’ Kahn and Sulzberger, The Post’s Lewis, The New Yorker’s Remnick, CNN’s Zaslav, we see a failure from the top to listen to and learn from criticism.

Kang likens Trump/right-wing and Biden/liberal press criticism, but they could not be more different. Trump et al want to destroy the institutions of journalism, education, and government itself. Biden and liberals wish to improve the press. We are begging for a better Times. But The Times can't hear that over the sound of wagons circling.

As I also write in Gutenberg, we find ourselves in a paradoxical time when the insurrectionists formerly known as “conservatives” try to destroy the institutions they once wished to conserve, putting progressives in the position not of reforming but instead of protecting those institutions. 

When I criticize The Times — and Kang quotes me doing so — it pains me terribly, for I have devoted my life to journalism and long held up The Times as our standard. No more. It is failing journalism & democracy. I fear the incumbents may be beyond reform & require replacement.

By the way, the incumbents of journalism know this. That is why they invest in lobbyists to pass legislation in New York, California, Washington (State and next DC), Canada, and Australia to benefit themselves at the expense of the media — community, nonprofit, startup, digital — that would replace them. More on that another day. 

So I am glad that Biden is finally criticizing The Times and its mass-media peers, if not yet by name. I am glad he rejects the fair-weather platformed pundits, moneyed executives (corporations), and elites (Clooney) who reject him now. In trying to dismiss him, they only make him more progressive.

The New Yorker headline over Kang’s column calls Biden’s criticism of the press “cynical.” It is anything but. It is an overdue and proper response to the cynical exercise of — as Kang makes me understand — the dying power of The Times et al. No one elected Sulzberger, Kahn, Lewis, Zaslav — or Remnick — to run the nation. Millions of us voted for Biden to do so. 

Some on the socials insist that The Times etc. want Trump to win. I’ve said that is a simplistic conspiracy theory. I’ve thought they want chaos: something for them to cover. Now Kang makes me think instead they want to recapture their lost agenda and influence: their power. 

Kang closes: “If Biden believes he is the last chance for democracy in America, perhaps he should start acting like it.” That should be said of those in charge of America’s legacy mass media: If you think you can save democracy, then start acting like it.

The post In mass media’s death throes appeared first on BuzzMachine.


Reality bites

"Americans do not just disagree with each other, they live in different realities," Peter Baker says. Ask Herman Cain how that worked out for him.

Pop Culture


A week and a half ago, Goldman Sachs put out a 31-page report (titled "Gen AI: Too Much Spend, Too Little Benefit?") that includes some of the most damning literature on generative AI I've ever seen. And yes, that sound you hear is the slow deflation of the bubble I've been warning you about since March.

The report covers AI's productivity benefits (which Goldman suggests are likely to be limited), AI's returns (which are likely to be significantly more limited than anticipated), and AI's power demands (which are likely so significant that utility companies will have to spend nearly 40% more in the next three years to keep up with the demand from hyperscalers like Google and Microsoft).

This report is so significant because Goldman Sachs, like any investment bank, does not care about anyone's feelings unless doing so is profitable. It will gladly hype anything if it thinks it'll make a buck. Back in May, Goldman claimed that AI (not just generative AI) was "showing very positive signs of eventually boosting GDP and productivity," even though that same report buried within it constant reminders that AI had yet to impact productivity growth, and stated that only about 5% of companies report using generative AI in regular production.

For Goldman to suddenly turn on the AI movement suggests that it’s extremely anxious about the future of generative AI, with almost everybody agreeing on one core point: that the longer this tech takes to make people money, the more money it's going to need to make.

The report includes an interview (page 4) with economist Daron Acemoglu, an Institute Professor at MIT, who published a paper back in May called "The Simple Macroeconomics of AI" arguing that "the upside to US productivity and, consequently, GDP growth from generative AI will likely prove much more limited than many forecasters expect." A month has only made Acemoglu more pessimistic: he now declares that "truly transformative changes won't happen quickly and few – if any – will likely occur within the next 10 years," and that generative AI's ability to affect global productivity is low because "many of the tasks that humans currently perform...are multi-faceted and require real-world interaction, which AI won't be able to materially improve anytime soon."

What makes this interview — and really, this paper — so remarkable is how thoroughly and aggressively it attacks every bit of marketing collateral the AI movement has. Acemoglu questions the belief that AI models will simply get more powerful as we throw more data and GPU capacity at them, and asks a pointed question: what does it mean to "double AI's capabilities"? How does that actually make something like, say, a customer service rep better?

And this is a specific problem with the AI fantasists' spiel. They rely heavily on the idea that not only will these large language models (LLMs) get more powerful, but that getting more powerful will somehow grant them the power to do...something. As Acemoglu says, "what does it mean to double AI's capabilities?"

No, really, what does "more" actually mean? While one might argue that it'll mean faster generative processes, there really is no barometer for what "better" looks like, and perhaps that's why ChatGPT, Claude and other LLMs have yet to take a leap beyond being able to generate stuff. Anthropic's Claude LLM might be "best-in-class," but that only means that it's faster and more accurate, which is cool but not the future or revolutionary or even necessarily good.

I should add that these are the questions I – and other people writing about AI – should've been asking the whole time. Generative AI generates outputs based on text-based inputs and requests, requests that can be equally specific and intricate, yet the answer is always, as obvious as it sounds, generated fresh, meaning that there is no actual "knowledge" or, indeed, "intelligence" operating in any part of the process. As a result, it's easy to see how this gets better, but far, far harder – if not impossible – to see how generative AI leads any further than where we're already at. 

How does GPT — a transformer-based model that generates answers probabilistically (as in, choosing whatever the next part of the generation is most likely to be) based entirely on training data — do anything more than generate paragraphs of occasionally-accurate text? How do any of these models even differentiate themselves when most of them are trained on the same training data that they're already running out of?
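To make "probabilistically" concrete, here is a toy sketch in Python. Every token and number in it is invented for illustration, and a real LLM is a transformer trained on an enormous corpus rather than a lookup table of bigram counts, but the core loop is the same: sample a likely next token, append it, repeat. At no point is anything "known."

    import random

    # Toy bigram "language model": for each token, a probability
    # distribution over what comes next. All values are invented for
    # illustration -- a real model learns these from training data.
    next_token_probs = {
        "the":    {"cat": 0.5, "dog": 0.3, "report": 0.2},
        "cat":    {"sat": 0.7, "ran": 0.3},
        "dog":    {"sat": 0.4, "ran": 0.6},
        "report": {"says": 1.0},
    }

    def generate(token, steps=3):
        out = [token]
        for _ in range(steps):
            dist = next_token_probs.get(token)
            if dist is None:  # no learned continuation: stop generating
                break
            tokens, weights = zip(*dist.items())
            token = random.choices(tokens, weights=weights)[0]
            out.append(token)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat" -- plausible, never "understood"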

The training data crisis is one that doesn’t get enough attention, but it’s sufficiently dire that it has the potential to halt (or dramatically slow) any AI development in the near future. As one paper presented at the Computer Vision and Pattern Recognition conference found, in order to achieve a linear improvement in model performance, you need an exponentially large amount of data.

Or, put another way, each additional step becomes increasingly (and exponentially) more expensive to take. This implies a steep financial cost — not merely in obtaining the data, but also in the compute required to process it — with Anthropic CEO Dario Amodei saying that the AI models currently in development will cost as much as $1bn to train, and that within three years we may see models that cost as much as "ten or a hundred billion" dollars, or roughly three times the GDP of Estonia.
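As a quick back-of-the-envelope sketch of what that scaling relationship implies, assume (purely for illustration) that benchmark score grows with the logarithm of training-set size:

    # Assume (for illustration only) score = log10(training examples).
    # Inverting that: examples = 10 ** score, so every additional point
    # of score costs ten times the data -- and compute -- of the last one.
    for score in range(1, 6):
        examples = 10 ** score
        print(f"score {score}: ~{examples:,} training examples")
    # score 1 needs ~10; score 5 needs ~100,000.
    # Linear gains, exponential cost.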

Acemoglu doubts that LLMs can become superintelligent, and warns that even his most conservative estimates of productivity gains "may turn out to be too large if AI models prove less successful in improving upon more complex tasks." And I think that's really the root of the problem.

All of this excitement, every second of breathless hype has been built on this idea that the artificial intelligence industry – led by generative AI – will somehow revolutionize everything from robotics to the supply chain, despite the fact that generative AI is not actually going to solve these problems because it isn't built to do so. 

While Acemoglu has some positive things to say — for example, that AI models could be trained to help scientists conceive of and test new materials (which happened last year) — his general verdict is quite harsh: that using generative AI and "too much automation too soon could create bottlenecks and other problems for firms that no longer have the flexibility and trouble-shooting capabilities that human capital provides." In essence, replacing humans with AI might break everything if you're one of those bosses that doesn't actually know what the fuck it is they're talking about.


The report also includes a palate-cleanser for the quirked-up AI hype fiend on page 6, where Goldman Sachs' Joseph Briggs argues that generative AI will "likely lead to significant economic upside" based — and I shit you not — entirely on the idea that AI will replace workers in some jobs and then allow them to get jobs in other fields. Briggs also argues that "the full automation of AI exposed tasks that are likely to occur over a longer horizon could generate significant cost savings," which assumes that generative AI (or AI itself) will actually automate these tasks.

I should also add that, unlike every other interviewee in the report, Briggs continually mixes up AI and generative AI, and at one point suggests that "recent generative AI advances" are "foreshadowing the emergence of a 'superintelligence.'"

I included this part of the report because sometimes — very rarely — I get somebody suggesting I'm not considering both sides. The reason I don't generally include both sides of this argument is that the AI hype side generally makes arguments based on the assumption that things will simply happen, such as the idea that a transformer model that probabilistically generates the next part of a sentence or a picture will somehow gain sentience.

Francois Chollet — an AI researcher at Google — recently argued that LLMs can't lead to AGI, explaining (in detail) that models like GPT are simply not capable of the kind of reasoning and theorizing that makes a human brain work. Chollet also notes that even models specifically built to complete the tasks of the Abstraction & Reasoning Corpus (a benchmark test for AI skills and true "intelligence") are only doing so because they've been fed millions of datapoints of people solving the test, which is kind of like measuring somebody's IQ based on them studying really hard to complete an IQ test, except even dumber.
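Here is a toy sketch of why that kind of benchmark contamination flatters a model. The "model" below simply memorizes its training pairs (all data invented for illustration), which is enough to ace any question that leaked into training and to fail any genuinely new one:

    # A "model" that memorizes question->answer pairs from training.
    training_data = {"2+2": "4", "capital of France": "Paris"}

    def model(question):
        return training_data.get(question, "I don't know")

    def accuracy(pairs):
        return sum(model(q) == a for q, a in pairs) / len(pairs)

    seen   = [("2+2", "4"), ("capital of France", "Paris")]  # leaked into training
    unseen = [("5+7", "12"), ("capital of Peru", "Lima")]    # genuinely novel

    print(accuracy(seen))    # 1.0 -- looks like intelligence
    print(accuracy(unseen))  # 0.0 -- it was memorization all along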

The reason I'm suddenly bringing up superintelligences — or AGI (artificial general intelligence) — is because throughout every defense of generative AI is a deliberate attempt to get around the problem that generative AI doesn't really automate many tasks. While it's good at generating answers or creating things based on a request, there's no real interaction with the task, or the person giving it the task, or consideration of what the task needs at all — just the abstraction of "thing said" to "output generated." 

Tasks like taking someone's order and relaying it to the kitchen at a fast food restaurant might seem elementary to most people (I won't write easy, working in fast food sucks), but it isn't for an AI model that generates answers without really understanding the meaning of any of the words. Last year, Wendy's announced that it would integrate its generative "FreshAI" ordering system into some restaurants, and a few weeks ago it was revealed that the system requires human intervention on 14% of the orders. On Reddit, one user noted that Wendy's AI regularly required three attempts to understand them, and would cut them off if they weren't speaking fast enough.

White Castle, which implemented a similar system in partnership with Samsung and SoundHound, fared little better, with 10% of orders requiring human intervention. Last month, McDonald’s discontinued its own AI ordering system — which it built with IBM and deployed to more than 100 restaurants — likely because it just wasn’t very good, with one customer rung up for literally hundreds of chicken nuggets. However, to be clear, McDonald’s system wasn’t based on generative AI.

If nothing else, this illustrates the disconnect between those building AI systems, and how much (or, rather, how little) they understand the jobs they wish to eliminate. A little humility goes a long way.      

Another thing to note is that, on top of generative AI cocking up these orders, Wendy's still requires human beings to make the goddamn food. Despite all of this hype, all of this media attention, all of this incredible investment, the supposed "innovations" don't even seem capable of replacing the jobs that they're meant to — not that I think they should, just that I'm tired of being told that this future is inevitable.

The reality is that generative AI isn't good at replacing jobs so much as commoditizing distinct acts of labor, and, in the process, devaluing the early creative jobs that help people build portfolios to advance in their industries.

The freelancers having their livelihoods replaced by bosses using generative AI aren't being "replaced" so much as they're being shown how little respect many bosses have for their craft, or for the customer it allegedly serves. Copy editors and concept artists provide far more valuable work than any generative AI can, yet an economy dominated by managers who don't appreciate (or participate in) labor means that these jobs are under assault from LLMs pumping out stuff that all looks and sounds the same, to the point that copywriters are now being paid to help them sound more human.

One of the fundamental misunderstandings of the bosses replacing these workers with generative AI is that you are not just asking for a thing, but outsourcing the risk and responsibility. When I hire an artist to make a logo, my expectation is that they'll listen to me, then add their own flair, then we'll go back and forth with drafts until we have something I like. I'm paying them not just for their time, their years learning their craft and the output itself, but so that the ultimate burden of production is not my own, and their experience means that they can adapt to circumstances that I might not have thought of. These are not things that you can train in a dataset, because they're derived from experiences inside and outside of the creative process.

While one can "teach" a generative AI what a billion images look like, AI does not get hand cramps, or a call at 8PM saying that it "needs it to pop more." It does not have moods, nor can it infer them from written or visual media, because human emotions are extremely weird, as are our moods, our bodies, and our general existences. I realize all of this is a little flowery, but even the most mediocre copy ever written is, on some level, a collection of experiences. And fully replacing any creative is so very unlikely if you're doing so based on copying a million pieces of someone else's homework.


The most fascinating part of the report (page 10) is an interview with Jim Covello, Goldman Sachs' Head of Global Equity Research. Covello isn't a name you'll have heard unless you are, for whatever reason, a big semiconductor-head, but he's consistently been on the right side of history: named the top semiconductor analyst by II Research for years, he successfully caught downturns in fundamentals at multiple major chip firms long before others did.

And Jim, in no uncertain terms, thinks that the generative AI bubble is full of shit.

Covello believes that the combined expenditure of all parts of the generative AI boom — data centers, utilities and applications — will cost a trillion dollars in the next several years alone, and asks one very simple question: "what trillion dollar problem will AI solve?" He notes that "replacing low-wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions [he's] witnessed in the last thirty years."

One particular myth Covello dispels is comparing generative AI "to the early days of the internet," noting that "even in its infancy, the internet was a low-cost technology solution that enabled e-commerce to replace costly incumbent solutions," and that "AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn't designed to do."

Covello also dismisses the suggestion that tech starts off expensive and gets cheaper over time as "revisionist history," adding that "the tech world is too complacent in the assumption that AI costs will decline substantially over time." He specifically notes that the only reason Moore's law was capable of enabling smaller, faster, cheaper chips was that competitors like AMD forced Intel (and other companies) to compete — a thing that doesn't really seem to be happening with Nvidia, which has a near-stranglehold on the GPUs required to handle generative AI.

While there are companies making GPUs aimed at the AI market (especially in China, where US trade restrictions prevent local companies from buying high-powered cards like the A100 for fears they’ll be diverted to the military), they're not doing so at the same scale, and Covello notes that "the market is too complacent about the certainty of cost declines." 

He also notes that the costs are so high that even if they were to come down, they'd have to do so dramatically, and that the costs of the early days of the internet (when businesses often relied on $64,000 servers from Sun Microsystems and there was no AWS, Linode, or Azure) "pale in comparison" to those of AI, and that's even without including the replacement of the power grid, a necessity to keep this boom going.

I could probably write up Covello's entire interview, because it's nasty. Covello adds that the common claim that nobody believed smartphones would be big is false: he sat through hundreds of presentations in the early 2000s, many of them including roadmaps that accurately predicted how smartphones rolled out, and no such roadmap (or killer app) for AI has been found.

He notes that big tech companies now have no choice but to engage in the AI arms race given the hype (which will continue the trend of massive spending), and he believes that there are "low odds of AI-related revenue expansion," in part because he doesn't believe that generative AI will make workers smarter, just more capable of finding better information faster, and that any advantages that generative AI gives you can be "arbitraged away" because the tech can be used everywhere, and thus you can't, as a company, raise prices.

In plain English: generative AI isn't making any money for anybody because it doesn't actually make the companies that use it any extra money. Efficiency is useful, but it is not company-defining. He adds that hyperscalers like Google and Microsoft will "also garner incremental revenue" from AI — not the huge returns they’re perhaps counting on, given their vast AI-related expenditure over the past two years.

This is damning for many reasons, chief of which is that the biggest thing that artificial intelligence is meant to do is be smart, and make you smarter. Being able to access information faster might make you better at your job, but that's efficiency rather than allowing you to do something new. Generative AI isn't creating new jobs, it isn't creating new ways to do your job, and it isn't making anybody any money — and the path to boosting revenues is unclear.

Covello ends with one important and brutal note: that the more time that passes without significant AI applications, the more challenging "the AI story will become," with corporate profitability likely floating this bubble as long as it takes for the tech industry to hit a more difficult economic period.

He also adds his own prediction — "investor enthusiasm may begin to fade" if "important use cases don't start to become more apparent in the next 12-18 months."

I think he's being optimistic.


Hi there. Did you know that I also do a podcast called Better Offline? If not, please immediately download it on your podcast app. Follow the show. Download every episode. Share with your friends, and demand they do the same.


While I won't recount the rest of the report, one theme brought up repeatedly is the idea that America's power grid is literally not ready for generative AI. In an interview with former Microsoft VP of Energy Brian Janous (page 15), the report details numerous nightmarish problems that the growth of generative AI is creating for the power grid, such as:

  • Hyperscalers like Microsoft, Amazon and Google are on track to grow their power demands from a few hundred megawatts in the early 2010s to a few gigawatts by 2030, enough to power multiple American cities.
  • The centralization of data center operations for multiple big tech companies in Northern Virginia may potentially require a doubling of grid capacity over the next decade.
  • Utilities have not experienced a period of load growth — as in a significant increase in power draw — in nearly 20 years, which is a problem because power infrastructure is slow to build and involves onerous permitting and bureaucratic measures to make sure it's done properly.
  • The total capacity of power projects waiting to connect to the grid grew 30% in the last year and wait times are 40-70 months.
  • Expanding the grid is "no easy or quick task," and Mark Zuckerberg has said that these power constraints are the biggest thing in the way of AI, which is... sort of true.

In essence, on top of generative AI not having any killer apps, not meaningfully increasing productivity or GDP, not generating any revenue, not creating new jobs or massively changing existing industries, it also requires America to totally rebuild its power grid, which Janous regrettably adds the US has kind of forgotten how to do.

Perhaps Sam Altman's energy breakthrough could be these fucking AI companies being made to pay for new power infrastructure.

The reason I so agonizingly picked apart this report is that if Goldman Sachs is saying this, things are very, very bad. It also directly attacks the specific hype-tactics of AI fanatics — the sense that generative AI will create new jobs (it hasn't in 18 months), the sense that costs will come down (they haven't, and there doesn't seem to be a path to them doing so in a way that matters), and the sense that there's incredible demand for these products (there isn't, and there's no path to it existing).

Even Goldman Sachs, when describing the efficiency benefits of AI, added that while it was able to create an AI that updated historical data in its company models more quickly than doing so manually, it cost six times as much to do so.


The remaining defense is also one of the most annoying — that OpenAI has something we don't know about. A big, sexy, secret technology that will eternally break the bones of every hater. 

Yet, I have a counterpoint: no it doesn't. 

Seriously, Mira Murati, CTO of OpenAI, said a few weeks ago that the models it has in its labs are not much more advanced than those that are publicly-available.

That's my answer to all of this. There is no magic trick. There is no secret thing that Sam Altman is going to reveal to us in a few months that makes me eat crow, or some magical tool that Microsoft or Google pops out that makes all of this worth it.

There isn't. I'm telling you there isn't. 

Generative AI, as I said back in March, is peaking, if it hasn't already peaked. It cannot do much more than it is currently doing, other than doing more of it faster with some new inputs. It isn’t getting much more efficient. Sequoia hype-man David Cahn gleefully mentioned in a recent blog that Nvidia's B100 will "have 2.5x better performance for only 25% more cost," which doesn't mean a goddamn thing, because generative AI isn't going to gain sentience or intelligence and consciousness because it's able to run faster.

Generative AI is not going to become AGI, nor will it become the kind of artificial intelligence you've seen in science fiction. Ultra-smart assistants like Jarvis from Iron Man would require a form of consciousness that no technology currently has, and may never have: the ability to both process and understand information flawlessly and to make decisions based on experience, which, if I haven't been clear enough, are all entirely distinct things.

Generative AI at best processes information when it trains on data, but at no point does it "learn" or "understand," because everything it's doing is based on ingesting training data and developing answers based on a mathematical sense of probability rather than any appreciation or comprehension of the material itself. LLMs are an entirely different piece of technology from "an artificial intelligence" in the sense that the AI bubble is hyping, and it's disgraceful that the AI industry has taken so much money and attention with such a flagrant, offensive lie.

The jobs market isn't going to change because of generative AI, because generative AI can't actually do many jobs, and it's mediocre at the few things that it's capable of doing. While it's a useful efficiency tool, said efficiency is based off of a technology that is extremely expensive, and I believe that at some point AI companies like Anthropic and OpenAI will have to increase prices — or begin to collapse under the weight of a technology that has no path to profitability.

If there were some secret way that this would all get fixed, wouldn't Microsoft, or Meta, or Google, or Amazon — whose AWS CEO compared the generative AI hype to the dotcom bubble in February — have taken advantage of it? And why am I hearing that OpenAI is already trying to raise another multi-billion-dollar round after raising an indeterminate amount at an $80 billion valuation in February? Isn't its annualized revenue $3.4 billion? Why does it need more money?

I'll give you an educated guess: because whatever they — and other generative AI hucksters — have today is obviously, painfully not the future. Generative AI is not the future, but a regurgitation of the past, a useful-yet-not-groundbreaking way to quickly generate "new" data from old that costs far too much to make the compute and energy demands worth it. Google grew its emissions by 48% in the last five years chasing a technology that made its search engine even worse than it already is, with little to show for it.

It's genuinely remarkable how many people have been won over by this con — this unscrupulous manipulation of capital markets, the media, and brainless executives disconnected from production — all thanks to a tech industry that's disconnected itself from building useful technology.

I've been asked a few times what I think will burst this bubble, and I maintain that part of the collapse will be investor dissent, punishing one of the major providers (Microsoft or Google, most likely) for a massive investment in an industry that produces little actual revenue. However, I think the collapse will be a succession of bad events — like Figma pausing its new AI feature after it immediately plagiarized Apple's weather app, likely as a result of training data that included it — crested by one large one, such as a major AI company like chatbot company Character.ai (which raised $150m in funding, and The Information claims might sell to one of the big tech companies) collapsing under the weight of an unsustainable business model built on unprofitable tech. 

Perhaps it's Cognition AI, the company that raised $175 million at a $2 billion valuation in April to make an "AI software engineer" that was so good that it had to fake a demo of it completing a software development project on Upwork.

Basically, there's going to be a moment that spooks a venture capital firm into pushing one of its startups to sell, or the sudden, unexpected yet very obvious collapse of a major player. For OpenAI and Anthropic, there really is no path to profitability — only one that includes burning further billions of dollars in the hope that they discover something, anything that might be truly innovative or indicative of the future, rather than further iterations of generative AI, which at best is an extremely expensive new way to process data.

I see no situation where OpenAI and Anthropic continue to iterate on Large Language Models in perpetuity, as at some point Microsoft, Amazon and Google decide (or are forced to decide) that cloud compute welfare isn't a business model. Without a real, tangible breakthrough — one that would require them to leave the world of LLMs entirely, in my opinion — it's unclear how generative AI companies can survive. 

Generative AI is locked in the Red Queen's race, its companies burning money to make money in an attempt to prove that they will one day make even more money, despite there being no clear path to doing so.


I feel a little crazy every time I write one of these pieces, because it's patently ridiculous. Generative AI is unprofitable, unsustainable, and fundamentally limited in what it can do thanks to the fact that it's probabilistically generating an answer. It's been eighteen months since this bubble inflated, and since then very little has actually happened involving technology doing new stuff, just an iterative exploration of the very clear limits of what an AI model that generates answers can produce, with the answer being "something that is, at times, sort of good."

It's obvious. It's well-documented. Generative AI costs far too much, isn't getting cheaper, uses too much power, and doesn't do enough to justify its existence. There are no killer apps, and no killer apps on the horizon. And there are no answers. 

I don't know why more people aren't saying this as loudly as they can. I understand that big tech desperately needs this to be the next hypergrowth market as they haven't got any others, but pursuing this quasi-useful, environmental-disaster-causing cloud efficiency boondoggle will send shockwaves through the industry.

It's all a disgraceful waste.

One public comment, from billyhopscotch:
Ed is usually right about AI, and his summary of the Goldman report covers it all.

It’s Independence Day. Read Frederick Douglass.

if you're feeling fearful, or pessimistic, or unable to speak of hope without sounding scornfully sarcastic just now, then you might need to read the whole thing.

Bracket Symbols

’"‘”’" means "I edited this text on both my phone and my laptop before sending it"
Two public comments:

jlvanderzwan: Is the implication that all French people are animorphs?

iustinp: He he :)

The Shareholder Supremacy


I promise you, everything that's happening makes sense. It all feels so chaotic, so utterly, offensively stupid, so disconnected from reality that it's hard to understand how Meta can run a terrible company with decaying services that's also wildly profitable, or how Meta, Microsoft and Google can proliferate unprofitable, unsustainable tech that takes water from the desert and strains our power grids to produce deeply mediocre outcomes based on incredibly vague promises and have their stock prices go up.

The answer is simple: the customer, and by extension the service provided to the customer, is no longer the primary concern of a company. It's all about shareholder value, and while this may seem a little obvious, it requires a little bit of a history lesson to really explain how profoundly damaging shareholder supremacy is. 

Our journey takes us back almost 100 years — long before the creation of the Internet, or the iPhone, or Facebook, or even the Manchester Baby, the world’s first stored program computer. We’ll talk about figures that predate the villains I’ve covered in the past — the Altmans, Pichais, and Raghavans of the technology world — but despite their historical distance from the current era, they’re important to know and understand, because they’ve fundamentally shaped the culture and psychology of today’s management elite, and crucially, the incentive structures that guide their companies. 

These stories explain the often-paradoxical motivations of modern capitalism, where those who make short-term decisions (that invariably result in long-term pain and, in many cases, decline) are rewarded, whereas those who build sustainable businesses that actually innovate, and don’t treat their customers and employees like dirt, are ignored — if not actively maligned. 

In 1916, the Ford Motor Company had an idea — to use its surplus capital to invest in new plants to increase production of Ford's Model T car, which the company had continually made cheaper while keeping wages for workers high. Ford intended to cut dividends to shareholders in favor of investing in its employees and infrastructure, which angered minority shareholders already incensed that Henry Ford (a horrible man otherwise) had prioritized the company's success (and its employees’ happiness) over making the stock price go up, leading to the famous Dodge v. Ford Motor Co. case that would define — and ultimately doom — modern capitalism, and in many ways birth the growth-at-all-costs Rot Economy.

The Michigan Supreme Court found that "a business corporation is organized and carried on primarily for the profit of the stockholders [and that] the powers of the directors are to be employed for that end," and intimated that cash surpluses should not be saved to invest in upcoming projects, but distributed to shareholders, because Ford had shown that it was good at making money. Ford was directly forbidden from lowering prices and raising employee salaries, and forced to issue a dividend.

To be clear, the statement about corporations’ duty toward shareholders was made "obiter dicta," meaning in passing rather than as part of the binding ruling. It was not actually legally binding, despite over a hundred years of people acting as if it was.

This statement — not even a legal precedent, a statement — was the beginning of what I call The Shareholder Supremacy, when companies moved away from building lasting, sustainable companies that created things and instead began focusing on pleasing shareholders. It birthed a short-term mindset focused on increasingly abstracting a company away from the production of goods or services and promoting growth mechanics that increased stock valuations and made better balance sheets. 

The cult of Shareholder Supremacy (also referred to as shareholder primacy) is one disconnected from production, and I'd argue humanity itself: a continual shell game where companies do things not to produce an outcome in real life, but to manipulate investors and the markets themselves. These tactics should be immediately recognizable to anyone who has followed this newsletter over the past few years. If a company’s share price declines and management smells a shareholder revolt, it can “juice” its numbers by laying off a few thousand workers, or adopting a specious new technology (like generative AI or the metaverse) and launching an accompanying media blitz, or launching a buyback program, diverting company funds from things like R&D and employee salaries to investors.

And it was the Shareholder Supremacy movement that created the nebulous creature known as "management" — a figurehead that exists to increase company value and make speeches rather than have any kind of domain expertise or bona fides, someone with just the ability to move numbers around and point at people to "get things done," even if "things" might mean "make something worse as a means of cutting costs."

In the eyes of the Shareholder Supremacist, the CEO of a tech company isn't someone that builds, invests in, or proliferates technologies, but a stage-magician-accountant hybrid that uses a combination of sleight of hand and vague promises to convince those around them that a company is "the future," with the occasional result being that the company might develop technology at some point.

Yet it took decades for the damage to really set in. In 1960, a horrible little man called Jack Welch joined General Electric, a company co-founded by lightbulb-inventor Thomas Edison to sell things like lightbulbs and refrigerators. Welch originally joined the company as a junior chemical engineer, a job he lasted roughly one year in before moving to a management track.

Jack Welch’s damage to the corporate world and society is more like that of a war criminal than a mere executive. He showed corporate America how unprofitable a soul was. He is, to quote Robert Evans on Behind the Bastards, the reason you were laid off. You, and every single other person who was laid off to make the company more money.

But I'll get to that. 

Eight years into his tenure, Welch would become the VP and head of General Electric's plastics division, and, to quote David Gelles' The Man Who Broke Capitalism, believed that business "was a Darwinian competition" where he was "better than the rest," which caused him to "push GE to the limits." In practice, this meant that Welch, as manager of a factory trying to develop a new plastic in 1963, continually pushed his team to "move faster, run more experiments — whatever it took," which led to a massive explosion at the factory thanks to Welch pushing his scientists to use an untested process where oxygen moved through a "highly volatile solution."

One might think that this would lead to Welch's ouster from the company, but it instead became a noxious management consultant fable about failure and was (to quote Gelles) "a point of pride" for Welch, one that "demonstrated a healthy appetite for risk." Welch became GE’s head of plastics five years later in 1968, and would use his aggressive (and dangerous) tactics to grow Noryl, a kind of plastic that’s well-suited for things like electronics, into a billion-dollar business, and crucially, becoming Head of Plastics gave Welch his very first stock options — which, in turn, began his obsession with stock valuations.

In 1977, Welch was one of a chosen few in line to take over from then-CEO Reg Jones, and was handed a series of business units to run, including GE's appliances business and, most important of all, GE Credit. I'll get back to that in a minute.

As Gelles recounts, Welch decided that despite its profitability and continued growth, appliances would face competition from overseas, and the right move was to start laying people off. It was a huge success at the company — insofar as it boosted profits — and other divisions copied his idea, gleefully firing thousands of people from a company that had grown incredibly successful by investing in making itself a great place to work. To quote Gelles, "Welch dispensed with the notion that mass layoffs were a measure of last resort," and "labor was a cost, not an asset." Before Welch, layoffs were something that happened when the company was collapsing, not as a means of boosting one’s balance sheet.

Thanks, Jack! I hope Hell exists if only for you to burn there along with Ronald Reagan.

Welch would become CEO in 1981, and in the space of two years lay off over 72,000 people. One tactic he employed was "stack ranking" (also known as “The Vitality Curve” or, as you’ll soon understand why, “rank and yank”), where high-ranking managers were forced to rank their subordinates and fire the bottom 10%. This tactic later spread to (and poisoned) countless other companies, including Amazon, Google, Activision Blizzard, and Microsoft (which has since stopped using it).

It’s worth noting that not every implementation of stack ranking usually results in immediate payroll cuts. Those perceived as low-performers may be denied bonuses or raises, or issued warnings, or put on a PIP (performance improvement plan, which is almost always a precursor to a firing), or simply encouraged to leave. But even in the most benign (for lack of a better word) form, it’s a pretty horrendous management tool. If you have a team of ten equally excellent workers, but only eight can get a bonus, or a positive ranking, with two left out (or, perhaps worse, ranked “inadequate”), you’re pretty much guaranteed to kill morale and team cohesion.

But perhaps that was the point. Welch's poisonous philosophies have deeply damaged the concept of management itself, turning managers into miniature accountants who see labor, as Welch did, as a cost center, and themselves as a protected class "above the fray," reframing the definition of a "good company" to mean "the one that grows profits while controlling labor costs." Welch eventually earned the nickname "Neutron Jack," a reference to the neutron bomb, which kills people but leaves infrastructure intact.

GE Credit, however, was where Welch would make his mark. General Electric, as a reliable, profitable and sustainable company, was able to fairly easily mobilize capital, something that Welch loved, claiming that "compared to the industrial operations I did know, this business seemed an easy way to make money," and that "[you] didn’t have to invest heavily in R&D, build factories and bend metal.” In the first few years of his tenure, Welch would "aggressively expand" (according to Gelles) GE Credit, buying up companies that "had nothing to do with manufacturing," including Kidder Peabody, an investment bank that would eventually turn out to have falsified $350 million in profits. GE Capital (as GE Credit was later renamed) would expand internationally, ballooning to $370 billion in assets by the time Welch left the company in 2001.

According to Gelles (and likely referencing this CNN article), at one point GE Capital was America's largest equipment lessor, leased hundreds of thousands of vehicles, and handled credit operations for companies like Kodak, becoming a backbone of America's increasingly debt-ridden economy. To quote Gelles, by the time that Welch left, GE Capital was effectively "a giant, unregulated bank," invested in "all kinds of risky debt instruments, insurance products and credit cards."

And crucially, Welch was at the time considered a genius who had taken General Electric's market capitalization from $14 billion to $400 billion, all through a very specific kind of financial deception in which General Electric would move things around — laying people off, buying new companies, selling old companies and getting into new industries — to match analyst expectations and make his earnings. In a fawning CNN piece from 1997 — years before Welch's departure and the collapse of GE Credit — reporter John Curran describes how GE Capital grew by spotting new opportunities and immediately building a new business in any market it could, taking advantage of Capital's "low cost of funds," at one point both leasing equipment to companies and buying it back, refurbishing it, and selling it to other companies.

In the space of a few decades, Welch had taken General Electric from a company that made lightbulbs and refrigerators and plastics to one that continually played with numbers as a means of boosting its stock price, including a $10 billion stock buyback in 1990, and the New York Times' John Holusha noted that GE was only investing 2.4% of its revenues in research and development, nearly a full percentage point below the national average at the time.

To be clear, General Electric was, at this point, an absolute dog of a company. As David Gelles noted in a Reddit thread, Welch operated in a time before Sarbanes-Oxley, the sweeping series of financial reforms instituted after the Enron scandal that required companies to disclose off-balance-sheet financial arrangements and make many other disclosures, rules that would likely have made Welch's financial games a little harder to play. Welch claimed in a 2002 CNBC interview that Sarbanes-Oxley would "suck [risk] out of the system" and cause people to "not go for their dream."

Welch's tenure was one that destroyed General Electric's ability to innovate while turning it into one of the most wildly-profitable companies in the world, all through a nihilistic form of capitalism where growth is all that matters, even if it means making worse products, constantly entering and exiting industries, reducing spend in research and development, outsourcing multiple parts of the company to avoid paying benefits and higher American wages, and generally treating human beings like inanimate assets.

The result, when Welch left the company, was a period of prolonged decline, as it became obvious that General Electric had become, as Gelles called it, a giant (and, previously at least, unregulated) bank, one operating in too many industries in a new regulatory environment that worked against it. General Electric and its bloated, messy asset portfolio were central to the 2008 financial crisis, with GE Capital overexposed to the crisis while also invested in subprime mortgages that would eventually see the company fined $1.5 billion by the SEC.

Sidenote: In Eastern Europe, things arguably were even worse. GE was one of several Western financial institutions that leapt across the Iron Curtain in the 2000s, following the entry of seven former communist countries into the EU in 2004 (and a further two, Romania and Bulgaria, in 2007). 

As these nations joined the European Single Market — which creates uniform rules for things like finance across all member states, while also legislating for the free movement of capital — GE opened (or bought) banks and lending companies in their capitals, and then flooded the market with mortgages and loans denominated in Swiss Francs. 

This bit is a tad complicated, but for a while, foreign lenders were reluctant to issue mortgages in currencies like the Polish Zloty and Hungarian Forint. The Swiss Franc offered stability, low interest rates, and until 2015 was roughly pegged to the Euro at a rate of 1:1.20, giving it a reputation for low volatility. 

If you’re even vaguely familiar with the economic malaise that wrecked the continent in the early-2010s, you see where this is going. 

In 2010, the Eurozone had its own financial crisis, which had a knock-on effect on the economies of places like Poland and Hungary. The Zloty, Forint, and other Eastern European currencies declined precipitously against the Swiss Franc, and those CHF-denominated loans suddenly became much, much more expensive to service for borrowers. When the Swiss central bank dropped its kind-of currency peg in 2015, the situation became even more acute. 
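A worked example with invented numbers (not actual historical rates) shows how brutal that currency move was for borrowers:

    # Invented rates, for illustration only.
    loan_pln = 300_000                      # mortgage taken out in zloty terms
    rate_at_origination = 2.0               # assumed PLN per CHF when signed
    loan_chf = loan_pln / rate_at_origination   # borrower owes 150,000 CHF

    rate_after_crisis = 4.0                 # assumed PLN per CHF after the slide
    balance_pln = loan_chf * rate_after_crisis
    print(balance_pln)                      # 600,000 PLN: the debt has doubled in
                                            # local currency; the house hasn't.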

For the unluckiest borrowers, the balance on their loans became greater than the value of the actual properties themselves — meaning, even if they sold their homes, they’d still owe thousands. This situation, which affected hundreds of thousands of borrowers, forced the Polish state to step in. A legal battle ensued and in 2019, the EU’s top court would rule these mortgages as “unfair,” invalidating them and forcing the banks to redenominate in the local currencies. Had it not, hundreds of thousands of people would have likely seen their homes repossessed. 

This saga — complicated though it was — is the perfect demonstration of GE’s fall from grace, tumbling from the company once founded by Thomas Edison to a sketchy financial services company that (and, to be fair to GE, along with many other foreign banks, like France’s BNP Paribas and Spain’s Santander) thrust dangerous and deeply-flawed products onto unsuspecting consumers. 

The company behind the Boeing 777’s massive GE90 engines, not to mention the first television broadcast, had opted to become the Eastern Bloc’s own version of Bear Stearns.      

Though GE would still make nearly half of its profits from its financial arm in 2013, General Electric would sell most of its financial units for $26.5 billion starting in 2015, and while one might say "wow, this is a great moment where the company moves away from Welchism," the company proudly announced that this would allow it to return $90 billion to investors in the form of buybacks and dividends by 2018, a promise I'm not sure it ever actually kept (though it recently announced a planned $15 billion buyback in May).

I realize this was an extremely long and arduous history lesson, but a necessary one to express the incredible darkness of Jack Welch's legacy: a poisonous philosophy holding that everything must grow, that the value of a company is that which it returns to the shareholders, and that human beings are costs to be moderated. During his tenure (as reported by Gelles), Welch ran a "campaign against loyalty," claiming that "the psychological contract has to change," and that loyalty, to Welch, was "not 'giving time' to some corporate entity" in return for shielding and protection from the outside world, but "an affinity among people who want to grapple with the outside world and win."

You know — moving fast and breaking things. Meritocracy. Making the numbers look right. Saying the right thing at the right time. The ability to run a company that binged and purged assets (like Google), taking on entirely new, unrelated business lines as a means of expressing growth to the markets (like Facebook), and still, despite your legacy being one of abject destruction and recklessness, being called an amazing leader by the New York Times as recently as 2022.

Welch's dark influence deeply poisoned American capitalism, creating an environment where the only good companies are those that grow forever. His acolytes include David Calhoun — once considered in line to replace Welch — who later moved to Boeing, where he served as a director of the board from 2009 until becoming lead independent director in 2018, Chairman in 2019, and then CEO in 2020, a period in which he was accused of "strip-mining Boeing" by pushing to cut costs with aggressive outsourcing.

During his tenure, two Boeing 737 Max 8s crashed — one in 2018 just outside of Jakarta, killing 189 people, and another in 2019 en route to Nairobi, killing 157 — followed by a door plug blowing off an Alaska Airlines flight in January 2024 that led to an investigation in which Alaska Airlines claimed it found "many loose bolts" on its now-grounded Boeing Max 9 planes.

Sidenote: Boeing is an interesting case of a company that — like GE — started out as a genuine innovator, but due to the influence of management minds and spreadsheet myopia, lost its way. The rot arguably took root in 1997, when Boeing merged with McDonnell Douglas in an attempt to bolster its share of the defense market. 

Whereas Boeing had traditionally been a very engineer-led business, McDonnell Douglas was the exact opposite, and was prone to making the kind of short-term decisions that look good on a balance sheet, but don’t really lead to long-term growth and innovation. That shouldn’t come as a surprise when you consider that its CEO, Harry Stonecipher, was a former GE executive and an acolyte of Jack Welch’s management philosophies. 

After the merger was completed, Stonecipher assumed a high-ranking role at Boeing, and later became CEO in 2003 after its incumbent leader, Phil Condit, became embroiled in allegations of corruption over the lease of aircraft refueling tankers to the US Air Force. As soon as Stonecipher took the helm at Boeing, he began instituting sweeping cultural and organizational changes. Gradually, the engineer-led culture that made Boeing so successful started to evaporate.

The company began divesting core parts of its business — like its aerostructures division, which later became Spirit AeroSystems, and which Boeing just announced it would repurchase for $4.6bn — thus forcing it to rely upon a vast global web of outsourced manufacturing and integration partners. Much of the software behind the 737 Max, for example, was written by coders working for Indian outsourcing giant HCL, who typically earned around $9 an hour.

This cultural shift goes, in some way, to explain the sheer existence of the 737 Max. The original 737 was first introduced in 1968, with the A320 coming nearly 20 years later, in 1988. When Airbus announced a new, more fuel-efficient version of the A320 series, Boeing was blindsided. While Boeing needed (and originally intended) to create a brand new airplane, it soon realized that it couldn’t do so in time to launch alongside the new A320neo. 

But, more pertinently, it didn’t want to. As Bill George, Executive Fellow at the Harvard Business School explained: “One of Stonecipher’s fated decisions was to turn down the proposal from Boeing’s head of commercial aviation to design an all-new single-aisle aircraft to replace the Boeing 727 (FAA-certified in 1964), 737 (1968 certification) and 757 (1972). Instead of designing a new airplane incorporating all the advances in aviation technology from the past 30-40 years, Stonecipher elected to maximize profits from older models and use the cash to buy back Boeing stock.”

And so, Boeing took the nearly 50-year-old 737 airframe and injected the aviation equivalent of botox into it, slapping on some new engines and (I’m massively oversimplifying here) calling it a day.

Had Boeing not been so hollowed out from the inside, it might have anticipated Airbus’ next step. It might have had the engineering capacity and the agility to build a compelling clean-sheet design that would have lasted for another 50 years. The irony here is that while the 737 Max was an attractive financial proposition for Boeing at first, it has since cost the company more than $20bn in fines, regulatory penalties, and compensation to passengers and airlines. 

Bob Nardelli — one of the three finalists at GE who competed to take over from Welch — went on to become CEO of Home Depot in 2000, where he boosted profits immediately by aggressively cutting costs. When the stock didn't stay competitive with Lowe's, Nardelli chose to cut experienced full-time employees in favor of part-time workers, eroding the company's already-shaky position in the market until he was paid $210 million to leave in 2007.

Every single one of these men fails upwards because Shareholder Supremacy is what truly dominates modern capitalism — the sense that what matters is growth and shareholder value, even if "shareholder value" really means "making a very specific group of people richer" and "showing perpetual growth to match the expectations of Wall Street." Welch himself had one particularly gruesome way of putting it: that "you can't grow long-term if you can't eat short-term," and that "the main social responsibility for a company is to win."

The crucial way to summarize Jack Welch was that he was, for the majority of his career, not actually engaging in the process of labor. Welch started as a chemical engineer at General Electric in 1960, but was a high-ranking manager three years later, no longer participating in the actual process that made the company rich. As Welch grew more powerful in the organization, he further distanced himself from production, and by the time he was CEO in 1981, Jack Welch hadn't done a real job in nearly twenty years. Under Welch, General Electric distanced itself from producing things constructed by people, and taught the economy that one didn't have to run a good business to be a good company — just one that had the right numbers.

And knowing all of this, it's important to note that Welch was, until fairly recently, considered a hero, with — of all people — Malcolm Gladwell among the first to truly question his legacy in October 2022, only a couple of weeks before The New York Times published a piece calling Welch an "amazing leader who inspired his colleagues to accomplish more." Because Welch's horrifying methods were so effective at boosting stock prices, he was considered "one of America's Greatest CEOs," with Forbes calling him a "managerial genius" and one of the greatest business minds of the time.

The problem is that the moniker "greatest CEO," in part thanks to Welch, no longer means "someone who makes a company with happy customers and sustainable profits." A CEO is no longer a person that built a company and runs it to provide a service, but the person that can make the company look "good" on paper, meaning that the company in question looks like it's growing, either in quarterly earnings or when presented to a gormless venture capitalist that hasn't participated in any meaningful form of production in years (or decades). 

Executives are no longer people that built things and take that expertise to build something else, but a rotating cast of disconnected stewards with the "right credentials," who can continually fail at their jobs — like Prabhakar Raghavan taking over Google Search after running Yahoo! Search into the ground — because they're measured not on efficacy, but on their ability to increase arbitrary metrics. These metrics were often esoteric ways to express growth, something that David Gelles reports was commonplace in Welch's world, where senior management would adjust inventory to show the appearance of profit, feeling "that the only way to achieve the enormous increases in sales and profits...was to bend the rules."

Welch gave birth to the fake businessperson, and the culture of the overpaid and ever-distant chief executive: a con artist that moves numbers around to make rich people happy, one that will never participate — and maybe never has — in the value exchange that makes them rich, all while lacking any fundamental appreciation or respect for labor, and all while demanding complete fealty from it. These people have now replicated themselves across generations of business-people, hiring and training others to be like them, poisoning private and public companies alike with people who find ways to abstract themselves away from doing anything. 

And it's exactly this type of person that's currently destroying Silicon Valley.


The following is an ad, but not one I was paid for.

In 2012, I met my best friend, a guy called Phil Broughton, because I bought a bottle of his coffee drink, The Black Blood of the Earth. BBotE is made by cold water steeping followed by vacuum extraction, pulling pure coffee out of the beans, and the result is a highly-concentrated, acid-free coffee, where one shot is all you really should have as BBotE is 40 times as powerful as regular coffee.

BBotE has two calories, and it tastes really good, kind of like the platonic form of "coffee" you imagined before you actually tried coffee for the first time. Bottles last three months in the fridge, and you can enjoy it like you would regular coffee (putting it in hot water), take a shot of it (do not take more than a shot), or mix half a shot in a glass of milk (4:1 ratio) to make something like an iced latte. It's really great. I've been drinking it for years. It's the best tasting coffee I've ever had while also being easy on my weak little stomach.

Phil is a health physicist and Deputy Laser Safety Officer, and personally makes every bottle of BBotE. Use the code "SMILINGMAN" for 10% off. You can buy it here. The code expires July 31st.


Last week, OpenAI CTO Mira Murati spoke for nearly an hour at Dartmouth College, where she'd recently accepted an "honorary" (read: fake, much like Jack Welch's) doctorate of science for "pushing the frontiers of what neural networks can do." Other than when she graduated with her Bachelor of Engineering from the Thayer School of Engineering, and for one year of her career, Murati has only ever been a manager — specifically a project manager, somebody who doesn't write code or build software, but points at things and says "hey, we should do that." 

After graduating, Murati worked briefly at French aerospace company Zodiac as an Advanced Concepts Engineer, before joining Tesla in 2013 as a Product Manager, and then moving to Leap Motion from 2016 to 2018, where she was the VP of Product and Engineering — a deceptive title that, again, does not mean she actually wrote code for a company that burned tens of millions of dollars on gesture-based controls for your Mac, and which would ultimately be acquired for $30m in 2019, just one-tenth of its all-time value. Her role would have undoubtedly involved wrangling Gantt charts and spec sheets, rather than lines of code in a text editor. 

After leaving Leap Motion, her career trajectory accelerated in a way that's hard to explain. She moved to OpenAI as its Vice President of Applied AI and Partnerships, was promoted to SVP of Research, Product, and Partnerships in 2020, and then, at some indeterminate time in 2022, became its Chief Technology Officer, despite having, from what I can tell, exactly one academic credit to her name in a list of twenty or so people.

OpenAI co-founder Ilya Sutskever, if you're curious, has over 100.

During her speech, Murati spoke about how artificial intelligence could kill creative jobs "that shouldn't have been there in the first place," something you'd only say if you had never created anything in your entire life. Putting that aside — and trust me, I'll get to it — Murati, much like Sam Altman, seems like she's completely full of shit, rambling in the vaguest of ways about how "the societal impacts of this work are not an afterthought" and that "you kind of have to build them alongside the technology." 

The Chief Technology Officer of the most prominent startup in America appears to not know that much about technology, to the point that she was unable to answer whether Sora, OpenAI's generative video product, was trained on publicly-available data all the way back in March, making a face that looked like she was chewing a vinegar-flavored wasp. While she could've been lying, Murati rarely demonstrates any kind of technical depth, dancing around answers with the vaguest possible explanations, and if you're wondering why the host didn't push her at all, it's because he's Jeffrey Blackburn — a career manager at Amazon who recently joined DoorDash's board.

The future of the tech industry is in the hands of people who don't know much about technology. OpenAI is run by Sam Altman, an unqualified non-technical founder who has conned his way to the heights of Silicon Valley. Google is run by Sundar Pichai, an MBA and former McKinsey management consultant that has overseen the destruction of Google's core products, laid off tens of thousands of people, and pushed Google to shoehorn generative AI into the core search product with disastrous results. Microsoft is run by Satya Nadella, another MBA that has, in his tenure, overseen layoffs of over 30,000 people, and pushed his company into the deeply unprofitable and unsustainable world of generative AI.

Amazon CEO Andy Jassy — surprise, surprise, another MBA — started at Amazon as a marketing manager in 1997, and to his credit came up with the idea of Amazon Web Services, where he was CEO until 2021, succeeded by Adam Selipsky, the former CEO of data software company Tableau and… a Harvard MBA that ran a management consultancy.

The founding story of Amazon Web Services may be one of the greatest lies in the tech industry's history — I can find little specific evidence as to who the actual architect of AWS was, only endless mewling about how Andy Jassy, an MBA, built one of the most important pieces of tech infrastructure of all time. Jassy regularly uses the royal "we" to talk about how he built Amazon Web Services, despite the fact that he didn't build anything at all: he was a manager managing managers, alongside then-CEO Jeff Bezos, another manager. 

What little I can find suggests that much of the actual technical work was done by people like then-CTO Allan Vermeulen and Brewster Kahle, who founded a company called Alexa Internet, which Amazon acquired in 1999 and which was instrumental in spinning up the services that became AWS — two names that I have never read in the many, many articles crediting Jassy as the architect behind Amazon's most profitable service. 

Arguably the most Welch-pilled Silicon Valley staple is Meta, a company dominated by MBAs like Sheryl Sandberg (its first Chief Operating Officer), Javier Olivan (who replaced Sandberg as COO in 2022), Facebook's "Chief People Officer" Lori Goler, and Chief Privacy Officer Michel Protti. Yet what makes Meta such a Welch-esque institution is its dedication to making its products worse in search of growth, with career product managers like CMO Alex Schultz and Head of Product Naomi Gleit dominating the company from its earliest days and creating a culture focused entirely on gaming Facebook's metrics to please Mark Zuckerberg's demand for perpetual 10% year-over-year growth in core metrics. Meta has laid off over 13% of its workforce in the last year — over 11,000 people — despite the fact that the company has been profitable for over a decade, even while plunging tens of billions of dollars into Zuckerberg's "reality labs" metaverse division and authorizing $50 billion in stock buybacks.

In many ways, Zuckerberg is Welch perfected — the CEO of a company that provides a continuously-deteriorating service that prints money while gaming its metrics (such as no longer reporting its monthly active users) to make Wall Street believe that it’s a “good company.” 

It doesn't matter that Meta has effectively given up on trying to solve its massive problems with AI-generated spam and scams, or that, as 404 Media reports, Meta has turned its back on the experts that helped bolster its content moderation services. None of this really matters, because the markets still love Meta, even though they briefly punished the company for "a light forecast" after it had "raised investor expectations due to [its] improved financial performance in recent quarters, leaving little room for error," according to CNBC — a business network that Jack Welch owned a large chunk of through his acquisition of NBC, part of the deal in which General Electric acquired RCA Corp in 1985. It's also important to note that Meta executives received bonuses when they fired thousands of people in 2023.

Google isn't much better. Since Sundar Pichai — a career product manager and MBA — became CEO in 2015, Google's culture has soured. The company gave Android inventor Andy Rubin $90 million in 2018 to leave after credible sexual misconduct claims, causing a massive walkout over Google's forced arbitration clauses for harassment and discrimination, clauses that muzzle victims and empower Google to quietly hide its failure to police its own company. This happened in the same year that others walked out over Google's work with the Pentagon. Under Pichai, Google has been fined billions of dollars by the European Union for antitrust violations around its Android operating system, and found itself embroiled in a three-year-long antitrust battle with the US Department of Justice over its anti-competitive approach to keeping Google Search on top, including paying Apple $20 billion in 2022 to be the default search engine on Safari. Under Pichai, Google has replaced hard-working lifers that built the very foundation of the company with scummy cretins like Prabhakar Raghavan and Jerry Dischler, the latter of whom transparently intimidated the Google Search team to increase ad revenue generated by search, in emails revealed in the trial from March 2019.

You’ll be shocked to hear that Jerry Dischler has an MBA.

Yet, as with Meta, and as with all of these companies (excluding OpenAI), Google prints money, netting over $23 billion in its Q1 2024 earnings, the same ones where it announced its first dividend and a $70 billion stock buyback program. It doesn't matter that Google Search is incredibly broken as a result of Google's constant drive to increase revenue, or that Google has decided to generate results on search using unreliable generative artificial intelligence that produces ridiculous answers. Alphabet (Google's holding company) has seen its stock price continually rise since March of this year.

As I've said previously, this is a special kind of financial nihilism that the market continues to reward, one that has poisoned Silicon Valley, elevating men and women like Sam Altman and Mira Murati to positions of power despite the fact that neither of them seems to actually build technology. Altman himself is a seedy charlatan, one that's grown incredibly powerful in the Valley despite never really having done anything, and it shouldn't surprise anyone that one of Altman's nine recommended books is "Winning" by Jack Welch, a book where Welch claims that "winning companies are meritocracies" and lionizes Kenneth Yu of 3M's Chinese operations for "throwing out the phony ritual of annual budgeting and replacing it with sky's-the-limit dialogue about opportunities," claiming that budgeting at 3M is "not about delivering good-enough plans and beating them…[but] about having the courage and zeal to reach for what can be done…[and] doesn't that sound like more fun than budgeting?" I imagine this must be referring to 3M bribing Chinese government officials as a means of selling product between 2014 and 2017, or the thousands of people that 3M has laid off in the last few decades, including in China.

Jack Welch is the man that Sam Altman lionizes — a sleazy con artist that continually moved around data as a means of making General Electric look bigger and stronger than it really was, kind of like how Sam Altman regularly says things like how we'll be able to "ask our computer to solve all of physics."

Behind the scenes, generative AI isn't moving the needle, with Isabelle Bousquette of the Wall Street Journal reporting that companies are finding that getting the full value of AI assistants requires "heavy lifting," including a hilarious quote from Google Cloud Chief Evangelist Richard Seroter (a career product manager) in which he blames those not finding value in AI for "not having their data house in order," haughtily chirping that "you can't just buy six units of AI and then magically change your business." 

In essence, it isn’t generative AI’s fault that it isn’t particularly useful without adapting it to your workflows — it’s your fault, you nasty little pig. You didn’t do the work the AI needed you to do to make it useful. You should feel ashamed of yourself. 

In all seriousness, the commonality between all of these people — Jack Welch, Sundar Pichai, Mira Murati, Sam Altman, Mark Zuckerberg and an alarming amount of CEOs inside and outside of tech — is that they, along with their companies, have escaped the human condition. They make their companies bigger for the sake of making them bigger, making more money to increase the value of the financial instrument attached to the company, abstracted away from any purpose, craft or creativity. 

These people — management consultants, MBAs and Product Managers — are not creators of anything other than financial occultism, a dark art where the financial value of a company is often separated from what it does or whether it’s good at it. They are parasites. 

Meta calls itself — still — a “social metaverse company” that’s “committed to keeping people safe and making a positive impact,” but at its core it’s a near-monopolistic social data and advertising company that is testing the limits of how little connectivity it can provide in its core products without sacrificing advertising revenue. Mark Zuckerberg has proven that he will do whatever he needs to — including letting deadly misinformation spread on Facebook — to keep showing that Meta is growing. One might be forgiven for thinking that Mark Zuckerberg is the exception, that he isn’t a management goon disconnected from the process of writing code, until you realize that he stopped coding eighteen fucking years ago in 2006. 

Hey, wait a second. Wasn’t Sheryl Sandberg brought in to make Facebook a real, mature business? She shared a communications coach with Jack Welch! God fucking damnit!

Like Welch, Zuckerberg has weathered numerous scandals — a movie that framed him as a sociopathic thief, a data-leaking scandal that may have influenced multiple elections, a scandal where Facebook deliberately emotionally manipulated 700,000 users using the news feed in 2014, a $5 billion FTC fine, to name but a few — and come out completely unscathed, all as he publicly destroys a company that was once a globally-beloved institution in the name of endless growth. 

In fact, much like Welch was applauded as one of the greatest minds in management for decades as he burned General Electric in broad daylight, Zuckerberg — a man that has taken a profitable company that once benefited society and made it both harmful and lackluster at providing its services — was named one of Barron's Top CEOs of 2024 last week, where it credited him with a "corporate course correction from the metaverse to artificial intelligence." Much like so many journalists and industry figures failed to properly identify Welch when he was right in their midst, Barron's failed to call Zuckerberg what he is — a con artist that took useful software and turned it into a data collection firm with a shitty product attached. Zuckerberg is trying something that Welch could have only dreamed of: seeing how little he can actually provide while still convincing the markets that Meta is valuable.

It’s almost a little on the nose.

And it fits. It perfectly fits. Shareholder Supremacy is the force currently driving the tech ecosystem, one that, as I've noted, currently lacks any remaining hyper-growth markets, and is funding and proliferating technologies that exist not to provide a truly innovative service, but to create more growth — a phony sense of progress that allows companies like Microsoft, Meta, Google and Amazon to create something that sort of looks innovative, even if their models are all extremely similar, as they're all trained on the same quickly-dwindling amount of training data. Already, we've seen AI developers become less discerning with the training data they use, with these models slurping up unverified social media posts (a Reddit post seemingly led to Google recommending adding wood glue to pizza sauce), satirical news sites like The Onion (which led to Google recommending users eat one small rock each day as part of a balanced diet), or other AI-generated content, which leads to a fun little phenomenon called Habsburg AI.

It's not like the markets are actually seeing if this shit does anything, or whether it's truly revolutionary. They didn't care that many tech products have gotten increasingly worse over time as tech executives were rewarded again and again for pursuing growth over delivering any kind of quality product. Since becoming CEO of Google, Sundar Pichai has been paid over $500 million, taking home $281 million (mostly made up of stock options) in 2019 — a fateful year that many of you might remember from The Man Who Killed Google Search — and $226 million in 2022, a year before Google laid off over 10,000 people.

Sundar Pichai’s job as CEO of Google is not to make sure that Google delivers great products, such as a quick path to an answer using Google Search — his job is to continually grow the amount of money that Google’s business units make. I have complete confidence that if Sundar Pichai was able to make Google Search worse than it already is and maintain double-digit year-over-year revenue growth, he would not only do so but be rewarded with hundreds of millions of dollars of stock options in perpetuity. 

Pichai is not judged — not even by the tech media itself, with The Verge’s Nilay Patel pulling his punches when face-to-face with him — for his Welchian nightmare, burning a genuine technological innovation that billions rely upon, a heartbreaking tragedy that happened in real time, one where a bad person makes worse people richer in a way that tangibly damages us all. We’re all forced to both suffer the consequences and mourn the loss of Google Search in slow motion, with nobody really wanting to admit the obvious — that Google intends to bleed this thing out unless regulation stops them. 

And I realize I sound a little dramatic describing this as a tragedy. Google Search was always a for-profit business, and advertising would so obviously poison it that Brin and Page warned about it in the original Google paper, saying that they expected that “advertising funded search engines would be biased towards the advertisers and away from the needs of the consumers.”

But Google Search was, for a while, something magical, a thing that we all took for granted, and I think you're being disingenuous if you pretend that this isn't something you cared about or relied upon, and I'd argue the same for the earlier days of Facebook and Instagram. These are — or, perhaps, were — institutions that helped many people my age (an ancient 38 years old) become who we are today, finding the things and the people we needed and connections we otherwise wouldn't have made or sustained. These were companies that made billions of dollars in profit by providing a kind of social good, even if it was one they realized they could use to create a monopoly and then rug pull us all in favor of shareholders.

Where Jack Welch destroyed multiple local economies in Massachusetts, Indiana and Pennsylvania by laying off thousands of workers in favor of cheaper outsourced options, Sundar Pichai and Mark Zuckerberg have realized that Shareholder Supremacy means making a product only as useful as it needs to be, and optimizing it at all times to meet analyst expectations. 

The unique problem that Sundar Pichai and the rest of the rot barons currently face is that there aren't any hyper-growth markets left, and they've been desperately adapting to that reality since 2015. So many promises — augmented reality, home robotics, autonomous cars, and even artificial intelligence writ large — never quite materialized into viable business units, making big tech that little bit more desperate. I have repeatedly and substantively proven that both Meta and Google made their products worse in pursuit of growth, and they've done so by following a roadmap drawn by Jack Welch, a sociopathic scumbag who realized that he could turn General Electric into a shambling monstrosity of a company that could shapeshift into whatever the Street needed. 

And I believe that this same financial nihilism is what empowers people like Mira Murati and Sam Altman, but also millions more middle managers and absentee CEOs like the kinds I've been writing about for the last three years. Our economy is run by people that have never built anything, running companies that they contort to make a number go up for shareholders they rarely meet — people like David Zaslav, the CEO of Warner Brothers Discovery, who chose not to release Coyote vs. Acme, a fully-produced and ready-to-debut movie featuring Warner Brothers' core brands, in order to save money on the company's tax bill. Zaslav has overseen one of the darkest periods in Warner Brothers Discovery's history, including endless cutbacks, and, through his own mismanagement, caused a 5-month-long writers' strike. Zaslav continued to complain even after the strike ended, claiming that studios "overpaid" as he earned a $50 million salary for driving the company into the ground.

And where do you think David Zaslav gets his fucking management philosophy from? Huh? Can you guess? Can you guess who it might be?

It’s Jack fucking Welch! 


During a commencement speech in 2023 where he was booed by students, David Zaslav told the story of how, sometime after General Electric acquired NBC (through GE's acquisition of RCA in 1985), he sent a handwritten note to Jack Welch telling him about himself and what he'd been doing, leading to a job running NBC in 1989. This story doesn't make a ton of sense against any recorded history of David Zaslav's life: he joined NBC three years after the acquisition of RCA closed, and while he claimed he was "working for a few years" in cable news, it appears he went straight into working at a law firm (LeBoeuf, Lamb, Leiby & MacRae, according to his Warner Brothers profile) when he graduated from Boston University in 1985, meaning that he would have had to send that note while working at a firm that was most assuredly not associated with entertainment or cable news.

Regardless of whether Zaslav made up an entire story about how he got his job at NBC, he was — and is — one of Welch's staunchest acolytes, telling David Gelles that "Jack set the path. He saw the whole world. He was above the whole world" and that "what [Welch] created at GE became the way companies now operate." Zaslav, a man who has caused unfathomable damage to the entertainment industry while intentionally choosing not to release movies, learned to do so from the Michael Jordan of corporate destruction. And to Zaslav, Welch was "like an older brother" who would "pick [people at NBC] up with a hug" when things were tough, and there was "no better friend," according to an interview with CNBC.

I realize I’m oscillating between industries and names, but the Shareholder Supremacy is something that has poisoned almost every part of global capitalism. Dedication to the shareholder is, in many ways, kind of like a religion. It isn’t really about “a” shareholder or “the” shareholder, just moving assets to make a number go up, and making another number look like it might go up in the future. Welch, Zaslav, Pichai and Zuckerberg are all the same kind of monster — one divorced from consequence and production, incapable of contributing meaningfully to the world at large because the market has shown that it doesn’t really care if they do so. 

Sundar Pichai doesn’t use Google, and Mark Zuckerberg doesn’t use Instagram, Facebook or Threads — at least not like an actual user does, anyway, much like how David Zaslav doesn’t watch or care about TV shows and movies. Their deep-seated nihilism means that they aren’t users, or creators, or even active participants in any part of society. They are there to move the stuff around so that they can keep numbers going up.

I believe this is all a consequence of Jack Welch’s legacy, where the economy has been built on the back of a management philosophy that no longer involves managing people or building things. The business media — and society at large — elevates executives like Zuckerberg and Pichai as they demolish their products and lay off thousands of people because the terms of “success” are no longer about making money by providing a good service. The markets don’t reward steady growth, nor do they punish executives for the quality of their product, because analysts and investors aren’t concerned with the product, just the financial output.

At this point, the illogical rush to boost artificial intelligence makes a lot of sense. When you have a society and economy dominated by people who neither create nor understand the things they’re selling, people that don’t experience or respect labor, their natural thought will be that any form of creativity or “work” can (and should) be automated. It isn’t about what’s “good,” but what’s “enough,” and artificial intelligence allows them to test the boundaries of what “enough” can mean. 

Toys-R-Us (or, at least, the husk that remains after the company was acquired in a leveraged buyout, systematically gutted, and then pushed into bankruptcy) should feel an abundance of shame for releasing a horrifying AI-generated “origin of Toys-R-Us” movie full of ugly hallucinations and janky animations, but what it’s really trying to see is how much it can get away with — how shitty something can be without losing it customers, or whether the potential loss of customers is worth it to save the money it would take to actually shoot a commercial. 

When Mira Murati said that AI was killing creative jobs that "shouldn't have been there in the first place," it wasn't just a grotesque insult to those actually working in creative professions like journalism, design, and software engineering — though it was that. Nor was it merely a demonstration of her unchained sociopathy and hubris, and of her belief that she — a woman of little actual accomplishment and talent, and the intellectual depth of a puddle on a flat sidewalk — is singularly ordained to decide which jobs are worthy, and which jobs deserve to die.

Really, it was an insult to all of us. It was an expression of her belief that consumers don’t deserve things created by a person who actually gives a shit. You don’t deserve to watch movies that only exist thanks to the combined efforts of illustrators, sound engineers, composers, actors, writers, directors, and countless other uncelebrated roles, where each cog in the machine has spent years honing their skills until they’ve reached a level of unassailable mastery. 

You, yes you, don't deserve to use software created by someone who actually thought about the problem and — at the very least — made a best effort to write secure, robust code. As pointed out by cybersecurity firm Snyk, Microsoft's GitHub Copilot tool routinely — and unquestioningly — outputs insecure code, because (as I've said ad nauseam in other newsletters) it doesn't know anything. A human can look at a codebase and identify, for example, areas where the system doesn't guard against SQL injection attacks, which could allow an attacker to steal and modify data from the database. An AI tool merely guesses what code looks right, given the prompt it was provided.
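To make that concrete, here's a minimal sketch in Python (SQLite and the "users" table are my own illustrative choices, not anything from the Snyk report) showing the kind of string-concatenated query a human reviewer would flag on sight, next to the parameterized version that fixes it:

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # VULNERABLE: user input is concatenated straight into the SQL string.
        # Passing username = "' OR '1'='1" rewrites the WHERE clause into a
        # tautology and dumps every row: the classic SQL injection.
        query = "SELECT id, username FROM users WHERE username = '" + username + "'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # SAFE: a parameterized query. The driver passes the value separately,
        # so attacker-controlled input is treated as data, never as SQL.
        return conn.execute(
            "SELECT id, username FROM users WHERE username = ?", (username,)
        ).fetchall()

Spotting the first pattern takes a human reviewer seconds; a probabilistic autocomplete will happily emit it whenever enough of its training data did.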

When Murati gleefully brags about how AI will eviscerate an unknowable number of creative jobs — which she has personally identified as expendable — what she’s really saying is that people should be content with using shitty, broken, AI-generated software, and watching shitty, broken, AI-generated films, where the number of digits on each hand fluctuates wildly like the line on an oscilloscope, and where nothing new is said — but rather, a machine regurgitates stuff created by other people based solely on a mathematical model of probability.     

Generative AI is exciting to the disconnected business freaks running our economy because it’s a way to abstract and outsource even more forms of labor. We have spent decades pushing young people to “get into management” without ever teaching anybody about what managers are meant to do, creating a class structure in organizations where there are those that do things and those that take the credit. 

And the latter are the people in charge — disconnected from labor, disconnected from quality, disconnected from production, and thus incapable of making informed decisions other than “what if we moved this number here” and “what if we stopped paying these people.” The AI bubble has been inflated by people excited about the prospect of not having to deal with those filthy “laborers” that “do work,” and they ultimately aspire to make companies with as few people as possible, with the CEO making the most money because they’re the ones that move the numbers around for the markets.

This also explains why they’re so incapable of describing what AI is, what it does, and why it’s useful. It doesn’t matter that data centers might end up using as much power as India by 2034, or that generative AI isn’t actually that useful. It’s a chance for them to further disconnect themselves from having to pay actual people to do actual work, a thing that they themselves consider the product of the underclass. They don’t care that the output is mediocre, that the product is unprofitable, that there is a quickly-approaching wall that generative AI can’t leap over as it runs out of training data, or that the transformer-based architecture of Large Language Models has hard limitations that are impossible to overcome. This is a shiny new object they can wave at investors that already don’t know what any of this stuff means, one that lets them dream of a world without labor. Generative AI wants us to stop researching, or talking, or thinking, and eventually, it will come for your livelihood. 

In some respects, generative AI is morally worse than Welch's odious anti-worker philosophy. Whereas Welch saw workers as a cost center to be minimized and eliminated where possible, generative AI extends that idea to the raw materials necessary to build apps like ChatGPT, with Mustafa Suleyman, the CEO of Microsoft AI and the co-founder of DeepMind, declaring the existence of an unspoken social contract where online content is "freeware" for building AI models. The law, I note, tends to regard actual contracts more seriously than unspoken (and, let's face it, imaginary) social contracts, and I desperately look forward to the day when Suleyman's ideas are tested in court. I don't fancy his chances.

But that's the thing: in many of my newsletters and podcasts, I've argued that the current trajectory of the tech industry will lead it to ruin. The deep poison of the Shareholder Supremacy — the nihilistic intention of contorting companies to please analysts and investors — is one that will uniquely punish the tech industry, just like it did General Electric. Google, Microsoft and Meta have bet their futures on generative AI, cramming it into their products with little regard for utility, all to show the markets that they're futuristic rocket-ship growth companies, rather than (at least in Google and Meta's case) aging and decaying empires that got too big and were corrupted by the forces of the lazy management sect.

This is the force behind everything — the dark hand that demanded you return to the office in spite of the productivity gains of remote work, the weight behind the people destroying the media, the people who lied about cryptocurrency being the future of finance and the people that collapsed multiple banks because risk management is considered the enemy. It will burn the world to a crisp in search of a profit, in search of eternal growth, in search of the things that will make the rich, disconnected monsters even more capable of escaping the drudgery of knowing and doing things. This is a dark future where somehow all labor is automated, all money flows upwards, and society — drained of taxes and any kind of social safety net — does… something with all the children that Elon Musk demands that we have.

The tech industry is dominated by management consultants and product managers, with people that have written few lines of code in their lives holding sway over actual builders, instructing them to create things that they don’t understand as a means of improving the bottom line rather than solving a need, or even addressing obvious business fundamentals like profitability or stability.

At some point, things have to collapse. I am not saying that every major tech company goes belly-up, but I believe the generative AI boom is the force that creates a reckoning in this industry. 

It’s almost a little too on the nose — a tool that only seems revolutionary to people that haven’t written, or coded, or drawn, or sang, or created anything in years. The last four years have proven that the tech industry is desperate for somebody to follow. Mark Zuckerberg claimed the metaverse was the future, and the market (along with Microsoft and many other companies) agreed, even if “the metaverse” was a barely-conceived pipedream by a guy who hasn’t coded since 2006. 

Generative AI is the next step up — a trend with an actual product, a superficially-impressive doodad that can sell cloud compute access and proliferate more software, even if the underlying technology is so remarkably unprofitable and unreliable that big tech firms are having to calm down their salespeople. And big tech was desperate for something new to follow, something new to do with themselves that would sustain the Valley’s threadbare facade as a crucible of innovation and progress, rather than doing the hard work to actually research and develop important technology that people would like and pay for. Sam Altman gave them something to do, and he packaged it in a way that told them how to think about it — and, of course, lie about it.

At some point, something has to give. The markets are fickle, and demanding, and engineered to demand endless growth and endless returns. Generative AI is hitting a wall, and any future improvements will be marginal, and cost billions to achieve. Transformer-based architectures like the kind used in OpenAI's GPT models are gigantic math-machines that know nothing, and will never, ever create the Artificial General Intelligence (AGI) that Sam Altman has been claiming they will. Generative AI does not appear to provide the kind of easy business returns reminiscent of the cloud computing and mobile app store booms, and once that becomes clear, the markets will punish those most deeply invested.

And once they do, the real fun will begin, as big tech reconciles with a lack of innovation, and leadership teams stuffed with management consultants, product managers and sycophants that lack the ideas and the expertise to know what actually might be next, not least because they’ve sequestered themselves not just from creators, but from people writ large.

I have no idea when this reckoning will happen, but I do know this: I started writing this as a brief way to connect the past to the present, and as with everything I’ve ever written, I found my argument as I wrote it, researching as I did so. There is no prompt that could have told a generative AI to write what I’ve written, because the process of writing — the actual labor of considering which words to use and finding the theories behind the things I was seeing — is what creates great writing, not the probabilistic formulation of whatever you might create if you trained on billions of words of other people’s stuff.

These people will demand, as Jack Welch did, that we burn whatever we need to in the pursuit of growth — to divert billions of dollars, exabytes of data, endless acres of data centers — all so that they can maybe create a future they’ve never understood. It doesn’t matter if it actually does anything, just as long as they can convince shareholders and analysts that it might one day do so. Mentally, it’s hard to consider any of this a grift, because doing so would be admitting how much of the economy is controlled by grifters.   

I leave you with a quote from Nik Suresh’s incredible “I Will Fucking Piledrive You If You Mention AI Again”:

You see, while hype is nice, it's only nice in small bursts for practitioners. We have a few key things that a grifter does not have, such as job stability, genuine friendships, and souls. What we do not have is the ability to trivially switch fields the moment the gold rush is over, due to the sad fact that we actually need to study things and build experience. Grifters, on the other hand, wield the omnitool that they self-aggrandizingly call 'politics'. That is to say, it turns out that the core competency of smiling and promising people things that you can't actually deliver is highly transferable.

Generative AI is an act of theft in and of itself, perpetrated by people that have stolen innovation in the name of Shareholder Supremacy, creating a degenerative form of innovation that optimizes the tech industry to create things that sound cool rather than things that actually help people. And the people perpetrating these acts — Sundar Pichai, Mark Zuckerberg, Andy Jassy, Satya Nadella and Sam Altman — are all the same kind of charlatan, the ultimate manager, one that has created the means to escape the workforce and, ultimately, the need to create anything of any kind. 


I do not claim to have any answers, or to truly know how to unwind the terrors in front of us. I don't know how you change things, other than by loudly saying what I see in front of my eyes and how it makes me feel, and wanting others to, at the very least, have more clarity into the things that are being done to them, and the ways in which others have twisted the system to make more money than anyone's ever had by transforming companies into nihilistic engines for growth, anti-companies that create value by reducing what they contribute to the world. 

But I encourage you not to become nihilistic yourself, or to lose faith in the power of sunlight, in calling these people what they are – parasites. Being clear and concise and continually outraged at the damage these people do to society is necessary, and I encourage you all to catalogue everything you see. These people have names – Sam Altman, Sundar Pichai, Mark Zuckerberg, Satya Nadella – and all of them, diverting the world's most talented engineers into unsustainable ways of doing things they don't even understand, can be held accountable, at the very least on the record. 

Believe your eyes. You are being given less so that others can have more; you are having things taken from you by corporations because they must always please the nebulous form of the shareholder.

These do-nothing corporate stooges deserve your loud, proud and consistent ire. 
