A bunch of AI-related news has popped up this week, so let’s do a roundup.
Asianometry notes that TSMC’s caution about expanding is amply justified by the boom-and-bust nature of the semiconductor industry:
- “I’m hearing many similar views in the Silicon Valley Borg that TSMC is the brake or limiter on the AI boom, as if they’re the reason why we don’t have AGI yet. Because they didn’t and still don’t believe.”
- “If we can ever say that a company that spent $41 billion on capital expenditure in 2025, with another $53 to $56 billion in 2026 planned, is sitting on its hands, doing nothing.”
- “TSMC having 90% share of the AI chip market looks pretty unhealthy. That should go down and it will. Samsung seems to be doing well so far.”
- “The cold, hard reality is that shortages are a fact of life in semiconductors, as are horrific gluts.”
- “What we are flippantly labeling as TSMC we really mean is the AI supply chain. And that supply chain is as complicated as you can possibly imagine. Like an iceberg, it looks big enough on the surface of the water, but goes way far deeper underneath. TSMC has thousands of suppliers in two categories: Equipment like the famed ASML lithography tools and materials like photoresist, silicon wafers, acid etch gases and so on. These are not generalized tools and materials. They are not fungible like AWS compute units.”
- “And then there are the memory guys. You cannot ship an AI system without memory. DRAM and NAND. Nvidia’s AI chips use a special form of DRAM called high bandwidth memory, and they use quite a lot of it. The memory industry is just as consolidated as the logic industry, with the major players being Samsung, SK Hynix and Micron.”
- “The chip guys are last to know when the party is getting started, but first they get batoned in the face when the police shut things down.”
- He points out that semiconductor manufacturers have long supply chains. He uses a different metaphor (the beer distribution game, or a bullwhip), but back when I was working at Applied Materials, it was described as trains linked together with Slinkys. First software takes off, then hardware gets yanked along, then the chip manufacturers get yanked, and then, finally, semiconductor equipment manufacturers get yanked into motion. Shortly after that happens, the bust hits the front of the train, and the trailing cars all crash into each other. It’s a regular boom/bust cycle.
- “From 1961 to 2006, electronics consumption in the United States grew positively but with wild volatility swings between 0 and 20%. But for the semiconductor makers, that translates to swings anywhere from 20% to 40%. And for the equipment makers, it is amplified even more, plus or minus 60%. The whip hits particularly hard in the semiconductor industry because of the industry’s long lead times. It takes 4.5 months to fabricate and package a chip. It takes 18 months to 2 years to build a fab, meaning from shovels down to producing chips. And it takes 12 to 18 months to produce and install something like an EUV machine into the fab, then another 6 months before that machine actually starts patterning wafers.”
- “Long lead times mean having to make very long demand forecasts, which leads to extreme volatility swings during up and downturns even if those up or downturns are relatively small.” People forget that in 1998, during the time we now think of as the DotCom Boom, there was a small semiconductor downturn that had Applied Materials forcing employees to take unpaid leave.
- “ASML just reported 2025 earnings, and we see the bullwhip in full effect. TSMC raised capital expenditure 35% but ASML announced €13.2 billion of net new bookings. Analysts had expected just €6.32 billion. This is because ASML collected orders not just from TSMC, but also Samsung, Intel and the memory guys. When it rains it pours, right? Again, this is why I fear that another AI foundry would not mean our compute shortage is solved, because ultimately, when those foundries start scaling their capacity, they all go to the same suppliers.”
- He goes over how car manufacturers cancelled orders during Flu Manchu, and then scrambled when the economy took off afterwards. “TSMC was trying to discern between double booked orders and real demand, which is not an uncommon experience for them. Customers lie about their own demand all the time, or at least we can say that they are eternally optimistic. TSMC tried to respond in 2022. The Taiwanese giant poured $36 billion into capital expenditure. They went to their suppliers and pushed like no tomorrow.”
- “It turned out those customers really were double booking orders and artificially inflating demand. When the macro environment turned in 2022, the automotive, smartphone, and PC chips that were so hot during the COVID era fell out of vogue and customers started cutting orders.”
- “Meanwhile, deeper down in the supply chain, TSMC and the rest of the semiconductor industry were getting bullwhipped by the COVID hangover. Utilization at TSMC’s multi-billion dollar N7 fabs crashed, SemiAnalysis wrote in April 2023. Now, SemiAnalysis data indicates that the 7nm utilization rates were below 70% in Q1. Furthermore, Q2 gets even worse with 7nm utilization rates falling to below 60%. This is primarily due to weakness in both smartphones and PCs, but there is a broader weakness in most segments. A fab’s break even utilization rates are about 60% to 70%. So those N7 Taichung fabs were taking financial losses potentially on the order of hundreds of millions, maybe even billions. The financial burdens of low utilization are another reason why I’m skeptical another AI foundry could have rushed into the AI chip fray to save the day.”
- He says that Intel incurred losses during this period due to an unnecessary fab expansion, which is probably true, but that was a secondary factor next to their longer running problem of getting their process wrong.
- “ChatGPT was released in November 2022, and that kicked off a massive increase in capex amongst the hyperscalers in particular, but it sure seems like TSMC didn’t buy the hype. That lack of increased investment earlier this decade is why there is a shortage today and is why TSMC has been a de facto brake on the AI buildout/bubble.”
- “I recall news in mid 2024 of TSMC struggling with CoWoS capacity bottlenecks and yield problems, including one design issue that caused cracks in the Nvidia chips’ packaging.” CoWoS is Chip on Wafer on Substrate, which involves fabbing an interposer as a substrate for faster connections between your processing chips and memory.
- “I also recall news in late 2024 noting how the vendors in charge of making the server racks for Nvidia’s Blackwell servers struggled with overheating, liquid cooling leaks, software bugs, and connectivity issues. Such technical difficulties delayed server deployment until early to mid 2025, creating a weird situation for several months where TSMC was pumping out chips that just went into storage. So that gated things, because you don’t scale until you first fix the technical problems.”
- Then there’s the power-scaling issue, which is a whole ‘nuther can of worms.
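The bullwhip amplification he describes can be sketched in a few lines of code. Everything here is hypothetical for illustration: a made-up end-demand series and a simple moving-average-plus-lead-time ordering rule, not anyone’s actual forecasting model. The point is just that longer lead times force each upstream tier to overreact to changes in the tier below it:

```python
# Hypothetical sketch of the bullwhip effect: each tier orders against a
# short moving-average forecast, scaled up by its lead time, so a mild
# swing in end demand gets amplified as it travels upstream.

def tier_orders(demand, lead_time, window=4):
    """Given a downstream demand series, return what this tier orders
    upstream: cover the forecast, plus a lead-time-scaled reaction to
    the gap between current demand and the forecast."""
    orders = []
    for t in range(len(demand)):
        recent = demand[max(0, t - window + 1): t + 1]
        forecast = sum(recent) / len(recent)
        orders.append(max(0.0, forecast + lead_time * (demand[t] - forecast)))
    return orders

def swing(series):
    """Peak-to-trough swing as a percentage of the starting level."""
    return (max(series) - min(series)) / series[0] * 100

# End-customer demand: a mild 10% bump that lasts a few quarters.
end_demand = [100.0] * 8 + [110.0] * 8 + [100.0] * 8

chip_orders = tier_orders(end_demand, lead_time=2)    # chipmakers
equip_orders = tier_orders(chip_orders, lead_time=4)  # equipment makers

print(f"end demand swing: {swing(end_demand):.0f}%")
print(f"chip order swing: {swing(chip_orders):.0f}%")
print(f"equipment swing:  {swing(equip_orders):.0f}%")
```

With these made-up numbers, a 10% end-demand bump becomes roughly a 25% swing in chip orders and well over a 100% swing in equipment orders, the same shape as the amplification quoted above, with the whip cracking hardest at the back of the chain.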

The software sector was jolted overnight with what analysts are calling a “SaaSpocalypse” — a sudden and severe selloff triggered by new artificial intelligence tools unveiled by US AI startup Anthropic. The episode has sharpened investor fears that AI is no longer merely helping software companies but may now begin replacing them.
Anthropic has expanded its enterprise AI platform, Claude Cowork, by launching 11 new plugins aimed at automating a wide range of professional tasks. Claude Cowork is an agentic, no-code AI assistant built for corporate users, allowing companies to automate workflows without writing software. The new plugins are designed to handle tasks across legal, sales, marketing and data analysis functions. The most recent addition is Anthropic’s Claude Legal agent, which can perform routine legal work such as document and contract review, and compliance checks.
Anthropic has said that the tool does not provide legal advice and that all AI-generated outputs must be reviewed by licensed attorneys. Even so, the breadth of automation signals a step change in how much white-collar work AI systems can now perform.
Here are the current plugins for Claude Cowork:
- Productivity — Manage tasks, calendars, daily workflows, and personal context
- Enterprise search — Find information across your company’s tools and docs
- Plugin Create/Customize — Create and customize new plugins from scratch
- Sales — Research prospects, prep deals, and follow your sales process
- Finance — Analyze financials, build models, and track key metrics
- Data — Query, visualize, and interpret datasets
- Legal — Review documents, flag risks, and track compliance
- Marketing — Draft content, plan campaigns, and manage launches
- Customer support — Triage issues, draft responses, and surface solutions
- Product management — Write specs, prioritize roadmaps, and track progress
- Biology research — Search literature, analyze results, and plan experiments
A lot of those are already automated elsewhere, but I suspect a lot of accountants and paralegals just felt a goose strut across their grave. On the other hand, who is really going to turn over, say, Accounts Payable to an AI? One glitch, and your entire bank account is drained…
If it works (a big if, given so many AIs are prone to hallucinations), this is potentially good news for Anthropic and the companies using their tools, and bad news for SaaS companies and the employees currently doing those jobs.

I note there’s no plugin for technical writing…yet.
And Google Cloud ended 2025 at an annual run rate of over $70 billion, representing a wide breadth of customers, driven by demand for AI products.
“We’re seeing our AI investments and infrastructure drive revenue and growth across the board. To meet customer demand and capitalize on the growing opportunities we have ahead of us, our 2026 CapEx investments are anticipated to be in the range of $175 to $185 billion.”
In September 2025, Nvidia and OpenAI announced a letter of intent for Nvidia to invest up to $100 billion in OpenAI’s AI infrastructure. At the time, the companies said they expected to finalize details “in the coming weeks.” Five months later, no deal has closed, Nvidia’s CEO now says the $100 billion figure was “never a commitment,” and Reuters reports that OpenAI has been quietly seeking alternatives to Nvidia chips since last year.
Reuters also wrote that OpenAI is unsatisfied with the speed of some Nvidia chips for inference tasks, citing eight sources familiar with the matter. Inference is the process by which a trained AI model generates responses to user queries. According to the report, the issue became apparent in OpenAI’s Codex, an AI code-generation tool. OpenAI staff reportedly attributed some of Codex’s performance limitations to Nvidia’s GPU-based hardware.
After the Reuters story was published and Nvidia’s stock price took a dive, Nvidia and OpenAI tried to smooth things over publicly. OpenAI CEO Sam Altman posted on X: “We love working with NVIDIA and they make the best AI chips in the world. We hope to be a gigantic customer for a very long time. I don’t get where all this insanity is coming from.”
Microsoft’s Copilot chatbot has become central to its artificial-intelligence strategy as the company’s close partnership with OpenAI diminishes. But the effort to build it up as a ChatGPT alternative has been tough going.
Remember, Copilot is the AI that wants to take pictures of your desktop every few seconds. Golly, can’t imagine why it’s unpopular…
Confusing brand positioning and interoperability problems have frustrated users, current and former employees who have worked on Microsoft’s AI products said.
Interoperability problems? With a Microsoft product?

Only a small proportion of subscribers to Microsoft’s enterprise suite use Copilot, and the percentage who favor it over Google’s Gemini or other tools has decreased in recent months, according to data reviewed by the Journal.
The stakes are high for Microsoft because Copilot is core to a push by Chief Executive Satya Nadella to transform Microsoft into an AI-first company, much as he transformed it into a cloud-first company around a decade ago. Copilot is one of Nadella’s top priorities, current and former executives said.
Microsoft shares tumbled after its earnings report last week sparked investor concern that growth in its most important unit, the Azure cloud-computing business, is slowing, and that its AI business is reliant on OpenAI while Copilot remains unproven. Shares fell nearly 3% Tuesday amid a slide in software stocks prompted by fresh concerns that AI tools will make enterprise subscriptions less necessary.
For other AI companies, we merely suspect they’re evil. For Microsoft (and Google), we already know they’re evil…