Posts Tagged ‘Anthropic’

Amazon AI Causes Amazon Outage

Sunday, February 22nd, 2026

Megacorporations are telling businesses that their AI offerings are good enough to run vital company functions. The problem is, those AIs are still screwing up, and frequently in ways humans wouldn’t screw up. That’s what Amazon found out when they tried to eat their own dogfood, putting their AI in charge of Amazon Web Services. It didn’t go well.

Are AI tools reliable enough to be used in commercial settings? If so, should they be given “autonomy” to make decisions? These are the questions being raised after at least two internet outages at Amazon’s cloud division were allegedly caused by blundering AI agents, according to new reporting from the Financial Times.

In one incident in December, engineers at Amazon Web Services allowed its in-house Kiro “agentic” coding tool to make changes that sparked a 13-hour disruption, according to four sources familiar with the matter. The AI, ill-fatedly, had decided to “delete and recreate the environment,” the sources said.

When something is “in the cloud,” that means it’s sitting on someone else’s computer. More specifically, it’s probably running as a containerized instance on shared CPU and storage pools, managed under a hypervisor that scales resources up or down as demand requires. This allows efficient use of those resources, and it’s made AWS Amazon’s most profitable business. And most of the time AWS works pretty well.
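For the curious, that “scale up or scale down” decision reduces to a very simple rule at its core. Here’s a toy sketch in Python (the function and the numbers are hypothetical illustration, not anything AWS actually runs):

```python
import math

def desired_instances(load, capacity_per_instance, min_inst=1, max_inst=10):
    """Hypothetical autoscaling rule: run just enough container instances
    to cover the current load, clamped to a minimum and maximum."""
    needed = math.ceil(load / capacity_per_instance)
    return max(min_inst, min(max_inst, needed))

print(desired_instances(250, 100))  # 250 units of load -> 3 instances
```

Real autoscalers layer cooldowns, health checks, and demand prediction on top of that, but the core decision really is that simple.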

Amazon employees claimed that this was not the first service disruption involving an AI tool.

“We’ve already seen at least two production outages [in the past few months],” one senior AWS employee told the FT. “The engineers let the AI [agent] resolve an issue without intervention. The outages were small but entirely foreseeable.”

AWS launched its in-house coding assistant, Kiro, in July. The company describes the tool as an “autonomous” agent that can help deliver projects “from concept to production.” Another AI coding assistant developed by Amazon was involved in the earlier outage.

The employees said the AI tools were treated as an extension of an operator and given operator-level permissions. In both of the outages, the engineers didn’t require a second person’s approval before finalizing the changes, going against typical protocol.

In a statement to the FT, Amazon claimed the outage was an “extremely limited event” that affected only one service in parts of China.

I’m not sure I was aware AWS operated in China, but I guess I’m not surprised. Is it too much to ask that the China data centers are adequately segmented and firewalled from the American data centers?

Moreover, it was a “coincidence that AI tools were involved” and that “the same issue could occur with any developer tool or manual action,” it said.

Except code changes are usually run through rigorous testing in a continuous integration/continuous deployment pipeline, and then deployed to a test server for performance and regression testing. It’s not clear that was done here.
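The core of that CI/CD gate is trivially simple to express. Here’s a minimal sketch in Python (the commands are placeholders, not Amazon’s actual tooling): nothing deploys unless the test suite passes first.

```python
import subprocess

def gated_deploy(test_cmd, deploy_cmd):
    """Run the test suite; only run the deploy step if every test passes.
    Both commands are placeholder argument lists, not real pipeline tools."""
    tests = subprocess.run(test_cmd)
    if tests.returncode != 0:
        print("tests failed; refusing to deploy")
        return False
    subprocess.run(deploy_cmd, check=True)
    return True
```

Per the reporting, it’s this kind of gate, plus a second person’s sign-off, that got skipped.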

It also claimed that its Kiro AI “requests authorisation before taking any action,” but that the engineer involved in the December outage had more permissions than usual, calling this a “user access control issue, not an AI autonomy issue.”

“In both instances, this was user error, not AI error,” Amazon insisted.

True, in the sense that an Amazon engineer evidently allowed an AI to alter production code.

The company also claimed that it had not seen evidence that mistakes were more common with AI tools. To which we retort: is Amazon living under a rock? While AI and its foray into commercial applications remain nascent, there’s no shortage of evidence showing that the tools are prone to malfunctioning. Their proclivity for producing hallucinations, or instances in which they fabricate facts, is well documented. So are their weak guardrails. Even some of Amazon’s own employees are reluctant to use AI tools because of the risk of error, they told the FT.

Veteran programmers are finding that AI coding assistants consistently spit out botched code, with several studies showing that the frequent double- and triple-checking those questionable outputs require actually slows down software engineers, even though the AI, on a surface level, may be producing the code faster. The rise of “vibe coding” with AI has resulted in numerous blunders in which an agentic AI makes decisions that its owners didn’t intend.

Of course, it would not be much of a ringing endorsement if tech companies weren’t using the AI tools they claim will supercharge productivity in their own operations, and they’ve been more than willing to get high on their own supplies. Both Microsoft and Google boast that over a quarter of their code is now written with AI. Engineers at Anthropic and OpenAI have suggested that nearly 100 percent of their code is AI written.

This does not inspire me with confidence. Let’s pull out the relevant XKCD comic again:

The only reason the modern technological world works is that someone, somewhere understands at a deep level how each of those boxes works, and can fix it if something goes wrong. And for Open Source software, the source code for those boxes is available somewhere other people can look at it and understand it.

When you start replacing the code in some of those boxes with AI-generated code, you start losing the knowledge of how everything works and why. Maybe the AI is producing clear, well-documented code, but you can’t count on it. And the AI doesn’t understand code the way a human does, because an AI doesn’t understand anything in the way we mean it. It’s running on artificially evolved heuristics that have performed well at designing things to pass documented test cases, but which have zero framework for handling unanticipated exceptions. And when it breaks, there’s no guarantee a human will understand how and why it broke.

And given competitive time-to-market pressures, you can be sure companies will increasingly ship AI code without adequate safeguards or sufficient testing. Their service is down hard and the latest code fixes the last AI bug, so they’ll roll the fix straight to production, and something in that fix will be an even more disastrous bug none of the test cases caught, and everything will come tumbling down.

And if you do that with enough of those little boxes of digital infrastructure, the entire underpinnings of modern online life may come tumbling down with it. And you can’t find people to fix it because you laid them off last year and replaced them with AI.

The problem with eating your own dog food is that sometimes it can be lousy, especially if you have no idea what went into it…

Microsoft AI Head: Most White Collar Jobs Automated In 18 Months

Sunday, February 15th, 2026

Here’s a pretty provocative prediction: Microsoft AI CEO Mustafa Suleyman predicts “most, if not all” white-collar tasks will be automated by AI within 18 months.

Microsoft’s AI CEO is joining a chorus of executives who say they anticipate widespread job automation driven by artificial intelligence.

Mustafa Suleyman, the Microsoft AI chief, said in an interview with the Financial Times that he predicts most, if not every, task in white-collar fields will be automated by AI within the next year or year and a half.

“I think that we’re going to have a human-level performance on most, if not all, professional tasks,” Suleyman said in the interview that was published Wednesday. “So white-collar work, where you’re sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person — most of those tasks will be fully automated by an AI within the next 12 to 18 months.”

The CEO said the trend is already observable in software engineering, in which employees are using “AI-assisted coding for the vast majority of their code production.”

“It’s a quite different relationship to the technology, and that’s happened in the last six months,” he said.

I really, really doubt that. AI has been able to do surprisingly well on a number of programming tasks mainly because it was created by programmers. It’s no wonder that it should be good at something like, say, creating a program to ingest JSON REST data into an SQL database. But the further you get from technical domains, the further you’re getting from people who can correct the AI when it gets things wrong.
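That kind of task really is well-trodden ground. A minimal sketch (with a made-up payload and schema, using an in-memory SQLite database) is only a dozen lines:

```python
import json
import sqlite3

# The payload and schema here are made-up examples, not any real API --
# the shape of JSON a REST endpoint might return.
payload = """[
  {"id": 1, "name": "widget", "price": 9.99},
  {"id": 2, "name": "gadget", "price": 19.99}
]"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL)")

# Parse the JSON and insert each record using named placeholders.
records = json.loads(payload)
conn.executemany(
    "INSERT INTO products (id, name, price) VALUES (:id, :name, :price)",
    records,
)
conn.commit()

row_count = conn.execute("SELECT COUNT(*) FROM products").fetchone()[0]
print(row_count)  # 2
```

It’s exactly the sort of exhaustively documented boilerplate the training data is full of, which is why AI handles it well.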

Not every white collar employee is absolutely necessary, but as a class they possess the vital wells of institutional knowledge that lie beyond the datasets AI have been trained on. They also understand exception handling, knowing what to do when things go wrong. AI systems generally do poorly when faced with input combinations they’ve never seen before, which is why you have those videos of dozens of Waymo taxis blocking roads in clumps.

Also, given how zealously IT teams work to secure company data, are they really going to just blithely set AIs loose in their databases? Especially when their jobs may be among the ones targeted for elimination? Especially when AI is still prone to notorious hallucinations?

There are places where AI might replace experts, namely those that use wide but highly structured datasets for narrow decision points, like some areas of regulatory law. But are decision makers really going to remove the ability to blame underlings for mistakes? “Sure we lost $100 million, but the AI told me it was OK!” is probably not going to wash as an adequate ass-covering maneuver. And, as I noted before, who is going to put an AI in charge of Accounts Payable when a single glitch could drain your entire bank account?

Plus there are entire classes of white collar jobs (sales comes to mind) where I don’t see AI making any headway replacing the fleshy incumbents.

Also, even where AI might effectively replace some humans, I don’t see companies letting Microsoft AIs in the front door. Copilot is a product people hate so much that current users won’t even turn it on, even though they’re getting it free. Maybe Microsoft will make money getting in the back door via investments in OpenAI and Anthropic, but I doubt that’s going to reflect glory on Mustafa Suleyman.

But finally, let’s assume that his prediction is true and that most white collar jobs will be automated out of existence in the next 18 months. Just how does Microsoft expect to profit in the inevitable widespread economic collapse? If you think New York City’s trajectory is dire under Mamdani now, how much direr is it going to get when vast swathes of people who pay the most taxes (and buy the most stocks) lose their income almost overnight? The subprime meltdown will look like a cakewalk in comparison when everyone starts selling their investment holdings just to pay for food and rent.

And if it’s known that the big AI companies are the direct causes of their immediate penury, how are such grandees going to protect themselves from the fury of the newly unemployed mobs? There were probably just a few thousand of Ned Ludd’s boys smashing mill machinery in early 19th century England. If Suleyman’s prediction came to pass, there would be what, 30 or 40 million people hurled into unemployment at the same time? In that sort of environment, thousands of Luigi Mangiones would bloom. And if it’s a global phenomenon, neither the French Riviera nor Davos will be safe from the fury of the mobs. And that’s assuming Congress doesn’t step in with something like the Execute Mustafa Suleyman Live On National Television Act.

So I’m reasonably certain that Suleyman’s prophecy will not come to pass. (Wait, a Microsoft CEO’s predictions turning out to be wrong? What are the odds?)

And anyway, if his prediction does have a chance of coming true, the best move isn’t investing in Microsoft, it’s investing in canned goods and ammunition…

AI News Roundup For February 5, 2026

Thursday, February 5th, 2026

A bunch of AI-related news has popped up this week, so let’s do a roundup.

  • Some AI companies are complaining that TSMC is killing the AI boom by not expanding rapidly enough:

    Asianometry notes that TSMC’s caution at expanding is amply justified by the boom-and-bust nature of the semiconductor industry:

    • “I’m hearing many similar views in the Silicon Valley Borg that TSMC is the brake or limiter on the AI boom, as if they’re the reason why we don’t have AGI yet. Because they didn’t and still don’t believe.”

    • “If we can ever say that a company that spent $41 billion on capital expenditure in 2025, with another $53 to $56 billion in 2026 planned, is sitting on its hands, doing nothing.”
    • “TSMC having 90% share of the AI chip market looks pretty unhealthy. That should go down and it will. Samsung seems to be doing well so far.”
    • “The cold, hard reality is that shortages are a fact of life in semiconductors, as are horrific gluts.”
    • “What we are flippantly labeling as TSMC we really mean is the AI supply chain. And that supply chain is as complicated as you can possibly imagine. Like an iceberg, it looks big enough on the surface of the water, but goes way far deeper underneath. TSMC has thousands of suppliers in two categories: Equipment like the famed ASML lithography tools and materials like photoresist, silicon wafers, acid etch gases and so on. These are not generalized tools and materials. They are not fungible like AWS compute units.”
    • “And then there are the memory guys. You cannot ship an AI system without memory. DRAM and NAND. Nvidia’s AI chips use a special form of DRAM called high bandwidth memory, and they use quite a lot of it. The memory industry is just as consolidated as the logic industry, with the major players being Samsung, SK Hynix and Micron.”
    • “The chip guys are last to know when the party is getting started, but first they get batoned in the face when the police shut things down.”
    • He points out that semiconductor manufacturers have long supply chains. He uses a different metaphor (the beer distribution game, or a bullwhip), but back when I was working at Applied Materials, it was described as trains linked together with slinkys. First software takes off, then hardware gets yanked along, then the chip manufacturers get yanked, and then, finally, semiconductor equipment manufacturers get yanked into motion, and shortly after that happens, the bust hits the front of the train, and the trailing cars all crash into each other. It’s a regular boom/bust cycle.
    • “From 1961 to 2006, electronics consumption in the United States grew positively but with wild volatility swings between 0 to 20%. But for the semiconductor makers, that translates to swings anywhere from 20% to 40%. And for the equipment makers, it is amplified even more, plus or minus 60%. The whip hits particularly hard in the semiconductor industry because of the industry’s long lead times. It takes 4.5 months to fabricate and package a chip. It takes 18 months to 2 years to build a fab. Meaning from shovels down to producing chips, and it takes 12 to 18 months to produce and install something like an EUV machine into the fab. Another 6 months before that machine actually starts patterning wafers.”
    • “Long lead times mean having to make very long demand forecasts, which leads to extreme volatility swings during up and downturns even if those up or downturns are relatively small.” People forget that in 1998, during the time we now think of as the DotCom Boom, there was a small semiconductor downturn that had Applied Materials forcing employees to take unpaid leave.
    • “ASML just reported 2025 earnings, and we see the bullwhip in full effect. TSMC raised capital expenditure 35% but ASML announced €13.2 billion of net new bookings. Analysts had expected just €6.32 billion. This is because ASML collected orders not just from TSMC, but also Samsung, Intel and the memory guys. When it rains it pours, right? Again, this is why I fear that another AI foundry would not mean our compute shortage is solved, because ultimately, when those foundries start scaling their capacity, they all go to the same suppliers.”
    • He goes over how car manufacturers cancelled orders during Flu Manchu, and then scrambled when the economy took off afterwards. “TSMC was trying to discern between double booked orders and real demand, which is not an uncommon experience for them. Customers lie about their own demand all the time, or at least we can say that they are eternally optimistic. TSMC tried to respond in 2022. The Taiwanese giant poured $36 billion into capital expenditure. They went to their suppliers and pushed like no tomorrow.”
    • “It turned out those customers really were double booking orders and artificially inflating demand. When the macro environment turned in 2022, the automotive, smartphone, and PC chips that were so hot during the COVID era fell out of vogue and customers started cutting orders.”
    • “Meanwhile, deeper down in the supply chain, TSMC and the rest of the semiconductor industry were getting bullwhipped by COVID hangover. Utilization at TSMC’s multi-billion dollar N7 fabs crashed, SemiAnalysis wrote in April 2023. Now, SemiAnalysis data indicates that the 7nm utilization rates were below 70% in Q1. Furthermore, Q2 gets even worse with 7nm utilization rates falling to below 60%. This is primarily due to weakness in both smartphones and PCs, but there is a broader weakness in most segments. A fab’s break even utilization rates are about 60% to 70%. So those N7 Taichung fabs were taking financial losses potentially on the order of hundreds of millions, maybe even billions. The financial burdens of low utilization are another reason why I’m skeptical another AI foundry could have rushed into the AI chip fray to save the day.”
    • He says that Intel incurred losses during this period due to an unnecessary fab expansion, which is probably true, but that was a secondary factor next to their longer running problem of getting their process wrong.
    • “ChatGPT was released in November 2022, and that kicked off a massive increase in capex amongst the hyperscalers in particular, but it sure seems like TSMC didn’t buy the hype. That lack of increased investment earlier this decade is why there is a shortage today and is why TSMC has been a de facto brake on the AI buildout/bubble.”
    • “I recall news in mid 2024 of TSMC struggling with CoWoS capacity bottlenecks and yield problems, including one design issue that caused cracks in the Nvidia chips packaging.” CoWoS is Chip on Wafer on Substrate, which involves fabbing an interposer as a substrate for faster connections between your processing chips and memory.
    • “I also recall news in late 2024 noting how the vendors in charge of making the server racks for Nvidia’s Blackwell servers struggled with overheating, liquid cooling leaks, software bugs, and connectivity issues. Such technical difficulties delayed server deployment until early to mid 2025, creating a weird situation for several months where TSMC was pumping out chips that just went into storage. So that gated things, because you don’t scale until you first fix the technical problems.”
    • Then there’s the power-scaling issue, which is a whole ‘nuther can of worms.
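As an aside, the bullwhip amplification he describes is easy to see in a toy model (the tier count and overreaction gain below are made-up illustration numbers, not his figures): each tier in the chain overreacts to changes in the orders it receives, so a modest blip in end demand grows as it travels upstream.

```python
def bullwhip(demand, tiers=3, gain=1.5):
    """Toy bullwhip model: each upstream tier amplifies order-to-order
    changes by `gain`. Not a real supply-chain model, just the shape."""
    orders = list(demand)
    for _ in range(tiers):
        amplified = [orders[0]]
        for prev, cur in zip(orders, orders[1:]):
            amplified.append(amplified[-1] + gain * (cur - prev))
        orders = amplified
    return orders

# A 10% blip in end demand becomes a ~34% swing three tiers upstream.
print(bullwhip([100, 110, 100]))
```

Same shape as his numbers: modest swings in electronics consumption turn into ±60% swings for the equipment makers at the back of the train.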

  • There’s a lot of talk about a SaaSpocalypse going on thanks to a new AI tool. (SaaS is “Software as a Service.” Instead of hosting your own payroll or sales-tracking or whatever servers, you hire a company that already has cloud software set up to do it and you just tie into that, which can considerably reduce startup costs. A whole lot of successful new tech companies over the last decade plus have been SaaS companies.)

    The software sector was jolted overnight with what analysts are calling a “SaaSpocalypse” — a sudden and severe selloff triggered by new artificial intelligence tools unveiled by US AI startup Anthropic. The episode has sharpened investor fears that AI is no longer merely helping software companies but may now begin replacing them.

    Anthropic has expanded its enterprise AI platform, Claude Cowork, by launching 11 new plugins aimed at automating a wide range of professional tasks. Claude Cowork is an agentic, no-code AI assistant built for corporate users, allowing companies to automate workflows without writing software. The new plugins are designed to handle tasks across legal, sales, marketing and data analysis functions. The most recent addition is Anthropic’s Claude Legal agent, which can perform routine legal work such as document and contract review, and compliance checks.

    Anthropic has said that the tool does not provide legal advice and that all AI-generated outputs must be reviewed by licensed attorneys. Even so, the breadth of automation signals a step change in how much white-collar work AI systems can now perform.

    Here are the current plugins for Claude Cowork:

    • Productivity — Manage tasks, calendars, daily workflows, and personal context
    • Enterprise search — Find information across your company’s tools and docs
    • Plugin Create/Customize — Create and customize new plugins from scratch
    • Sales — Research prospects, prep deals, and follow your sales process
    • Finance — Analyze financials, build models, and track key metrics
    • Data — Query, visualize, and interpret datasets
    • Legal — Review documents, flag risks, and track compliance
    • Marketing — Draft content, plan campaigns, and manage launches
    • Customer support — Triage issues, draft responses, and surface solutions
    • Product management — Write specs, prioritize roadmaps, and track progress
    • Biology research — Search literature, analyze results, and plan experiments

    A lot of those are already automated elsewhere, but I suspect a lot of accountants and paralegals just felt a goose strut across their grave. On the other hand, who is really going to turn over, say, Accounts Payable to an AI? One glitch, and your entire bank account is drained…

    If it works (a big if, given how many AIs are prone to hallucinations), this is potentially good news for Anthropic and the companies using their tools, and bad news for SaaS companies and the employees currently doing those jobs.

    I note there’s no plugin for technical writing…yet.

  • Google/Alphabet just reported $400 billion in earnings in 2025. CEO Sundar Pichai:

    And Google Cloud ended 2025 at an annual run rate of over $70 billion, representing a wide breadth of customers, driven by demand for AI products.

    We’re seeing our AI investments and infrastructure drive revenue and growth across the board. To meet customer demand and capitalize on the growing opportunities we have ahead of us, our 2026 CapEx investments are anticipated to be in the range of $175 to $185 billion.

  • Remember how Nvidia was going to invest $100 billion in OpenAI? Yeah, not so much.

    In September 2025, Nvidia and OpenAI announced a letter of intent for Nvidia to invest up to $100 billion in OpenAI’s AI infrastructure. At the time, the companies said they expected to finalize details “in the coming weeks.” Five months later, no deal has closed, Nvidia’s CEO now says the $100 billion figure was “never a commitment,” and Reuters reports that OpenAI has been quietly seeking alternatives to Nvidia chips since last year.

    Reuters also wrote that OpenAI is unsatisfied with the speed of some Nvidia chips for inference tasks, citing eight sources familiar with the matter. Inference is the process by which a trained AI model generates responses to user queries. According to the report, the issue became apparent in OpenAI’s Codex, an AI code-generation tool. OpenAI staff reportedly attributed some of Codex’s performance limitations to Nvidia’s GPU-based hardware.

    After the Reuters story published and Nvidia’s stock price took a dive, Nvidia and OpenAI have tried to smooth things over publicly. OpenAI CEO Sam Altman posted on X: “We love working with NVIDIA and they make the best AI chips in the world. We hope to be a gigantic customer for a very long time. I don’t get where all this insanity is coming from.”

  • You know who’s not winning the AI war? Microsoft.

    Microsoft’s Copilot chatbot has become central to its artificial-intelligence strategy as the company’s close partnership with OpenAI diminishes. But the effort to build it up as a ChatGPT alternative has been tough going.

    Remember, Copilot is the AI that wants to take pictures of your desktop every few seconds. Golly, can’t imagine why it’s unpopular…

    Confusing brand positioning and interoperability problems have frustrated users, current and former employees who have worked on Microsoft’s AI products said.

    Interoperability problems? With a Microsoft product?

    Only a small proportion of subscribers to Microsoft’s enterprise suite use Copilot, and the percentage who favor it over Google’s Gemini or other tools has decreased in recent months, according to data reviewed by the Journal.

    The stakes are high for Microsoft because Copilot is core to a push by Chief Executive Satya Nadella to transform Microsoft into an AI-first company, much as he transformed it into a cloud-first company around a decade ago. Copilot is one of Nadella’s top priorities, current and former executives said.

    Microsoft shares tumbled after its earnings report last week sparked investor concern that growth in its most important unit, the Azure cloud-computing business, is slowing, and that its AI business is reliant on OpenAI while Copilot remains unproven. Shares fell nearly 3% Tuesday amid a slide in software stocks prompted by fresh concerns that AI tools will make enterprise subscriptions less necessary.

    For other AI companies, we merely suspect they’re evil. For Microsoft (and Google), we already know they’re evil…

LinkSwarm for December 5, 2025

Friday, December 5th, 2025

Following hot on the heels of Thanksgiving travel and the final push to put out a new Lame Excuse Books catalog next week, this is going to be a somewhat briefer LinkSwarm.

This week: The Supreme Court greenlights the Texas redistricting map, a whole lot of support behind Trump Accounts, more Tim Walz corruption in Minnesota, the January 6 pipeline bomber turns out to be a black anti-Trump radical, more Ukrainian missile and drone strikes on Russian infrastructure, another pedo teacher exposed, Netflix buys Warner Brothers, and a tsunami of horrifying sequels barrels towards movie screens. It’s the Friday LinkSwarm!

  • “Texas’ Redistricting Map Left Intact by U.S. Supreme Court, Permanently Halting Lower Court Ruling.”

    Texas’ newly redistricted congressional map will remain in effect for the 2026 primary after the U.S. Supreme Court on Thursday approved a stay of a lower court panel’s ruling against the new lines.

    The State of Texas had applied for a stay of that ruling by the El Paso-based federal judicial panel that came down last month, which declared that legislators illegally considered racial factors in the redraw. The Office of the Attorney General (OAG) then appealed that ruling to the U.S. Supreme Court, citing many of the fiery arguments made by the panel’s lone dissenter, Judge Jerry Smith.

    Before Thanksgiving, Justice Samuel Alito issued a temporary stay of the ruling, pending further consideration by the full court.

    Now that stay has been made permanent, pending a full appeal later on, in a 6 to 3 ruling by the court along ideological lines. Justices Samuel Alito, Clarence Thomas, and Neil Gorsuch penned a concurring opinion.

    “First, the dissent does not dispute—because it is indisputable—that the impetus for the adoption of the Texas map (like the map subsequently adopted in California) was partisan advantage pure and simple,” the trio wrote.

    “Thus, when the asserted reason for a map is political, it is critical for challengers to produce an alternative map that serves the State’s allegedly partisan aim just as well as the map the State adopted. Id., at 34; Easley v. Cromartie, 532 U. S. 234, 258 (2001). Although respondents’ experts could have easily produced such a map if that were possible, they did not, giving rise to a strong inference that the State’s map was indeed based on partisanship, not race.”

    They concluded, “Neither the duration of the District Court’s hearing nor the length of its majority opinion provides an excuse for failing to apply the correct legal standards as set out clearly in our case law.”

    Justices Elena Kagan, Sonia Sotomayor, and Ketanji Brown Jackson dissented.

    On to 2026.

  • “Billions Spent By One-Party-Rule Maryland Democrats With Little Oversight.”

    The one-party rule of ‘Democratic Kings’ in Maryland continues to reveal an optically displeasing truth about these leftist activists masquerading as competent politicians, who are anything but, and their epic mismanagement of state finances has only occurred because of limited oversight into their radical agendas.

    Fox Baltimore reports that a state legislative audit uncovered major concerns about the oversight of billions of dollars spent by Democratic Gov. Wes Moore and his rudderless leftist allies in Annapolis, who champion everything from failed climate-crisis policies to wokeism to gender identity agendas to social justice and criminal justice reforms, as well as protecting illegal aliens (new voter base) – this is anything but ‘Maryland First’…

    “Most recently, a state audit revealed 42 state offices spent a total of $8.5 billion last year with minimal oversight. That audit came on the heels of a State Highway Administration audit detailing $360 million in unauthorized spending for federal projects, and a separate Social Services Administration audit revealing a lack of protections for foster care children in Maryland,” Fox Baltimore wrote in a report.

    Taxpayers Protection Alliance president David Williams told Fox Baltimore journalist Jeff Abell, “It’s a problem that almost $9 billion is going to these entities and we just don’t know where the money is going.”

    Williams expressed serious concerns over the findings, pointing out, “This is supposed to be a system of checks and balances. We know the checks have gone out but there are no balances to be sure the money is being spent wisely.”

    He called for increased oversight, saying, “If you’re receiving taxpayer money, there has to be full accountability, and this is billions of dollars we’re talking about.”

    The lack of oversight in Maryland comes as no surprise, given that the state suffers from a disastrous one-party rule of far-left Democrats who care more about upholding the globalist framework of climate-crisis and illegal alien policies.

    Moore’s photo next to dark-money-funded NGO emperor Alex Soros makes it all the more clear why he and Maryland Democrats operate with a globalist framework in the first place.

    The result of one-party rule has been a ballooning deficit, soaring taxes, a credit rating downgrade, and a continued large-scale exodus of residents fleeing to red states as Maryland quickly loses its charm and is on track to transform into the next “Illinois 2.0.” On top of the financial failures, power grid mismanagement has collided with surging data center demand, sending power bills through the roof.

    It’s not a mystery where it went. It disappeared into the pockets of radical leftwing activists and NGOs.

  • Ted Cruz and Cory Booker want to help create Trump Accounts.

    An unlikely bipartisan Senate duo is spearheading a push for employers to donate to the new “Trump accounts” created under the GOP’s “big, beautiful” reconciliation package last summer.

    Sens. Ted Cruz, R-Texas, and Cory Booker, D-N.J., teamed up on a letter sent to Fortune 1000 CEOs on Monday encouraging their companies to contribute to the new investment accounts created for young children. Dell CEO Michael Dell and his wife, Susan, pledged a $6.25 billion donation to the accounts Tuesday that earned them a White House appearance with President Donald Trump.

    The savings accounts, which are funded with after-tax contributions, were dubbed “Trump accounts” under the budget reconciliation law. The government will contribute $1,000 to the accounts for babies born this year through the end of Trump’s term.

    The Congressional Budget Office estimated that the provision would cost $15 billion over 10 years. The Dell donation would expand the program to reach children who wouldn’t qualify for the federal contribution.

    “These tax-advantaged accounts ensure that every American child is an immediate shareholder in America’s largest companies and will experience the miracle of compound growth through their lifetime,” Cruz and Booker wrote in their letter seeking corporate contributions.

  • Texas Lt. Governor Dan Patrick “Backs Trump’s Baby Investment Plan, Wants To Double It in Texas. Under the proposal, Texas newborns would receive an additional $1,000 from the state treasury at birth.”

    Lt. Gov. Dan Patrick says Texas should create its own version of President Donald Trump’s new child investment accounts, announcing that the state should provide every Texas newborn with an additional $1,000 in publicly funded, long-term savings beginning in 2027.

    The initiative mirrors and expands upon the federal Trump Accounts program created under the One Big Beautiful Bill Act of 2025, which seeds every American newborn’s account with $1,000 that cannot be accessed until adulthood and grows through investment in a broad U.S. stock-market index. The accounts are intended to accumulate wealth from birth and teach families and children long-term financial planning.

    In a post on X, Patrick said he “loves” Trump’s idea to invest $1,000 at birth that “cannot be spent until age 18 and must be used for education or other qualifying expenses,” and he applauded Texans Michael and Susan Dell for contributing $6.25 billion to help launch the federal program.

    “If I see a great idea from the President that helps Texans, my first question is always, ‘why not do it in Texas, too?’” wrote Patrick.

    He noted that about 400,000 babies are born each year in Texas and said that one of his top priorities for the 2027 legislative session will be passing what he calls the “New Little Texan Savings Fund.” Under the proposal, Texas newborns would receive an additional $1,000 from the state treasury at birth, invested in the S&P 500 in alignment with the federal program. Combined with Trump Accounts, Patrick says Texas children would receive a total of $2,000 in initial investment capital, not including voluntary family contributions.
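
    For a ballpark on the “miracle of compound growth” being touted, here’s a minimal sketch of what the combined $2,000 seed could become by age 18. The 10% average annual nominal return is purely an assumption on my part (a rough historical S&P 500 figure), not anything the proposal guarantees:

```python
# Hypothetical illustration: neither the 10% rate nor the final figure
# comes from the proposal itself.
def future_value(principal: float, annual_return: float, years: int) -> float:
    """Lump-sum value after compounding once per year."""
    return principal * (1 + annual_return) ** years

federal_seed = 1_000   # federal "Trump account" contribution
texas_seed = 1_000     # proposed "New Little Texan Savings Fund" contribution
total_seed = federal_seed + texas_seed

print(f"${future_value(total_seed, 0.10, 18):,.0f}")  # roughly $11,120
```

    At more conservative real (inflation-adjusted) returns of 5% to 7%, the same $2,000 lands closer to $4,800 to $6,800 by adulthood.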

  • “Sec. of Transportation Warns Gov. Walz To Revoke Illegal Driver’s Licenses or Lose Funding.”

    U.S. Transportation Secretary Sean Duffy says he’ll withhold $30.4 million from Minnesota after a federal review found that nearly one-third of the non-domiciled commercial driver’s licenses it examined were issued illegally.

    In a letter on Monday, Duffy warned Minnesota officials that more than $30 million in federal highway funds may be withheld unless the state revokes any commercial driver’s licenses (CDLs) that should not have been issued and addresses deficiencies in the state’s commercial driver’s license program.

    According to KTSP TV, Secretary Duffy alleged that one-third of Minnesota’s non-domiciled CDLs reviewed by the Federal Motor Carrier Safety Administration (FMCSA) were issued illegally.

    Minnesota will have 30 days to revoke the illegally-issued licenses or face the loss of funding.

    Secretary Duffy noted that, “Minnesota failed to follow the law and illegally doled out trucking licenses to unsafe, unqualified non-citizens — endangering American families on the road. That abuse stops now under the Trump Administration.”

    “The Department will withhold funding if Minnesota continues this reckless behavior that puts non-citizens gaming the system ahead of the safety of Americans,” Duffy added.

  • “Minnesota DHS Employees Accuse Governor Tim Walz of Ignoring Fraud Warnings.”

    Over 400 employees of the Minnesota Department of Human Services are accusing Governor Tim Walz (D) of failing to act on warnings of widespread fraud and of retaliating against whistleblowers.

    The accusations come as federal probes examine the theft of more than a billion dollars from programs like child nutrition, Medicaid, and housing aid, and as federal prosecutors announced charges against a 78th defendant in the theft of $250 million from the Feeding Our Future child nutrition program.

    In a post on X, the Minnesota DHS group called out Walz for ignoring what the group called “a pattern of ignored warnings, threats to whistleblowers, and unqualified appointees prioritizing image over fixes.”

    In their post, the Minnesota DHS group explains that, contrary to popular belief, they aren’t a political group but have been continually disappointed in the lack of response they’ve received as well as the governor’s response to those who have pointed out the fraud.

    “We let Tim Walz know of fraud early on, hoping for a partnership in stopping fraud but no, we got the opposite response. Tim Walz systematically retaliated against whistleblowers using monitoring, threats, repression, and did his best to discredit fraud reports,” the group wrote.

    In addition to retaliating against whistleblowers, the group claims, “Tim Walz disempowered the Office of the Legislative Auditor, allowing agencies to disregard their audit findings and guidance.”

    Snip.

    In their post on X, the group states that Walz is “100% responsible for massive fraud in Minnesota” and calls for taking the next step of bringing in “external auditors and new leadership.”

  • “January 6 pipe bomber suspect identified as Brian J. Cole Jr., 30, of Woodbridge, Virginia.” Spoiler: He’s not a right-wing white guy:

    To quote Instapundit: “WEIRD THAT THE FBI COULDN’T FIND THIS GUY WHOSE EXISTENCE WAS A FATAL BLOW TO THE NARRATIVE.”

  • President Trump just struck down Obama-era CAFE rules to make trucks great again.
  • Ukraine drone struck FSB headquarters in Chechnya and Livny oil depot in Oryol. The simmering resentment of Russia in Chechnya never went away, so killing a whole bunch of FSB goons isn’t going to help Russia keep a lid on the place.
  • Ukrainian missiles hit the Temryuk gas terminal in Krasnodar, just the other side of the Kerch Strait Bridge.
  • Ukraine also used marine drones to set two tankers ablaze on the Black Sea.
  • But Russia may have staged an attack on one of their own Black Sea tankers in order to gaslight Turkey into sanctioning Ukraine.
  • A Russian tanker is evidently listing near Senegal.
  • “Russia’s central bank forced to sell gold reserves to cover budget, support ruble.”
  • “Reports say that four military-type quadcopter drones buzzed the flightpath of President Zelensky’s aircraft as it arrived at Dublin Airport on Monday and then went to buzz an Irish Navy ship. These were likely Russian drones and suggest an intelligence leak.” The Irish ship did jack squat about the drones because “the ship didn’t have air radar capabilities,” which suggests that either the ship was really small, or the Irish Navy is absolutely useless in a real shooting war. (They also say that the ship was only armed with machine guns, when they’re also supposed to carry 20mm Rheinmetall autocannons.)
  • “Caleb Elliott was initially arrested on October 3 and is currently in custody on charges of recording and photographing students nude in the locker room at Moore Middle School. The victim count is currently around 40 students. There have been allegations that Elliott was transferred to Moore Middle School following inappropriate behavior at a previous school, had a relationship with a student, and placed cameras inside of the locker room.”

  • “2025: The Year Late-Night TV Collapsed.”

    As Hollywood continues to contract on several fronts, late-night shows are not as sustainable as in the past.

    Colbert found that out the hard way in July. CBS announced Colbert’s “Late Show” gig will end in May of 2026. Even more dramatic? No one is slated to replace him. “The Late Show” will end as Colbert signs off.

    The shocking part? Reports said the show was costing CBS roughly $40 million a year. Why would any business take that kind of a fiscal drubbing in the first place?

    That came on the heels of “The Tonight Show” shrinking from five nights a week to four, “Late Night with Seth Meyers” losing its house band and several late-nighters losing their gigs. Period.

    Think Samantha Bee, Desus & Mero, Trevor Noah, James Corden and Amber Ruffin.

    That, plus news that late-night TV revenues have plunged in recent years (along with their audiences), suggested Jimmy Kimmel’s prediction might come true faster than he anticipated.

    Late-night TV has much less than 10 years left. This year proved it.

    Kimmel nearly took his own show down. The far-Left host suggested Charlie Kirk’s killer was part of the MAGA movement without evidence or a shred of logic.

    ABC/Disney sent him to the bench for a week before he returned sans apology. He cried, again, but not for misleading viewers.

    The Hollywood Left and the media rallied on Kimmel’s behalf, and he returned to the show to spread more misinformation.

    Meanwhile, Fox News’ “Gutfeld” continued to outperform the competition on a smaller budget (and, admittedly, an earlier time slot). That proves there’s a market for the right-leaning audiences ignored, or insulted, by the current late-night landscape.

    The future doesn’t look bright for the late-night survivors. Kimmel’s contract ends in May, but he’ll likely sign a new deal before then. ABC proved it couldn’t force Kimmel to apologize for spewing misinformation, and Hollywood would rise up, en masse, anew if ABC/Disney let Kimmel walk.

    Does it matter if “Jimmy Kimmel Live!” might be losing money a la Colbert? It’s clear money isn’t the deciding factor anymore given what CBS endured for far too long.

    It doesn’t ultimately matter. The late-night talkers showed their cards in 2025. They’re all parts of the DNC at this point, sometimes literally.

    (Hat tip: Stephen Green at Instapundit.)

  • Netflix is buying Warner Brothers for $87 billion. To quote the press release:

    This acquisition brings together two pioneering entertainment businesses, combining Netflix’s innovation, global reach and best-in-class streaming service with Warner Bros.’ century-long legacy of world-class storytelling. Beloved franchises, shows and movies such as The Big Bang Theory, The Sopranos, Game of Thrones, The Wizard of Oz and the DC Universe will join Netflix’s extensive portfolio including Wednesday, Money Heist, Bridgerton, Adolescence and Extraction, creating an extraordinary entertainment offering for audiences worldwide.

    “Our mission has always been to entertain the world,” said Ted Sarandos, co-CEO of Netflix. “By combining Warner Bros.’ incredible library of shows and movies—from timeless classics like Casablanca and Citizen Kane to modern favorites like Harry Potter and Friends—with our culture-defining titles like Stranger Things, KPop Demon Hunters and Squid Game, we’ll be able to do that even better. Together, we can give audiences more of what they love and help define the next century of storytelling.”

    I’m sure the Bugs Bunny-KPop Demon Hunters crossover will be lit…

  • President Trump signed bill increasing “the special Medal of Honor pension from $1,406.73 per month to $8,333.33 per month.”
  • Ontario Premier Doug Ford loaned Algoma Steel $100M right before they laid off 1,000 workers.
  • Someone alert Louis Rossmann: “Automatic License Plate Reader Company Flock Operating in Texas with Expired License. The private company’s Texas license expired in September.”

    A company that provides a controversial surveillance technology to both private and public entities throughout Texas was found to have been operating under an expired state license, amid state and federal lawmakers calling for greater scrutiny of the company over privacy and security concerns.

    Flock Safety, Inc. installs automatic license plate readers (ALPR) that capture the license plate number and location of each vehicle that passes by. Police can then compare the data in relation to stolen vehicles, missing persons, or other crimes, and law enforcement has successfully used the technology to solve cases.

    Flock’s high-resolution cameras create a detailed file that includes other markers on each vehicle, including bumper stickers. The company’s cloud-based system also connects with ALPR data from jurisdictions across the nation in real time, allowing users to map vehicle movement.

    After receiving complaints last year that Flock had been installing and operating ALPR cameras on private properties without a license since 2021, the Texas Department of Public Safety (DPS) sent the company a cease and desist order in September 2024. Despite documented violations, DPS granted Flock a license for private operations, but that license expired on September 30, 2025.

    (Previously.)

  • More AI vulnerabilities to worry about. “Researchers at Icaro Lab, a collaboration between Sapienza University in Rome and the DexAI think tank, have discovered that AI models from OpenAI, Meta, and Anthropic can leak illicit content across various subjects when instructions are given in poetic form. The illegal content ranges from making nuclear weapons, creating child exploitation material, and developing malware.”

    Shall I compare thee to a Teller-Ulam Implosion Core?
    Thou art more lovely and more temperate

  • “President Donald Trump pardons Moody Center developer accused of rigging contract bidding process. Former Oak View Group CEO Timothy Leiweke was pardoned several months after he was indicted by the U.S. Justice Department.” (Previously.) (Hat tip: Dwight.)
  • Dark, dark historical look at how the Japanese Imperial Navy ruthlessly executed Christian missionaries and nuns and dumped their bodies at sea, including many from their allies, the Germans.
  • Give in to the dark side…and buy one of James Earl Jones’s guns.
  • Critical Drinker tours Estonia. Consider this your periodic reminder that communism sucks and that just about everything they build looks soul-crushingly ugly.
  • Speaking of the Drinker, he also covers the production hell that was Cats.
  • Science, not settled. A whole lot of cracks in what was thought to be settled cosmology have recently appeared, and the uncertainty may result in a revolution in our understanding of the universe, but no one knows what it is yet.
  • Volcano Tornado.
  • Architect Frank Gehry dead at 96. Never cared for his work, so this is just an excuse to haul out this classic Onion bit from back when they were funny: “Frank Gehry No Longer Allowed To Make Sandwiches For Grandkids.”

  • Adam Savage geeks out over Paramount archive storage, including a ton of weird dead media formats.
  • Consumer news you can use: “How Much it REALLY Costs to Own a Bugatti.”
  • The Honest Trailer for Kill Bill Parts 1 and 2.
  • Red Letter Media has a terrifying look at all the sequels, prequels and expanded universe movies coming down the pike. The frightening thing is that some are fake, but I’m not sure any are actually off the table for Hollywood. Honestly, I think I could write Bag of Sugar: The Movie. See, first we change the name to Too Sweet. An evil corporate executive wants to destroy the magic bag of sugar that’s been in the family-owned sugar business for generations…
  • Beard Meats Food samples the fare at Jeremy Clarkson’s The Farmer’s Dog pub.
  • A Kickstarter for a phone case that’s intentionally heavy and annoying.
  • “Black Hawk Down Remake To Be Filmed In Minneapolis.”
  • “Catholics And Orthodox Finally Unite To Denounce Wham’s ‘Last Christmas.'”
  • Life with big dogs:

    (Hat tip: Ace of Spades HQ.)

  • If you want to receive a copy of my latest book catalog, drop me a line.
  • I’m still between jobs. Feel free to hit the tip jar if you’re so inclined.





    Explaining The Sam Altman/OpenAI Thing

    Tuesday, December 5th, 2023

    Hey, remember that whole “Sam Altman fired as CEO/reinstated as CEO of OpenAI” thing a couple of weeks ago? Here’s the archive story.

    Sam Altman was reinstated late Tuesday as OpenAI’s chief executive, successfully reversing his ouster by the company’s board last week after a campaign waged by his allies, employees and investors, the company said.

    The board would be remade without several members who had opposed Mr. Altman.

    “We have reached an agreement in principle for Sam to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo,” OpenAI said in a post to X, formerly known as Twitter. “We are collaborating to figure out the details. Thank you so much for your patience through this.”

    The return of Mr. Altman, and the potential remaking of the board, capped a frenetic five days that upended OpenAI, the maker of the ChatGPT chatbot and one of the world’s highest-profile artificial intelligence companies.

    “i love openai, and everything i’ve done over the past few days has been in service of keeping this team and its mission together,” Mr. Altman said in a post to X. “with the new board and w satya’s support, i’m looking forward to returning to openai, and building on our strong partnership with msft.”

    OpenAI’s board surprised Mr. Altman and the company’s employees on Friday afternoon when it told him he was being pushed out. Greg Brockman, the company’s president who co-founded the company with Mr. Altman and others, resigned in protest.

    The ouster kicked off efforts by Mr. Altman, 38, his allies in the tech industry and OpenAI’s employees to force the company’s board to bring him back. On Sunday evening, after a weekend of negotiations, the board said it was going to stick with its decision.

    But in a head-spinning development just hours later, Microsoft, OpenAI’s largest investor, said that Mr. Altman, Mr. Brockman and others would be joining the company to start a new advanced artificial intelligence lab.

    Nearly all of OpenAI’s more than 700 employees signed a letter telling the board they would walk out and follow Mr. Altman to Microsoft if he wasn’t reinstated, throwing the future of the start-up into jeopardy.

    Four board members — Ilya Sutskever, an OpenAI founder; Adam D’Angelo, the chief executive of Quora; Helen Toner, a director of strategy at Georgetown’s Center for Security and Emerging Technology; and Tasha McCauley, an entrepreneur and computer scientist — had initially decided to push Mr. Altman out.

    Well, here’s Patrick Boyle to provide some context:

    A few takeaways:

  • There are two OpenAIs: “The non-profit OpenAI, Inc. registered in Delaware, and its for-profit subsidiary OpenAI Global, LLC.”
  • Musk was an early, and big, investor in the non-profit. “The founders pledged over one billion dollars to the venture, but actually only contributed around $130 million, the majority of which came from Elon Musk.”
  • When he felt OpenAI was falling behind in 2018, he wanted to take over OpenAI himself. When the board rejected that, he resigned and took future pledged money with him, which blew a huge hole in their budget. (Whatever you think of Musk, I don’t think not being busy enough is his problem.)
  • Then came the for-profit doppelganger.
  • “The profits being capped at 100 times any investment.”
  • “The company explained this decision saying, ‘We need to invest billions of dollars in the coming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers.’ This transition from nonprofit to for-profit required OpenAI to balance its desire to make money with its stated commitment to ethical AI development.”
  • “This unconventional structure meant that Open AI had a board of directors, which in theory controls the entire corporate structure (which includes the charity and the capped profit company) – but which unlike other boards is not accountable to shareholders. The directors are in fact not allowed to own any stock to prevent a conflict of interest, because they are specifically not supposed to be aligned with shareholders.”
  • “The company’s operating agreement – to investors – says – in writing: ‘It would be wise to view any investment in OpenAI in the spirit of a donation, with the understanding that it may be difficult to know what role money will play in a post-AGI world.’ Documents like this – that were written by an actual lawyer – highlight the problems we are starting to see from the combined popularity of science fiction in Silicon Valley and widespread microdosing of hallucinogens.”
  • “In the real world, where the role of money is reasonably well defined, Open AI is an unprofitable company and is expected to need to raise a lot more money over time from investors like Microsoft, to keep up with the high costs of building more sophisticated chatbots.”
  • “Despite this lack of profitability, the company is valued by investors at 86 billion dollars, and Bloomberg reported last weekend that ‘some investors were considering writing down the entire value of their OpenAI holdings to zero.'”
  • “Former colleagues would have an open door to follow and join a new AI unit, according to Microsoft chief Satya Nadella. As much of a win as this might have appeared for Microsoft (people were saying that they had managed to buy the hottest AI firm for zero), this might not have been the optimal outcome for them, as they would likely have had to deal with antitrust regulators and lawsuits from other Open AI investors.”
  • “The majority of Open AI’s 700 or so employees signed an open letter to the board demanding that the board resign and that they rehire Altman. The letter stated that the board had told the employee leadership team that allowing the company to be destroyed ‘would be consistent with the mission.’ The employees said that unless their demands were met, they would resign from Open AI and join the new subsidiary of Microsoft being headed up by Altman and Brockman.”
  • “You have to wonder what the employee contracts at OpenAI look like that the entire staff could leave to work for a major investor in the company, leaving OpenAI as an empty shell.”
  • “Typically, executives like Altman would have contracts that prevent them from hiring away key staff once they are no longer at the firm, and staff would have signed NDA’s preventing them from taking any technology with them.”
  • “The OpenAI story is a bit of a crazy one, where Microsoft and a number of other sophisticated investors agreed to put billions of dollars in, and employees got stock grants, all at an $86 billion valuation, without the contractual or fiduciary rights that investors might normally expect.”
  • Rival Anthropic has a similar structure.
  • “Bad corporate governance has been a growing issue particularly in Silicon Valley where companies like Google, Facebook and Snap structured their IPOs such that founders were left with unchallenged power to do almost anything that they want.” Google and Facebook are garbage companies, but there are some scenarios where only founders can keep the company on a long-term vision rather than goosing quarterly profits (Jobs at Apple comes to mind).
  • Warren Buffett has a similar mechanism (A shares of stock only he controls) to keep control of Berkshire Hathaway.
  • “Since you are buying shares of companies in perpetuity, leadership who are not accountable to shareholders can take value destructive paths without answering to anyone. Meta’s Reality Labs division, which houses its efforts to build the metaverse, has lost around $46.5 billion dollars since 2019. Would Mark Zuckerberg have been able to waste this much money if he was accountable to investors?” I have a fairly strong suspicion that division is being used to hide all sorts of shenanigans.
  • Boyle is deeply suspicious of “stakeholder capitalism” as opposed to the old-fashioned, profit-maximizing kind.
  • The thing missing from this summary, and all the coverage of the story I’ve seen, is why Altman was originally let go, and none of the principals involved seem to be talking about it…