
Astral Codex Ten Podcast

By: Jeremiah

About this listen

The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It's just me reading Scott Alexander's blog posts.
Episodes
  • Mantic Monday: Groundhog Day
    Apr 2 2026
    Having Your Own Government Try To Destroy You Is (At Least Temporarily) Good For Business

    On Friday, the Pentagon declared AI company Anthropic a "supply chain risk", a designation never before given to an American company. This unprecedented move was seen as an attempt to punish, and perhaps destroy, the company. How effective was it?

    Anthropic isn't publicly traded, so we turn to the prediction markets. Ventuals.com has a "perpetual future" on Anthropic stock, a complicated instrument attempting to track the company's valuation, to be resolved at the IPO. Here's what they've got:

    https://www.astralcodexten.com/p/mantic-monday-groundhog-day

    31 mins
  • "All Lawful Use": Much More Than You Wanted To Know
    Apr 2 2026

    Last Friday, Secretary of War Pete Hegseth declared AI company Anthropic a "supply chain risk", the first time this designation has ever been applied to a US company. The trigger for the move was Anthropic's refusal to allow the Department of War to use their AIs for mass surveillance and autonomous weapons.

    A few hours later, Hegseth and Sam Altman declared an agreement-in-principle for OpenAI's models to be used in the niche vacated by Anthropic. Altman stated that he had received guarantees that OpenAI's models wouldn't be used for mass surveillance or autonomous weapons either, but given Hegseth's unwillingness to concede these points with Anthropic, observers speculated that the safeguards in Altman's contract must be weaker or, in a worst-case scenario, completely toothless.

    The debate centers on the Department of War's demand that AIs be permitted for "all lawful use". Anthropic worried that mass surveillance and autonomous weaponry would de facto fall in this category; Hegseth and Altman have tried to reassure the public that they won't, and the parts of their agreement that have leaked to the public cite the statutes that Altman expects to constrain this category. Altman's initial statement seemed to suggest additional prohibitions, but on a closer read, it provides little tangible evidence of meaningful further restrictions.

    Some alert ACX readers have done a deep dive into national security law to try to untangle the situation. Their conclusion mirrors that of Anthropic and the majority of Twitter commenters: this is not enough. Current laws against domestic mass surveillance and autonomous weapons have wide loopholes in practice. Further, many of the rules which do exist can be changed by the Department of War at any time. Although OpenAI's national security lead said that "we intended [the phrase 'all lawful use'] to mean [according to the law] at the time the contract is signed", this is not how contract law usually works, and not how the provision is likely to be enforced. Therefore, these guarantees are not helpful.

    To learn more about the details, let's look at the law:

    https://www.astralcodexten.com/p/all-lawful-use-much-more-than-you

    20 mins
  • Next-Token Predictor Is An AI's Job, Not Its Species
    Apr 2 2026

    I.

    In The Argument, Kelsey Piper gives a good description of the ways that AIs are more than just "next-token predictors" or "stochastic parrots" - for example, they also use fine-tuning and RLHF. But commenters, while appreciating the subtleties she introduces, object that these are still just extra layers on top of a machine that basically runs on next-token prediction.

    I want to approach this from a different direction. I think overemphasizing next-token prediction is a confusion of levels. On the levels where AI is a next-token predictor, you are also a next-token (technically: next-sense-datum) predictor. On the levels where you're not a next-token predictor, AI isn't one either.
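    For readers unfamiliar with the term, "next-token prediction" just means: given the tokens so far, output a probability distribution over the next token, then pick one and repeat. Here is a minimal toy sketch of that loop; the bigram probability table is invented purely for illustration (a real model learns billions of parameters rather than a lookup table), but the decoding loop has the same shape.

    ```python
    # Toy "language model": a hand-written bigram table mapping the most
    # recent token to a probability distribution over the next token.
    # These probabilities are made up for illustration only.
    BIGRAMS = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.7, "ran": 0.3},
        "sat": {"down": 1.0},
        "dog": {"ran": 1.0},
        "ran": {"away": 1.0},
    }

    def next_token(tokens):
        """Greedily pick the most probable next token, or None at a dead end."""
        dist = BIGRAMS.get(tokens[-1], {})
        if not dist:
            return None
        return max(dist, key=dist.get)

    def generate(prompt, max_tokens=5):
        """Repeat next-token prediction until a dead end or the length cap."""
        tokens = list(prompt)
        for _ in range(max_tokens):
            tok = next_token(tokens)
            if tok is None:
                break
            tokens.append(tok)
        return tokens

    print(generate(["the"]))  # ['the', 'cat', 'sat', 'down']
    ```

    Everything else in a modern system (fine-tuning, RLHF) reshapes which distribution `next_token` returns, not the predict-then-append loop itself - which is why the post argues the interesting question is what happens at the other levels.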

    16 mins