MANIFOLD
Will we conclude Tesla launched level 4 robotaxis in summer 2025?
238 traders · Ṁ63k volume · closes Sep 1 · 6% chance

Elon Musk has been very explicit in promising a robotaxi launch in Austin in June with unsupervised full self-driving (FSD). We'll give him some leeway on the timing and say this counts as a YES if it happens by the end of August.

As of April 2025, Tesla seems to be testing this with employees and with supervised FSD and doubling down on the public Austin launch.

PS: A big monkey wrench no one anticipated when we created this market is how to treat the passenger-seat safety monitors. See FAQ9 for how we're trying to handle that in a principled way. Tesla is very polarizing and I know it's "obvious" to one side that safety monitors = "supervised" and that it's equally obvious to the other side that the driver's seat being empty is what matters. I can't emphasize enough how not obvious any of this is. At least so far, speaking now in August 2025.

FAQ

1. Does it have to be a public launch?

Yes, but we won't quibble about waitlists. As long as even 10 non-handpicked members of the public have used the service by the end of August, that's a YES. Also if there's a waitlist, anyone has to be able to get on it and there has to be intent to scale up. In other words, Tesla robotaxis have to be actually becoming a thing, with summer 2025 as when it started.

If it's invite-only and Tesla is hand-picking people, that's not a public launch. If it's viral-style invites with exponential growth from the start, that's likely to be within the spirit of a public launch.

A potential litmus test is whether serious journalists and Tesla haters end up able to try the service.

UPDATE: We're deeming this to be satisfied.

2. What if there's a human backup driver in the driver's seat?

This importantly does not count. That's supervised FSD.

3. But what if the backup driver never actually intervenes?

Compare to Waymo, which goes millions of miles between [injury-causing] incidents. If there's a backup driver we're going to presume that it's because interventions are still needed, even if rarely.

4. What if it's only available for certain fixed routes?

That would resolve NO. It has to be available on unrestricted public roads [restrictions like no highways are OK] and you have to be able to choose an arbitrary destination. I.e., it has to count as a taxi service.

5. What if it's only available in a certain neighborhood?

This we'll allow. It just has to be a big enough neighborhood that it makes sense to use a taxi. Basically anything that isn't a drastic restriction of the environment.

6. What if they drop the robotaxi part but roll out unsupervised FSD to Tesla owners?

This is unlikely but if this were level 4+ autonomy where you could send your car by itself to pick up a friend, we'd call that a YES per the spirit of the question.

7. What about level 3 autonomy?

Level 3 means you don't have to actively supervise the driving (like you can read a book in the driver's seat) as long as you're available to immediately take over when the car beeps at you. This would be tantalizingly close and a very big deal but is ultimately a NO. My reason to be picky about this is that a big part of the spirit of the question is whether Tesla will catch up to Waymo, technologically if not in scale at first.

8. What about tele-operation?

The short answer is that that's not level 4 autonomy so that would resolve NO for this market. This is a common misconception about Waymo's phone-a-human feature. It's not remotely (ha) like a human with a VR headset steering and braking. If that ever happened it would count as a disengagement and have to be reported. See Waymo's blog post with examples and screencaps of the cars needing remote assistance.

To get technical about the boundary between a remote human giving guidance to the car vs remotely operating it, grep "remote assistance" in Waymo's advice letter filed with the California Public Utilities Commission last month. Excerpt:

The Waymo AV [autonomous vehicle] sometimes reaches out to Waymo Remote Assistance for additional information to contextualize its environment. The Waymo Remote Assistance team supports the Waymo AV with information and suggestions [...] Assistance is designed to be provided quickly - in a matter of seconds - to help get the Waymo AV on its way with minimal delay. For a majority of requests that the Waymo AV makes during everyday driving, the Waymo AV is able to proceed driving autonomously on its own. In very limited circumstances such as to facilitate movement of the AV out of a freeway lane onto an adjacent shoulder, if possible, our Event Response agents are able to remotely move the Waymo AV under strict parameters, including at a very low speed over a very short distance.

Tentatively, Tesla needs to meet the bar for autonomy that Waymo has set. But if there are edge cases where Tesla is close enough in spirit, we can debate that in the comments.

9. What about human safety monitors in the passenger seat?

Oh geez, it's like Elon Musk is trolling us to maximize the ambiguity of these market resolutions. Tentatively (we'll keep discussing in the comments) my verdict on this question depends on whether the human safety monitor has to be eyes-on-the-road the whole time with their finger on a kill switch or emergency brake. If so, I believe that's still level 2 autonomy. Or sub-4 in any case.

See also FAQ3 for why this matters even if a kill switch is never actually used. We need there not only to be no actual disengagements but no counterfactual disengagements. Like imagine that these robotaxis would totally mow down a kid who ran into the road. That would mean a safety monitor with an emergency brake is necessary, even if no kids happen to jump in front of any robotaxis before this market closes. Waymo, per the definition of level 4 autonomy, does not have that kind of supervised self-driving.

10. Will we ultimately trust Tesla if it reports it's genuinely level 4?

I want to avoid this since I don't think Tesla has exactly earned our trust on this. I believe the truth will come out if we wait long enough, so that's what I'll be inclined to do. If the truth seems impossible for us to ascertain, we can consider resolve-to-PROB.

11. Will we trust government certification that it's level 4?

Yes, I think this is the right standard. Elon Musk said on 2025-07-09 that Tesla was waiting on regulatory approval for robotaxis in California and expected to launch in the Bay Area "in a month or two". I'm not sure what such approval implies about autonomy level but I expect it to be evidence in favor. (And if it starts to look like Musk was bullshitting, that would be evidence against.)

12. What if it's still ambiguous on August 31?

Then we'll extend the market close. The deadline for Tesla to meet the criteria for a launch is August 31 regardless. We just may need more time to determine, in retrospect, whether it counted by then. I suspect that with enough hindsight the ambiguity will resolve. Note in particular FAQ1, which says that Tesla robotaxis have to be becoming a thing (what "a thing" is is TBD but something about ubiquity and availability) with summer 2025 as when it started. Basically, we may need to look back on summer 2025 and decide whether that was a controlled demo, done before they actually had level 4 autonomy, or whether they had it and were just scaling up slowly and cautiously at first.

13. If safety monitors are still present, say, a year later, is there any way for this to resolve YES?

No, that's well past the point of presuming that Tesla had not achieved level 4 autonomy in summer 2025.

14. What if they ditch the safety monitors after August 31st but tele-operation is still a question mark?

We'll also need transparency about tele-operation and disengagements. If that doesn't happen soon after August 31 (definition of "soon" to be determined) then that too is a presumed NO.


Ask more clarifying questions! I'll be super transparent about my thinking and will make sure the resolution is fair if I have a conflict of interest due to my position in this market.

[Ignore any auto-generated clarifications below this line. I'll add to the FAQ as needed.]

  • Update 2025-11-01 (PST) (AI summary of creator comment): The creator is [tentatively] proposing a new necessary condition for YES resolution: the graph of driver-out miles (miles without a safety driver in the driver's seat) should go roughly exponential in the year following the initial launch. If the graph is flat or going down (as it may have done in October 2025), that would be a sufficient condition for NO resolution.

  • Update 2025-12-10 (PST) (AI summary of creator comment): The creator has indicated that Elon Musk's November 6th, 2025 statement ("Now that we believe we have full self-driving / autonomy solved, or within a few months of having unsupervised autonomy solved... We're on the cusp of that") appears to be an admission that the cars weren't level 4 in August 2025. The creator is open to counterarguments but views this as evidence against YES resolution.

  • Update 2025-12-10 (PST) (AI summary of creator comment): The creator clarified that presence of safety monitors alone is not dispositive for determining if the service meets level 4 autonomy. What matters is whether the safety monitor is necessary for safety (e.g., having their finger on a kill switch).

  • Additionally, if Tesla doesn't remove safety monitors until deploying a markedly bigger AI model, that would be evidence the previous AI model was not level 4 autonomous.

  • Update 2026-01-31 (PST) (AI summary of creator comment): The creator is proposing June 22, 2026 (one year after the robotaxi launch) as a firm deadline for Tesla to provide transparency about tele-operation and disengagements, and to demonstrate scaling. If Tesla has not achieved this by that date, the market will resolve NO.

This pins down the previously vague "soon" timeframe mentioned in FAQ14 regarding when transparency must be provided after August 31, 2025.

  • Update 2026-01-31 (PST) (AI summary of creator comment): The creator clarified that passenger-seat emergency stop buttons should be evaluated based on their function:

    • If the button is a real-time "hit the brakes we're gonna crash!" intervention button, this would indicate supervision that could rule out level 4 autonomy

    • If the button is a "stop requested as soon as safely possible" button (where the car remains in control until safely stopped), this would not rule out level 4 autonomy

This distinction applies to both Waymo (the benchmark) and Tesla. The creator emphasized that mere presence of a safety monitor doesn't rule out level 4 - what matters is whether there is supervision with the ability to intervene in real time.

More research and transparency from Tesla may be needed to determine which type of button the Tesla robotaxis have.


Update today! David Moss (the lidar salesman who achieved the first coast-to-coast trip with zero interventions) got a fully unsupervised robotaxi ride on attempt number 58. No chase car. He then booked a different ride with a different robotaxi, using a friend's account, and it was also fully unsupervised (modulo the open question of tele-operation).

So I'd say that's evidence that the removal of safety monitors is more than a one-off demo, which I'd begun to suspect it was. If scaling starts happening and private Teslas start hitting disengagement rates closer to 1-in-100k miles, then we may have to dive in on debating whether Tesla was at level 4 with the robotaxis on August 31. I see plenty of evidence against it, like what @ChristopherRandles has been saying about FSD versions, not to mention Elon Musk himself saying that unsupervised autonomy was still "a month away" in November. But we'll want to hear the Tesla bulls out.

Again, I think what we're waiting on now is scaling up to the point that tele-operation is implausible.

Finally, to review how Tesla could've pulled off the robotaxi service we've seen so far if they haven't been at level 4 all along, I believe it would be some combination of:

  1. Tele-operation (not to be confused with remote assistance, allowed for level 4)

  2. Safety monitors with physical kill switches

  3. Low enough mileage to not hit edge cases

The market description says we presume NO, not level 4 by August 31, 2025, if we don't have transparency about tele-operation and disengagements "soon" after August 31. To pin that down, how do people feel about June 22, 2026 -- one year after the robotaxi launch?

@dreev Re: Tele-operation. What's the point of having a safety passenger if Tesla had people driving remotely?

@MarkosGiannopoulos Network lag? Or for plausible deniability, like if they got caught tele-operating they could say "yes, well, we never said this was fully unsupervised, did we? what? Musk said that? well he didn't mean fully literally unsupervised". Or just redundancy, I guess. Key word "guess" though. I'm just super 🤨 based on what we've seen so far. I'm not sure what we're seeing today from Tesla is at parity with where Waymo was in 2017. Not to say that means Tesla will need 9+ years to catch Waymo.

Anyway, I want to make sure I'm not moving goalposts. Always being like "but what if they're tele-operated?" is in danger of doing that. On the other hand, if Tesla doesn't scale this up and if private Tesla owners continue to not be able to safely read a book in the driver's seat then at some point we're forced to conclude that Tesla has not cracked level 4. Are you comfortable committing to June 22, 2026, as a line in the sand for this market? As in, if we have neither transparency about tele-operation/disengagements nor sufficient scale that such transparency is irrelevant then on June 22 we presume a NO for this market.

@dreev I don't mind the deadline, but how do you describe "we have neither transparency about tele-operation/disengagements nor sufficient scale that such transparency is irrelevant"?

I can safely say
a) Tesla will not disclose any tele-operation because it's completely against how they describe the service. It would be completely disastrous for them.
b) Tesla already posts some safety statistics for FSD, but they will only post Robotaxi-specific safety data when they look good. Their mode of operation is only to post negative-looking data when they are legally required.
c) As for private customers, it's illegal to read a book while driving, so you won't see it happening at a scale sufficient to convince anyone that unsupervised FSD is real.

What I believe remains (that can both happen and be verified) to settle this, i.e., clear facts rather than quibbling over a specific word of a Musk statement/tweet:
a) Tesla gets a licence from the Texas DMV sometime in spring, according to the new law for autonomous vehicles
b) Tesla has 10+ (20?30?) unsupervised cars (current count is 3) in Austin doing paid rides
c) Tesla starts the California DMV process to get an autonomous license

@MarkosGiannopoulos Good points! I think I disagree with (c) in particular, in an important sense. Namely, people like David Moss and Dirty Tesla are doing honest reporting on exactly how good FSD is getting. Plus the crowdsourced data on disengagements. I think we'll be able to have a decent sense of when FSD is approaching the threshold where it would be safe to read a book, were it legal. (And I kind of expect people to start doing it, legal or not, when they're convinced it's safe enough.)

@dreev I am not following this market anymore so I did not read all the comments and maybe you're already aware, but I think it's worth pointing out that David Moss was caught lying about using a second account to get his second robotaxi ride. Note the screen saying "welcome David" on his second ride. (His friend is named James.)

https://x.com/macman222/status/2016958353549726180?s=46

>"Tesla has 10+ (20?30?) unsupervised cars (current count is 3) in Austin doing paid rides"

If 30 unsupervised cars require 30 teledrivers, then I think this bombshell news would break in some manner. There would be no competitive advantage over traditional taxis if this were to remain the case. I don't expect this to be the case, as Tesla wouldn't be pushing ahead so much if they were this far from level 4.

If 30 unsupervised cars require 3 teledrivers to handle the cases where the software is least sure what to do, and the plan is to reduce the ratio of teledrivers to cars over time, then that seems much harder to rule out and less likely to be revealed.

I don't think this is likely, as I don't think they're doing it for customer FSD, and customer FSD is getting considerable mileage without critical disengagements.

I am almost at the point of wanting to concede that the question should be resolved without much further evidence on teledriving, in order to bring attention back to whether Tesla was at level 4 in August 2025 and when this was launched. However, I should respect @dreev's wish to follow whatever course seems appropriate to judge the question.

@dreev "I think we'll be able to have a decent sense of when FSD is approaching the threshold where it would be safe to read a book, were it legal." - But you will still have people arguing that the Robotaxi in June '25 had an older/less capable version of FSD than what private customers will be running in 2026.

On the FSD version of Robotaxi
"A variant of the software that's used for the robotaxis service was shipped to customers with v14 and customers saw a huge jump in performance."- Ashok Elluswamy, Q4/2025 earnings call

This confirms Tesla has at least two branches of FSD (I think HW3 and Cybertruck customer cars run slightly different versions as well), and the timing of releasing customer version 14 is not very helpful for this market. Robotaxi was much earlier (June) running on a "v14-like" FSD version.

@MarkosGiannopoulos I suspect we are both to some extent seeing what we want to see to argue our case here.

To me, is there 'a variant of the software'? Well, yes: the robotaxi has to integrate with the system for requesting a robotaxi, with the details of the journey being passed to the assigned taxi, and with call-centre assistance. That makes it 'a variant' of the customer-owned FSD cars' software. Into the robotaxi software (or the customer FSD command-and-control structure), I would expect them to be able to add any version of the driving software. I wouldn't expect there to be completely different versions of the driving software.

If they are releasing new driving-software versions to customer FSD every couple of weeks, and sometimes more frequently than that, I don't see the robotaxi software being noticeably more than 4 weeks ahead of customer FSD, because that would mean they are updating the FSD software to versions that are already out of date and could be skipped.

If the robotaxi software is only ~ 4 weeks or less ahead of customer FSD and monitorless driving started 22 Jan 2026 then it would appear that that software wasn't ready for real world testing at 31 Aug 2025.

So to me it looks like the driving software wasn't ready by August 2025 and wasn't launched until 22 Jan 2026. (This assumes they have reached the level now, which might yet be disproved, though I don't particularly expect that; I hope it goes well.)

@ChristopherRandles "I don't see the robotaxi software being noticeably more than 4 weeks ahead of customer FSD"
You have no factual base for this assertion.

Excellent debate; thanks again for this, both of you. My thinking is that whenever we hit cases like this where we have to decide how much benefit of the doubt to give to Tesla (and agreed that this is often a crux of these disagreements) then the answer is to wait and hope the truth comes to light and makes the debate moot.

Cumulative miles for Robotaxi, plus some material for new Manifold markets, in the investor deck: https://assets-ir.tesla.com/tesla-contents/IR/TSLA-Q4-2025-Update.pdf

Gemini calculated the miles added each month:
Jul: +3,200
Aug: +14,300
Sep: +101,500
Oct: +144,500
Nov: +188,900
Dec: +206,300

Or we could calculate the number we really want: paying-customer taxi miles with no safety monitor present, which was 0 until 21 Jan 2026 (or ~26 Jan 2026 if we insist on no chase car). That's 4.6 months with no growth.
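As a sanity check, the monthly deltas quoted above (themselves read off Tesla's chart, so approximate) can be turned back into the cumulative curve Tesla presents with a running sum. A minimal sketch:

```python
from itertools import accumulate

# Monthly robotaxi miles as estimated in the comment above (approximate,
# read off Tesla's Q4-2025 investor-deck chart)
monthly = {"Jul": 3_200, "Aug": 14_300, "Sep": 101_500,
           "Oct": 144_500, "Nov": 188_900, "Dec": 206_300}

# Tesla's chart shows cumulative miles; monthly miles are successive
# differences, so running sums recover the curve (relative to end of June)
cumulative = dict(zip(monthly, accumulate(monthly.values())))
print(cumulative["Dec"])  # total since end of June: 658700
```

Note the curve is still accelerating month over month, which is exactly why lumping supervised and unsupervised miles together matters so much for interpretation.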

So when was it capable of level 4 standard?
And when did they launch this level 4 standard robotaxi service?

General reminder to everyone to scout-mindset-maxx so we maintain a spirit of truth-seeking cooperation. In particular, I propose that all reasonable participants in this excellent ongoing debate here should reject both of these extremes:

  1. Tesla's graph of robotaxi miles in which they lump supervised and unsupervised miles together, with no stats on disengagements even.

  2. Rejecting all miles with any form of supervision, even a chase car.

With that out of the way, let's try to get our heads around this. Apparently Tesla has 500 robotaxis across the Bay Area and Austin, and 50 in Austin.

Naively, that means 90% of the robotaxi miles Tesla's touting are normal Uber-style trips with the driver using FSD (possibly with impressively low disengagement rates but as far as I know Tesla isn't sharing that data).
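The "90%" figure above is just fleet arithmetic. A quick sketch, using the fleet counts quoted in this thread (treat them as rough estimates, and note the per-car-mileage assumption):

```python
# Back-of-envelope for the "90%" claim. Fleet counts are the ones
# quoted in the discussion above; treat them as rough estimates.
total_fleet = 500   # robotaxis across the Bay Area and Austin combined
austin_fleet = 50   # the only cars potentially without a safety driver

# Bay Area cars have a human in the driver's seat, so assuming similar
# per-car mileage, the driver-in-seat share of fleet miles is:
supervised_share = (total_fleet - austin_fleet) / total_fleet
print(f"{supervised_share:.0%}")  # → 90%
```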

I'm trying to get GPT to make a version of Tesla's mileage graph that incorporates all this. We have to make a lot of guesses but I'm thinking it's safe to say that Tesla's graph of cumulative and undifferentiated mileage is pretty much actively deceptive.

Again, I'd argue that the blue doesn't count. The orange is ambiguous. The red is currently a barely visible sliver on the bar for January.

I don't put that much more faith in this graph than in Tesla's version but if the Austin robotaxi mileage is going down, it would be consistent with the theory that Tesla has not yet fully cracked vision-only unsupervised self-driving and that Austin is more of a controlled demo made possible through a combination of tele-operation, supervisors with kill-switches, and low enough mileage to avoid too many edge cases.

PS: Further agonizing from me over on AGI Friday. Thanks again to @MarkosGiannopoulos especially for the productive debates here!

PPS: Remember the lidar salesman, David Moss, famous for the first zero-intervention coast-to-coast trip? He's a huge Tesla fan and is currently visiting Austin, Texas, determined to trial a fully unsupervised robotaxi ride. He's live-tweeting this and is up to 54 attempts with no luck so far. He even cancels when he gets matched with a car he knows from previous attempts has a safety monitor.

https://x.com/DavidMoss

A couple other things I'm noticing: (1) the safety monitors seem to be watching the road like hawks, and (2) many of his rides have the safety monitor in the driver's seat. I don't know if he's tracking the numbers on that but at one point he mentioned that 6 rides in a row were with a safety driver as opposed to just a safety monitor.

Anyway, I think this is another small Bayesian update toward the theory that the fully unsupervised robotaxi rides so far have been closer to a one-off demo. (It would be great to have another market on this!)

And even if we count rides with passenger-seat safety monitors as autonomous, Tesla's autonomous mileage could be going down for all we know, since we don't know what fraction of the Austin robotaxi rides have a safety driver vs safety monitor. (If anyone has ideas for getting harder data on this, I'd love to hear!)

@dreev "many of his rides have the safety monitor in the driver's seat." - Weather in Austin was not optimal the last week (Waymo was offline for 2-3 days also I believe). Tesla is being cautious here.

@MarkosGiannopoulos Oh, that's true, both Tesla robotaxis and Waymo were offline due to an ice storm. I think things were back to normal when David Moss had that string of 6 safety drivers: https://x.com/DavidMoss/status/2016316573342908541

But you're right that it could've been extra caution due to weather.

(In any case, this casts in a new light the derision directed at Waymo for being offline due to weather longer than Tesla was. Reminds me of the story of Waymos in San Francisco supposedly being stymied by a power outage. I wrote about that last month (see the section "No vision-only self-driving cars"), basically concluding that Tesla's self-driving had more problems with the power outage than Waymo did but Tesla's backup drivers let it hide that fact.)

@dreev Well, looks like the weather is better now and Mr Moss has had two unsupervised rides (and no chase cars this time, as Musk mentioned in yesterday's call) https://x.com/DavidMoss/status/2016936705090011573

@MarkosGiannopoulos Ah, thank you! I missed this comment when I added the update above. I was also watching David Moss's experiment with bated breath.

PS: Not quite fair to say "now that the weather is better". David Moss was trying for days before the storm and continued to try for days after it. He was trying his hardest to prove the Tesla haters wrong and it still took him 58 tries. He even canceled rides when he could see it was a car he already tried. So presumably 58 was a lower bound on the samples required to find an unsupervised robotaxi. Of course getting 2 in a row when he did finally succeed suggests Tesla is expanding the unsupervised rides now.

Not that these quibbles matter. Tesla started with a handful of unsupervised rides and that's now increasing. I guess the next milestone can be when it's a majority of rides.
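For what it's worth, the 58 attempts also give a crude base rate. A toy estimate, under the (shaky) assumption that each ride was an independent draw; he canceled cars he'd already seen, so this is only illustrative:

```python
# Toy model: if each ride independently has probability p of being
# unsupervised, a first success on attempt 58 gives the maximum-
# likelihood estimate p = 1/58. (Independence is an assumption, and a
# shaky one, since repeat cars were canceled.)
attempts = 58
p_hat = 1 / attempts
print(f"~{p_hat:.1%} of rides unsupervised")  # → ~1.7%
```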


The Robotaxi tracker now filters out cars that have not been noticed recently, so the more accurate Austin fleet size is currently 50.

Hopefully, there will be some concrete info in today's earnings call.

Safety monitors have been removed for some customers in Austin as of today https://x.com/aelluswamy/status/2014398853991301538

@MarkosGiannopoulos Ooh! Very cool. Of course, this being Elon Musk we'll just want to hold off on breaking out any champagne to make sure this isn't like the autonomous delivery, i.e., a one-off demo. And we've still got the looming question of tele-operation at this scale. When private Tesla owners are napping and reading books and getting killed at a rate no worse than once every 100M miles, then we'll know for sure this is real.

(What that will mean for resolving this market is another can of worms. I'm just not worrying too much because if there's no progress on that front by summer 2026 then we'll be at a pretty clear NO without having to dig deep on the question of where Tesla's tech was as of August 2025. I'm definitely still interested in hearing opinions and reasons that people may feel confident one way or another though. To me it still feels like a big question mark, though a steadily shrinking one the longer it takes. I guess it feels just barely possible that safety-monitor-free rides could suddenly start scaling up at a rate that it could start to feel unfair to resolve this market NO. But, ok, I said I wasn't going to agonize yet!)

@dreev "And we've still got the looming question of tele-operation at this scale." - What's your point for believing that Tesla does not have tele-operators driving the cars? At 50 cars? 100 cars? 500 cars?
You do not have "private Waymo owners napping in their cars", but you still trust that Waymo has an actual autonomous vehicle.

@MarkosGiannopoulos

What's your point for believing that Tesla does not have tele-operators driving the cars? At 50 cars? 100 cars? 500 cars?

Excellent question. I know I sound high on copium and seem to be unfairly moving goalposts when I suggest that Tesla's robotaxi launch might not count. When I wrote about this in June I subtitled the post "In which I grasp desperately at straws to disbelieve my eyes or at least Elon Musk's mouth", referring especially to the autonomous customer delivery. But that post sure is aging well so far. (It includes this prediction: "This was effectively a publicity stunt and no normal customers will be getting their cars delivered this way, not this summer anyway.")

Anyway, so asking me to draw my lines in the sand on this is perfect. Does 2500, the size of Waymo's fleet, sound reasonable?

I'm not sure how much we need to agonize about that threshold though. What will be nicely dispositive is seeing it work for privately owned Teslas. I promise not to accuse Tesla of tele-operating those!

I'm actually confused about possible worlds in which the Tesla robotaxis have scaled past 2500 cars but privately owned Teslas aren't even level 3. I guess I'll add that as another prediction, that that won't happen.

You do not have "private Waymo owners napping in their cars", but you still trust that Waymo has an actual autonomous vehicle.

Right, the difference between Waymo and Tesla is night-and-day on this. Waymo is impressively transparent, making all their data available, and as part of their permitting in California they detail exactly how their remote assistance system works. (Also I kind of do personally trust them; I'm friends with the coCEO.) See my AGI Friday post about this for a review of how we're as confident as we are that Waymo is actually autonomous.

Relatedly, check out the new post from Kelsey Piper this week on how we absolutely do know that Waymos are safer than human drivers.

Tesla is practically the opposite of all of that. In particular I think it's damning that they haven't applied for permits in places that require the kind of transparency we have from Waymo. As I commented today in our other debate thread, all of the following feel at least plausible:

  1. Tesla is there, they're just being very cautious with roll-out.

  2. Tesla is getting there, asymptotically, and these milestones are real.

  3. Tesla isn't close, if you didn't supervise FSD it would crash every ~13k miles, and Musk is stringing us along with rigged demos that they can spin as milestones.

I guess the last one is slightly harsher than I mean to be. The milestones are cool and I doubt any are totally fake. I believe they represent steady progress. Just that there's a plausible bear case where the milestones don't mean anything close to what Musk makes them out to mean.

PS: I'm referring to milestones like:

  1. The robotaxi launch with empty driver's seats

  2. The autonomous customer delivery

  3. Service area expansions

  4. Launching in additional cities with safety drivers

  5. A coast-to-coast trip with zero interventions

  6. Removing the safety monitors for some trips in Austin

@dreev "Does 2500, the size of Waymo's fleet, sound reasonable?" - So you are asking Tesla to compress Waymo's years of getting to 2500 cars into a fraction of that time? Is that a fair requirement?
Gemini responds like this on how long it took Waymo to build up its fleet:
Time to first 700 cars: 14 years (2009–2023)
Time to next 1,800 cars: ~18 months (2024–2026)
Digging a bit more: Waymo launched to the public (no NDA) and without a safety driver in October 2020. At the time, they had 300 cars.

"What will be nicely dispositive is seeing it work for privately owned Teslas." - There is no country in the world which allows you to nap while being in the driver's seat. So you need an alternative to that.

@MarkosGiannopoulos Waymo has more issues scaling up: the cars are specially adapted with expensive sensors, and expanding to new areas requires high-definition maps of each area. If we believe Tesla is not using high-definition maps, that ordinary new Teslas have all the sensors needed, and that they are not using teledriving, then the only restriction on a fast scale-up would be getting the necessary permits.

If Texas scales up to most of the state with 1000 cars quite quickly, while other states are slower and only start to have some monitor-less taxi rides when permits allow, might that be enough?

I doubt teledriving is an issue, as the owner population is getting good distances without critical disengagements. More data that may emerge later would be better, but I doubt it is a problem - it looks like we have more of a wait yet for resolution, for other reasons anyway.

ISTM that far more relevant to judging this claim is, firstly, whether they were ready in August and are just being ultra-cautious, requiring extra unnecessary development, or whether there was development between 1 Sept 2025 and 22 Jan 2026 that was needed in order to deploy without safety monitors; and secondly, what counts as the launch day.

They don't seem to have made much of a fuss about 22 Jan as a/the launch date; maybe that counts, to a small extent, against 22 Jan 2026 being the launch day. However, I don't think they were ready at the end of August, and the June date was the launch of a level 2 robotaxi service, so I am not seeing a better date. I'm inclined to suggest the software wasn't ready until ~December and the launch day was 22 Jan 2026. More details could emerge yet.

https://x.com/elonmusk/status/2014397578352226423
isn't much of an announcement?
https://techcrunch.com/2026/01/22/tesla-launches-robotaxi-rides-in-austin-with-no-human-safety-driver/
has the word "launch" in the title.

@ChristopherRandles The robotaxi service was launched in June with no person in the driver's seat from day 1 and a gradually relaxed set of safety measures as the quality of the service was validated. You might have been more convinced by a big launch of 100 cars, but a launch of 10 cars is still a launch. I.e., a start.

@MarkosGiannopoulos

asking to compress Waymo's years [...] Is that a fair requirement?

Waymo is 17 years old. Tesla introduced Autopilot in 2014, so I guess that's 12 years for Tesla. But I don't think the crux of our disagreements involves fairness to Tesla in how quickly they're catching up. We can pretty much all agree they're moving faster than Waymo, in the sense of being less than 5 years behind. At least I'll personally be surprised if I can't read a book in the driver's seat of a Tesla in 2031. Even without lidar, by then.

To review my prediction, and the motivation for this market:

Back in April 2025 I thought there was no way Tesla would be at driver-out autonomy in time for their planned Austin robotaxi launch in June. Even end of August was unrealistic, I was betting. I figured the launch just wouldn't happen. Then it did, so my credibility takes a hit there. But now it's January and the likelihood that the Austin launch was more controlled demo than real-world capability is growing. As I wrote last month (see "Prediction 2: No vision-only self-driving cars") I originally predicted "no level 4 without lidar etc" but I'm allowing myself the retroactive wiggle room that I meant to include "or at least another year or so of overall AI progress". I've never thought vision-only self-driving is impossible (humans do it!). But if Tesla pulls off vision-only level 4 self-driving by summer 2026, I'll be officially super wrong.

Back to privately owned Teslas, you're right to point out that Tesla's FSD might be safe to use unsupervised before it's legal to do so. I kind of expect the experiment will happen at that point, with Tesla owners spoofing Tesla's internal cameras or similar to get around the "watch the road" nagging. An interesting prediction we might disagree about: Conditional on 100 million miles of possibly illegally unsupervised FSD miles by August 31, 2026, will anyone die that way?

Or we can look at miles between interventions...

I'm happy to trust Tesla owners like the lidar salesman we've been talking about. YouTuber Dirty Tesla is another one I trust despite being an enormous Tesla fanboy. When he's comfortable reading a book in the driver's seat of his Tesla, that will be a big update for me. I believe the consensus today, including honest Tesla fans, is that Tesla FSD needs a safety-critical intervention every 13k miles, as a high upper bound, that being the much-vaunted record so far. 13k miles with zero interventions is extremely cool and impressive. Just that, without supervision (which is the whole point, that there not be supervision) it's not close to Waymo-level safety. Waymos go something like a million miles between at-fault crashes. For fully apples-to-apples comparison we need the hypothetical disengagement rate, like if a human had been monitoring the Waymo. That's tricky to estimate. You have to consider near misses, like the Waymo screwing up but then lucking out and avoiding a crash anyway, like by other cars taking evasive action. In any case, I think you have to make pretty weird assumptions to get it as low as 100k for miles between would-be disengagements for Waymo. When anecdotal evidence from Tesla owners suggests that the current at-most 13k miles between interventions is up to 100k miles, that will be another big update for me.

PS: Again, in all of the above I'm not counting the Austin robotaxi miles. I just want to set those aside until we know if they should count. I remain very confused about those miles and don't have confident predictions about them. If you spot concrete disagreements outside of robotaxis, do let me know.

PPS: We're now 10 years and 13 days since Musk predicted that "in ~2 years" private Teslas would be making cross-country trips with no one in the car. At the time I said I was bullish on self-driving cars but I'd eat my hat if that happened in 3 years (I wouldn't have bet against 10 years back then). I think Musk just has the planning fallacy baked indelibly into his psyche. Not his biggest flaw, I guess.

@dreev Tesla is now reporting the paid robotaxi miles in the quarterly. It was about 200,000 miles in December. Tesla now has 500+ cars in Austin and the SF Bay Area and is doubling each month (verbally reported on the earnings call). A fleet of 2,000 cars in March 2026 (per the doubling progression: 500 in Jan → 1,000 in Feb → 2,000 in Mar) would give:

  • Base case (same ~4.77 rides/car/day): ~66,780 weekly rides (2,000 × 4.77 × 7, rounded).

  • Double intensity (~9.54 rides/car/day): ~133,560 weekly rides.

  • Triple intensity (~14.31 rides/car/day): ~200,340 weekly rides.

    Tesla has lower rides per car per day for now; Uber and Waymo have about triple the daily usage, so doubling current levels would get the weekly rides above. Tesla will be reporting cumulative miles, so we have to take the miles and infer the rides at about 5 miles per ride, the standard average for rideshare and Waymo. Waymo's 2,500 cars give 450,000 weekly rides. Picture from page 11 of the quarterly earnings.

  • Insurance company Lemonade now reports it is willing to insure Teslas operating with FSD at half the price per mile of Teslas without FSD. Note the Lemonade graphic: they are projecting Tesla FSD to be ten times safer by version 17, and FSD 15 to have half the risk of collisions. I believe this is an actuarial projection from data they are getting. Tesla already has 500+ robotaxi cars. They said no chase cars and no safety monitors for some of the Austin cars.

  • Waymo has 133 incidents reported in Austin; Austin is 8% of the Waymo fleet, with 40,000 rides per week, about 200k miles per week. Tesla reported 200k miles in December from about 250 cars, so Tesla is likely at 400k–600k miles in January. Intensity of usage can triple or more to get to Uber/Waymo levels. Safety monitors are starting to come out; I think most will be out by end of February in Austin. 500 cars in Austin and 1,500 cars in the SF Bay Area in March would align with the new double-every-month pace and with Elon's prior end-of-December aspiration.

  • Waymo has 5 collisions and 6 near misses in Austin out of 130 incidents.

  • NHTSA has over 100 accidents per month for Waymo: 25 per week on about 2 million miles per week. So we are saying Waymo is getting into one accident every 80,000 miles, but 14:1 it is the other car's fault? Also, Waymo blocks intersections, like when their entire system and all vehicles went down in the SF power outage. There were 64 incidents of nuisance, traffic blocking, and school bus violations in Austin, out of about 4 million miles: 60,000 miles per blocking or related incident. Collisions in such situations would not be just the other car's fault. 20 high-risk conditions every 1.3 million miles.
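The fleet-doubling weekly-ride projections in the bullets above can be sanity-checked with a short sketch. All inputs are the assumptions stated in this thread (500 cars in January 2026 doubling monthly, and the three rides-per-car-per-day intensities), not official Tesla figures:

```python
# Weekly-ride projection: fleet size x rides per car per day x 7 days.
# Fleet and intensity figures are the assumptions quoted in the comment above.

def weekly_rides(fleet: int, rides_per_car_per_day: float) -> int:
    """Weekly rides for a fleet at a given usage intensity."""
    return round(fleet * rides_per_car_per_day * 7)

# Doubling progression: 500 (Jan) -> 1,000 (Feb) -> 2,000 (Mar)
fleet_march = 500 * 2 ** 2

for label, intensity in [("base", 4.77), ("double", 9.54), ("triple", 14.31)]:
    print(f"{label}: {weekly_rides(fleet_march, intensity):,} weekly rides")
```

This reproduces the three scenario numbers given above (~66,780 / ~133,560 / ~200,340 weekly rides for a 2,000-car fleet).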

https://www.nhtsa.gov/laws-regulations/standing-general-order-crash-reporting

@brianwang A robotaxi-tracker website performed an analysis of about 400 NHTSA Waymo accident reports using Gemini and found 13–17% were Waymo at fault. This would mean that of 25 accidents every 2 million miles, about 4 were Waymo at fault, or one every ~500,000 miles. Near misses are about as frequent as accidents per the Austin tracking data. Waymo should need a critical disengagement roughly every 60,000 miles to avoid dangerous incidents: badly blocking intersections, near misses, at-fault accidents, etc. They also recalled software for the bus-passing problem.
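The implied at-fault rate above follows from two quoted figures (both assumptions from this thread, not NHTSA's own fault determinations): ~25 reported accidents per ~2 million miles, and a 13–17% Waymo-at-fault share from the Gemini analysis. A minimal sketch of that arithmetic:

```python
# Implied miles between Waymo-at-fault accidents, using the figures quoted above.
# Both inputs are the thread's estimates, not official determinations.

accidents = 25          # reported accidents per reporting window
miles = 2_000_000       # miles driven in that window

for at_fault_share in (0.13, 0.17):
    at_fault = accidents * at_fault_share
    print(f"{at_fault_share:.0%} at fault -> one at-fault accident "
          f"per ~{miles / at_fault:,.0f} miles")
```

With a ~16% at-fault share this lands on the "one every ~500,000 miles" figure quoted above; the 13–17% range spans roughly 470k–615k miles.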

humans - average US driver

Near misses: These are much more common than reported crashes. Naturalistic driving studies (e.g., NHTSA's 100-Car Study and related analyses) indicate that drivers experience near-crashes or critical incidents far more frequently, potentially several per year or even per month in high-risk scenarios. Some estimates suggest near misses occur at rates 10–100 times higher than actual crashes, with drivers in observational studies showing hundreds of near-events over months of monitored driving. Anecdotal and survey-based reports often put noticeable near misses at 1–2 per month for active drivers, though this varies widely. [I.e., you would "disengage" human drivers every 1,000 miles or so, with critical disengagements every 5,000 miles.]

How many police reported versus unreported crashes?

  • Actual accidents for humans (police-reported crashes): The average driver is involved in a police-reported crash roughly once every 10–18 years (150k–250k miles). Insurance data often cites a collision claim about once every 17.9 years. With ~238 million licensed drivers and ~5.9–6 million police-reported crashes annually (including property-damage-only, injury, and fatal), this equates to a low per-driver annual probability, but cumulatively most drivers experience 3–4 police-reported crashes over a lifetime (6–10 crashes counting reported and unreported; 4–10 years between accidents, or one every 70–150k miles).

  • At-fault accidents: Specific at-fault rates are harder to pinpoint nationally (as fault determination varies by state and crash reports), but studies show that in multi-vehicle crashes, roughly 50–60% involve at least one at-fault driver (often the one violating right-of-way or following too closely). Human drivers contribute to fault in the vast majority of crashes via errors like distraction or speeding. No precise national per-driver annual at-fault rate exists, but it's a subset of the overall crash rate above.

More detailed breakdowns from NHTSA economic cost studies (e.g., Blincoe et al., referenced in recent analyses) indicate higher underreporting for specific severities: ~60% of property-damage-only (PDO) crashes are unreported, while ~32% of non-fatal injury crashes go unreported. Fatal crashes are essentially 100% reported. This means the overall ratio skews toward more unreported minor crashes, but police data covers nearly all serious ones.
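The underreporting rates above imply simple multipliers from police-reported counts to true totals: if a fraction u of crashes goes unreported, the reported count is (1 - u) of the total. A minimal sketch, using only the percentages quoted above:

```python
# Convert police-reported crash counts to implied totals, given the
# underreporting fractions quoted above (~60% of property-damage-only
# crashes unreported, ~32% of non-fatal injury crashes, ~0% of fatal).

def implied_total(reported: float, unreported_fraction: float) -> float:
    """If `unreported_fraction` of crashes go unreported, the reported
    count is (1 - unreported_fraction) of the true total."""
    return reported / (1 - unreported_fraction)

# Hypothetical example: 1,000 police-reported crashes of each severity.
print(round(implied_total(1000, 0.60)))  # PDO -> 2500
print(round(implied_total(1000, 0.32)))  # non-fatal injury -> 1471
print(round(implied_total(1000, 0.00)))  # fatal -> 1000
```

So the 60% PDO underreporting rate means roughly 2.5x as many minor crashes occur as the police data shows, which matters when comparing human crash rates against AV incident counts that capture nearly everything.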

@brianwang Thanks for this research. I don't have my head around it yet but I'm wondering if you find Kelsey Piper's new article, "We absolutely do know that Waymos are safer than human drivers", persuasive?

@dreev My analysis shows Waymo is safer than the average US human driver. But the safety of US human drivers is lower than often assumed, as I was showing: the range of human driving is broad, and a lot of too-drunk, too-sleepy, too-distracted, too-old, or too-new drivers skew the reality.

Waymo has systemic issues: getting confused about what to do and then locking up, brittleness, and dependence on hypermapping. I expect decision speed to be a problem, especially in chaotic situations with a lot more people and vehicles around. Waymo has 10–20X more problem situations beyond the crash risk. Waymo and Tesla are, or will be, safe enough, and there's no fundamental limit against becoming 10–40 times safer than human.

The main Tesla advantage is being able to get to true scale, and quickly. For the next 3 years, Waymo is very limited by the supply of robotaxi-grade lidar: not enough factories and factory capacity, and the sensor kits and modifications are mostly hand-built, with 3–5 cars per day the current demonstrated limit. Tesla pulling the safety monitors and chase cars means Austin will become completely unsupervised, then the SF Bay Area and 10–20 other metro areas this year, with hundreds of thousands of cars and potentially millions with an Airbnb-style business model. Any remote-monitoring objection is meaningless, because that is the standard and is economically scalable at 3 cars per remote monitor. There are 1.1 million cars with active FSD; this could be 2–4 million by year end.

On January 23 around 7:40 a.m. near Grant Elementary School in Santa Monica, a Waymo driverless Jaguar I-Pace hit a child who ran from behind a double-parked SUV amid crossing guards and stopped cars. The vehicle braked from 17 mph to under 6 mph before minor-impact contact, then called 911 and stayed until police cleared it; the child walked away with minor injuries. NHTSA opened probe PE26001 to check caution near schools, speed compliance, and response protocols, amid prior Waymo issues like school-bus failures in Atlanta and Austin.
