If that person had been a roboticist, they would have known what to do: stand in front of the car. It would have saved the cat's life. And most non-roboticists will immediately recognize this as a solution, too: these cars obviously detect humans right in front of them very well and will not move in that case. By the way, the same would have worked better for most human drivers as well. Even if you yelled at a human driver that there was a cat under the car, it would not be a reliable solution because they may not hear or understand you. But they, too, would almost certainly not run you over if you stood in front of the car.
To be clear, I don't blame the witness for not doing this in the moment. And she probably has figured this out by now, too. I'm mostly pointing out that, as more and more people learn about robot taxis, more people will know how to help in such a situation, which is clearly what she wanted to do.
Waymos are capable of seeing cats - I was in one looking at the route view the other day, and it highlighted a cat that was a decent distance away sitting in a front yard as it passed it. Then it went through a roundabout seemingly just to show it could do it.
(It then proceeded to drop me off in a weird back corner spot in Santana Row by a loading dock. Can't have everything.)
I assume once you're close enough or actually under it there's a blind spot. It doesn't seem very good at avoiding potholes either.
Something I hadn't thought about is, could it be that electric cars are not perceived as a threat due to the lack of (or difference in) noise, vibration and heat compared to a combustion car?
I know animals nap under cars all the time but at least with "regular" cars they seemed to be more aware of the danger.
I'm not talking about waymo, self driving, human in the loop or any of that here, I'm just curious because I wonder if the same thing would've happened with a combustion engine and if there are any "easy wins" in terms of deterrence.
> Even if you yelled at a human driver that there was a cat under the car, it would not be a reliable solution because they may not hear or understand you.
Doesn't matter if shouting at the driver only works some of the time, that's still an infinite improvement over working 0% of the time when there's no driver.
The difference between actual zero and close-to-zero is infinity.
Let's divide-and-conquer! -- like all good CS algorithms do. /newest is very noisy but if you subscribe to just a "sliver" that you are interested in and review those submissions, then we can improve voting together very effectively. Personally I'm interested in robotics, so I just use this in my RSS reader and literally look at every submission (because it's not too much): https://hnrss.org/newest?q=robot%20OR%20robotics%20OR%20robo...
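For anyone who'd rather script on top of that feed than use an RSS reader, here's a minimal Python sketch. It assumes the feedparser package is installed, and the query string is just an illustrative stand-in for whatever "sliver" you pick:

```python
# Minimal sketch: pull a filtered hnrss feed and print new submissions.
# Assumes `feedparser` (pip install feedparser); the query below is only
# an example stand-in for your own topic filter.
import feedparser

FEED_URL = "https://hnrss.org/newest?q=robotics"  # example query, adjust to taste

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Each entry carries the submission title and a link to the HN discussion.
    print(f"{entry.title}\n  {entry.link}\n")
```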
And sometimes that's just the way of HN. I've had some pieces that I thought were duds make it to the front, and others that I loved get stuck in the new queue.
Rule of thumb: Demos = as low as 10% reliable (read: we are showing you the successful cases). Product = 99% reliability required and even that is still not great (costly to operate and maintain, sub-par reviews from customers). So the answer to your question depends very much on the use-case and form-factor of the robot. Watching that Neo robot closing a dishwasher was so painful, and even that was still teleop-ed.
Another data point: in academia people thought robot localization was solved back in the 90's. In practice, mislocalization has been an issue for robotics companies in the real world until very recently. The number of 9's really matters, and that's where all the sweat goes.
No video encoder information! As a roboticist, the number one thing I look for when comparing SBCs is whether they have hardware encoders for h264 video. Pi4, Jetson Nano, and Orange Pi 5 all had one, but some newer boards like the Pi5 and Orin Nano don't. It would be awesome to have that part of the spec included in the data/comparison.
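For context, here's a rough sketch of how I sanity-check a new board for a hardware H.264 encoder: ask GStreamer whether any of the usual hardware encoder elements are registered. It assumes gst-inspect-1.0 is installed, and the element names below are common candidates that vary by platform, not an exhaustive list:

```python
# Rough sketch: probe GStreamer for typical hardware H.264 encoder elements.
# Assumes gst-inspect-1.0 is on PATH; element names are common candidates only.
import subprocess

CANDIDATE_ENCODERS = [
    "v4l2h264enc",    # V4L2 M2M encoder (e.g., Pi 4)
    "nvv4l2h264enc",  # NVIDIA Jetson
    "omxh264enc",     # older OpenMAX-based stacks
]

def has_hw_h264_encoder() -> bool:
    for element in CANDIDATE_ENCODERS:
        result = subprocess.run(
            ["gst-inspect-1.0", element],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if result.returncode == 0:  # element is registered with GStreamer
            print(f"Found hardware encoder element: {element}")
            return True
    return False

if __name__ == "__main__":
    print("HW H.264 encoder present:", has_hw_h264_encoder())
```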
This is your third question along those lines. Mind sharing your context? Are you trying to identify a need to build a company around or is this research more academic in nature?
Good question, a bit of both, honestly. We’re exploring whether AI can meaningfully reduce the time teams spend diagnosing complex robotics failures. A lot of current tooling stops at visualization, and we’re curious if there’s room for an intelligence layer that helps correlate and triage data automatically. So I’m mostly here to stress test assumptions and learn how practitioners actually work.
You seem to be describing the problem of automated anomaly detection. Many companies tried or are trying to solve this (e.g., Heex), but I don't think anyone has done it definitively. The issue is that "normal" behavior keeps changing, so it's difficult to build a model of what is abnormal. And by the time the behavior of the robots in the fleet becomes more stable (in all aspects: physical, electrical, networking, logging, etc.), it's usually easy for the engineers who built it to put in the right metrics and health-monitoring checks to detect issues. So even though theoretically automated anomaly detection sounds like the holy grail of fleet observability, in practice, it's not such a big deal.
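To make that contrast concrete, here's a toy sketch of what those hand-written health-monitoring checks tend to look like once you know what "normal" is for your fleet. All metric names and thresholds here are invented for illustration:

```python
# Toy illustration: simple threshold checks encode what the builders already
# know is "abnormal". Metric names and thresholds are made up for the example.
from dataclasses import dataclass

@dataclass
class HealthCheck:
    metric: str
    max_value: float

    def evaluate(self, sample: dict) -> bool:
        """Return True if this telemetry sample violates the check."""
        return sample.get(self.metric, 0.0) > self.max_value

CHECKS = [
    HealthCheck(metric="localization_covariance", max_value=0.5),
    HealthCheck(metric="planner_latency_ms", max_value=200.0),
]

def triage(sample: dict) -> list[str]:
    # Return the names of all metrics that look unhealthy in this sample.
    return [c.metric for c in CHECKS if c.evaluate(sample)]

# Example with fabricated telemetry values.
print(triage({"localization_covariance": 0.9, "planner_latency_ms": 120.0}))
# -> ['localization_covariance']
```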
So I guess to answer your question, I think yes, the second: better tooling (and a ton of metrics data collected from the fleet, with good versioning).