trail wrote:
The thing I don't understand about vision-only is that it only looks forward. Those four little circles on the rear are sonar. They can "see" 2-3m at best, and with a pretty narrow field of view. For FSD purposes, that's fine as last-resort "don't bump into things with your bumper" protection. As a self-driving engineer, I just don't understand their plan for more complex reverse maneuvers, like backing into a crowded parking lot with a 10-20m maneuver. You can sometimes get a good look at the scene with the forward-looking cameras and then "hope" that the scene doesn't change as you go in reverse. My conservative nature doesn't like that.
There's also the issue of not seeing anything above bumper level at all. Say you're backing into a spot and there's an SUV with a bunch of 4x4 lumber hanging out the rear window. You're gonna put that lumber right through your rear window... and there's no way of seeing it.
I just don't understand. On the other hand lots of people at Tesla are smarter than I am....
Edit: There is a rear-looking monocular camera. It's theoretically possible they do stereo-through-motion (instead of using two side-by-side cameras for stereo, you use one camera and take images from multiple positions as the car moves). I'll be really impressed if they can pull this off well in real time. It's computationally brutal compared to conventional stereo.
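To make the idea concrete: once you know the camera's pose at two moments (from ego-motion), the car's displacement acts as the stereo baseline, and a pixel matched across the two frames can be triangulated into a 3D point. This is a minimal numpy sketch of the standard textbook two-view DLT triangulation, not anything Tesla has published; the intrinsics, baseline, and point are made-up toy values, and in practice the hard parts (feature matching, pose estimation, small/uncertain baselines while reversing slowly) are exactly what makes real-time motion stereo brutal.

```python
# Toy sketch of "stereo through motion": one camera, two poses,
# linear (DLT) triangulation of a matched pixel pair. All numbers
# here are hypothetical illustration values, not real camera specs.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one point seen in two views.
    P1, P2: 3x4 projection matrices; x1, x2: pixel coords (u, v)."""
    # Each view contributes two linear constraints on the homogeneous 3D point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null vector of A (last row of V^T from the SVD).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean 3D point

# Made-up intrinsics; second view is the same camera after the car has
# moved 0.5 m sideways (the baseline created by motion between frames).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])

X_true = np.array([1.0, 0.5, 8.0])  # a point 8 m behind the bumper, say
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]

X_hat = triangulate(P1, P2, x1, x2)
print(np.round(X_hat, 3))  # recovers ~[1.0, 0.5, 8.0] in this noise-free toy
```

The catch, and why the "computationally brutal" comment holds, is that conventional stereo gets P1 and P2 for free from a fixed calibrated rig, while motion stereo has to estimate the pose change from the imagery (or odometry) every frame before any of this triangulation is even possible.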
Thanks for your insight. I agree with your sentiments. Scenes are just so much more complex than what you can -reliably- see with CMOS and visible light.
Eliot