Light Detection and Ranging (LIDAR) sensors play an important role in the perception stack of autonomous robots, supplying mapping and localization pipelines with depth measurements of the environment. While they are more accurate than other types of depth sensors, such as stereo or time-of-flight cameras, accurately modeling LIDAR sensors requires laborious manual calibration that typically does not account for the interaction of laser light with different surface types, incidence angles, and other phenomena that significantly influence measurements. In this work, we introduce a physically plausible model of a 2D continuous-wave LIDAR that accounts for surface-light interactions and simulates the measurement process of the Hokuyo URG-04LX LIDAR. Through automatic differentiation, we employ gradient-based optimization to estimate model parameters from real sensor measurements.
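The fitting procedure the abstract describes, estimating sensor-model parameters by gradient-based optimization through an automatically differentiated measurement model, can be sketched in a few lines. The snippet below is only an illustration, not the paper's actual Hokuyo URG-04LX model: the toy range model, the parameter names (bias, k_angle, k_refl), and the synthetic scan data are all assumptions made for this example.

```python
# A minimal sketch (not the paper's model): a differentiable toy model of a
# continuous-wave LIDAR range measurement, with parameters fitted to synthetic
# "real" measurements by gradient descent via JAX automatic differentiation.
import jax
import jax.numpy as jnp

def simulate_range(params, true_range, incidence_angle, reflectivity):
    """Toy measurement model: the reported range is the true range plus a
    systematic offset that grows as the returned signal weakens
    (oblique incidence, low surface reflectivity). Illustrative only."""
    bias, k_angle, k_refl = params
    signal = reflectivity * jnp.cos(incidence_angle)  # Lambertian-style return strength
    return (true_range + bias
            + k_angle * (1.0 - jnp.cos(incidence_angle))
            + k_refl / (signal + 1e-3))

def loss(params, true_range, incidence_angle, reflectivity, measured):
    pred = simulate_range(params, true_range, incidence_angle, reflectivity)
    return jnp.mean((pred - measured) ** 2)

grad_fn = jax.jit(jax.grad(loss))  # gradient of the loss w.r.t. the model parameters

# Synthetic data standing in for real recorded scans.
key = jax.random.PRNGKey(0)
k1, k2, k3, k4 = jax.random.split(key, 4)
true_range = jax.random.uniform(k1, (512,), minval=0.2, maxval=4.0)   # metres
incidence = jax.random.uniform(k2, (512,), minval=0.0, maxval=1.2)    # radians
reflectivity = jax.random.uniform(k3, (512,), minval=0.1, maxval=1.0)
true_params = jnp.array([0.02, 0.05, 0.001])
measured = simulate_range(true_params, true_range, incidence, reflectivity) \
           + 0.005 * jax.random.normal(k4, (512,))

# Plain gradient descent on the model parameters.
params = jnp.zeros(3)
lr = 1e-2
for step in range(2000):
    params = params - lr * grad_fn(params, true_range, incidence, reflectivity, measured)

print("estimated parameters:", params)
```

In the setting the abstract describes, `measured` would come from real sensor scans and `simulate_range` would be replaced by the physically plausible model of the continuous-wave measurement process; the overall pattern of differentiating through the simulator and descending on the parameters stays the same.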
So what did the car actually see? Well, it turns out at first it looked like an obstacle so it slowed down some, then it decided that nah, there's nothing and lifted it's "foot" off the brakes, huh? pic.twitter.com/bun4mrxKOZ
— green (@greentheonly) December 6, 2019
Now we wondered what would happen if we remove the foil? Well, pretty much the same thing only this time autopilot actually sped up before impact after initially slowing down some and did not disengage until I stomped on the brakes. pic.twitter.com/NgszJwO2zh
— green (@greentheonly) December 6, 2019