[HN Gopher] New Lidar System Promises 3D Vision for Cameras, Car...
       ___________________________________________________________________
        
       New Lidar System Promises 3D Vision for Cameras, Cars, and Bots
        
       Author : rbanffy
       Score  : 71 points
       Date   : 2022-04-25 16:25 UTC (6 hours ago)
        
 (HTM) web link (spectrum.ieee.org)
 (TXT) w3m dump (spectrum.ieee.org)
        
       | rsp1984 wrote:
        | This is not a "true" LiDAR in the classical sense (where you send
        | out a laser beam and measure the time elapsed until its return). It's
       | rather an indirect ToF sensor that uses modulated light bursts
       | that flood the entire scene.
       | 
       | This approach typically works well for close-range convex
       | surfaces (hand tracking, bin picking, face ID) but fails pretty
       | miserably when longer ranges and concave surfaces are involved,
       | due to quadratic signal dropoff and multipath errors.
       | 
       | As far as I understand, what the team has achieved is lowering
       | the power requirements for the modulation part. It means they can
       | spend the saved power on making the modulated light brighter,
       | which should give them a bit more range. I haven't seen any other
       | major improvements though and none of the other issues with iToF
       | were addressed.
       | 
       | Not trying to downplay the achievement, just saying it is still
       | affected by the usual tradeoffs and probably just occupies
       | another niche in the high dimensional space of 3D cameras, rather
       | than spanning many of today's niches.
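        | 
        | For reference, a minimal sketch of how an indirect ToF pixel
        | turns its phase samples into depth (the textbook "4-bucket"
        | method; the modulation frequency and names below are my own
        | assumptions, not from the article):
        | 
        |     import math
        | 
        |     C = 299_792_458.0  # speed of light (m/s)
        |     F_MOD = 20e6       # assumed modulation frequency (Hz)
        | 
        |     def itof_depth(a0, a1, a2, a3, f_mod=F_MOD):
        |         """4-bucket iToF: a0..a3 are correlation samples taken
        |         at 0/90/180/270 degree phase offsets."""
        |         phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
        |         return C * phase / (4 * math.pi * f_mod)
        | 
        |     # Unambiguous range is c / (2 * f_mod): ~7.5 m at 20 MHz,
        |     # one reason iToF is mostly a short-range technique.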
        
         | rowanG077 wrote:
         | I'm curious since you know your stuff. Is structured light a
         | form of modulated light?
        
           | rsp1984 wrote:
           | Nope. Structured light is projecting a geometric pattern into
            | the scene and viewing it from a camera that's offset by
           | some amount vs. the projector. The original Kinect works this
           | way.
        
             | robotresearcher wrote:
             | Structured light is modulated in space rather than time.
             | Accordingly, distance is inferred from offset in space
              | (pixel, i.e. angle) rather than time (phase).
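              | 
              | A minimal sketch of that spatial version (plain disparity
              | triangulation; the focal length and baseline below are
              | illustrative, roughly Kinect-like guesses, not measured):
              | 
              |     def depth_from_disparity(disparity_px,
              |                              focal_px=580.0,
              |                              baseline_m=0.075):
              |         """Triangulation: depth = f * b / d."""
              |         return focal_px * baseline_m / disparity_px
              | 
              |     # A pattern feature shifted by 10 px reads as ~4.35 m.
              |     print(depth_from_disparity(10.0))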
        
         | Animats wrote:
         | _This approach typically works well for close-range convex
         | surfaces (hand tracking, bin picking, face ID) but fails pretty
         | miserably when longer ranges and concave surfaces are involved,
         | due to quadratic signal dropoff and multipath errors._
         | 
         | Exactly. This has been used before for short-range sensors,
         | modulating the outgoing light electrically. Microsoft used it
         | in the second generation Kinect. Mesa Imaging used it in
         | 2013.[1] The prototype of that was shown in 2003.[2] I looked
         | into this in my DARPA Grand Challenge days.
         | 
         | Since it's a continuous emission system, you need enough
          | continuous light to overpower other light sources. Filters can
          | narrow the bandwidth, so you only have to be brighter at a
          | specific color, but even so it works badly outdoors. Pulsed LIDARs
         | outshine the sun at a specific color for a nanosecond, and thus
         | can be used in bright sunlight. Also, they tend not to
         | interfere with each other, because they're receiving for maybe
         | 1000ns every 10ms, or 0.01% of the time. A little random timing
         | jitter on transmit can prevent repeated interference from a
         | similar unit.
         | 
         | So, short range use only. For more range, you have to use short
         | pulses.
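          | 
          | A back-of-the-envelope sketch of those numbers (round-trip
          | time and duty cycle; the 100 m figure is just an example I
          | picked, not from the article):
          | 
          |     C = 299_792_458.0  # speed of light (m/s)
          | 
          |     # Pulsed ToF: range from round-trip time of a short pulse.
          |     def pulse_range_m(round_trip_s):
          |         return C * round_trip_s / 2
          | 
          |     print(pulse_range_m(667e-9))  # ~100 m for a ~667 ns trip
          | 
          |     # Duty cycle from the figures above: listening ~1000 ns
          |     # out of every 10 ms.
          |     duty = 1000e-9 / 10e-3
          |     print(f"{duty:.2%}")  # 0.01%, so two units rarely collide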
         | 
         | For short range, there's another cheap approach - project a
         | pattern of random dots and triangulate. That was used in the
         | first generation Kinect and is used in Apple's TrueDepth phone
         | camera.
         | 
         | [1] https://www.robotshop.com/community/blog/show/mesa-
         | imagings-...
         | 
         | [2] https://www.researchgate.net/publication/228602757_An_all-
         | so...
        
           | rsp1984 wrote:
            | Interesting bits about pulsed LiDAR. Is there any camera
            | on the market that uses this technology?
        
       | titzer wrote:
       | While this is cool, I think we should seriously take a look at
        | how insects navigate 3D spaces. They typically have compound
        | eyes, where each eyelet is an incredibly simplified (in fact,
        | evolutionarily early-stage) photoreceptor that has independent
        | connections
       | in the brain. They have crappy resolution but incredibly good
       | response time and spatial awareness. And they are _super_ low
       | power.
        
         | peteradio wrote:
          | But cars don't navigate 3D spaces, so it seems like the results
          | would not be so transferable. How does a fly avoid in-air
         | collision with another fly? Answer: it doesn't need to.
        
       | danbruc wrote:
       | What happens in busy places when tens or hundreds of vehicles or
       | robots illuminate the environment? How do several independent
       | time-of-flight camera systems coexist? Are there existing
       | solutions from other areas that are already using time-of-flight
       | camera systems?
        
         | xnx wrote:
         | Not as big a problem as you might think:
         | https://ideas.4brad.com/tricking-lidars-and-robocars#:~:text...
        
         | alimov wrote:
         | > What happens in busy places when tens or hundreds of vehicles
         | or robots illuminate the environment?
         | 
          | I think about this one a lot. While I haven't looked into it,
          | I wonder if this will impact insects and birds somehow.
        
         | ARandomerDude wrote:
         | Headline 2035: terrorists cause 200 car pile up with cheap
         | DRFM-like device.
         | 
         | https://en.wikipedia.org/wiki/Digital_radio_frequency_memory
        
           | rbanffy wrote:
           | Cars (of 2035) ought to have multiple sensors and should be
           | able to discard discrepant input - if (stereo) visual, radar,
           | sonar and lidar disagree, the car should switch to a
           | "degraded safe" self driving mode and signal it to cars
           | around it via multiple channels (visual, radio, ultrasound,
           | etc).
        
             | bdamm wrote:
             | Cars of today already have this problem. If one sensor says
             | there's a solid object right in front of you that just
             | popped out, and another sensor says there probably isn't
             | but is confused, what do you do? Slam on the brakes?
             | 
             | Teslas today have this "degraded mode" and the reaction is
             | to sound an alarm and ask the driver to pay attention and
              | grip the wheel, while slowing. This seems like a reasonable
              | thing to do. Cars of 2035 that have no steering wheel had
              | better have perfect sensor integration and false-data
              | elimination.
        
               | AlotOfReading wrote:
               | You can't and won't get perfect sensing. It's simply not
               | possible.
               | 
                | A safe vehicle design avoids constantly having to decide
                | whether to slam on the brakes, by recognizing that that is
                | a fundamentally unsafe mode to be operating in. You should
                | already be slowing, able to brake safely, pulling over, or
                | stopped before that point, excepting rare "everything
                | explodes simultaneously" situations that you can try to
                | engineer away with safety ratings and MTBF numbers.
        
               | rbanffy wrote:
               | > A safe vehicle design avoids the situation
               | 
                | And, since you aren't the one driving a fully automated
                | vehicle, it driving below the speed limit becomes much
                | less annoying (and, if all cars can coordinate their
                | speeds, it could prevent traffic jams altogether).
        
               | rbanffy wrote:
                | If you have two sensors, you have one. You can't resolve
                | a disagreement between two sensors. Stopping the car is the
               | only option in this case, but, say, if you have four
               | sensors and one says there is 100% certainty of a solid
               | object ahead of you, one says it's 50/50 and two others
               | say it's a 100% no, then a mild braking action may be all
               | that's needed, to give the car and its sensors more time
                | to react and assess. Worst case, you still hit
               | the solid object at an easily survivable speed. You also
               | signal cars behind you to actuate their brakes
               | accordingly (and, in turn, inform traffic behind them of
               | their actions).
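                | 
                | A toy sketch of that kind of vote (the thresholds and the
                | braking policy here are invented for illustration, not a
                | real control law):
                | 
                |     def fuse_obstacle_votes(confidences):
                |         """confidences: each sensor's probability that a
                |         solid object is ahead."""
                |         p = sum(confidences) / len(confidences)
                |         if p > 0.75:
                |             return "brake hard"
                |         if p > 0.25:
                |             return "brake mildly, keep sensing"
                |         return "continue"
                | 
                |     # The 100% / 50-50 / two-noes example above:
                |     print(fuse_obstacle_votes([1.0, 0.5, 0.0, 0.0]))
                |     # -> "brake mildly, keep sensing"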
        
               | samstave wrote:
                | And it gets way more interesting when every car is itself
                | a cluster, or node, of sensors, and all of these
               | sensors of the same type can form telemetry fabrics of
               | all the families of sensor-types across the nodes.
               | 
                | Each a plane, and then ultimately being able to see the
                | relationships between patterns across sensor groups. I
                | wonder how that information will become useful.
        
               | rbanffy wrote:
               | First use could be to model traffic patterns - as
               | undoubtedly all online mapping apps already do. As the
               | meshes get denser, the cars can become aware of traffic
               | patterns around them and make decisions in a coordinated
                | way, forming "trains" to reduce drag on highways, for
               | instance. Eventually enough data is gathered that we have
               | predictive models for traffic jams and accidents and can
               | act to prevent them.
        
               | samstave wrote:
               | exactly
        
               | Tade0 wrote:
               | > You can't resolve a disagreement of two sensors.
               | 
               | Two sensors are still useful when all you need to know is
               | if there's a disagreement.
        
               | rbanffy wrote:
               | True, but the usual action is to hand the decision to the
               | human, who can look out, figure out which (if any) sensor
               | is right, and take appropriate action (737 Max feelings).
        
               | shadowgovt wrote:
               | Drivers of today also have this problem in the form of a
               | lack of standards for vehicle height and headlight power.
        
       | [deleted]
        
       | UberFly wrote:
        | The futuristic crime dramas and games where they replay events in
        | a simulated 3D space are getting closer. Imagine public spaces
       | constantly scanned and recorded right down to the dropped
       | cigarette.
        
       | nynx wrote:
        | This is great - hope it pans out and makes it to market quickly.
        
       | throwaway4aday wrote:
       | This will be huge for VR/AR progress.
        
         | rbanffy wrote:
         | Imagine Google Earth and Street View (and their counterparts),
          | but with almost real-time 3D maps.
        
           | wlesieutre wrote:
           | It's not real time, but Apple's map scans are much more 3D.
            | Like Google Maps, you can't make small movement adjustments
            | between designated viewpoints, but the animation of moving
           | from one point to another is pretty slick.
           | 
           | https://youtu.be/Rd8VltIAZjU?t=36
        
             | rbanffy wrote:
             | Now, imagine that, using car sensors, you could time travel
             | and see the same point in space, but at any point in time
             | (after the introduction of cars that contribute their
              | sensor data to a consolidated repository of "4D" data).
        
       ___________________________________________________________________
       (page generated 2022-04-25 23:01 UTC)