Visual sensors such as cameras are ubiquitous today in almost all our devices. Lidars can perceive even in the absence of ambient light. However, visible and infrared light cannot penetrate thick smoke, fog, or debris, which rules out perceiving objects in non-line-of-sight conditions. Radio frequency signals, in contrast, are known to pass through buildings and walls and reach your cell phone from towers far away. The field of wireless sensing uses radio frequency signals to perceive the world in conditions that are challenging for other wavelengths.
As can be seen from the figure, today's state-of-the-art RF sensors are inferior in many respects to lidars and cameras. For example, while cameras and lidars offer angular resolutions of about 0.01° and 0.1° respectively, the best commodity radars are at least 10 times worse. In my research, I build application-specific RF sensing systems that address some of these challenges using signal processing, hardware design, machine learning, and sensor fusion techniques.
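For intuition on why radar lags in angular resolution: a sensor's diffraction-limited angular resolution scales roughly as wavelength over aperture, and millimeter-wave radar wavelengths are thousands of times longer than optical ones. The sketch below uses illustrative, assumed numbers (a 77 GHz radar with a ~4 cm antenna array, a 905 nm lidar with a ~5 mm aperture), not figures from any specific sensor.

```python
import math

C = 3e8  # speed of light, m/s

def angular_res_deg(wavelength_m: float, aperture_m: float) -> float:
    """Rough diffraction-limited angular resolution: wavelength / aperture, in degrees."""
    return math.degrees(wavelength_m / aperture_m)

# Assumed illustrative parameters:
radar_deg = angular_res_deg(C / 77e9, 0.04)   # 77 GHz radar, ~4 cm array aperture -> a few degrees
lidar_deg = angular_res_deg(905e-9, 5e-3)     # 905 nm lidar, ~5 mm aperture -> well under 0.1 degree
```

Even with generous apertures, the radar's resolution comes out orders of magnitude coarser than the lidar's, which is the gap the systems below work around.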
Tire wear affects the safety and performance of automobiles. Past solutions are either inaccurate, require sensors embedded inside the tire, or are vulnerable to road debris. Osprey proposes a radio frequency tire wear sensing system, based on automotive millimeter-wave radars, that overcomes the challenges posed by road debris. We design super-resolution algorithms to measure the millimeter-level changes caused by wear and tear. In addition, the same principles open up solutions for detecting and localizing harmful metallic foreign objects in the tire.
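For a sense of why super-resolution is needed here: a standard FMCW radar's native range resolution is c/2B, set by its chirp bandwidth B, which is far coarser than millimeter-level tread changes. A minimal sketch, assuming a 4 GHz sweep (typical of 77-81 GHz automotive radars):

```python
C = 3e8  # speed of light, m/s

def fmcw_range_res_m(bandwidth_hz: float) -> float:
    """Native FMCW radar range resolution: c / (2 * B)."""
    return C / (2 * bandwidth_hz)

# Assumed 4 GHz chirp bandwidth for illustration:
native_res = fmcw_range_res_m(4e9)  # 0.0375 m, i.e. ~3.75 cm
```

A few millimeters of tread loss is an order of magnitude below this native ~3.75 cm resolution, which is what motivates super-resolution processing on top of the raw radar measurements.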
Best Paper Honorable Mention (MobiSys 2020)
Best Demo (MobiSys 2020)
ACM GetMobile Research Highlight
How can cars perceive critical roadside infrastructure even when weather conditions are unfavorable for visual sensors? Millimetro tackles this by designing low-power radio frequency tags that can be mounted on such infrastructure, along with decoding algorithms that run on in-car radars to detect, identify, and accurately localize the tags.
How can we enable single-vantage-point depth imaging that works at long range, to build simple-to-deploy but high-quality surveillance systems? Metamoran proposes a monocular camera-radar fusion solution that leverages the strengths of each sensor to offset the other's weaknesses, producing accurate depth images of various objects even at long range.
How can we enable high-quality perception for robots navigating harsh environments filled with smoke or fog? Camera- or lidar-based perception suffers in these conditions. RadarHD explores millimeter-wave radar perception for seeing past such occlusions. We combat the poor spatial resolution of these radars by training an end-to-end deep learning super-resolution network that outputs lidar-like point clouds from just a cheap, single-chip radar! We also demonstrate RadarHD's robustness in smoke by testing with smoke bombs in a smoke chamber.