RoadTrack algorithm could help autonomous vehicles navigate dense traffic scenarios
Tracking trucks, large and small cars, buses, bicycles, rickshaws, pedestrians, moving carts, and animals on a highway or an urban road is an important problem in autonomous driving and in related areas such as trajectory prediction. These “road-agents” have different shapes, move at varying speeds, and follow trajectories governed by different underlying dynamics constraints. Furthermore, traffic patterns and behaviors can vary considerably between highway traffic, sparse urban traffic, and dense urban traffic with a variety of such heterogeneous agents.
Traffic density can be defined as the number of distinct road-agents captured in a single frame of a video, or as the number of agents per unit length of the roadway. The tracking problem is to maintain a consistent spatial and temporal identity for every agent across the video sequence. Recent developments in autonomous driving and the large-scale deployment of high-resolution surveillance cameras have generated interest in accurate tracking algorithms, especially in dense scenarios with a large number of heterogeneous agents.
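As a rough illustration of the density definition above, the snippet below counts road-agents per frame and per metre of visible roadway. The detection lists and the 40 m road length are made-up placeholder values, not data from the paper.

```python
# Hypothetical per-frame detections: one list of bounding boxes per video frame.
frame_detections = [
    [(10, 20, 50, 80), (60, 22, 95, 85)],                      # frame 0: 2 road-agents
    [(12, 21, 52, 81), (63, 23, 98, 86), (120, 30, 160, 90)],  # frame 1: 3 road-agents
]
road_length_m = 40.0  # visible stretch of roadway in metres (assumed value)

for i, boxes in enumerate(frame_detections):
    agents_per_frame = len(boxes)                        # density as agents per frame
    agents_per_metre = agents_per_frame / road_length_m  # density as agents per unit length
    print(f"frame {i}: {agents_per_frame} agents, {agents_per_metre:.3f} agents/m")
```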
The complexity of tracking increases in dense scenarios as different types of road-agents come into close proximity and interact with each other: passengers board and exit buses, bicyclists ride alongside cars, and so on. Such traffic scenarios arise frequently in densely populated metropolitan cities and are the focus of new work presented at IROS 2019 by ISR-affiliated Professor Dinesh Manocha (ECE/CS/UMIACS), his Maryland Ph.D. students Rohan Chandra and Uttaran Bhattacharya, Assistant Research Professor Aniket Bera (CS/UMIACS), and graduate student Tanmay Randhavane at the University of North Carolina. The researchers are part of the Geometric Algorithms for Modeling, Motion and Animation (GAMMA) group.
Their real-time tracking algorithm, RoadTrack, uses a tracking-by-detection approach: a two-step process of object detection followed by state prediction with a motion model. The first step, object detection, generates vectorized representations, called features, for each road-agent, which facilitate identity association across frames. The second step uses a motion model to predict each agent's state (position and velocity) in the next frame. Road-agents are tracked by matching the appearance or bounding box region in the current frame with the predicted bounding box region propagated from the previous frame.
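To make the two steps concrete, here is a minimal sketch of a generic tracking-by-detection loop: predicted boxes from the previous frame are matched to current detections by intersection-over-union using the Hungarian algorithm. This is not the authors' implementation; the constant-velocity `Track.predict` is only a stand-in for RoadTrack's motion model, and the detector producing `detections` is assumed.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

class Track:
    def __init__(self, track_id, box):
        self.id = track_id
        self.box = np.asarray(box, dtype=float)  # (x1, y1, x2, y2)
        self.velocity = np.zeros(4)              # per-frame change of the box

    def predict(self):
        # Step 2: propagate the state one frame ahead with a motion model.
        # A constant-velocity model stands in for RoadTrack's SimCAI here.
        return self.box + self.velocity

    def update(self, box):
        box = np.asarray(box, dtype=float)
        self.velocity = box - self.box
        self.box = box

def associate(tracks, detections, iou_threshold=0.3):
    """Match predicted boxes to detected boxes (Hungarian algorithm on 1 - IoU)."""
    if not tracks or not detections:
        return [], list(range(len(detections)))
    cost = np.array([[1.0 - iou(t.predict(), d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    matched, unmatched = [], set(range(len(detections)))
    for r, c in zip(rows, cols):
        if 1.0 - cost[r, c] >= iou_threshold:
            matched.append((r, c))  # track r keeps its identity on detection c
            unmatched.discard(c)
    return matched, sorted(unmatched)

def track_frame(tracks, detections, next_id):
    """One iteration of the tracking-by-detection loop for a single frame."""
    matched, unmatched = associate(tracks, detections)
    for r, c in matched:
        tracks[r].update(detections[c])  # identity association across frames
    for c in unmatched:                  # unmatched detections start new tracks
        tracks.append(Track(next_id, detections[c]))
        next_id += 1
    return tracks, next_id
```

As the next paragraph explains, RoadTrack's contribution lies in the motion model used for the prediction step rather than in the association machinery itself.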
RoadTrack uses a new motion model, Simultaneous Collision Avoidance and Interaction (SimCAI), which predicts each road-agent's motion in the next frame by simultaneously accounting for collision avoidance and pairwise interactions between road-agents. This model is better suited to dense, heterogeneous traffic scenes than linear constant-velocity, non-linear, and learning-based motion models.
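The article does not spell out SimCAI's equations, so the sketch below only illustrates the general idea of an interaction-aware motion model: each agent's preferred velocity is adjusted by a pairwise repulsion term from nearby agents before stepping one frame forward. The interaction radius, gain, and inverse-square repulsion are arbitrary assumptions, not the SimCAI formulation.

```python
import numpy as np

def predict_positions(positions, velocities, interaction_radius=5.0,
                      repulsion_gain=0.5, dt=1.0 / 30.0):
    """Toy interaction-aware prediction: nudge each agent's preferred velocity
    away from neighbours within the interaction radius, then step one frame.
    Illustrative only; all constants and the pairwise term are assumptions."""
    positions = np.asarray(positions, dtype=float)    # (N, 2) ground-plane coordinates
    velocities = np.asarray(velocities, dtype=float)  # (N, 2) preferred velocities
    adjusted = velocities.copy()
    for i in range(len(positions)):
        offsets = positions[i] - positions            # vectors pointing away from neighbours
        dists = np.linalg.norm(offsets, axis=1)
        near = (dists > 0.0) & (dists < interaction_radius)
        if near.any():
            # Repulsion grows as agents close in; summing over neighbours couples
            # collision avoidance with pairwise interactions between nearby agents.
            push = (offsets[near].T / dists[near] ** 2).T.sum(axis=0)
            adjusted[i] = velocities[i] + repulsion_gain * push
    return positions + adjusted * dt

# Example: a car and a bicycle on a near-collision course (made-up coordinates).
pos = [[0.0, 0.0], [3.0, 0.5]]
vel = [[10.0, 0.0], [-8.0, 0.0]]
print(predict_positions(pos, vel))
```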
RoadTrack makes no assumptions about camera motion or camera view: it can track road-agents in heavy traffic captured from front-view or top-view cameras that are either stationary or moving. It also makes no assumptions about lighting conditions, and can track road-agents at night even with glare from oncoming traffic.
The researchers’ IROS paper, “RoadTrack: Realtime Tracking of Road Agents in Dense and Heterogeneous Environments,” demonstrates the advantage of RoadTrack on a dataset of dense traffic videos. RoadTrack’s accuracy is 75.8%, outperforming prior state-of-the-art tracking algorithms by at least 5.2%. RoadTrack operates in real time at approximately 30 fps and is at least four times faster than prior tracking algorithms on standard tracking datasets.
Learn more about RoadTrack at the GAMMA group website.