Autonomous vehicle research is critically dependent on vast quantities of real-world data for the development, testing and validation of algorithms before deployment on public roads. However, few research groups can afford the cost of developing and maintaining a suitable autonomous vehicle platform, running regular calibration and data collection procedures, and storing and processing the collected data. Following the benchmark-driven approach of the computer vision community, a number of vision-based autonomous driving datasets have been released, notably the KITTI and Cityscapes datasets. Neither of these datasets addresses the challenges of long-term autonomy: chiefly, localisation in the same environment under significantly different conditions, and mapping in the presence of structural change over time.
We present a new benchmark: the Oxford RobotCar Dataset. From November 2014 to December 2015 we traversed a 10km route through central Oxford twice a week on average in the Oxford RobotCar platform, an autonomous Nissan LEAF. This resulted in approximately 1000km of recorded driving, with over 20 million images collected from 6 cameras mounted on the vehicle, along with LIDAR, GPS and INS ground truth. Data was collected in all weather and lighting conditions, including heavy rain, night, direct sunlight and snow, and road and building works over the course of the year significantly changed sections of the route between the beginning and the end of data collection. By frequently traversing the same route over a full year, we enable research investigating long-term localisation and mapping for autonomous vehicles in real-world, dynamic urban environments.
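To illustrate how repeated traversals support cross-condition research, the sketch below indexes traversals by recording condition and pairs them for localisation experiments. This is a minimal, hypothetical example: the `Traversal` record, its field names, the condition labels, and the dates shown are illustrative assumptions, not the dataset's actual metadata schema.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record for one traversal of the 10km route.
# Field names and condition labels are illustrative only.
@dataclass
class Traversal:
    day: date
    condition: str      # e.g. "sun", "rain", "night", "snow"
    distance_km: float

def filter_by_condition(traversals, condition):
    """Return all traversals recorded under the given condition."""
    return [t for t in traversals if t.condition == condition]

def cross_condition_pairs(traversals, cond_a, cond_b):
    """Pair every traversal in condition A with every one in condition B,
    e.g. to evaluate localising a night drive against a sunny-day map."""
    return [(a, b)
            for a in filter_by_condition(traversals, cond_a)
            for b in filter_by_condition(traversals, cond_b)]

# Illustrative traversal log (dates are placeholders, not real entries).
logs = [
    Traversal(date(2014, 12, 9), "rain", 10.0),
    Traversal(date(2015, 2, 3), "snow", 10.0),
    Traversal(date(2015, 6, 26), "sun", 10.0),
    Traversal(date(2015, 11, 13), "night", 10.0),
]

pairs = cross_condition_pairs(logs, "night", "sun")
```

A benchmark harness along these lines lets the same localisation algorithm be scored on matched-condition pairs (sun vs. sun) and mismatched pairs (night vs. sun), which is precisely the axis along which long-term autonomy methods differ.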
The dataset was collected by the Oxford Robotics Institute at the University of Oxford.