Can a self-driving vehicle ever really be as aware of its surroundings as a human being? Achieving that would require computers to learn from past experience and recognize patterns in order to navigate new or unpredictable situations.
These are the types of questions that Toyota and the Massachusetts Institute of Technology (MIT) are trying to answer. The two have now released a new open dataset called DriveSeg.
DriveSeg is meant to demonstrate how autonomous driving systems could perceive the driving environment as a continuous flow of visual information.
DriveSeg is available for free and can be found here. Its data consists of two parts: the Manual version, just under 3 minutes of high-resolution video captured during a daytime trip around Cambridge, Massachusetts; and the Semi-Auto version, 20,100 video frames taken from MIT Advanced Vehicle Technologies (AVT) Consortium data.
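As a rough sketch of how one might organize access to a frame-indexed dataset of this kind, the snippet below pairs video frames with their per-frame annotation masks. The directory layout and file names here are hypothetical assumptions for illustration, not DriveSeg's actual on-disk structure.

```python
from pathlib import Path

def frame_mask_pairs(root: str, n_frames: int):
    """Yield (frame_path, mask_path) pairs for a frame-indexed dataset.

    Assumes a hypothetical layout with parallel 'frames/' and 'masks/'
    directories and zero-padded frame numbers; the real dataset's
    naming scheme may differ.
    """
    base = Path(root)
    for i in range(n_frames):
        frame = base / "frames" / f"frame_{i:05d}.png"
        mask = base / "masks" / f"frame_{i:05d}.png"
        yield frame, mask

# Example: list the first few expected frame/mask path pairs.
pairs = list(frame_mask_pairs("driveseg_manual", 3))
for frame, mask in pairs:
    print(frame.name, "->", mask.name)
```

Keeping frames and masks in parallel, identically numbered files is a common convention for segmentation datasets, since it makes pairing trivial and order deterministic.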