The paper "When Can We Use KinectFusion for Ground Truth Acquisition?" has been published in the
Workshop on Color-Depth Camera Fusion in Robotics (https://sites.google.com/site/iros2012ws/)
at the International Conference on Intelligent Robots and Systems 2012 (http://www.iros2012.org).
- Interdisciplinary Center for Scientific Computing
- Microsoft Research Cambridge
- LiDAR Research Group, Institute of Geography, University of Heidelberg
- Heidelberg Graduate School of Mathematical and Computational Methods for the Sciences (HGS MathComp)
See below to download the object meshes and data used in the paper.
The capture tool and KinectFusion implementation used are a pre-production version of the tools which are now integrated into the official Microsoft Kinect SDK, available here: http://www.microsoft.com/en-us/kinectforwindows/develop/developer-downloads.aspx
Update Jan 2014:
Thanks to Mohamed Chafik Bakkay of the Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), Laboratory RIADI, Tunisia,
the Kinect raw data is now also available in the OpenNI .oni file format.
Acknowledgements
The IROS paper was authored by the following people:
Stephan Meister (HCI, University of Heidelberg)
Pushmeet Kohli (Microsoft Research)
Shahram Izadi (Microsoft Research)
Martin Hämmerle (LiDAR Research Group - GIScience, Institute of Geography, University of Heidelberg)
Carsten Rother (Microsoft Research)
Daniel Kondermann (HCI, University of Heidelberg)
This work has been partially funded by the Intel Visual Computing Institute, Saarbrücken (IVCI) and is part of the Project "Algorithms for Low Cost Depth Imaging".
We thank Prof. Susanne Krömker, Anja Schäfer and Julia Freudenreich of the Visualization and Numerical Geometry Group (Interdisciplinary Center for Scientific Computing, University of Heidelberg) and the Heidelberg Graduate School of Mathematical and Computational Methods for the Sciences (HGS MathComp). We also thank Markus Forbriger, Larissa Müller and Fabian Schütt of the LiDAR Research Group (University of Heidelberg) for their support. Additional thanks to Mohamed Chafik Bakkay of the Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), Laboratory RIADI, Tunisia, for converting our data.