Are you sure you want the projection matrix? A camera projection matrix is typically a 3x4 matrix that projects (homogeneous) points in R3 to (homogeneous) points in R2 on the image plane, up to some scale (see the Wikipedia entry http://en.wikipedia.org/wiki/Camera_matrix). It sounds like you are interested in comparing your computed visual odometry to the ground truth odometry provided on the KITTI website; in that case, you would be comparing the rigid transformation matrices from your VO estimate to the KITTI ground truth transformations.
If you are using the "raw" datasets, the ground truth is provided as OXTS data records - i.e., combined IMU and GPS data. That data is in a global frame and takes a bit more work to compare against your data. It sounds, however, like you are using the odometry benchmark data; the ground truth transformations are already in the frame of the left camera, which should make the task easier (and that is what I'll discuss here).
Since you didn't specify a language, I'll speak in more general terms here, but ROS does provide tools in both C++ (tf and Eigen) and Python (transformations.py) for tasks such as converting from a quaternion to a rotation matrix...
You have t and q, the translation and the rotation represented as a quaternion. You can convert the quaternion to a rotation matrix (usually in 'sxyz' form) and vice versa. The KITTI data is specified as a 3x4 matrix: the rotation matrix concatenated with the translation vector (i.e., the 4th column is tgt).
r11 r12 r13 t1
r21 r22 r23 t2
r31 r32 r33 t3
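To make the layout above concrete, here is a minimal sketch in Python/NumPy of reading one such pose line and of the quaternion-to-rotation conversion. The helper names (`parse_kitti_pose`, `quat_to_rot`) are illustrative, not from any library, and the quaternion is assumed to be a unit quaternion in (w, x, y, z) order:

```python
import numpy as np

def quat_to_rot(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def parse_kitti_pose(line):
    """Split one line of a KITTI poses file (12 row-major numbers)
    into the rotation matrix R (3x3) and translation t (3,)."""
    P = np.array(line.split(), dtype=float).reshape(3, 4)
    return P[:, :3], P[:, 3]
```

transformations.py provides equivalent conversions (e.g. `quaternion_matrix`) if you would rather not hand-roll them.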
You can simply compute the translation error as the L2 norm: ||t - tgt||. Computing the rotation error is a little more difficult; one way to do it is to use a function such as QuaternionBase::angularDistance() from Eigen, since both measurements should be in the same coordinate frame. To do this, you need to convert the ground truth rotation matrix into a quaternion, using either Eigen or the transformations.py library.
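As a sketch of both error measures in NumPy (assuming rotation matrices rather than quaternions as inputs): the angle of the relative rotation R_gt^T R_est is the same quantity that Eigen's angularDistance() returns, recovered here from the trace identity trace(R) = 1 + 2*cos(theta). The function names are illustrative:

```python
import numpy as np

def translation_error(t_est, t_gt):
    # L2 norm ||t - tgt||
    return np.linalg.norm(t_est - t_gt)

def rotation_error(R_est, R_gt):
    """Angle (radians) of the relative rotation R_gt^T @ R_est;
    equivalent to the quaternion angular distance."""
    R_rel = R_gt.T @ R_est
    # trace(R) = 1 + 2*cos(theta); clip to guard against round-off
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_theta)
```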
Keep in mind that this is the error in the odometry frame - i.e., the error of the i-th estimated pose relative to the i-th ground truth pose, expressed in the frame of the initial pose. It is sometimes also useful to compare the average frame-to-frame error, particularly since odometry tends to drift significantly over time, even if the algorithm is relatively accurate between frames on average.
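The frame-to-frame comparison can be sketched as follows: form the relative motion T_{i-1}^{-1} T_i for both the estimate and the ground truth, then compare those. This assumes poses stored as 4x4 homogeneous matrices; the helper names are illustrative:

```python
import numpy as np

def to_homogeneous(R, t):
    """Stack R (3x3) and t (3,) into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_prev, T_curr):
    # Motion from frame i-1 to frame i
    return np.linalg.inv(T_prev) @ T_curr

def frame_to_frame_errors(poses_est, poses_gt):
    """Per-step translation error between estimated and ground-truth
    relative motions, given equal-length lists of 4x4 poses."""
    errs = []
    for i in range(1, len(poses_est)):
        d_est = relative_pose(poses_est[i - 1], poses_est[i])
        d_gt = relative_pose(poses_gt[i - 1], poses_gt[i])
        errs.append(np.linalg.norm(d_est[:3, 3] - d_gt[:3, 3]))
    return errs
```

Averaging these per-step errors gives a drift-insensitive measure to report alongside the absolute error above.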
In summary:
- convert the rotation matrices to quaternions to compute the angular error (being careful of the coordinate frames),
- and use the formula ||t - tgt|| to compute the translation error.
- Again, be careful of your coordinate frames.