How to transform a 3D model for an augmented reality application using OpenCV Viz and ARUCO

2024-03-26

I am developing a simple marker-based augmented reality application with OpenCV Viz and ARUCO. I just want to visualize a 3D object (in PLY format) on top of a marker.

I can run marker detection and pose estimation with ARUCO without any problems (it returns rotation and translation vectors). I can also visualize an arbitrary 3D object (PLY format) together with the camera frame in a Viz window. However, I am stuck on how to use the rotation and translation vectors returned by ARUCO to position the 3D model on the marker.

I am creating an affine transformation from the rotation and translation vectors and applying it to the 3D model. Is that the right approach? How should I be using the translation and rotation vectors?

My code snippet is below.

#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>
#include <opencv2/viz.hpp>
#include <QDebug>

using namespace cv;
using namespace std;

// Camera calibration outputs (intrinsics loaded from a previous calibration)
cv::Mat cameraMatrix, distCoeffs;
loadIntrinsicCameraParameters(cameraMatrix, distCoeffs);

// Marker dictionary
Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);

viz::Viz3d myWindow("Coordinate Frame");

cv::Mat image;

// Webcam frame pose; without this the frame appears upside-down
Affine3f imagePose(Vec3f(3.14159,0,0), Vec3f(0,0,0));

// Viz viewer pose so that the whole webcam frame is visible
Vec3f cam_pos( 0.0f,0.0f,900.0f), cam_focal_point(0.0f,0.0f,0.0f), cam_y_dir(0.0f,0.0f,0.0f);
Affine3f viewerPose = viz::makeCameraPose(cam_pos, cam_focal_point, cam_y_dir);

// Video capture from source
VideoCapture cap(camID);
int frame_width = cap.get(cv::CAP_PROP_FRAME_WIDTH);
int frame_height = cap.get(cv::CAP_PROP_FRAME_HEIGHT);
cap >> image;

// Load mesh data
viz::WMesh batman(viz::Mesh::load("../data/bat.ply"));
viz::WImage3D img(image, Size2d(frame_width, frame_height));

// Show camera frame, mesh and a coordinate widget (for debugging)
myWindow.showWidget("Image", img);
myWindow.showWidget("Batman", batman);
myWindow.showWidget("Coordinate Widget", viz::WCoordinateSystem(5.0));

myWindow.setFullScreen(true);
myWindow.setViewerPose(viewerPose);

// Rotation vector of 3D model
Mat rot_vec = Mat::zeros(1,3,CV_32F);
cv::Vec3d rvec, tvec;

// ARUCO outputs
float roll, pitch, yaw;
float x, y, z;

while (!myWindow.wasStopped()) {

    if (cap.read(image)) {
        // cap.read() already stored the frame in 'image'; just copy it for drawing
        cv::Mat imageCopy;
        image.copyTo(imageCopy);

        // Marker detection
        std::vector<int> ids;
        std::vector<std::vector<cv::Point2f> > corners;
        cv::aruco::detectMarkers(image, dictionary, corners, ids);

        if (ids.size() > 0){

            // Draw a green line around markers
            cv::aruco::drawDetectedMarkers(imageCopy, corners, ids);
            vector<Vec3d> rvecs, tvecs;

            // Get the rotation and translation vectors of each detected marker
            cv::aruco::estimatePoseSingleMarkers(corners, 0.05, cameraMatrix, distCoeffs, rvecs, tvecs);

            for(int i=0; i<ids.size(); i++){

                cv::aruco::drawAxis(imageCopy, cameraMatrix, distCoeffs, rvecs[i], tvecs[i], 0.1);

                // Take only the first marker's rotation and translation to visualize 3D model on this marker
                rvec = rvecs[0];
                tvec = tvecs[0];

                roll = rvec[0];
                pitch = rvec[1];
                yaw = rvec[2];

                x = tvec[0];
                y = tvec[1];
                z = tvec[2];

                qDebug() << rvec[0] << "," << rvec[1] << "," << rvec[2] << "---" << tvec[0] << "," << tvec[1] << "," << tvec[2];
            }
        }
        // Show camera frame in Viz window
        img.setImage(imageCopy);
        img.setPose(imagePose);
    }

    // Create affine pose from rotation and translation vectors
    rot_vec.at<float>(0,0) = roll;
    rot_vec.at<float>(0,1) = pitch;
    rot_vec.at<float>(0,2) = yaw;

    Mat rot_mat;

    Rodrigues(rot_vec, rot_mat);

    Affine3f pose(rot_mat, Vec3f(x, y, z));

    // Set the pose of 3D model
    batman.setPose(pose);

    myWindow.spinOnce(1, true);
}

I think your problem is that rvec is not roll, pitch, and yaw; it is a Rodrigues rotation vector. To get roll, pitch, and yaw, you have to convert the rotation vector (rvec) into a rotation matrix with OpenCV's Rodrigues function, and then decompose that rotation matrix into Euler angles (roll, pitch, and yaw) with RQDecomp3x3.
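A minimal sketch of that conversion, reusing the rvec, tvec and batman variables from the question's code (building the Viz pose directly from the rotation matrix with Affine3d is one possible way to place the model, shown here as an assumption rather than the only option):

// Convert the Rodrigues rotation vector into a 3x3 rotation matrix
cv::Mat rotMat;
cv::Rodrigues(rvec, rotMat);

// Decompose the rotation matrix into Euler angles (RQDecomp3x3 returns them in degrees)
cv::Mat mtxR, mtxQ;
cv::Vec3d eulerAnglesDeg = cv::RQDecomp3x3(rotMat, mtxR, mtxQ);

// For Viz, the rotation matrix and translation vector can be combined into a pose directly
cv::Affine3d markerPose(rotMat, tvec);
batman.setPose(markerPose);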
