Compute coordinates from source images after stitching

2023-11-25

I use OpenCV's panorama stitching algorithm to stitch 2 or 3 images into a single result image.

I have the coordinates of points in each source image, and I need to compute their new coordinates in the result image.

I describe the algorithm below. My code is similar to the sample stitching_detailed.cpp from OpenCV (branch 3.4). A result_mask of type Mat is produced; maybe that is the solution, but I don't know how to use it. I found a related question here, but it is not about stitching.

Any ideas?

The algorithm is as follows (detailed code: stitching_detailed.cpp):

Find features for each image:

Ptr<FeaturesFinder> finder = makePtr<SurfFeaturesFinder>();
vector<ImageFeatures> features(num_images);
for (int i = 0; i < num_images; ++i)
{
  (*finder)(images[i], features[i]);
}

Make pairwise_matches:

vector<MatchesInfo> pairwise_matches;
Ptr<FeaturesMatcher> matcher = makePtr<BestOf2NearestMatcher>(false, match_conf);
(*matcher)(features, pairwise_matches);

Reorder the images:

vector<int> indices = leaveBiggestComponent(features, pairwise_matches, conf_thresh);
// here some code to reorder 'images'

Estimate the homography-based cameras:

vector<CameraParams> cameras;
Ptr<Estimator> estimator = makePtr<HomographyBasedEstimator>();
(*estimator)(features, pairwise_matches, cameras);

Convert to CV_32F:

for (size_t i = 0; i < cameras.size(); ++i)
{
  Mat R;
  cameras[i].R.convertTo(R, CV_32F);
  cameras[i].R = R;
}

Run a BundleAdjuster:

Ptr<detail::BundleAdjusterBase> adjuster = makePtr<detail::BundleAdjusterRay>();
adjuster->setConfThresh(conf_thresh);
adjuster->setRefinementMask(refine_mask);
(*adjuster)(features, pairwise_matches, cameras);

Compute the warped_image_scale value:

vector<double> focals;
for (size_t i = 0; i < cameras.size(); ++i)
  focals.push_back(cameras[i].focal);
sort(focals.begin(), focals.end());
float warped_image_scale = static_cast<float>(focals[focals.size() / 2 - 1] + focals[focals.size() / 2]) * 0.5f;

Do wave correction:

vector<Mat> rmats;
for (size_t i = 0; i < cameras.size(); ++i)
  rmats.push_back(cameras[i].R.clone());
waveCorrect(rmats, wave_correct);
for (size_t i = 0; i < cameras.size(); ++i)
  cameras[i].R = rmats[i];

Create a warper:

Ptr<WarperCreator> warper_creator = makePtr<cv::SphericalWarper>();
Ptr<RotationWarper> warper = warper_creator->create(static_cast<float>(warped_image_scale * seam_work_aspect));

Create a blender and feed it:

Ptr<Blender> blender;

for (int img_idx = 0; img_idx < num_images; ++img_idx)
{
  full_img = input_imgs[img_idx];
  if (!is_compose_scale_set)
  {
    is_compose_scale_set = true;
    compose_scale = /* … */
  }
  if (abs(compose_scale - 1) > 1e-1)
    resize(full_img, img, Size(), compose_scale, compose_scale, INTER_LINEAR_EXACT);
  else
    img = full_img;

  // Warp the current image
  warper->warp(img, K, cameras[img_idx].R, INTER_LINEAR, BORDER_REFLECT, img_warped);

  // Warp the current image mask
  mask.create(img_size, CV_8U);
  mask.setTo(Scalar::all(255));
  warper->warp(mask, K, cameras[img_idx].R, INTER_NEAREST, BORDER_CONSTANT, mask_warped);

  // Compensate exposure
  compensator->apply(img_idx, corners[img_idx], img_warped, mask_warped);

  dilate(masks_warped[img_idx], dilated_mask, Mat());
  resize(dilated_mask, seam_mask, mask_warped.size(), 0, 0, INTER_LINEAR_EXACT);
  mask_warped = seam_mask & mask_warped;

  if (!blender)
  {
    blender = Blender::createDefault(blend_type, try_gpu);
    Size dst_sz = resultRoi(corners, sizes).size();
    float blend_width = sqrt(static_cast<float>(dst_sz.area())) * blend_strength / 100.f;
    MultiBandBlender *mb = dynamic_cast<MultiBandBlender *>(blender.get());
    mb->setNumBands(static_cast<int>(ceil(log(blend_width) / log(2.)) - 1.));
    blender->prepare(corners, sizes);
  }

  // Blend the current image
blender->feed(img_warped_s, mask_warped, corners[img_idx]);
}

Then, use the blender:

Mat result, result_mask;
blender->blend(result, result_mask);
// The result image is in 'result'

Back when I was a schoolboy, I found opencv/samples/cpp/stitching_detailed.cpp in the OpenCV samples folder. At that time my programming skills were poor, and I couldn't understand it no matter how hard I racked my brain. This question caught my attention and stirred up that memory. After a whole night of hard work and debugging, I finally got it.


Basic steps:

  1. Given three images: blue.png, green.png, and red.png

(image: the three input images, blue.png, green.png, and red.png)

  2. We can get the stitching result (result.png) using stitching_detailed.cpp:

(image: result.png)

(image: result_mask.png)

blender->blend(result, result_mask);
imwrite("result.png", result);
imwrite("result_mask.png", result_mask);
  3. I pick the centers of the three images, compute their corresponding (warped) coordinates on the stitching image, and draw them as solid circles, as follows:

(image: result2.png, the stitched result with the three warped centers drawn as solid circles)


Warping images (auxiliary)...
Compensating exposure...

Blending ...

Warp each center point, and draw solid circle.
[408, 204] => [532, 224]
[408, 204] => [359, 301]
[408, 204] => [727, 320]

Check `result.png`, `result_mask.png` and `result2.png`!

Done!
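
Note that the three source images have the same size, so each center is [408, 204] in its own image, yet the three centers map to three different positions in the panorama, as the output above shows.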

Here is the function calcWarpedPoint that I wrote to compute a warped point on the stitching image. warper->warpPoint() maps a source-image point into the warped plane, and resultRoi(corners, sizes).tl() is the top-left corner of the panorama's bounding box, so subtracting it converts the point to pixel coordinates in the final stitched image:

cv::Point2f calcWarpedPoint(
    const cv::Point2f& pt,
    InputArray K,                // Camera K parameter             
    InputArray R,                // Camera R parameter                
    Ptr<RotationWarper> warper,  // The Rotation Warper    
    const std::vector<cv::Point> &corners,
    const std::vector<cv::Size> &sizes)
{
    // Calculate the warped point using the camera parameters.
    cv::Point2f  dst = warper->warpPoint(pt, K, R);

    // Calculate the stitching image roi using corners and sizes.
    // the corners and sizes have already been calculated.
    cv::Point2f  tl = cv::detail::resultRoi(corners, sizes).tl();

    // Finally, adjust the warped point to the stitching image.
    return cv::Point2f(dst.x - tl.x, dst.y - tl.y);
}

Here is the sample code snippet:

std::cout << "\nWarp each center point, and draw solid circle.\n";
std::vector<cv::Scalar> colors = { {255,0,0}, {0, 255, 0}, {0, 0, 255} };
for (int idx = 0; idx < img_names.size(); ++idx) {
    img = cv::imread(img_names[idx]);
    Mat K;
    cameras[idx].K().convertTo(K, CV_32F);
    Mat R = cameras[idx].R;

    cv::Point2f cpt = cv::Point2f(img.cols / 2, img.rows / 2);
    cv::Point pt = calcWarpedPoint(cpt, K, R, warper, corners, sizes);
    cv::circle(result, pt, 5, colors[idx], -1, cv::LINE_AA);
    std::cout << cpt << " => " << pt << std::endl;
}

std::cout << "\nCheck `result.png`, `result_mask.png` and `result2.png`!\n";
imwrite("result2.png", result);

The complete code:

/*
* Author   : Kinght-金(https://stackoverflow.com/users/3547485/)
* Created  : 2019/03/01 23:00 (CST)
* Finished : 2019/03/01 07:50 (CST)
*
* Modified on opencv401/samples/cpp/stitching_detailed.cpp
* From  https://github.com/opencv/opencv/blob/4.0.1/samples/cpp/stitching_detailed.cpp
*
* 
* Description: A simple opencv(4.0.1) image stitching code for Stack Overflow answers.
* For https://stackoverflow.com/questions/54904718/compute-coordinates-from-source-images-after-stitching/54953792#comment96681412_54953792
*
*/

#include <iostream>
#include <fstream>
#include <string>
#include "opencv2/opencv_modules.hpp"
#include <opencv2/core/utility.hpp>
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/stitching/detail/autocalib.hpp"
#include "opencv2/stitching/detail/blenders.hpp"
#include "opencv2/stitching/detail/camera.hpp"
#include "opencv2/stitching/detail/exposure_compensate.hpp"
#include "opencv2/stitching/detail/matchers.hpp"
#include "opencv2/stitching/detail/motion_estimators.hpp"
#include "opencv2/stitching/detail/seam_finders.hpp"
#include "opencv2/stitching/detail/warpers.hpp"
#include "opencv2/stitching/warpers.hpp"

using namespace std;
using namespace cv;
using namespace cv::detail;

//! img_names are the input image (full) paths
// You can download them using the links below.
//! Blue: https://i.stack.imgur.com/Yz3U1.png
//! Green: https://i.stack.imgur.com/AbUTH.png
//! Red: https://i.stack.imgur.com/9wcGc.png
vector<String> img_names = {"D:/stitching/blue.png", "D:/stitching/green.png", "D:/stitching/red.png"};

//! The function to calculate the warped point on the stitching image.
cv::Point2f calcWarpedPoint(
    const cv::Point2f& pt,
    InputArray K,                // Camera K parameter
    InputArray R,                // Camera R parameter
    Ptr<RotationWarper> warper,  // The Rotation Warper
    const std::vector<cv::Point> &corners,
    const std::vector<cv::Size> &sizes)
{
    // Calculate the wrapped point
    cv::Point2f  dst = warper->warpPoint(pt, K, R);

    // Calculate the stitching image roi using corners and sizes,
    // the corners and sizes have already been calculated.
    cv::Point2f  tl = cv::detail::resultRoi(corners, sizes).tl();

    // Finally adjust the wrapped point
    return cv::Point2f(dst.x - tl.x, dst.y - tl.y);
}


int main(int argc, char* argv[])
{
    double work_megapix = 0.6;
    double seam_megapix = 0.1;
    double compose_megapix = -1;
    float conf_thresh = 1.f;
    float match_conf = 0.3f;
    float blend_strength = 5;


    // Check if have enough images
    int num_images = static_cast<int>(img_names.size());
    if (num_images < 2)
    {
        std::cout << "Need more images\n";
        return -1;
    }

    double work_scale = 1, seam_scale = 1, compose_scale = 1;
    bool is_work_scale_set = false, is_seam_scale_set = false, is_compose_scale_set = false;

    //(1) Create the feature finder
    Ptr<Feature2D> finder = ORB::create();

    // (2) Read the images, scale them appropriately, and compute feature descriptors
    Mat full_img, img;
    vector<ImageFeatures> features(num_images);
    vector<Mat> images(num_images);
    vector<Size> full_img_sizes(num_images);
    double seam_work_aspect = 1;

    for (int i = 0; i < num_images; ++i)
    {
        full_img = imread(img_names[i]);
        full_img_sizes[i] = full_img.size();

        if (full_img.empty())
        {
            cout << "Can't open image " << img_names[i] << std::endl;
            return -1;
        }
        if (!is_work_scale_set)
        {
            work_scale = min(1.0, sqrt(work_megapix * 1e6 / full_img.size().area()));
            is_work_scale_set = true;
        }
        resize(full_img, img, Size(), work_scale, work_scale, INTER_LINEAR_EXACT);

        if (!is_seam_scale_set)
        {
            seam_scale = min(1.0, sqrt(seam_megapix * 1e6 / full_img.size().area()));
            seam_work_aspect = seam_scale / work_scale;
            is_seam_scale_set = true;
        }

        computeImageFeatures(finder, img, features[i]);
        features[i].img_idx = i;
        std::cout << "Features in image #" << i + 1 << ": " << features[i].keypoints.size() << std::endl;

        resize(full_img, img, Size(), seam_scale, seam_scale, INTER_LINEAR_EXACT);
        images[i] = img.clone();
    }

    full_img.release();
    img.release();


    // (3) Create the feature matcher and compute pairwise match info
    vector<MatchesInfo> pairwise_matches;
    Ptr<FeaturesMatcher>  matcher = makePtr<BestOf2NearestMatcher>(false, match_conf);
    (*matcher)(features, pairwise_matches);
    matcher->collectGarbage();

    //! (4) Remove outliers and keep only the largest confident component
    // Leave only images we are sure are from the same panorama
    vector<int> indices = leaveBiggestComponent(features, pairwise_matches, conf_thresh);
    vector<Mat> img_subset;
    vector<String> img_names_subset;
    vector<Size> full_img_sizes_subset;
    for (size_t i = 0; i < indices.size(); ++i)
    {
        img_names_subset.push_back(img_names[indices[i]]);
        img_subset.push_back(images[indices[i]]);
        full_img_sizes_subset.push_back(full_img_sizes[indices[i]]);
    }

    images = img_subset;
    img_names = img_names_subset;
    full_img_sizes = full_img_sizes_subset;

    // Check if we still have enough images
    num_images = static_cast<int>(img_names.size());
    if (num_images < 2)
    {
        std::cout << "Need more images\n";
        return -1;
    }

    //! (5) Estimate the homography
    Ptr<Estimator> estimator = makePtr<HomographyBasedEstimator>();
    vector<CameraParams> cameras;
    if (!(*estimator)(features, pairwise_matches, cameras))
    {
        cout << "Homography estimation failed.\n";
        return -1;
    }

    for (size_t i = 0; i < cameras.size(); ++i)
    {
        Mat R;
        cameras[i].R.convertTo(R, CV_32F);
        cameras[i].R = R;
        std::cout << "\nInitial camera intrinsics #" << indices[i] + 1 << ":\nK:\n" << cameras[i].K() << "\nR:\n" << cameras[i].R << std::endl;
    }

    //(6) Create the bundle adjuster
    Ptr<detail::BundleAdjusterBase> adjuster = makePtr<detail::BundleAdjusterRay>();
    adjuster->setConfThresh(conf_thresh);
    Mat_<uchar> refine_mask = Mat::zeros(3, 3, CV_8U);
    refine_mask(0, 0) = 1;
    refine_mask(0, 1) = 1;
    refine_mask(0, 2) = 1;
    refine_mask(1, 1) = 1;
    refine_mask(1, 2) = 1;
    adjuster->setRefinementMask(refine_mask);
    if (!(*adjuster)(features, pairwise_matches, cameras))
    {
        cout << "Camera parameters adjusting failed.\n";
        return -1;
    }

    // Find median focal length
    vector<double> focals;
    for (size_t i = 0; i < cameras.size(); ++i)
    {
        focals.push_back(cameras[i].focal);
    }

    sort(focals.begin(), focals.end());
    float warped_image_scale;
    if (focals.size() % 2 == 1)
        warped_image_scale = static_cast<float>(focals[focals.size() / 2]);
    else
        warped_image_scale = static_cast<float>(focals[focals.size() / 2 - 1] + focals[focals.size() / 2]) * 0.5f;


    std::cout << "\nWarping images (auxiliary)... \n";

    vector<Point> corners(num_images);
    vector<UMat> masks_warped(num_images);
    vector<UMat> images_warped(num_images);
    vector<Size> sizes(num_images);
    vector<UMat> masks(num_images);

    // Prepare image masks
    for (int i = 0; i < num_images; ++i)
    {
        masks[i].create(images[i].size(), CV_8U);
        masks[i].setTo(Scalar::all(255));
    }

    // Warp images and their masks
    Ptr<WarperCreator> warper_creator = makePtr<cv::CylindricalWarper>();
    if (!warper_creator)
    {
        cout << "Can't create the warper \n";
        return 1;
    }

    //! Create RotationWarper
    Ptr<RotationWarper> warper = warper_creator->create(static_cast<float>(warped_image_scale * seam_work_aspect));

    //! Calculate warped corners/sizes/mask
    for (int i = 0; i < num_images; ++i)
    {
        Mat_<float> K;
        cameras[i].K().convertTo(K, CV_32F);
        float swa = (float)seam_work_aspect;
        K(0, 0) *= swa; K(0, 2) *= swa;
        K(1, 1) *= swa; K(1, 2) *= swa;
        corners[i] = warper->warp(images[i], K, cameras[i].R, INTER_LINEAR, BORDER_REFLECT, images_warped[i]);
        sizes[i] = images_warped[i].size();
        warper->warp(masks[i], K, cameras[i].R, INTER_NEAREST, BORDER_CONSTANT, masks_warped[i]);
    }

    vector<UMat> images_warped_f(num_images);
    for (int i = 0; i < num_images; ++i)
        images_warped[i].convertTo(images_warped_f[i], CV_32F);

    std::cout << "Compensating exposure... \n";

    //! Estimate exposure and compensate the images to reduce brightness differences
    Ptr<ExposureCompensator> compensator = ExposureCompensator::createDefault(ExposureCompensator::GAIN_BLOCKS);
    if (dynamic_cast<BlocksCompensator*>(compensator.get()))
    {
        BlocksCompensator* bcompensator = dynamic_cast<BlocksCompensator*>(compensator.get());
        bcompensator->setNrFeeds(1);
        bcompensator->setNrGainsFilteringIterations(2);
        bcompensator->setBlockSize(32, 32);
    }

    compensator->feed(corners, images_warped, masks_warped);

    Ptr<SeamFinder> seam_finder = makePtr<detail::GraphCutSeamFinder>(GraphCutSeamFinderBase::COST_COLOR);
    seam_finder->find(images_warped_f, corners, masks_warped);

    // Release unused memory
    images.clear();
    images_warped.clear();
    images_warped_f.clear();
    masks.clear();

    Mat img_warped, img_warped_s;
    Mat dilated_mask, seam_mask, mask, mask_warped;
    Ptr<Blender> blender;
    double compose_work_aspect = 1;

    for (int img_idx = 0; img_idx < num_images; ++img_idx)
    {
        // Read image and resize it if necessary
        full_img = imread(img_names[img_idx]);
        if (!is_compose_scale_set)
        {
            is_compose_scale_set = true;
            compose_work_aspect = compose_scale / work_scale;

            // Update warped image scale
            warped_image_scale *= static_cast<float>(compose_work_aspect);
            warper = warper_creator->create(warped_image_scale);

            // Update corners and sizes
            for (int i = 0; i < num_images; ++i)
            {
                cameras[i].focal *= compose_work_aspect;
                cameras[i].ppx *= compose_work_aspect;
                cameras[i].ppy *= compose_work_aspect;

                Size sz = full_img_sizes[i];
                if (std::abs(compose_scale - 1) > 1e-1)
                {
                    sz.width = cvRound(full_img_sizes[i].width * compose_scale);
                    sz.height = cvRound(full_img_sizes[i].height * compose_scale);
                }

                Mat K;
                cameras[i].K().convertTo(K, CV_32F);
                Rect roi = warper->warpRoi(sz, K, cameras[i].R);

                corners[i] = roi.tl();
                sizes[i] = roi.size();
            }
        }

        if (abs(compose_scale - 1) > 1e-1)
            resize(full_img, img, Size(), compose_scale, compose_scale, INTER_LINEAR_EXACT);
        else
            img = full_img;
        full_img.release();
        Size img_size = img.size();

        Mat K, R;
        cameras[img_idx].K().convertTo(K, CV_32F);
        R = cameras[img_idx].R;

        // Warp the current image : img => img_warped
        warper->warp(img, K, cameras[img_idx].R, INTER_LINEAR, BORDER_REFLECT, img_warped);

        // Warp the current image mask
        mask.create(img_size, CV_8U);
        mask.setTo(Scalar::all(255));
        warper->warp(mask, K, cameras[img_idx].R, INTER_NEAREST, BORDER_CONSTANT, mask_warped);

        compensator->apply(img_idx, corners[img_idx], img_warped, mask_warped);
        img_warped.convertTo(img_warped_s, CV_16S);
        img_warped.release();
        img.release();
        mask.release();

        dilate(masks_warped[img_idx], dilated_mask, Mat());
        resize(dilated_mask, seam_mask, mask_warped.size(), 0, 0, INTER_LINEAR_EXACT);
        mask_warped = seam_mask & mask_warped;

        if (!blender)
        {
            blender = Blender::createDefault(Blender::MULTI_BAND, false);
            Size dst_sz = resultRoi(corners, sizes).size();
            float blend_width = sqrt(static_cast<float>(dst_sz.area())) * blend_strength / 100.f;
            if (blend_width < 1.f){
                blender = Blender::createDefault(Blender::NO, false);
            }
            else
            {
                MultiBandBlender* mb = dynamic_cast<MultiBandBlender*>(blender.get());
                mb->setNumBands(static_cast<int>(ceil(log(blend_width) / log(2.)) - 1.));
            }
            blender->prepare(corners, sizes);
        }

        blender->feed(img_warped_s, mask_warped, corners[img_idx]);
    }

    /* ===========================================================================*/
    // Blend image
    std::cout << "\nBlending ...\n";
    Mat result, result_mask;
    blender->blend(result, result_mask);
    imwrite("result.png", result);
    imwrite("result_mask.png", result_mask);

    std::cout << "\nWarp each center point, and draw solid circle.\n";
    std::vector<cv::Scalar> colors = { {255,0,0}, {0, 255, 0}, {0, 0, 255} };
    for (int idx = 0; idx < img_names.size(); ++idx) {
        img = cv::imread(img_names[idx]);
        Mat K;
        cameras[idx].K().convertTo(K, CV_32F);
        Mat R = cameras[idx].R;

        cv::Point2f cpt = cv::Point2f(img.cols / 2, img.rows / 2);
        cv::Point pt = calcWarpedPoint(cpt, K, R, warper, corners, sizes);
        cv::circle(result, pt, 5, colors[idx], -1, cv::LINE_AA);
        std::cout << cpt << " => " << pt << std::endl;
    }

    std::cout << "\nCheck `result.png`, `result_mask.png` and `result2.png`!\n";
    imwrite("result2.png", result);

    std::cout << "\nDone!\n";
    /* ===========================================================================*/

    return 0;
}
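
A note on building: assuming the file is saved as, say, stitching_points.cpp and pkg-config knows your OpenCV 4 install (both assumptions, not part of the original answer), a command along the lines of g++ -std=c++11 stitching_points.cpp -o stitching_points $(pkg-config --cflags --libs opencv4) should compile it.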

Some links that may be useful:

  1. stitching_detailed.cpp : https://github.com/opencv/opencv/blob/4.0.1/samples/cpp/stitching_detailed.cpp

  2. warper->warp(), warpPoint(), warpRoi() https://github.com/opencv/opencv/blob/master/modules/stitching/src/warpers.cpp#L153

  3. resultRoi() https://github.com/opencv/opencv/blob/master/modules/stitching/src/util.cpp#L116


Other links that may be interesting:

  1. Converting OpenCV remap code from C++ to Python

  2. Segmenting text lines in a scanned document

  3. How to use the relation between Flann matches to determine a sensible homography?
