OpenCV GPU Module (CUDA) Functions

2023-10-28

The OpenCV GPU module is a set of classes and functions to utilize GPU computational capabilities. It is implemented using the NVIDIA CUDA Runtime API and supports only NVIDIA GPUs.
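
Before the list, a minimal usage sketch of the device-management and GpuMat entries described below, assuming an OpenCV 2.x build with the gpu module; the input file name is hypothetical:

```cpp
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>   // the pre-3.0 gpu module described in this list

int main()
{
    // Device management: make sure a CUDA-enabled device is available.
    if (cv::gpu::getCudaEnabledDeviceCount() == 0)
    {
        std::cerr << "No CUDA-enabled device found" << std::endl;
        return 1;
    }
    cv::gpu::setDevice(0);
    cv::gpu::DeviceInfo info(0);
    std::cout << info.name() << " (compute "
              << info.majorVersion() << "." << info.minorVersion() << ")" << std::endl;

    // Basic GpuMat round trip: upload, process on the GPU, download.
    cv::Mat host = cv::imread("input.png", cv::IMREAD_GRAYSCALE);  // hypothetical input file
    cv::gpu::GpuMat d_src, d_dst;
    d_src.upload(host);
    cv::gpu::threshold(d_src, d_dst, 128.0, 255.0, cv::THRESH_BINARY);
    cv::Mat result;
    d_dst.download(result);
    return 0;
}
```

The program must be linked against an OpenCV build whose gpu module was compiled with CUDA support (the opencv_gpu library in 2.4).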

 

1.      getCudaEnabledDeviceCount:returns the number of installed CUDA-enabled devices;

2.      setDevice:sets a device and initializes it for the current thread;

3.      getDevice:returns the current device index set or initialized by default;

4.      resetDevice:explicitly destroys and cleans up all resources associated with the current device in the current process;

5.      FeatureSet:enumeration providing GPU computing features;

6.      class::TargetArchs:class providing a set of static methods to check what NVIDIA card architecture the GPU module was built for;

7.      class::DeviceInfo:class providing functionality for querying the specified GPU properties;

8.      DeviceInfo::name:returns the device name;

9.      DeviceInfo::majorVersion:returns the major compute capability version;

10.  DeviceInfo::minorVersion:returns the minor compute capability version;

11.  DeviceInfo::multiProcessorCount:returns the number of streaming multiprocessors;

12.  DeviceInfo::freeMemory:returns the amount of free memory in bytes;

13.  DeviceInfo::totalMemory:returns the amount of total memory in bytes;

14.  DeviceInfo::supports:provides information on GPU feature support;

15.  DeviceInfo::isCompatible:checks the GPU module and device compatibility;

16.  DeviceInfo::deviceID:returns system index of the GPU device starting with 0;

17.  struct::PtrStepSz:lightweight class encapsulating pitched memory on a GPU and passed to nvcc-compiled code(CUDA kernels);

18.  struct::PtrStep:structure similar to gpu::PtrStepSz but containing only a pointer and row step;

19.  class::GpuMat:base storage class for GPU memory with reference counting. Its interface matches the Mat interface;

20.  createContinuous:creates a continuous matrix in the GPU memory;

21.  ensureSizeIsEnough:ensures that the size of a matrix is big enough and the matrix has a proper type;

22.  registerPageLocked:page-locks the memory of matrix and maps it for the device(s);

23.  unregisterPageLocked:unmaps the memory of matrix and makes it pageable again;

24.  class::CudaMem:class with reference counting wrapping special memory type allocation functions from CUDA. Its interface is also Mat()-like but with additional memory type parameters;

25.  CudaMem::createMatHeader:creates a header without reference counting to gpu::CudaMem data;

26.  CudaMem::createGpuMatHeader:maps CPU memory to GPU address space and creates the gpu::GpuMat header without reference counting for it;

27.  CudaMem::canMapHostMemory:returns true if the current hardware supports address space mapping and ALLOC_ZEROCOPY memory allocation;

28.  class::Stream:this class encapsulates a queue of asynchronous calls. Some functions have overloads with the additional gpu::Stream parameter;

29.  Stream::queryIfComplete:returns true if the current stream queue is finished, otherwise, it returns false;

30.  Stream::waitForCompletion:blocks the current CPU thread until all operations in the stream are complete;

31.  Stream::enqueueDownload:copies data from device to host;

32.  Stream::enqueueUpload:copies data from host to device;

33.  Stream::enqueueCopy:copies data from device to device;

34.  Stream::enqueueMemSet:initializes or sets device memory to a value;

35.  Stream::enqueueConvert:converts matrix type, e.g. from float to uchar, depending on type;

36.  Stream::enqueueHostCallback:adds a callback to be called on the host after all currently enqueued items in the stream have completed;

37.  struct::StreamAccessor:class that enables getting cudaStream_t from gpu::Stream; it is declared in stream_accessor.hpp because that is the only public header that depends on the CUDA Runtime API;

38.  gemm(cv::gemm):performs generalized matrix multiplication;

39.  transpose(cv::transpose):transposes a matrix;

40.  flip(cv::flip):flips a 2D matrix around vertical, horizontal, or both axes;

41.  LUT(cv::LUT):transforms the source matrix into the destination matrix using the given look-up table:dst(I) = lut(src(I));

42.  merge(cv::merge):makes a multi-channel matrix out of several single-channel matrices;

43.  split(cv::split):copies each plane of a multi-channel matrix into an array;

44.  magnitude(cv::magnitude):computes magnitudes of complex matrix elements;

45.  magnitudeSqr:computes squared magnitudes of complex matrix elements;

46.  phase(cv::phase):computes polar angles of complex matrix elements;

47.  cartToPolar(cv::cartToPolar):converts Cartesian coordinates into polar;

48.  polarToCart(cv::polarToCart):converts polar coordinates into Cartesian;

49.  normalize(cv::normalize):normalizes the norm or value range of an array;

50.  add(cv::add):computes a matrix-matrix or matrix-scalar sum;

51.  subtract(cv::subtract):computes a matrix-matrix or matrix-scalar difference;

52.  multiply(cv::multiply):computes a matrix-matrix or matrix-scalar per-element product;

53.  divide(cv::divide):computes a matrix-matrix or matrix-scalar division;

54.  addWeighted(cv::addWeighted):computes the weighted sum of two arrays;

55.  abs(cv::abs):computes an absolute value of each matrix element;

56.  sqr:computes a square value of each matrix element;

57.  sqrt(cv::sqrt):computes a square root of each matrix element;

58.  exp(cv::exp):computes an exponent of each matrix element;

59.  log(cv::log):computes a natural logarithm of absolute value of each matrix element;

60.  pow(cv::pow):raises every matrix element to a power;

61.  absdiff(cv::absdiff):computes per-element absolute difference of two matrices(or of a matrix and scalar);

62.  compare(cv::compare):compares elements of two matrices;

63.  bitwise_not(cv::bitwise_not):performs a per-element bitwise inversion;

64.  bitwise_or(cv::bitwise_or):performs a per-element bitwise disjunction of two matrices or of matrix and scalar;

65.  bitwise_and(cv::bitwise_and):performs a per-element bitwise conjunction of two matrices or of matrix and scalar;

66.  bitwise_xor(cv::bitwise_xor):performs a per-element bitwise exclusive-or operation of two matrices or of a matrix and scalar;

67.  rshift:performs pixel by pixel right shift of an image by a constant value;

68.  lshift:performs pixel by pixel left shift of an image by a constant value;

69.  min(cv::min):computes the per-element minimum of two matrices(or a matrix and a scalar);

70.  max(cv::max):computes the per-element maximum of two matrices(or a matrix and a scalar);

71.  meanShiftFiltering:performs mean-shift filtering for each point of the source image;

72.  meanShiftProc(gpu::meanShiftFiltering):performs a mean-shift procedure and stores information about processed points(their colors and positions) in two images;

73.  meanShiftSegmentation:performs a mean-shift segmentation of the source image and eliminates small segments;

74.  integral(cv::integral):computes an integral image;

75.  sqrIntegral:computes a squared integral image;

76.  columnSum:computes a vertical(column) sum;

77.  cornerHarris(cv::cornerHarris):computes the Harris cornerness criteria at each image pixel;

78.  cornerMinEigenVal(cv::cornerMinEigenVal):computes the minimum eigenvalue of a 2x2 derivative covariance matrix at each pixel(the cornerness criteria);

79.  mulSpectrums(cv::mulSpectrums):performs a per-element multiplication of two Fourier spectrums;

80.  mulAndScaleSpectrums(cv::mulSpectrums):performs a per-element multiplication of two Fourier spectrums and scales the result;

81.  dft(cv::dft):performs a forward or inverse discrete Fourier transform(1D or 2D) of the floating-point matrix;

82.  struct::ConvolveBuf:class providing a memory buffer for the gpu::convolve() function, plus it allows adjusting some specific parameters;

83.  ConvolveBuf::create:constructs a buffer for the gpu::convolve() function with respective arguments;

84.  convolve(gpu::filter2D):computes a convolution (or cross-correlation) of two images;

85.  struct::MatchTemplateBuf:class providing memory buffers for the gpu::matchTemplate() function, plus it allows adjusting some specific parameters;

86.  matchTemplate(cv::matchTemplate):computes a proximity map for a raster template and an image where the template is searched for;

87.  remap(cv::remap):applies a generic geometrical transformation to an image;

88.  cvtColor(cv::cvtColor):converts an image from one color space to another;

89.  swapChannels:exchanges the color channels of an image in-place;

90.  threshold(cv::threshold):applies a fixed-level threshold to each array element;

91.  resize(cv::resize):resizes an image;

92.  warpAffine(cv::warpAffine):applies an affine transformation to an image;

93.  buildWarpAffineMaps(gpu::warpAffine,gpu::remap):builds transformation maps for affine transformation;

94.  warpPerspective(cv::warpPerspective):applies a perspective transformation to an image;

95.  buildWarpPerspectiveMaps(gpu::warpPerspective,gpu::remap):builds transformation maps for perspective transformation;

96.  rotate(gpu::warpAffine):rotates an image around the origin(0,0) and then shifts it;

97.  copyMakeBorder(cv::copyMakeBorder):forms a border around an image;

98.  rectStdDev:computes a standard deviation of integral images;

99.  evenLevels:computes levels with even distribution;

100.  histEven:calculates a histogram with evenly distributed bins;

101.  histRange:calculates a histogram with bins determined by the levels array;

102.  calcHist:calculates a histogram for a one-channel 8-bit image;

103.  equalizeHist(cv::equalizeHist):equalizes the histogram of a grayscale image;

104.  buildWarpPlaneMaps:builds plane warping maps;

105.  buildWarpCylindricalMaps:builds cylindrical warping maps;

106.  buildWarpSphericalMaps:builds spherical warping maps;

107.  pyrDown(cv::pyrDown):smoothes an image and downsamples it;

108.  pyrUp(cv::pyrUp):upsamples an image and then smoothes it;

109.  blendLinear:performs linear blending of two images;

110.  bilateralFilter(cv::bilateralFilter):performs bilateral filtering of passed image;

111.  nonLocalMeans(cv::fastNlMeansDenoising):performs pure non-local means denoising without any simplification, and thus it is not fast;

112.  class::FastNonLocalMeansDenoising:the class implements the fast approximate non-local means denoising algorithm;

113.  FastNonLocalMeansDenoising::simpleMethod(cv::fastNlMeansDenoising):performs image denoising using the non-local means denoising algorithm;

114.  FastNonLocalMeansDenoising::labMethod(cv::fastNlMeansDenoisingColored):modification of FastNonLocalMeansDenoising::simpleMethod for color images;

115.  alphaComp:composites two images using alpha opacity values contained in each image;

116.  Canny(cv::Canny):finds edges in an image using the Canny algorithm;

117.  HoughLines(cv::HoughLines):finds lines in a binary image using the classical Hough transform;

118.  HoughLinesDownload(gpu::HoughLines):downloads results from gpu::HoughLines to host memory;

119.  HoughCircles(cv::HoughCircles):finds circles in a grayscale image using the Hough transform;

120.  HoughCirclesDownload(gpu::HoughCircles):downloads results from gpu::HoughCircles to host memory;

121.  meanStdDev(cv::meanStdDev):computes a mean value and a standard deviation of matrix elements;

122.  norm(cv::norm):returns the norm of a matrix(or difference of two matrices );

123.  sum(cv::sum):returns the sum of matrix elements;

124.  absSum:returns the sum of absolute values for matrix elements;

125.  sqrSum:returns the squared sum of matrix elements;

126.  minMax(cv::minMaxLoc):finds global minimum and maximum matrix elements and returns their values;

127.  minMaxLoc(cv::minMaxLoc):finds global minimum and maximum matrix elements and returns their values with locations;

128.  countNonZero(cv::countNonZero):counts non-zero matrix elements;

129.  reduce(cv::reduce):reduces a matrix to a vector;

130.  struct::HOGDescriptor:the class implements the Histogram of Oriented Gradients object detector (a usage sketch follows this list);

131.  HOGDescriptor::getDescriptorSize:returns the number of coefficients required for the classification;

132.  HOGDescriptor::getBlockHistogramSize:returns the block histogram size;

133.  HOGDescriptor::setSVMDetector:sets coefficients for the linear SVM classifier;

134.  HOGDescriptor::getDefaultPeopleDetector:returns coefficients of the classifier trained for people detection(for default window size);

135.  HOGDescriptor::getPeopleDetector48x96:returns coefficients of the classifier trained for people detection(for 48x96 windows);

136.  HOGDescriptor::getPeopleDetector64x128:returns coefficients of the classifier trained for people detection(for 64x128 windows);

137.  HOGDescriptor::detect:performs object detection without a multi-scale window;

138.  HOGDescriptor::detectMultiScale:performs object detection with a multi-scale window;

139.  HOGDescriptor::getDescriptors:returns block descriptors computed for the whole image;

140.  class::CascadeClassifier_GPU:cascade classifier class used for object detection, supports HAAR and LBP cascades;

141.  CascadeClassifier_GPU::empty:checks whether the classifier is loaded or not;

142.  CascadeClassifier_GPU::load:loads the classifier from a file, the previous content is destroyed;

143.  CascadeClassifier_GPU::release:destroys the loaded classifier;

144.  CascadeClassifier_GPU::detectMultiScale(cv::CascadeClassifier::detectMultiScale):detects objects of different sizes in the input image;

145.  class::FAST_GPU(cv::FAST):class used for corner detection using the FAST algorithm;

146.  FAST_GPU::operator():finds the key points using the FAST detector;

147.  FAST_GPU::downloadKeypoints:downloads key points from GPU to CPU memory;

148.  FAST_GPU::convertKeypoints:converts key points from GPU representation to a vector of KeyPoint;

149.  FAST_GPU::release:releases inner buffer memory;

150.  FAST_GPU::calcKeyPointsLocation:finds keypoints and computes their response if nonmaxSupression is true;

151.  FAST_GPU::getKeyPoints:gets the final array of keypoints;

152.  class::ORB_GPU:class for extracting ORB features and descriptors from an image;

153.  ORB_GPU::operator():detects keypoints and computes descriptors for them;

154.  ORB_GPU::downloadKeyPoints:downloads keypoints from GPU to CPU memory;

155.  ORB_GPU::convertKeyPoints:converts keypoints from GPU representation to a vector of KeyPoint;

156.  ORB_GPU::release:releases inner buffer memory;

157.  class::BruteForceMatcher_GPU_base(cv::DescriptorMatcher,cv::BFMatcher):brute-force descriptor matcher; for each descriptor in the first set, this matcher finds the closest descriptor in the second set by trying each one; this descriptor matcher supports masking permissible matches between descriptor sets;

158.  BruteForceMatcher_GPU_base::match(cv::DescriptorMatcher::match):finds the best match for each descriptor from a query set with train descriptors;

159.  BruteForceMatcher_GPU_base::makeGpuCollection:performs a GPU collection of train descriptors and masks in a suitable format for the gpu::BruteForceMatcher_GPU_base::matchCollection function;

160.  BruteForceMatcher_GPU_base::matchDownload:downloads matrices obtained via gpu::BruteForceMatcher_GPU_base::matchSingle or gpu::BruteForceMatcher_GPU_base::matchCollection to a vector with DMatch;

161.  BruteForceMatcher_GPU_base::matchConvert:converts matrices obtained via gpu::BruteForceMatcher_GPU_base::matchSingle or gpu::BruteForceMatcher_GPU_base::matchCollection to a vector with DMatch;

162.  BruteForceMatcher_GPU_base::knnMatch(cv::DescriptorMatcher::knnMatch):finds the k best matches for each descriptor from a query set with train descriptors;

163.  BruteForceMatcher_GPU_base::knnMatchDownload:downloads matrices obtained via gpu::BruteForceMatcher_GPU_base::knnMatchSingle or gpu::BruteForceMatcher_GPU_base::knnMatch2Collection to a vector with DMatch;

164.  BruteForceMatcher_GPU_base::knnMatchConvert:converts matrices obtained via gpu::BruteForceMatcher_GPU_base::knnMatchSingle or gpu::BruteForceMatcher_GPU_base::knnMatch2Collection to a CPU vector with DMatch;

165.  BruteForceMatcher_GPU_base::radiusMatch(cv::DescriptorMatcher::radiusMatch):for each query descriptor, finds the best matches with a distance less than a given threshold;

166.  BruteForceMatcher_GPU_base::radiusMatchDownload:downloads matrices obtained via gpu::BruteForceMatcher_GPU_base::radiusMatchSingle or gpu::BruteForceMatcher_GPU_base::radiusMatchCollection to a vector with DMatch;

167.  BruteForceMatcher_GPU_base::radiusMatchConvert:converts matrices obtained via gpu::BruteForceMatcher_GPU_base::radiusMatchSingle or gpu::BruteForceMatcher_GPU_base::radiusMatchCollection to a vector with DMatch;

168.  class::BaseRowFilter_GPU:base class for linear or non-linear filters that process rows of 2D arrays; such filters are used for the “horizontal” filtering passes in separable filters;

169.  class::BaseColumnFilter_GPU:base class for linear or non-linear filters that process columns of 2D arrays; such filters are used for the “vertical” filtering passes in separable filters;

170.  class::BaseFilter_GPU:base class for non-separable 2D filters;

171.  class::FilterEngine_GPU:base class for the Filter Engine;

172.  createFilter2D_GPU(gpu::createBoxFilter_GPU):creates a non-separable filter engine with the specified filter;

173.  createSeparableFilter_GPU:creates a separable filter engine with the specified filters;

174.  getRowSumFilter_GPU:creates a horizontal 1D box filter;

175.  getColumnSumFilter_GPU:creates a vertical 1D box filter;

176.  createBoxFilter_GPU(cv::boxFilter):creates a normalized 2D box filter;

177.  boxFilter(cv::boxFilter):smooths the image using the normalized box filter;

178.  blur(cv::blur, gpu::boxFilter):acts as a synonym for the normalized box filter;

179.  createMorphologyFilter_GPU(cv::createMorphologyFilter):creates a 2D morphological filter;

180.  erode(cv::erode):erodes an image by using a specific structuring element;

181.  dilate(cv::dilate):dilates an image by using a specific structuring element;

182.  morphologyEx(cv::morphologyEx):applies an advanced morphological operation to an image;

183.  createLinearFilter_GPU(cv::createLinearFilter):creates a non-separable linear filter;

184.  filter2D(cv::filter2D, gpu::convolve):applies the non-separable 2D linear filter to an image;

185.  Laplacian(cv::Laplacian, gpu::filter2D):applies the Laplacian operator to an image;

186.  getLinearRowFilter_GPU(cv::createSeparableLinearFilter):creates a primitive row filter with the specified kernel;

187.  getLinearColumnFilter_GPU(cv::createSeparableLinearFilter):creates a primitive column filter with the specified kernel;

188.  createSeparableLinearFilter_GPU(cv::createSeparableLinearFilter):creates a separable linear filter engine;

189.  sepFilter2D(cv::sepFilter2D):applies a separable 2D linear filter to an image;

190.  createDerivFilter_GPU(cv::createDerivFilter):creates a filter engine for the generalized Sobel operator;

191.  Sobel(cv::Sobel):applies the generalized Sobel operator to an image;

192.  Scharr(cv::Scharr):calculates the first x- or y- image derivative using the Scharr operator;

193.  createGaussianFilter_GPU(cv::createGaussianFilter):creates a Gaussian filter engine;

194.  GaussianBlur(cv::GaussianBlur):smooths an image using the Gaussian filter (a usage sketch follows this list);

195.  getMaxFilter_GPU:creates the maximum filter;

196.  getMinFilter_GPU:creates the minimum filter;

197.  class::StereoBM_GPU:class computing stereo correspondence(disparity map) using the block matching algorithm (a usage sketch follows this list);

198.  StereoBM_GPU::operator:enables the stereo correspondence operator that finds the disparity for the specified rectified stereo pair;

199.  StereoBM_GPU::checkIfGpuCallReasonable:uses a heuristic method to estimate whether the current GPU is faster than the CPU for this algorithm; it queries the currently active device;

200.  class::StereoBeliefPropagation:class computing stereo correspondence using the belief propagation algorithm;

201.  StereoBeliefPropagation::estimateRecommendedParams:uses a heuristic method to compute the recommended parameters(ndisp, iters and levels) for the specified image size(width and height);

202.  StereoBeliefPropagation::operator:enables the stereo correspondence operator that finds the disparity for the specified rectified stereo pair or data cost;

203.  class::StereoConstantSpaceBP:class computing stereo correspondence using the constant space belief propagation algorithm;

204.  StereoConstantSpaceBP::estimateRecommendedParams:uses a heuristic to compute parameters(ndisp, iters, levels and nrplane) for the specified image size(width and height);

205.  StereoConstantSpaceBP::operator:enables the stereo correspondence operator that finds the disparity for the specified rectified stereo pair;

206.  class::DisparityBilateralFilter:class refining a disparity map using joint bilateral filtering;

207.  DisparityBilateralFilter::operator:refines a disparity map using joint bilateral filtering;

208.  drawColorDisp:colors a disparity image;

209.  reprojectImageTo3D(cv::reprojectImageTo3D):reprojects a disparity image to 3D space;

210.  solvePnPRansac(cv::solvePnPRansac):finds the object pose from 3D-2D point correspondences;

211.  class::BroxOpticalFlow:class computing the optical flow for two images using the Brox et al. optical flow algorithm;

212.  class::GoodFeaturesToTrackDetector_GPU(cv::goodFeaturesToTrack):class used for strong corner detection on an image;

213.  GoodFeaturesToTrackDetector_GPU::operator(cv::goodFeaturesToTrack):finds the most prominent corners in the image;

214.  GoodFeaturesToTrackDetector_GPU::releaseMemory:releases inner buffers memory;

215.  class::FarnebackOpticalFlow:class computing a dense optical flow using Gunnar Farneback's algorithm;

216.  FarnebackOpticalFlow::operator(cv::calcOpticalFlowFarneback):computes a dense optical flow using Gunnar Farneback's algorithm;

217.  FarnebackOpticalFlow::releaseMemory:releases unused auxiliary memory buffers;

218.  class::PyrLKOpticalFlow(cv::calcOpticalFlowPyrLK):class used for calculating an optical flow;

219.  PyrLKOpticalFlow::sparse:calculates an optical flow for a sparse feature set;

220.  PyrLKOpticalFlow::dense:calculates a dense optical flow;

221.  PyrLKOpticalFlow::releaseMemory:releases inner buffers memory;

222.  interpolateFrames:interpolates frames(images) using the provided optical flow(displacement field);

223.  class::FGDStatModel:class used for background/foreground segmentation;

224.  FGDStatModel::create:initializes the background model;

225.  FGDStatModel::release:releases all inner buffers' memory;

226.  FGDStatModel::update:updates the background model and returns the foreground regions count;

227.  class::MOG_GPU(cv::BackgroundSubtractorMOG):Gaussian mixture-based background/foreground segmentation algorithm;

228.  MOG_GPU::operator:updates the background model and returns the foreground mask;

229.  MOG_GPU::getBackgroundImage:computes a background image;

230.  MOG_GPU::release:releases all inner buffers' memory;

231.  class::MOG2_GPU(cv::BackgroundSubtractorMOG2):Gaussian mixture-based background/foreground segmentation algorithm;

232.  MOG2_GPU::operator:updates the background model and returns the foreground mask;

233.  MOG2_GPU::getBackgroundImage:computes a background image;

234.  MOG2_GPU::release:releases all inner buffers' memory;

235.  class::GMG_GPU:class used for background/foreground segmentation;

236.  GMG_GPU::initialize:initializes the background model and allocates all inner buffers;

237.  GMG_GPU::operator:updates the background model and returns the foreground mask;

238.  GMG_GPU::release:releases all inner buffers' memory;

239.  class::VideoWriter_GPU:video writer class;

240.  VideoWriter_GPU::open:initializes or reinitializes the video writer;

241.  VideoWriter_GPU::isOpened:returns true if the video writer has been successfully initialized;

242.  VideoWriter_GPU::close:releases the video writer;

243.  VideoWriter_GPU::write:writes the next video frame;

244.  struct::VideoWriter_GPU::EncoderParams:different parameters for the CUDA video encoder;

245.  VideoWriter_GPU::EncoderParams::load:reads parameters from a config file;

246.  VideoWriter_GPU::EncoderParams::save:saves parameters to a config file;

247.  class::VideoWriter_GPU::EncoderCallBack:callbacks for the CUDA video encoder;

248.  VideoWriter_GPU::EncoderCallBack::acquireBitStream:callback function to signal the start of a bitstream that is to be encoded;

249.  VideoWriter_GPU::EncoderCallBack::releaseBitStream:callback function to signal that the encoded bitstream is ready to be written to file;

250.  VideoWriter_GPU::EncoderCallBack::onBeginFrame:callback function to signal that the encoding operation on the frame has started;

251.  VideoWriter_GPU::EncoderCallBack::onEndFrame:callback function to signal that the encoding operation on the frame has finished;

252.  class::VideoReader_GPU:class for reading video from files;

253.  VideoReader_GPU::Codec:video codecs supported by gpu::VideoReader_GPU;

254.  VideoReader_GPU::ChromaFormat:chroma formats supported by gpu::VideoReader_GPU;

255.  VideoReader_GPU::FormatInfo:struct providing information about video file format;

256.  VideoReader_GPU::open:initializes or reinitializes the video reader;

257.  VideoReader_GPU::isOpened:returns true if the video reader has been successfully initialized;

258.  VideoReader_GPU::close:releases the video reader;

259.  VideoReader_GPU::read:grabs, decodes and returns the next video frame;

260.  VideoReader_GPU::format:returns information about video file format;

261.  VideoReader_GPU::dumpFormat:dumps information about video file format to the specified stream;

262.  class::VideoReader_GPU::VideoSource:interface for video demultiplexing;

263.  VideoReader_GPU::VideoSource::format:returns information about video file format;

264.  VideoReader_GPU::VideoSource::start:starts processing;

265.  VideoReader_GPU::VideoSource::stop:stops processing;

266.  VideoReader_GPU::VideoSource::isStarted:returns true if processing was successfully started;

267.  VideoReader_GPU::VideoSource::hasError:returns true if an error occurred during processing;

268.  VideoReader_GPU::VideoSource::parseVideoData:parses the next video frame; the implementation must call this method after a new frame has been grabbed;
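
A sketch of the gpu::HOGDescriptor entries above, assuming OpenCV 2.x's gpu module; the window parameters are left at their defaults and the helper name is hypothetical:

```cpp
#include <vector>
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

// Pedestrian detection with the GPU HOG descriptor; expects an 8-bit
// grayscale (CV_8UC1) or BGRA (CV_8UC4) image already uploaded to the device.
std::vector<cv::Rect> detectPeople(const cv::gpu::GpuMat& d_img)
{
    cv::gpu::HOGDescriptor hog;   // default 64x128 detection window
    hog.setSVMDetector(cv::gpu::HOGDescriptor::getDefaultPeopleDetector());

    std::vector<cv::Rect> found;
    hog.detectMultiScale(d_img, found);   // multi-scale sliding-window detection
    return found;
}
```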
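
A short sketch of the gpu:: filtering entries above (GaussianBlur, Canny), again assuming OpenCV 2.x's gpu module; the wrapper function and parameter values are illustrative only:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

// Smooth and edge-detect an 8-bit single-channel image that already lives on the GPU.
void smoothAndDetectEdges(const cv::gpu::GpuMat& d_gray, cv::Mat& edges_out)
{
    cv::gpu::GpuMat d_blurred, d_edges;
    cv::gpu::GaussianBlur(d_gray, d_blurred, cv::Size(5, 5), 1.5);  // gpu:: counterpart of cv::GaussianBlur
    cv::gpu::Canny(d_blurred, d_edges, 50.0, 150.0);                // gpu:: counterpart of cv::Canny
    d_edges.download(edges_out);                                    // copy the edge map back to the host
}
```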
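
And a sketch of the gpu::StereoBM_GPU entries above, under the same OpenCV 2.x assumption; ndisp and winSize values are illustrative only:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

// Block-matching stereo correspondence on a rectified 8-bit grayscale pair.
void computeDisparity(const cv::gpu::GpuMat& d_left, const cv::gpu::GpuMat& d_right,
                      cv::Mat& disp_out)
{
    cv::gpu::StereoBM_GPU bm(cv::gpu::StereoBM_GPU::BASIC_PRESET, 64 /*ndisp*/, 19 /*winSize*/);
    cv::gpu::GpuMat d_disp;
    bm(d_left, d_right, d_disp);   // disparity map, CV_8UC1
    d_disp.download(disp_out);
}
```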

 

 

Keep in mind:

(1) There is no double-precision support on the GPU.

(2) Porting small functions to the GPU is not recommended, as the upload/download time will be larger than what you gain from parallel execution.

(3) GpuMat works similarly to Mat, but it is limited to 2D and its functions do not return references (you cannot mix GPU references with CPU ones).

(4) Not all channel counts allow efficient algorithms on the GPU.

(5) Input images for the GPU need to have either one or four channels, with char or float element types. If you have a three-channel image as input you can do two things: either add a fourth channel (and use char elements) or split the image and call the function for each plane. The first is not really recommended because it wastes memory.

(6) For some functions, where the position of the elements (neighboring items) does not matter, a quick solution is to reshape the image into a single-channel one.

(7) Data allocations are very expensive on the GPU. Use a buffer: allocate once and reuse later.

(8) You also pay the price of memory allocation and data transfer, and on the GPU this is very high. Another possible optimization is to introduce asynchronous OpenCV GPU calls with the help of gpu::Stream.

(9) Memory allocation on the GPU is considerable, so allocate new memory as few times as possible. If you create a function that you intend to call multiple times, it is a good idea to allocate any local buffers only once, during the first call. To do this, create a data structure containing all the local variables you will use. A GpuMat only reallocates itself on a new call if the new matrix size differs from the previous one.

(10) Avoid unnecessary data transfers. Any small transfer becomes significant once you go to the GPU, so if possible make all calculations in-place (in other words, do not create new memory objects, for the reasons explained in the previous point). For example, although arithmetic operations may be easier to express as one-line formulas, doing so will be slower.

(11) Use asynchronous calls (gpu::Stream). By default, whenever you call a gpu function it waits for the call to finish and then returns with the result. However, it is possible to make asynchronous calls: the function enqueues the operation, makes the costly data allocations for the algorithm and returns right away, so you can call another function in the meantime. By using a stream we can perform data allocation and upload operations while the GPU is already executing a given method. For example, if we need to upload two images, we queue them one after another and immediately call the function that processes them; the function waits for the uploads to finish, but while that happens it makes the output buffer allocations for the operation to be executed next. A sketch of this pattern follows below.
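
A minimal sketch of points (7)-(11): page-locked host buffers (gpu::CudaMem), one-time allocation, and an asynchronous upload-process-download pipeline on a gpu::Stream. It assumes OpenCV 2.x's gpu module; the structure name and threshold values are illustrative only.

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

// Reusable job: page-locked host buffers, device buffers allocated once,
// and an asynchronous upload -> process -> download pipeline on one stream.
struct GpuThresholdJob
{
    cv::gpu::CudaMem h_src, h_dst;   // page-locked host memory (needed for truly async copies)
    cv::gpu::GpuMat  d_src, d_dst;
    cv::gpu::Stream  stream;

    cv::Mat process(const cv::Mat& frame)
    {
        // create() is a no-op when size and type are unchanged, so buffers are reused across calls.
        h_src.create(frame.rows, frame.cols, frame.type());
        h_dst.create(frame.rows, frame.cols, frame.type());

        cv::Mat h_src_header = h_src.createMatHeader();   // header shares the page-locked buffer
        frame.copyTo(h_src_header);

        stream.enqueueUpload(h_src, d_src);                                        // host -> device, async
        cv::gpu::threshold(d_src, d_dst, 128.0, 255.0, cv::THRESH_BINARY, stream); // kernel enqueued on the stream
        stream.enqueueDownload(d_dst, h_dst);                                      // device -> host, async

        // The CPU is free to do other work here; block only when the result is needed.
        stream.waitForCompletion();
        return h_dst.createMatHeader().clone();   // clone so the result outlives the pinned buffer
    }
};
```

Calling process() repeatedly on frames of the same size reuses every buffer, so the allocation cost is paid only once and each call only synchronizes when the result is actually needed.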

 
