I'm working with the camera.
The camera is presented to the user as a live feed, and when the user taps, a still image is created and handed back to them.
The problem is that the captured image is framed from a topmost position that is higher than what the live preview shows.
Do you know how to adjust the camera's frame so that the top of the live video matches the top of the photo that will be taken?
I thought the following would do that, but it doesn't. Here is my current camera frame code:
//Add the device to the session, get the video feed it produces and add it to the video feed layer
func initSessionFeed()
{
    _session = AVCaptureSession()
    _session.sessionPreset = AVCaptureSessionPresetPhoto
    updateVideoFeed()
    _videoPreviewLayer = AVCaptureVideoPreviewLayer(session: _session)
    _videoPreviewLayer.frame = CGRectMake(0, 0, self.frame.width, self.frame.width) //the live footage IN the video feed view
    _videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    self.layer.addSublayer(_videoPreviewLayer) //add the footage from the device to the video feed layer
}

func initOutputCapture()
{
    //set up output settings
    _stillImageOutput = AVCaptureStillImageOutput()
    var outputSettings:Dictionary = [AVVideoCodecKey: AVVideoCodecJPEG] //key first, then value
    _stillImageOutput.outputSettings = outputSettings
    _session.addOutput(_stillImageOutput)
    _session.startRunning()
}

func configureDevice()
{
    if _currentDevice != nil
    {
        _currentDevice.lockForConfiguration(nil)
        _currentDevice.focusMode = .Locked
        _currentDevice.unlockForConfiguration()
    }
}

func captureImage(callback:(iImage)->Void)
{
    if(_captureInProcess == true)
    {
        return
    }
    _captureInProcess = true
    var videoConnection:AVCaptureConnection!
    for connection in _stillImageOutput.connections
    {
        for port in (connection as AVCaptureConnection).inputPorts
        {
            if (port as AVCaptureInputPort).mediaType == AVMediaTypeVideo
            {
                videoConnection = connection as AVCaptureConnection
                break
            }
        }
        if videoConnection != nil //checked in the outer loop so the search actually stops
        {
            break
        }
    }
    if videoConnection != nil
    {
        _stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection)
        {
            (imageSampleBuffer : CMSampleBuffer!, _) in
            let imageDataJpeg = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageSampleBuffer)
            var pickedImage = UIImage(data: imageDataJpeg, scale: 1)
            UIGraphicsBeginImageContextWithOptions(pickedImage.size, false, pickedImage.scale)
            pickedImage.drawInRect(CGRectMake(0, 0, pickedImage.size.width, pickedImage.size.height))
            pickedImage = UIGraphicsGetImageFromCurrentImageContext() //this returns a normalized image
            if(self._currentDevice == self._frontCamera)
            {
                pickedImage = UIImage(CGImage: pickedImage.CGImage, scale: 1.0, orientation: .UpMirrored)
                pickedImage.drawInRect(CGRectMake(0, 0, pickedImage.size.width, pickedImage.size.height))
                pickedImage = UIGraphicsGetImageFromCurrentImageContext()
            }
            UIGraphicsEndImageContext()
            var image:iImage = iImage(uiimage: pickedImage)
            self._captureInProcess = false
            callback(image)
        }
    }
}
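For what it's worth, a center-crop of the captured still should line up with what a square aspect-fill preview displays. A minimal sketch of the geometry, assuming the preview is a centered square; the helper name `centeredSquareCropRect` is my own, not an AVFoundation API:

```swift
import CoreGraphics

// Sketch: the centered square region of the full still that a square
// aspect-fill preview layer actually displays.
func centeredSquareCropRect(imageSize: CGSize) -> CGRect
{
    let side = min(imageSize.width, imageSize.height)
    return CGRectMake((imageSize.width - side) / 2,
                      (imageSize.height - side) / 2,
                      side, side)
}
```

Applied inside the capture callback, e.g. `CGImageCreateWithImageInRect(pickedImage.CGImage, centeredSquareCropRect(pickedImage.size))`, the cropped CGImage should then correspond to the preview's framing.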
If I adjust the frame of the AVCaptureVideoPreviewLayer by increasing its y value, all I get is a black bar showing the offset. I'm curious why the very top of the video frame doesn't match my output image.
I do "crop" the camera feed into a perfect square, but why isn't the top of the live camera feed the actual top? (The captured image defaults to starting higher up, at a position the camera feed never shows.)
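As I understand it, the cropping that ResizeAspectFill performs can be worked out directly: the source frame is scaled until it covers the layer, then centered, so the overflow on the taller axis is cut equally at the top and bottom. A sketch of that math with plain numbers (no AVFoundation; the function name is mine):

```swift
// Sketch: how much of the source frame a centered aspect-fill layer shows.
// Returns the top offset and height of the visible band, in source pixels.
func aspectFillVisibleBand(sourceWidth: Double, sourceHeight: Double,
                           layerWidth: Double, layerHeight: Double)
    -> (top: Double, height: Double)
{
    // Scale so the source covers the layer completely.
    let scale = max(layerWidth / sourceWidth, layerHeight / sourceHeight)
    // Source pixels that overflow the layer vertically after scaling.
    let overflow = sourceHeight - layerHeight / scale
    return (overflow / 2, sourceHeight - overflow)
}
```

With a 4:3 photo-preset still (say 1536x2048 portrait) shown in a square layer, this gives a top offset of 256 source pixels, i.e. the preview's top edge sits 256 of 2048 pixels below the still's top edge, which would explain the mismatch in the screenshots below.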
Update:
Here are before and after screenshots of what I'm talking about.
Before:
Before image: http://postimg.org/image/qsjdokxan/ (this is what the live feed shows)
After:
After image: http://postimg.org/image/wnwyepgpz/ (this is the image produced when the user taps to take a picture)