AVFoundation - Adding a blurred background to a video

2023-11-25

I am working on a video editing app in Swift. At the moment my output video looks like this: [screenshot: the video centered on a black background]

I am trying to fill the black portion with a blur effect, exactly like this: [screenshot: the same video centered on a blurred background]

I have searched but did not find any working solution. Any help would be much appreciated.


Swift 4 - Adding a blurred background to a video

I may be late with this answer, but I still hadn't found any solution that meets this requirement, so I'm sharing my work:

Download the sample code here

Features

  1. Single video support
  2. Support for merging multiple videos
  3. Any canvas of any aspect ratio
  4. Saving the final video to the camera roll
  5. Handling all video orientations

Steps to add a blurred background to a video

  1. Merge all of the videos, without audio
    a) The render area size is required.
    b) Calculate each video's scale and position inside that area, following the aspectFill rule.
  2. Add a blur effect to the merged video
  3. Place the videos, one by one, at the center of the blurred video (a sketch of how the steps chain together follows this list)
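
Putting the three steps together looks roughly like this. This is a minimal sketch only: videos and renderArea are placeholders, error handling is elided, and each step should really write to its own distinct output URL, since the functions below all reuse self.videoOutputURL.

let videos: [AVURLAsset] = [] // the clips to process
let renderArea = CGSize(width: 1280, height: 720) // any canvas, any ratio

mergeVideos(videos, inArea: renderArea) { error, mergedURL in
    guard error == nil, let mergedURL = mergedURL else { return }
    self.addBlurEffect(toVideo: AVURLAsset(url: mergedURL)) { error, blurredURL in
        guard error == nil, let blurredURL = blurredURL else { return }
        self.addAllVideosAtCenterOfBlur(videos: videos, blurVideo: AVURLAsset(url: blurredURL)) { error, finalURL in
            guard error == nil, let finalURL = finalURL else { return }
            print("Final video with blurred background: \(finalURL)")
        }
    }
}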

Merging the videos

func mergeVideos(_ videos: Array<AVURLAsset>, inArea area:CGSize, completion: @escaping (_ error: Error?, _ url:URL?) -> Swift.Void) {

    // Create an AVMutableComposition object. This object will hold our multiple AVMutableCompositionTracks.
    let mixComposition = AVMutableComposition()

    var instructionLayers : Array<AVMutableVideoCompositionLayerInstruction> = []
    
    for asset in videos {
        
        // Here we are creating the AVMutableCompositionTrack. See how we are adding a new track to our AVMutableComposition.
        let track = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)
        
        // Set the length of the track equal to the length of the asset, and insert each asset at the composition's current duration so the videos play back to back.
        if let videoTrack = asset.tracks(withMediaType: AVMediaType.video).first {
            
            
            /// Hide this video's layer once it has finished playing
            /// (mixComposition.duration is this asset's start time, because the insert below appends at the end)
            let hideAfter: CMTime = CMTimeAdd(mixComposition.duration, asset.duration)
            
            
            let timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration)
            try? track?.insertTimeRange(timeRange, of: videoTrack, at: mixComposition.duration)
            
            
            /// Layer instruction
            let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track!)
            layerInstruction.setOpacity(0.0, at: hideAfter)

            /// Compute scale and position for aspectFill in the given area
            let properties = scaleAndPositionInAspectFillMode(forTrack: videoTrack, inArea: area)
            
            
            /// Checking for orientation
            let videoOrientation: UIImageOrientation = self.getVideoOrientation(forTrack: videoTrack)
            let assetSize = self.assetSize(forTrack: videoTrack)

            if (videoOrientation == .down) {
                /// Rotate
                let defaultTransform = asset.preferredTransform
                let rotateTransform = CGAffineTransform(rotationAngle: -CGFloat(Double.pi/2.0))
                
                // Scale
                let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)
                
                // Translate
                var ytranslation: CGFloat = assetSize.height
                var xtranslation: CGFloat = 0
                if properties.position.y == 0 {
                    xtranslation = -(assetSize.width - ((area.width/area.height) * assetSize.height))/2.0
                }
                else {
                    ytranslation = assetSize.height - (assetSize.height - ((area.height/area.width) * assetSize.width))/2.0
                }
                let translationTransform = CGAffineTransform(translationX: xtranslation, y: ytranslation)
                
                // Final transformation - Concatenation
                let finalTransform = defaultTransform.concatenating(rotateTransform).concatenating(translationTransform).concatenating(scaleTransform)
                layerInstruction.setTransform(finalTransform, at: kCMTimeZero)
            }
            else if (videoOrientation == .left) {
                /// Rotate
                let defaultTransform = asset.preferredTransform
                let rotateTransform = CGAffineTransform(rotationAngle: -CGFloat(Double.pi))
                
                // Scale
                let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)

                // Translate
                var ytranslation: CGFloat = assetSize.height
                var xtranslation: CGFloat = assetSize.width
                if properties.position.y == 0 {
                    xtranslation = assetSize.width - (assetSize.width - ((area.width/area.height) * assetSize.height))/2.0
                }
                else {
                    ytranslation = assetSize.height - (assetSize.height - ((area.height/area.width) * assetSize.width))/2.0
                }
                let translationTransform = CGAffineTransform(translationX: xtranslation, y: ytranslation)
                
                // Final transformation - Concatenation
                let finalTransform = defaultTransform.concatenating(rotateTransform).concatenating(translationTransform).concatenating(scaleTransform)
                layerInstruction.setTransform(finalTransform, at: kCMTimeZero)
            }
            else if (videoOrientation == .right) {
                /// No rotation needed
                // Scale
                let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)
                
                // Translate
                let translationTransform = CGAffineTransform(translationX: properties.position.x, y: properties.position.y)
                
                let finalTransform = scaleTransform.concatenating(translationTransform)
                layerInstruction.setTransform(finalTransform, at: kCMTimeZero)
            }
            else {
                /// Rotate
                let defaultTransform = asset.preferredTransform
                let rotateTransform = CGAffineTransform(rotationAngle: CGFloat(Double.pi/2.0))
                
                // Scale
                let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)
                
                // Translate
                var ytranslation: CGFloat = 0
                var xtranslation: CGFloat = assetSize.width
                if properties.position.y == 0 {
                    xtranslation = assetSize.width - (assetSize.width - ((area.width/area.height) * assetSize.height))/2.0
                }
                else {
                    ytranslation = -(assetSize.height - ((area.height/area.width) * assetSize.width))/2.0
                }
                let translationTransform = CGAffineTransform(translationX: xtranslation, y: ytranslation)
                
                // Final transformation - Concatenation
                let finalTransform = defaultTransform.concatenating(rotateTransform).concatenating(translationTransform).concatenating(scaleTransform)
                layerInstruction.setTransform(finalTransform, at: kCMTimeZero)
            }

            instructionLayers.append(layerInstruction)
        }
    }
    
    
    let mainInstruction = AVMutableVideoCompositionInstruction()
    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, mixComposition.duration)
    mainInstruction.layerInstructions = instructionLayers

    let mainCompositionInst = AVMutableVideoComposition()
    mainCompositionInst.instructions = [mainInstruction]
    mainCompositionInst.frameDuration = CMTimeMake(1, 30)
    mainCompositionInst.renderSize = area
    
    //let url = URL(fileURLWithPath: "/Users/enacteservices/Desktop/final_video.mov")
    let url = self.videoOutputURL
    try? FileManager.default.removeItem(at: url)
    
    let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
    exporter?.outputURL = url
    exporter?.outputFileType = .mp4
    exporter?.videoComposition = mainCompositionInst
    exporter?.shouldOptimizeForNetworkUse = true
    exporter?.exportAsynchronously(completionHandler: {
        if let anError = exporter?.error {
            completion(anError, nil)
        }
        else if exporter?.status == AVAssetExportSessionStatus.completed {
            completion(nil, url)
        }
    })
}
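
The code above relies on a few helpers (getVideoOrientation(forTrack:), assetSize(forTrack:), scaleAndPositionInAspectFillMode(forTrack:inArea:)) that ship with the downloadable sample. The sketches below are plausible implementations reconstructed from how the code calls them, not the exact shipped code:

/// Map the track's preferred transform to a coarse orientation.
/// With this mapping, an untransformed track comes back as .right,
/// which is why the .right branch above applies no rotation.
func getVideoOrientation(forTrack videoTrack: AVAssetTrack) -> UIImageOrientation {
    let size = videoTrack.naturalSize
    let txf = videoTrack.preferredTransform
    if size.width == txf.tx && size.height == txf.ty {
        return .left          // rotated 180°
    } else if txf.tx == 0 && txf.ty == 0 {
        return .right         // no rotation
    } else if txf.tx == 0 && txf.ty == size.width {
        return .down          // rotated -90°
    } else {
        return .up            // rotated +90°
    }
}

/// Natural size with the preferred transform applied, so portrait
/// clips report a portrait-shaped size.
func assetSize(forTrack videoTrack: AVAssetTrack) -> CGSize {
    let size = videoTrack.naturalSize.applying(videoTrack.preferredTransform)
    return CGSize(width: abs(size.width), height: abs(size.height))
}

/// Scale factor and centered position so the track completely fills
/// `area` (overflow is cropped by the render size).
func scaleAndPositionInAspectFillMode(forTrack track: AVAssetTrack, inArea area: CGSize) -> (scale: CGSize, position: CGPoint) {
    let trackSize = assetSize(forTrack: track)
    let factor = max(area.width / trackSize.width, area.height / trackSize.height)
    let scaled = CGSize(width: trackSize.width * factor, height: trackSize.height * factor)
    let position = CGPoint(x: (area.width - scaled.width) / 2.0, y: (area.height - scaled.height) / 2.0)
    return (scale: CGSize(width: factor, height: factor), position: position)
}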

Adding the blur effect

func addBlurEffect(toVideo asset:AVURLAsset, completion: @escaping (_ error: Error?, _ url:URL?) -> Swift.Void) {
        
    let filter = CIFilter(name: "CIGaussianBlur")
    let composition = AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
        // Clamp to avoid blurring transparent pixels at the image edges
        let source: CIImage? = request.sourceImage.clampedToExtent()
        filter?.setValue(source, forKey: kCIInputImageKey)
        
        filter?.setValue(10.0, forKey: kCIInputRadiusKey)
        
        // Crop the blurred output to the bounds of the original image
        let output: CIImage? = filter?.outputImage?.cropped(to: request.sourceImage.extent)
        
        // Provide the filter output to the composition
        if let anOutput = output {
            request.finish(with: anOutput, context: nil)
        }
    })
    
    //let url = URL(fileURLWithPath: "/Users/enacteservices/Desktop/final_video.mov")
    let url = self.videoOutputURL
    // Remove any previous video at that path
    try? FileManager.default.removeItem(at: url)
    
    let exporter = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality)
    
    // Assign the video composition (this applies the blur filter during export)
    exporter?.videoComposition = composition
    exporter?.outputFileType = .mp4
    exporter?.outputURL = url
    exporter?.exportAsynchronously(completionHandler: {
        if let anError = exporter?.error {
            completion(anError, nil)
        }
        else if exporter?.status == AVAssetExportSessionStatus.completed {
            completion(nil, url)
        }
    })
}
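
The overlay step below uses the aspect-fit counterpart, scaleAndPositionInAspectFitMode(forTrack:inArea:), which is again included in the sample download. A sketch under the same assumptions as the helpers above:

/// Scale factor and centered position so the track fits entirely
/// inside `area` (the blurred background shows through the margins).
func scaleAndPositionInAspectFitMode(forTrack track: AVAssetTrack, inArea area: CGSize) -> (scale: CGSize, position: CGPoint) {
    let trackSize = assetSize(forTrack: track)
    let factor = min(area.width / trackSize.width, area.height / trackSize.height)
    let scaled = CGSize(width: trackSize.width * factor, height: trackSize.height * factor)
    let position = CGPoint(x: (area.width - scaled.width) / 2.0, y: (area.height - scaled.height) / 2.0)
    return (scale: CGSize(width: factor, height: factor), position: position)
}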

Placing the videos at the center of the blurred video
This will give you the final video URL.

func addAllVideosAtCenterOfBlur(videos: Array<AVURLAsset>, blurVideo: AVURLAsset, completion: @escaping (_ error: Error?, _ url:URL?) -> Swift.Void) {
    
    
    // Create an AVMutableComposition object. This object will hold our multiple AVMutableCompositionTracks.
    let mixComposition = AVMutableComposition()
    
    var instructionLayers : Array<AVMutableVideoCompositionLayerInstruction> = []
    
    
    // Add blur video first
    let blurVideoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)
    // Blur layer instruction
    if let videoTrack = blurVideo.tracks(withMediaType: AVMediaType.video).first {
        let timeRange = CMTimeRangeMake(kCMTimeZero, blurVideo.duration)
        try? blurVideoTrack?.insertTimeRange(timeRange, of: videoTrack, at: kCMTimeZero)
    }
    
    /// Add other videos at center of the blur video
    var startAt = kCMTimeZero
    for asset in videos {
        
        /// Time Range of asset
        let timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration)
        
        // Here we are creating the AVMutableCompositionTrack. See how we are adding a new track to our AVMutableComposition.
        let track = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)
        
        // Set the length of the track equal to the length of the asset, and insert each asset at startAt so the videos play back to back.
        if let videoTrack = asset.tracks(withMediaType: AVMediaType.video).first {
            
            /// Hide this video's layer once it has finished playing
            let hideAfter: CMTime = CMTimeAdd(startAt, asset.duration)
            
            /// Adding video track
            try? track?.insertTimeRange(timeRange, of: videoTrack, at: startAt)
            
            /// Layer instruction
            let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track!)
            layerInstruction.setOpacity(0.0, at: hideAfter)
            
            /// Compute scale and position for aspectFit in the given area
            /// (`size` is the render canvas, the same value assigned to renderSize below)
            let properties = scaleAndPositionInAspectFitMode(forTrack: videoTrack, inArea: size)
            
            /// Checking for orientation
            let videoOrientation: UIImageOrientation = self.getVideoOrientation(forTrack: videoTrack)
            let assetSize = self.assetSize(forTrack: videoTrack)
            
            if (videoOrientation == .down) {
                /// Rotate
                let defaultTransform = asset.preferredTransform
                let rotateTransform = CGAffineTransform(rotationAngle: -CGFloat(Double.pi/2.0))
                
                // Scale
                let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)
                
                // Translate
                var ytranslation: CGFloat = assetSize.height
                var xtranslation: CGFloat = 0
                if properties.position.y == 0 {
                    xtranslation = -(assetSize.width - ((size.width/size.height) * assetSize.height))/2.0
                }
                else {
                    ytranslation = assetSize.height - (assetSize.height - ((size.height/size.width) * assetSize.width))/2.0
                }
                let translationTransform = CGAffineTransform(translationX: xtranslation, y: ytranslation)
                
                // Final transformation - Concatenation
                let finalTransform = defaultTransform.concatenating(rotateTransform).concatenating(translationTransform).concatenating(scaleTransform)
                layerInstruction.setTransform(finalTransform, at: kCMTimeZero)
            }
            else if (videoOrientation == .left) {
                /// Rotate
                let defaultTransform = asset.preferredTransform
                let rotateTransform = CGAffineTransform(rotationAngle: -CGFloat(Double.pi))
                
                // Scale
                let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)
                
                // Translate
                var ytranslation: CGFloat = assetSize.height
                var xtranslation: CGFloat = assetSize.width
                if properties.position.y == 0 {
                    xtranslation = assetSize.width - (assetSize.width - ((size.width/size.height) * assetSize.height))/2.0
                }
                else {
                    ytranslation = assetSize.height - (assetSize.height - ((size.height/size.width) * assetSize.width))/2.0
                }
                let translationTransform = CGAffineTransform(translationX: xtranslation, y: ytranslation)
                
                // Final transformation - Concatenation
                let finalTransform = defaultTransform.concatenating(rotateTransform).concatenating(translationTransform).concatenating(scaleTransform)
                layerInstruction.setTransform(finalTransform, at: kCMTimeZero)
            }
            else if (videoOrientation == .right) {
                /// No rotation needed
                // Scale
                let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)
                
                // Translate
                let translationTransform = CGAffineTransform(translationX: properties.position.x, y: properties.position.y)
                
                let finalTransform = scaleTransform.concatenating(translationTransform)
                layerInstruction.setTransform(finalTransform, at: kCMTimeZero)
            }
            else {
                /// Rotate
                let defaultTransform = asset.preferredTransform
                let rotateTransform = CGAffineTransform(rotationAngle: CGFloat(Double.pi/2.0))
                
                // Scale
                let scaleTransform = CGAffineTransform(scaleX: properties.scale.width, y: properties.scale.height)
                
                // Translate
                var ytranslation: CGFloat = 0
                var xtranslation: CGFloat = assetSize.width
                if properties.position.y == 0 {
                    xtranslation = assetSize.width - (assetSize.width - ((size.width/size.height) * assetSize.height))/2.0
                }
                else {
                    ytranslation = -(assetSize.height - ((size.height/size.width) * assetSize.width))/2.0
                }
                let translationTransform = CGAffineTransform(translationX: xtranslation, y: ytranslation)
                
                // Final transformation - Concatenation
                let finalTransform = defaultTransform.concatenating(rotateTransform).concatenating(translationTransform).concatenating(scaleTransform)
                layerInstruction.setTransform(finalTransform, at: kCMTimeZero)
            }
            
            instructionLayers.append(layerInstruction)
        }
        
        /// Adding audio
        if let audioTrack = asset.tracks(withMediaType: AVMediaType.audio).first {
            let aTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: kCMPersistentTrackID_Invalid)
            try? aTrack?.insertTimeRange(timeRange, of: audioTrack, at: startAt)
        }
        
        // Increase the startAt time
        startAt = CMTimeAdd(startAt, asset.duration)
    }

    
    /// Blur layer instruction
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: blurVideoTrack!)
    instructionLayers.append(layerInstruction)
    
    let mainInstruction = AVMutableVideoCompositionInstruction()
    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, blurVideo.duration)
    mainInstruction.layerInstructions = instructionLayers
    
    let mainCompositionInst = AVMutableVideoComposition()
    mainCompositionInst.instructions = [mainInstruction]
    mainCompositionInst.frameDuration = CMTimeMake(1, 30)
    mainCompositionInst.renderSize = size
    
    //let url = URL(fileURLWithPath: "/Users/enacteservices/Desktop/final_video.mov")
    let url = self.videoOutputURL
    try? FileManager.default.removeItem(at: url)
    
    let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
    exporter?.outputURL = url
    exporter?.outputFileType = .mp4
    exporter?.videoComposition = mainCompositionInst
    exporter?.shouldOptimizeForNetworkUse = true
    exporter?.exportAsynchronously(completionHandler: {
        if let anError = exporter?.error {
            completion(anError, nil)
        }
        else if exporter?.status == AVAssetExportSessionStatus.completed {
            completion(nil, url)
        }
    })
}

For the helper methods used in the code above, please download the attached sample code.
I would also welcome a shorter way to do this, since I have to export the video three times to achieve this result.
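
Feature 4, saving the final video to the camera roll, is also handled in the attached sample. A minimal sketch of how that can be done with the Photos framework, assuming the photo-library add permission (NSPhotoLibraryAddUsageDescription) is configured in Info.plist:

import Photos

func saveToCameraRoll(_ videoURL: URL, completion: @escaping (Error?) -> Void) {
    PHPhotoLibrary.shared().performChanges({
        // Register the exported file as a new video asset in the library
        PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: videoURL)
    }, completionHandler: { _, error in
        completion(error)
    })
}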

