
videoio's Introduction

VideoIO

Video Input/Output Utilities

VideoComposition

Wraps AVMutableVideoComposition with a custom video compositor. A BlockBasedVideoCompositor is provided for convenience.

With MetalPetal

let context = try! MTIContext(device: MTLCreateSystemDefaultDevice()!)
let handler = MTIAsyncVideoCompositionRequestHandler(context: context, tracks: asset.tracks(withMediaType: .video)) { request in
    return FilterGraph.makeImage { output in
        request.anySourceImage => filterA => filterB => output
    }!
}
let composition = VideoComposition(propertiesOf: asset, compositionRequestHandler: handler.handle(request:))
let playerItem = AVPlayerItem(asset: asset)
playerItem.videoComposition = composition.makeAVVideoComposition()
player.replaceCurrentItem(with: playerItem)
player.play()

Without MetalPetal

let composition = VideoComposition(propertiesOf: asset, compositionRequestHandler: { request in
    //Process video frame
})
let playerItem = AVPlayerItem(asset: asset)
playerItem.videoComposition = composition.makeAVVideoComposition()
player.replaceCurrentItem(with: playerItem)
player.play()

AssetExportSession

Exports AVAssets, with the ability to customize video/audio settings as well as pause and resume an export.

var configuration = AssetExportSession.Configuration(fileType: .mp4, videoSettings: .h264(videoSize: videoComposition.renderSize), audioSettings: .aac(channels: 2, sampleRate: 44100, bitRate: 128 * 1000))
configuration.metadata = ...
configuration.videoComposition = ...
configuration.audioMix = ...
self.exporter = try! AssetExportSession(asset: asset, outputURL: outputURL, configuration: configuration)
exporter.export(progress: { p in
    
}, completion: { error in
    //Done
})
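
A hedged sketch of pausing, resuming, and cancelling an in-flight export. cancel() is referenced in the issues below; the pause()/resume() method names are assumptions based on the capability mentioned above, not confirmed API.

// pause()/resume() names are assumptions; check the current API surface.
exporter.pause()
// ...
exporter.resume()
// Abort the export entirely (AssetExportSession.cancel is referenced in the issues below).
exporter.cancel()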

PlayerVideoOutput

Output video buffers from AVPlayer.

let player: AVPlayer = ...
let playerOutput = PlayerVideoOutput(player: player) { videoFrame in
    //Got video frame
}
player.play()

MovieRecorder

Record video and audio.
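
A minimal sketch assembled from the MovieRecorder usage shown in the issues below; the sample buffers are assumed to come from a capture callback such as Camera's video/audio data outputs.

let url = FileManager.default.temporaryDirectory.appendingPathComponent("recording.mp4")
let recorder = try MovieRecorder(url: url, configuration: MovieRecorder.Configuration(hasAudio: true))

// In your AVCapture*DataOutputSampleBufferDelegate callback:
// try recorder.appendSampleBuffer(sampleBuffer)

recorder.stopRecording(completion: { error in
    // The finished movie is at `url` unless `error` is non-nil.
})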

AudioQueueCaptureSession

Capture audio using AudioQueue.

Camera

Simple audio/video capture.
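
A minimal sketch assembled from the Camera usage shown in the issues below; `queue` is a DispatchQueue and `self` is assumed to implement the AVFoundation sample buffer delegate protocols.

let camera = Camera(captureSessionPreset: .hd1280x720, defaultCameraPosition: .back, configurator: Camera.Configurator())
try camera.enableVideoDataOutput(on: queue, delegate: self)  // AVCaptureVideoDataOutputSampleBufferDelegate
try camera.enableAudioDataOutput(on: queue, delegate: self)  // AVCaptureAudioDataOutputSampleBufferDelegate
camera.videoDataOutput?.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
camera.startRunningCaptureSession()
// ...
camera.stopRunningCaptureSession()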

videoio's People

Contributors

askaradeniz, casper6479, dsmurfin, jackyoustra, little2s, samuelhorwitz, yuao


videoio's Issues

How to switch microphone inputs to stereo?

Hi
Sorry for another beginner question, but I'm trying to set the microphone input to stereo.
I can see that self.camera.audioDevice returns the default microphone
[iPhone Microphone][com.apple.avfoundation.avcapturedevice.built-in_audio:0]
and that self.camera.audioCaptureConnection?.audioChannels.count is 1.

At what point do I access numberOfChannels and channelLayout in AudioSettings to set the microphone to stereo?

Toggle front / back camera during capture session?

I'd like to provide functionality for toggling between the front and back camera during a capture session, and ideally while recording as well. My current thinking is as follows; however, the device hangs and the switch never occurs. This code was added directly to the CapturePipeline class within the MetalPetal example project.

func toggleSelfie() {
    self.camera.disableVideoDataOutput()
    self.camera.disableAudioDataOutput()
    if !self.selfie {
        self.camera = {
            var configurator = Camera.Configurator()
            configurator.videoConnectionConfigurator = { camera, connection in
                connection.videoOrientation = .landscapeRight
            }
            return Camera(captureSessionPreset: .high, defaultCameraPosition: .front, configurator: configurator)
        }()
        self.toggleVideoMirrored()
    } else {
        self.camera = {
            var configurator = Camera.Configurator()
            configurator.videoConnectionConfigurator = { camera, connection in
                connection.videoOrientation = .landscapeRight
            }
            return Camera(captureSessionPreset: .high, defaultCameraPosition: .back, configurator: configurator)
        }()
        self.toggleVideoMirrored()
    }
    try? self.camera.enableVideoDataOutput(on: queue, delegate: self)
    try? self.camera.enableAudioDataOutput(on: queue, delegate: self)
    self.camera.videoDataOutput?.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
}

I could not find a method to do so within the readme or existing issues. Is this something that is easily attainable? Thanks!
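
For reference, other code on this page switches cameras on an existing Camera instance via switchToVideoCaptureDevice(with:) instead of rebuilding the Camera. A hedged sketch of that approach (the `selfie` flag and `queue` are taken from the snippet above):

func toggleSelfie() {
    queue.async {
        do {
            // Switch the existing Camera's capture device rather than replacing the Camera.
            try self.camera.switchToVideoCaptureDevice(with: self.selfie ? .back : .front)
            self.selfie.toggle()
        } catch {
            print("Failed to switch camera: \(error)")
        }
    }
}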

Video overlay on top of live feed

Hi!
So what I'd like to do is overlay a short looping video as a texture on top of the live feed previewImage (cgImage) from the CameraFilterView.swift example in MetalPetal, then export the recording later. I've noticed other threads on here mention using PlayerVideoOutput to do this.

Is there a possible simple usage example on how this would work when the live feed is a cgImage rather than an AVPlayer?

Would this be a good way to do it? -

  • Setup the overlay video as a separate MTIImage on top of the previewImage feed so it constantly plays
  • Then after recording, add the overlay video as a track into MTIVideoComposition when showing the player?

I think I'm confused about the implementation here, though. I just want to record live with a looping MP4 playing on top.

Synchronized Video, Depth, and Audio Data

Hi!

Am I missing something, or is there no way of adding a synchronized DataOutput with these three data types on the Camera object?

I've been using public func enableSynchronizedVideoAndDepthDataOutput(on queue: DispatchQueue, delegate: AVCaptureDataOutputSynchronizerDelegate), but would really like to add audio capture to it instead of handling it in a separate delegate method.

Is there a reason this isn't done? I'm pretty new to audio/video capture and processing, so I might be missing something.

Thanks!!

Segment Duration Zero

Hi!

I'm using a MovieSegmentsRecorder, pretty much following the MetalPetal demo project's CameraViewController but replacing the MovieRecorder with MovieSegmentsRecorder, plus enabling audio recording. I noticed that even though func segmentsRecorder(_ recorder: MovieSegmentsRecorder, didUpdateWithDuration totalDuration: TimeInterval) returns the correct segment duration, when the func segmentsRecorder(_ recorder: MovieSegmentsRecorder, didUpdateSegments segments: [MovieSegment]) callback is called, some segments have a duration of 0.0. It appears to happen at random, and even though they have a duration of 0.0 in the segments array, they are all merged successfully with recorder.mergeAllSegments().

Adding some of the code below:

override func viewDidLoad() {
        super.viewDidLoad()
...
        let configuration = MovieRecorder.Configuration()
        segmentsRecorder = MovieSegmentsRecorder(configuration: configuration, delegate: self, delegateQueue: recorderQueue)
...
}
@IBAction func recordButtonTouchDown(_ sender: Any) {
    if isRecording {
        return
    }

    segmentsRecorder?.startRecording()

    self.isRecording = true
}

@IBAction func recordButtonTouchUp(_ sender: Any) {
    self.segmentsRecorder?.stopRecording()
}

DispatchQueue.main.async {
    if self.isRecording {
        if let pixelBuffer = try? self.pixelBufferPool?.makePixelBuffer(allocationThreshold: 30) {
            do {
                try self.context.render(outputImage, to: pixelBuffer)
                if let smbf = SampleBufferUtilities.makeSampleBufferByReplacingImageBuffer(of: sampleBuffer, with: pixelBuffer) {
                    outputSampleBuffer = smbf
                }
            } catch {
                print("\(error)")
            }
        }
        self.segmentsRecorder?.append(sampleBuffer: outputSampleBuffer)
    }
}

I've added this extension to deal with audio capture:

extension CameraViewController: AVCaptureAudioDataOutputSampleBufferDelegate {
	func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
		DispatchQueue.main.async {
			if self.isRecording {
				self.segmentsRecorder?.append(sampleBuffer: sampleBuffer)
			}
		}
	}
}

And I've added the following MovieSegmentsRecorder delegate methods:

extension CameraViewController: MovieSegmentsRecorderDelegate {
	func segmentsRecorderDidStartRecording(_ recorder: MovieSegmentsRecorder) {

	}

	func segmentsRecorderDidCancelRecording(_ recorder: MovieSegmentsRecorder) {
		recordingStopped()
	}

	func segmentsRecorder(_ recorder: MovieSegmentsRecorder, didFailWithError error: Error) {
		recordingStopped()
	}

	func segmentsRecorderDidStopRecording(_ recorder: MovieSegmentsRecorder) {
		recordingStopped()
	}

	func segmentsRecorder(_ recorder: MovieSegmentsRecorder, didUpdateWithDuration totalDuration: TimeInterval) {
		print(totalDuration)
	}

	func segmentsRecorder(_ recorder: MovieSegmentsRecorder, didUpdateSegments segments: [MovieSegment]) {
		print("segments: \(segments)")
		self.movieSegments = segments

		if segments.count > 5 {
			recorder.mergeAllSegments()
		}
	}

	func segmentsRecorder(_ recorder: MovieSegmentsRecorder, didStopMergingWithURL url: URL) {
		print(url)
		DispatchQueue.main.async {
			self.showPlayerViewController(url: url)
		}
	}
}

Crash [AVAssetReaderVideoCompositionOutput copyNextSampleBuffer]

I don't have exact steps yet, but I assume it happens when the export and cancel functions are called quickly in succession.

*** -[AVAssetReaderVideoCompositionOutput copyNextSampleBuffer] cannot copy next sample buffer before adding this output to an instance of AVAssetReader (using -addOutput:) and calling -startReading on that asset reader
AssetExportSession.encode(from:to:)

Please see stack trace below:

Fatal Exception: NSInternalInconsistencyException
0  CoreFoundation                 0x1bab3c300 __exceptionPreprocess
1  libobjc.A.dylib                0x1ba850c1c objc_exception_throw
2  AVFoundation                   0x1c5025430 -[AVAssetReaderOutput _figAssetReaderSampleBufferDidBecomeAvailableForExtractionID:]
3  VideoIO                        0x103738fa4 AssetExportSession.encode(from:to:) + 186 (AssetExportSession.swift:186)
4  VideoIO                        0x103739c34 closure #3 in AssetExportSession.export(progress:completion:) + 256 (AssetExportSession.swift:256)
5  VideoIO                        0x1037392ec thunk for @escaping @callee_guaranteed () -> () (<compiler-generated>)
6  libdispatch.dylib              0x1ba7daec4 _dispatch_call_block_and_release
7  libdispatch.dylib              0x1ba7dc33c _dispatch_client_callout
8  libdispatch.dylib              0x1ba7e285c _dispatch_lane_serial_drain
9  libdispatch.dylib              0x1ba7e3290 _dispatch_lane_invoke
10 libdispatch.dylib              0x1ba7ec928 _dispatch_workloop_worker_thread
11 libsystem_pthread.dylib        0x1ba843714 _pthread_wqthread
12 libsystem_pthread.dylib        0x1ba8499c8 start_wqthread

Never ready for audio

I am trying to figure out what is going on with appending audio samples on macOS.

I have a few scenarios which seem to get different results.

If I add an internal Mac microphone alongside an external webcam video, then audio input is added successfully, and line 440 of MultitrackMovieRecorder.swift is called to append the sample buffer.

If I add the microphone integrated into the webcam instead, then I fail at line 412 of MultitrackMovieRecorder.swift, and buffers are progressively added to pendingAudioSampleBuffers. Somehow, though, audio is still recorded in this scenario, which is confusing.

Lastly, if I add a non-AVFoundation video and audio buffer, then I get the same results as the webcam audio and video, where both are recorded but buffers are continually added to pendingAudioSampleBuffers.

Any thoughts much appreciated.

Xcode 15 Beta compile issues (macOS)

Camera.swift:403 Stored properties cannot be marked unavailable with '@available'
PlayerVideoOutput.swift:63 'CADisplayLink' is only available in macOS 14.0 or newer
PlayerVideoOutput.swift:208 'CADisplayLink' is only available in macOS 14.0 or newer

It may be desirable to support CADisplayLink on macOS 14 now that it is available, but at a minimum we should resolve these errors in the short term so VideoIO can compile with the Xcode 15 beta.
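
A hedged sketch of one way the macOS errors could be handled: gate CADisplayLink behind an availability check and store it untyped so the stored property does not need @available. The wrapper type and fallback are illustrative only, not a proposed patch.

#if os(macOS)
import AppKit

final class DisplayLinkDriver {
    // Stored properties cannot be marked @available, so keep the link untyped.
    private var displayLink: Any?

    func start(target: Any, selector: Selector, view: NSView) {
        if #available(macOS 14.0, *) {
            let link = view.displayLink(target: target, selector: selector)
            link.add(to: .main, forMode: .common)
            displayLink = link
        } else {
            // Fall back to CVDisplayLink or a timer on older macOS versions.
        }
    }
}
#endif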

CIImage from MTIImage

Hello,
Excuse me if this is a really dumb question. I'm trying to use a skin smoothing filter and our codebase uses CIImages.
How would I get a CIImage back from the filter below?

class MetalPetalSkinSmoothingFilter: Filter {

    var name: String = "MetalSkinSmoothing"
    private let filter = MTIHighPassSkinSmoothingFilter()

    func process(image: CIImage) -> CIImage {
        let mtimage = MTIImage(ciImage: image)
        filter.inputImage = mtimage
        return filter.outputImage! // Get a CIImage?
    }
}

Thanks a ton for pointing me to the right direction :)
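
One possible approach, as a hedged sketch rather than a confirmed MetalPetal recipe: render the MTIImage into a CVPixelBuffer with MTIContext.render(_:to:) (used elsewhere on this page) and wrap the result in a CIImage.

func makeCIImage(from image: MTIImage, context: MTIContext) throws -> CIImage? {
    var pixelBuffer: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        Int(image.size.width),
                        Int(image.size.height),
                        kCVPixelFormatType_32BGRA,
                        [kCVPixelBufferIOSurfacePropertiesKey: [:]] as CFDictionary,
                        &pixelBuffer)
    guard let buffer = pixelBuffer else { return nil }
    // render(_:to:) usage appears in other issues on this page.
    try context.render(image, to: buffer)
    return CIImage(cvPixelBuffer: buffer)
}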

Create CocoaPod

I started using MetalPetal for video, but have found that VideoComposition is an important part of video processing.

So what do you think about publishing this library on CocoaPods, or making it a subspec of MetalPetal?

Each time video recording is initiated, there is an error `Video inputs: not ready for media data`


Checklist

  • [x] I've read the README
  • [x] If possible, I've reproduced the issue using the master branch of this repo
  • [x] I've searched for existing GitHub issues

Environment

  • MetalPetal Version: latest
  • Integration Method: Pod install
  • Platform & Version: iOS 14.7 / macOS 11.5
  • Device: iPhone XR

Steps to Reproduce

Each time the camera recording session is initiated with a call to startRecording:

    func startRecording() throws {
        let sessionID = UUID()
        let url = FileManager.default.temporaryDirectory.appendingPathComponent("\(sessionID.uuidString).mp4")
        // record audio when permission is given
        let hasAudio = self.camera.audioDataOutput != nil
        let recorder = try MovieRecorder(url: url, configuration: MovieRecorder.Configuration(hasAudio: hasAudio))
        state.isRecording = true
        queue.async {
            self.recorder = recorder
        }
    }  

I receive a sequence of errors:

Video inputs: not ready for media data, dropping sample buffer (t: 135065.272566061).
Video inputs: not ready for media data, dropping sample buffer (t: 135065.339290207).
Video inputs: not ready for media data, dropping sample buffer (t: 135065.572860042).
Video inputs: not ready for media data, dropping sample buffer (t: 135065.639584103).

The consequence is that the first second of the recorded video is "glitchy", as some of the frames have been dropped. This by itself is OK, but sometimes (about 1 out of 20 times) the error Video inputs: not ready for media data, dropping sample buffer is repeated for the entire duration of the recording, with the result that no video is recorded at all.

Expected behavior


Consistently no error on start recording.

Actual behavior


Behavior: some frames are dropped during the first second of every recording. Moreover, once every 20 invocations or so, the camera fails to record at all.

The entirety of the class I use to record video is reproduced below. It's a loose refactor of CapturePipeline found in the sample project.


import Foundation
import SwiftUI
import MetalPetal
import VideoIO
import VideoToolbox
import AVKit


//MARK:- pipeline for rendering effect in video


class MetalPipeline: NSObject, ObservableObject, AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate  {
    
    // depth of cache
    var cacheDepth : Int = 3

    // the rendered image with effect layered in
    @Published var previewImage: CGImage?
    
    // buffer to store recent images
    private var imageBuffer : [CGImage] = []
    private var cachedImage : CGImage?
    
    // default backward facing camera pose
    private var cameraPose : AVCaptureDevice.Position = .back
    
    struct Face {
        var bounds: CGRect
    }
        
    struct State {
        var isRecording: Bool = false
        var isVideoMirrored: Bool = false
    }
    
    @Published private var _state: State = State()
    
    private let stateLock = MTILockCreate()
    
    
    private(set) var state: State {
        get {
            stateLock.lock()
            defer {
                stateLock.unlock()
            }
            return _state
        }
        set {
            stateLock.lock()
            defer {
                stateLock.unlock()
            }
            _state = newValue
        }
    }
    
    private let renderContext = try! MTIContext(device: MTLCreateSystemDefaultDevice()!)
    
    private let queue: DispatchQueue = DispatchQueue(label: "org.metalpetal.capture")
    
    private let camera: Camera = {

        var configurator = Camera.Configurator()
        
        configurator.videoConnectionConfigurator = { camera, connection in
            #if os(iOS)
            switch UIApplication.shared.windows.first(where: { $0.windowScene != nil })?.windowScene?.interfaceOrientation {
            case .landscapeLeft:
                connection.videoOrientation = .landscapeLeft
            case .landscapeRight:
                connection.videoOrientation = .landscapeRight
            case .portraitUpsideDown:
                connection.videoOrientation = .portraitUpsideDown
            default:
                connection.videoOrientation = .portrait
            }
            #else
            connection.videoOrientation = .portrait
            #endif
        }
                            
        // @TODO: make sure you're able to change session cam default cam position
        let session_cam = Camera(captureSessionPreset: .hd1280x720, defaultCameraPosition: .back, configurator: configurator)
        return session_cam
    }()
    
    private let imageRenderer = PixelBufferPoolBackedImageRenderer()
    
    
    private var isMetadataOutputEnabled: Bool = false
    
    private var recorder: MovieRecorder?
    
    //MARK:- effects
    
    // filter effects
    enum Effect: String, Identifiable, CaseIterable {

        case polaroidA = "polaroidA"
        
        var id: String { rawValue }
        
        typealias Filter = (MTIImage, [Face]) -> MTIImage
        
        func makeFilter() -> Filter {

            let filter = MTICoreImageUnaryFilter()
            filter.filter = CIFilter(name: "CIPhotoEffectInstant")
            return { image, faces in
                filter.inputImage = image
                return filter.outputImage!
            }
            
            // return { image, faces in image }
        }
    }

    private var filter: Effect.Filter = { image, faces in image }    

    @Published var effect: Effect = .polaroidA {
        didSet {
            let filter = effect.makeFilter()
            queue.async {
                self.filter = filter
            }
        }
    }
    
    private var faces: [Face] = []

    //MARK:- end effect
    
    override init() {
        super.init()
        try? self.camera.enableVideoDataOutput(on: queue, delegate: self)
        try? self.camera.enableAudioDataOutput(on: queue, delegate: self)
        self.camera.videoDataOutput?.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]

    }
    
    //MARK:- API
    
    func startRunningCaptureSession() {
        queue.async {
            self.camera.startRunningCaptureSession()
        }
    }
    
    func stopRunningCaptureSession() {
        queue.async {
            self.camera.stopRunningCaptureSession()
        }
    }
        
    func startRecording() throws {
        let sessionID = UUID()
        let url = FileManager.default.temporaryDirectory.appendingPathComponent("\(sessionID.uuidString).mp4")
        // record audio when permission is given
        let hasAudio = self.camera.audioDataOutput != nil
        let recorder = try MovieRecorder(url: url, configuration: MovieRecorder.Configuration(hasAudio: hasAudio))
        state.isRecording = true
        queue.async {
            self.recorder = recorder
        }
    }    
    
    func stopRecording(completion: @escaping (Result<URL, Error>) -> Void) {
        if let recorder = recorder {
            recorder.stopRecording(completion: { error in
                self.state.isRecording = false
                if let error = error {
                    completion(.failure(error))
                } else {
                    completion(.success(recorder.url))
                }
            })
            queue.async {
                self.recorder = nil
            }
        } 
    }
    
    // @use: flip the camera
    func flipCamera(){
        switch cameraPose {
        case .front:
            do {
                try self.camera.switchToVideoCaptureDevice(with: .back)
                self.cameraPose = .back
            } catch {
                return
            }

        default:
            do {
                try self.camera.switchToVideoCaptureDevice(with: .front)
                self.cameraPose = .front
            } catch {
                return
            }
        }
    }
    
    //@use: Take picture
    public func snapImage() -> CGImage? {
        if let im = self.previewImage {
            return im
        } else {
            return self.previewImage
        }
    }
    
    
    // @Use: cache the previous frame
    private func cachePreviousImg( _ img: CGImage? ){
        self.cachedImage = img;
    }
    
    //@use: cache multiple images in buffer
    private func cacheInBuffer(){
        if let m = self.previewImage {
            imageBuffer.append(m)
        }
    }
    
    //MARK:- render filtered image delegate

    // @note: this is a delegate fn that gets called. and is outputting rendered image
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

        guard let formatDescription = sampleBuffer.formatDescription else {
            return
        }
        
        switch formatDescription.mediaType {
        case .audio:
            do {
                try self.recorder?.appendSampleBuffer(sampleBuffer)
            } catch {
                print("captureOutput audio error: ", error)
            }
        case .video:
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            do {
                let image = MTIImage(cvPixelBuffer: pixelBuffer, alphaType: .alphaIsOne)
                let filterOutputImage = self.filter(image, faces)
                let outputImage = self.state.isVideoMirrored ? filterOutputImage.oriented(.upMirrored) : filterOutputImage
                let renderOutput = try self.imageRenderer.render(outputImage, using: renderContext)
                try self.recorder?.appendSampleBuffer(SampleBufferUtilities.makeSampleBufferByReplacingImageBuffer(of: sampleBuffer, with: renderOutput.pixelBuffer)!)
                DispatchQueue.main.async {

                    // output rendered image and cache image in buffer
                    self.cachedImage = self.previewImage
                    self.previewImage = renderOutput.cgImage
                    
                }
            } catch {
                print("captureOutput video error: ", error)
            }
        default:
            break
        }
    }
    
}




//MARK:- ios delegates

#if os(iOS)

extension MetalPipeline: AVCaptureMetadataOutputObjectsDelegate {
    func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
        var faces = [Face]()
        for faceMetadataObject in metadataObjects.compactMap({ $0 as? AVMetadataFaceObject}) {
            if let rect = self.camera.videoDataOutput?.outputRectConverted(fromMetadataOutputRect: faceMetadataObject.bounds) {
                faces.append(Face(bounds: rect.insetBy(dx: -rect.width/4, dy: -rect.height/4)))
            }
        }
        self.faces = faces
    }
}

#endif


Video artifacts and reduced video size after export

Hi!

After exporting videos using VideoIO, I am seeing artifacts in the video, even without applying any filters. Additionally, when using the same codec (in my case, .hevc), the video size is reduced by half. I am looking for suggestions on how to improve the video quality.

Thanks and sorry if this is an amateur beginner question.

var configuration = AssetExportSession.Configuration(
    fileType: fileType,
    videoSettings: .hevc(
        videoSize: renderSize
    ),
    audioSettings: .aac(
        channels: 2,
        sampleRate: 44100,
        bitRate: 128 * 1000
    )
)
configuration.videoComposition = nil
configuration.audioMix = audioMix

self.exportSession = try AssetExportSession(
    asset: asset,
    outputURL: outputURL,
    configuration: configuration
)

Focus hunting on .builtInWideAngleCamera on iPhone 12 pro

Hi, thanks again for the very useful VideoIO; I'm using it in conjunction with MetalPetal.
I'm having a problem with focus hunting, currently only on the .builtInWideAngleCamera of an iPhone 12 Pro on iOS 14.6.

I'm setting up the wide angle camera using

try self.camera.switchToVideoCaptureDevice(with: .back, preferredDeviceTypes: [.builtInWideAngleCamera])

and setting continuous auto focus with

queue.async {
    let device = self.camera.videoDevice!
    do {
        try device.lockForConfiguration()
        if device.isFocusPointOfInterestSupported && device.isFocusModeSupported(.continuousAutoFocus) {
            device.focusPointOfInterest = focusPoint
            device.focusMode = .continuousAutoFocus
        }
        if device.isSmoothAutoFocusSupported {
            device.isSmoothAutoFocusEnabled = true
        }
        device.unlockForConfiguration()
    } catch {
        print(error)
    }
}
Using the same setup on the telephoto lens works perfectly, but on the wide-angle camera I get the focus hunting. Any clues as to what might be causing this?

How to use multiple video assets in this code example.

let context = try! MTIContext(device: MTLCreateSystemDefaultDevice()!)
let handler = MTIAsyncVideoCompositionRequestHandler(context: context, tracks: asset.tracks(withMediaType: .video)) { request in
    return FilterGraph.makeImage { output in
        request.anySourceImage => filterA => filterB => output
    }!
}
let composition = VideoComposition(propertiesOf: asset, compositionRequestHandler: handler.handle(request:))
let playerItem = AVPlayerItem(asset: asset)
playerItem.videoComposition = composition.makeAVVideoComposition()
player.replaceCurrentItem(with: playerItem)
player.play()
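
A hedged sketch of one way to handle multiple assets: merge them into an AVMutableComposition first, then build the VideoComposition and player item from the merged composition instead of a single asset (track selection and error handling are simplified).

func makeSequencedComposition(of assets: [AVAsset]) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    let videoTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)!
    var cursor = CMTime.zero
    for asset in assets {
        // Append each asset's first video track back to back.
        guard let sourceTrack = asset.tracks(withMediaType: .video).first else { continue }
        try videoTrack.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration), of: sourceTrack, at: cursor)
        cursor = CMTimeAdd(cursor, asset.duration)
    }
    return composition
}

// Then use the merged composition in place of `asset` above:
// let merged = try makeSequencedComposition(of: [assetA, assetB])
// let handler = MTIAsyncVideoCompositionRequestHandler(context: context, tracks: merged.tracks(withMediaType: .video)) { request in ... }
// let composition = VideoComposition(propertiesOf: merged, compositionRequestHandler: handler.handle(request:))
// let playerItem = AVPlayerItem(asset: merged)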

Cannot Encode Media on AssetExportSession

Sometimes I'm facing an issue where, for an input .mp4/.mov file, the export returns the error "Cannot Encode Media".

Attaching video examples which cause this kind of error.
Also, here is my config:
func getConfiguration(for asset: AVAsset) -> AssetExportSession.Configuration? {
    guard let videoTrack = asset.tracks(withMediaType: .video).first else {
        return nil
    }

    let estimatedSize = __CGSizeApplyAffineTransform(videoTrack.naturalSize, videoTrack.preferredTransform)
    let size: CGSize
    if abs(estimatedSize.width) > abs(videoTrack.naturalSize.height) {
        size = .init(width: 1280, height: 720)
    } else {
        size = .init(width: 720, height: 1280)
    }

    if (asset as? AVURLAsset)?.url.pathExtension.lowercased() == GlobalConstants.Video.mediaFormatType,
       videoTrack.nominalFrameRate <= Float(GlobalConstants.Video.fps + 1),
       videoTrack.naturalSize == size,
       videoTrack.estimatedDataRate <= Float(GlobalConstants.Video.averageBitRate) {
        return nil
    }

    var audioSampleRate: Double = 44100
    var audioBitrate = 128 * 1000
    var numberOfChannels: Int = 2

    if let audioTrack = asset.tracks(withMediaType: .audio).first,
       let formatDescription = audioTrack.formatDescriptions.first,
       let basic = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription as! CMAudioFormatDescription) {
        audioSampleRate = basic.pointee.mSampleRate
        audioBitrate = Int(audioTrack.estimatedDataRate)
        numberOfChannels = Int(basic.pointee.mChannelsPerFrame)
    }

    return .init(fileType: .mp4,
                 videoSettings: .h264(videoSize: size, averageBitRate: min(GlobalConstants.Video.averageBitRate, Int(videoTrack.estimatedDataRate))),
                 audioSettings: .aac(channels: numberOfChannels, sampleRate: audioSampleRate, bitRate: audioBitrate))
}

Testing device: iPhone Xs, iOS 14.5.1
Pod version: 2.2.0

Image.from.iOS.2.mp4
Image.from.iOS.2.mov

Frames no longer being appended

Hi @YuAo

Thanks as ever for your thoughts. This is a little more of a question than an issue, although it's possible some code changes are needed.

I'm having an issue where a video of the correct length is recorded, but after a certain amount of time the frames stop changing. No errors seem to be thrown (or at least I'm not capturing them).

I'm wondering if this could be an issue with wrapping source time related to the following line in MultitrackMovieRecorder, as opposed to using CMTime.zero?

self.assetWriter.startSession(atSourceTime: presentationTime)

This is the text from the docs for this method:

In the case of the QuickTime movie file format, the first session begins at movie time 0, so a sample
you append with timestamp T plays at movie time (T-startTime). The writer adds samples with
timestamps earlier than the start time to the output file, but they don’t display during playback.

Thanks as ever!

Unable to set Frame rate to 60fps for 'vide'/'x420' 3840x2160

Hi
I'm using VideoIO in conjunction with your terrific MetalPetal and I have a problem.
When I use Camera.Configurator() to set up the camera and then search for the available formats using
print("Available device formats are \(self.camera.videoDevice!.formats)")

I think this is due to the use of the AVCaptureDevice.DiscoverySession in VideoIO Camera.swift.

Is there a way to still use VideoIO to set up the camera device and get access to the other formats?

Thanks and sorry if this is an amateur beginner question.
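
As a hedged sketch (plain AVFoundation, not VideoIO API): since the underlying AVCaptureDevice is reachable via camera.videoDevice, you can enumerate device.formats yourself and activate a 4K 60 fps format directly. Note that setting a session preset afterwards may override the active format.

func selectFourK60(on device: AVCaptureDevice) throws {
    // Look for a 3840x2160 format whose frame rate ranges allow 60 fps.
    guard let format = device.formats.first(where: { format in
        let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        let supports60fps = format.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 60 }
        return dimensions.width == 3840 && dimensions.height == 2160 && supports60fps
    }) else { return }

    try device.lockForConfiguration()
    device.activeFormat = format
    device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 60)
    device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 60)
    device.unlockForConfiguration()
}

// Usage, assuming a configured Camera: try selectFourK60(on: self.camera.videoDevice!)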

AssetExportSession still exist in memory after cancel

During debugging I've observed the following issue.

  1. AssetExportSession.export
  2. AssetExportSession.cancel
  3. maybe repeat 1 and 2 several times.

As a result, the videoInput.requestMediaDataWhenReady(on: self.queue) { [weak self] in block at line 307 is still running, but self is nil.

Timer Label Overlay

Hi!
Thanks for the awesome Utilities!

Could you please clarify how I can add a timer label (for example, a UILabel with a custom color and font) on top of the camera preview, update its text (for example, every second), add a UIImage, and eventually record a video?
