ruanjx / videolab
High-performance and flexible video editing and effects framework, based on AVFoundation and Metal.
License: MIT License
When adding 11 images, memory spikes to 1 GB.
Editing video at a linear speed is easy, but now I want to implement a curved or ramped speed, such as one derived from a Bezier path. Any help will be appreciated!
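A ramped speed can be modeled as a time remap along an easing curve. The sketch below is not part of VideoLab; all names are hypothetical. It maps each normalized output time to a normalized source time using a cubic Bezier easing (the same parameterization as CSS `cubic-bezier`), inverting the monotonic x-component by bisection:

```swift
import Foundation

// Sketch only: this is not VideoLab API. A ramped (non-linear) speed
// modeled as a time remap: for each normalized output time u in [0, 1],
// look up the normalized source time on a cubic Bezier easing curve.
struct BezierTimeRemap {
    // Control points of a cubic-bezier easing; endpoints are fixed
    // at (0, 0) and (1, 1), as in CSS cubic-bezier.
    let p1: (x: Double, y: Double)
    let p2: (x: Double, y: Double)

    private func bezier(_ t: Double, _ a: Double, _ b: Double) -> Double {
        let s = 1 - t
        return 3 * s * s * t * a + 3 * s * t * t * b + t * t * t
    }

    /// Maps normalized output time (0...1) to normalized source time.
    func sourceTime(forOutputTime u: Double) -> Double {
        // x(t) is monotonic when p1.x and p2.x are in [0, 1],
        // so invert it with a simple bisection.
        var lo = 0.0, hi = 1.0
        for _ in 0..<60 {
            let mid = (lo + hi) / 2
            if bezier(mid, p1.x, p2.x) < u { lo = mid } else { hi = mid }
        }
        let t = (lo + hi) / 2
        return bezier(t, p1.y, p2.y)
    }
}

// Ease-in-out ramp: slow at the start and end, fast in the middle.
let ramp = BezierTimeRemap(p1: (x: 0.42, y: 0.0), p2: (x: 0.58, y: 1.0))
```

Sampling this curve per output frame and feeding the result into whatever performs the time mapping (for example `AVMutableComposition`'s `scaleTimeRange`, applied piecewise, or a custom compositor) is one way to approximate a ramp; VideoLab itself may need changes to accept a per-frame remap.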
delete pls
I have a RenderComposition with a video RenderLayer. I want to mix an audio file with this video.
I tried to create another RenderLayer with an AVAsset created from an AAC file and add it to my RenderComposition. After that, I could hear two audio streams from my video and audio files together, but the image disappeared. I got a black screen instead of my video.
Also, the length of the resulting video equals the length of the AAC file, even though I set a shorter CMTimeRange.
So, how do I accomplish this? Is this supported by VideoLab? If not, will it be supported and how can I work around this problem now?
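One observation on the duration problem: if the exported length ends up equal to the AAC file's length despite a shorter CMTimeRange, the composition duration is likely being taken as the latest end time across all layers. The sketch below uses stand-in types, not VideoLab's real API, to show the workaround of clamping the audio layer's time range to the video layer's before adding it:

```swift
import Foundation

// Stand-in types, not VideoLab's API. If the exported duration is the
// longest layer's end time, clamp the audio layer's time range to the
// video layer's before adding it to the composition.
struct TimeRange {
    var start: Double      // seconds
    var duration: Double   // seconds
    var end: Double { start + duration }
}

/// Clamps `audio` so it never extends past `video`.
func clamp(_ audio: TimeRange, to video: TimeRange) -> TimeRange {
    let start = max(audio.start, video.start)
    let end = min(audio.end, video.end)
    return TimeRange(start: start, duration: max(0, end - start))
}

/// Composition duration as the latest end time across all layers.
func compositionDuration(_ layers: [TimeRange]) -> Double {
    layers.map { $0.end }.max() ?? 0
}
```

With real `CMTimeRange` values, `CMTimeRangeGetIntersection` performs the same clamp.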
Can I hire you for a video editing app?
Contact me please: live:.cid.83f046ffe2932e9d
Transitions
I've reviewed the demo project and see the limited options for Transitions available. I would like to expand on this and implement full support for Transitions in the framework, but am having trouble finding the best approach. A lack of technical documentation also makes this more challenging.
My initial thought was to add a TransitionLayer which would hold any RenderLayers you'd like the Transition applied to. But this felt too restrictive. Ex: what if you want different Transitions for each cut within your composition? I think this is a dead-end idea, but I'm happy to hear other ways it might work at this level.
My next idea was to add the Transition as a property directly on RenderLayer. The Transition would be applied / rendered when the current RenderLayer's content finishes playback. This felt like the best approach, but I'm curious whether I'm overlooking any downsides.
The biggest issue is that even after adding these classes, I haven't been able to figure out how to correctly apply the filter effects to the composition. The LayerCompositor class seemed like the obvious place, but again, without any technical docs it all remains a bit cryptic for me at the moment.
I would love if I could get some input from @ruanjx here or anyone else who's used this extensively.
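To make the second approach concrete, here is a minimal sketch of the "transition as a RenderLayer property" idea. All names are hypothetical, not VideoLab's API; the point is that each layer owns the transition out of its own content, so every cut in the composition can use a different transition:

```swift
import Foundation

// Hypothetical types sketching the "transition as a property" approach;
// none of these exist in VideoLab.
enum TransitionKind {
    case none
    case crossDissolve(duration: Double)
    case wipe(duration: Double)
}

struct LayerSketch {
    var start: Double        // seconds
    var duration: Double     // seconds
    var transitionOut: TransitionKind = .none
    var end: Double { start + duration }
}

/// Returns the time window (in seconds) during which `layer` should be
/// blended with the following layer, or nil if there is no transition.
func transitionWindow(of layer: LayerSketch) -> ClosedRange<Double>? {
    switch layer.transitionOut {
    case .none:
        return nil
    case .crossDissolve(let d), .wipe(let d):
        return (layer.end - d)...layer.end
    }
}
```

A compositor (LayerCompositor would be the natural place) could then check per frame whether the current time falls inside a layer's transition window and, if so, render both layers and blend them with the transition's fragment shader. One downside to watch for: the next layer's time range must overlap the window, or its source won't be loaded yet when the blend starts.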
In the reverse-engineering write-up it's mentioned that generating textures from an IOSurface performs better. Why is that? Why not use CVMetalTextureCache?
How could I use it in macOS development?
Background:
I see that RenderLayer can be given a transform to move, rotate, and scale it.
let renderLayer1 = RenderLayer(timeRange: timeRange, source: source)
var transform = Transform(center: center, rotation: 0, scale: 0.5)
renderLayer1.transform = transform
Expected:
Add a way to crop a region of the frame, for example:
let renderLayer1 = RenderLayer(timeRange: timeRange, source: source)
var transform = Transform(center: center, rotation: 0, scale: 0.5)
renderLayer1.transform = transform
/// ------
renderLayer1.cropFrame = CGRect(x: 0, y: 0, width: 300, height: 300) //< proposed crop API
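Note that `cropFrame` above is only a proposed API; it does not exist in VideoLab. One way a renderer could implement it is to convert a crop rect given in source pixels into the normalized [0, 1] texture coordinates that the fragment shader samples. A minimal sketch, with stand-in types:

```swift
import Foundation

// Sketch of how a hypothetical `cropFrame` could be implemented:
// convert a crop rect in source pixels into normalized [0, 1]
// texture coordinates for the fragment shader to sample.
struct NormalizedRect {
    var x, y, width, height: Double
}

func normalizedCrop(pixelRect: (x: Double, y: Double, width: Double, height: Double),
                    sourceSize: (width: Double, height: Double)) -> NormalizedRect {
    NormalizedRect(x: pixelRect.x / sourceSize.width,
                   y: pixelRect.y / sourceSize.height,
                   width: pixelRect.width / sourceSize.width,
                   height: pixelRect.height / sourceSize.height)
}
```

The shader would then remap its texture coordinates into this sub-rect before sampling, so only the cropped region is drawn.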
Error:
Fatal error: Could not create render pipeline state for vertex:oneInputVertex, fragment:lookupFragment, error:Error Domain=CompilerError Code=2 "reading from a rendertarget is not supported" UserInfo={NSLocalizedDescription=reading from a rendertarget is not supported}
What is wrong?
func transition2Demo() -> VideoLab {
// 1.1 LayerGroup1
var timeRange = CMTimeRange(start: CMTime.zero, duration: CMTime(seconds: 5, preferredTimescale: 600))
let layerGroup1 = RenderLayerGroup(timeRange: timeRange)
// Add sub-renderLayer1
var image = UIImage(named: "image1.JPG")
var imageSource = ImageSource(cgImage: image?.cgImage)
imageSource.selectedTimeRange = CMTimeRange(start: CMTime.zero, duration: timeRange.duration)
timeRange = imageSource.selectedTimeRange
let renderLayer1 = RenderLayer(timeRange: timeRange, source: imageSource)
var center = CGPoint(x: 0.5, y: 0.5)
// Add rotation
let rotation = GLKMathDegreesToRadians(15)
var transform = Transform(center: center, rotation: rotation, scale: 0.15)
renderLayer1.transform = transform
// Add sub-renderLayer2
var url = Bundle.main.url(forResource: "video1", withExtension: "MOV")
var asset = AVAsset(url: url!)
var source = AVAssetSource(asset: asset)
source.selectedTimeRange = CMTimeRange(start: CMTime.zero, duration: timeRange.duration)
timeRange = source.selectedTimeRange
let renderLayer2 = RenderLayer(timeRange: timeRange, source: source)
center = CGPoint(x: 0.25, y: 0.25)
transform = Transform(center: center, rotation: rotation, scale: 0.5)
renderLayer2.transform = transform
.........
return videoLab
}
Hey, love this library!
How would you go about adding multiple text layers (for example captions) to the composition?
The composition can only take a single animationLayer value, so there's no place to add an array.
My thoughts are to create a base CALayer, then add all of the subsequent text animation layers to that layer, adjust their animation (from and duration values), and then just assign that base layer to composition.
It hasn't worked for me yet, but is that the right approach at all?
Thanks!
Hi @ruanjx ,
First of all, thank you for your effort; this is an awesome framework that makes working with AVFoundation easier.
Secondly, I have a question about muting the original audio of an AVAsset and adding custom audio to a RenderComposition. I already tried to mute videoLayer with an audioConfiguration like this:
var audioConfiguration = AudioConfiguration()
let volumeRampTimeRange = source.selectedTimeRange
let volumeRamp1 = VolumeRamp(startVolume: 0.0, endVolume: 0.0, timeRange: volumeRampTimeRange)
audioConfiguration.volumeRamps = [volumeRamp1]
videoLayer.audioConfiguration = audioConfiguration
and then added a new RenderLayer for the custom audio (an MP3 file), named audioLayer, and passed it to the render composition like this:
let composition = RenderComposition()
composition.layers = [videoLayer, audioLayer]
The videoLayer sound plays for 1 second and then gets muted. The audioLayer audio works well, but there is no video in the composition at all; it becomes a black screen.
Actually, I couldn't find how to manage audio files in your documentation.
Any help will be appreciated.
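As a sanity check on the "mutes after 1 second" symptom, here is how a volume ramp covering the whole selected time range should behave. The types below are stand-ins, not VideoLab's API; a ramp from 0.0 to 0.0 over the layer's full range should keep it silent for its entire duration:

```swift
import Foundation

// Stand-in VolumeRamp, not VideoLab's API: a linear interpolation of
// volume across a time range. A 0.0 -> 0.0 ramp over the whole layer
// should mute it for the full duration, not just the first second.
struct VolumeRampSketch {
    var startVolume: Float
    var endVolume: Float
    var start: Double      // seconds
    var duration: Double   // seconds

    /// Volume at `time`, or nil if the ramp does not cover that time.
    func volume(at time: Double) -> Float? {
        guard duration > 0, time >= start, time <= start + duration else { return nil }
        let t = Float((time - start) / duration)
        return startVolume + (endVolume - startVolume) * t
    }
}
```

If the real ramp only holds for 1 second, it's worth checking that `source.selectedTimeRange` was already set to the full range at the moment the ramp was created, since the ramp captures the time range by value.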
// 1. Layer 1
var url = Bundle.main.url(forResource: "video1", withExtension: "MOV")
var asset = AVAsset(url: url!)
var source = AVAssetSource(asset: asset)
source.selectedTimeRange = CMTimeRange(start: CMTime.zero, duration: asset.duration)
var timeRange = source.selectedTimeRange
let renderLayer1 = RenderLayer(timeRange: timeRange, source: source)
Does RenderLayer also work with audio assets? Is there a demo showing this?
I know how to create a video from images using AVFoundation, but exporting the video with AVFoundation takes time. I want to make the export faster, so I'd like to use Metal to export a video from UIImages. Is there any way to achieve this with this framework?
In the demo, memory is not released after the video finishes playing, and it's still not released after navigating back to the list page.
VideoLab/MetalRendering.swift:113: Fatal error: Could not create render pipeline state for vertex:blendOperationVertex, fragment:blendOperationFragment, error:Error Domain=CompilerError Code=2 "reading from a rendertarget is not supported" UserInfo={NSLocalizedDescription=reading from a rendertarget is not supported}
let tempFile = TemporaryMediaFile(withData: video.asset)
if let asset = tempFile.avAsset {
let resource = AVAssetSource(asset: asset)
resource.selectedTimeRange = CMTimeRange(start: CMTime.zero, duration: asset.duration)
var timeRange = resource.selectedTimeRange
lastVideo = resource
let renderLayer1 = RenderLayer(timeRange: timeRange, source: resource)
// 2. Composition
let composition = RenderComposition()
composition.renderSize = CGSize(width: 1280, height: 720)
composition.layers = [renderLayer1]
// 3. VideoLab
let videoLab = VideoLab(renderComposition: composition)
// 4. Make playerItem
let playerItem = videoLab.makePlayerItem()
// self.renderLayers.append(renderLayer1)
}
Editing video at a linear speed is easy, but now I want to implement curved or ramped slow motion, such as one derived from a Bezier path. Any help will be appreciated!
My app crashes when I tap save while the video is playing. Saving works fine when the video is not playing or has finished playing. I want to be able to save the video while it is playing on screen.
I need to render only a portion of the video to the screen. Is that possible? Thanks.
Is it possible to make the video aspect-fill or aspect-fit the render size?
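Since the Transform shown in the demos takes a scale, aspect-fit and aspect-fill can be expressed as choosing that scale from the source and render sizes. The helpers below are not part of the library, and whether RenderLayer's scale is relative to the render size is an assumption here, so treat this as a starting point:

```swift
import Foundation

// Sketch only: these helpers are not part of VideoLab. Aspect-fit /
// aspect-fill reduced to picking a scale factor from the source and
// render dimensions.
enum ContentMode { case aspectFit, aspectFill }

func scale(for mode: ContentMode,
           source: (width: Double, height: Double),
           render: (width: Double, height: Double)) -> Double {
    let sx = render.width / source.width
    let sy = render.height / source.height
    switch mode {
    case .aspectFit:  return min(sx, sy)  // whole frame visible, may letterbox
    case .aspectFill: return max(sx, sy)  // fills render size, may crop edges
    }
}
```

With the Transform API from the demos, something like `Transform(center: CGPoint(x: 0.5, y: 0.5), rotation: 0, scale: computedScale)` would then center the layer at the chosen scale, assuming scale is interpreted relative to the render size.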
Hi, I pick three videos from the photo library; the URLs look like this: file:///var/mobile/Media/DCIM/105APPLE/IMG_1999.MOV.
AVURLAsset *urlAsset = (AVURLAsset *)asset;
NSURL *url = urlAsset.URL;
let asset = AVAsset(url: url)
let source = AVAssetSource(asset: asset);
The code after that is exactly the same as simpleDemo(), but in the end only one of the three videos plays. Does VideoLab not support photo-library assets? Your demos only load resources bundled with the project.
I'd like to understand what causes this issue.