gpuimage2's People

Contributors

andrewcampoli, aokholm, bradlarson, datskos, dylanmoo, gillygize, lyb0307, manc1, mz2, nickager, ollie-hpcnt, rounak, shivahuang, wallerdev, zubco

gpuimage2's Issues

OperationGroup not working

Hi Brad,

I'm trying your sample code:
let boxBlur = BoxBlur()
let contrast = ContrastAdjustment()
let myGroup = OperationGroup()
myGroup.configureGroup { input, output in
    input --> boxBlur --> contrast --> output
}

"let myGroup = OperationGroup()" always gives me the error "'OperationGroup' cannot be constructed because it has no accessible initializers". Not sure if I missed anything or if the class has not been completed. Thanks.

Asynchronous Processing causes black image

Hi,
So I've recently been playing around and getting to know everything, and I've run into an issue where trying to process an image with this filter combo produces a black image.

I was hoping you might be able to shed some light on this.

let saturation = SaturationAdjustment()
saturation.saturation = self.saturation

let pictureInput = PictureInput(image: self.image!)
let pictureOutput = PictureOutput()

pictureOutput.imageAvailableCallback = { [weak self] image in
    if let me = self {
        for view in me.backImageViews {
            view.image = image
        }
    }
}

pictureInput --> saturation --> blurFilter --> pictureOutput
pictureInput.processImage()

Hough Transform for Line Detection

It was in the first GPUImage but hasn't been ported over yet. This would also require the Parallel Coordinates transform be ported over.

Add ACV Support

I don't see any support for ACV curve files. GPUImage 1 had:
- (id)initWithACV:(NSString*)curveFilename;

Resize?

Is there any option to resize the canvas when using the Transform operation to resize the image with Matrix4x4(CGAffineTransformMakeScale(..., ...))? The image gets resized, but the surrounding canvas stays the same.

removeAllTargets does not completely remove target Unsharp Mask

The following Swift 3 iOS app demonstrates that after removeAllTargets is called, subsequent executions of a pipeline that includes UnsharpMask fail in various ways and end up doubling the framebuffers coming out of the pipeline.

https://github.com/mikebikemusic/Animate

The list of problems that occur in this app:

  1. Adding unsharpMask to the pipeline slows down the rendering immensely.
  2. The PictureInput instance has to be re-created for each frame of the simulated movie. I would much rather create one instance and be allowed to feed it one image after another.
  3. To do this safely, I had to call removeAllTargets on the previous instance of PictureInput.
  4. After the first frame, the output of RawDataOutput gets called twice. This is either a bug in removeAllTargets or in UnsharpMask.
  5. If I take UnsharpMask out of the pipeline, I don't get the doubling of RawDataOutput.
  6. Because of the doubling bug, I cannot determine the safest time to start a new pipeline, so I end up calling processOneFrame more often than I should.
  7. The doubling bug occurs even if I move unsharpMask to a later stage of the pipeline.

simpleVideoRecorder example not working

Hi,

I found that the simpleVideoRecorder example is not working.

When I press the record button, the screen flashes once and then nothing happens: no recording, the button label doesn't change to "Stop", and no video appears in the photo library. My device is an iPad mini 2.
Here is the screencast: https://streamable.com/96v2

After some logging, I found that when the record button is pressed, both the if branch and the else branch of the capture function are called. See below:

@IBAction func capture(sender: AnyObject) {

        print("CLICKED") // Called

        if (!isRecording) {
            do {
                self.isRecording = true
                let documentsDir = try NSFileManager.defaultManager().URLForDirectory(.DocumentDirectory, inDomain:.UserDomainMask, appropriateForURL:nil, create:true)
                let fileURL = NSURL(string:"test.mp4", relativeToURL:documentsDir)!
                do {
                    try NSFileManager.defaultManager().removeItemAtURL(fileURL)
                } catch {
                }

                movieOutput = try MovieOutput(URL:fileURL, size:Size(width:480, height:640), liveVideo:true)
                camera.audioEncodingTarget = movieOutput
                filter --> movieOutput!
                movieOutput!.startRecording()
                (sender as! UIButton).titleLabel?.text = "Stop"

                print("RECORDING!") // Called

            } catch {

                print("ERROR!!!!!") // Not Called

                fatalError("Couldn't initialize movie, error: \(error)")
            }
        } else {
            movieOutput?.finishRecording{
                self.isRecording = false
                dispatch_async(dispatch_get_main_queue()) {
                    (sender as! UIButton).titleLabel?.text = "Record"

                    print("STOPPPPPPP!!!") // Called, why?
                }

                print("STOPPED!!!") // Called, why?

                self.camera.audioEncodingTarget = nil
                self.movieOutput = nil
            }
        }
    }

Has anyone had the same problem? Any idea how to fix it?

Stan

Xcode errors out when using ImageConsumer

Hi Brad,

I'm getting this error when trying to set up my own class that conforms to ImageConsumer:

"Source Container cannot be constructed because it has no accessible initializers"

(screenshot: 2016-05-05 22:46:04)

All filters creating a red image

I'm running Xcode 7.3.1. I've triple-checked that I've followed the setup instructions.

Any filter I run, via any method (filterWithOperation, filterWithPipeline, or using PictureOutput()), produces an outputImage that is always just a red fill.

Maybe I'm doing something wrong, but maybe others are running into this?

HighlightShadowTintFilter does not work

I use the HighlightAndShadowTint filter and change the value of the shadowTintIntensity property, but nothing happens.

override func loadView() {
    super.loadView()

    picture = PictureInput(image: UIImage(named: "WID-small.jpg")!)
    filter = HighlightAndShadowTint()
    picture --> filter --> renderView
    picture.processImage()
}

@IBAction func updateValue(sender: AnyObject) {
    filter.shadowTintIntensity = shadowIntensitySlider.value
    picture.processImage()

    print(filter.shadowTintIntensity)
}

Camera continues to call cameraOutput after stopCapture

If you have benchmarking turned on, you can see in the logs that frames are still coming in after calling stopCapture. Even without benchmarking, targets are still being given frames to process. While in this state, it is unsafe to nil the camera instance, as deinit will run without capture having been stopped.

Swift 3 ->OperationGroup.swift error

Xcode 8.0 beta 5 (8S193k)
GPUImage2/framework/Source/OperationGroup.swift:16:78: Expected type

GPUImage2/framework/Source/OperationGroup.swift:17:32: Argument passed to call that takes no arguments
GPUImage2/framework/Source/OperationGroup.swift:16:78: Expected ',' separator

Session won't stop and start seamlessly if a filter is in the middle

If I pause the session using stopCapture() to change a filter and then try to start it again, the RenderView continues to show the still from when capture stopped and never gets going again.

If I pipe the camera straight into the view it stops and starts again with no problem.

I'm using a simple crop and transform filter. I call removeAllTargets on the camera and create the same pipeline again, but it never kicks off. Do we have to remove the camera and start completely from scratch?
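
For reference, a minimal sketch of the teardown-and-rebuild sequence in question, assuming the removeAllTargets/stopCapture/startCapture calls and the --> operator shown in other snippets in this thread behave as named (Crop and TransformOperation are assumed operation names):

```swift
// Hypothetical sketch: pause capture, rebuild the filtered pipeline, restart.
camera.stopCapture()
camera.removeAllTargets()

let crop = Crop()                     // assumed operation name
let transform = TransformOperation()  // assumed operation name
camera --> crop --> transform --> renderView

camera.startCapture()
```

Whether this alone is sufficient, or the Camera instance must be recreated, is exactly what this issue asks.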

Errors compiling the framework

Pipeline.swift could not compile. I added the GPUImage-iOS.xcodeproj as a framework under Target Dependencies and also under Link Binary with Libraries. Was there something else to be done besides that?

How can I record?

I have a web view in a view controller. I want to record only the web view, not the full screen. How can I do that with Swift? Help me, please.

Take care of Swift 3

Hi Brad, thanks for your Objective-C version; I worked with it a lot. Please take care of Swift 3 and iOS 10 in this version. Thanks for your great help.

Multiple Outputs

I'm sure you have a ton on your plate with this rewrite, but I thought I'd suggest a feature I'd love to see: one-to-many or many-to-one piping. That is, starting with a single video file, output several video files; or starting with several video files, composite them into one video, similar to AVComposition.

720x1244 movie input gets rasterized as 280x480

High-definition video made on an iPhone in portrait orientation is screwed up because movie frames are not being scaled down properly. Put a breakpoint in MovieInput's processMovieFrame and inspect bufferHeight and bufferWidth.

My Git setup decided to commit only the Derived Data folder, so I'm linking to a zipped project that reproduces this:

import UIKit
import GPUImage

class ViewController: UIViewController {
    private var movieFile: MovieInput?
    @IBOutlet weak var renderView: RenderView!

    override func viewDidLoad() {
        super.viewDidLoad()
        do {
            movieFile = try MovieInput(url: NSBundle.mainBundle().URLForResource("IMG_2809", withExtension: "mov")!)
            movieFile! --> renderView
            movieFile?.start()
        } catch {
            print("Unable to play")
        }
    }
}

https://drive.google.com/open?id=0Bz6FPHkUDa41LWRuMzZGZ0VCV00

Apply CIFilter to frames in a Live video feed?

Is there a way to apply a Core Image filter to every frame and have the output go to a RenderView?

Also, is there a way to get a CMSampleBuffer back from the newly filtered frame to pass into my custom encoder?

Thanks again for a great library.

GPUImage, real time camera

How do I make a camera work in real time? The task is to pass the image from GPUImage's Camera to my own function for further processing, for example with TesseractOCR.
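
One hedged way to hand each processed frame to your own code is the PictureOutput callback used in other issues in this thread; recognizeText(_:) below is a hypothetical stand-in for a TesseractOCR call, and camera/filter are assumed to be set up as in those examples:

```swift
// Sketch: feed filtered camera frames to a custom processing function.
// Camera, PictureOutput, imageAvailableCallback, and --> all appear in
// other snippets in this thread; recognizeText(_:) is hypothetical.
let output = PictureOutput()
output.onlyCaptureNextFrame = false
output.imageAvailableCallback = { image in
    recognizeText(image)  // e.g. pass the UIImage to TesseractOCR
}
camera --> filter --> output
camera.startCapture()
```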

How do I swap to the front-facing camera?

I have been trying to swap to the front-facing camera for a while now. Can anyone help me with this, and also with zoom and flash? It would be much appreciated.
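
For reference, the Camera initializer shown in other snippets in this thread takes a location parameter set to .BackFacing; a sketch of switching cameras, assuming a corresponding .FrontFacing case exists:

```swift
// Hypothetical sketch: rebuild the pipeline around a front-facing Camera.
// The initializer signature and .BackFacing appear in other snippets here;
// .FrontFacing is an assumed enum case.
camera.stopCapture()
camera.removeAllTargets()

let frontCamera = try Camera(sessionPreset: AVCaptureSessionPreset640x480,
                             cameraDevice: nil,
                             location: .FrontFacing,  // assumed
                             captureAsYUV: true)
frontCamera --> filter --> renderView
frontCamera.startCapture()
```

Zoom and flash are not covered by this sketch; they would presumably require access to the underlying AVCaptureDevice, which another issue in this thread notes is not public.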

Switching filters in real-time

Hi,

I just wanted to confirm the proper steps to change filters in real time. With the first GPUImage, one had to remove targets and then add the targets of the new filter they wanted to display.

Thanks
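
In case it helps frame the question, a sketch of that GPUImage 1 pattern translated to the operators used in this repo (removeAllTargets appears in other issues in this thread; SepiaToneFilter is an assumed operation name, and camera/renderView are assumed to be set up as in those examples):

```swift
// Sketch: swap the active filter while the camera keeps running.
camera.removeAllTargets()
oldFilter.removeAllTargets()

let newFilter = SepiaToneFilter()  // assumed operation name
camera --> newFilter --> renderView
```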

Memory leaking or never deallocated

Hello! I've been loving this library -- it makes everything much easier.

One quick thing I noticed, though: I'm using a PictureOutput to write individual frames to disk, but my memory usage never decreases after writing...

Thanks for your help!

some code:

class ImageSaver: NSOperation {

    let imageData: NSData
    let url: NSURL

    init(imageData: NSData, url: NSURL) {
        self.imageData = imageData
        self.url = url
    }

    override func main() {
        if self.cancelled { return }

        if imageData.length > 0 {
            #if DEBUG
                print("Writing image to \(url)")
            #endif
            do {
                try imageData.writeToURL(url, options: .DataWritingAtomic)
            } catch {
                print("Error writing image: \(error)")
            }
        }
    }
}

// In my viewDidLoad() function
do {
    renderView.orientation = UIApplication.sharedApplication().statusBarOrientation.toImageOrientation()

    camera = try Camera(sessionPreset: AVCaptureSessionPreset640x480, cameraDevice: nil, location: .BackFacing, captureAsYUV: false)

    filter = MonochromeFilter()

    // Setup callback for picture data
    pictureOutput = PictureOutput()
    pictureOutput.encodedImageFormat = .JPEG
    pictureOutput.onlyCaptureNextFrame = false
    pictureOutput.encodedImageAvailableCallback = { imageData in
        let imageURL = self.folderURL.URLByAppendingPathComponent(String(format:"%19.0f.jpg", CFAbsoluteTimeGetCurrent() * 1e9))
        let imageSaver = ImageSaver(imageData: imageData, url: imageURL)
        self.imageQueue.addOperation(imageSaver)
    }

    camera --> filter --> renderView
    filter.addTarget(pictureOutput)

    camera.startCapture()

Video is rotated to portrait in Landscape app

Just trying to convert an existing project I was writing in Swift to this new Swift version, but I cannot get the video to orient correctly in a landscape app. It looks like the PhysicalCameraLocation enum handles the orientation. If I change it to Portrait (which should be wrong), the video is upside down (which, perversely, is what I want), but it would be nice to have more control.

What am I missing?

I've also had to make the AVCaptureDevice of the Camera public, as I need to change focus and exposure.

UI Element

Hi Brad,

Thanks for making a Swift port; I'm learning so much studying it. Any plans to do a UI element demo similar to the one in the Obj-C version?

Why is cameraProcessingQueue a global concurrent queue? -- Camera.swift line 131: videoOutput.setSampleBufferDelegate(self, queue:cameraProcessingQueue)

Hi Larson, I found this code at line 131 of Camera.swift: videoOutput.setSampleBufferDelegate(self, queue:cameraProcessingQueue). Here cameraProcessingQueue is a global concurrent queue; however, the AVFoundation documentation for that function states that "A serial dispatch queue must be used to guarantee that video frames will be delivered in order."

This also happens in GPUImageVideoCamera initWithSessionPreset: cameraPosition: implementation in GPUImage framework.

This confuses me, and I'd like to know why.

Thanks.
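
The serial-queue alternative that the AVFoundation documentation describes would look something like this sketch, in the Swift 2 GCD syntax used elsewhere in this thread (the queue label is illustrative; videoOutput is the AVCaptureVideoDataOutput from Camera.swift):

```swift
// Sketch: deliver sample buffers on a serial queue so frames arrive in order.
let serialCameraQueue = dispatch_queue_create(
    "com.example.cameraProcessingQueue",  // hypothetical label
    DISPATCH_QUEUE_SERIAL)
videoOutput.setSampleBufferDelegate(self, queue: serialCameraQueue)
```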

ChromaKeying works in RenderView but not in UIImageView

Hi,

I am playing around with the chroma key filters. I have an image with a green-screen background and I am trying to isolate the minion. This works well when displayed in a RenderView, but doesn't work at all when I display the filtered image in a UIImageView or when I write the PNG to disk. Am I using it right?

let output = PictureOutput() // instance variable

private func process() {
    output.imageAvailableCallback = { image in
        NSOperationQueue.mainQueue().addOperationWithBlock {
            self.imageView.image = image
        }
    }
    picture --> filter
    filter --> renderView
    filter --> output

    picture.processImage()
}

The top view is the RenderView; the bottom one is the UIImageView, which shows the original image even though it should show the processed output.

(screenshot: 2016-07-11 11:50:52)

Failed to compile: argument passed to call that takes no arguments

Pipeline.swift:97:29: Argument passed to call that takes no arguments

public func generate() -> AnyGenerator<(ImageConsumer, UInt)> {
    var index = 0

    return AnyGenerator { () -> (ImageConsumer, UInt)? in
        if (index >= self.targets.count) {
            return nil
        }

        while (self.targets[index].value == nil) {
            self.targets.removeAtIndex(index)
            if (index >= self.targets.count) {
                return nil
            }
        }

        index += 1
        return (self.targets[index - 1].value!, self.targets[index - 1].indexAtTarget)
    }
}

Xcode 8 Compatibility

GPUImage is not compatible with Xcode 8 using Swift 3.0.

Critical incompatibilities are still raised after setting the new Swift legacy mode parameter in Xcode. If you try to use the automatic problem fixer, the project will run but crashes on start.

Crasher

runtime error at Framebuffer.swift:241

fatal error: Double value cannot be converted to Int32 because the result would be greater than Int32.max
2016-06-29 10:52:24.200599 GPUImageTest[2010:280192] fatal error: Double value cannot be converted to Int32 because the result would be greater than Int32.max

encountered when running on an iPhone 6 running iOS 10

I've been working on this program for several days without issue; this has occurred only once. Thought I'd report it now in case it really is that rare.
(screenshot: 2016-06-29 10:56:49 AM)

Cannot use framework

  • Xcode 7.3
  • followed absolutely each step to add this framework in a swift project
  • I can import the GPUImage module, but I can't use any of the classes provided by the framework

My bad -- I was following a tutorial for GPUImage 1 (the framework written in Objective-C).

SimpleVideoRecorder not recording

Hi,

I got your SimpleVideoRecorder example working (checkout date 2016-07-05). Two things:

  1. I think it should read
     dispatch_async(dispatch_get_main_queue()) {
         (sender as! UIButton).titleLabel?.text = "Stop"
     }
     The dispatch is missing. On my iPhone 5, the result was that the label did not change after recording started.
  2. I was not able to record anything. Judging by the frame times, the AssetWriter does appear to write, given the increase in workload. However, after stopping the recording, no file seems to be written; at least I could not find anything in the Photos app. BTW: the finishRecording() callback does get executed.

Any ideas ?

Thanks for your extremely good work

Chris

GrayScale Filter

This is so useful for me.
Here is my question: I want to create a grayscale filter, but I do not know how to create this operation. Could you please make it? Thanks a lot!
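
One hedged possibility, using the SaturationAdjustment operation that appears in other issues in this thread: setting its saturation to 0 removes all color, which approximates a grayscale filter (picture and renderView are assumed to be set up as in those examples; GPUImage2 may also ship a dedicated luminance operation, but that is an assumption):

```swift
// Sketch: approximate grayscale by fully desaturating the image.
let grayscale = SaturationAdjustment()
grayscale.saturation = 0.0  // 0 = no color, 1 = original saturation

picture --> grayscale --> renderView
picture.processImage()
```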

GPUImage 1 vs GPUImage 2? Performance differences?

Hey Brad, super excited about this new release. Great work... You inspire.

I was wondering if you had run any metrics comparing the new version to the old. Are there significant performance increases?

HistogramDisplay proportions

HistogramDisplay's behaviour has changed from 1.x: there is now a bug that forces it to render with portrait-mode proportions.

ImageConsumers not working unless class variable

Hello again!

I was having some trouble getting a MonochromeFilter and a PictureOutput to receive framebuffers until I made them class variables and assigned them later. Not sure if this is intended, but it didn't seem normal to me. My code is very simple, and I was able to confirm that when my consumers were not class members, their updateTargetsWithFramebuffer(_:) method was never even called.

Here's a quick sample:

class ViewController: UIViewController {
    @IBOutlet weak var cameraView: RenderView!

    override func viewDidLoad() {
        super.viewDidLoad()

        do {
            let camera = try Camera(sessionPreset: AVCaptureSessionPreset640x480, cameraDevice: nil, location: .BackFacing, captureAsYUV: true)

            let pictureOutput = PictureOutput()
            pictureOutput.encodedImageFormat = .PNG
            pictureOutput.imageAvailableCallback = { image in
                print("Got an image!")
            }

            let monochromeFilter = MonochromeFilter()

            // Setup pipelines
            camera --> monochromeFilter
            camera --> cameraView
            monochromeFilter --> pictureOutput

            monochromeFilter.targets.forEach({ (consumer) in
                print(consumer.0)
            })

            camera.startCapture()
        } catch {
            let errorAlertController = UIAlertController(title: NSLocalizedString("Error", comment: "Error"), message: "Couldn't initialize camera", preferredStyle: .Alert)
            errorAlertController.addAction(UIAlertAction(title: NSLocalizedString("OK", comment: "OK"), style: .Default, handler: nil))
            self.presentViewController(errorAlertController, animated: true, completion: nil)
            print("Couldn't initialize camera: \(error)")
        }
    }
}

I altered the updateTargetsWithFramebuffer(_:) method in Pipeline.swift like so, and saw that only the Camera's version was called and that its only target was the RenderView:

public func updateTargetsWithFramebuffer(framebuffer:Framebuffer) {
    if targets.count == 0 { // Deal with the case where no targets are attached by immediately returning framebuffer to cache
        framebuffer.lock()
        framebuffer.unlock()
    } else {
        // Lock first for each output, to guarantee proper ordering on multi-output operations
        for _ in targets {
            framebuffer.lock()
        }
    }
    for (target, index) in targets {
        print(self)
        print(target)
        target.newFramebufferAvailable(framebuffer, fromSourceIndex:index)
    }
}
