

Face Detection with Vision Framework

iOS 11+, Swift 4+

Previously, in iOS 10, to detect faces in a picture you could use CIDetector (Apple) or Mobile Vision (Google).

In iOS 11, Apple introduced Core ML. With the Vision framework, it's much easier to detect faces in real time 😃

Try out real-time face detection on your iPhone! 📱

You can find the differences between CIDetector and the Vision framework below.

Moving from Viola-Jones to Deep Learning


Details

Specify the VNRequest for face detection: either VNDetectFaceRectanglesRequest or VNDetectFaceLandmarksRequest.

private var requests = [VNRequest]() // you can perform multiple requests at the same time

var faceDetectionRequest: VNRequest!
@IBAction func UpdateDetectionType(_ sender: UISegmentedControl) {
    // Use the segmented control to switch between the two VNRequest types.
    faceDetectionRequest = sender.selectedSegmentIndex == 0
        ? VNDetectFaceRectanglesRequest(completionHandler: handleFaces)
        : VNDetectFaceLandmarksRequest(completionHandler: handleFaceLandmarks)
    // Rebuild `requests` afterwards so the newly selected request is actually performed.
}
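
The README doesn't show where the requests array gets populated. A minimal sketch, assuming a setupVision() helper called once during setup (the name and call site are assumptions, not part of the original code):

func setupVision() {
    // Start with the rectangles request and register it so that
    // captureOutput(_:didOutput:from:) below has something to perform.
    faceDetectionRequest = VNDetectFaceRectanglesRequest(completionHandler: handleFaces)
    requests = [faceDetectionRequest]
}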

Perform the requests on every frame. The image comes from the camera via captureOutput(_:didOutput:from:); see AVCaptureVideoDataOutputSampleBufferDelegate.

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
        let exifOrientation = CGImagePropertyOrientation(rawValue: exifOrientationFromDeviceOrientation()) else { return }

    var requestOptions: [VNImageOption: Any] = [:]

    // Pass along the camera intrinsics if the capture connection provides them.
    if let cameraIntrinsicData = CMGetAttachment(sampleBuffer, kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, nil) {
        requestOptions = [.cameraIntrinsics: cameraIntrinsicData]
    }

    // Perform the image requests for face detection.
    let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: exifOrientation, options: requestOptions)

    do {
        try imageRequestHandler.perform(self.requests)
    } catch {
        print(error)
    }
}
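
The exifOrientationFromDeviceOrientation() helper isn't shown above. A plausible sketch (an assumption; the exact mapping depends on which camera is in use and how the capture connection is configured) that returns a raw value accepted by CGImagePropertyOrientation(rawValue:):

func exifOrientationFromDeviceOrientation() -> UInt32 {
    // Assumed mapping from the device orientation to an EXIF orientation,
    // so Vision interprets the camera frame the right way up.
    let exifOrientation: CGImagePropertyOrientation
    switch UIDevice.current.orientation {
    case .portraitUpsideDown:   // home button on top
        exifOrientation = .left
    case .landscapeLeft:        // home button on the right
        exifOrientation = .upMirrored
    case .landscapeRight:       // home button on the left
        exifOrientation = .down
    default:                    // portrait, face up/down, unknown
        exifOrientation = .up
    }
    return exifOrientation.rawValue
}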

Handle the results of your request with a VNRequestCompletionHandler:

  • handleFaces for VNDetectFaceRectanglesRequest
  • handleFaceLandmarks for VNDetectFaceLandmarksRequest

You then get the results of the request as an array of VNFaceObservation objects; that's all the Vision API gives you back.

func handleFaces(request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        //perform all the UI updates on the main queue
        guard let results = request.results as? [VNFaceObservation] else { return }
        print("face count = \(results.count) ")
        self.previewView.removeMask()

        for face in results {
            self.previewView.drawFaceboundingBox(face: face)
        }
    }
}
    
func handleFaceLandmarks(request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        //perform all the UI updates on the main queue
        guard let results = request.results as? [VNFaceObservation] else { return }
        self.previewView.removeMask()
        for face in results {
            self.previewView.drawFaceWithLandmarks(face: face)
        }
    }
}

Lastly, draw the corresponding locations on the screen! (Hint: use UIBezierPath to draw lines for the landmarks; a sketch follows the bounding-box code below.)

func drawFaceboundingBox(face : VNFaceObservation) {
    // The coordinates are normalized to the dimensions of the processed image, with the origin at the image's lower-left corner.

    let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -frame.height)

    let scale = CGAffineTransform.identity.scaledBy(x: frame.width, y: frame.height)

    let facebounds = face.boundingBox.applying(scale).applying(transform)

    _ = createLayer(in: facebounds)

}

// Create a new layer drawing the bounding box
private func createLayer(in rect: CGRect) -> CAShapeLayer {

    let mask = CAShapeLayer()
    mask.frame = rect
    mask.cornerRadius = 10
    mask.opacity = 0.75
    mask.borderColor = UIColor.yellow.cgColor
    mask.borderWidth = 2.0

    maskLayer.append(mask)
    layer.insertSublayer(mask, at: 1)

    return mask
}
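
drawFaceWithLandmarks(face:), called from handleFaceLandmarks above, isn't shown in the README. A rough sketch of what it could look like, reusing the same coordinate transform and tracing a few landmark regions with UIBezierPath (this is an illustration, not the repository's exact implementation):

func drawFaceWithLandmarks(face: VNFaceObservation) {
    // Same transform as drawFaceboundingBox: flip the y axis and scale the
    // normalized bounding box up to the view's dimensions.
    let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -frame.height)
    let scale = CGAffineTransform.identity.scaledBy(x: frame.width, y: frame.height)
    let faceBounds = face.boundingBox.applying(scale).applying(transform)

    // Draw the bounding box first.
    _ = createLayer(in: faceBounds)

    guard let landmarks = face.landmarks else { return }
    // A few example regions; VNFaceLandmarks2D exposes more (eyebrows, pupils, ...).
    let regions: [VNFaceLandmarkRegion2D?] = [landmarks.leftEye, landmarks.rightEye,
                                              landmarks.nose, landmarks.outerLips]

    for case let region? in regions {
        // Landmark points are normalized to the face bounding box,
        // again with a lower-left origin, so flip y when mapping to the view.
        let path = UIBezierPath()
        for (index, point) in region.normalizedPoints.enumerated() {
            let viewPoint = CGPoint(x: faceBounds.origin.x + point.x * faceBounds.width,
                                    y: faceBounds.origin.y + (1 - point.y) * faceBounds.height)
            if index == 0 {
                path.move(to: viewPoint)
            } else {
                path.addLine(to: viewPoint)
            }
        }
        path.close()

        let shape = CAShapeLayer()
        shape.path = path.cgPath
        shape.strokeColor = UIColor.yellow.cgColor
        shape.fillColor = UIColor.clear.cgColor
        shape.lineWidth = 2.0

        maskLayer.append(shape)
        layer.insertSublayer(shape, at: 1)
    }
}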



applefacedetection's Issues

Face landmark in smaller AVCaptureVideoPreviewLayer

Hi, thank you for a great example!

I'm trying to do this in a smaller view but can't figure out how to calculate the new face landmark frames.

Say I have a smaller AVCaptureVideoPreviewLayer, for example in a CGRect(x: 0, y: 44, width: 320, height: 300). I get face landmarks like this.

Do you have any idea how I can accomplish this?

(screenshot attached to the issue)
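
One possible approach (an assumption, not code from this repository): keep a reference to the AVCaptureVideoPreviewLayer and let it do the conversion via layerRectConverted(fromMetadataOutputRect:), after flipping the Vision bounding box from its lower-left origin into the top-left-origin metadata coordinate space. This assumes the orientation passed to the VNImageRequestHandler matches the preview layer's video orientation.

func layerRect(for face: VNFaceObservation, in previewLayer: AVCaptureVideoPreviewLayer) -> CGRect {
    let box = face.boundingBox
    // Vision's boundingBox is normalized with a lower-left origin;
    // metadata-output coordinates use a top-left origin, so flip the y axis.
    let metadataRect = CGRect(x: box.origin.x,
                              y: 1 - box.origin.y - box.height,
                              width: box.width,
                              height: box.height)
    // The preview layer accounts for its own frame and videoGravity.
    return previewLayer.layerRectConverted(fromMetadataOutputRect: metadataRect)
}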

face landmark tracking over video

Hi, I'm playing with the Vision framework and can use the face landmark feature to get the positions of facial features in real time. However, I have to run the detector on every frame, which makes the real-time face mask jittery.

Any ideas on how to optimize landmark detection in a real-time feed using only the iOS frameworks?

FYI I tried the object tracker, but it wasn't as impressive as it could be. Maybe you've had better luck?

thanks

Save CGRect of previewView

Hello,

I can't save the detected frame.
Is there a way to save the portion of the photo that was detected?

Thank you

Use CoreML on detected faces

I am trying to use Core ML on the detected faces. My original idea was to capture the image of each detected face and then run it against the models, but I can't figure out how to get a screenshot of each face. Any hints? I tried using the facebounds in the drawFaceboundingBox method and using that rect to crop the image, but they don't line up.
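
One possible direction (a hedged sketch, not code from this repository): crop the captured frame in CIImage space, which shares the lower-left origin of the Vision bounding box, so the normalized rectangle only needs to be scaled up to pixel dimensions. The cropped image can then be resized and fed to a Core ML model.

func cropFace(from pixelBuffer: CVPixelBuffer, observation: VNFaceObservation) -> CIImage {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let width = ciImage.extent.width
    let height = ciImage.extent.height

    // boundingBox is normalized with a lower-left origin; CIImage uses the same
    // origin, so the rectangle only needs to be scaled to pixel dimensions.
    let faceRect = CGRect(x: observation.boundingBox.origin.x * width,
                          y: observation.boundingBox.origin.y * height,
                          width: observation.boundingBox.width * width,
                          height: observation.boundingBox.height * height)
    return ciImage.cropped(to: faceRect)
}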
