
Watson Visual Recognition and Core ML

Classify images offline with Watson Visual Recognition and Core ML.

A deep neural network model is trained in the cloud by Watson Visual Recognition. The app then downloads the model, which Core ML can use offline to classify images. Every time the app opens, it checks for updates to the model and downloads them when a connection is available.
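
For context, here is a minimal sketch of the offline classification step using Apple's Core ML and Vision frameworks. The function name and model URL below are illustrative; in this app the compiled .mlmodelc file is downloaded and kept up to date by the Watson Swift SDK rather than bundled by hand.

    import CoreML
    import UIKit
    import Vision

    // Sketch: classify an image entirely on-device with a compiled Core ML model.
    // `modelURL` is assumed to point at a .mlmodelc that has already been downloaded.
    func classify(image: UIImage, modelURL: URL) throws {
        guard let cgImage = image.cgImage else { return }

        let mlModel = try MLModel(contentsOf: modelURL)
        let visionModel = try VNCoreMLModel(for: mlModel)

        let request = VNCoreMLRequest(model: visionModel) { request, _ in
            guard let observations = request.results as? [VNClassificationObservation] else { return }
            // Log the top three labels with their confidence scores.
            for observation in observations.prefix(3) {
                print("\(observation.identifier): \(observation.confidence)")
            }
        }

        // Runs locally; no network connection is required at classification time.
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try handler.perform([request])
    }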

App Screenshot

Before you begin

Make sure you have these software versions installed on your machine. These versions are required to support Core ML:

  • macOS 10.11 El Capitan or later
  • iOS 11 or later, on your iPhone or iPad (only needed if you want to run the app on a device)
  • Xcode 9 or later
  • Carthage 0.29 or later

Carthage installation

If you don't have Homebrew installed, it's easiest to set up Carthage with the .pkg installer, which you can download from the Carthage releases page on GitHub.

Getting the files

Use GitHub to clone the repository locally, or download the .zip file of the repository and extract the files.

Setting up Visual Recognition in Watson Studio

  1. Log in to Watson Studio at dataplatform.ibm.com. From there you can create an IBM Cloud account, sign up for Watson Studio, or log in.

Training a custom model

For an in-depth walkthrough of creating a custom model, check out the Core ML & Watson Visual Recognition Code Pattern.

Installing the Watson Swift SDK

The Watson Swift SDK makes it easy to keep track of your custom Core ML models and to download your custom classifiers from IBM Cloud to your device.

Use the Carthage dependency manager to download and build the Watson Swift SDK.

  1. Open a terminal window and navigate to this project's directory.

  2. Run the following command to download and build the Watson Swift SDK:

    carthage update --platform iOS
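
The carthage update command reads the Cartfile in the project directory, which points Carthage at the Watson Swift SDK repository. The entry looks roughly like the line below (the exact version constraint, if any, should match what is already in the project):

    github "watson-developer-cloud/swift-sdk"

Carthage builds the SDK and places the resulting frameworks under Carthage/Build/iOS, which the Xcode project is configured to link against.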

Configure your app

  1. Open the project in Xcode.
  2. Copy the Model ID of the model you trained and paste it into the modelId property in the CameraViewController.swift file.
  3. Copy the "apikey" value from your Visual Recognition service credentials and paste it into the apiKey property in the Credentials.plist file (see the sketch below).
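
As a rough illustration of where those two values end up, here is a sketch. The property names modelId and apiKey come from the steps above; the placeholder ID and the plist-loading code are assumptions for illustration, not copied from the project.

    import Foundation

    // In CameraViewController.swift: paste the Model ID of your trained model here.
    let modelId = "YourClassifierID_0123456789"  // placeholder, not a real classifier ID

    // One common way to read the apiKey value stored in Credentials.plist.
    func loadAPIKey() -> String? {
        guard
            let url = Bundle.main.url(forResource: "Credentials", withExtension: "plist"),
            let data = try? Data(contentsOf: url),
            let plist = try? PropertyListSerialization.propertyList(from: data, options: [], format: nil),
            let credentials = plist as? [String: String]
        else { return nil }
        return credentials["apiKey"]
    }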

Running the app

  1. In Xcode, select the Core ML Vision scheme.
  2. You can run the app in the simulator or on your device.

Note: The Visual Recognition classifier's status must be Ready before you can use it. Check the classifier status in Watson Studio on the Visual Recognition instance overview page.

What to do next

Try using your own data: Train a Visual Recognition classifier with your own images. For details on the Visual Recognition service, see the links in the Resources section.

Resources
