Taking a look at my last post about CoreML object detection, I decided to update the two-part series with the latest Turi Create (now using Python 3.6). The original parts covered detecting an object in the camera frame (photo or video) and drawing a bounding box around it. This post builds on those two parts by adding detection of multiple objects; with iOS 12, the Vision framework makes it easier to work with detected objects in Swift, and training completes orders of magnitude faster with GPU support!
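As a rough sketch of what the iOS 12 Vision API looks like for detection (the `ObjectDetector` class name here is a placeholder for whatever class Xcode generates from your mlmodel, not code from the post):

```swift
import Vision

// "ObjectDetector" stands in for the class Xcode generates from the .mlmodel file.
guard let visionModel = try? VNCoreMLModel(for: ObjectDetector().model) else {
    fatalError("Failed to load the CoreML model")
}

let request = VNCoreMLRequest(model: visionModel) { request, _ in
    // On iOS 12, object detection models return VNRecognizedObjectObservation,
    // which pairs each bounding box with its classification labels.
    let observations = request.results as? [VNRecognizedObjectObservation] ?? []
    for observation in observations {
        let label = observation.labels.first?.identifier ?? "unknown"
        print("\(label) at \(observation.boundingBox)") // normalized rect, bottom-left origin
    }
}
```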
Object Detection Training with Apple’s Turi Create for CoreML (Part 2)
The previous post was about training a Turi Create model with source imagery for use with the CoreML and Vision frameworks. Now that we have our trained model, let's integrate it into an Xcode project to create a sample iOS object detection app.
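A minimal sketch of the integration step, assuming you already have a configured `VNCoreMLRequest` (the helper name and view-size handling below are illustrative, not the post's actual code):

```swift
import UIKit
import Vision

// Hypothetical helper: run a prepared Vision request against a UIImage and
// convert the normalized bounding boxes into view coordinates for drawing.
func detectObjects(in image: UIImage, using request: VNCoreMLRequest, viewSize: CGSize) {
    guard let cgImage = image.cgImage else { return }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])

    for observation in (request.results as? [VNRecognizedObjectObservation]) ?? [] {
        // Vision's boundingBox is normalized (0–1) with a bottom-left origin,
        // so flip the y-axis and scale to the view's size before drawing.
        let box = observation.boundingBox
        let rect = CGRect(x: box.minX * viewSize.width,
                          y: (1 - box.maxY) * viewSize.height,
                          width: box.width * viewSize.width,
                          height: box.height * viewSize.height)
        print("Draw bounding box at \(rect)")
    }
}
```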
Object Detection Training with Apple’s Turi Create for CoreML (Part 1)
A bit of downtime gave me a chance to explore the CoreML and machine learning videos Apple presented at WWDC 2017. And with lucky timing, Apple released Turi Create just as I was about to start a demo project for fun.
The goal for this post is to take source images, train a model with Turi Create, and output an Xcode-compatible mlmodel file for object detection with the CoreML and Vision frameworks.
Anchoring with NSLayoutAnchor and Auto Layout
The NSLayoutConstraint class has been Apple's recommended approach to layout: constraints describe relationships between views, parent views, and child views, so explicitly setting the frame property is not needed. This also helps with accessibility, supporting multiple device screen sizes, and laying out views in different orientations. I'm a little late to the party with iOS 9's NSLayoutAnchor, which replaces the wordy NSLayoutConstraint initializer with a simple-to-use API.