Mobile Machine Learning on the iOS Platform, Part 1


When my company’s CEO asked me to write a blog, I was extremely excited. I’ve had many cool projects that I wanted to do, but I put them on the back burner due to a lack of time. For the last couple of years, I have been learning AI in the areas of computer vision and autonomous robotics. I had thought about building a car robot with the NVIDIA Jetson Nano board, but I had also always wanted to learn mobile programming, ever since I bought an Apple iPhone 5 in 2012. I’ve always believed it would be a great learning experience to apply machine learning/deep learning algorithms on the iOS platform. During Christmas 2018, I purchased an iPhone XS Max, which comes with impressive AI hardware. In addition to the AI hardware acceleration, its two built-in cameras and ten sensors provide almost all possible real-world input for mobile apps. Let’s do a quick deep dive into the iPhone XS series’ hardware specifications.
The new 7 nm A12 Bionic chip, which was the fastest mobile chip in 2018, has allowed the XS Max to keep the performance crown. It has six CPU cores: two high-performance cores that are 15% faster and four power-efficient cores that use 50% less power than the A11’s. On top of that, its four GPU cores are 50% faster than the A11’s. Below is a comparison of the XS Max’s performance against other leading competitors.

Source: https://www.tomsguide.com/us/iphone-xs-iphone-xs-max-benchmarks,review-5745.html

  • A dedicated eight-core neural engine (accelerator) that can perform 5 trillion 8-bit operations per second; the A11’s neural engine performs 600 billion 8-bit operations per second. This is the first Apple AI accelerator chip that is accessible to third-party developers, and Apple is hoping third parties will build the next generation of AI applications.
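To put those throughput figures in perspective, a quick back-of-the-envelope calculation (using only the numbers quoted above) shows how big the generational jump is:

```python
# Rough comparison of the A12 vs. A11 neural engines, using the
# 8-bit ops/sec figures quoted above.
a12_ops_per_sec = 5e12    # A12 Bionic: 5 trillion 8-bit ops/sec
a11_ops_per_sec = 600e9   # A11: 600 billion 8-bit ops/sec

speedup = a12_ops_per_sec / a11_ops_per_sec
print(f"A12 neural engine is roughly {speedup:.1f}x faster")  # roughly 8.3x
```

That is roughly an 8x generational improvement in raw neural-engine throughput in a single year.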

Without an AI/ML library that fully utilizes the new neural processor in the A12 chip, the hardware is meaningless. Fortunately, Apple has provided AI SDKs to help developers take advantage of the neural processor in their iOS apps. The two SDKs are Create ML 3.0 and Core ML 3.0. What is the difference between the two? Core ML is used to integrate a machine learning model into an app, whereas Create ML is used to build and train a model from scratch. I will heavily utilize Core ML to integrate models trained with TensorFlow or Keras. Core ML comes with many popular pre-trained models out of the box for both images and text. I will evaluate Create ML in comparison with TensorFlow and Keras in a future blog in this series. Note that the Core ML model format is not compatible with TensorFlow models; however, there is an open-source tool to convert TensorFlow models to Core ML models.
See https://github.com/tf-coreml/tf-coreml.
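As a sketch of what that conversion looks like, the tf-coreml tool takes a frozen TensorFlow graph and emits a `.mlmodel` file. The file paths, tensor names, and input shape below are placeholders I chose for illustration, not values from this post:

```python
import os

# Hypothetical arguments for converting a frozen TensorFlow graph to
# Core ML with the open-source tf-coreml converter. All names/paths
# here are placeholders.
conversion_args = {
    "tf_model_path": "mobilenet_frozen.pb",       # frozen TF graph to convert
    "mlmodel_path": "MobileNet.mlmodel",          # Core ML output file
    "output_feature_names": ["Softmax:0"],        # output tensor name(s)
    "input_name_shape_dict": {"input:0": [1, 224, 224, 3]},  # NHWC input
}

if os.path.exists(conversion_args["tf_model_path"]):
    import tfcoreml  # pip install tfcoreml
    tfcoreml.convert(**conversion_args)
else:
    # No frozen graph on disk; just show the intended conversion call.
    print("Frozen graph not found; intended conversion arguments:")
    for key, value in conversion_args.items():
        print(f"  {key} = {value}")
```

The resulting `.mlmodel` file can then be dragged into an Xcode project, where Xcode generates a typed Swift interface for it.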

As you can see, building a smart AI iOS application involves complex integration of hardware and software stacks. In the next blog, I will discuss the machine learning pipelines and build automation involved in creating new iOS apps.



About the Author

Kiet Ly is the Chief Data Architect at E-INFOSOL, where he is currently helping federal customers migrate big data analytics to AWS. Kiet has extensive experience architecting and building AWS data analytics applications in financial services and defense.

Interesting Fact About Kiet:
As a child he was stranded on a remote island for several days!