
Today, Apple announced a new journal (read: blog) to catalog their machine learning findings.

Welcome to the Apple Machine Learning Journal. Here, you can read posts written by Apple engineers about their work using machine learning technologies to help build innovative products for millions of people around the world. If you’re a machine learning researcher or student, an engineer or developer, we’d love to hear your questions and feedback. Write us at [email protected]

In the first entry, they discuss refining synthetic images to make them look more realistic, a cheap alternative to building the large, diverse, and accurately annotated training sets that supervised learning normally demands.

Most successful examples of neural nets today are trained with supervision. However, to achieve high accuracy, the training sets need to be large, diverse, and accurately annotated, which is costly. An alternative to labelling huge amounts of data is to use synthetic images from a simulator. This is cheap as there is no labeling cost, but the synthetic images may not be realistic enough, resulting in poor generalization on real test images. To help close this performance gap, we’ve developed a method for refining synthetic images to make them look more realistic. We show that training models on these refined images leads to significant improvements in accuracy on various machine learning tasks.

They go on to explain the challenges and methods involved in refining synthetic images, illustrated by the figure below.

Figure 1. The task is to learn a model that improves the realism of synthetic images from a simulator using unlabeled real data, while preserving the annotation information.
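For the curious, the approach comes from Apple’s CVPR 2017 paper, “Learning from Simulated and Unsupervised Images through Adversarial Training,” and it’s a GAN-style setup: a refiner network nudges each synthetic image toward realism while a discriminator tries to tell refined images apart from real ones. Here’s a rough sketch of the refiner’s objective, my paraphrase rather than Apple’s exact formulation:

```latex
% Sketch of a SimGAN-style refiner objective (my paraphrase, not Apple's exact math).
% R_theta refines a synthetic image x_i; D_phi estimates the probability
% that an image is synthetic rather than real.
\mathcal{L}_R(\theta) = \sum_i
    -\log\bigl(1 - D_\phi(R_\theta(x_i))\bigr)              % realism: fool the discriminator
    + \lambda \bigl\lVert R_\theta(x_i) - x_i \bigr\rVert_1 % self-regularization: stay close
                                                            % to the original image
```

The λ term is the clever part: it trades realism off against staying close to the original synthetic image, so the annotations that made the synthetic data useful in the first place remain valid.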

The post is fascinating. Augmented reality and machine learning are the next frontier for computing, and a growing focus for Apple, as demonstrated by iOS 11’s ARKit and Core ML frameworks, which let developers easily build these technologies into their apps. In a recent interview with Bloomberg, Tim Cook talked about autonomous systems and Apple’s focus on them, including software for self-driving cars, calling the effort “the mother of all AI projects”.
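To give a sense of how little code that takes, here’s a minimal Swift sketch of on-device image classification with Core ML and Vision. The model class MyClassifier is hypothetical; Xcode generates one like it for any .mlmodel file you add to a project:

```swift
import CoreGraphics
import CoreML
import Vision

// Minimal sketch: classify an image entirely on-device with Core ML + Vision.
// "MyClassifier" is a hypothetical model; Xcode generates a class like this
// for any .mlmodel file added to the project.
func classify(_ image: CGImage) throws {
    // Wrap the Core ML model so the Vision framework can drive it.
    let model = try VNCoreMLModel(for: MyClassifier().model)

    // The request runs the model and hands back classification observations.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier) (\(top.confidence))")
    }

    // Inference happens locally; no image data ever leaves the device.
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```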

Some worry that Apple is limiting themselves in these areas because of their stance on privacy and security. It’s a self-imposed limitation, yes, but that could be why they are being more open about publishing their findings in this space: to attract like-minded people who share the same passion and belief system. Consider that all machine learning features on iOS today run on-device. No identifiable data is sent back to iCloud to be analyzed by a supercomputer; when the Photos app suggests similar faces, for instance, it’s all done by your iPhone or iPad. Mark Gurman even reported back in May that Apple is developing an ‘AI’ chip specifically to handle these tasks, much as the motion co-processor handles all motion data. Makes total sense.

I would much rather have the comfort of knowing my device is doing all the work, even if that comes at a cost in speed to market. Besides, it’s inevitable that our machines will do more for us on their own. Apple may take a little more time to get there, but that’s their M.O.: iPhone wasn’t the first smartphone and Apple Watch wasn’t the first smartwatch, yet both are now the benchmark in their markets. Apple will do this right, unlike other companies that live on getting their hands on your data, and it will set the benchmark for machine learning privacy.

Apple is indeed a secretive company, but under Tim Cook’s direction we are seeing them embrace being more open. One earlier example is the open-sourcing of Swift. It makes me excited to see what comes next as a result of this openness.
