Combining CoreML and computer vision
When you're developing an app that works with photos or a live camera feed, there are several tasks you might want to perform using computer vision. For instance, you might want to detect faces in an image, or identify rectangular areas in photographs, such as traffic signs. You could also be after something more sophisticated, like detecting the dominant object in a picture.
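To give you a feel for what a task like face detection looks like in practice, here's a minimal sketch using Vision's built-in face detector. No machine learning model of your own is required for this; the framework ships with the detector. The function name `detectFaces(in:)` is just an illustration, not an API from this book's sample app.

```swift
import Vision
import UIKit

// A minimal sketch: find face bounding boxes in a UIImage.
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    // Vision calls the completion handler once detection finishes.
    let request = VNDetectFaceRectanglesRequest { request, error in
        let faces = request.results as? [VNFaceObservation] ?? []
        for face in faces {
            // boundingBox is in normalized coordinates (0...1),
            // with the origin in the bottom-left corner.
            print("Found a face at \(face.boundingBox)")
        }
    }

    // The handler performs one or more requests on a single image.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

Note that Vision reports observations in a normalized coordinate space, so you'll need to convert the bounding boxes back to your view's coordinates before drawing them.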
To work with computer vision in your apps, Apple has created the Vision framework. You can combine Vision and CoreML to perform some pretty sophisticated image recognition. Before you implement a sample app that uses dominant object recognition, let's take a quick look at the Vision framework, so you have an idea of what it's capable of and when you might like to use it.
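The glue between the two frameworks is `VNCoreMLRequest`, which wraps a Core ML model so Vision can feed it images. The sketch below assumes a hypothetical image classification model named `MyClassifier` has been added to the project; substitute whatever .mlmodel you're actually using.

```swift
import Vision
import CoreML
import UIKit

// A minimal sketch: classify the dominant object in a UIImage with a
// bundled Core ML model. "MyClassifier" is a placeholder model name.
func classifyDominantObject(in image: UIImage) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: MyClassifier().model) else {
        return
    }

    // VNCoreMLRequest hands the image to the model and returns
    // the model's predictions as Vision observations.
    let request = VNCoreMLRequest(model: model) { request, error in
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else {
            return
        }
        print("Saw: \(best.identifier), confidence: \(best.confidence)")
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

A nice property of this setup is that Vision takes care of scaling and cropping the input image to whatever size the model expects, which you'd otherwise have to do by hand with Core ML alone.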
Understanding the Vision framework
The Vision framework is capable of many different tasks that revolve around computer vision. It is built upon several powerful deep learning...