iOS 11

iOS 11 adds many new possibilities for developers, with new frameworks for augmented reality (ARKit), object detection (Vision), and machine learning (CoreML).

To demonstrate this, we created an augmented reality app with face recognition:


How it works:

  1. The app continuously grabs the camera image
  2. Detects faces in the image using Vision
  3. Extracts the faces and runs them through our machine-learning model
  4. Identifies persons from the result
  5. Calculates a three-dimensional coordinate from the location on the screen
  6. Displays the information in AR
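The steps above can be sketched in Swift as follows. This is a minimal sketch, not the project's actual implementation: it assumes an `ARSCNView`-based app and a CoreML classifier compiled into the project as `FaceClassifier` (a hypothetical model name — substitute your own `.mlmodel` class).

```swift
import ARKit
import SceneKit
import Vision

class FaceRecognitionViewController: UIViewController, ARSessionDelegate {
    @IBOutlet var sceneView: ARSCNView!

    // Wrap the CoreML model for use with Vision (used in steps 3 and 4).
    lazy var classificationRequest: VNCoreMLRequest = {
        let model = try! VNCoreMLModel(for: FaceClassifier().model)
        return VNCoreMLRequest(model: model)
    }()

    // Step 1: ARKit delivers camera frames continuously.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let pixelBuffer = frame.capturedImage

        // Step 2: detect faces in the frame with Vision.
        let faceRequest = VNDetectFaceRectanglesRequest { [weak self] request, _ in
            guard let self = self,
                  let faces = request.results as? [VNFaceObservation] else { return }

            for face in faces {
                // Step 3: restrict the CoreML request to the detected
                // face region and classify it.
                self.classificationRequest.regionOfInterest = face.boundingBox
                let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                                    options: [:])
                try? handler.perform([self.classificationRequest])

                // Step 4: the top classification identifies the person.
                guard let best = self.classificationRequest.results?.first
                        as? VNClassificationObservation else { continue }

                DispatchQueue.main.async {
                    // Step 5: project the 2-D face location into 3-D world
                    // space with an ARKit hit test (Vision's y axis is
                    // flipped relative to UIKit's).
                    let point = CGPoint(
                        x: face.boundingBox.midX * self.sceneView.bounds.width,
                        y: (1 - face.boundingBox.midY) * self.sceneView.bounds.height)
                    guard let hit = self.sceneView
                        .hitTest(point, types: .featurePoint).first else { return }

                    // Step 6: display the recognized name as a text node in AR.
                    let text = SCNText(string: best.identifier, extrusionDepth: 0.01)
                    let node = SCNNode(geometry: text)
                    node.scale = SCNVector3(0.002, 0.002, 0.002)
                    let t = hit.worldTransform.columns.3
                    node.position = SCNVector3(t.x, t.y, t.z)
                    self.sceneView.scene.rootNode.addChildNode(node)
                }
            }
        }
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            .perform([faceRequest])
    }
}
```

A real implementation would throttle classification (running it on every frame is expensive) and reuse or update existing labels rather than adding a new node per detection, but the structure mirrors the six steps above.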


We will write a more in-depth blog post about how we created the machine-learning model 😉

Meanwhile, the project is available on GitHub:

https://github.com/NovaTecConsulting/FaceRecognition-in-ARKit


