Have you ever wondered how companies like Facebook automatically tag millions of user images?

Or did you find yourself in a situation where you want to automate tagging people in images… perhaps with tens of thousands of images?

Or maybe you just want to build a simple TV show celebrity classifier for your fan site?

Well, there’s now a quick and efficient way of doing all of this, and it scales seamlessly!

## Demystified: how does facial recognition work?

Simple explanation: the face recognizer uses deep learning to detect faces in images, and then uses facial features like the eyebrows, lips, and nose to tell people apart. It can usually distinguish between people once you feed the model at least 10 images per person.

In-depth explanation: the algorithm first detects all of the faces in a given image. Each face is represented by facial landmarks: mathematical feature points that correspond to facial features like the eyes, eyebrows, nose, and lips.
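To make that concrete, here’s a toy sketch of how feature vectors derived from landmarks can be compared. Everything here is illustrative: real systems use learned embeddings of roughly 128 dimensions and carefully tuned thresholds, not these made-up 4-dimensional vectors.

```python
import math

def euclidean_distance(a, b):
    """Distance between two face feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(a, b, threshold=0.6):
    """Two faces are considered the same person when their
    feature vectors are closer than a chosen threshold."""
    return euclidean_distance(a, b) < threshold

# Toy "feature vectors" (real embeddings are ~128-dimensional)
face_a = [0.10, 0.90, 0.30, 0.50]
face_b = [0.12, 0.88, 0.31, 0.49]  # nearly identical features
face_c = [0.90, 0.10, 0.80, 0.20]  # very different features

print(same_person(face_a, face_b))  # True
print(same_person(face_a, face_c))  # False
```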

We then use a technique called perceptual hashing to remove duplicate facial features. Essentially, if you get the exact same features for a person, there’s no need to add them to the training set again. This prevents the model from overfitting to specific features during training.
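A simplified sketch of this deduplication idea, where the `feature_hash` helper is a hypothetical stand-in (a true perceptual hash is far more robust to small variations in the input):

```python
def feature_hash(features, precision=2):
    """Hash a feature vector after rounding, so near-identical
    features map to the same key. A stand-in for perceptual hashing."""
    return hash(tuple(round(f, precision) for f in features))

def dedupe_features(samples):
    """Drop training samples whose (label, features) pair was already seen."""
    seen = set()
    unique = []
    for features, label in samples:
        key = (label, feature_hash(features))
        if key not in seen:
            seen.add(key)
            unique.append((features, label))
    return unique

samples = [
    ([0.10, 0.90, 0.30], "jim_caviezel"),
    ([0.10, 0.90, 0.30], "jim_caviezel"),  # exact duplicate, dropped
    ([0.55, 0.20, 0.75], "amy_acker"),
]
print(len(dedupe_features(samples)))  # 2
```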

Later, we use these facial features, together with their labels (people’s names), to train a new model. Since training is fast (usually under a few minutes), we retrain after every action (such as adding or removing images).

And voilà! We have a face recognizer for the people we’ve trained on. Pass it a new image, and the classifier will tell you who’s in it, along with bounding-box coordinates and a confidence value.

## Example use-case

For example, if you have a ton of user photos that aren’t tagged, you can potentially create a classifier from scratch. The important point to emphasize is that you can do this unsupervised: you don’t have to label the data yourself, which saves valuable time.

You would start by using the dlib/FaceDetection algorithm to filter out images that contain anything other than a single face. There’s a good chance that the majority of those single-face images show the account owner, so after filtering, you would automatically label each face with the owner of the image. Since we need at least 10 images per person, on a sizable social-media site you should be able to create a custom classifier for most of your users.
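As a rough sketch of that filtering-and-labeling step (the `face_count` field stands in for the output of a face detector, and the names and the 10-image threshold are illustrative):

```python
def build_training_sets(photos, min_images=10):
    """photos: list of dicts with 'owner', 'url', and 'face_count' keys,
    where 'face_count' would come from a face detector.
    Keeps only single-face photos, labels each with its owner, and
    returns the owners that have enough images to train on."""
    by_owner = {}
    for photo in photos:
        if photo["face_count"] == 1:
            by_owner.setdefault(photo["owner"], []).append(photo["url"])
    return {owner: urls for owner, urls in by_owner.items()
            if len(urls) >= min_images}

# Hypothetical photo metadata: alice has enough single-face photos, bob doesn't
photos = [{"owner": "alice", "url": "alice_%02d.jpg" % i, "face_count": 1}
          for i in range(10)]
photos += [{"owner": "bob", "url": "bob_%02d.jpg" % i, "face_count": 1}
           for i in range(3)]
photos.append({"owner": "alice", "url": "alice_group.jpg", "face_count": 4})

print(sorted(build_training_sets(photos)))  # ['alice']
```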

You can later use this classifier to suggest tags to your users automatically, or to infer real-life relationships between users by determining who appears in whose photos.

## Code sample

1. First, create a free account on Algorithmia.
2. Go to your profile page and navigate to the Credentials tab, where you will find your API key. Copy it to a safe location, so we can use it later.
3. Next, let’s create a test dataset. I’ve picked 10 images (+ 1 for testing) per cast member from the show “Person of Interest”, but you can pick something else if you want.
4. Last but not least, install the Python Algorithmia client with `pip install algorithmia`.
5. Now, by running the Python snippets below, we’ll be able to build our own face recognizer from scratch!
```python
import Algorithmia

api_key = "YOUR_API_KEY"
client = Algorithmia.client(api_key)
```

We’ll now be training a Person of Interest Celebrity Classifier as an example. We’ve collected 10 images per celebrity, and are going to add training images of Michael Emerson, Amy Acker, Jim Caviezel, Sarah Shahi and Kevin Chapman.

Here we’re training a new classifier by adding all of the celebrity images we’ve gathered.

```python
# Step 1: Add images and train
base = "data://cv/face_recognition_example_data"
people = ["jim_caviezel", "michael_emerson", "amy_acker",
          "sarah_shahi", "kevin_chapman"]

input1 = {
    "action": "add_images",
    "data_collection": "MyFaceClassifiers",
    "name_space": "celebrities",
    # Images 01-10 for each person, e.g. .../jim_caviezel_01.jpg
    "images": [
        {"url": "{}/{}_{:02d}.jpg".format(base, person, i), "person": person}
        for person in people
        for i in range(1, 11)
    ]
}
print(client.algo("cv/FaceRecognition/0.2.0").pipe(input1).result)
```

Output:

```json
{
  "result": "success"
}
```

We now have our classifier! We can now test it on a few images.

```python
# Step 2: Predict on new images
input2 = {
    "name_space": "celebrities",
    "data_collection": "MyFaceClassifiers",
    "action": "predict",
    "images": [
        {"url": "data://cv/face_recognition_example_data/jim_caviezel_11.jpg"},
        {"url": "data://cv/face_recognition_example_data/michael_emerson_11.jpg"},
        {"url": "data://cv/face_recognition_example_data/amy_acker_11.jpg"},
        {"url": "data://cv/face_recognition_example_data/sarah_shahi_11.jpg"},
        {"url": "data://cv/face_recognition_example_data/kevin_chapman_11.jpg"}
    ]
}
print(client.algo("cv/FaceRecognition/0.2.0").pipe(input2).result)
```

Output:

```json
{
  "images": [
    {
      "url": "data://cv/face_recognition_example_data/jim_caviezel_11.jpg",
      "predictions": [
        {
          "person": "jim_caviezel",
          "confidence": 0.8432763475342038,
          "bb": {"top": 113, "left": 254, "right": 328, "bottom": 188}
        }
      ]
    },
    {
      "url": "data://cv/face_recognition_example_data/michael_emerson_11.jpg",
      "predictions": [
        {
          "person": "michael_emerson",
          "confidence": 0.5874254950737818,
          "bb": {"top": 81, "left": 46, "right": 201, "bottom": 236}
        }
      ]
    },
    {
      "url": "data://cv/face_recognition_example_data/amy_acker_11.jpg",
      "predictions": [
        {
          "person": "amy_acker",
          "confidence": 0.6739906776634973,
          "bb": {"top": 113, "left": 303, "right": 378, "bottom": 188}
        }
      ]
    },
    {
      "url": "data://cv/face_recognition_example_data/sarah_shahi_11.jpg",
      "predictions": [
        {
          "person": "sarah_shahi",
          "confidence": 0.8796627085594975,
          "bb": {"top": 184, "left": 270, "right": 425, "bottom": 339}
        }
      ]
    },
    {
      "url": "data://cv/face_recognition_example_data/kevin_chapman_11.jpg",
      "predictions": [
        {
          "person": "kevin_chapman",
          "confidence": 0.843659292056235,
          "bb": {"top": 192, "left": 241, "right": 464, "bottom": 415}
        }
      ]
    }
  ],
  "result": "success"
}
```
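If you want to consume this response programmatically, a small helper can reduce it to a mapping from image URL to best guess. This is a sketch assuming the response shape shown above; the sample response below is abbreviated.

```python
def top_predictions(response):
    """Map each image URL to its highest-confidence prediction, if any."""
    results = {}
    for image in response["images"]:
        preds = image.get("predictions", [])
        if preds:
            best = max(preds, key=lambda p: p["confidence"])
            results[image["url"]] = (best["person"], best["confidence"])
    return results

# Abbreviated example response in the shape returned above
response = {
    "images": [
        {"url": "jim_caviezel_11.jpg",
         "predictions": [{"person": "jim_caviezel", "confidence": 0.84,
                          "bb": {"top": 113, "left": 254,
                                 "right": 328, "bottom": 188}}]},
        {"url": "unknown.jpg", "predictions": []},
    ],
    "result": "success",
}
print(top_predictions(response))
# {'jim_caviezel_11.jpg': ('jim_caviezel', 0.84)}
```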

That’s cool, but how about we see the classification on the image itself?

You can do that by adding an “output” attribute to the image object you’re trying to predict.

```python
# Step 3: Predict with a visualized output image
input3 = {
    "name_space": "celebrities",
    "data_collection": "MyFaceClassifiers",
    "action": "predict",
    "images": [
        {
            "url": "data://cv/face_recognition_example_data/poi_cast.jpg",
            "output": "data://.algo/cv/FaceRecognition/temp/poi_cast_visualized.png"
        }
    ]
}
print(client.algo("cv/FaceRecognition/0.2.0").pipe(input3).result)
```

You can now find your output image on your Data API page, saved wherever you specified the output to go. In our case, it’s under the cv/FaceRecognition temporary collection.

## Conclusion

We’ve now seen how easy it is to get your own face recognizer up and running in less than an hour.

What will you be using Face Recognition for? Give us a shout @algorithmia to let us know!