Project Oxford: Emotion Recognition on Android

Machine Learning is awesome. Really, it is, especially today. Years ago it would have taken a lot of effort (and I mean A LOT of effort) to add emotion recognition to your applications. Now it is only one REST API call away. Crazy, right?

While preparing for a talk, I built a Valentine's Day themed application on Android. Yes, I'm posting this two weeks late, I'm sorry! But better late than never 🙂

Date Face – get your date face on!

The application I built is a dating application, similar to Tinder but with a twist. Instead of using your finger to swipe left and right, you use your face. No, no, not literally! Please don’t try to use your face to swipe on a touch screen! More accurately, you are using your emotions:

Date Face Application Architecture

The application displays a picture of a person (though I used cats and BB8, because everyone loves cats and BB8), then takes a picture of you. That picture is uploaded to Azure blob storage in order to get a URL for the image; this is because the Emotions API takes an image URL in the POST request. Once we have the image URL, we make an HTTP POST request to the Emotions API using the API's primary key (you can get this from projectoxford.ai).

The API then returns a JSON response that contains a "scores" object with a score for each of the eight emotions it recognises. The application uses those scores to decide whether the user was "happy or not". If happy, the image swipes to the right by itself; if not happy (i.e. the happiness score is less than 0.5), it swipes the image left.
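To make that flow concrete, here is a rough sketch of how the pieces hang together. This isn't the exact app code: onDatePhotoCaptured and decideSwipe are just placeholder names, while storeImageInBlobStorage and getEmotionScore are the helper methods shown later in this post.

// Rough sketch only: onDatePhotoCaptured and decideSwipe are placeholder names,
// storeImageInBlobStorage() and getEmotionScore() are shown further down this post.
protected void onDatePhotoCaptured(String imgPath) {
    // Upload the photo of the user so we have a public URL for it
    storeImageInBlobStorage(imgPath);

    // Ask the Emotions API how happy the user looks ("example.jpeg" is the blob name used below)
    String happinessScore = getEmotionScore("example.jpeg");

    // Swipe the card left or right based on the score (sketched at the end of this post)
    decideSwipe(happinessScore);
}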

Blob Storage

As I previously mentioned, the HTTP POST request for the Emotions API takes a URL for the image, so I had to upload the image somewhere to get a URL for it. This is why blob storage was used – it's very easy to use and it has a cross-platform SDK. For this app, I used Java and had to include the Azure Storage library for Android in my project (find out how to get this here).

Here are the steps to setup Azure blob storage:

  • Log in to the Azure portal at portal.azure.com
  • Click on the ‘New’ button in the top left corner -> ‘Data + Storage’ -> ‘Storage account’ -> ‘Create’

blob 1

  • Fill in the name of your storage account, set the other fields to whatever is relevant to you, then press ‘Create’

blob 2

  • Once it has finished building, go to: ‘Blob’ -> ‘Container’ -> name your container. In my case I named it ‘dateface’, which is my app’s name (a container is simply a folder to store your files, a.k.a. ‘blobs’). Finally, make sure you set the ‘Access type’ to ‘Blob’, then press ‘Create’.

blob 3

  • The connection string for your storage account can be found by clicking on the key icon

blob 4

  • That’s it! Now you have all the details needed to store items in your blob storage:

Storage URL: https://YOUR_STORAGE_NAME.blob.core.windows.net
Container Name
Storage Connection String
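For reference, the values end up looking something like this (placeholder values only, assuming a storage account and container both named ‘dateface’; substitute your own account name, container and key):

Storage URL: https://dateface.blob.core.windows.net
Container name: dateface
Storage connection string: DefaultEndpointsProtocol=https;AccountName=dateface;AccountKey=YOUR_ACCOUNT_KEY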

Here is the code to upload an image to Azure blob storage in Java:

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.CloudBlobClient;
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import com.microsoft.azure.storage.blob.CloudBlockBlob;

import java.io.File;
import java.io.FileInputStream;

private static final String storageURL = "BLOB_STORAGE_URL";
private static final String storageContainer = "NAME_OF_BLOB_STORAGE_CONTAINER";
private static final String storageConnectionString = "BLOB_STORAGE_CONNECTION_STRING";

protected void storeImageInBlobStorage(String imgPath){
    try
    {
        // Retrieve storage account from connection-string.
        CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectionString);

        // Create the blob client.
        CloudBlobClient blobClient = storageAccount.createCloudBlobClient();

        // Retrieve reference to a previously created container.
        CloudBlobContainer container = blobClient.getContainerReference(storageContainer);

        // Create or overwrite the blob (with the name "example.jpeg") with contents from a local file.
        CloudBlockBlob blob = container.getBlockBlobReference("example.jpeg");
        File source = new File(imgPath);
        blob.upload(new FileInputStream(source), source.length());
    }
    catch (Exception e)
    {
        // Output the stack trace.
        e.printStackTrace();
    }
}
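Because the container's access type was set to ‘Blob’ earlier, the uploaded image is now publicly readable at a predictable URL, and that URL is what gets sent to the Emotions API. It is simply the storage URL, container name and blob name joined together (the Emotions API snippet below builds exactly this string):

// e.g. https://YOUR_STORAGE_NAME.blob.core.windows.net/dateface/example.jpeg
String imageUrl = storageURL + "/" + storageContainer + "/" + "example.jpeg";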

Emotions API

Now that I had captured my user's image, saved it to blob storage, and got the image's URL, it was time to use the Emotions API. The Emotions API takes an image as input and, using the Face API, returns the confidence across a set of emotions for each face in the image. The emotions detected are anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise. So I put together an HTTP POST request using the following details:

Request URL:

https://api.projectoxford.ai/emotion/v1.0/recognize

Request Headers:

Content-Type (optional) : application/json

Ocp-Apim-Subscription-Key : your Project Oxford API primary key (found in your profile on the projectoxford.ai website)

Request Body:

{ "url": "http://example.com/picture.jpg" }

If you are using blob storage, the image URL in the request body will be in the format: https://YOUR_STORAGE_NAME.blob.core.windows.net/CONTAINER_NAME/UPLOADED_IMAGE_NAME.jpeg

Code in Java:

import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;

import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;

import java.io.IOException;

private static final String poPrimaryKey = "PROJECT_OXFORD_API_PRIMARY_KEY";

protected String getEmotionScore(String imageName){

    //Set this to the URL of the project oxford API you want to use, in this case, I'm using the Emotions API
    String apiURL = "https://api.projectoxford.ai/emotion/v1.0/recognize";

    HttpClient client = new DefaultHttpClient();
    HttpResponse response;
    String result;
    String happinessScore = "0";

    try {
        //imageName is the name of the file you uploaded to blob storage, e.g. "example.jpeg"
        StringEntity param = new StringEntity("{ \"url\": \"" + storageURL + "/" + storageContainer + "/" + imageName + "\" }");

        HttpPost post = new HttpPost(apiURL);
        post.setHeader("Content-Type", "application/json");
        post.setHeader("Ocp-Apim-Subscription-Key", poPrimaryKey);
        post.setEntity(param);
        response = client.execute(post);

        if(response!=null){
            result = EntityUtils.toString(response.getEntity());
            JSONArray resultJson = new JSONArray(result);
            JSONObject mainJson = resultJson.getJSONObject(0);
            JSONObject scores = mainJson.getJSONObject("scores");

            happinessScore = scores.getString("happiness");
        }

    } catch (IOException e) {
        e.printStackTrace();
    } catch (JSONException e) {
        e.printStackTrace();
    }

    return happinessScore;
}

The code above opens an HTTP client, creates a POST request to the Emotions API, and then pulls the happiness score out of the JSON response.

The response comes back as a JSON array containing a "scores" object for each face detected, for example:

[
    {
        "scores": {
            "anger": 0.00300731952,
            "contempt": 5.14648448E-08,
            "disgust": 9.180124E-06,
            "fear": 0.0001912825,
            "happiness": 0.9875571,
            "neutral": 0.0009861537,
            "sadness": 1.889955E-05,
            "surprise": 0.008229999
        }
    }
]

So in the code, I extract the array from the response entity, take the first result, and read the happiness score from its "scores" object; that score is what I use to decide whether to swipe left or right in the logic behind my application.
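As a rough sketch, that decision could look something like this (decideSwipe, swipeLeft and swipeRight are placeholder names for your own UI code, they are not part of any SDK):

// Hypothetical helper: swipeLeft()/swipeRight() stand in for whatever your UI
// uses to animate the card.
private void decideSwipe(String happinessScore) {
    double happiness = Double.parseDouble(happinessScore);
    if (happiness < 0.5) {
        // Not happy enough, swipe the date's picture to the left
        swipeLeft();
    } else {
        // Happy, swipe right!
        swipeRight();
    }
}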

The End

And that's it: the above is all the code you need to add Machine Learning to your application and make it more intelligent. Pretty simple, right?

If you want to see the full code behind my application, feel free to check it out here:
https://github.com/liliankasem/DateFace-Android

Why not go a step further and use the Recommendations API to improve which “possible date” to show users based on which way the user swiped on the last date?

If you’re stuck or need help implementing any of this, feel free to contact me!