In this tutorial, we will examine how to use Tensorflow.js and Pusher to build a realtime emotion recognition application that accepts a face image from a user, predicts their facial emotion and then updates a dashboard with the detected emotions in realtime. A practical use case for this application would be a company getting realtime feedback from users as it rolls out incremental updates to its application.
With the rapid increase in computing power and the ability of machines to make sense of what is going on around them, users now interact with intelligent systems in a lot of their daily interactions. From Spotify’s awesomely accurate Discover Weekly playlists to Google Photos being able to show you all pictures of “Gaby” in your gallery after identifying her in one picture, companies are now interested in ways they can leverage this “silver bullet” in their service delivery.
The best part of this is that recognizing a user’s emotion happens right on the client side and the user’s image is never sent over to the server. All that is sent to the server is the detected emotion. This means your users never have to worry about you storing their images on your server. Let’s get to the good stuff now!
Tensorflow.js is a JavaScript library that allows developers to train and use machine learning models in the browser. This really changes the game because it means that users no longer need “super” machines to run our models. As long as they have a browser, they can get stuff done. It also allows developers who are more familiar with JavaScript to get into building and using machine learning models without having to learn a new programming language.
To build the interface of our application, we are going to use Vue.js. Vue.js is a web framework used to build interactive interfaces with JavaScript. To get started, install the Vue CLI using the command:
yarn global add @vue/cli
Afterwards, create a new Vue project using the command:
vue create realtime-feedback
Follow the prompt to create the application using the Vue Router preset. This creates a starter Vue.js project which we will then update to fit our application.
Install the other JavaScript libraries you are going to use:
yarn add axios @tensorflow/tfjs @tensorflow-models/knn-classifier @tensorflow-models/mobilenet
To get users’ images and feed them to our model, we are going to make use of a webcam class. Fetch the file from here and add it to your realtime-feedback/src/assets directory. Afterwards, go ahead and get the Pusher logo from here and place it in the same realtime-feedback/src/assets directory.
In the src/components folder, create a component titled Camera. Components allow us to split the user interface of the application into reusable parts. Add the following markup to the new component:
```html
<!-- src/components/Camera.vue -->
<template>
  <div>
    <video autoplay playsinline muted id="webcam" width="250" height="250"></video>
  </div>
</template>

[...]
```
Add the following code below the closing template tag:
```js
// src/components/Camera.vue

[...]
<script>
import { Webcam } from '../assets/webcam'

export default {
  name: "Camera",
  data: function(){
    return {
      webcam: null,
    }
  },
  mounted: function(){
    this.loadWebcam();
  },
  methods: {
    loadWebcam: function(){
      this.webcam = new Webcam(document.getElementById('webcam'));
      this.webcam.setup();
    }
  }
};
</script>
```
When this component is mounted, the webcam is loaded and the user can see a live feed from their camera.
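For context, here is a rough sketch of what such a webcam helper typically looks like. This is an illustration only, not the exact file linked above, which may differ in its details:

```js
// assets/webcam.js (illustrative sketch, not the exact file from the tutorial)
export class Webcam {
  constructor(webcamElement) {
    // the <video> element that the camera stream is piped into
    this.webcamElement = webcamElement;
  }

  setup() {
    // ask the browser for camera access and attach the stream to the video element
    return navigator.mediaDevices
      .getUserMedia({ video: true, audio: false })
      .then(stream => {
        this.webcamElement.srcObject = stream;
        return new Promise(resolve => {
          this.webcamElement.onloadedmetadata = () => resolve();
        });
      });
  }
}
```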
Our application will have two basic views: the Home view, where users can train the model and test it by taking a picture, and the Dashboard view, where the detected emotions are displayed in realtime.
To allow for navigation between these pages, we are going to make use of the Vue Router in our application. Go ahead and edit your router.js file to specify which pages to show on the different routes:
```js
// src/router.js
import Vue from "vue";
import Router from "vue-router";
import Home from "./views/Home.vue";
import Dashboard from "./views/Dashboard.vue";

Vue.use(Router);

export default new Router({
  mode: "history",
  base: process.env.BASE_URL,
  routes: [
    {
      path: "/",
      name: "home",
      component: Home
    },
    {
      path: "/dashboard",
      name: "dashboard",
      component: Dashboard
    }
  ]
});
```
Also, you need to ensure that you include the router in your src/main.js file like this:
```js
// src/main.js
import Vue from "vue";
import App from "./App.vue";
import router from "./router";

Vue.config.productionTip = false;

new Vue({
  router,
  render: h => h(App)
}).$mount("#app");
```
On the homepage, there are two basic modes: train mode and test mode. To give us the ability to successfully recognize emotions, we are going to make use of a pretrained MobileNet model and pass the result of its inference to train a KNN classifier for our different moods. In simpler terms, MobileNet is responsible for getting activations from the image, and the KNN classifier accepts the activation for a particular image and predicts which class it belongs to by selecting the class whose stored activations it is closest to.
More explanation on how predictions are generated will be shared later on in the article.
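In the meantime, here is a rough conceptual sketch of that flow, using the same calls you will see in the Home component later. The imageTensor and newImageTensor parameters are placeholders for tensors created from webcam frames:

```js
// Conceptual flow only; imports and webcam handling are shown in the real component below.
import * as mobilenetModule from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';

async function sketch(imageTensor, newImageTensor) {
  const classifier = knnClassifier.create();
  const net = await mobilenetModule.load();

  // Training: MobileNet turns an image into an activation, stored under a class index
  const activation = net.infer(imageTensor, 'conv_preds');
  classifier.addExample(activation, 0); // e.g. 0 = "angry"

  // Prediction: the classifier returns the class whose stored activations are closest
  const result = await classifier.predictClass(net.infer(newImageTensor, 'conv_preds'));
  console.log(result.classIndex); // index into our emotions array
}
```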
Create a new view in the src/views/ directory of the project:
touch src/views/Home.vue
The homepage has the following template:
```html
<!-- src/views/Home.vue -->
<template>
  <div class="train">
    <template v-if="mode == 'train'">
      <h1>Take pictures that define your different moods in the dropdown</h1>
    </template>
    <template v-else>
      <h1>Take a picture to let us know how you feel about our service</h1>
    </template>
    <select id="use_case" v-on:change="changeOption()">
      <option value="train">Train</option>
      <option value="test">Test</option>
    </select>
    <Camera></Camera>
    <template v-if="mode == 'train'">
      <select id="emotion_options">
        <template v-for="(emotion, index) in emotions">
          <option :key="index" :value="index">{{emotion}}</option>
        </template>
      </select>
      <button v-on:click="trainModel()">Train Model</button>
    </template>
    <template v-else>
      <button v-on:click="getEmotion()">Get Emotion</button>
    </template>
    <h1>{{ detected_e }}</h1>
  </div>
</template>
[...]
```
If the selected mode is train mode, the camera module is displayed and a dropdown is presented for the user to train the different available classes.
Note: In a real-world application, you’ll likely want to train your model before porting it to the web.
If the test mode is selected, the user is shown a button prompting them to take a picture of their face and allow the model to predict their emotion.
Now, let’s take a look at the rest of the Home component and see how it all works:
```html
<!-- src/views/Home.vue -->
[...]

<script>
// @ is an alias to /src
import Camera from "@/components/Camera.vue";
import * as tf from '@tensorflow/tfjs';
import * as mobilenetModule from '@tensorflow-models/mobilenet';
import * as knnClassifier from '@tensorflow-models/knn-classifier';
import axios from 'axios';

[...]
```
First, import the Camera component, the Tensorflow.js library, the MobileNet model and the KNN classifier. There are also other open source models available on the Tensorflow GitHub repository.
Afterwards, specify the data to be rendered to the DOM. Notice that there’s an array of the emotions that we train the model to recognize and predict. The other data properties include:

- classifier - which will represent the KNN classifier.
- mobilenet - which will represent the loaded MobileNet model.
- class - which represents the class to train. Used in train mode.
- detected_e - which represents the emotion the model predicts. Used in test mode.
- mode - which represents what mode is in use.

```js
// src/views/Home.vue
[...]
export default {
  name: "Home",
  components: {
    Camera
  },
  data: function(){
    return {
      emotions: ['angry', 'neutral', 'happy'],
      classifier: null,
      mobilenet: null,
      class: null,
      detected_e: null,
      mode: 'train',
    }
  },

  [...]
```
Let’s also add the methods to the Home component:
```js
// src/views/Home.vue

[...]
  mounted: function(){
    this.init();
  },
  methods: {
    async init(){
      // load the MobileNet model and create a KNN classifier
      this.classifier = knnClassifier.create();
      this.mobilenet = await mobilenetModule.load();
    },
    trainModel(){
      let selected = document.getElementById("emotion_options");
      this.class = selected.options[selected.selectedIndex].value;
      this.addExample();
    },
    addExample(){
      const img = tf.fromPixels(this.$children[0].webcam.webcamElement);
      const logits = this.mobilenet.infer(img, 'conv_preds');
      this.classifier.addExample(logits, parseInt(this.class));
    },

  [...]
```
When the component mounts on the DOM, the init() function is called. This creates an empty KNN classifier and also loads the pretrained MobileNet model. When trainModel() is called, we fetch the image from the camera element and feed it to the MobileNet model for inference. This returns intermediate activations (logits) as Tensorflow tensors, which we then add as an example for the selected class in the classifier. What we have just done is also known as transfer learning.
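If you want to sanity-check your training as you go, the classifier exposes a getClassExampleCount() method from the @tensorflow-models/knn-classifier API. The helper below is a hypothetical extra method you could add to the methods object of Home.vue; it is not part of the tutorial code:

```js
// Optional helper (not in the tutorial's Home.vue) to inspect training progress
logExampleCounts(){
  // getClassExampleCount() maps each class index to the number of stored examples
  const counts = this.classifier.getClassExampleCount();
  this.emotions.forEach((emotion, index) => {
    console.log(`${emotion}: ${counts[index] || 0} example(s)`);
  });
},
```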
Let’s take a look at the methods that are called in test mode. When the getEmotion() method is called, we fetch the image and also obtain logits. Then we call the predictClass method of the classifier to fetch the class the image belongs to.
After the emotion is obtained, we also call registerEmotion(), which sends the detected emotion over to a backend server.
Notice here that the user’s image is never sent anywhere, only the predicted emotion.
```js
// src/views/Home.vue
[...]
    async getEmotion(){
      const img = tf.fromPixels(this.$children[0].webcam.webcamElement);
      const logits = this.mobilenet.infer(img, 'conv_preds');
      const pred = await this.classifier.predictClass(logits);
      this.detected_e = this.emotions[pred.classIndex];
      this.registerEmotion();
    },
    changeOption(){
      const selected = document.getElementById("use_case");
      this.mode = selected.options[selected.selectedIndex].value;
    },
    registerEmotion(){
      axios.post('http://localhost:3128/callback', {
        'emotion': this.detected_e
      }).then( () => {
        alert('Thanks for letting us know how you feel');
      });
    }
  }
};
</script>
```
Let’s see how to create the backend server that triggers events in realtime. Create a server folder inside your realtime-feedback folder and initialize an empty Node project:
```
mkdir server && cd server
yarn init
```
Install the necessary modules for the backend server:
yarn add body-parser cors dotenv express pusher
We need a way to be able to trigger realtime events in our application when a new emotion is predicted. To do this, let’s use Pusher. Pusher allows you to seamlessly add realtime features to your applications without worrying about infrastructure. To get started, create a developer account. Once that is done, create your application and obtain your application keys.
Create a .env file in your server directory to hold the environment variables for this application:
touch .env
Add the following to the .env file:
```
PUSHER_APPID='YOUR_APP_ID'
PUSHER_APPKEY='YOUR_APP_KEY'
PUSHER_APPSECRET='YOUR_APP_SECRET'
PUSHER_APPCLUSTER='YOUR_APP_CLUSTER'
```
Afterward, create an index.js file in the server directory and add the following to it:
```js
// server/index.js
require("dotenv").config();
const express = require("express");
const cors = require("cors");
const bodyParser = require("body-parser");
const Pusher = require("pusher");

// create express application
const app = express();
app.use(cors());
app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());

// initialize pusher
const pusher = new Pusher({
  appId: process.env.PUSHER_APPID,
  key: process.env.PUSHER_APPKEY,
  secret: process.env.PUSHER_APPSECRET,
  cluster: process.env.PUSHER_APPCLUSTER,
  encrypted: true
});

// create application routes
app.post("/callback", function(req, res) {
  // trigger a new_emotion event on the emotion_channel
  pusher.trigger("emotion_channel", "new_emotion", {
    emotion: req.body.emotion
  });
  return res.json({ status: true });
});

app.listen("3128");
```
We create a simple Express application, then initialize Pusher using the environment variables specified in the .env file. Afterwards, we create a simple /callback route that is responsible for triggering a new_emotion event on the emotion_channel with the detected emotion passed as the body.
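As a quick sanity check, you could hit this route yourself before wiring up the frontend. The snippet below is a hypothetical one-off script; it assumes the server above is already running on port 3128 and that you add axios to the server folder (it is only installed in the Vue project so far):

```js
// server/test-callback.js (hypothetical test script, not part of the tutorial)
const axios = require("axios");

axios
  .post("http://localhost:3128/callback", { emotion: "happy" })
  .then(res => console.log(res.data)) // expect { status: true }
  .catch(err => console.error(err.message));
```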
Now, on the dashboard, we listen on the emotion_channel for the new_emotion event. Let’s see how to do this:
Firstly, add the Pusher minified script to your index.html file for use in our application:
```html
<!-- public/index.html -->
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width,initial-scale=1.0">
    <link rel="icon" href="<%= BASE_URL %>favicon.ico">
    <title>Realtime Emotion Recognition Feedback Application</title>
    <script src="https://js.pusher.com/4.3/pusher.min.js"></script>
  </head>

  <body>
    <noscript>
      <strong>We're sorry but the application doesn't work properly without JavaScript enabled. Please enable it to continue.</strong>
    </noscript>
    <div id="app"></div>
    <!-- built files will be auto injected -->
  </body>
</html>
```
Create a new Dashboard view in the src/views directory of the realtime-feedback application:
touch src/views/Dashboard.vue
The dashboard page has the following template:
```html
<!-- src/views/Dashboard.vue -->
<template>
  <div class="dashboard">
    <h1>Here's a summary of how users feel about your service in realtime</h1>
    <div>
      <template v-for="(emotion, index) in emotions">
        <div :key="index">
          <strong>{{index}}</strong> clients: {{ emotion }}
        </div>
      </template>
    </div>
  </div>
</template>

[...]
```
The component only has one function, init(), which we call when the component is mounted. The function creates a new Pusher object, subscribes to the emotion_channel, listens for the new_emotion event and then updates the feedback summary on the dashboard in realtime without any need to refresh the page.
Add the following to the Dashboard view:
```html
<!-- src/views/Dashboard.vue -->
[...]

<script>
export default {
  name: "Dashboard",
  data: function(){
    return {
      emotions: {
        angry: 0,
        neutral: 0,
        happy: 0
      },
      pusher_obj: null,
      e_channel: null,
    }
  },
  mounted: function(){
    this.init();
  },
  methods: {
    init (){
      // create a new pusher object
      // PUSHER_APPKEY should be your pusher application key
      this.pusher_obj = new Pusher('PUSHER_APPKEY', {
        cluster: 'PUSHER_APPCLUSTER',
        encrypted: true
      });
      // subscribe to channel
      this.e_channel = this.pusher_obj.subscribe('emotion_channel');
      // bind the channel to the new event and specify what should be done
      let self = this;
      this.e_channel.bind('new_emotion', function(data) {
        // increment the count for the emotion by one
        self.emotions[`${data.emotion}`] += 1;
      });
    },
  },
}
</script>
```
Note: You’ll need to replace PUSHER_APPKEY and PUSHER_APPCLUSTER with your application key and cluster.
Finally, the src/App.vue is responsible for rendering all our views and components. Edit your src/App.vue to look like this:
```html
<!-- src/App.vue -->
<template>
  <div id="app">
    <img alt="Pusher logo" src="./assets/pusher.jpg" height="100px">
    <router-view/>
  </div>
</template>

<style>
#app {
  font-family: 'Avenir', Helvetica, Arial, sans-serif;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
  text-align: center;
  color: #2c3e50;
}
#nav {
  padding: 30px;
}

#nav a {
  font-weight: bold;
  color: #2c3e50;
}

#nav a.router-link-exact-active {
  color: #42b983;
}
</style>
```
Now, we can take our application for a spin! Run the frontend server using the command:
yarn serve
And in another terminal tab, navigate to the server/ directory and then run the backend server using the command:
node index.js
Head over to your application by navigating to http://localhost:8080 in your browser to view the homepage.
Open the http://localhost:8080/dashboard route in another browser tab so you can see your results in realtime on the dashboard.
Note: To see the training and testing process in action, you’ll need to train at least 10 samples for each of the 3 classes. Also, the training data is lost when you refresh your browser. If you want to persist the trained model, you can save the trained examples to your browser’s local storage.
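For example, the classifier’s getClassifierDataset() and setClassifierDataset() methods (part of the @tensorflow-models/knn-classifier API) can be used to serialize and restore the trained examples. The sketch below is one possible approach, not part of the tutorial code; the storage key and function names are made up for illustration:

```js
// Hypothetical persistence helpers for the KNN classifier's examples
import * as tf from '@tensorflow/tfjs';

function saveClassifier(classifier){
  // each dataset entry is a tensor of stored activations for one class
  const serialized = Object.entries(classifier.getClassifierDataset()).map(
    ([classId, tensor]) => [classId, Array.from(tensor.dataSync()), tensor.shape]
  );
  localStorage.setItem('emotion-classifier', JSON.stringify(serialized));
}

function loadClassifier(classifier){
  const stored = localStorage.getItem('emotion-classifier');
  if (!stored) return;
  const dataset = {};
  JSON.parse(stored).forEach(([classId, values, shape]) => {
    dataset[classId] = tf.tensor(values, shape);
  });
  classifier.setClassifierDataset(dataset);
}
```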
In this tutorial, we went through how to build a realtime emotion recognition application using Pusher, Tensorflow.js and Vue.js in the browser, without needing to send the user’s image to any external service. Feel free to explore more on machine learning and play with some awesome demos here. Here’s a link to the GitHub repository. Happy hacking!