ml5 Feature Extractor Mood Background Changer

Denisa Marcisovska
5 min read · Jun 4, 2020

A live example of the code: https://dmarcisovska.github.io/ml5-feature-extractor-mood-background-changer/

Github code: https://github.com/dmarcisovska/ml5-feature-extractor-mood-background-changer

In this code example I created a background mood changer using the ml5 feature extractor. The end user trains the pre-trained model to detect whether they are smiling or frowning, and based on their facial expression the background changes to either rain or sunshine.

The ml5 feature extractor is built on transfer learning: you reuse the features already learned by a pre-trained model (here, MobileNet) and retrain only a new final layer for your own task. In this instance, the task is determining whether the user is smiling or frowning.
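Stripped of the UI, the workflow looks roughly like the sketch below. It uses the same ml5 0.4.x calls as the full project later in this post (ml5.featureExtractor, classification, addImage, train, classify); the labels, callbacks, and logging here are just placeholders, not the final code.

// Minimal sketch of the transfer-learning workflow (ml5 0.4.x + p5.js).
// The full project code appears later in this post; labels here are placeholders.
let classifier;

function setup() {
  createCanvas(340, 270);
  const video = createCapture(VIDEO);
  video.hide();

  // Load MobileNet and reuse its learned features for a new two-class task.
  const mobilenet = ml5.featureExtractor('MobileNet', () => console.log('model ready'));
  classifier = mobilenet.classification(video, () => console.log('video ready'));

  // 1. Collect a few labeled frames per class, e.g. on button clicks:
  //    classifier.addImage('happy');  classifier.addImage('sad');
  // 2. Retrain only the new classification layer on top of MobileNet:
  //    classifier.train(loss => { if (loss == null) classifier.classify(gotResult); });
}

function gotResult(error, result) {
  if (error) return console.error(error);
  console.log(result[0].label); // 'happy' or 'sad'
  classifier.classify(gotResult); // keep classifying frame after frame
}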

HTML

To create the base of the ml5 feature extractor project I added some external CSS and JavaScript files: p5.js, ml5.js, Google Fonts, Bootstrap, Font Awesome, an external weather CSS file, and my own CSS file. p5.js lets me create a canvas, ml5.js provides the feature extractor, and the weather CSS is what draws the raindrops.

I added some instructions so the end user knows how to use the ml5 feature extractor. I also added three audio files: rain plays when the user is sad, crickets play when the user is happy, and a click sound plays when the user presses the train button. Finally, I created the buttons the user clicks on to train the model.

<!DOCTYPE html>
<html lang="en">
<head>
  <title>Mood Background Changer</title>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.10.2/p5.js"></script>
  <script src="https://unpkg.com/ml5@0.4.3/dist/ml5.min.js"></script>
  <link href='https://fonts.googleapis.com/css?family=Lato:300' rel='stylesheet' type='text/css'>
  <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/css/bootstrap.min.css" integrity="sha384-9aIt2nRpC12Uk9gS9baDl411NQApFmC26EwAOH8WgZl5MYYxFfc+NcPb1dKGj7Sk" crossorigin="anonymous">
  <link rel="stylesheet" type="text/css" href="https://cdn.rawgit.com/moqmar/weather.css/master/weather.min.css">
  <link rel="stylesheet" type="text/css" href="style.css">
  <meta charset="utf-8" />
</head>
<body class="">
  <div class="container">
    <div class="col">
      <div class="row">
        <div class="folded-corner mt-5">
          <h3>Mood Background Changer</h3>
          <hr>
          <p>Train the model below to detect your mood.</p>
          <ol>
            <li>Make your best exaggerated smile and press the smile button 10 or more times.</li>
            <li>Make your best exaggerated frown and press the frown button 10 or more times.</li>
            <li>Click the train button once.</li>
            <li>Wait a few seconds for the model to train itself.</li>
            <li>The model will now change background based on if you are smiling or frowning.</li>
            <li>If you like you can use objects or different gestures instead of a smile or frown.</li>
          </ol>
          <div class="mt-2">
            <button type="button" id="happy" class="btn btn-light mr-2"> <i class="fas fa-smile"></i> Happy</button>
            <button type="button" id="sad" class="btn btn-light mr-2"> <i class="fas fa-sad-tear"></i> Sad</button>
            <button type="button" id="train" class="btn btn-light"> <i class="fas fa-running"></i> Train</button>
          </div>
        </div>
      </div>
    </div>
  </div>
  <audio id="crickets" src="assets/crickets.wav">
    <p class="text-center">If you are reading this, it is because your browser does not support the audio element.</p>
  </audio>
  <audio id="rain" src="assets/rain.wav">
    <p class="text-center">If you are reading this, it is because your browser does not support the audio element.</p>
  </audio>
  <audio id="buttonAudio" src="assets/button.wav">
    <p class="text-center">If you are reading this, it is because your browser does not support the audio element.</p>
  </audio>
  <script src="https://kit.fontawesome.com/4c7879890d.js" crossorigin="anonymous"></script>
  <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js" integrity="sha384-DfXdz2htPH0lsSSs5nCTpuj/zy4C+OGpamoFVy38MVBnE+IbbVYUew+OrCXaRkfj" crossorigin="anonymous"></script>
  <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.0/dist/umd/popper.min.js" integrity="sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo" crossorigin="anonymous"></script>
  <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/js/bootstrap.min.js" integrity="sha384-OgVRvuATP1z7JjHLkuOU7Xw704+h835Lr+6QL9UvYjZE3Ipu6Tp75j7Bh/kR0JKI" crossorigin="anonymous"></script>
  <script src="sketch.js"></script>
</body>
</html>

CSS

To style the ml5 feature extractor project I added CSS that centers the video, sets an initial background image for the page before the model starts changing it, mimics a sheet of paper for the instructions panel, and changes the button icon color.

canvas {
  display: block;
  margin-left: auto;
  margin-right: auto;
  margin-top: 20px;
}

body {
  background: url("assets/night.jpg") no-repeat fixed 0 0 / cover;
  font-family: 'Lato', sans-serif;
}

.folded-corner {
  position: relative;
  width: 80%;
  padding: 1em 1.5em;
  margin: 0 auto;
  color: #fff;
  background-color: rgba(71,167,154,.8);
  overflow: hidden;
}

.folded-corner:before {
  content: "";
  position: absolute;
  top: 0;
  right: 0;
  border-width: 0 20px 20px 0;
  border-style: solid;
  border-color: #15253c #15253c #308277 #308277;
  -webkit-box-shadow: 0 1px 1px rgba(0,0,0,0.3), -1px 1px 1px rgba(0,0,0,0.2);
  -moz-box-shadow: 0 1px 1px rgba(0,0,0,0.3), -1px 1px 1px rgba(0,0,0,0.2);
  box-shadow: 0 1px 1px rgba(0,0,0,0.3), -1px 1px 1px rgba(0,0,0,0.2);
  display: block;
  width: 0;
}

.fas:before {
  color: #47a79a;
}

JavaScript

I grabbed the body element and the audio elements and stored them in variables — body, rainAudio, cricketAudio, and buttonAudio.

During training, the ml5 feature extractor reports a loss value, a measure of how far off its predictions currently are. As training continues the loss gets lower and lower, and once training is complete the callback receives null instead of a number. In the whileTraining function we watch for that null loss, and as soon as we see it we start classifying the video with the gotResults function.

The gotResults function logs an error if one is present; otherwise it stores the predicted label in the variable I named results and immediately asks for the next classification. If the label is sad, I change the background image to a rainy photo, play the rain audio file, pause the cricket audio if it is playing, and add the weather rain classes to the body tag. If the label is happy, I set the background to a sunny photo, pause the rain audio, play the crickets, and remove the weather rain classes from the body tag if they are present, getting rid of the rain.

In the setup function, I grab the sad and happy button elements and set onclick functions so that each press captures the current video frame as a training example for that label. The sad and happy buttons also track the number of clicks and display the count on the button, so the user knows how many sad and happy images they have saved for training the feature extractor. I also grab the train button, which resets the counters and starts the training process.

In the draw function I draw the video onto the canvas (created in setup), reversing it so the user sees a reflection of themselves.

let mobilenet;
let classifier;
let video;
let results;
let sadClicks = 0;
let happyClicks = 0;
let body = document.getElementsByTagName('body')[0];
let rainAudio = document.getElementById("rain");
let cricketAudio = document.getElementById("crickets");
let buttonAudio = document.getElementById("buttonAudio");

function modelReady() {
  console.log('Model is ready!!!');
}

function videoReady() {
  console.log('Video is ready!!!');
}

// Called repeatedly during training; loss is null once training is complete.
function whileTraining(loss) {
  if (loss == null) {
    console.log('Training Complete');
    classifier.classify(gotResults);
  } else {
    console.log(loss);
  }
}

function gotResults(error, result) {
  if (error) {
    console.error(error);
  } else {
    results = result[0].label;
    // Ask for the next classification so the background updates continuously.
    classifier.classify(gotResults);
    if (results === "sad") {
      document.body.style.backgroundImage = "url('assets/sad.jpg')";
      body.setAttribute('class', 'weather rain');
      cricketAudio.pause();
      rainAudio.play();
    }
    if (results === "happy") {
      if (document.getElementsByClassName('weather').length) {
        body.classList.remove("weather");
        body.classList.remove("rain");
      }
      document.body.style.backgroundImage = "url('assets/happy.jpg')";
      rainAudio.pause();
      cricketAudio.play();
    }
  }
}

function setup() {
  createCanvas(340, 270);
  video = createCapture(VIDEO);
  video.hide();
  background(0);
  mobilenet = ml5.featureExtractor('MobileNet', modelReady);
  classifier = mobilenet.classification(video, videoReady);
  // Each button click adds the current video frame as a labeled training example.
  document.getElementById('sad').onclick = function () {
    sadClicks += 1;
    document.getElementById('sad').innerHTML = " <i class=\"fas fa-sad-tear\"></i>" + "Sad images trained: " + sadClicks;
    classifier.addImage('sad');
  };
  document.getElementById('happy').onclick = function () {
    happyClicks += 1;
    document.getElementById('happy').innerHTML = " <i class=\"fas fa-smile\"></i>" + "Happy images trained: " + happyClicks;
    classifier.addImage('happy');
  };
  // The train button resets the counters and starts training.
  document.getElementById('train').onclick = function () {
    buttonAudio.play();
    document.getElementById('sad').innerHTML = " <i class=\"fas fa-sad-tear\"></i>" + "Sad";
    document.getElementById('happy').innerHTML = " <i class=\"fas fa-smile\"></i>" + "Happy";
    happyClicks = 0;
    sadClicks = 0;
    classifier.train(whileTraining);
  };
}

function draw() {
  // Mirror the video so the user sees a reflection of themselves.
  push();
  translate(width, 0);
  scale(-1, 1);
  image(video, 0, 0, 340, 270);
  pop();
}
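
One optional extension, not part of the project above: the retrained classifier does not have to be thrown away every time the page closes. The rough sketch below assumes the ml5 0.4.x feature extractor save()/load() methods; check the ml5 reference for your version before relying on it.

// Hypothetical extension (not in the project above): persist the retrained model
// so the user does not have to retrain on every visit. Assumes ml5 0.4.x save()/load().
function trainAndSave() {
  classifier.train(function (loss) {
    if (loss == null) {
      classifier.save(); // downloads the model files (e.g. model.json and weights)
      classifier.classify(gotResults);
    }
  });
}

// On a later visit, load the saved model instead of retraining:
// classifier.load('model.json', function () {
//   classifier.classify(gotResults);
// });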
