CC – Week 12: Automation – Project
This week, we’re using Google’s Teachable Machine with ml5.js to identify whether you’re a human, a phone, or a human holding a phone 👀 & speak the result aloud!
(Note: requires Chrome or another browser that supports the Web Speech API.)
View demo
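If you want to check up front whether the speaking part will work, the text-to-speech half of the Web Speech API lives on window.speechSynthesis, so a minimal check (my addition, not part of the demo) could look like this:

// Quick feature check: text-to-speech in the Web Speech API lives on window.speechSynthesis
if (!('speechSynthesis' in window)) {
  console.warn('No Web Speech API here; the label will still show, but nothing will be spoken.')
}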
The model
Plum & I trained a simple model using Teachable Machine with pictures of us & our iPhones:
Try on Teachable Machine
Making a web app
With p5, ml5, & the pre-made model, it’s pretty straightforward to show a camera feed & the model’s identification on a website:
let classifier
// URL of the exported Teachable Machine model
let imageModelURL = 'https://teachablemachine.withgoogle.com/models/AI5i76oG/'
let video
let flippedVideo
let label = ''

function preload() {
  classifier = ml5.imageClassifier(imageModelURL + 'model.json')
}

function setup() {
  createCanvas(windowWidth, windowWidth * 0.75)
  video = createCapture(VIDEO)
  video.size(width, height)
  video.hide()
  flippedVideo = ml5.flipImage(video)
  classifyVideo()
}

function draw() {
  background(0)
  image(flippedVideo, 0, 0)

  // Translucent bar with the current label
  noStroke()
  fill(100, 100, 100, 200)
  rect(0, height - 64, width, 64)
  fill(255)
  textFont('system-ui')
  textSize(48)
  textAlign(CENTER)
  text(label, width / 2, height - 15)
}

function classifyVideo() {
  // Mirror the video so it behaves like a selfie camera
  flippedVideo = ml5.flipImage(video)
  classifier.classify(flippedVideo, gotResult)
}

function gotResult(error, results) {
  if (error) {
    console.error(error)
    return
  }
  // results[0] is the most confident guess; classify again to keep the loop going
  label = results[0].label
  classifyVideo()
}
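By the way, each entry in the results array ml5 hands back also carries a confidence score, so if you want to show how sure the model is, a small optional tweak to gotResult (not part of the sketch above) could look like this:

function gotResult(error, results) {
  if (error) {
    console.error(error)
    return
  }
  // results[0] is the top guess; confidence is a score between 0 & 1
  const top = results[0]
  label = top.label + ' (' + Math.round(top.confidence * 100) + '%)'
  classifyVideo()
}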
Adding a voice
But that’s boring! A few weeks ago I used p5.speech for speech recognition, but the library also allows text-to-speech. What if we announced the label aloud every time it changed?
To initialize the voice, this is all we need at the top of the JavaScript file:
let voice = new p5.Speech()
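p5.speech also exposes a few knobs on the p5.Speech object, setRate & setPitch among them as I recall, so you can tune how the announcement sounds (the values below are just guesses to play with):

// Optional tuning; rate & pitch values here are arbitrary, adjust to taste
voice.setRate(1.1)
voice.setPitch(1.0)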
Then, in the gotResult function, we swap out the line that sets label for:
const newLabel = results[0].label
if (newLabel !== label) {
  voice.speak(newLabel)
  label = newLabel
}
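For context, here’s the whole gotResult function with that change folded in:

function gotResult(error, results) {
  if (error) {
    console.error(error)
    return
  }
  const newLabel = results[0].label
  // Only speak when the classification actually changes
  if (newLabel !== label) {
    voice.speak(newLabel)
    label = newLabel
  }
  classifyVideo()
}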
That’s it!