3. Detecting facial expressions from videos

Written by Jin Hyun Cheong and Eshin Jolly

In this tutorial we’ll explore how to use the Detector class to process video files. You can try it out interactively in Google Colab: Open In Colab

# Uncomment the line below and run this only if you're using Google Colab
# !pip install -q py-feat

3.1 Setting up the Detector

We’ll begin by creating a new Detector instance, just like in the previous tutorial.

from feat import Detector

face_model = "img2pose"
landmark_model = "mobilenet"
au_model = "logistic"
emotion_model = "fer"
facepose_model = "img2pose"
detector = Detector(
    face_model=face_model,
    landmark_model=landmark_model,
    au_model=au_model,
    emotion_model=emotion_model,
    facepose_model=facepose_model,
)

detector
feat.detector.Detector(face_model=img2pose, landmark_model=mobilenet, au_model=logistic, emotion_model=fer, facepose_model=img2pose)

3.2 Processing videos

Detecting facial expressions in videos is easy using the .detect_video() method. We’ll use a sample video included with Py-Feat, filmed by Wolfgang Langer and available on Pexels.

from feat.utils import get_test_data_path
import os

test_data_dir = get_test_data_path()
test_video_path = os.path.join(test_data_dir, "WolfgangLanger_Pexels.mp4")

# Show video
from IPython.display import Video
Video(test_video_path, embed=True)
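
With the video loaded, the next step is to pass its path to the detector. Below is a minimal sketch using the .detect_video() method mentioned above; the bare call and the DataFrame-like result reflect Py-Feat's documented behavior, but extra options (for example, a skip_frames argument for subsampling frames) vary by version, so treat any such keyword as an assumption and check the documentation for your installed version.

# Run the full detection pipeline on the video.
# Note: processing every frame can take a while without a GPU.
video_prediction = detector.detect_video(test_video_path)

# The result is a Fex object that behaves like a pandas DataFrame:
# one row per processed frame, with columns for face boxes, landmarks,
# action units, emotions, and head pose.
video_prediction.head()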