Detecting FEX from images

How to use the Feat Detector class.

Written by Jin Hyun Cheong

Here is an example of how to use the Detector class to detect faces, facial landmarks, Action Units, and emotions from face images or videos.

Let’s start by installing Py-Feat if you have not already done so, or if you are using this notebook from Google Colab.

# !pip install -q py-feat

Detecting facial expressions from images.

First, load the Detector class. You can specify which models you want to use.

from feat import Detector
face_model = "retinaface"
landmark_model = "mobilenet"
au_model = "rf"
emotion_model = "resmasknet"
detector = Detector(face_model=face_model, landmark_model=landmark_model,
                    au_model=au_model, emotion_model=emotion_model)
Loading Face Detection model:  retinaface
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/mobilenet0.25_Final.pth
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/mobilenet_224_model_best_gdconv_external.pth.tar
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/hog_pca_all_emotio.joblib
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/hog_pca_all_emotio.joblib
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/hog_scalar_aus.joblib
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/RF_568.joblib
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/hog_pca_all_emotio.joblib
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/hog_scalar_aus.joblib
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/ResMaskNet_Z_resmasking_dropout1_rot30.pth
Loading Face Landmark model:  mobilenet
Loading au model:  rf
Loading emotion model:  resmasknet
Loading facepose model:  pnp
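
All of these arguments are optional. If you omit them, Detector falls back to its built-in defaults, which we assume here to be the same models named above:

# A minimal sketch: rely on Detector's built-in defaults.
# (Assumption: the defaults match the models named above.)
detector = Detector()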

Find the file you want to process. In our case, we’ll use our test image input.jpg.

# Find the file you want to process.
from feat.tests.utils import get_test_data_path
import os
test_data_dir = get_test_data_path()
test_image = os.path.join(test_data_dir, "input.jpg")

Here is what our test image looks like.

from PIL import Image
import matplotlib.pyplot as plt
f, ax = plt.subplots()
im = Image.open(test_image)
ax.imshow(im);

Now we use our initialized detector instance to make predictions with the detect_image() method.

image_prediction = detector.detect_image(test_image)
# Show results
image_prediction
frame FaceRectX FaceRectY FaceRectWidth FaceRectHeight FaceScore x_0 x_1 x_2 x_3 ... Roll Yaw anger disgust fear happiness sadness surprise neutral input
0 0 196.976852 140.997742 173.810471 257.639343 0.999681 192.864591 191.586714 192.874615 197.39479 ... -1.903961 4.869264 0.000369 0.000026 0.000485 0.986996 0.000046 0.01201 0.000068 /tf/notebooks/second-py-feat/feat/tests/data/i...

1 rows × 173 columns

The output is a Fex class instance, which allows you to use the built-in Fex methods.
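
Under the hood, Fex subclasses the pandas DataFrame, so ordinary pandas indexing should work too. A minimal sketch, using column names from the output above:

# Fex behaves like a pandas DataFrame, so standard column indexing works.
image_prediction["happiness"]               # a single emotion column
image_prediction[["Pitch", "Roll", "Yaw"]]  # the head pose columns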

Visualizing detection results.

For example, you can easily plot the detection results.

image_prediction.plot_detections();

If you are interested in visualizing the head pose, you can do so simply by setting pose=True. This setting will overlay the x, y, and z axes of the head onto the image.

image_prediction.plot_detections(pose=True);

Accessing face expression columns of interest.

You can also quickly access the columns of interest (AUs, emotions).

image_prediction.facebox()
FaceRectX FaceRectY FaceRectWidth FaceRectHeight FaceScore
0 196.976852 140.997742 173.810471 257.639343 0.999681
image_prediction.aus()
AU01 AU02 AU04 AU05 AU06 AU07 AU09 AU10 AU11 AU12 AU14 AU15 AU17 AU20 AU23 AU24 AU25 AU26 AU28 AU43
0 0.56313 0.510732 0.18804 0.17787 0.874216 0.694585 0.319272 0.927146 0.4151 0.964379 0.646568 0.339296 0.2386 0.25925 0.224025 0.039379 0.966135 0.389137 0.142019 0.141042
image_prediction.emotions()
anger disgust fear happiness sadness surprise neutral
0 0.000369 0.000026 0.000485 0.986996 0.000046 0.01201 0.000068
image_prediction.facepose() # (in degrees)
Pitch Roll Yaw
0 0.968232 -1.903961 4.869264
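
Each accessor returns a frame with one row per detected face, so they share an index and can be stitched together with plain pandas. A minimal sketch:

import pandas as pd
# Combine AUs, emotions, and head pose into a single feature frame.
features = pd.concat([image_prediction.aus(),
                      image_prediction.emotions(),
                      image_prediction.facepose()], axis=1)
features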

Detecting facial expressions and saving to a file.

You can also write the results to a file by specifying outputFname. The detector will return True when it’s finished.

detector.detect_image(test_image, outputFname="output.csv")
True

Loading detection results from saved file.

The outputs can be loaded using our read_feat() function or with a simple pandas read_csv(). We recommend read_feat() because it lets you use the full suite of Feat functionality more easily.

from feat.utils import read_feat
image_prediction = read_feat("output.csv")
# Show results
image_prediction
frame FaceRectX FaceRectY FaceRectWidth FaceRectHeight FaceScore x_0 x_1 x_2 x_3 ... Roll Yaw anger disgust fear happiness sadness surprise neutral input
0 0 196.976852 140.997742 173.810471 257.639343 0.999681 192.864591 191.586714 192.874615 197.39479 ... -1.903961 4.869264 0.000369 0.000026 0.000485 0.986996 0.000046 0.01201 0.000068 /tf/notebooks/second-py-feat/feat/tests/data/i...

1 rows × 173 columns

import pandas as pd
image_prediction = pd.read_csv("output.csv")
# Show results
image_prediction
frame FaceRectX FaceRectY FaceRectWidth FaceRectHeight FaceScore x_0 x_1 x_2 x_3 ... Roll Yaw anger disgust fear happiness sadness surprise neutral input
0 0 196.976852 140.997742 173.810471 257.639343 0.999681 192.864591 191.586714 192.874615 197.39479 ... -1.903961 4.869264 0.000369 0.000026 0.000485 0.986996 0.000046 0.01201 0.000068 /tf/notebooks/second-py-feat/feat/tests/data/i...

1 rows × 173 columns
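
The practical difference is the return type: read_feat() gives you a Fex instance, so methods like plot_detections() and aus() remain available, while pd.read_csv() returns a plain DataFrame. A quick check (assuming Fex subclasses the pandas DataFrame):

from feat.utils import read_feat
import pandas as pd
print(type(read_feat("output.csv")))    # expected: the Fex class
print(type(pd.read_csv("output.csv")))  # a plain pandas DataFrame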

Detecting facial expressions from images with many faces.

Feat’s Detector can find multiple faces in a single image. Let’s also practice switching the underlying models (here: the face, AU, emotion, and face pose models).

face_model = "img2pose" # img2pose acts as both a face detector and a face pose estimator
landmark_model = "mobilenet"
au_model = "logistic"
emotion_model = "fer"
facepose_model = "img2pose"
detector = Detector(face_model=face_model, landmark_model=landmark_model, au_model=au_model, 
                    emotion_model=emotion_model, facepose_model=facepose_model)

test_image = os.path.join(test_data_dir, "tim-mossholder-hOF1bWoet_Q-unsplash.jpg")
image_prediction = detector.detect_image(test_image)
# Show results
image_prediction
Loading Face Detection model:  img2pose
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/img2pose_v1.pth
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/WIDER_train_pose_mean_v1.npy
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/WIDER_train_pose_stddev_v1.npy
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/reference_3d_68_points_trans.npy
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/mobilenet_224_model_best_gdconv_external.pth.tar
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/hog_pca_all_emotio.joblib
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/hog_pca_all_emotio.joblib
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/hog_scalar_aus.joblib
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/Logistic_520.joblib
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/hog_pca_all_emotio.joblib
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/hog_scalar_aus.joblib
Using downloaded and verified file: /tf/notebooks/second-py-feat/feat/resources/best_ferModel.pth
Loading Face Landmark model:  mobilenet
Loading au model:  logistic
Loading emotion model:  fer
Loading facepose model:  img2pose
frame FaceRectX FaceRectY FaceRectWidth FaceRectHeight FaceScore x_0 x_1 x_2 x_3 ... Roll Yaw anger disgust fear happiness sadness surprise neutral input
0 0 307.0 230.0 107.0 126.0 0.994716 315.429249 316.183640 317.912851 321.205881 ... -2.973651 -4.679367 0.000101 0.002426 0.000216 0.914512 0.001248 0.001236 0.080261 /tf/notebooks/second-py-feat/feat/tests/data/t...
1 0 529.0 303.0 110.0 132.0 0.994049 536.733620 534.369016 533.012366 533.194444 ... 4.255120 8.171329 0.000269 0.006189 0.001372 0.973058 0.000859 0.000572 0.017682 /tf/notebooks/second-py-feat/feat/tests/data/t...
2 0 676.0 283.0 120.0 140.0 0.993604 686.931943 685.053395 684.001543 684.407677 ... 9.734631 6.029474 0.000677 0.005988 0.001107 0.953000 0.001957 0.005966 0.031306 /tf/notebooks/second-py-feat/feat/tests/data/t...
3 0 215.0 45.0 102.0 125.0 0.990833 220.173312 218.134724 216.974434 217.157989 ... 14.068018 -4.917812 0.001056 0.005686 0.000178 0.930699 0.000242 0.001177 0.060962 /tf/notebooks/second-py-feat/feat/tests/data/t...
4 0 430.0 208.0 85.0 100.0 0.990231 439.764975 439.281783 439.591148 440.700340 ... 1.971143 -4.030670 0.001312 0.018950 0.000629 0.665189 0.040488 0.000821 0.272612 /tf/notebooks/second-py-feat/feat/tests/data/t...

5 rows × 173 columns

Visualizing multiple faces in a single image.

image_prediction.plot_detections(pose=True);

Detecting facial expressions from multiple images

You can also detect facial expressions from a list of images. Just place the paths to the images in a list and pass it to detect_image().

# Find the file you want to process.
from feat.tests.utils import get_test_data_path
import os, glob
test_data_dir = get_test_data_path()
test_images = [file for file in glob.glob(os.path.join(test_data_dir, "*.jpg"))
               if not os.path.basename(file).startswith('no-face')]  # Avoid the test image with no face contained in it
print(test_images)

image_prediction = detector.detect_image(test_images)
image_prediction
['/tf/notebooks/second-py-feat/feat/tests/data/input.jpg', '/tf/notebooks/second-py-feat/feat/tests/data/tim-mossholder-hOF1bWoet_Q-unsplash.jpg']
frame FaceRectX FaceRectY FaceRectWidth FaceRectHeight FaceScore x_0 x_1 x_2 x_3 ... Roll Yaw anger disgust fear happiness sadness surprise neutral input
0 0 185.0 139.0 205.0 267.0 0.989039 194.488554 192.784420 193.521747 197.423195 ... -3.737930 6.101128 0.001600 0.017335 0.000693 0.950639 0.000309 0.000382 0.029042 /tf/notebooks/second-py-feat/feat/tests/data/i...
0 0 307.0 230.0 107.0 126.0 0.994716 315.429249 316.183640 317.912851 321.205881 ... -2.973651 -4.679367 0.000101 0.002426 0.000216 0.914512 0.001248 0.001236 0.080261 /tf/notebooks/second-py-feat/feat/tests/data/t...
1 0 529.0 303.0 110.0 132.0 0.994049 536.733620 534.369016 533.012366 533.194444 ... 4.255120 8.171329 0.000269 0.006189 0.001372 0.973058 0.000859 0.000572 0.017682 /tf/notebooks/second-py-feat/feat/tests/data/t...
2 0 676.0 283.0 120.0 140.0 0.993604 686.931943 685.053395 684.001543 684.407677 ... 9.734631 6.029474 0.000677 0.005988 0.001107 0.953000 0.001957 0.005966 0.031306 /tf/notebooks/second-py-feat/feat/tests/data/t...
3 0 215.0 45.0 102.0 125.0 0.990833 220.173312 218.134724 216.974434 217.157989 ... 14.068018 -4.917812 0.001056 0.005686 0.000178 0.930699 0.000242 0.001177 0.060962 /tf/notebooks/second-py-feat/feat/tests/data/t...
4 0 430.0 208.0 85.0 100.0 0.990231 439.764975 439.281783 439.591148 440.700340 ... 1.971143 -4.030670 0.001312 0.018950 0.000629 0.665189 0.040488 0.000821 0.272612 /tf/notebooks/second-py-feat/feat/tests/data/t...

6 rows × 173 columns
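
Because the input column records which file each row came from, plain pandas grouping works here as well. A sketch summarizing the detections per image, using columns from the output above:

# Number of faces detected in each input image.
print(image_prediction.groupby("input").size())

# Mean emotion probabilities per input image.
print(image_prediction.groupby("input")[["happiness", "neutral", "surprise"]].mean())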

When you have multiple images, you can still call plot_detections(), which will plot the results for all input images. If you have a lot of images, we recommend inspecting them one by one using slicing.

image_prediction.plot_detections();

You can use slicing to plot specific rows of the detection results, or to plot the detections for a particular input file.

image_prediction.iloc[[1]].plot_detections();
image_to_plot = image_prediction.input().unique()[1]
image_prediction.query("input == @image_to_plot").plot_detections();
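
To step through every input image in turn, the same pattern can be wrapped in a loop. A sketch building on the query() call above:

# Plot detections separately for each input image.
for img in image_prediction.input().unique():
    image_prediction.query("input == @img").plot_detections()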

Detecting FEX from videos

Detecting facial expressions in videos is just as easy, using the detect_video() method. This sample video is by Wolfgang Langer from Pexels.

# Find the file you want to process.
from feat.tests.utils import get_test_data_path
import os, glob
test_data_dir = get_test_data_path()
test_video = os.path.join(test_data_dir, "WolfgangLanger_Pexels.mp4")

# Show video
from IPython.display import Video
Video(test_video, embed=True)
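
As a preview of the next step, here is a minimal sketch of running the detector on this video. The skip_frames argument, which subsamples frames for speed, is an assumption about this version’s API:

# Sketch: run detection over the video, sampling frames to keep it quick.
# skip_frames is assumed to be supported by detect_video() in this version.
video_prediction = detector.detect_video(test_video, skip_frames=24)
video_prediction.head()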