Detecting FEX from images

How to use the Py-FEAT Detector class.

Written by Jin Hyun Cheong

Here is an example of how to use the Detector class to detect faces, facial landmarks, Action Units, and emotions from face images or videos.

Let’s start by installing Py-FEAT if you have not already done so, or if you are using this from Google Colab:

!pip install -q py-feat

Detecting facial expressions from images.

First, load the detector class. You can specify which models you want to use.

from feat import Detector
face_model = "retinaface"
landmark_model = "mobilenet"
au_model = "rf"
emotion_model = "resmasknet"
detector = Detector(face_model=face_model, landmark_model=landmark_model, au_model=au_model, emotion_model=emotion_model)
Loading Face Detection model:  retinaface
Using downloaded and verified file: /home/jcheong/packages/feat/feat/resources/mobilenet0.25_Final.pth
Using downloaded and verified file: /home/jcheong/packages/feat/feat/resources/mobilenet_224_model_best_gdconv_external.pth.tar
Using downloaded and verified file: /home/jcheong/packages/feat/feat/resources/hog_pca_all_emotio.joblib
Using downloaded and verified file: /home/jcheong/packages/feat/feat/resources/RF_568.joblib
Using downloaded and verified file: /home/jcheong/packages/feat/feat/resources/ResMaskNet_Z_resmasking_dropout1_rot30.pth
Loading Face Landmark model:  mobilenet
Loading au model:  rf
/home/jcheong/anaconda3/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator PCA from version 0.24.1 when using version 0.23.2. This might lead to breaking code or invalid results. Use at your own risk.
  warnings.warn(
/home/jcheong/anaconda3/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator DecisionTreeClassifier from version 0.24.1 when using version 0.23.2. This might lead to breaking code or invalid results. Use at your own risk.
  warnings.warn(
/home/jcheong/anaconda3/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator RandomForestClassifier from version 0.24.1 when using version 0.23.2. This might lead to breaking code or invalid results. Use at your own risk.
  warnings.warn(
Loading emotion model:  resmasknet

Find the file you want to process. In our case, we’ll use our test image input.jpg.

# Find the file you want to process.
from feat.tests.utils import get_test_data_path
import os
test_data_dir = get_test_data_path()
test_image = os.path.join(test_data_dir, "input.jpg")

Here is what our test image looks like.

from PIL import Image
import matplotlib.pyplot as plt
f, ax = plt.subplots()
im = Image.open(test_image)
ax.imshow(im);
../_images/detector_7_0.png

Now we use our initialized detector instance to make predictions with the detect_image() method.

image_prediction = detector.detect_image(test_image)
# Show results
image_prediction
frame FaceRectX FaceRectY FaceRectWidth FaceRectHeight FaceScore x_0 x_1 x_2 x_3 ... AU28 AU43 anger disgust fear happiness sadness surprise neutral input
0 0.0 196.976852 140.997742 173.810471 257.639343 0.999681 192.864591 191.586714 192.874615 197.39479 ... 0.117955 0.143632 0.000369 0.000026 0.000485 0.986996 0.000046 0.01201 0.000068 /home/jcheong/packages/feat/feat/tests/data/in...

1 rows × 170 columns
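The 170 columns break down into the groups shown above. Assuming the standard 68-point landmark set (x_0 through x_67 plus the matching y columns), the count works out as follows:

```python
# Column-count sketch, assuming 68 (x, y) landmark pairs:
# frame + 5 face box columns + 2 * 68 landmark coordinates
# + 20 AU columns + 7 emotion columns + the input path.
n_cols = 1 + 5 + 2 * 68 + 20 + 7 + 1
print(n_cols)  # 170
```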

The output is a Fex class instance, which allows you to run Fex’s built-in methods.

Visualizing detection results.

For example, you can easily plot the detection results.

image_prediction.plot_detections();
../_images/detector_11_0.png

Accessing face expression columns of interest.

You can also quickly access the columns of interest (face boxes, AUs, emotions).

image_prediction.facebox()
FaceRectX FaceRectY FaceRectWidth FaceRectHeight FaceScore
0 196.976852 140.997742 173.810471 257.639343 0.999681
image_prediction.aus()
AU01 AU02 AU04 AU05 AU06 AU07 AU09 AU10 AU11 AU12 AU14 AU15 AU17 AU20 AU23 AU24 AU25 AU26 AU28 AU43
0 0.592975 0.487862 0.182525 0.175082 0.890265 0.687173 0.361338 0.929177 0.464157 0.959261 0.657362 0.350172 0.248738 0.269894 0.248381 0.044258 0.963012 0.358748 0.117955 0.143632
image_prediction.emotions()
anger disgust fear happiness sadness surprise neutral
0 0.000369 0.000026 0.000485 0.986996 0.000046 0.01201 0.000068
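Because these accessors return plain pandas DataFrames, standard pandas operations apply to them. For example, here is a sketch of picking the most probable emotion label per face, using a toy DataFrame with the same columns and values as the emotions() output above:

```python
import pandas as pd

# Toy frame mirroring the emotions() output above.
emotions = pd.DataFrame(
    {"anger": [0.000369], "disgust": [0.000026], "fear": [0.000485],
     "happiness": [0.986996], "sadness": [0.000046],
     "surprise": [0.01201], "neutral": [0.000068]}
)
# idxmax over the columns gives the most probable emotion label per row (face).
print(emotions.idxmax(axis=1).tolist())  # ['happiness']
```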

Detecting facial expressions and saving to a file.

You can also save the results to a file by specifying outputFname. The detector will return True when it’s finished.

detector.detect_image(test_image, outputFname = "output.csv")
True

Loading detection results from saved file.

The outputs can be loaded using our read_feat() function or a simple Pandas read_csv(). We recommend using read_feat() because that will allow you to use the full suite of Feat functionalities more easily.

from feat.utils import read_feat
image_prediction = read_feat("output.csv")
# Show results
image_prediction
frame FaceRectX FaceRectY FaceRectWidth FaceRectHeight FaceScore x_0 x_1 x_2 x_3 ... AU28 AU43 anger disgust fear happiness sadness surprise neutral input
0 0.0 196.976852 140.997742 173.810471 257.639343 0.999681 192.864591 191.586714 192.874615 197.39479 ... 0.117955 0.143632 0.000369 0.000026 0.000485 0.986996 0.000046 0.01201 0.000068 /home/jcheong/packages/feat/feat/tests/data/in...

1 rows × 170 columns

import pandas as pd
image_prediction = pd.read_csv("output.csv")
# Show results
image_prediction
frame FaceRectX FaceRectY FaceRectWidth FaceRectHeight FaceScore x_0 x_1 x_2 x_3 ... AU28 AU43 anger disgust fear happiness sadness surprise neutral input
0 0.0 196.976852 140.997742 173.810471 257.639343 0.999681 192.864591 191.586714 192.874615 197.39479 ... 0.117955 0.143632 0.000369 0.000026 0.000485 0.986996 0.000046 0.01201 0.000068 /home/jcheong/packages/feat/feat/tests/data/in...

1 rows × 170 columns
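Note that pd.read_csv() returns a plain DataFrame rather than a Fex instance, so methods such as plot_detections() are only available when you load with read_feat(). A minimal round-trip sketch with toy data:

```python
import io
import pandas as pd

# A plain pandas read keeps the data but returns an ordinary DataFrame,
# not a Fex instance (toy CSV standing in for output.csv).
csv_text = "frame,happiness,neutral\n0,0.986996,0.000068\n"
df = pd.read_csv(io.StringIO(csv_text))
print(type(df).__name__)  # DataFrame
print(df.shape)           # (1, 3)
```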

Detecting facial expressions from images with many faces.

Feat’s Detector can find multiple faces in a single image. Let’s also practice switching the face, AU, and emotion models.

face_model = "mtcnn"
landmark_model = "mobilenet"
au_model = "logistic"
emotion_model = "fer"
detector = Detector(face_model=face_model, landmark_model=landmark_model, au_model=au_model, emotion_model=emotion_model)

test_image = os.path.join(test_data_dir, "tim-mossholder-hOF1bWoet_Q-unsplash.jpg")
image_prediction = detector.detect_image(test_image)
# Show results
image_prediction
Loading Face Detection model:  mtcnn
Using downloaded and verified file: /home/jcheong/packages/feat/feat/resources/onet.npy
Using downloaded and verified file: /home/jcheong/packages/feat/feat/resources/pnet.npy
Using downloaded and verified file: /home/jcheong/packages/feat/feat/resources/rnet.npy
Using downloaded and verified file: /home/jcheong/packages/feat/feat/resources/mobilenet_224_model_best_gdconv_external.pth.tar
Using downloaded and verified file: /home/jcheong/packages/feat/feat/resources/hog_pca_all_emotio.joblib
Using downloaded and verified file: /home/jcheong/packages/feat/feat/resources/Logistic_520.joblib
Using downloaded and verified file: /home/jcheong/packages/feat/feat/resources/best_ferModel.pth
Loading Face Landmark model:  mobilenet
Loading au model:  logistic
Loading emotion model:  fer
/home/jcheong/anaconda3/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator PCA from version 0.24.1 when using version 0.23.2. This might lead to breaking code or invalid results. Use at your own risk.
  warnings.warn(
/home/jcheong/anaconda3/lib/python3.8/site-packages/sklearn/base.py:329: UserWarning: Trying to unpickle estimator LogisticRegression from version 0.24.1 when using version 0.23.2. This might lead to breaking code or invalid results. Use at your own risk.
  warnings.warn(
frame FaceRectX FaceRectY FaceRectWidth FaceRectHeight FaceScore x_0 x_1 x_2 x_3 ... AU28 AU43 anger disgust fear happiness sadness surprise neutral input
0 0 686.666818 293.999382 101.031716 120.632281 1.000000 686.673264 684.464720 683.203394 683.556753 ... 0.041033 0.000191 0.000488 0.004986 0.000750 0.950166 0.001690 0.004727 0.037192 /home/jcheong/packages/feat/feat/tests/data/ti...
1 0 317.520424 235.865105 91.047654 119.294925 0.999948 314.853761 315.877231 317.895325 321.337098 ... 0.015860 0.000025 0.000108 0.002324 0.000214 0.917784 0.001381 0.001332 0.076859 /home/jcheong/packages/feat/feat/tests/data/ti...
2 0 435.751950 212.001750 76.493827 95.736649 0.999937 439.072266 438.872138 439.476009 440.813668 ... 0.300772 0.000001 0.001694 0.022541 0.000762 0.649048 0.043860 0.000816 0.281279 /home/jcheong/packages/feat/feat/tests/data/ti...
3 0 533.900150 308.100535 91.076060 119.357605 0.999775 535.784887 533.329005 531.925614 532.258772 ... 0.035089 0.011815 0.000310 0.005809 0.000977 0.972369 0.000534 0.000787 0.019214 /home/jcheong/packages/feat/feat/tests/data/ti...
4 0 214.450517 64.650659 91.337077 111.502833 0.995610 220.870587 218.820371 217.513163 217.441024 ... 0.035445 0.000021 0.000504 0.004014 0.000283 0.926783 0.000241 0.001621 0.066554 /home/jcheong/packages/feat/feat/tests/data/ti...

5 rows × 170 columns
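Each detected face comes with a FaceScore confidence. If you want to drop low-confidence detections before further analysis, a standard pandas filter works; here is a sketch on a toy frame using the FaceScore values from the output above:

```python
import pandas as pd

# Toy detections using the FaceScore values from the output above.
faces = pd.DataFrame({"FaceScore": [1.000000, 0.999948, 0.999937, 0.999775, 0.995610]})

# Keep only high-confidence detections.
confident = faces[faces["FaceScore"] > 0.999]
print(len(confident))  # 4
```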

Visualize multiple faces in a single image.

image_prediction.plot_detections();
../_images/detector_24_0.png

Detecting facial expressions from multiple images

You can also detect facial expressions from a list of images. Just place the paths to the images in a list and pass it to detect_image().

# Find the file you want to process.
from feat.tests.utils import get_test_data_path
import os, glob
test_data_dir = get_test_data_path()
test_images = glob.glob(os.path.join(test_data_dir, "*.jpg"))
print(test_images)

image_prediction = detector.detect_image(test_images)
image_prediction
['/home/jcheong/packages/feat/feat/tests/data/input.jpg', '/home/jcheong/packages/feat/feat/tests/data/tim-mossholder-hOF1bWoet_Q-unsplash.jpg']
frame FaceRectX FaceRectY FaceRectWidth FaceRectHeight FaceScore x_0 x_1 x_2 x_3 ... AU28 AU43 anger disgust fear happiness sadness surprise neutral input
0 0.0 185.127976 151.353452 180.762838 234.933019 0.998658 193.944269 192.360442 193.323525 197.581484 ... 0.080031 0.000019 0.001407 0.017716 0.001101 0.944522 0.000425 0.000589 0.034241 /home/jcheong/packages/feat/feat/tests/data/in...
0 0.0 686.666818 293.999382 101.031716 120.632281 1.000000 686.673264 684.464720 683.203394 683.556753 ... 0.041033 0.000191 0.000488 0.004986 0.000750 0.950166 0.001690 0.004727 0.037192 /home/jcheong/packages/feat/feat/tests/data/ti...
1 0.0 317.520424 235.865105 91.047654 119.294925 0.999948 314.853761 315.877231 317.895325 321.337098 ... 0.015860 0.000025 0.000108 0.002324 0.000214 0.917784 0.001381 0.001332 0.076859 /home/jcheong/packages/feat/feat/tests/data/ti...
2 0.0 435.751950 212.001750 76.493827 95.736649 0.999937 439.072266 438.872138 439.476009 440.813668 ... 0.300772 0.000001 0.001694 0.022541 0.000762 0.649048 0.043860 0.000816 0.281279 /home/jcheong/packages/feat/feat/tests/data/ti...
3 0.0 533.900150 308.100535 91.076060 119.357605 0.999775 535.784887 533.329005 531.925614 532.258772 ... 0.035089 0.011815 0.000310 0.005809 0.000977 0.972369 0.000534 0.000787 0.019214 /home/jcheong/packages/feat/feat/tests/data/ti...
4 0.0 214.450517 64.650659 91.337077 111.502833 0.995610 220.870587 218.820371 217.513163 217.441024 ... 0.035445 0.000021 0.000504 0.004014 0.000283 0.926783 0.000241 0.001621 0.066554 /home/jcheong/packages/feat/feat/tests/data/ti...

6 rows × 170 columns

When you have multiple images, you can still call plot_detections(), which will plot the results for all input images. If you have many images, we recommend checking them one by one using slicing.

image_prediction.plot_detections();
../_images/detector_28_0.png ../_images/detector_28_1.png

You can use slicing to plot specific rows of the detection results, or query for a particular input file.

image_prediction.iloc[[1]].plot_detections();
../_images/detector_30_0.png
image_to_plot = image_prediction.input().unique()[1]
image_prediction.query("input == @image_to_plot").plot_detections();
../_images/detector_31_0.png
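Since each row is one detected face and the input column holds the source path, a groupby gives a quick per-image face count. Sketched with toy paths standing in for the real ones:

```python
import pandas as pd

# Toy frame: one row per detected face; "input" is the source image path.
df = pd.DataFrame({"input": ["input.jpg"] + ["unsplash.jpg"] * 5})

# Count detected faces per input image.
print(df.groupby("input").size().to_dict())  # {'input.jpg': 1, 'unsplash.jpg': 5}
```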

Detecting FEX from videos

Detecting facial expressions in videos is just as easy using the detect_video() method. This sample video is by Wolfgang Langer from Pexels.

# Find the file you want to process.
from feat.tests.utils import get_test_data_path
import os, glob
test_data_dir = get_test_data_path()
test_video = os.path.join(test_data_dir, "WolfgangLanger_Pexels.mp4")

# Show video
from IPython.core.display import Video
Video(test_video, embed=True)