2. Detecting facial expressions from images#
Written by Jin Hyun Cheong and Eshin Jolly
In this tutorial we’ll explore the Detector class in more depth, demonstrating how to detect faces, facial landmarks, action units, and emotions from images. You can try it out interactively in Google Colab:
# Uncomment the line below and run this only if you're using Google Colab
# !pip install -q py-feat
2.1 Detecting a single face from a single image#
Setting up the Detector#
When using the Detector you can either specify particular models to use or just load the default models, which are defined explicitly below:
from feat import Detector
detector = Detector(
face_model="retinaface",
landmark_model="mobilefacenet",
au_model='svm',
emotion_model="resmasknet",
facepose_model="img2pose",
)
detector
feat.detector.Detector(face_model=retinaface, landmark_model=mobilefacenet, au_model=svm, emotion_model=resmasknet, facepose_model=img2pose)
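If you'd rather use different models, just pass their names when constructing the Detector. As a minimal sketch (the alternative model names below are illustrative; check the py-feat documentation for the models available in your version):
# Hypothetical alternative configuration -- verify these model names
# against the py-feat documentation for your installed version
alt_detector = Detector(
    face_model="img2pose",
    landmark_model="mobilenet",
    au_model="xgb",
    emotion_model="resmasknet",
    facepose_model="img2pose",
)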
Let’s process a single image with a single face. Py-feat includes a demo image for this purpose called single_face.jpg, so let’s use that. You can also use the convenient imshow function, which (unlike matplotlib) will automatically load an image into a numpy array when given a file path:
from feat.utils import get_test_data_path
from feat.plotting import imshow
import os
# Helper to point to the test data folder
test_data_dir = get_test_data_path()
# Get the full path
single_face_img_path = os.path.join(test_data_dir, "single_face.jpg")
# Plot it
imshow(single_face_img_path)

Now we use our initialized detector instance to make predictions with the detect_image() method. This is the main workhorse method that performs face, landmark, AU, and emotion detection using the loaded models. It always returns a Fex data instance:
single_face_prediction = detector.detect_image(single_face_img_path)
# Show results
single_face_prediction
|   | frame | FaceRectX | FaceRectY | FaceRectWidth | FaceRectHeight | FaceScore | x_0 | x_1 | x_2 | x_3 | ... | Roll | Yaw | anger | disgust | fear | happiness | sadness | surprise | neutral | input |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 196.976837 | 140.997742 | 173.810486 | 257.639343 | 0.999681 | 191.624944 | 191.644596 | 193.813321 | 199.024401 | ... | -3.809027 | 6.605721 | 0.000369 | 0.000026 | 0.000485 | 0.986996 | 0.000046 | 0.01201 | 0.000068 | /Users/Esh/Documents/pypackages/py-feat/feat/t... |

1 rows × 173 columns
Working with Fex data class results#
Because the output is a Fex data class instance, we can utilize its various helper methods and attributes to inspect our predictions.
Easily accessing Fex columns of interest#
Fex data classes make it simple to access various columns of interest (AUs, emotions, faceboxes, etc.):
single_face_prediction.facebox
|   | FaceRectX | FaceRectY | FaceRectWidth | FaceRectHeight | FaceScore |
|---|---|---|---|---|---|
| 0 | 196.976837 | 140.997742 | 173.810486 | 257.639343 | 0.999681 |
single_face_prediction.aus
|   | AU01 | AU02 | AU04 | AU05 | AU06 | AU07 | AU09 | AU10 | AU11 | AU12 | AU14 | AU15 | AU17 | AU20 | AU23 | AU24 | AU25 | AU26 | AU28 | AU43 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
single_face_prediction.emotions
|   | anger | disgust | fear | happiness | sadness | surprise | neutral |
|---|---|---|---|---|---|---|---|
| 0 | 0.000369 | 0.000026 | 0.000485 | 0.986996 | 0.000046 | 0.01201 | 0.000068 |
single_face_prediction.facepose # (in degrees)
|   | Pitch | Roll | Yaw |
|---|---|---|---|
| 0 | 0.832747 | -3.809027 | 6.605721 |
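Because a Fex data class is built on top of a pandas DataFrame, all the usual DataFrame methods and attributes work as well. For example (plain pandas, nothing py-feat specific):
# Standard pandas operations work directly on Fex instances
print(single_face_prediction.shape)  # (1, 173)
# Strongest predicted emotion for each detected face
print(single_face_prediction.emotions.idxmax(axis=1))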
Saving detections to a file#
We can also save our detections directly to a file by specifying an outputFname when using .detect_image. The results are written to a CSV file, and the detection output is also returned:
detector.detect_image(single_face_img_path, outputFname = "output.csv")
|   | frame | FaceRectX | FaceRectY | FaceRectWidth | FaceRectHeight | FaceScore | x_0 | x_1 | x_2 | x_3 | ... | Roll | Yaw | anger | disgust | fear | happiness | sadness | surprise | neutral | input |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 196.976837 | 140.997742 | 173.810486 | 257.639343 | 0.999681 | 191.624944 | 191.644596 | 193.813321 | 199.024401 | ... | -3.809027 | 6.605721 | 0.000369 | 0.000026 | 0.000485 | 0.986996 | 0.000046 | 0.01201 | 0.000068 | /Users/Esh/Documents/pypackages/py-feat/feat/t... |

1 rows × 173 columns
Loading detection results from a saved file#
We can load this output using the read_feat() function, which behaves just like Pandas’ pd.read_csv but returns a Fex data class instead of a DataFrame. This gives you the full suite of Fex functionality right away.
# Prefer this to pandas' read_csv
from feat.utils import read_feat
input_prediction = read_feat("output.csv")
# Show results
input_prediction
|   | frame | FaceRectX | FaceRectY | FaceRectWidth | FaceRectHeight | FaceScore | x_0 | x_1 | x_2 | x_3 | ... | Roll | Yaw | anger | disgust | fear | happiness | sadness | surprise | neutral | input |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 196.97684 | 140.99774 | 173.81049 | 257.63934 | 0.999681 | 191.624944 | 191.644596 | 193.813321 | 199.024401 | ... | -3.809027 | 6.605721 | 0.000369 | 0.000026 | 0.000485 | 0.986996 | 0.000046 | 0.01201 | 0.000068 | output.csv |

1 rows × 173 columns
Visualizing detection results#
We can use the .plot_detections() method to generate a summary figure of detected faces, action units, and emotions. It always returns a list of matplotlib figures:
figs = single_face_prediction.plot_detections(poses=True)

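Because .plot_detections() returns ordinary matplotlib figures, you can save or further customize them with standard matplotlib calls, for example:
# Save the summary figure with standard matplotlib (filename is arbitrary)
figs[0].savefig("single_face_detections.png", dpi=150, bbox_inches="tight")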
By default .plot_detections() will overlay facial lines on top of the input image. However, it’s also possible to visualize a face using Py-Feat’s standardized AU landmark model, which takes the detected AUs and projects them onto a template face. You can control this by setting faces='aus' instead of the default faces='landmarks'. For more details about this kind of visualization see the visualizing facial expressions and creating an AU visualization model tutorials:
figs = single_face_prediction.plot_detections(faces='aus', muscles=True)

2.2 Detecting multiple faces from a single image#
A Detector can automatically find multiple faces in a single image. In the next example we’ll see that the Fex data class returned from .detect_image() has one row for each detected face. We’ll also try a different model this time, img2pose, which acts as both a face detector and a face pose estimator.
Notice how multi_face_prediction is now a Fex instance with 5 rows, one for each detected face. We can confirm this by plotting our detection results and poses like before:
multi_face_image_path = os.path.join(test_data_dir, "multi_face.jpg")
multi_face_prediction = detector.detect_image(multi_face_image_path)
# Show results
multi_face_prediction
|   | frame | FaceRectX | FaceRectY | FaceRectWidth | FaceRectHeight | FaceScore | x_0 | x_1 | x_2 | x_3 | ... | Roll | Yaw | anger | disgust | fear | happiness | sadness | surprise | neutral | input |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 683.844116 | 288.876495 | 103.788025 | 134.104675 | 0.999768 | 685.243732 | 683.924138 | 683.493398 | 684.032188 | ... | -2.973639 | -4.679388 | 0.050378 | 0.002782 | 0.160675 | 0.011653 | 0.587488 | 0.075299 | 0.111725 | /Users/Esh/Documents/pypackages/py-feat/feat/t... |
| 1 | 0 | 533.678894 | 309.400024 | 96.237732 | 124.128448 | 0.999421 | 535.572383 | 533.746383 | 533.008999 | 533.146794 | ... | 4.255018 | 8.171312 | 0.03112 | 0.001458 | 0.186241 | 0.104354 | 0.269578 | 0.014566 | 0.392683 | /Users/Esh/Documents/pypackages/py-feat/feat/t... |
| 2 | 0 | 316.984406 | 233.779205 | 92.016876 | 126.462952 | 0.999196 | 313.083172 | 314.54322 | 317.242715 | 321.314606 | ... | 9.734623 | 6.029476 | 0.033716 | 0.114599 | 0.055362 | 0.181432 | 0.060542 | 0.110806 | 0.443542 | /Users/Esh/Documents/pypackages/py-feat/feat/t... |
| 3 | 0 | 221.29747 | 64.152306 | 85.109207 | 109.057442 | 0.996842 | 219.466511 | 217.443319 | 216.14972 | 216.210632 | ... | 14.068047 | -4.917797 | 0.000943 | 0.001107 | 0.036498 | 0.08564 | 0.020177 | 0.772348 | 0.083288 | /Users/Esh/Documents/pypackages/py-feat/feat/t... |
| 4 | 0 | 437.129089 | 213.861359 | 79.529785 | 97.050537 | 0.996773 | 437.59345 | 438.101893 | 439.270088 | 441.157225 | ... | 1.971143 | -4.030666 | 0.20795 | 0.002251 | 0.002176 | 0.416435 | 0.002668 | 0.219107 | 0.149411 | /Users/Esh/Documents/pypackages/py-feat/feat/t... |

5 rows × 173 columns
figs = multi_face_prediction.plot_detections(add_titles=False)

2.3 Detecting faces from multiple images#
Detector is also flexible enough to process multiple image files at once if .detect_image() is passed a list of images. You can process multiple images in a batch to speed up processing, but all images in a batch must have the same dimensions.
In the example below we process both our single- and multi-face example images from above, but force Py-Feat not to batch process them by setting batch_size=1.
Notice how the returned Fex data class instance has 6 rows: 1 for the face in the first image, and 5 for the faces in the second image:
img_list = [single_face_img_path, multi_face_image_path]
mixed_prediction = detector.detect_image(img_list, batch_size=1)
mixed_prediction
|   | frame | FaceRectX | FaceRectY | FaceRectWidth | FaceRectHeight | FaceScore | x_0 | x_1 | x_2 | x_3 | ... | Roll | Yaw | anger | disgust | fear | happiness | sadness | surprise | neutral | input |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 196.976837 | 140.997742 | 173.810486 | 257.639343 | 0.999681 | 191.624944 | 191.644596 | 193.813321 | 199.024401 | ... | -3.809027 | 6.605721 | 0.000369 | 0.000026 | 0.000485 | 0.986996 | 0.000046 | 0.01201 | 0.000068 | /Users/Esh/Documents/pypackages/py-feat/feat/t... |
| 1 | 1 | 683.844116 | 288.876495 | 103.788025 | 134.104675 | 0.999768 | 685.243732 | 683.924138 | 683.493398 | 684.032188 | ... | -2.973639 | -4.679388 | 0.050378 | 0.002782 | 0.160675 | 0.011653 | 0.587488 | 0.075299 | 0.111725 | /Users/Esh/Documents/pypackages/py-feat/feat/t... |
| 2 | 1 | 533.678894 | 309.400024 | 96.237732 | 124.128448 | 0.999421 | 535.572383 | 533.746383 | 533.008999 | 533.146794 | ... | 4.255018 | 8.171312 | 0.03112 | 0.001458 | 0.186241 | 0.104354 | 0.269578 | 0.014566 | 0.392683 | /Users/Esh/Documents/pypackages/py-feat/feat/t... |
| 3 | 1 | 316.984406 | 233.779205 | 92.016876 | 126.462952 | 0.999196 | 313.083172 | 314.54322 | 317.242715 | 321.314606 | ... | 9.734623 | 6.029476 | 0.033716 | 0.114599 | 0.055362 | 0.181432 | 0.060542 | 0.110806 | 0.443542 | /Users/Esh/Documents/pypackages/py-feat/feat/t... |
| 4 | 1 | 221.29747 | 64.152306 | 85.109207 | 109.057442 | 0.996842 | 219.466511 | 217.443319 | 216.14972 | 216.210632 | ... | 14.068047 | -4.917797 | 0.000943 | 0.001107 | 0.036498 | 0.08564 | 0.020177 | 0.772348 | 0.083288 | /Users/Esh/Documents/pypackages/py-feat/feat/t... |
| 5 | 1 | 437.129089 | 213.861359 | 79.529785 | 97.050537 | 0.996773 | 437.59345 | 438.101893 | 439.270088 | 441.157225 | ... | 1.971143 | -4.030666 | 0.20795 | 0.002251 | 0.002176 | 0.416435 | 0.002668 | 0.219107 | 0.149411 | /Users/Esh/Documents/pypackages/py-feat/feat/t... |

6 rows × 173 columns
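If your images do share dimensions (or you are willing to have them rescaled to a common size), you can raise batch_size to speed things up. A minimal sketch, assuming your py-feat version supports the output_size argument for rescaling images before batching:
# Hypothetical batched call: rescale both images to 512px so they can
# share a batch, then process them 2 at a time
batched_prediction = detector.detect_image(img_list, batch_size=2, output_size=512)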
Calling .plot_detections() will now plot detections for all images the detector was passed:
figs = mixed_prediction.plot_detections(add_titles=False)


However, it’s easy to use pandas slicing syntax to grab predictions for just the image you want. For example, you can use .loc and chain it to .plot_detections():
# Just plot the detection corresponding to the first row in the Fex data
figs = mixed_prediction.loc[0].plot_detections(add_titles=False)

Likewise you can use .query() and chain it to .plot_detections(). Fex data classes store each file path in the 'input' column, so we can use regular pandas methods like .unique() to get all the unique images (2 in our case) and pick the second one.
# Choose plot based on image file name
img_name = mixed_prediction['input'].unique()[1]
axes = mixed_prediction.query("input == @img_name").plot_detections(add_titles=False)

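Finally, because regular pandas operations work throughout, you can also summarize detections per image, for example by averaging the emotion predictions within each input file (a sketch using plain pandas groupby):
# Mean emotion probabilities per input image
emotion_cols = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]
mixed_prediction.groupby("input")[emotion_cols].mean()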