Py-Feat: Python Facial Expression Analysis Toolbox

Py-Feat provides a comprehensive set of tools and models to easily detect facial expressions (Action Units, emotions, facial landmarks) from images and videos, preprocess & analyze facial expression data, and visualize facial expression data.

Why you should use Py-Feat

Facial expressions convey rich information about what a person is thinking and feeling and what they are planning to do. Recent innovations in computer vision and deep learning have produced a flurry of models that can extract facial landmarks, Action Units, and emotional facial expressions with impressive speed and accuracy. However, researchers who want to use these algorithms, or tools such as OpenFace, iMotions-Affectiva, or Noldus FaceReader, may find them difficult to install or use, or too expensive to purchase. With proprietary tools it is also hard to adopt the latest models or to know exactly how accurate the existing ones are. We developed Py-Feat to provide a free, open-source, and easy-to-use tool for working with facial expression data.

Who is it for?

Py-Feat was created with two primary audiences in mind:

  • Human behavior researchers: Extract facial expressions from face images or videos with a simple line of code and analyze your data with Feat.

  • Computer vision researchers: Develop and share your latest models with a wide audience of users.

and anyone else interested in analyzing facial expressions!

Installation

Install from pip

pip install py-feat

Install from source

git clone https://github.com/cosanlab/feat.git
cd feat && python setup.py install

You can run the commands above in Google Colab or Kaggle notebooks. You can also install Py-Feat in development mode:

!git clone https://github.com/cosanlab/feat.git  
!cd feat && pip install -q -r requirements.txt
!cd feat && pip install -q -e . 
!cd feat && python bin/download_models.py
# Click Runtime from top menu and Restart Runtime! 

Installing in development mode with pip install -e . is also useful when contributing to Py-Feat, since changes to the local source code take effect without reinstalling.

Check installation

Import the Fex class

from feat import Fex
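
Fex is a data class, built on a pandas DataFrame, for preprocessing and analyzing facial expression data. As a minimal sketch of loading previously saved detector output into a Fex object ("my_results.csv" is a hypothetical file name):

import pandas as pd
from feat import Fex

# Wrap saved detector output in a Fex data object ("my_results.csv" is a
# hypothetical file). Because Fex builds on pandas, standard DataFrame
# methods work on it.
fex = Fex(pd.read_csv("my_results.csv"))
print(fex.head())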

Import the Detector class

from feat import Detector
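
To confirm that everything works end to end, here is a minimal sketch of detecting expressions from a single image ("my_photo.jpg" is a hypothetical file; the model names are examples from the list below, and omitted arguments fall back to Py-Feat's defaults):

from feat import Detector

# Initialize a detector with example models from the "Available models"
# section below.
detector = Detector(au_model="svm", emotion_model="resmasknet")

# Run detection on a single image; the result is a Fex data object with
# one row per detected face.
result = detector.detect_image("my_photo.jpg")
print(result.emotions)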

Available models

Below is a list of the models implemented in Py-Feat and ready to use. Each entry gives the model name followed by its reference publication; a sketch of how to select models by name follows the facial landmark detection list.

Action Unit detection

  • rf: Random Forest model trained on Histogram of Oriented Gradients extracted from BP4D, DISFA, CK+, UNBC-McMaster shoulder pain, and AFF-Wild2 datasets

  • svm: SVM model trained on Histogram of Oriented Gradients extracted from BP4D, DISFA, CK+, UNBC-McMaster shoulder pain, and AFF-Wild2 datasets

  • logistic: Logistic Classifier model trained on Histogram of Oriented Gradients extracted from BP4D, DISFA, CK+, UNBC-McMaster shoulder pain, and AFF-Wild2 datasets

  • JAANET: Joint facial action unit detection and face alignment via adaptive attention, trained on BP4D and BP4D+ (Shao et al., 2020)

  • DRML: Deep region and multi-label learning for facial action unit detection (Zhao et al., 2016)

Emotion detection

  • rf: Random Forest model trained on Histogram of Oriented Gradients extracted from ExpW, CK+, and JAFFE datasets

  • svm: SVM model trained on Histogram of Oriented Gradients extracted from ExpW, CK+, and JAFFE datasets

  • fernet: Deep convolutional network

  • ResMaskNet: Facial expression recognition using residual masking network (Pham et al., 2020)

Face detection

  • FaceBoxes: A CPU real-time face detector with high accuracy (Zhang et al., 2017)

  • MTCNN: Joint face detection and alignment using multi-task cascaded convolutional networks (Zhang et al., 2016)

  • RetinaFace: Single-stage dense face localisation in the wild (Deng et al., 2019)

  • img2pose: Face alignment and detection via 6DoF face pose estimation (Albiero et al., 2021)

Facial landmark detection

  • PFLD: Practical Facial Landmark Detector (Guo et al., 2019)

  • MobileFaceNet: Efficient CNNs for accurate real-time face verification on mobile devices (Chen et al., 2018)

  • MobileNet: Efficient convolutional neural networks for mobile vision applications (Howard et al., 2017)
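
Each model name can be passed to the Detector when it is initialized. A minimal sketch, assuming the names are passed as lowercase strings (swap in any model from the lists above; omitted arguments use Py-Feat's defaults):

from feat import Detector

# Pick one model per task from the lists above.
detector = Detector(
    face_model="retinaface",         # face detection
    landmark_model="mobilefacenet",  # facial landmark detection
    au_model="jaanet",               # Action Unit detection
    emotion_model="resmasknet",      # emotion detection
)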

Contributions

We are excited for people to add new models and features to Py-Feat. Please see the contribution guides.

License

Py-Feat is provided under the MIT license. You must also respect and cite the licenses of each model you use; please see the LICENSE file for links to each model's license information.