API Reference#

This reference provides detailed documentation for all of the features in Py-Feat.

feat.detector module#

Main Detector class. The Detector class wraps several pre-trained models (e.g. face detector, AU detector) and provides a high-level API to make it easier to perform detection.

class feat.detector.Detector(face_model='retinaface', landmark_model='mobilefacenet', au_model='xgb', emotion_model='resmasknet', facepose_model='img2pose', identity_model='facenet', device='cpu', n_jobs=1, verbose=False, **kwargs)#

Bases: object

change_model(**kwargs)#

Swap one or more pre-trained detector models for others. Just pass in the new models to use as kwargs, e.g. emotion_model='svm'.

detect_aus(frame, landmarks, **au_model_kwargs)#

Detect Action Units from image or video frame

Parameters:
  • frame (np.ndarray) – image loaded in array format (n, m, 3)

  • landmarks (array) – 68 landmarks used to localize face.

Returns:

Action Unit predictions

Return type:

array

Examples

>>> from feat import Detector
>>> from feat.utils import read_pictures
>>> frame = read_pictures(['my_image.jpg'])
>>> detector = Detector()
>>> detected_faces = detector.detect_faces(frame)
>>> detected_landmarks = detector.detect_landmarks(frame, detected_faces)
>>> detector.detect_aus(frame, detected_landmarks)

detect_emotions(frame, facebox, landmarks, **emotion_model_kwargs)#

Detect emotions from image or video frame

Parameters:
  • frame (np.ndarray) – image loaded in array format (n, m, 3)

  • facebox (list) – face detection results, as returned by detect_faces

  • landmarks (np.ndarray) – landmarks for the detected faces, as returned by detect_landmarks

Returns:

Emotion predictions

Return type:

array

Examples

>>> from feat import Detector
>>> from feat.utils import read_pictures
>>> frame = read_pictures(['my_image.jpg'])
>>> detector = Detector()
>>> detected_faces = detector.detect_faces(frame)
>>> detected_landmarks = detector.detect_landmarks(frame, detected_faces)
>>> detector.detect_emotions(frame, detected_faces, detected_landmarks)

detect_facepose(frame, landmarks=None, **facepose_model_kwargs)#

Detect facepose from image or video frame.

When used with img2pose, returns all detected poses, and facebox and landmarks are ignored. Use the detect_faces method to obtain bounding boxes corresponding to the detected poses returned by this method.

Parameters:
  • frame (np.ndarray) – list of images

  • landmarks (np.ndarray | None, optional) – (num_images, num_faces, 68, 2) landmarks for the faces contained in the list of images; ignored for img2pose and img2pose-c detectors. Default None.

Returns:

poses (num_images, num_faces, [pitch, roll, yaw]) - Euler angles (in degrees) for each face within each image

Return type:

list
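
Examples

A minimal usage sketch following the pattern of the other examples in this reference (my_image.jpg is a placeholder path):

>>> from feat import Detector
>>> from feat.utils import read_pictures
>>> frame = read_pictures(['my_image.jpg'])
>>> detector = Detector()
>>> poses = detector.detect_facepose(frame)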

detect_faces(frame, threshold=0.5, **face_model_kwargs)#

Detect faces from image or video frame

Parameters:
  • frame (np.ndarray) – 3d (single) or 4d (multiple) image array

  • threshold (float) – threshold for detecting faces (default=0.5)

Returns:

list of lists with the same length as the number of frames. Each list item is a list containing the (x1, y1, x2, y2) coordinates of each detected face in that frame.

Return type:

list
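
Examples

A minimal sketch mirroring the detect_aus example above (my_image.jpg is a placeholder path):

>>> from feat import Detector
>>> from feat.utils import read_pictures
>>> frame = read_pictures(['my_image.jpg'])
>>> detector = Detector()
>>> detected_faces = detector.detect_faces(frame)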

detect_identity(frame, facebox, **identity_model_kwargs)#

Detects identity of faces from image or video frame using face representation embeddings

Parameters:
  • frame (np.ndarray) – 3d (single) or 4d (multiple) image array

  • facebox (list) – face detection results, as returned by detect_faces

Returns:

list of lists with the same length as the number of frames. Each list item contains the face identity embedding of each detected face in that frame.

Return type:

list
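
Examples

A minimal sketch, assuming faces are detected first with detect_faces (my_image.jpg is a placeholder path):

>>> from feat import Detector
>>> from feat.utils import read_pictures
>>> frame = read_pictures(['my_image.jpg'])
>>> detector = Detector()
>>> detected_faces = detector.detect_faces(frame)
>>> detector.detect_identity(frame, detected_faces)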

detect_image(input_file_list, output_size=None, batch_size=1, num_workers=0, pin_memory=False, frame_counter=0, face_detection_threshold=0.5, face_identity_threshold=0.8, **kwargs)#

Detects FEX from one or more image files. If you want to speed up detection you can process multiple images in batches by setting batch_size > 1. However, all images must have the same dimensions to be processed in batches. Py-Feat can automatically rescale images by setting output_size to an integer. Common output sizes include 256 and 512.

NOTE: Currently, batch processing images gives slightly different AU detection results due to the way that py-feat integrates the underlying models. You can examine the degree of tolerance by checking the results of `test_detection_and_batching_with_diff_img_sizes` in our test suite.

Parameters:
  • input_file_list (list of str) – List of paths to image files.

  • output_size (int) – image size to rescale all images to, preserving aspect ratio. Will raise an error if not set when batch_size > 1 and images are not the same size.

  • batch_size (int) – how many images to process at once. Larger values give faster speed but consume more memory. Images must be the same size to be run in batches.

  • num_workers (int) – how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process.

  • pin_memory (bool) – If True, the data loader will copy Tensors into CUDA pinned memory before returning them.

  • frame_counter (int) – starting value to count frames

  • face_detection_threshold (float) – value between 0-1 to report a detection based on the confidence of the face detector; Default >= 0.5

  • face_identity_threshold (float) – value between 0-1 to determine similarity of person using face identity embeddings; Default >= 0.8

  • **kwargs – you can pass each detector specific kwargs using a dictionary like: face_model_kwargs = {…}, au_model_kwargs={…}, …

Returns:

Prediction results dataframe

Return type:

Fex
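
Examples

A minimal sketch (the image paths are placeholders); batching assumes the images share dimensions or that output_size is set:

>>> from feat import Detector
>>> detector = Detector()
>>> fex = detector.detect_image(['img1.jpg', 'img2.jpg'], output_size=512, batch_size=2)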

detect_landmarks(frame, detected_faces, **landmark_model_kwargs)#

Detect landmarks from image or video frame

Parameters:
  • frame (np.ndarray) – 3d (single) or 4d (multiple) image array

  • detected_faces (array) – face bounding boxes, as returned by detect_faces

Returns:

x and y landmark coordinates (1,68,2)

Return type:

list

Examples

>>> from feat import Detector
>>> from feat.utils import read_pictures
>>> frame = read_pictures(['my_image.jpg'])
>>> detector = Detector()
>>> detected_faces = detector.detect_faces(frame)
>>> detector.detect_landmarks(frame, detected_faces)

detect_video(video_path, skip_frames=None, output_size=700, batch_size=1, num_workers=0, pin_memory=False, face_detection_threshold=0.5, face_identity_threshold=0.8, **kwargs)#

Detects FEX from a video file.

Parameters:
  • video_path (str) – Path to a video file.

  • skip_frames (int or None) – number of frames to skip (speeds up inference but loses temporal information); Default None

  • output_size (int) – image size to rescale all frames to, preserving aspect ratio

  • batch_size (int) – how many batches of images you want to run at one shot. Larger gives faster speed but is more memory-consuming

  • num_workers (int) – how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process.

  • pin_memory (bool) – If True, the data loader will copy Tensors into CUDA pinned memory before returning them.

  • face_detection_threshold (float) – value between 0-1 to report a detection based on the confidence of the face detector; Default >= 0.5

  • face_identity_threshold (float) – value between 0-1 to determine similarity of person using face identity embeddings; Default >= 0.8

Returns:

Prediction results dataframe

Return type:

Fex
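
Examples

A minimal sketch (my_video.mp4 is a placeholder path); skip_frames trades temporal detail for speed:

>>> from feat import Detector
>>> detector = Detector()
>>> fex = detector.detect_video('my_video.mp4', skip_frames=24)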

feat.data module#

Py-FEAT Data classes.

class feat.data.Fex(*args, **kwargs)#

Bases: DataFrame

Fex is a class to represent facial expression (Fex) data

The Fex class is an enhanced pandas DataFrame, with extra attributes and methods to help with facial expression data analysis.

Parameters:
  • filename – (str, optional) path to file

  • detector – (str, optional) name of software used to extract Fex. Currently only 'Feat' is supported.

  • sampling_freq (float, optional) – sampling rate of each row in Hz; defaults to None

  • features (pd.Dataframe, optional) – features that correspond to each Fex row

  • sessions – Unique values indicating rows associated with a specific session (e.g., trial, subject, etc.). Must be a 1D array of n_samples elements; defaults to None
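
Examples

In practice a Fex instance is usually obtained from a Detector rather than constructed directly; a minimal sketch (my_image.jpg is a placeholder path):

>>> from feat import Detector
>>> detector = Detector()
>>> fex = detector.detect_image(['my_image.jpg'])
>>> fex.aus       # Action Unit columns
>>> fex.emotions  # emotion columns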

append(data, session_id=None, axis=0)#

Append a new Fex object to an existing object

Parameters:
  • data – (Fex) Fex instance to append

  • session_id – session label

  • axis – ([0,1]) Axis to append. Rows=0, Cols=1

Returns:

Fex instance

property aus#

Returns the Action Units data

Returns Action Unit data using the columns set in fex.au_columns.

Returns:

Action Units data

Return type:

DataFrame

baseline(baseline='median', normalize=None, ignore_sessions=False)#

Reference a Fex object to a baseline.

Parameters:
  • baseline – {‘median’, ‘mean’, ‘begin’, FexSeries instance}. Will subtract baseline from Fex object (e.g., mean, median). If passing a Fex object, it will treat that as the baseline.

  • normalize – (str). Can normalize results of baseline. Values can be [None, ‘db’,’pct’]; default None.

  • ignore_sessions – (bool) If True, will ignore Fex.sessions information. Otherwise, method will be applied separately to each unique session.

Returns:

Fex object
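
Examples

A minimal sketch, assuming fex is an existing Fex instance:

>>> baselined = fex.baseline(baseline='median')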

clean(detrend=True, standardize=True, confounds=None, low_pass=None, high_pass=None, ensure_finite=False, ignore_sessions=False, *args, **kwargs)#

Clean Time Series signal

This function wraps nilearn functionality and can filter, denoise, detrend, etc.

See http://nilearn.github.io/modules/generated/nilearn.signal.clean.html

This function can do several things on the input signals, in the following order: detrend, standardize, remove confounds, low and high-pass filter

If Fex.sessions is not None, sessions will be cleaned separately.

Parameters:
  • confounds – (numpy.ndarray, str or list of Confounds timeseries) Shape must be (instant number, confound number), or just (instant number,). The number of time instants in signals and confounds must be identical (i.e. signals.shape[0] == confounds.shape[0]). If a string is provided, it is assumed to be the name of a csv file containing signals as columns, with an optional one-line header. If a list is provided, all confounds are removed from the input signal, as if all were in the same array.

  • low_pass – (float) low pass cutoff frequencies in Hz.

  • high_pass – (float) high pass cutoff frequencies in Hz.

  • detrend – (bool) If detrending should be applied on timeseries (before confound removal)

  • standardize – (bool) If True, returned signals are set to unit variance.

  • ensure_finite – (bool) If True, the non-finite values (NANs and infs) found in the data will be replaced by zeros.

  • ignore_sessions – (bool) If True, will ignore Fex.sessions information. Otherwise, method will be applied separately to each unique session.

Returns:

cleaned Fex instance
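
Examples

A minimal sketch, assuming fex is an existing Fex instance:

>>> cleaned = fex.clean(detrend=True, standardize=True)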

compute_identities(threshold=0.8, inplace=False)#

Compute Identities using face embeddings from identity detector using threshold

decompose(algorithm='pca', axis=1, n_components=None, *args, **kwargs)#

Decompose Fex instance

Parameters:
  • algorithm – (str) Algorithm to perform decomposition types=[‘pca’,’ica’,’nnmf’,’fa’]

  • axis – dimension to decompose [0,1]

  • n_components – (int) number of components. If None then retain as many as possible.

Returns:

a dictionary of decomposition parameters

Return type:

output
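
Examples

A minimal sketch, assuming fex is an existing Fex instance:

>>> output = fex.decompose(algorithm='pca', n_components=3)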

property design#

Returns the design data

Returns the study design information using columns in fex.design_columns.

Returns:

time data

Return type:

DataFrame

distance(method='euclidean', **kwargs)#

Calculate distance between rows within a Fex() instance.

Parameters:

method – type of distance metric (can use any scikit-learn or scipy metric)

Returns:

Outputs a 2D distance matrix.

Return type:

dist

downsample(target, **kwargs)#

Downsample Fex columns. Relies on nltools.stats.downsample, but ensures that returned object is a Fex object.

Parameters:
  • target (float) – downsampling target, typically in samples not seconds

  • kwargs – additional inputs to nltools.stats.downsample

property emotions#

Returns the emotion data

Returns emotions data using the columns set in fex.emotion_columns.

Returns:

emotion data

Return type:

DataFrame

extract_boft(min_freq=0.06, max_freq=0.66, bank=8, *args, **kwargs)#

Extract Bag of Temporal features

Parameters:
  • min_freq – minimum frequency of temporal filters

  • max_freq – maximum frequency of temporal filters

  • bank – number of temporal filter banks, filters are on exponential scale

Returns:

list of Morlet wavelets with corresponding frequencies; hzs: list of frequencies (Hz) for each Morlet wavelet

Return type:

wavs

extract_max(ignore_sessions=False)#

Extract maximum of each feature

Parameters:

ignore_sessions – (bool) ignore sessions or extract separately by sessions if available.

Returns:

(Fex) maximum values for each feature

Return type:

fex

extract_mean(ignore_sessions=False)#

Extract mean of each feature

Parameters:

ignore_sessions – (bool) ignore sessions or extract separately by sessions if available.

Returns:

mean values for each feature

Return type:

Fex

extract_min(ignore_sessions=False)#

Extract minimum of each feature

Parameters:

ignore_sessions – (bool) ignore sessions or extract separately by sessions if available.

Returns:

(Fex) minimum values for each feature

Return type:

Fex

extract_multi_wavelet(min_freq=0.06, max_freq=0.66, bank=8, *args, **kwargs)#

Convolve with a bank of morlet wavelets.

Wavelets are equally spaced from min to max frequency. See extract_wavelet for more information and options.

Parameters:
  • min_freq – (float) minimum frequency to extract

  • max_freq – (float) maximum frequency to extract

  • bank – (int) size of wavelet bank

  • num_cyc – (float) number of cycles for wavelet

  • mode – (str) feature to extract, e.g., [‘complex’,’filtered’,’phase’,’magnitude’,’power’]

  • ignore_sessions – (bool) ignore sessions or extract separately by sessions if available.

Returns:

(Fex instance)

Return type:

convolved

extract_sem(ignore_sessions=False)#

Extract standard error of the mean (sem) of each feature

Parameters:

ignore_sessions – (bool) ignore sessions or extract separately by sessions if available.

Returns:

sem values for each feature

Return type:

Fex

extract_std(ignore_sessions=False)#

Extract std of each feature

Parameters:

ignore_sessions – (bool) ignore sessions or extract separately by sessions if available.

Returns:

std values for each feature

Return type:

Fex

extract_summary(mean=True, std=True, sem=True, max=True, min=True, ignore_sessions=False, *args, **kwargs)#

Extract summary of multiple features

Parameters:
  • mean – (bool) extract mean of features

  • std – (bool) extract std of features

  • sem – (bool) extract sem of features

  • max – (bool) extract max of features

  • min – (bool) extract min of features

  • ignore_sessions – (bool) ignore sessions or extract separately by sessions if available.

Returns:

(Fex)

Return type:

fex
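
Examples

A minimal sketch, assuming fex is an existing Fex instance:

>>> summary = fex.extract_summary(mean=True, std=True, sem=False, max=False, min=False)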

extract_wavelet(freq, num_cyc=3, mode='complex', ignore_sessions=False)#

Perform feature extraction by convolving with a complex morlet wavelet

Parameters:
  • freq – (float) frequency to extract

  • num_cyc – (float) number of cycles for wavelet

  • mode – (str) feature to extract, e.g., [‘complex’,’filtered’,’phase’,’magnitude’,’power’]

  • ignore_sessions – (bool) ignore sessions or extract separately by sessions if available.

Returns:

(Fex instance)

Return type:

convolved

property facebox#

Returns the facebox data

Returns:

facebox data

Return type:

DataFrame

property faceboxes#

Returns the facebox data

Returns:

facebox data

Return type:

DataFrame

property facepose#

Returns the facepose data using the columns set in fex.facepose_columns

Returns:

facepose data

Return type:

DataFrame

property identities#

Returns the identity labels

Returns:

identity data

Return type:

DataFrame

property identity_embeddings#

Returns the identity embeddings

Returns:

identity data

Return type:

DataFrame

property info#

Print all metadata of the Fex instance

Loops through metadata set in self._metadata and prints out the information.

property input#

Returns input column as string

Returns input data in the “input” column.

Returns:

path to input image

Return type:

string

property inputs#

Returns input column as string

Returns input data in the “input” column.

Returns:

path to input image

Return type:

string

isc(col, index='frame', columns='input', method='pearson')#

Compute the intersubject correlation (ISC) of a column, i.e. the correlation of its values across videos or subjects.

Parameters:
  • col (str) – Column name to compute the ISC for.

  • index (str, optional) – Column to be used in computing ISC. Usually this would be the column identifying the time such as the number of the frame. Defaults to “frame”.

  • columns (str, optional) – Column to be used for ISC. Usually this would be the column identifying the video or subject. Defaults to “input”.

  • method (str, optional) – Method to use for correlation pearson, kendall, or spearman. Defaults to “pearson”.

Returns:

Correlation matrix with index as columns

Return type:

DataFrame

itersessions()#

Iterate over Fex sessions as (session, series) pairs.

Returns:

a generator that iterates over the sessions of the fex instance

Return type:

it
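
Examples

A minimal sketch, assuming fex is an existing Fex instance with sessions set:

>>> for session, data in fex.itersessions():
...     print(session, data.shape)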

property landmark#

Returns the landmark data

Returns landmark data using the columns set in fex.landmark_columns.

Returns:

landmark data

Return type:

DataFrame

property landmark_x#

Returns the x landmarks.

Returns:

x landmarks.

Return type:

DataFrame

property landmark_y#

Returns the y landmarks.

Returns:

y landmarks.

Return type:

DataFrame

property landmarks#

Returns the landmark data

Returns landmark data using the columns set in fex.landmark_columns.

Returns:

landmark data

Return type:

DataFrame

property landmarks_x#

Returns the x landmarks.

Returns:

x landmarks.

Return type:

DataFrame

property landmarks_y#

Returns the y landmarks.

Returns:

y landmarks.

Return type:

DataFrame

plot_detections(faces='landmarks', faceboxes=True, muscles=False, poses=False, gazes=False, add_titles=True, au_barplot=True, emotion_barplot=True, plot_original_image=True)#

Plots detection results by Py-Feat. Can control plotting of the face, AU barplot, and emotion barplot. The faces kwarg controls whether facial landmarks are drawn on top of input images or whether faces are visualized using Py-Feat’s AU visualization model from detected AUs. If detection was performed on a video and faces=’landmarks’, only an outline of the face will be drawn, without loading the underlying video frame, to save memory.

Parameters:
  • faces (str, optional) – ‘landmarks’ to draw detected landmarks or ‘aus’ to generate a face from AU detections using Py-Feat’s AU landmark model. Defaults to ‘landmarks’.

  • faceboxes (bool, optional) – Whether to draw the bounding box around the detected face. Defaults to True.

  • muscles (bool, optional) – Whether to draw muscles from AU activity. Only applies if faces=’landmarks’. Defaults to False.

  • poses (bool, optional) – Whether to draw facial poses. Only applies if faces=’landmarks’. Defaults to False.

  • gazes (bool, optional) – Whether to draw gaze vectors. Only applies if faces=’aus’. Defaults to False.

  • add_titles (bool, optional) – Whether to add the file name as a title above the face. Defaults to True.

  • au_barplot (bool, optional) – Whether to include a subplot for au detections. Defaults to True.

  • emotion_barplot (bool, optional) – Whether to include a subplot for emotion detections. Defaults to True.

Returns:

list of matplotlib figures

Return type:

list

property poses#

Returns the facepose data using the columns set in fex.facepose_columns

Returns:

facepose data

Return type:

DataFrame

predict(X, y, model=<class 'sklearn.linear_model._base.LinearRegression'>, cv_kwargs={'cv': 5}, *args, **kwargs)#

Predicts y from X using a sklearn model.

Predict a variable of interest y using your model of choice from X, which can be a list of columns of the Fex instance or a dataframe.

Parameters:
  • X (list or DataFrame) – List of column names or dataframe to be used as features for prediction

  • y (string or array) – y values to be predicted

  • model (class, optional) – Any sklearn model. Defaults to LinearRegression.

  • args – Model arguments

  • kwargs – Model arguments

Returns:

Fit model instance.

Return type:

model

read_feat(filename=None, *args, **kwargs)#

Reads facial expression detection results from Feat Detector

Parameters:

filename (string, optional) – Path to file. Defaults to None.

Returns:

Fex

read_file()#

Loads file into Fex class

Returns:

Fex class

Return type:

DataFrame

read_openface(filename=None, *args, **kwargs)#

Reads facial expression detection results from OpenFace

Parameters:

filename (string, optional) – Path to file. Defaults to None.

Returns:

Fex

rectification(std=3)#

Removes time points when the face position moved more than N standard deviations from the mean.

Parameters:

std (default 3) – standard deviation from mean to remove outlier face locations

Returns:

cleaned FEX object

Return type:

data

regress(X, y, fit_intercept=True, *args, **kwargs)#

Regress using nltools.stats.regress.

fMRI-like regression to predict Fex activity (y) from set of regressors (X).

Parameters:
  • X (list or str) – Independent variable(s) used to predict y.

  • y (list or str) – Dependent variable to be predicted.

  • fit_intercept (bool) – Whether to add intercept before fitting. Defaults to True.

Returns:

Dataframe of betas, ses, t-stats, p-values, df, residuals

property time#

Returns the time data

Returns the time information using fex.time_columns.

Returns:

time data

Return type:

DataFrame

ttest_1samp(popmean=0)#

Conducts 1 sample ttest.

Uses scipy.stats.ttest_1samp to conduct 1 sample ttest

Parameters:
  • popmean (int, optional) – Population mean to test against. Defaults to 0.

  • threshold_dict (dict, optional) – Dictionary for thresholding. Defaults to None. [NOT IMPLEMENTED]

Returns:

t-statistics and p-values

Return type:

t, p

ttest_ind(col, sessions=None)#

Conducts 2 sample ttest.

Uses scipy.stats.ttest_ind to conduct 2 sample ttest on column col between sessions.

Parameters:
  • col (str) – Column name to compare in a t-test between sessions

  • sessions (array-like) – session names to query Fex.sessions; otherwise uses the unique values in Fex.sessions.

Returns:

t-statistics and p-values

Return type:

t, p

update_sessions(new_sessions)#

Returns a copy of the Fex dataframe with a new sessions attribute after validation. new_sessions should be a dictionary mapping old to new names or an iterable with the same number of rows as the Fex dataframe

Parameters:

new_sessions (dict, Iterable) – map or list of new session names

Returns:

self

Return type:

Fex
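
Examples

A minimal sketch, assuming fex is an existing Fex instance and 'trial1' is one of its current session labels (a placeholder):

>>> fex = fex.update_sessions({'trial1': 'baseline'})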

upsample(target, target_type='hz', **kwargs)#

Upsample Fex columns. Relies on nltools.stats.upsample, but ensures that returned object is a Fex object.

Parameters:
  • target (float) – upsampling target rate

  • target_type (str) – type of target; ‘hz’ (default), ‘samples’, or ‘seconds’

  • kwargs – additional inputs to nltools.stats.upsample

class feat.data.FexSeries(*args, **kwargs)#

Bases: Series

This is a sub-class of pandas Series. While it has no additional methods of its own, it is required to retain normal slicing functionality for the Fex class, i.e. how slicing is typically handled in pandas. All methods should be called on Fex below.

property aus#

Returns the Action Units data

Returns:

Action Units data

Return type:

DataFrame

property design#

Returns the design data

Returns:

time data

Return type:

DataFrame

property emotions#

Returns the emotion data

Returns:

emotion data

Return type:

DataFrame

property facebox#

Returns the facebox data

Returns:

facebox data

Return type:

DataFrame

property faceboxes#

Returns the facebox data

Returns:

facebox data

Return type:

DataFrame

property facepose#

Returns the facepose data

Returns:

facepose data

Return type:

DataFrame

property identity#

Returns the identity data

Returns:

identity data

Return type:

DataFrame

property info#

Print class meta data.

property input#

Returns input column as string

Returns:

path to input image

Return type:

string

property inputs#

Returns input column as string

Returns:

path to input image

Return type:

string

property landmark#

Returns the landmark data

Returns:

landmark data

Return type:

DataFrame

property landmark_x#

Returns the x landmarks.

Returns:

x landmarks.

Return type:

DataFrame

property landmark_y#

Returns the y landmarks.

Returns:

y landmarks.

Return type:

DataFrame

property landmarks#

Returns the landmark data

Returns:

landmark data

Return type:

DataFrame

property landmarks_x#

Returns the x landmarks.

Returns:

x landmarks.

Return type:

DataFrame

property landmarks_y#

Returns the y landmarks.

Returns:

y landmarks.

Return type:

DataFrame

plot_detections(*args, **kwargs)#

Alias for Fex.plot_detections

property poses#

Returns the facepose data

Returns:

facepose data

Return type:

DataFrame

property time#

Returns the time data

Returns:

time data

Return type:

DataFrame

class feat.data.ImageDataset(images, output_size=None, preserve_aspect_ratio=True, padding=False)#

Bases: Dataset

Torch Image Dataset

Parameters:
  • output_size (tuple or int) – Desired output size. If tuple, output is matched to the tuple while preserving aspect ratio by adding padding. If int, will set the largest edge to output_size if the target size is bigger, or the smallest edge if the target size is smaller, to keep the aspect ratio the same.

  • preserve_aspect_ratio (bool) – Output size is matched to preserve aspect ratio. Note that longest edge of output size is preserved, but actual output may differ from intended output_size.

  • padding (bool) – Transform image to exact output_size. If tuple, will preserve aspect ratio by adding padding. If int, will set both sides to the same size.

Returns:

dataset of [batch, channels, height, width] that can be passed to DataLoader

Return type:

Dataset

class feat.data.VideoDataset(video_file, skip_frames=None, output_size=None)#

Bases: Dataset

Torch Video Dataset

Parameters:

skip_frames (int) – number of frames to skip

Returns:

dataset of [batch, channels, height, width] that can be passed to DataLoader

Return type:

Dataset

calc_approx_frame_time(idx)#

Calculate the approximate time of a frame in a video

Parameters:

idx (int) – frame number

Returns:

time in seconds

Return type:

float

static convert_sec_to_min_sec(duration)#

get_video_metadata(video_file)#

load_frame(idx)#

Load in a single frame from the video using a lazy generator

feat.plotting module#

Helper functions for plotting

feat.plotting.animate_face(AU=None, start=None, end=None, save=None, include_reverse=True, feature_range=None, **kwargs)#

Create a matplotlib animation interpolating between a starting and ending face. Can either work like plot_face by taking an array of AU intensities for start and end, or by animating a single AU using the AU keyword argument and setting start and end to a scalar value.

Parameters:
  • AU (str/int, optional) – action unit id (e.g. 12 or ‘AU12’). Defaults to None.

  • start (float/np.ndarray, optional) – AU intensity to start at. Defaults to None, which is a neutral face with all AUs = 0.

  • end (float/np.ndarray, optional) – AU intensity(s) to end at. We don’t recommend going beyond 3. Defaults to None.

  • save (str, optional) – file to save animation to. Defaults to None.

  • include_reverse (bool, optional) – Whether to also reverse the animation, i.e. start -> end -> start. Defaults to True.

  • title (str, optional) – plot title. Defaults to None.

  • fps (int, optional) – frame-rate; Defaults to 15fps

  • duration (float, optional) – length of animation in seconds. Defaults to 0.5

  • padding (float, optional) – additional time to wait in seconds on the first and last frame of the animation. Useful when you plan to loop the animation. Defaults to 0.25.

  • interp_func (callable, optional) – interpolation function that takes start and end keyword arguments and returns a function that will be applied to values np.linspace(0, 1, num_frames). Defaults to None. See https://github.com/semitable/easing-functions for other options.

Returns:

matplotlib Animation
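
Examples

A minimal sketch animating a single AU from neutral to a high intensity (au12.gif is a placeholder output path):

>>> from feat.plotting import animate_face
>>> anim = animate_face(AU=12, start=0, end=3, save='au12.gif')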

feat.plotting.draw_lineface(currx, curry, ax=None, color='k', linestyle='-', linewidth=1, gaze=None, *args, **kwargs)#

Plot Line Face

Parameters:
  • currx – vector (len(68)) of x coordinates

  • curry – vector (len(68)) of y coordinates

  • ax – matplotlib axis to add

  • color – matplotlib line color

  • linestyle – matplotlib linestyle

  • linewidth – matplotlib linewidth

  • gaze – array (len(4)) of gaze vectors (fifth value is whether to draw vectors)

feat.plotting.draw_muscles(currx, curry, au=None, ax=None, *args, **kwargs)#

Draw Muscles

Parameters:
  • currx – vector (len(68)) of x coordinates

  • curry – vector (len(68)) of y coordinates

  • ax – matplotlib axis to add

feat.plotting.draw_vectorfield(reference, target, color='r', scale=1, width=0.007, ax=None, *args, **kwargs)#

Draw vectorfield from reference to target

Parameters:
  • reference – reference landmarks (2,68)

  • target – target landmarks (2,68)

  • ax – matplotlib axis instance

  • au – vector of action units (len(17))

feat.plotting.get_heat(muscle, au, log)#

Function to create heatmap from au vector

Parameters:
  • muscle (string) – string representation of a muscle

  • au (list) – vector of action units

  • log (boolean) – whether the action unit values are on a log scale

Returns:

color of muscle according to its au value

feat.plotting.imshow(obj, figsize=(3, 3), aspect='equal')#

Convenience wrapper function around matplotlib imshow that creates figure and axis boilerplate for single image plotting

Parameters:
  • obj (str/Path/PIL.Image) – string or Path to image file or pre-loaded PIL.Image instance

  • figsize (tuple, optional) – matplotlib figure size. Defaults to (3, 3).

  • aspect (str, optional) – passed to matplotlib imshow. Defaults to “equal”.

feat.plotting.interpolate_aus(start, end, num_frames, interp_func=None, num_padding_frames=None, include_reverse=True)#

Helper function to interpolate between starting and ending AU values using non-linear easing functions

Parameters:
  • start (np.ndarray) – array of starting intensities

  • end (np.ndarray) – array of ending intensities

  • num_frames (int) – number of frames to interpolate over

  • interp_func (callable, optional) – easing function. Defaults to None.

  • num_padding_frames (int, optional) – number of additional freeze frames to add before the first frame and after the last frame. Defaults to None.

  • include_reverse (bool, optional) – return the reverse interpolation appended to the end of the interpolation. Useful for animating start -> end -> start. Defaults to True.

Returns:

frames x au 2d array

Return type:

np.ndarray

feat.plotting.plot_face(au=None, model=None, vectorfield=None, muscles=None, ax=None, feature_range=False, color='k', linewidth=1, linestyle='-', border=True, gaze=None, muscle_scaler=None, *args, **kwargs)#

Core face plotting function

Parameters:
  • model – (str/PLSRegression instance) Name of AU visualization model to use. Defaults to Py-Feat’s 20 AU landmark AU model.

  • au – vector of action units (same length as model.n_components)

  • vectorfield – (dict) {‘target’: target_array, ‘reference’: reference_array}

  • muscles – (dict) {‘muscle’: color}

  • ax – matplotlib axis handle

  • feature_range (tuple, default None) – If a tuple with (min, max), scale input AU intensities to (min, max) before prediction.

  • color – matplotlib color

  • linewidth – matplotlib linewidth

  • linestyle – matplotlib linestyle

  • gaze – array of gaze vectors (len(4))

Returns:

plot handle

Return type:

ax
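
Examples

A minimal sketch drawing a neutral face with the default 20-AU visualization model:

>>> import numpy as np
>>> from feat.plotting import plot_face
>>> ax = plot_face(au=np.zeros(20))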

feat.plotting.predict(au, model=None, feature_range=None)#

Helper function to predict landmarks from au given a sklearn model

Parameters:
  • au – vector of action unit intensities

  • model – sklearn pls object (uses pretrained model by default)

  • feature_range (tuple, default None) – If a tuple with (min, max), scale input AU intensities to (min, max) before prediction.

Returns:

Array of landmarks (2,68)

Return type:

landmarks

feat.utils module#

py-feat helper functions and variables

feat.utils.is_list_of_lists_empty(list_of_lists)#

Helper function to check if list of lists is empty

feat.utils.set_torch_device(device='cpu')#

Helper function to set device for pytorch model
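
Examples

A minimal sketch:

>>> from feat.utils import set_torch_device
>>> device = set_torch_device(device='cpu')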

feat.pretrained module#

Helper functions specifically for working with included pre-trained models

feat.pretrained.fetch_model(model_type, model_name)#

Fetch a pre-trained model class constructor. Used by detector init

feat.pretrained.get_pretrained_models(face_model, landmark_model, au_model, emotion_model, facepose_model, identity_model, verbose)#

Helper function that validates the requested model names and downloads them if necessary using the URLs in the included JSON file. Used by Detector init