API Reference#

This reference provides detailed documentation for all the features in Py-Feat.

feat.detector module#

class feat.detector.Detector(*args, **kwargs)#

Bases: Module, PyTorchModelHubMixin

detect(inputs, data_type='image', output_size=None, batch_size=1, num_workers=0, pin_memory=False, face_identity_threshold=0.8, face_detection_threshold=0.5, skip_frames=None, progress_bar=True, save=None, **kwargs)#

Detects FEX from one or more image files.

Parameters:
  • inputs (list of str, torch.Tensor) – Path or list of paths to image files, or a torch.Tensor of images (B, C, H, W)

  • data_type (str) – type of data to be processed; Default ‘image’ [‘image’, ‘tensor’, ‘video’]

  • output_size (int) – image size to rescale all images to, preserving aspect ratio.

  • batch_size (int) – number of images to process in each batch.

  • num_workers (int) – how many subprocesses to use for data loading.

  • pin_memory (bool) – If True, the data loader will copy Tensors into CUDA pinned memory before returning them.

  • face_identity_threshold (float) – value between 0 and 1 used to determine whether two faces belong to the same person based on face identity embeddings; Default 0.8

  • face_detection_threshold (float) – value between 0 and 1 used to determine whether a face was detected; Default 0.5

  • skip_frames (int or None) – number of frames to skip to speed up inference (video only); Default None

  • progress_bar (bool) – Whether to show the tqdm progress bar. Default is True.

  • save (None or str or Path) – if provided, detections are appended to a CSV file with the given name after each batch is processed, which can be useful for resuming interrupted jobs and for reducing memory/RAM use

  • **kwargs – additional detector-specific kwargs

Returns:

Concatenated results for all images in the batch

Return type:

pd.DataFrame
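
As a usage sketch (the file name here is hypothetical; everything else follows the signature above), a full detection run on a single image looks like:

    from feat import Detector

    detector = Detector()  # loads the default pretrained models
    # "photo.jpg" is a hypothetical input; a list of paths, or a (B, C, H, W)
    # tensor with data_type="tensor", would also work
    fex = detector.detect("photo.jpg", data_type="image", batch_size=1)
    print(fex.emotions.head())  # Fex accessors are documented in feat.data below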

detect_faces(images, face_size=112, face_detection_threshold=0.5)#

Detect faces and poses in a batch of images using img2pose.

Parameters:
  • images (torch.Tensor) – Tensor of shape (B, C, H, W) representing the images

  • face_size (int) – Output size to resize face after cropping.

  • face_detection_threshold (float) – value between 0 and 1 used to determine whether a face was detected; Default 0.5

Returns:

Prediction results dataframe

Return type:

Fex

forward(faces_data)#

Run Model Inference on detected faces.

Parameters:

faces_data (list of dict) – Detected faces and associated data from detect_faces.

Returns:

Prediction results dataframe

Return type:

Fex

training: bool#

feat.data module#

Py-FEAT Data classes.

class feat.data.Fex(*args, **kwargs)#

Bases: DataFrame

Fex is a class to represent facial expression (Fex) data

Fex class is an enhanced pandas dataframe, with extra attributes and methods to help with facial expression data analysis.

Parameters:
  • filename – (str, optional) path to file

  • detector – (str, optional) name of software used to extract Fex. Currently only ‘Feat’ is supported.

  • sampling_freq (float, optional) – sampling rate of each row in Hz; defaults to None

  • features (pd.Dataframe, optional) – features that correspond to each Fex row

  • sessions – Unique values indicating rows associated with a specific session (e.g., trial, subject, etc.). Must be a 1D array of n_samples elements; defaults to None

append(data, session_id=None, axis=0)#

Append a new Fex object to an existing object

Parameters:
  • data – (Fex) Fex instance to append

  • session_id – session label

  • axis – ([0,1]) Axis to append. Rows=0, Cols=1

Returns:

Fex instance

property aus#

Returns the Action Units data

Returns Action Unit data using the columns set in fex.au_columns.

Returns:

Action Units data

Return type:

DataFrame

baseline(baseline='median', normalize=None, ignore_sessions=False)#

Reference a Fex object to a baseline.

Parameters:
  • baseline – {‘median’, ‘mean’, ‘begin’, FexSeries instance}. Will subtract baseline from Fex object (e.g., mean, median). If a Fex/FexSeries object is passed, it will be treated as the baseline.

  • normalize – (str). Can normalize results of baseline. Values can be [None, ‘db’,’pct’]; default None.

  • ignore_sessions – (bool) If True, will ignore Fex.sessions information. Otherwise, method will be applied separately to each unique session.

Returns:

Fex object
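
For example, a minimal sketch (assuming fex is a Fex instance returned by Detector.detect):

    # Subtract the median from each column (separately per session if
    # Fex.sessions is set) and express the result as percent change
    baselined = fex.baseline(baseline="median", normalize="pct")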

clean(detrend=True, standardize=True, confounds=None, low_pass=None, high_pass=None, ensure_finite=False, ignore_sessions=False, *args, **kwargs)#

Clean Time Series signal

This function wraps nilearn functionality and can filter, denoise, detrend, etc.

See http://nilearn.github.io/modules/generated/nilearn.signal.clean.html

This function can do several things on the input signals, in the following order: detrend, standardize, remove confounds, low and high-pass filter

If Fex.sessions is not None, sessions will be cleaned separately.

Parameters:
  • confounds – (numpy.ndarray, str or list of Confounds timeseries) Shape must be (instant number, confound number), or just (instant number,). The number of time instants in signals and confounds must be identical (i.e. signals.shape[0] == confounds.shape[0]). If a string is provided, it is assumed to be the name of a csv file containing signals as columns, with an optional one-line header. If a list is provided, all confounds are removed from the input signal, as if all were in the same array.

  • low_pass – (float) low pass cutoff frequencies in Hz.

  • high_pass – (float) high pass cutoff frequencies in Hz.

  • detrend – (bool) If detrending should be applied on timeseries (before confound removal)

  • standardize – (bool) If True, returned signals are set to unit variance.

  • ensure_finite – (bool) If True, the non-finite values (NANs and infs) found in the data will be replaced by zeros.

  • ignore_sessions – (bool) If True, will ignore Fex.sessions information. Otherwise, method will be applied separately to each unique session.

Returns:

cleaned Fex instance
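
A sketch of a typical cleaning call, assuming fex.sampling_freq has been set (e.g., to the video frame rate) so the filters have a defined time base:

    # Detrend, z-score, and band-pass filter each column; sessions (if set)
    # are cleaned separately as noted above
    cleaned = fex.clean(detrend=True, standardize=True, low_pass=0.5, high_pass=0.01)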

compute_identities(threshold=0.8, inplace=False)#

Compute identity labels by grouping face embeddings from the identity detector whose similarity exceeds the given threshold
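
For example (a sketch; fex is assumed to contain identity embeddings from the identity detector):

    # Group faces whose embeddings exceed the similarity threshold into
    # shared identity labels
    fex_ids = fex.compute_identities(threshold=0.8, inplace=False)
    print(fex_ids.identities.head())  # see the identities property below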

decompose(algorithm='pca', axis=1, n_components=None, *args, **kwargs)#

Decompose Fex instance

Parameters:
  • algorithm – (str) Algorithm to perform decomposition types=[‘pca’,’ica’,’nnmf’,’fa’]

  • axis – dimension to decompose [0,1]

  • n_components – (int) number of components. If None then retain as many as possible.

Returns:

a dictionary of decomposition parameters

Return type:

output
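
A sketch of reducing the columns to a handful of components:

    # Decompose columns (axis=1) into 5 principal components; the returned
    # dictionary holds the decomposition parameters
    decomposition = fex.decompose(algorithm="pca", axis=1, n_components=5)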

property design#

Returns the design data

Returns the study design information using columns in fex.design_columns.

Returns:

time data

Return type:

DataFrame

distance(method='euclidean', **kwargs)#

Calculate distance between rows within a Fex() instance.

Parameters:

method – type of distance metric (can use any scikit-learn or scipy metric)

Returns:

Outputs a 2D distance matrix.

Return type:

dist

downsample(target, **kwargs)#

Downsample Fex columns. Relies on nltools.stats.downsample, but ensures that the returned object is a Fex object.

Parameters:
  • target (float) – downsampling target, typically in samples not seconds

  • kwargs – additional inputs to nltools.stats.downsample

property emotions#

Returns the emotion data

Returns emotions data using the columns set in fex.emotion_columns.

Returns:

emotion data

Return type:

DataFrame

extract_boft(min_freq=0.06, max_freq=0.66, bank=8, *args, **kwargs)#

Extract Bag of Temporal features

Parameters:
  • min_freq – minimum frequency of temporal filters

  • max_freq – maximum frequency of temporal filters

  • bank – number of temporal filter banks, filters are on exponential scale

Returns:

list of Morlet wavelet features, with hzs: a list of frequencies (Hz) for each Morlet wavelet

Return type:

wavs

extract_max(ignore_sessions=False)#

Extract maximum of each feature

Parameters:

ignore_sessions – (bool) ignore sessions or extract separately by sessions if available.

Returns:

(Fex) maximum values for each feature

Return type:

fex

extract_mean(ignore_sessions=False)#

Extract mean of each feature

Parameters:

ignore_sessions – (bool) ignore sessions or extract separately by sessions if available.

Returns:

mean values for each feature

Return type:

Fex

extract_min(ignore_sessions=False)#

Extract minimum of each feature

Parameters:

ignore_sessions – (bool) ignore sessions or extract separately by sessions if available.

Returns:

(Fex) minimum values for each feature

Return type:

Fex

extract_multi_wavelet(min_freq=0.06, max_freq=0.66, bank=8, *args, **kwargs)#

Convolve with a bank of morlet wavelets.

Wavelets are equally spaced from min to max frequency. See extract_wavelet for more information and options.

Parameters:
  • min_freq – (float) minimum frequency to extract

  • max_freq – (float) maximum frequency to extract

  • bank – (int) size of wavelet bank

  • num_cyc – (float) number of cycles for wavelet

  • mode – (str) feature to extract, e.g., [‘complex’,’filtered’,’phase’,’magnitude’,’power’]

  • ignore_sessions – (bool) ignore sessions or extract separately by sessions if available.

Returns:

(Fex instance)

Return type:

convolved

extract_sem(ignore_sessions=False)#

Extract the standard error of the mean (SEM) of each feature

Parameters:

ignore_sessions – (bool) ignore sessions or extract separately by sessions if available.

Returns:

SEM values for each feature

Return type:

Fex

extract_std(ignore_sessions=False)#

Extract std of each feature

Parameters:

ignore_sessions – (bool) ignore sessions or extract separately by sessions if available.

Returns:

standard deviation values for each feature

Return type:

Fex

extract_summary(mean=True, std=True, sem=True, max=True, min=True, ignore_sessions=False, *args, **kwargs)#

Extract summary of multiple features

Parameters:
  • mean – (bool) extract mean of features

  • std – (bool) extract std of features

  • sem – (bool) extract sem of features

  • max – (bool) extract max of features

  • min – (bool) extract min of features

  • ignore_sessions – (bool) ignore sessions or extract separately by sessions if available.

Returns:

(Fex)

Return type:

fex
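
For example, a sketch computing only means and standard deviations (per session if Fex.sessions is set):

    summary = fex.extract_summary(mean=True, std=True, sem=False, max=False, min=False)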

extract_wavelet(freq, num_cyc=3, mode='complex', ignore_sessions=False)#

Perform feature extraction by convolving with a complex morlet wavelet

Parameters:
  • freq – (float) frequency to extract

  • num_cyc – (float) number of cycles for wavelet

  • mode – (str) feature to extract, e.g., [‘complex’,’filtered’,’phase’,’magnitude’,’power’]

  • ignore_sessions – (bool) ignore sessions or extract separately by sessions if available.

Returns:

(Fex instance)

Return type:

convolved

property facebox#

Returns the facebox data

Returns:

facebox data

Return type:

DataFrame

property faceboxes#

Returns the facebox data

Returns:

facebox data

Return type:

DataFrame

property facepose#

Returns the facepose data using the columns set in fex.facepose_columns

Returns:

facepose data

Return type:

DataFrame

property identities#

Returns the identity labels

Returns:

identity data

Return type:

DataFrame

property identity_embeddings#

Returns the identity embeddings

Returns:

identity data

Return type:

DataFrame

property info#

Print all metadata of the Fex instance

Loops through metadata set in self._metadata and prints out the information.

property input#

Returns input column as string

Returns input data in the “input” column.

Returns:

path to input image

Return type:

string

property inputs#

Returns input column as string

Returns input data in the “input” column.

Returns:

path to input image

Return type:

string

iplot_detections(bounding_boxes=False, landmarks=False, aus=False, poses=False, emotions=False, emotions_position='right', emotions_opacity=1.0, emotions_color='white', emotions_size=14, frame_duration=1000, facebox_color='cyan', facebox_width=3, pose_width=2, landmark_color='white', landmark_width=2, au_cmap='Blues', au_heatmap_resolution=1000, au_opacity=0.9, *args, **kwargs)#

Plot Py-Feat detection results using the plotly backend. There are currently two different types of plots implemented. For single frames, uses plot_singleframe_detections() to create an interactive plot where different detector outputs can be toggled on or off. For multiple frames, uses plot_multipleframes_detections() to create a plotly animation to scroll through multiple frames; however, it is currently not possible to interactively toggle the detectors on and off, so the detector output must be prespecified when generating the plot.

Parameters:
  • bounding_boxes (bool) – will include faceboxes when plotting detector output for multiple frames.

  • landmarks (bool) – will include face landmarks when plotting detector output for multiple frames.

  • poses (bool) – will include 3 axis line plot indicating x,y,z rotation information when plotting detector output for multiple frames.

  • aus (bool) – will include action unit heatmaps when plotting detector output for multiple frames.

  • emotions (bool) – will add text annotations indicating probability of discrete emotion when plotting detector output for multiple frames.

  • emotions_position (str) – position around facebox to plot emotion annotations. default=’right’

  • emotions_opacity (float) – opacity of emotion annotation text (default=1.)

  • emotions_color (str) – color of emotion annotation text (default=’white’)

  • emotions_size (int) – size of emotion annotations (default=14)

  • frame_duration (int) – duration in milliseconds to play each frame if plotting multiple frames (default=1000)

  • facebox_color (str) – color of facebox bounding box (default=”cyan”)

  • facebox_width (int) – line width of facebox bounding box (default=3)

  • pose_width (int) – line width of pose rotation plot (default=2)

  • landmark_color (str) – color of landmark detectors (default=”white”)

  • landmark_width (int) – line width of landmark detectors (default=2)

  • au_cmap (str) – colormap to use for AU heatmap (default=’Blues’)

  • au_heatmap_resolution (int) – resolution of heatmap values (default=1000)

  • au_opacity (float) – opacity of AU heatmaps (default=0.9)

Returns:

a plotly figure instance
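
A usage sketch, assuming detection results are loaded in a Fex instance fex:

    # Interactive plotly figure with face boxes, landmarks, and emotion
    # annotations toggled on
    fig = fex.iplot_detections(bounding_boxes=True, landmarks=True, emotions=True)
    fig.show()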

isc(col, index='frame', columns='input', method='pearson')#

Compute intersubject correlations (ISC) for a given column across inputs (e.g., videos or subjects).

Parameters:
  • col (str) – Column name to compute the ISC for.

  • index (str, optional) – Column to be used in computing ISC. Usually this would be the column identifying the time such as the number of the frame. Defaults to “frame”.

  • columns (str, optional) – Column to be used for ISC. Usually this would be the column identifying the video or subject. Defaults to “input”.

  • method (str, optional) – Method to use for correlation pearson, kendall, or spearman. Defaults to “pearson”.

Returns:

Correlation matrix with index as columns

Return type:

DataFrame

itersessions()#

Iterate over Fex sessions as (session, series) pairs.

Returns:

a generator that iterates over the sessions of the fex instance

Return type:

it

property landmark#

Returns the landmark data

Returns landmark data using the columns set in fex.landmark_columns.

Returns:

landmark data

Return type:

DataFrame

property landmark_x#

Returns the x landmarks.

Returns:

x landmarks.

Return type:

DataFrame

property landmark_y#

Returns the y landmarks.

Returns:

y landmarks.

Return type:

DataFrame

property landmarks#

Returns the landmark data

Returns landmark data using the columns set in fex.landmark_columns.

Returns:

landmark data

Return type:

DataFrame

property landmarks_x#

Returns the x landmarks.

Returns:

x landmarks.

Return type:

DataFrame

property landmarks_y#

Returns the y landmarks.

Returns:

y landmarks.

Return type:

DataFrame

plot_detections(faces='landmarks', faceboxes=True, muscles=False, poses=False, gazes=False, add_titles=True, au_barplot=True, emotion_barplot=True, plot_original_image=True)#

Plots detection results by Py-Feat. Can control plotting of the face, the AU barplot, and the emotion barplot. The faces kwarg controls whether facial landmarks are drawn on top of input images or whether faces are visualized using Py-Feat’s AU visualization model with the detected AUs. If detection was performed on a video and faces=’landmarks’, only an outline of the face will be drawn, without loading the underlying video frame, to save memory.

Parameters:
  • faces (str, optional) – ‘landmarks’ to draw detected landmarks or ‘aus’ to generate a face from AU detections using Py-Feat’s AU landmark model. Defaults to ‘landmarks’.

  • faceboxes (bool, optional) – Whether to draw the bounding box around the detected face. Defaults to True.

  • muscles (bool, optional) – Whether to draw muscles from AU activity. Only applies if faces=’landmarks’. Defaults to False.

  • poses (bool, optional) – Whether to draw facial poses. Only applies if faces=’landmarks’. Defaults to False.

  • gazes (bool, optional) – Whether to draw gaze vectors. Only applies if faces=’aus’. Defaults to False.

  • add_titles (bool, optional) – Whether to add the file name as a title above the face. Defaults to True.

  • au_barplot (bool, optional) – Whether to include a subplot for AU detections. Defaults to True.

  • emotion_barplot (bool, optional) – Whether to include a subplot for emotion detections. Defaults to True.

Returns:

list of matplotlib figures

Return type:

list
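
For example, a sketch producing one matplotlib figure per input image:

    # Draw landmarks on the original images alongside AU and emotion barplots
    figs = fex.plot_detections(faces="landmarks", faceboxes=True, add_titles=True)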

plot_singleframe_detections(bounding_boxes=False, landmarks=False, poses=False, emotions=False, aus=False, image_opacity=0.9, facebox_color='cyan', facebox_width=3, pose_width=2, landmark_color='white', landmark_width=2, emotions_position='right', emotions_opacity=1.0, emotions_color='white', emotions_size=12, au_heatmap_resolution=1000, au_opacity=0.9, au_cmap='Blues', *args, **kwargs)#

Function to generate interactive plotly figure to interactively visualize py-feat detectors on a single image frame.

Parameters:
  • image_opacity (float) – opacity of image overlay (default=.9)

  • emotions_position (str) – position around facebox to plot emotion annotations. default=’right’

  • emotions_opacity (float) – opacity of emotion annotation text (default=1.)

  • emotions_color (str) – color of emotion annotation text (default=’white’)

  • emotions_size (int) – size of emotion annotations (default=12)

  • frame_duration (int) – duration in milliseconds to play each frame if plotting multiple frames (default=1000)

  • facebox_color (str) – color of facebox bounding box (default=”cyan”)

  • facebox_width (int) – line width of facebox bounding box (default=3)

  • pose_width (int) – line width of pose rotation plot (default=2)

  • landmark_color (str) – color of landmark detectors (default=”white”)

  • landmark_width (int) – line width of landmark detectors (default=2)

  • au_cmap (str) – colormap to use for AU heatmap (default=’Blues’)

  • au_heatmap_resolution (int) – resolution of heatmap values (default=1000)

  • au_opacity (float) – opacity of AU heatmaps (default=0.9)

Returns:

a plotly figure instance

property poses#

Returns the facepose data using the columns set in fex.facepose_columns

Returns:

facepose data

Return type:

DataFrame

predict(X, y, model=<class 'sklearn.linear_model._base.LinearRegression'>, cv_kwargs={'cv': 5}, *args, **kwargs)#

Predicts y from X using a sklearn model.

Predict a variable of interest y using your model of choice from X, which can be a list of columns of the Fex instance or a dataframe.

Parameters:
  • X (list or DataFrame) – List of column names or dataframe to be used as features for prediction

  • y (string or array) – y values to be predicted

  • model (class, optional) – Any sklearn model. Defaults to LinearRegression.

  • args – Model arguments

  • kwargs – Model arguments

Returns:

Fit model instance.

Return type:

model
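
A sketch of swapping in a classifier; the "condition" label column is hypothetical:

    from sklearn.linear_model import LogisticRegression

    # Predict a hypothetical "condition" column from the AU columns;
    # any sklearn estimator class can be passed as model
    fitted = fex.predict(X=fex.au_columns, y="condition", model=LogisticRegression)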

read_feat(filename=None, *args, **kwargs)#

Reads facial expression detection results from Feat Detector

Parameters:

filename (string, optional) – Path to file. Defaults to None.

Returns:

Fex

read_file()#

Loads file into Fex class

Returns:

Fex class

Return type:

DataFrame

read_openface(filename=None, *args, **kwargs)#

Reads facial expression detection results from OpenFace

Parameters:

filename (string, optional) – Path to file. Defaults to None.

Returns:

Fex

rectification(std=3)#

Removes time points when the face position moved more than N standard deviations from the mean.

Parameters:

std (default 3) – standard deviation from mean to remove outlier face locations

Returns:

cleaned Fex object

Return type:

data

regress(X, y, fit_intercept=True, *args, **kwargs)#

Regress using nltools.stats.regress.

fMRI-like regression to predict Fex activity (y) from set of regressors (X).

Parameters:
  • X (list or str) – Column name(s) of the independent variables (regressors).

  • y (list or str) – Column name(s) of the dependent variables to be predicted.

  • fit_intercept (bool) – Whether to add intercept before fitting. Defaults to True.

Returns:

Dataframe of betas, ses, t-stats, p-values, df, residuals

property time#

Returns the time data

Returns the time information using fex.time_columns.

Returns:

time data

Return type:

DataFrame

ttest_1samp(popmean=0)#

Conducts 1 sample ttest.

Uses scipy.stats.ttest_1samp to conduct 1 sample ttest

Parameters:
  • popmean (int, optional) – Population mean to test against. Defaults to 0.

  • threshold_dict ([type], optional) – Dictionary for thresholding. Defaults to None. [NOT IMPLEMENTED]

Returns:

t-statistics and p-values

Return type:

t, p
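
For example, a sketch:

    # Test whether each column's mean differs from zero
    t, p = fex.ttest_1samp(popmean=0)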

ttest_ind(col, sessions=None)#

Conducts 2 sample ttest.

Uses scipy.stats.ttest_ind to conduct 2 sample ttest on column col between sessions.

Parameters:
  • col (str) – Column names to compare in a t-test between sessions

  • sessions (array-like) – session names used to query Fex.sessions; otherwise uses the unique values in Fex.sessions.

Returns:

t-statistics and p-values

Return type:

t, p

update_sessions(new_sessions)#

Returns a copy of the Fex dataframe with a new sessions attribute after validation. new_sessions should be a dictionary mapping old to new names or an iterable with the same number of rows as the Fex dataframe

Parameters:

new_sessions (dict, Iterable) – map or list of new session names

Returns:

a copy of the Fex instance with the new sessions attribute

Return type:

Fex
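
A sketch using a dict to relabel sessions (the session names here are hypothetical):

    # Map old session labels to new ones; an iterable with one entry per
    # row (e.g., a list of subject IDs) works as well
    fex2 = fex.update_sessions({"run1": "sub-01", "run2": "sub-02"})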

upsample(target, target_type='hz', **kwargs)#

Upsample Fex columns. Relies on nltools.stats.upsample, but ensures that the returned object is a Fex object.

Parameters:
  • target (float) – upsampling target

  • target_type (str) – units of target; default ‘hz’ (also ‘samples’, ‘seconds’)

  • kwargs – additional inputs to nltools.stats.upsample

class feat.data.FexSeries(*args, **kwargs)#

Bases: Series

This is a sub-class of pandas Series. While it does not have additional methods of its own, it is required to retain normal slicing functionality for the Fex class, i.e. how slicing is typically handled in pandas. All methods should be called on Fex below.

property aus#

Returns the Action Units data

Returns:

Action Units data

Return type:

DataFrame

property design#

Returns the design data

Returns:

time data

Return type:

DataFrame

property emotions#

Returns the emotion data

Returns:

emotion data

Return type:

DataFrame

property facebox#

Returns the facebox data

Returns:

facebox data

Return type:

DataFrame

property faceboxes#

Returns the facebox data

Returns:

facebox data

Return type:

DataFrame

property facepose#

Returns the facepose data

Returns:

facepose data

Return type:

DataFrame

property identity#

Returns the identity data

Returns:

identity data

Return type:

DataFrame

property info#

Print class metadata.

property input#

Returns input column as string

Returns:

path to input image

Return type:

string

property inputs#

Returns input column as string

Returns:

path to input image

Return type:

string

property landmark#

Returns the landmark data

Returns:

landmark data

Return type:

DataFrame

property landmark_x#

Returns the x landmarks.

Returns:

x landmarks.

Return type:

DataFrame

property landmark_y#

Returns the y landmarks.

Returns:

y landmarks.

Return type:

DataFrame

property landmarks#

Returns the landmark data

Returns:

landmark data

Return type:

DataFrame

property landmarks_x#

Returns the x landmarks.

Returns:

x landmarks.

Return type:

DataFrame

property landmarks_y#

Returns the y landmarks.

Returns:

y landmarks.

Return type:

DataFrame

plot_detections(*args, **kwargs)#

Alias for Fex.plot_detections

property poses#

Returns the facepose data

Returns:

facepose data

Return type:

DataFrame

property time#

Returns the time data

Returns:

time data

Return type:

DataFrame

class feat.data.ImageDataset(images, output_size=None, preserve_aspect_ratio=True, padding=False)#

Bases: Dataset

Torch Image Dataset

Parameters:
  • output_size (tuple or int) – Desired output size. If tuple, output is matched to aspect ratio by adding padding. If int, will set the largest edge to output_size if the target size is bigger, or the smallest edge if the target size is smaller, to keep the aspect ratio the same.

  • preserve_aspect_ratio (bool) – Output size is matched to preserve aspect ratio. Note that the longest edge of the output size is preserved, but the actual output may differ from the intended output_size.

  • padding (bool) – Transform image to the exact output_size. If output_size is a tuple, will preserve the aspect ratio by adding padding; if int, will set both sides to the same size.

Returns:

dataset of [batch, channels, height, width] that can be passed to DataLoader

Return type:

Dataset
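
A sketch of wrapping hypothetical image files for batched loading:

    from torch.utils.data import DataLoader
    from feat.data import ImageDataset

    # "a.jpg" and "b.jpg" are hypothetical files; the longest edge of each
    # image is resized to 256 px, preserving aspect ratio
    dataset = ImageDataset(["a.jpg", "b.jpg"], output_size=256, preserve_aspect_ratio=True)
    loader = DataLoader(dataset, batch_size=2)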

class feat.data.VideoDataset(video_file, skip_frames=None, output_size=None)#

Bases: Dataset

Torch Video Dataset

Parameters:

skip_frames (int) – number of frames to skip

Returns:

dataset of [batch, channels, height, width] that can be passed to DataLoader

Return type:

Dataset

calc_approx_frame_time(idx)#

Calculate the approximate time of a frame in a video

Parameters:

idx (int) – frame number

Returns:

time in seconds

Return type:

float

static convert_sec_to_min_sec(duration)#

get_video_metadata(video_file)#

load_frame(idx)#

Load in a single frame from the video using a lazy generator

feat.plotting module#

Helper functions for plotting

feat.plotting.animate_face(start, end, save, AU=None, include_reverse=True, feature_range=None, **kwargs)#

Create a matplotlib animation interpolating between a starting and ending face. Can either work like plot_face by taking an array of AU intensities for start and end, or by animating a single AU using the AU keyword argument and setting start and end to a scalar value.

Parameters:
  • AU (str/int, optional) – action unit id (e.g. 12 or ‘AU12’). Defaults to None.

  • start (float/np.ndarray, optional) – AU intensity to start at. Defaults to None, which is a neutral face with all AUs = 0.

  • end (float/np.ndarray, optional) – AU intensity(s) to end at. We don’t recommend going beyond 3. Defaults to None.

  • save (str, optional) – file to save animation to. Defaults to None.

  • include_reverse (bool, optional) – Whether to also reverse the animation, i.e. start -> end -> start. Defaults to True.

  • title (str, optional) – plot title. Defaults to None.

  • fps (int, optional) – frame-rate; Defaults to 15fps

  • duration (float, optional) – length of animation in seconds. Defaults to 0.5

  • padding (float, optional) – additional time to wait in seconds on the first and last frame of the animation. Useful when you plan to loop the animation. Defaults to 0.25.

  • interp_func (callable, optional) – interpolation function that takes start and end keyword arguments and returns a function that will be applied to the values np.linspace(0, 1, num_frames). See https://github.com/semitable/easing-functions for other options. Defaults to None.

Returns:

matplotlib Animation
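
For example, a sketch animating a single AU (the output file name is hypothetical):

    from feat.plotting import animate_face

    # Animate AU12 from neutral (0) to a strong activation (3) and back
    anim = animate_face(AU=12, start=0, end=3, save="au12.gif", include_reverse=True)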

feat.plotting.draw_lineface(currx, curry, ax=None, color='k', linestyle='-', linewidth=1, gaze=None, *args, **kwargs)#

Plot Line Face

Parameters:
  • currx – vector (len(68)) of x coordinates

  • curry – vector (len(68)) of y coordinates

  • ax – matplotlib axis to add

  • color – matplotlib line color

  • linestyle – matplotlib linestyle

  • linewidth – matplotlib linewidth

  • gaze – array (len(4)) of gaze vectors (fifth value is whether to draw vectors)

feat.plotting.draw_muscles(currx, curry, au=None, ax=None, *args, **kwargs)#

Draw Muscles

Parameters:
  • currx – vector (len(68)) of x coordinates

  • curry – vector (len(68)) of y coordinates

  • ax – matplotlib axis to add

feat.plotting.draw_plotly_au(row, img_height, fig, heatmap_resolution=1000, au_opacity=0.9, cmap='Blues', output='dictionary')#

Helper function to draw an SVG path for a plotly figure object

NOTES:

Need to clean up muscle ids after looking at face anatomy action units

Parameters:
  • row (FexSeries) – FexSeries instance

  • img_height (int) – height of image overlay. used to adjust coordinates

  • fig – plotly figure handle

  • heatmap_resolution (int) – precision of cmap

  • au_opacity (float) – amount of opacity for face muscles

  • cmap (str) – colormap

  • output (str) – type of output “figure” for plotly figure object or “dictionary”

Returns:

plotly figure handle

Return type:

fig

feat.plotting.draw_plotly_landmark(row, img_height, fig, line_width=3, line_color='white', output='dictionary')#

Helper function to draw an SVG path for a plotly figure object

Parameters:
  • row – (FexSeries) a row of a Fex object

  • img_height (int) – height of the image to flip the y-coordinates

  • fig – a plotly figure instance

  • output (str) – type of output “figure” for plotly figure object or “dictionary”

  • line_width (int) – (optional) line width if outputting a plotly figure instance

  • line_color (int) – (optional) line color if outputting a plotly figure instance

Returns:

an SVG string

Return type:

fig (str)

feat.plotting.draw_plotly_pose(row, img_height, fig, line_width=2, output='dictionary')#

Helper function to draw a path indicating the x,y,z pose position.

Parameters:
  • row (FexSeries) – FexSeries instance

  • img_height (int) – height of image overlay. used to adjust coordinates

  • fig – plotly figure handle

  • line_width (int) – (optional) width of line if outputing a plotly figure instance

  • output (str) – type of output “figure” for plotly figure object or “dictionary”

Returns:

plotly figure handle

Return type:

fig

feat.plotting.draw_vectorfield(reference, target, color='r', scale=1, width=0.007, ax=None, *args, **kwargs)#

Draw vectorfield from reference to target

Parameters:
  • reference – reference landmarks (2,68)

  • target – target landmarks (2,68)

  • ax – matplotlib axis instance

  • au – vector of action units (len(17))

feat.plotting.emotion_annotation_position(row, img_height, img_width, emotions_size=12, emotions_position='bottom')#

Helper function to adjust position of emotion annotations

Parameters:
  • row (FexSeries) – FexSeries instance

  • img_height (int) – height of image overlay. used to adjust coordinates

  • img_width (int) – width of image overlay. used to adjust coordinates

  • emotions_size (int) – size of text used to adjust positions

  • emotions_position (str) – position to place emotion annotations [‘left’, ‘right’, ‘top’, ‘bottom’]

Returns:

x_position (int), y_position (int), align (str), valign (str) – align is the plotly annotation text alignment [‘top’, ‘bottom’, ‘left’, ‘right’] and valign is the plotly annotation vertical alignment [‘middle’, ‘top’, ‘bottom’]

feat.plotting.face_part_path(row, img_height, line_points)#

Helper function to draw SVG path for a specific face part. Requires list of landmark point positions (i.e., [0,1,2]). Last coordinate is end point

Parameters:
  • row – (FexSeries) a row of a Fex object

  • img_height (int) – the height of the image

  • line_points (list) – a list of points on a landmark (i.e., [0:68])

Returns:

an SVG string

Return type:

fig (str)

feat.plotting.face_polygon_svg(line_points, img_height)#

Helper function to draw SVG path for a polygon of a specific face part. Requires list of landmark x,y coordinate tuples (i.e., [(2,2),(5,33)]).

Parameters:
  • line_points (list) – a list of tuples of landmark coordinates

  • img_height (int) – height of the image to flip the y-coordinates

Returns:

an SVG string

Return type:

fig (str)

feat.plotting.get_heat(muscle, au, log)#

Function to create heatmap from au vector

Parameters:
  • muscle (string) – string representation of a muscle

  • au (list) – vector of action units

  • log (boolean) – whether the action unit values are on a log scale

Returns:

color of muscle according to its au value

feat.plotting.imshow(obj, figsize=(3, 3), aspect='equal')#

Convenience wrapper function around matplotlib imshow that creates figure and axis boilerplate for single image plotting

Parameters:
  • obj (str/Path/PIL.Image) – string or Path to image file or pre-loaded PIL.Image instance

  • figsize (tuple, optional) – matplotlib figure size. Defaults to (3, 3).

  • aspect (str, optional) – passed to matplotlib imshow. Defaults to “equal”.

feat.plotting.interpolate_aus(start, end, num_frames, interp_func=None, num_padding_frames=None, include_reverse=True)#

Helper function to interpolate between starting and ending AU values using non-linear easing functions

Parameters:
  • start (np.ndarray) – array of starting intensities

  • end (np.ndarray) – array of ending intensities

  • num_frames (int) – number of frames to interpolate over

  • interp_func (callable, optional) – easing function. Defaults to None.

  • num_padding_frames (int, optional) – number of additional freeze frames to add before the first frame and after the last frame. Defaults to None.

  • include_reverse (bool, optional) – return the reverse interpolation appended to the end of the interpolation. Useful for animating start -> end -> start. Defaults to True.

Returns:

frames x au 2d array

Return type:

np.ndarray

feat.plotting.plot_face(au=None, model=None, vectorfield=None, muscles=None, ax=None, feature_range=False, color='k', linewidth=1, linestyle='-', border=True, gaze=None, muscle_scaler=None, *args, **kwargs)#

Core face plotting function

Parameters:
  • model – (str/PLSRegression instance) Name of AU visualization model to use. Defaults to Py-Feat’s 20 AU landmark AU model.

  • au – vector of action units (same length as model.n_components)

  • vectorfield – (dict) {‘target’:target_array,’reference’:reference_array}

  • muscles – (dict) {‘muscle’: color}

  • ax – matplotlib axis handle

  • feature_range (tuple, default None) – If a tuple with (min, max), scale input AU intensities to (min, max) before prediction.

  • color – matplotlib color

  • linewidth – matplotlib linewidth

  • linestyle – matplotlib linestyle

  • gaze – array of gaze vectors (len(4))

Returns:

plot handle

Return type:

ax
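
For example, a sketch drawing a neutral face with the default 20-AU visualization model:

    import numpy as np
    from feat.plotting import plot_face

    # All 20 AUs (the default model's n_components) set to zero
    ax = plot_face(au=np.zeros(20), color="k", linewidth=1)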

feat.plotting.predict(au, model=None, feature_range=None)#

Helper function to predict landmarks from au given a sklearn model

Parameters:
  • au – vector of action unit intensities

  • model – sklearn pls object (uses pretrained model by default)

  • feature_range (tuple, default None) – If a tuple with (min, max), scale input AU intensities to (min, max) before prediction.

Returns:

Array of landmarks (2,68)

Return type:

landmarks

feat.utils module#

py-feat helper functions and variables

feat.utils.flatten_list(data)#

Helper function to flatten a list of lists

feat.utils.generate_coordinate_names(num_points=478)#

Generates a list of names for x, y, z coordinates for a given number of points.

Parameters:

num_points (int) – Number of points; defaults to 478.

Returns:

List of coordinate names like [‘x_1’, ‘y_1’, ‘z_1’, …, ‘x_n’, ‘y_n’, ‘z_n’].

Return type:

list

feat.utils.is_list_of_lists_empty(list_of_lists)#

Helper function to check if list of lists is empty

feat.utils.set_torch_device(device='cpu')#

Helper function to set device for pytorch model

feat.pretrained module#

Helper functions specifically for working with included pre-trained models

feat.pretrained.fetch_model(model_type, model_name)#

Fetch a pre-trained model class constructor. Used by detector init

feat.pretrained.get_pretrained_models(face_model, landmark_model, au_model, emotion_model, facepose_model, identity_model, verbose)#

Helper function that validates the requested model names and downloads them if necessary using the URLs in the included JSON file. Used by detector init

feat.pretrained.load_model_weights(model_type='au', model='xgb', location='huggingface')#

Load weights for the AU models