# Change Log
## 0.7.0

### Notes
This is a major structural overhaul of py-feat's internal `Detector` class. We've pruned unreliable models, simplified and streamlined the codebase, and extended it with an eye towards future development. This includes:

- Integration with Hugging Face: all our pre-trained models are now versioned and available on our Hugging Face model hub and will be automatically downloaded when you first initialize a `Detector` object
- An experimental MediaPipe detector in the works that focuses on real-time performance
- Numerous bug fixes and improvements
### Breaking API Changes
- `Detector.detect_image()` and `Detector.detect_video()` have been removed; all detections can now be performed using a single `Detector.detect()` method
- To process different types of data, use the `data_type` argument, which supports `image`, `tensor`, and `video`
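A unified entry point like this typically dispatches on the `data_type` string. Below is a minimal, hypothetical sketch of that pattern; the handler functions are illustrative stand-ins, not py-feat's actual internals:

```python
# Sketch of a single detect() entry point that routes on data_type,
# mirroring the unified Detector.detect() API. Handlers are toy stubs.

def _detect_image(inputs):
    return f"image detection on {inputs}"

def _detect_tensor(inputs):
    return f"tensor detection on {inputs}"

def _detect_video(inputs):
    return f"video detection on {inputs}"

_HANDLERS = {
    "image": _detect_image,
    "tensor": _detect_tensor,
    "video": _detect_video,
}

def detect(inputs, data_type="image"):
    # Route to the appropriate pipeline based on the data_type string
    try:
        handler = _HANDLERS[data_type]
    except KeyError:
        raise ValueError(f"data_type must be one of {sorted(_HANDLERS)}")
    return handler(inputs)
```

Unsupported strings fail fast with a `ValueError`, which is generally friendlier than silently falling back to a default pipeline.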
### Changes to Default Models
- We have switched our face detection model to `img2pose`, which also provides 6-degrees-of-freedom head-pose estimation; this can no longer be changed
- Our other default detectors remain unchanged. Please see our documentation for more details.
### New Identity Detector
Py-feat now includes an identity detector via `facenet`. This works by projecting each detected face in an image or video frame into a 512d embedding space and clustering these embeddings using their cosine similarity. `Fex` objects include an extra column containing the identity label for each face (accessible via `Fex.identities`) as well as additional columns for each of the 512 embedding dimensions (accessible via `Fex.identity_embeddings`). Embeddings can be useful for downstream model-training tasks.

Note: identity embeddings are affected by facial expressions to some degree, and while our default threshold of 0.8 works well for many cases, you should adjust it to tailor it to your particular data.
To save computation time, we make it possible to recompute identity labels after detection has been performed using the `.compute_identities(threshold=new_threshold)` method on `Fex` data objects. By default this returns a new `Fex` object with new labels in the `'Identity'` column, but it can also overwrite itself in-place. You can also adjust the threshold at detection time using the `face_identity_threshold` keyword argument to `Detector.detect_image()` or `Detector.detect_video()`. Recomputing identity labels by changing the threshold does not change the 512d embeddings; it just adjusts how clustering is performed to get the identity labels.
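The clustering step can be illustrated with a small self-contained sketch: faces whose embeddings exceed the similarity threshold share an identity label, and raising the threshold splits clusters apart without touching the embeddings themselves. This is a greedy toy version for intuition, not facenet's or py-feat's actual clustering routine:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def label_identities(embeddings, threshold=0.8):
    """Greedy clustering: assign each embedding to the first existing
    cluster whose representative is similar enough, else start a new one."""
    representatives = []  # one representative embedding per identity
    labels = []
    for emb in embeddings:
        for idx, rep in enumerate(representatives):
            if cosine_similarity(emb, rep) >= threshold:
                labels.append(idx)
                break
        else:
            representatives.append(emb)
            labels.append(len(representatives) - 1)
    return labels
```

Re-running `label_identities` with a stricter threshold on the same embeddings yields more distinct identities, which is the same effect `.compute_identities(threshold=...)` has on stored `Fex` embeddings.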
### Documentation updates
- Our tutorials have been updated to reflect the new API changes
- We have a new FAQ to help address common questions
## 0.6.1

### Notes
This version drops support for Python 3.7 and fixes several dependency-related issues:
## 0.6.0

### Notes
This is a large model-update release. Several users noted issues with our AU models due to problematic HOG feature extraction. We have now retrained all of our models that were affected by this issue. This version will automatically download the new model weights and use them without any additional user input.
### Detector Changes
We have made the decision to make video processing much more memory efficient at the trade-off of increased processing time. Previously, py-feat would load all frames into RAM and then process them. This was problematic for large videos and would cause kernel panics or system freezes. Now, py-feat will lazy-load video frames one at a time, which scales to videos of any length or size, assuming your system has enough RAM to hold a few frames in memory (determined by `batch_size`). However, this also makes processing videos a bit slower and makes GPU benefits less dramatic. We have made this trade-off in favor of an easier end-user experience, but we will be watching torch's `VideoReader` implementation closely and will likely use it in future versions.
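The memory trade-off can be sketched with a simple batching generator: only one batch of frames is ever materialized at a time, rather than the whole video. This is a toy stand-in using frame indices in place of decoded frame tensors:

```python
def iter_frame_batches(num_frames, batch_size):
    """Yield lists of frames batch_size at a time, so only one batch
    needs to be resident in memory instead of the whole video."""
    batch = []
    for i in range(num_frames):
        batch.append(i)  # in a real reader this would be a decoded frame
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush any trailing partial batch
        yield batch

# At most `batch_size` frames are ever held at once:
batches = list(iter_frame_batches(num_frames=10, batch_size=4))
```

Eager loading would be `all_frames = [decode(i) for i in range(num_frames)]`, which is exactly the pattern that caused the kernel panics described above.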
## 0.5.1

### Notes
This is a maintenance release that addresses multiple under-the-hood issues with py-feat failing when images or videos contain 0 faces. It addresses the following specific issues amongst others and is recommended for all users:
## 0.5.0

### Notes
This is a large overhaul and refactor of some of the core testing and API functionality to make future development, maintenance, and testing easier. Notable highlights include:
- Tighter integration with `torch` data loaders
- Dropping `opencv` as a dependency
- Experimental support for macOS M1 GPUs
- Passing keyword arguments to underlying `torch` models for more control
### Detector Changes

#### New
- You can now pass keyword arguments directly to the underlying pytorch/sklearn models on `Detector` initialization using dictionaries. For example, you can use `detector = Detector(facepose_model_kwargs={'keep_top_k': 500})` to initialize `img2pose` to only use 500 instead of 750 features
- All `.detect_*` methods can also pass keyword arguments to the underlying pytorch/sklearn models, though these will be passed to their underlying `__call__` methods
- The SVM AU model has been retrained with a new HOG feature PCA pipeline
- A new XGBoost AU model with a new HOG feature PCA pipeline
- `.detect_image` and `.detect_video` now display a `tqdm` progress bar
- A new `skip_failed_detections` keyword argument to still generate a `Fex` object when processing multiple images and one or more detections fail
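The `*_model_kwargs` pattern amounts to forwarding a dict into the underlying model's constructor. A minimal sketch of that design, with hypothetical class names standing in for `Detector` and its wrapped models:

```python
class FakePoseModel:
    """Stand-in for an underlying pytorch model such as img2pose."""
    def __init__(self, keep_top_k=750):
        self.keep_top_k = keep_top_k

class MiniDetector:
    """Sketch of forwarding a per-model kwargs dict at initialization."""
    def __init__(self, facepose_model_kwargs=None):
        kwargs = facepose_model_kwargs or {}
        # Unpack the user-supplied dict into the model's constructor
        self.facepose_detector = FakePoseModel(**kwargs)

detector = MiniDetector(facepose_model_kwargs={"keep_top_k": 500})
```

Using one dict per model keeps the top-level signature stable while still exposing every underlying constructor option.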
#### Breaking
- The default model for landmark detection was changed from `mobilenet` to `mobilefacenet`
- The default model for AU detection was changed to our new `xgb` model, which gives continuous-valued predictions between 0-1
- Removed support for the `fer` emotion model
- Removed support for the `jaanet` AU model
- Removed support for the `logistic` AU model
- Removed support for the `pnp` facepose detector
- Dropped support for reading and manipulating Affectiva and FACET data
- `.detect_image` will no longer resize images on load, as the new default is `output_size=None`. If you want to process images with `batch_size > 1` and the images differ in size, you will be required to manually set `output_size`; otherwise py-feat will raise a helpful error message
### Fex Changes

#### New
- A new `.update_sessions()` method that returns a copy of a `Fex` frame with the `.sessions` attribute updated, making it easy to chain operations
- `.predict()` and `.regress()` now support passing attributes to `X` and/or `y` using string names that match the attribute names:
  - `'emotions'`: use all emotion columns (i.e. `fex.emotions`)
  - `'aus'`: use all AU columns (i.e. `fex.aus`)
  - `'poses'`: use all pose columns (i.e. `fex.poses`)
  - `'landmarks'`: use all landmark columns (i.e. `fex.landmarks`)
  - `'faceboxes'`: use all facebox columns (i.e. `fex.faceboxes`)

  You can also combine feature groups using a comma-separated string, e.g. `fex.regress(X='emotions,poses', y='landmarks')`
- `.extract_*` methods now include `std` and `sem`. These are also included in `.extract_summary()`
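Resolving a comma-separated spec like `'emotions,poses'` is essentially a split-and-map onto column groups. A rough sketch of that idea, with a plain dict standing in for a `Fex` frame's column groups (the group contents here are made up for illustration):

```python
# Hypothetical column groups; a real Fex frame exposes these via
# attributes like fex.emotions and fex.poses.
COLUMN_GROUPS = {
    "emotions": ["anger", "joy", "sadness"],
    "poses": ["pitch", "roll", "yaw"],
    "landmarks": ["x_0", "y_0"],
}

def resolve_columns(spec):
    """Expand a comma-separated feature-group string into column names."""
    columns = []
    for group in spec.split(","):
        group = group.strip()
        if group not in COLUMN_GROUPS:
            raise ValueError(f"unknown feature group: {group!r}")
        columns.extend(COLUMN_GROUPS[group])
    return columns
```

Validating each group name up front gives callers an immediate error for typos like `'emotion'` instead of a cryptic missing-column failure downstream.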
#### Breaking
- All `Fex` attributes have been pluralized as indicated below. For the time being, old attribute access will continue to work but will show a warning. We plan to formally drop support in a few versions.
  - `.landmark` -> `.landmarks`
  - `.facepose` -> `.poses`
  - `.input` -> `.inputs`
  - `.landmark_x` -> `.landmarks_x`
  - `.landmark_y` -> `.landmarks_y`
  - `.facebox` -> `.faceboxes`
### Development changes
- `test_pretrained_models.py` is now more organized using `pytest` classes
- Added tests for `img2pose` models
- Added more robust testing for the interaction between `batch_size` and `output_size`
### General Fixes
- Data loading with multiple images of potentially different sizes should be faster and more reliable
- Fixed a bug in `resmasknet` that would give poor predictions when multiple faces were present and particularly small
- Fixed issues: #150, #149, #148, #147, #145, #137, #134, #132, #131, #130, #129, #127, #121, #104
## 0.4.0

### Major version breaking release!
This release includes numerous bug fixes, API updates, and codebase changes that make it largely incompatible with previous releases.
To fork development from an older version of `py-feat`, you can use this archival repo instead.
### New
- Added `animate_face` and `plot_face` functions to the `feat.plotting` module
- `Fex` data classes returned from `Detector.detect_image()` or `Detector.detect_video()` now store the names of the different detectors used as attributes: `.face_model`, `.au_model`, etc.
- The AU visualization model used by `plot_face` and `Detector.plot_detections(faces='aus')` has been updated to include AU11 and remove AU18, making it consistent with Py-feat's custom AU detectors (`svm` and `logistic`)
- A new AU visualization model supporting the `jaanet` AU detector, which only has 12 AUs, has been added and will automatically be used with `Detector(au_model='jaanet')`. This visualization model can also be used by the `plot_face` function by passing it to the `model` argument: `plot_face(model='jaanet_aus_to_landmarks')`
### Breaking Changes
- `Detector` no longer supports uninitialized models, e.g. `any_model = None`. This is also true for `Detector.change_model`
- Columns of interest on `Fex` data classes were previously accessed like class methods, i.e. `fex.aus()`. These have now been changed to class attributes, i.e. `fex.aus`
- Removed support for the `DRML` AU detector
- Removed support for the `RF` AU and emotion detectors
- New default detectors:
  - `svm` for AUs
  - `resmasknet` for emotions
  - `img2pose` for head-pose
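The change from `fex.aus()` to `fex.aus` is the standard method-to-property migration. A minimal sketch of the idea with a toy class (not the real `Fex` implementation):

```python
class ToyFex:
    """Toy stand-in showing attribute-style access to a column group."""
    def __init__(self, data):
        self._data = data

    @property
    def aus(self):
        # Previously exposed as a method (fex.aus()); the property
        # decorator lets callers write fex.aus with no parentheses.
        return {k: v for k, v in self._data.items() if k.startswith("AU")}

fex = ToyFex({"AU01": 0.2, "AU12": 0.9, "anger": 0.1})
selected = fex.aus  # attribute access, no call
```

Properties suit derived views like column groups: they read like data while still computing the selection on each access.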
### Development changes
- Revamped pre-trained detector handling in the new `feat.pretrained` module
- More tests, including testing all detector combinations
### Fixes
## 0.3.7

Fix import error due to missing init
## 0.3.6

Trigger Zenodo release
## 0.2.0

Testing PyPI upload