Loading data from other detectors
written by Jin Hyun Cheong
While Py-FEAT provides its own set of detectors, you can still use Py-FEAT with features extracted by other models. Currently we support data files exported from OpenFace, FACET iMotions, and the Affectiva JavaScript SDK. Please open an Issue if you would like to see support for other model outputs.
Loading OpenFace data
import glob, os
from feat.tests.utils import get_test_data_path
from feat.utils import read_openface
openface_file = os.path.join(get_test_data_path(), "OpenFace_Test.csv")
detections = read_openface(openface_file)
print(type(detections))
display(detections.head())
<class 'feat.data.Fex'>
frame | timestamp | confidence | success | gaze_0_x | gaze_0_y | gaze_0_z | gaze_1_x | gaze_1_y | gaze_1_z | ... | AU12_c | AU14_c | AU15_c | AU17_c | AU20_c | AU23_c | AU25_c | AU26_c | AU28_c | AU45_c | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0.000 | 0.883333 | 1 | 0.109059 | 0.062474 | -0.992070 | -0.124401 | 0.066311 | -0.990014 | ... | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
1 | 2 | 0.001 | 0.883333 | 1 | 0.110256 | 0.065356 | -0.991752 | -0.123464 | 0.069979 | -0.989879 | ... | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
2 | 3 | 0.002 | 0.883333 | 1 | 0.108539 | 0.064244 | -0.992014 | -0.122873 | 0.070540 | -0.989912 | ... | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
3 | 4 | 0.003 | 0.883333 | 1 | 0.108724 | 0.064943 | -0.991948 | -0.122172 | 0.070736 | -0.989985 | ... | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
4 | 5 | 0.004 | 0.883333 | 1 | 0.109766 | 0.065250 | -0.991813 | -0.121321 | 0.070529 | -0.990104 | ... | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
5 rows × 431 columns
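Because Fex subclasses pandas.DataFrame, ordinary pandas operations also work on the loaded detections. For instance, you can keep only frames that OpenFace flagged as successful, using the success and confidence columns shown above. A minimal sketch with plain pandas on a toy frame (the confidence threshold is illustrative):

```python
import pandas as pd

# Toy frame mimicking the OpenFace columns shown above
df = pd.DataFrame({
    "frame": [1, 2, 3],
    "confidence": [0.88, 0.10, 0.95],
    "success": [1, 0, 1],
})

# Keep frames where detection succeeded with reasonable confidence
good = df[(df["success"] == 1) & (df["confidence"] > 0.5)]
print(list(good["frame"]))  # [1, 3]
```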
All functionalities of the Fex class are available when you load an OpenFace file using read_openface. For example, you can quickly grab the facial landmark columns using landmark() or the action unit columns using aus().
detections.landmark().head()
x_0 | x_1 | x_2 | x_3 | x_4 | x_5 | x_6 | x_7 | x_8 | x_9 | ... | y_58 | y_59 | y_60 | y_61 | y_62 | y_63 | y_64 | y_65 | y_66 | y_67 | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 277.276 | 278.163 | 280.209 | 283.258 | 288.537 | 297.109 | 307.888 | 320.240 | 333.596 | 346.018 | ... | 243.032 | 241.552 | 237.763 | 238.249 | 238.527 | 237.790 | 236.296 | 236.401 | 237.104 | 236.823 |
1 | 276.980 | 277.893 | 279.952 | 283.021 | 288.339 | 296.925 | 307.663 | 319.981 | 333.443 | 346.086 | ... | 243.535 | 241.895 | 237.777 | 238.333 | 238.601 | 237.848 | 236.234 | 236.828 | 237.579 | 237.295 |
2 | 276.902 | 277.785 | 279.844 | 282.911 | 288.141 | 296.601 | 307.278 | 319.657 | 333.284 | 346.138 | ... | 244.668 | 242.816 | 238.284 | 238.824 | 239.104 | 238.397 | 236.835 | 237.882 | 238.606 | 238.292 |
3 | 276.757 | 277.626 | 279.664 | 282.699 | 287.902 | 296.353 | 307.023 | 319.397 | 333.035 | 345.927 | ... | 244.484 | 242.662 | 238.209 | 238.804 | 239.092 | 238.379 | 236.786 | 237.723 | 238.453 | 238.137 |
4 | 276.342 | 277.208 | 279.248 | 282.285 | 287.478 | 295.913 | 306.562 | 318.911 | 332.540 | 345.428 | ... | 244.297 | 242.514 | 238.063 | 238.688 | 238.958 | 238.232 | 236.595 | 237.525 | 238.267 | 237.974 |
5 rows × 136 columns
detections.aus().head()
AU01_r | AU02_r | AU04_r | AU05_r | AU06_r | AU07_r | AU09_r | AU10_r | AU12_r | AU14_r | ... | AU12_c | AU14_c | AU15_c | AU17_c | AU20_c | AU23_c | AU25_c | AU26_c | AU28_c | AU45_c | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0.6773 | 0.4275 | 0.2435 | 0.3434 | 0 | 0.0 | 0.0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
1 | 0.5958 | 0.3507 | 0.3347 | 0.3434 | 0 | 0.0 | 0.0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
2 | 0.6017 | 0.3078 | 0.4339 | 0.2920 | 0 | 0.0 | 0.0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
3 | 0.6545 | 0.3294 | 0.5075 | 0.2899 | 0 | 0.0 | 0.0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
4 | 0.5636 | 0.2709 | 0.5708 | 0.1455 | 0 | 0.0 | 0.0 | 0 | 0 | 0 | ... | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
5 rows × 35 columns
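OpenFace encodes each action unit twice: _r columns hold continuous intensity estimates and _c columns hold binary occurrence predictions, as in the table above. Since a Fex behaves like a DataFrame, you can split the two with a regex filter. A small self-contained sketch in plain pandas:

```python
import pandas as pd

# Toy AU frame with OpenFace-style names: *_r = intensity, *_c = occurrence
df = pd.DataFrame({
    "AU01_r": [0.68, 0.60], "AU12_r": [0.00, 0.05],
    "AU01_c": [1, 1], "AU12_c": [0, 0],
})

intensities = df.filter(regex="_r$")  # continuous intensity estimates
occurrences = df.filter(regex="_c$")  # binary presence predictions
print(list(intensities.columns), list(occurrences.columns))
```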
Loading FACET iMotions data
Loading a FACET file as a Fex class is also simple using read_facet.
from feat.utils import read_facet
facet = os.path.join(get_test_data_path(), "iMotions_Test_v6.txt")
detections = read_facet(facet)
print(type(detections))
display(detections.head())
<class 'feat.data.Fex'>
StudyName | ExportDate | Name | Age | Gender | StimulusName | SlideType | EventSource | Timestamp | MediaTime | ... | NOSE_TIPX | NOSE_TIPY | 7X | 7Y | LiveMarker | KeyStroke | MarkerText | SceneType | SceneOutput | SceneParent | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | RotationTest | 20171006 | separate_rotation | 0 | MALE | gopro | TestImage | Emotient FACET | 35 | 35 | ... | 525.0905 | 394.8224 | 0.0 | 0.0 | NaN | NaN | NaN | NaN | NaN | NaN |
1 | RotationTest | 20171006 | separate_rotation | 0 | MALE | gopro | TestImage | Emotient FACET | 68 | 68 | ... | 525.9382 | 385.9986 | 0.0 | 0.0 | NaN | NaN | NaN | NaN | NaN | NaN |
2 | RotationTest | 20171006 | separate_rotation | 0 | MALE | gopro | TestImage | Emotient FACET | 101 | 101 | ... | 525.2611 | 378.3876 | 0.0 | 0.0 | NaN | NaN | NaN | NaN | NaN | NaN |
3 | RotationTest | 20171006 | separate_rotation | 0 | MALE | gopro | TestImage | Emotient FACET | 134 | 134 | ... | 524.6653 | 372.0715 | 0.0 | 0.0 | NaN | NaN | NaN | NaN | NaN | NaN |
4 | RotationTest | 20171006 | separate_rotation | 0 | MALE | gopro | TestImage | Emotient FACET | 168 | 168 | ... | 525.9166 | 366.4871 | 0.0 | 0.0 | NaN | NaN | NaN | NaN | NaN | NaN |
5 rows × 78 columns
You can take advantage of the Fex functionalities, such as grabbing the emotion columns with emotions().
detections.emotions().head()
Joy | Anger | Surprise | Fear | Contempt | Disgust | Sadness | Confusion | Frustration | Neutral | Positive | Negative | |
---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | -0.462367 | -3.642415 | -1.254946 | -1.967231 | 0.428609 | -2.541990 | -1.224254 | -3.051880 | -2.369417 | 1.784896 | -0.462367 | 0.428609 |
1 | -4.371406 | -3.821383 | -2.094306 | -3.278980 | -0.516405 | -4.306592 | -1.787594 | -2.694475 | -2.190143 | 2.697488 | -4.371406 | -0.516405 |
2 | -5.124269 | -2.969738 | -1.833445 | -3.807125 | -0.724873 | -4.186139 | -1.579113 | -2.003944 | -1.487295 | 2.437310 | -5.124269 | -0.724873 |
3 | -5.310449 | -3.882663 | -2.479799 | -4.622220 | -0.881652 | -4.608058 | -2.352544 | -2.248295 | -1.931136 | 3.032463 | -5.310449 | -0.881652 |
4 | -5.206233 | -4.181856 | -2.407882 | -4.627919 | -0.880042 | -5.046319 | -2.624082 | -2.401067 | -1.991587 | 3.191700 | -5.206233 | -0.880042 |
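Note that FACET reports these emotion values as evidence scores rather than probabilities. Evidence scores are commonly interpreted as base-10 log odds, in which case a score of 0 corresponds to a 50/50 chance; this interpretation is an assumption worth verifying against your iMotions export settings. Under it, the conversion would look like:

```python
import numpy as np

# Illustrative evidence scores (values drawn from the table above)
evidence = np.array([-0.462367, 1.784896])

# If evidence is base-10 log odds: p = 1 / (1 + 10 ** -evidence)
prob = 1.0 / (1.0 + 10.0 ** (-evidence))
print(prob.round(3))
```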
You can also extract features from the data. For example, to extract bags of temporal features from this video, simply set the sampling frequency and run extract_boft.
detections.sampling_freq = 30
detections.emotions().dropna().extract_boft()
/home/jcheong/packages/feat/feat/data.py:1338: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
wavs = np.array(wavs)[::-1]
pos0_hz_0.66_Joy | pos1_hz_0.66_Joy | pos2_hz_0.66_Joy | pos3_hz_0.66_Joy | pos4_hz_0.66_Joy | pos5_hz_0.66_Joy | neg0_hz_0.66_Joy | neg1_hz_0.66_Joy | neg2_hz_0.66_Joy | neg3_hz_0.66_Joy | ... | pos2_hz_0.06_Negative | pos3_hz_0.06_Negative | pos4_hz_0.06_Negative | pos5_hz_0.06_Negative | neg0_hz_0.06_Negative | neg1_hz_0.06_Negative | neg2_hz_0.06_Negative | neg3_hz_0.06_Negative | neg4_hz_0.06_Negative | neg5_hz_0.06_Negative | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 2 | 0 | 0 | 0 | 3 | 3 | 0 | 0 | 0 | 2 | ... | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
1 rows × 1152 columns
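The sampling frequency of 30 Hz set above can be estimated from the Timestamp column of the FACET output, which here appears to be in milliseconds and increments by roughly 33 ms per frame:

```python
import numpy as np

# Timestamps (ms) from the first rows of the FACET table above
ts_ms = np.array([35, 68, 101, 134, 168])

# Median inter-frame interval -> sampling frequency in Hz
freq = 1000.0 / np.median(np.diff(ts_ms))
print(round(freq))  # 30
```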
Loading Affectiva API file
You can also load an Affectiva file processed through the Affectiva JavaScript SDK (available here).
from feat.utils import read_affectiva
affectiva_file = os.path.join(get_test_data_path(), "sample_affectiva-api-app_output.json")
detections = read_affectiva(affectiva_file)
print(type(detections))
display(detections.head())
<class 'feat.data.Fex'>
Joy | Sadness | Disgust | Contempt | Anger | Fear | Surprise | Valence | Engagement | Timestamp | ... | AU25 | Smirk | AU43 | Attention | AU07 | AU26 | AU14 | AU05 | AU06 | AU20 | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 99.930573 | 0.000002 | 0.002528 | 0.000107 | 0.000096 | 4.059986e-07 | 1.76971 | 78.124275 | 99.921013 | 0 | ... | 99.999939 | 0 | 5.448178e-07 | 91.699234 | 11.572159 | 82.992149 | 0.000003 | 0.000744 | 94.181007 | 0.262457 |
1 rows × 32 columns
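If your detector's output is a JSON file that Py-Feat does not read directly, a common pattern is to flatten it with pandas first and then cast the result to a Fex class as described in the next section. A minimal sketch, assuming a hypothetical per-frame JSON layout (the record keys below are made up for illustration):

```python
import pandas as pd

# Hypothetical per-frame records, as might come from a detector's JSON output
records = [
    {"timestamp": 0.000, "emotions": {"joy": 99.9, "anger": 0.1}},
    {"timestamp": 0.033, "emotions": {"joy": 98.2, "anger": 0.2}},
]

df = pd.json_normalize(records)  # nested keys become emotions.joy, emotions.anger
df.columns = [c.split(".")[-1] for c in df.columns]  # strip the prefix
print(list(df.columns))  # ['timestamp', 'joy', 'anger']
```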
Loading a completely new file as a Fex class
It’s easy to use data that is neither an OpenFace nor a FACET output: simply cast your DataFrame as a Fex class.
from feat import Fex
import pandas as pd, numpy as np
au_columns = [f"AU{i}" for i in range(20)]
fex = Fex(pd.DataFrame(np.random.rand(20,20)))
fex.columns = au_columns
print(type(fex))
<class 'feat.data.Fex'>
To take full advantage of Py-Feat’s features, make sure you set the corresponding attributes, such as au_columns.
fex.au_columns = au_columns
display(fex.aus().head())
AU0 | AU1 | AU2 | AU3 | AU4 | AU5 | AU6 | AU7 | AU8 | AU9 | AU10 | AU11 | AU12 | AU13 | AU14 | AU15 | AU16 | AU17 | AU18 | AU19 | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0.003699 | 0.042126 | 0.532839 | 0.410701 | 0.079237 | 0.612938 | 0.560122 | 0.999783 | 0.889522 | 0.169887 | 0.374350 | 0.631952 | 0.040375 | 0.852979 | 0.329220 | 0.196842 | 0.118240 | 0.631301 | 0.709566 | 0.003093 |
1 | 0.887544 | 0.882457 | 0.265571 | 0.078094 | 0.341208 | 0.563913 | 0.294769 | 0.307120 | 0.480530 | 0.321297 | 0.321300 | 0.513947 | 0.566524 | 0.816932 | 0.493617 | 0.541507 | 0.624186 | 0.138642 | 0.845837 | 0.638981 |
2 | 0.222848 | 0.007902 | 0.806356 | 0.392177 | 0.842465 | 0.765245 | 0.260304 | 0.372778 | 0.648263 | 0.118601 | 0.695544 | 0.800680 | 0.870643 | 0.588052 | 0.266168 | 0.247223 | 0.479493 | 0.928013 | 0.061631 | 0.158413 |
3 | 0.161766 | 0.678874 | 0.994342 | 0.076421 | 0.618708 | 0.880437 | 0.302860 | 0.191823 | 0.443328 | 0.221810 | 0.321260 | 0.154832 | 0.330742 | 0.281754 | 0.707123 | 0.500014 | 0.457013 | 0.575017 | 0.955154 | 0.216029 |
4 | 0.802379 | 0.951092 | 0.047695 | 0.914416 | 0.651683 | 0.183344 | 0.446120 | 0.872419 | 0.468856 | 0.118990 | 0.985744 | 0.793074 | 0.155625 | 0.178448 | 0.217853 | 0.966267 | 0.455350 | 0.970413 | 0.781624 | 0.193531 |
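Equivalently, the column names can be supplied when the underlying DataFrame is constructed, which avoids the separate assignment step. A plain-pandas sketch (wrapping the result in Fex works the same way, since Fex subclasses DataFrame):

```python
import numpy as np
import pandas as pd

au_columns = [f"AU{i}" for i in range(20)]

# Name the columns at construction time instead of assigning afterwards
df = pd.DataFrame(np.random.rand(20, 20), columns=au_columns)
print(df.shape)  # (20, 20)
```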