Jeffrey M. Girard, Wen-Sheng Chu, László A. Jeni, Jeffrey F. Cohn, Fernando De la Torre, Michael A. Sayette
Proceedings of the IEEE Conference on Automatic Face & Gesture Recognition, Pages 581-588
Publication year: 2017

Despite the important role that facial expressions play in interpersonal communication and our knowledge that interpersonal behavior is influenced by social context, no currently available facial expression database includes multiple interacting participants. The Sayette Group Formation Task (GFT) database addresses the need for well-annotated video of multiple participants during unscripted interactions. The database includes 172,800 video frames from 96 participants in 32 three-person groups. To aid in the development of automated facial expression analysis systems, GFT includes expert annotations of FACS occurrence and intensity, facial landmark tracking, and baseline results for linear SVM, deep learning, active patch learning, and personalized classification. Baseline performance is quantified and compared using identical partitioning and a variety of metrics (including means and confidence intervals). The highest performance scores were found for the deep learning and active patch learning methods. Learn more at http://osf.io/7wcyz
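The abstract notes that baseline performance is reported as means with confidence intervals over identical partitions. As a minimal illustrative sketch (the function name, the per-fold F1 values, and the normal-approximation interval are all assumptions for illustration, not GFT results), such a summary could be computed like this:

```python
# Hypothetical sketch: summarizing per-fold AU detection scores with a mean
# and a normal-approximation 95% confidence interval. The scores below are
# made-up illustrative numbers, not results from the GFT baselines.
import math
import statistics

def mean_ci(scores, z=1.96):
    """Return (mean, half-width) of a normal-approximation 95% CI."""
    m = statistics.mean(scores)
    sd = statistics.stdev(scores)          # sample standard deviation
    half = z * sd / math.sqrt(len(scores)) # CI half-width
    return m, half

fold_f1 = [0.61, 0.58, 0.64, 0.59, 0.62]   # illustrative per-fold F1 scores
m, h = mean_ci(fold_f1)
print(f"F1 = {m:.3f} ± {h:.3f}")
```

Reporting intervals rather than point estimates, as the paper does, makes it possible to tell whether the gap between two methods (e.g., linear SVM versus deep learning) exceeds partition-to-partition variability.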

3 Responses to “Sayette group formation task (GFT) spontaneous facial expression database”

  1. Jeffrey Girard

    View this project on the Open Science Framework: https://osf.io/7wcyz/

  2. Wu Shuo

    Dear sir, I am a graduate student working in the field of action unit recognition. Thank you and your team for making the excellent GFT dataset. I would like to apply for access to the GFT dataset to enrich and improve my experiments, and I look forward to your help.
