O. Aran and D. Gatica-Perez, One of a kind: Inferring personality impressions in meetings, Proceedings of the 15th ACM International Conference on Multimodal Interaction, ICMI '13, pp.11-18, 2013.
DOI : 10.1145/2522848.2522859

M. P. Aylett and C. J. Pidcock, The CereVoice Characterful Speech Synthesiser SDK, pp.413-414, 2007.
DOI : 10.1007/978-3-540-74997-4_65

J. N. Bailenson and N. Yee, Digital Chameleons: Automatic Assimilation of Nonverbal Gestures in Immersive Virtual Environments, Psychological Science, vol.16, issue.10, pp.814-819, 2005.
DOI : 10.1111/j.0956-7976.2005.01619.x

A. Beall, J. Bailenson, J. Loomis, J. Blascovich, and C. Rex, Non-zero-sum mutual gaze in collaborative virtual environments, Proceedings of HCI International, 2003.

L. Breiman, Random forests, Machine Learning, vol.45, issue.1, pp.5-32, 2001.
DOI : 10.1023/A:1010933404324

D. R. Carney, J. A. Hall, and L. LeBeau, Beliefs about the nonverbal expression of social power, Journal of Nonverbal Behavior, vol.29, issue.2, pp.105-123, 2005.
DOI : 10.1007/s10919-005-2743-z

J. Cassell, Embodied conversational interface agents, Communications of the ACM, vol.43, issue.4, pp.70-78, 2000.
DOI : 10.1145/332051.332075

A. Cerekovic, O. Aran, and D. Gatica-Perez, Rapport with Virtual Agents: What Do Human Social Cues and Personality Explain?, IEEE Transactions on Affective Computing, vol.8, issue.3, 2016.
DOI : 10.1109/TAFFC.2016.2545650

D. M. Dehn and S. van Mulken, The impact of animated interface agents: a review of empirical research, International Journal of Human-Computer Studies, vol.52, issue.1, pp.1-22, 2000.
DOI : 10.1006/ijhc.1999.0325

P. Ekman, W. V. Friesen, and J. C. Hager, The facial action coding system, 2002.

N. Fourati and C. Pelachaud, Relevant body cues for the classification of emotional body expression in daily actions, 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), pp.267-273, 2015.
DOI : 10.1109/ACII.2015.7344582

M. L. Hecht, J. A. DeVito, and L. K. Guerrero, Perspectives on nonverbal communication: Codes, functions, and contexts, The nonverbal communication reader, pp.3-18, 1999.

Y. Kim, H. Lee, and E. M. Provost, Deep learning for robust feature generation in audiovisual emotion recognition, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp.3687-3691, 2013.
DOI : 10.1109/ICASSP.2013.6638346

URL : http://www-personal.umich.edu/~yelinkim/YKimPapers/KimICASSP2013b.pdf

S. Kopp, L. Gesellensetter, N. C. Krämer, and I. Wachsmuth, A Conversational Agent as Museum Guide – Design and Evaluation of a Real-World Application, International Workshop on Intelligent Virtual Agents, pp.329-343, 2005.
DOI : 10.1007/11550617_28

URL : https://pub.uni-bielefeld.de/download/1601743/2655217

N. Krämer, S. Kopp, C. Becker-Asano, and N. Sommer, Smile and the world will smile with you – The effects of a virtual agent's smile on users' evaluation and behavior, International Journal of Human-Computer Studies, vol.71, issue.3, pp.335-349, 2013.
DOI : 10.1016/j.ijhcs.2012.09.006

N. C. Krämer, Social Effects of Virtual Assistants. A Review of Empirical Results with Regard to Communication, Proceedings of the international conference on Intelligent Virtual Agents (IVA), pp.507-508, 2008.
DOI : 10.1007/978-3-540-85483-8_63

R. E. Mayer and C. S. DaPra, An embodiment effect in computer-based learning with animated pedagogical agents, Journal of Experimental Psychology: Applied, vol.18, issue.3, pp.239-252, 2012.
DOI : 10.1037/a0028616

G. McKeown, M. Valstar, R. Cowie, M. Pantic, and M. Schröder, The SEMAINE Database: Annotated Multimodal Records of Emotionally Colored Conversations between a Person and a Limited Agent, IEEE Transactions on Affective Computing, vol.3, issue.1, pp.5-17, 2012.
DOI : 10.1109/T-AFFC.2011.20

E. Mower, D. J. Feil-Seifer, M. J. Matarić, and S. Narayanan, Investigating Implicit Cues for User State Estimation in Human-Robot Interaction Using Physiological Measurements, RO-MAN 2007, The 16th IEEE International Symposium on Robot and Human Interactive Communication, pp.1125-1130, 2007.
DOI : 10.1109/ROMAN.2007.4415249

C. Nass and Y. Moon, Machines and Mindlessness: Social Responses to Computers, Journal of Social Issues, vol.56, issue.1, pp.81-103, 2000.
DOI : 10.1111/0022-4537.00153

M. Ochs, R. Niewiadomski, and C. Pelachaud, How a virtual agent should smile? Morphological and dynamic characteristics of virtual agent's smiles, Proceedings of the international conference on Intelligent Virtual Agents (IVA), pp.427-440, 2010.
DOI : 10.1007/978-3-642-15892-6_47

M. Ochs, R. Niewiadomski, P. Brunet, and C. Pelachaud, Smiling virtual agent in social context, Cognitive Processing, vol.13, pp.519-532, 2012.
DOI : 10.1007/s10339-011-0424-x

URL : https://hal.archives-ouvertes.fr/hal-01793293

N. M. Oliver, B. Rosario, and A. P. Pentland, A Bayesian computer vision system for modeling human interactions, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.22, issue.8, pp.831-843, 2000.
DOI : 10.1109/34.868684

URL : http://www.doc.ic.ac.uk/~xh1/Referece/Others/bayesian-model-for-interaction.pdf

D. Pardo, B. L. Mencia, Á. H. Trapote, and L. Hernández, Non-verbal communication strategies to improve robustness in dialogue systems: a comparative study, Journal on Multimodal User Interfaces, vol.52, issue.6, pp.285-297, 2009.
DOI : 10.1007/1-4020-2730-3_4

C. Pelachaud, Studies on gesture expressivity for a virtual agent, Speech Communication, vol.51, issue.7, pp.630-639, 2009.
DOI : 10.1016/j.specom.2008.04.009

L. S. Rashotte, What Does That Smile Mean? The Meaning of Nonverbal Behaviors in Social Interaction, Social Psychology Quarterly, vol.65, issue.1, pp.92-102, 2002.
DOI : 10.2307/3090170

B. Reeves and C. Nass, The media equation: How people treat computers, television, and new media like real people and places, 1996.

C. Strobl, A. Boulesteix, T. Kneib, T. Augustin, and A. Zeileis, Conditional Variable Importance for Random Forests, BMC Bioinformatics, vol.9, issue.1, 2008.
DOI : 10.1186/1471-2105-9-307

URL : https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/1471-2105-9-307?site=bmcbioinformatics.biomedcentral.com

A. Vinciarelli, Speakers Role Recognition in Multiparty Audio Recordings Using Social Network Analysis and Duration Distribution Modeling, IEEE Transactions on Multimedia, vol.9, issue.6, pp.1215-1226, 2007.
DOI : 10.1109/TMM.2007.902882

URL : http://www.idiap.ch/~vincia/papers/roleRecognition.pdf

A. Vinciarelli, M. Pantic, and H. Bourlard, Social signal processing: Survey of an emerging domain, Image and Vision Computing, vol.27, issue.12, pp.1743-1759, 2009.
DOI : 10.1016/j.imavis.2008.11.007

URL : http://www.doc.ic.ac.uk/~maja/IVCJ-SSPsurvey-FINAL.pdf

A. Vinciarelli and A. S. Pentland, New Social Signals in a New Interaction World: The Next Frontier for Social Signal Processing, IEEE Systems, Man, and Cybernetics Magazine, vol.1, issue.2, pp.10-17, 2015.
DOI : 10.1109/MSMC.2015.2441992

X. Xiong and F. De la Torre, Supervised Descent Method and Its Applications to Face Alignment, 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp.532-539, 2013.
DOI : 10.1109/CVPR.2013.75

URL : http://www.ri.cmu.edu/pub_files/2013/5/main.pdf

Z. Yu, D. Gerritsen, A. Ogan, A. W. Black, and J. Cassell, Automatic prediction of friendship via multi-modal dyadic features, Proceedings of SIGDIAL, pp.51-60, 2013.