In this article, we propose an architecture for a socio-affective Embodied Conversational Agent (ECA). The computational models that make up this architecture enable an ECA to express emotions and social attitudes during an interaction with a user. Based on corpora of actors expressing emotions, models have been defined to compute the emotional facial expressions of an ECA and the characteristics of its body movements. An approach based on user perception studies has been used to design models that define how an ECA should adapt its non-verbal behavior according to the social attitude it wants to display and to the behavior of its interlocutor. The emotions and social attitudes to be expressed are computed by cognitive models also presented in this article.
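
As a rough illustration of the data flow described above, the sketch below wires together hypothetical components: a cognitive model produces the emotion and social attitude to express, and a behavior model (standing in for the corpus-based and perception-based models of the article) turns them into facial-expression and body-movement parameters adapted to the interlocutor's observed behavior. All class and function names, features, and numeric values are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the described architecture's data flow; names and
# values are illustrative assumptions, not the authors' implementation.
from dataclasses import dataclass


@dataclass
class InterlocutorState:
    """Observed non-verbal behavior of the user (assumed features)."""
    smiling: bool
    gaze_at_agent: bool


@dataclass
class AffectiveState:
    """Output of the cognitive models: what the ECA intends to express."""
    emotion: str   # e.g. "joy", "neutral"
    attitude: str  # e.g. "friendly", "reserved"


@dataclass
class NonVerbalBehavior:
    """Parameters handed to the animation layer."""
    facial_expression: str
    movement_amplitude: float  # characteristic of body movements
    movement_speed: float


def cognitive_model(interlocutor: InterlocutorState) -> AffectiveState:
    """Toy stand-in for the cognitive models computing emotion and attitude."""
    emotion = "joy" if interlocutor.smiling else "neutral"
    attitude = "friendly" if interlocutor.gaze_at_agent else "reserved"
    return AffectiveState(emotion=emotion, attitude=attitude)


def behavior_model(affect: AffectiveState,
                   interlocutor: InterlocutorState) -> NonVerbalBehavior:
    """Toy stand-in for the behavior models: maps the intended emotion and
    attitude, plus the interlocutor's behavior, to non-verbal parameters."""
    amplitude = 0.8 if affect.attitude == "friendly" else 0.4
    speed = 1.2 if affect.emotion == "joy" else 0.9
    # Adapt to the interlocutor: reduce movement when the user looks away.
    if not interlocutor.gaze_at_agent:
        amplitude *= 0.5
    return NonVerbalBehavior(facial_expression=affect.emotion,
                             movement_amplitude=amplitude,
                             movement_speed=speed)


if __name__ == "__main__":
    user = InterlocutorState(smiling=True, gaze_at_agent=True)
    affect = cognitive_model(user)
    print(affect, behavior_model(affect, user))
```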