DOI: 10.1145/3011263.3011270

Attitude recognition of video bloggers using audio-visual descriptors

Published: 12 November 2016

ABSTRACT

In social media, vlogs (video blogs) are a form of unidirectional communication in which vloggers (video bloggers) convey their messages (opinions, thoughts, etc.) to a potential audience that cannot give them feedback in real time. In this kind of communication, a video blogger's non-verbal behaviour and personality impression tend to influence viewers' attention, because non-verbal cues are correlated with the messages the vlogger conveys. In this study, we use acoustic and visual features (body movements captured by low-level visual descriptors) to predict six different attitudes (amusement, enthusiasm, friendliness, frustration, impatience and neutral) annotated in the speech of 10 video bloggers. Automatic attitude detection can be helpful in a scenario where a machine must automatically give bloggers feedback on their performance, in terms of how well they engage the audience by displaying certain attitudes. Attitude recognition models are trained using the random forest classifier. Results show that: 1) acoustic features provide better accuracy than visual features; 2) while fusion of audio and visual features does not increase overall accuracy, it improves the results for some attitudes and subjects; and 3) densely extracted histograms of optical flow (HOF) provide better results than the other visual descriptors. A three-class problem (positive, negative and neutral attitudes) has also been defined. Results for this setting show that feature fusion degrades overall classifier accuracy, and that the classifiers perform better on the original six-class problem than on the three-class setting.
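As a concrete illustration of the pipeline the abstract describes, the following is a minimal sketch of training and evaluating a random forest on fused audio-visual features. The array names, feature dimensionalities, the synthetic labels, the simple concatenation-based early fusion, and the grouping of the six attitudes into positive/negative/neutral valence classes are all illustrative assumptions, not the authors' exact setup; in practice the acoustic features would come from a toolkit such as openSMILE and the visual features from densely extracted descriptors such as HOF.

```python
# Minimal sketch: random forest attitude recognition with early
# audio-visual fusion (all data, dimensions and labels here are
# synthetic placeholders, not the study's actual features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_segments = 200                                # annotated speech segments
X_audio = rng.normal(size=(n_segments, 88))     # e.g. openSMILE functionals
X_visual = rng.normal(size=(n_segments, 108))   # e.g. encoded dense HOF

attitudes = np.array(["amusement", "enthusiasm", "friendliness",
                      "frustration", "impatience", "neutral"])
y_six = rng.choice(attitudes, size=n_segments)  # six-class labels

# Early fusion: concatenate the per-segment audio and visual vectors.
X_fused = np.hstack([X_audio, X_visual])

clf = RandomForestClassifier(n_estimators=500, random_state=0)
acc6 = cross_val_score(clf, X_fused, y_six, cv=5).mean()
print(f"six-class fused accuracy: {acc6:.3f}")

# Three-class setting: collapse the six attitudes into valence groups
# (this particular grouping is an assumed, plausible mapping).
valence = {"amusement": "positive", "enthusiasm": "positive",
           "friendliness": "positive", "frustration": "negative",
           "impatience": "negative", "neutral": "neutral"}
y_three = np.array([valence[a] for a in y_six])
acc3 = cross_val_score(clf, X_fused, y_three, cv=5).mean()
print(f"three-class fused accuracy: {acc3:.3f}")
```

Comparing the two cross-validated scores mirrors the paper's six-class versus three-class comparison; running the same evaluation on X_audio, X_visual and X_fused separately would reproduce the audio-only/visual-only/fusion contrast reported in the results.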


  • Published in

    MA3HMI '16: Proceedings of the Workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction
    November 2016
    64 pages
    ISBN:9781450345620
    DOI:10.1145/3011263

    Copyright © 2016 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Qualifiers

    • short-paper
