Multimodal Mobile Affective Computing
While on the move, users pass through changing contexts as they experience everyday life.
Affective states, even when expressed naturally through facial muscles, voice tone, or physiological signals, are difficult for mobile devices to observe and interpret reliably in the wild. While lab results on desktop computers are very promising, recognising and understanding users' affective states on the move and outside the lab, using multiple modalities (visual, physiological), poses interesting challenges for mobile and wearable devices.
Sensor fusion, feature extraction, and machine learning techniques are expected to be combined to provide state-of-the-art solutions for multimodal recognition of affective states on mobile devices.
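As a minimal sketch of what such a combination could look like, the following illustrates feature-level sensor fusion: per-modality features are concatenated into one vector and classified with a simple nearest-centroid rule. The feature choices, class names, and centroids are illustrative assumptions, not part of the seminar material.

```python
import numpy as np

def extract_features(window):
    """Toy per-modality features: mean and standard deviation of a signal window."""
    return np.array([np.mean(window), np.std(window)])

def fuse(visual_window, physio_window):
    """Feature-level fusion: concatenate feature vectors from each modality."""
    return np.concatenate([extract_features(visual_window),
                           extract_features(physio_window)])

def nearest_centroid(x, centroids):
    """Assign the fused vector to the closest class centroid."""
    labels = list(centroids)
    dists = [np.linalg.norm(x - centroids[label]) for label in labels]
    return labels[int(np.argmin(dists))]

# Hypothetical usage with synthetic windows:
fused = fuse(visual_window=[0.0, 0.0, 0.0, 0.0],   # e.g. smile-intensity samples
             physio_window=[60.0, 60.0])           # e.g. heart-rate samples (bpm)
centroids = {"calm": np.array([0.0, 0.0, 60.0, 0.0]),
             "stressed": np.array([1.0, 1.0, 90.0, 5.0])}
prediction = nearest_centroid(fused, centroids)
```

In a real system the centroids would be learned from labelled data and the features would come from actual signal-processing pipelines; the sketch only shows where fusion sits between feature extraction and classification.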
This seminar will be supervised by Prof. Dr. Iulian Benta from Cluj, Romania, Dr. Christian Bartelt, and Christian Schreckenberger.
10. September 2018: Please register for the kick-off meeting by sending two preferred topics and a list of your completed courses (Transcript of Records; CV optional) via e-mail to Christian Schreckenberger.
12. September 2018: As we can only offer a limited number of places, you will be informed whether you can participate in this seminar.
14. September 2018: Latest possible drop-out date without a penalty (A drop-out after this date will be graded with 5.0)
17. September 2018: Milestone 1 - Kick-Off with supervisors
17-19. October 2018: Milestone 2 - Intermediate Result Presentation (30 % of your final grade)
03. December 2018: Milestone 3 - Submission of your seminar thesis (70 % of your final grade)
- Missing a milestone will be graded with a 5.0.
- This seminar is open to Bachelor and Master students focusing on "Business Informatics" and "Data Science". Students enrolled in the "Mannheim Master in Data Science" are also highly welcome to apply for this seminar.
- For Master students enrolled in the "Business Informatics" programme only: this seminar will be held as module "CS 704" and is thus only applicable to the specialization tracks "Information Technology", "System Design and Development", and "Data and Web Science". If you want to pursue another track, please contact Christian Schreckenberger before the start of the seminar.
- Wearables for Affective Computing
- Introduction: Wearable devices are becoming easier to program and increasingly widespread. By combining the different signals wearables provide, we may extract the semantics of the user's context as well as the user's affective states. Challenges in signal processing on low-power processors, machine learning on mobile devices, and information mobility optimisation schemes are to be investigated by the student, closely supervised by the researcher.
- Goal and Objective: Comparison and evaluation of existing wearable devices and their efficiency in recognising and understanding affective states in different contexts.
- Mobile Smart Phones and Tablets for Affective Computing
- Introduction: To prepare for the intensive image processing and machine learning (e.g. deep neural networks) needed for affective computing, mobile devices (smartphones and tablets) should be able to support, or delegate, some signal processing in real time. Processor FLOPS, dynamic memory, communication bandwidth, and other applications running in parallel are the parameters that determine whether facial expression recognition should run locally or remotely. The outcome of the study could be a set of rules, or an intelligent decision system, that handles these constraints to give the user the optimal facial expression recognition system.
- Goal and Objective: Comparison and evaluation of the existing and simulated hardware and operating systems for Smart Phones and Tablets.
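The "set of rules" mentioned in the introduction could be sketched as a simple local-vs-remote decision function. All thresholds and parameter names below are illustrative assumptions for the sketch, not measured device values.

```python
def choose_processing_site(flops_gflops, free_memory_mb,
                           bandwidth_mbps, parallel_apps):
    """Decide whether to run facial expression recognition on-device
    ("local") or delegate it to a server ("remote").

    Thresholds are illustrative: a real decision system would calibrate
    them per device, or learn them from profiling data.
    """
    device_is_weak = flops_gflops < 10 or free_memory_mb < 512
    device_is_busy = parallel_apps > 3
    link_is_good = bandwidth_mbps >= 5  # enough to stream camera frames

    # Delegate only when the device struggles AND the link can carry frames;
    # otherwise prefer local processing (no network latency, better privacy).
    if (device_is_weak or device_is_busy) and link_is_good:
        return "remote"
    return "local"

# Hypothetical usage:
weak_phone = choose_processing_site(flops_gflops=2, free_memory_mb=256,
                                    bandwidth_mbps=20, parallel_apps=1)
strong_phone = choose_processing_site(flops_gflops=50, free_memory_mb=2048,
                                      bandwidth_mbps=1, parallel_apps=0)
```

An "intelligent decision system" would replace these hand-set thresholds with a learned model over the same parameters.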
- Understanding My Stressing Situations
- Introduction: In everyday life we experience stressful situations. Sometimes we are not aware of all the stressors (causal factors of stress) and thus cannot overcome them. By understanding the personal aspects of stress in different contexts (measured by sensors, logged, and intelligently mined for pattern recognition), we can be warned in advance when similar stressful conditions tend to occur. A semantic representation of the context (location, temperature, air pressure, social context) and of affective, mental, and physiological states (e.g. stress) can help us build a general and yet personalised user stress profile.
- Goal and Objective: A comparison between the capabilities of the Polar M600 and the Empatica E4 (heart rate, electrodermal activity) in monitoring stress, and an evaluation of how well the user's context (situation) can be understood from different sensor values.
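To make the comparison concrete, both devices' streams can be reduced to the same window-level features before evaluation. The sketch below computes a few features commonly used in stress detection (mean heart rate, a crude heart-rate-variability proxy, mean EDA level, and a simple EDA peak count); the peak threshold and feature set are illustrative assumptions, not device specifications.

```python
import numpy as np

def stress_features(hr_bpm, eda_microsiemens, peak_rise=0.05):
    """Window-level features for stress monitoring from heart rate (bpm)
    and electrodermal activity (microsiemens).

    The EDA peak count is a crude proxy for skin-conductance responses:
    a sample counts as a peak if it exceeds both neighbours by `peak_rise`.
    """
    hr = np.asarray(hr_bpm, dtype=float)
    eda = np.asarray(eda_microsiemens, dtype=float)
    peaks = sum(
        1 for i in range(1, len(eda) - 1)
        if eda[i] > eda[i - 1] and eda[i] >= eda[i + 1]
        and eda[i] - min(eda[i - 1], eda[i + 1]) > peak_rise
    )
    return {"hr_mean": float(hr.mean()),
            "hr_std": float(hr.std()),       # crude HRV proxy
            "eda_mean": float(eda.mean()),
            "eda_peaks": peaks}

# Hypothetical usage with a short synthetic window:
feats = stress_features(hr_bpm=[60, 62, 61],
                        eda_microsiemens=[0.30, 0.50, 0.31, 0.60, 0.32])
```

Feeding the M600 and E4 streams through one such shared feature extractor lets the devices be compared on equal footing, separately from any downstream classifier.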
- Mobile Facial Expression Recognition
- Introduction: In a mobile environment, users facing the camera move their heads (away/pitch/roll/yaw) under variable lighting and backgrounds. To support reliable real-time image processing on resource-limited mobile devices, some optimisation solutions should be adopted. A comparison between facial expression recognition algorithms on real/simulated mobile devices is expected to indicate the best solution.
- Goal and Objective: A theoretical comparison between facial expression recognition algorithms, followed by an actual evaluation on real/simulated mobile devices.
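For the evaluation part of this topic, one way to compare algorithms on a (real or simulated) device is a small benchmarking harness that measures per-frame latency against a real-time budget. The recognizer interface, frame representation, and target frame rate below are placeholders chosen for the sketch.

```python
import time

def benchmark(recognizer, frames, fps_target=15):
    """Run `recognizer` over all `frames` and report the average latency
    and whether it fits a real-time budget of 1/fps_target seconds/frame.

    `recognizer` is any callable taking one frame; `frames` is any
    sequence of frames (here just placeholder objects).
    """
    start = time.perf_counter()
    for frame in frames:
        recognizer(frame)
    elapsed = time.perf_counter() - start
    per_frame = elapsed / len(frames)
    return {"ms_per_frame": per_frame * 1000.0,
            "real_time": per_frame <= 1.0 / fps_target}

# Hypothetical usage with a trivial stand-in recognizer:
dummy_frames = [[0.0] * 100 for _ in range(50)]
report = benchmark(lambda frame: sum(frame), dummy_frames)
```

Running the same harness with different recognition algorithms (and on hardware of different FLOPS classes) would yield the comparison table the topic asks for.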