Video conferencing has become an invaluable tool for society and is essential in many professional and private settings. Because participants interact purely virtually, many aspects of “real life” are lost: feedback channels are missing or altered, and communication is consequently less intimate. In particular, user engagement in a video conference is often difficult for others to assess, which impedes communication in general. This applies not only to small video conferences but even more so to conferences with a one-to-many topology. Measuring user engagement accurately would provide a useful feedback channel to the speaker or organizer of a video conference and would also enable many value-added services, such as predicting turn-taking in natural conversational speech and assisting AI-enabled video conference management. One of the challenges in measuring user engagement is to measure it outside an artificial lab setting and for an individual rather than an aggregate. In this work, we assess whether the problem of measuring user engagement is even well-defined. We also perform experiments with commercially available services and machine learning algorithms proposed by academia to test their ability to measure user engagement in a realistic non-lab setting. Furthermore, we propose an outlier-based algorithm and validate its superior performance against existing solutions.
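The abstract names an outlier-based algorithm without describing it. As a rough illustration of the general idea only, not the paper's actual method, the sketch below flags time windows whose per-participant engagement features deviate strongly from that participant's own baseline via z-scores; the feature layout, windowing, and threshold are all assumptions introduced here for illustration.

```python
# Illustrative sketch only: generic z-score outlier detection over
# per-participant engagement features. This is NOT the algorithm
# proposed in the paper; names and the threshold are hypothetical.
import numpy as np

def engagement_outlier_scores(features: np.ndarray, z_threshold: float = 2.5):
    """Score time windows by deviation from the participant's baseline.

    features: array of shape (n_windows, n_features), e.g. gaze or
              head-pose statistics aggregated per time window.
    Returns (scores, is_outlier): per-window mean |z| and a boolean mask.
    """
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-8   # guard against zero variance
    z = np.abs((features - mean) / std)
    scores = z.mean(axis=1)             # one score per time window
    return scores, scores > z_threshold

# Usage on synthetic data: 100 windows of 4 features, one injected anomaly
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
X[10] += 5.0  # simulate a disengagement-like deviation
scores, mask = engagement_outlier_scores(X)
print(int(mask.sum()), "outlier windows flagged")
```

A per-participant baseline (rather than a population-wide one) matches the abstract's emphasis on measuring engagement for an individual rather than an aggregate, though the paper's actual modeling choices may differ.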