
Wednesday, December 17, 2014

Review: The Effect of Interactivity on Learning Physical Actions in Virtual Reality

#Title#
The Effect of Interactivity on Learning Physical Actions in Virtual Reality

#Authors#
Jeremy Bailenson, Kayur Patel, Alexia Nielsen, Ruzena Bajcsy, Sang-Hack Jung & Gregorij Kurillo

#Venue#
Media Psychology, Volume 11, Issue 3, 2008

#DOI#

#Abstract#
Virtual reality (VR) offers new possibilities for learning, specifically for training individuals to perform physical movements such as physical therapy and exercise. The current article examines two aspects of VR that uniquely contribute to media interactivity: the ability to capture and review physical behaviour and the ability to see one's avatar rendered in real time from third person points of view. In two studies, we utilised a state-of-the-art, image-based tele-immersive system, capable of tracking and rendering many degrees of freedom of human motion in real time. In Experiment 1, participants learned better in VR than in a video learning condition according to self-report measures, and the cause of the advantage was seeing one's avatar stereoscopically in the third person. In Experiment 2, we added a virtual mirror in the learning environment to further leverage the ability to see oneself from novel angles in real time. Participants learned better in VR than in video according to objective performance measures. Implications for learning via interactive digital media are discussed.

#Comments#

The interesting idea here, for me, is the possibility of understanding how movements are better encoded due to visual and proprioceptive feedback.  The question then is whether this is a two-way mechanism: if the same stimuli are presented later, will they bring back a better memory?  In addition, is it a movement encoding, and can that encoding be accessed better by an expert?  Here the participants are novices in training; what happens if a person already knows the moves but cannot explain them?  Can they explain them better if they have an immersive 3D view of their own actions?  It seems a no-brainer that it should work in that case.  The real question is whether it will work as well when the visualisation is not of the user of the tool at that moment in time.

They use the term "Sea of Cameras"; I would have thought a forest would be more apt. :-)

Experiment 1 

Participants rated the VR learning experience more positively and found the VR trainer to be more credible.

Experiment 2

The VR version of the learning system significantly outperformed the video version on objective performance measures.

Interesting Quote:

"In fact, there is ample research dedicated to the discordance among self report measures and behavioural measures when measuring behaviour in virtual reality (Bailenson et al., 2004, Bailenson,
Swinth, et al., 2005;  Slater, 2005), concluding that neither self-report nor behavioural measures are sufficient, and that only by examining a host of measures can one assess virtual behaviour"

This shows that we need repeated measures of both behavioural and subjective kinds to obtain insight into the effectiveness of such VR interfaces.  I need to keep this in mind for future experiments.
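As a rough illustration (my own sketch, not anything from the paper), one way to quantify the discordance the quote describes is simply to correlate a self-report measure with a behavioural measure across participants; a weak correlation means neither can stand in for the other.  The numbers below are invented placeholders:

```python
import statistics

# Hypothetical data: one subjective and one behavioural measure per
# participant. A low correlation between them is the "discordance"
# the quoted papers report.
self_report = [4.2, 3.8, 4.5, 2.9, 4.1]       # e.g. Likert-scale credibility rating
behavioural = [0.61, 0.74, 0.55, 0.70, 0.58]  # e.g. task performance score

def zscores(xs):
    mu, sd = statistics.mean(xs), statistics.stdev(xs)
    return [(x - mu) / sd for x in xs]

# Sample Pearson correlation via standardised scores.
n = len(self_report)
r = sum(a * b for a, b in zip(zscores(self_report), zscores(behavioural))) / (n - 1)
print(f"self-report vs behavioural correlation: r = {r:.2f}")
```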

The image reproduction methods used in the paper are poor (think Kinect-level image quality), so the quality of the images in the Tai Chi lessons may have had an effect on the results.  Also, participants had to face forward to compare themselves in the VR case; a CAVE or an HMD might be better here, since either allows viewing from any direction.  Using an Oculus might produce better results because the egocentre would be situated within the training space.


#ImportantRefs#

Bailenson, J. N., Aharoni, E., Beall, A. C., Guadagno, R. E., Dimov, A., & Blascovich, J. (2004). Comparing behavioral and self-report measures of embodied agents’ social presence in immersive virtual environments. Proceedings of the 7th Annual International Workshop on PRESENCE, October 13–15, Valencia, Spain.

Saturday, May 31, 2014

Paper Review: Learning to Manipulate and Categorize in Human and Artificial Agents

#Title#
Learning to Manipulate and Categorize in Human and Artificial Agents

#Authors#
Giuseppe Morlino, Claudia Gianelli, Anna M. Borghi, Stefano Nolfi

#Venue#
Cognitive Science (2014) 1–26

#DOI#
DOI: 10.1111/cogs.12130

#Abstract#
This study investigates the acquisition of integrated object manipulation and categorization abilities through a series of experiments in which human adults and artificial agents were asked to learn to manipulate two-dimensional objects that varied in shape, color, weight, and color intensity. The analysis of the obtained results and the comparison of the behavior displayed by human and artificial agents allowed us to identify the key role played by features affecting the agent/environment  interaction, the relation between category and action development, and the role of cognitive biases originating from previous knowledge.

#Comments#
The paper looks at the effect of action on categorisation.  They present evidence that categorisation is grounded in the sensorimotor system, in line with current experiments and theory, and again suggest the central role of action in cognition.

They also look at the issues around how categories enable the flexible usage of objects, and how the grasping of objects changes according to the tasks needed, as per the classic idea of affordances by Gibson (1979).

Important quote: "Affordances are proposed to be the product of the conjunction, in the brain, of repeated visuomotor experiences." Probably a no-brainer to the design community, but important to me, as I need to see this generalise to virtual worlds.  It should be noted that the systems used in this experiment were synthetic, so the effects should generalise to a virtual world, as it is simply shapes and colours with physical properties.  However, there is a history of visual search research with simple shapes not generalising to real images.  This must be considered in any assumptions of efficacy in virtual world simulations.

The experiments involved manipulating 2D objects on screen with a mouse pointer in placing and shaking tasks.  The weight of the objects is aligned with the categories, and some categories are also based on colour, blinking, and shape.  The human participants (20) were compared to neural network agents.

"The results indicated the discriminative features affecting the agent environment interaction such as weight facilitate the acquisition of the required categorisation abilities with respect to alternative features that are equally informative but that do not affect the outcome of the agent actions."  This leads them to the conclusion that the categorisation for both humans and agents, not withstanding any other factors, is affected by the embodiment of the activity; weight required interaction, not just observation.

The results supported a model whereby interaction with light versus heavy objects produces categories far more effectively than the other factors.  Embodied action thus has a great effect on categorisation.  Whether it affects every category is still uncertain, as the other, purely visual features (grounded cognition effects) still caused categories to form, just later in the training.

They consider this to contribute to a STRONG position of embodiment being central to the creation of categories, and not just being a more peripheral contributor.

They also note a shape effect with humans: participants used a curvilinear path with circles and a rectilinear path with squares.  Thus previous memories of the objects influenced their actions, and hence the categories.

They also note that the categories arise from the interaction of the agent with the environment, rather than exclusively from top-down or bottom-up processes; they are neither overgeneralised nor too finely grained, but develop as a dynamic process between agent and environment.

While this is a categorisation task and not a memory task, one still has to wonder, for my work, whether the memory of a process would be enhanced much more by embodied interactions than by visual interactions alone.  One could hypothesise that if a category is formed more strongly through embodied action, then the memory of that category (if it maps to, say, activity specifications) should be stronger when acting it out.  So an Oculus-and-Kinect space should measurably work better in process memory tasks than a purely visual space, with both working better than a simple interview.

Something to think about I guess.

#ImportantRefs#