VIDEO: Brain-Computer Interface Demo

In this demonstration, Tami Griffith of the U.S. Army Research Laboratory's Simulation & Training Technology Center maneuvers through virtual space using a brain-computer interface and a head-mounted display. Tami controls the virtual world entirely through these two headsets. What she sees in the head-mounted display is also projected on the screen to the left.

The brain-computer interface provides movement (forward, back, stop) while the head-mounted display injects directional information: as the user turns her head, the corresponding heading is fed into the virtual environment. The brain-computer interface is wireless; the head-mounted display is wired, and it also provides 3D sound and a microphone. The total cost of the setup is $2,100.
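
The division of labor described above, with the BCI supplying go/stop and the headset's tracked orientation supplying the heading, can be sketched as a small function. This is a hypothetical illustration, not the project's actual code; the command names, speeds, and coordinate convention are assumptions.

```python
import math

# Hypothetical mapping from a mental command to a forward speed
# (meters per second). "push" moves the avatar forward, "pull" moves
# it back; any other (neutral) thought stops it.
COMMAND_SPEEDS = {"push": 1.0, "pull": -1.0}

def movement_vector(command: str, head_yaw_deg: float) -> tuple[float, float]:
    """Combine a BCI command with head orientation into an (x, y) velocity.

    The BCI supplies only the magnitude and sign of motion; the
    head-mounted display's tracked yaw supplies the direction.
    """
    speed = COMMAND_SPEEDS.get(command, 0.0)  # neutral thought -> stop
    yaw = math.radians(head_yaw_deg)
    return (speed * math.sin(yaw), speed * math.cos(yaw))

# Facing straight ahead (yaw 0) and thinking "push" moves forward.
print(movement_vector("push", 0.0))   # (0.0, 1.0)
# A neutral thought stops the avatar regardless of heading.
print(movement_vector("rest", 90.0))  # (0.0, 0.0)
```

In this sketch the two input streams stay independent, which matches the demo: turning the head changes direction without requiring any new mental command, and stopping requires only returning to a neutral thought.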

The project is part of Tami's exploration of ways to improve a user's sense of presence using low-cost, off-the-shelf technology, with current emphasis on neural navigation.

US Army Research, Development and Engineering Command



4 Responses to VIDEO: Brain-Computer Interface Demo

  1. This is remarkable… it reminds me of an article I saw on U of Washington researchers building a robot controlled by brain waves.

  2. JimT says:

    Well, Tami's brain controls her neck to turn her head and her legs to turn her body by stepping in place. Since the virtual locomotion is controlled by the tracked direction the head-mounted display faces, then yes, in a sense it is a Brain-Computer Interface. By that reasoning, all user interfaces are Brain-Computer Interfaces.

    “The brain-computer interface provides movement (forward, back,
    stop) while the head-mounted display injects directional
    information.”

    The only time I see her stop is when she collides with virtual surfaces.
    She does not appear to move backwards in this clip.

    • Tami Griffith says:

      Great points. I had the sensitivity for “push” on the Emotiv cranked up a bit to make it very easy to move forward. The orientation of my head determines which direction I move. Thinking “push” or “pull” is what makes the avatar move forward and back. Reactions with the BCI tend to be slow, so it's not easy to stop immediately. Our next experiment will add stopping points, points to kneel and go prone (which we currently do using a Wii Remote and Nunchuk through GlovePIE), walking backwards, etc.

      It is true that I was less than nimble in demonstrating the state changes. We will work to capture that better in the next video.

      Thanks so much for your feedback!