
3D GravityDJ Project

The Evolution of Interaction in Three Trinity Projects

Byron Lahey, Arizona State University A.M.E. / Sculpture

Yves Klein, Arizona State University A.M.E.

Isaac Wallis, Arizona State University A.M.E.

SMALLab, Arizona State University

Aisling Kelliher, Professor at Arizona State University

David Birchfield, Associate Professor at Arizona State University

Abstract:
Over the course of three projects developed in SMALLab, an immersive multi-modal environment in the Arts and Media Engineering program at Arizona State University, we explored ways of leveraging simple gestural inputs to create highly active and engaging environments. Starting with simple collision detection programming and evolving to physics models inspired by gravitational equations, we created systems that functioned as an interactive art experience, a game, and finally a surround-sound DJ mixing tool. The lessons we learned developing these projects have implications for artists, internet media designers, game designers, sound designers, computer musicians who want their audience to know they aren't just checking their email, and anyone else who is interested in producing high-quality interactive experiences.

Interactive systems always face the challenge of being simple enough for a new user to learn quickly, yet sophisticated enough to keep that user stimulated and engaged. The system developer has many tools at their disposal to produce this experience. They can present an interesting story or an intriguing concept to explore. They can provide high-quality media and a well-designed aesthetic that is consistent with their concept and appropriate for their intended audience. They should also design interactions that suit the particular human-computer interface they are working with. In almost every case, the interface will offer certain unique ways for the user to interact, while other modes of interaction will be difficult or completely absent.

Being sensitive to these realities and embracing the modes of interaction that provide the most natural and intuitive user experience will go a long way toward making the system highly functional and engaging. Once an optimal user interface has been determined, the next task is to make the most of the data gathered from the user's interaction. Complex behaviors and interactions can be generated from a relatively small set of variables and rules.

1 Introduction
In the SMALLab environment, a minimal amount of input can produce a very complex and meaningful output. Three numbers, each ranging from 0.0 to 1.0, represent the position of a ball tracked in the space. With these three numbers, updated about 30 times per second, one can deduce the position, velocity, direction, acceleration, and curves or gestures executed in this physical space. This information can then be used to control an infinite variety of audio and visual elements. For a simple system, one might directly translate this spatial information into one-to-one control of a visually projected virtual object. This virtual object would move in the same direction, distance and speed as the physical object being tracked. However, one can, through some relatively simple programming, greatly expand the complexity of the behaviors of these virtual audio and visual objects. Our three projects explored how complex behaviors and interactions could be generated from essentially the same basic gestural control system. This paper will explain some of the programming techniques we utilized and the design considerations directing our decisions as our projects evolved from relatively simple two-dimensional systems with a few objects to complex three-dimensional spaces with many objects.
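To make this concrete, the following minimal sketch (written in Python purely for illustration; the actual projects were built in Max-MSP/Jitter) shows how velocity and acceleration might be derived from a stream of normalized ball positions arriving at roughly 30 frames per second. All names and values here are assumptions, not code from the project.

# Minimal sketch: deriving velocity and acceleration from a stream of
# normalized (0.0-1.0) ball positions sampled at roughly 30 frames per second.

FRAME_RATE = 30.0          # assumed update rate (frames per second)
DT = 1.0 / FRAME_RATE      # time between position samples

def derive_motion(prev_pos, curr_pos, prev_vel):
    """Return (velocity, acceleration) vectors from two successive positions."""
    velocity = tuple((c - p) / DT for c, p in zip(curr_pos, prev_pos))
    acceleration = tuple((v - pv) / DT for v, pv in zip(velocity, prev_vel))
    return velocity, acceleration

# Example: two successive samples of the tracked ball
prev = (0.40, 0.50, 0.10)
curr = (0.42, 0.49, 0.12)
vel, acc = derive_motion(prev, curr, prev_vel=(0.0, 0.0, 0.0))
print(vel, acc)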
Figure 1

2 Simple Interaction
One characteristic of the motion-tracking interaction system used in the SMALLab is the meaningful sense of scale that results from this form of interaction. The person or persons in the lab interact with the system by picking up and moving glowing balls around in the space. Very small motions generally have a negligible effect on the system and can even fall below the noise floor of the tracking system. So larger, natural human gestures are more typical and reinforce the immersive qualities of the space. Virtual objects sharing the space can easily feel like they are of actual size in relation to the user, and a typical physical interaction that one might engage in outside the lab, such as throwing or hitting something, feels like a natural action to perform in the lab. For our first project, we took advantage of these characteristics and created a system where one had the opportunity to catch and throw a virtual disc within the space. The following sections will give an overview of this project and emphasize how we programmed the audio and visual virtual objects to react in ways that were consistent with the physical gestures of the user initiating the action, and with the bounds of the virtual room that we defined as the space in which the object could be thrown.

2.1 Our Virtual Objects
For our purposes, we define virtual objects as
anything generated by the computer system that can
be seen or heard in the SMALLab environment. Not all
virtual objects can be directly interacted with by the
user, and they need not vary in any way over the
course of one’s interaction with the program.
The virtual objects that we created for our first
project1 will be described in the following subsections.
Following that, the interaction between them will be
described.

2.1.1 Anchor Point
The Anchor Point is a virtual object that is linked
directly with the movements of the physical control
ball, which is tracked by the computer vision system. It
is visually presented as a small round disc drawn
using Open GL. It is the smallest shape in the space
and has an alpha of 1.0 (making it completely
opaque). This object provides the user with direct feedback in the environment. Only the user's direct movements cause this object to move2.

2.1.2 Capture Zone
The Capture Zone is another round disc drawn by
Open GL. It is a larger disc than the Anchor Point,
which it surrounds and is linked to. It has an alpha of
around 0.5, allowing it to be transparent, revealing the
Anchor Point, which it encloses. Its size is variable.
We will discuss the implications of its size in the
section describing more about the interaction
programming. Its x and y coordinates are directly tied
to the anchor point. It is programmed to receive its
coordinates from the anchor point, though one might
note that it could just as easily receive these
coordinates from the same physical ball tracking data
that the anchor point receives its coordinates from.
This is true, but this link of one virtual object to
another, even in this simplest of ways, represents the
first step in removing the user from direct control of the
environment and thus represents a step towards an
environment that is more active and independent. As
the systems become more complex one will observe
how a small gesture made by the user will propagate
into very complex feedback.

2.1.3 Blue Trace
The Blue Trace, so called because of its origin in a program created by David Birchfield, is the disc object that the user is able to catch and throw. Its size, color and opacity are variable. It is drawn by Open GL. It has one significant visual difference from the other objects described up to this point: it has a variable number of trailing discs that are drawn at lower opacity along its path. This produces a linear effect like a trail of smoke. More details will be provided about this object and its behavior in the section describing the interaction programming.
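One way to realize such a trail, sketched here in Python rather than the project's actual Open GL/Jitter drawing code, is to keep a short history of recent positions and render each stored position with an opacity that decreases with age. The trail length and alpha values below are illustrative assumptions.

from collections import deque

# Sketch of a fading trail: store the last N positions of the Blue Trace and
# draw each one with an opacity that decreases with age.  Drawing is stubbed
# out with a print; the real project rendered discs with Open GL.

TRAIL_LENGTH = 12
trail = deque(maxlen=TRAIL_LENGTH)

def update_trail(position):
    trail.appendleft(position)          # newest position at the front

def draw_trail(base_alpha=0.8):
    for age, pos in enumerate(trail):
        alpha = base_alpha * (1.0 - age / TRAIL_LENGTH)   # fade with age
        print(f"draw disc at {pos} with alpha {alpha:.2f}")

update_trail((0.5, 0.5))
update_trail((0.52, 0.48))
draw_trail()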

1 A detailed description of this project and its programming may be found here: http://ame4.hc.asu.edu/MultimodalEnvironments/index.php/First_Example

2 Limitations of the computer vision system may cause this object to move if the vision system mistakes the
movement of a virtual object for the movement of the physical ball that it is intended to track. This problem
can be minimized by making sure the physical balls are well charged and therefore very bright and by
restricting the use of colors for virtual objects to avoid this mistaken identity problem.

2.1.4 Room
The Room is a virtual object with no visual
component. It is a rectangular object of variable
dimensions that serves as a barrier, limiting the
distance the Blue Trace will go in any direction. The
Room will be further explained in the next section
covering the interaction programming and in the
following section addressing the programming of
sound.

2.2 Interaction Programming
Now that we've defined what we are calling virtual
objects, and described the virtual objects we are using
in our first program, we can explain how the user
interacts with the objects and how the objects interact
with one another.

As explained in the introduction, the user interface
in the SMALLab environment consists of a computer
vision system tracking the position of one or two
illuminated balls in the space. So to interact with the
program, the user simply picks up a ball and moves it
around. In our program, the motion of this ball directly
translates into an equivalent motion of the Anchor
Point. The position of the Anchor Point is used to
control the position of the Capture Zone. The user experiences this initial interaction as one of directly controlling a large transparent ball with a smaller solid core. The two objects move as one and are under the direct control of the user.

When the program is initiated the Blue Trace
object starts off moving freely throughout the space. It
travels at an initial speed that is defined in our
program. This speed is variable and has an effect on the interaction that will be explained later.

The Blue Trace travels in a straight line until its x or y coordinate reaches or passes one of the corresponding boundary coordinates defining the virtual object known as the Room. At this point, its movement along the axis in which that boundary was crossed is reversed. In other words, the Blue Trace moves until it hits a wall and then bounces off of it.
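In outline, the wall test amounts to reflecting the velocity component along whichever axis crossed its boundary. The sketch below is illustrative Python with hypothetical names, not the original Max-MSP patch.

# Sketch of the Room boundary test: if the Blue Trace has crossed a wall,
# reverse its velocity along that axis (and clamp it back inside the Room).
# Names like room_min/room_max are illustrative, not from the original patch.

def bounce_off_walls(pos, vel, room_min, room_max):
    pos, vel = list(pos), list(vel)
    for axis in range(len(pos)):
        if pos[axis] <= room_min[axis] or pos[axis] >= room_max[axis]:
            vel[axis] = -vel[axis]                               # reverse this axis
            pos[axis] = min(max(pos[axis], room_min[axis]), room_max[axis])
    return tuple(pos), tuple(vel)

pos, vel = bounce_off_walls((1.05, 0.3), (0.02, 0.01), (0.0, 0.0), (1.0, 1.0))
print(pos, vel)   # x velocity is reversed after hitting the right wall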

The Room is simply created by programming in
upper and lower limits for the potential position of the
Blue Trace. With no user interaction, the result of the
interaction of the Blue Trace and its Room boundaries
is simply the endless movement of the Blue Trace
throughout the room, bouncing off walls and changing
direction whenever a collision is detected. It is important to note that the dimensions of the Room are completely variable and can be adjusted to change the user's
experience. It is even more important to emphasize
that the room can be defined as much larger than the
visual space, so the Blue Trace can be allowed to fly
out of the visual space, collide with one or more walls
and eventually return to the visual space. This
characteristic will be discussed further in the next
section covering the programming of the sound.
So at this point we have the Blue Trace flying
around and bouncing off walls and the user is
controlling the position of the Anchor Point and
Capture Zone. The logical question to now address is:
what happens when the Blue Trace collides with the
Capture Zone? As one might guess from the name, the Blue Trace gets captured by the Capture Zone when its position falls within the region defined by the Capture Zone. It was noted in the description of the Capture Zone that its dimensions were variable and that this would affect the interaction. The first effect is probably obvious: the larger the Capture Zone is, the more likely the Blue Trace is to enter its space, or in other words, the easier it is to catch the Blue Trace.

Catching the Blue Trace essentially consists of
comparing the position of the Blue Trace with the
position and size of the Capture Zone. When the Blue
Trace crosses the threshold of the space occupied by the Capture Zone, it is treated as a collision in the same way as if it had collided with a wall. The significant difference between this collision event and the wall collision event is the way in which the new trajectory for the Blue Trace is assigned. With a wall collision, the new trajectory is a reversal of the trajectory along the axis in which the collision occurred, resulting in a natural, billiard-ball-like change of direction. When a Capture Zone collision occurs, the trajectory is assigned to create a movement directly towards the Anchor Point, which resides at the center of the Capture Zone.

Once captured, the Blue Trace is under the indirect control of the user. Here is where the initial speed of the Blue Trace again becomes significant to the interaction. While in the Capture Zone, the Blue Trace's trajectory is always towards the Anchor Point at the center of the Capture Zone; however, it will only move towards this point at a rate defined by its initial speed. So if the user moves the Anchor Point around at a moderate speed, the Blue Trace will dutifully follow the Anchor Point. If, however, the user moves the Anchor Point faster than the Blue Trace can keep up with, it will eventually pass back out of the space defined by the Capture Zone and no longer be assigned to move towards the Anchor Point. It will once again be free to fly around and collide with walls. Effectively, this interaction is experienced by the user as one of throwing the Blue Trace, and different throwing gestures will result in different speeds and directions when the Blue Trace is released. With a little practice the user can catch and throw the Blue Trace, bouncing it off walls like a racquetball.
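A minimal sketch of this capture-and-escape behavior, again in illustrative Python with assumed names and a fixed speed rather than the project's actual patch logic:

import math

# Sketch of the capture logic: if the Blue Trace lies inside the Capture Zone,
# retarget its velocity toward the Anchor Point at its fixed speed; if the user
# moves the Anchor Point away faster than that speed, the Trace escapes and
# resumes free flight (wall bounces handled elsewhere).

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def update_blue_trace(trace_pos, trace_vel, anchor_pos, zone_radius, speed):
    if distance(trace_pos, anchor_pos) <= zone_radius:
        # Captured: head straight for the Anchor Point at the fixed speed.
        dx, dy = anchor_pos[0] - trace_pos[0], anchor_pos[1] - trace_pos[1]
        length = math.hypot(dx, dy) or 1.0
        trace_vel = (speed * dx / length, speed * dy / length)
    # Otherwise the existing velocity remains in effect.
    new_pos = (trace_pos[0] + trace_vel[0], trace_pos[1] + trace_vel[1])
    return new_pos, trace_vel

pos, vel = update_blue_trace((0.45, 0.5), (0.02, 0.0), (0.5, 0.5), 0.1, 0.01)
print(pos, vel)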

2.3 Programming of Sound
As mentioned earlier, the Room size can be defined as larger than the visual space. This makes sound an extremely important aspect of the user's experience and interaction in the space. Without sound, when the Blue Trace leaves the visual space it becomes completely invisible. One can visualize its trajectory and speed at the point it was last seen and, with a little experience, make an educated guess as to when and where it might be seen again. But it is far more satisfying to have perceptible feedback to gauge its current location. This is why audio plays a major role in this program. It provides spatial and velocity information about the Blue Trace, makes the boundary defined by the Room clearly perceptible and creates an overall ambiance and sense of motion.

The Blue Trace itself has elaborate sound characteristics associated with it, created by a render
engine called flying_sound. This render engine was
based on a foundation created by David Birchfield
which includes audio spatialization and reverberation
parameters to effectively position the sound in the
space. This space is an acoustic space that can be
perceptibly larger than the physical positions of the
loudspeakers would imply. We expanded this render
engine to include a Doppler shift effect and a trailing
sound. Both the primary sound and the trailing sound
have effects that can be fine tuned to control the
characteristics of the sound.3 The trailing sound
follows the ball in space as though it had a tail that
made noise. It lags, based on the frame rate and velocity, so a very quick-moving object produces a trailing sound that stretches back quite some way, whereas a slow-moving object has a trailing sound practically on top of the leading sound. If
the leading sound follows some curving or changing
path, the tail will follow that same path, like the
segments in a game of centipede. The fine tuning
controls for the sound elements as well as the velocity
dependent Doppler and trailing sound effects are all
present to provide the user with a clear perception of
the movement of the Blue Trace through the space
and to impart it with a highly energized feeling.
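The Doppler portion of such an effect can be approximated from the radial velocity of the source relative to the listener. The sketch below uses the standard moving-source formula with assumed values; the real flying_sound render engine is a Max-MSP patch and may differ in detail.

import math

# Sketch of a Doppler pitch shift for a virtual sound source moving relative
# to a stationary listener.  The speed of sound and positions are assumptions.

SPEED_OF_SOUND = 343.0   # m/s, assumed

def doppler_factor(listener_pos, source_pos, source_vel):
    """Return the playback-rate multiplier for a moving source."""
    to_listener = [l - s for l, s in zip(listener_pos, source_pos)]
    dist = math.sqrt(sum(c * c for c in to_listener)) or 1e-9
    # Component of the source velocity toward the listener (positive = approaching).
    radial_speed = sum(v * c for v, c in zip(source_vel, to_listener)) / dist
    return SPEED_OF_SOUND / (SPEED_OF_SOUND - radial_speed)

print(doppler_factor((0.0, 0.0, 0.0), (5.0, 0.0, 0.0), (-10.0, 0.0, 0.0)))  # > 1: pitch up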
The second sound component of the program is far simpler, but still very important to the user's experience and perception of the space. This sound component is an audio feedback that occurs whenever the Blue Trace collides with a wall. This sound is also affected by a reverberation effect and is localized to correspond with the coordinates of the collision.

2.4 Project 1, Interaction Summary
This first project illustrates how simple throwing
and catching gestures can be utilized to create a
complex virtual object behavior in an immersive
multimodal environment.

Next we will look at a more complex program that
requires essentially the same throwing and catching
interaction but expands greatly on the action and
interaction of virtual objects.

Figure 2
3 Complex Interactions of Multiple Objects
One of the most interesting and dramatic changes that we made in our second program was the shift from a two-dimensional visual and computational space to a three-dimensional one. Since the user interface in the SMALLab is inherently three-dimensional, it seemed contrary to the spirit of the space to work two-dimensionally. Human gestures are naturally three-dimensional motions, so we wanted to take advantage of this and expand our system to embrace this natural way of moving and interacting.

3 Details at: http://ame4.hc.asu.edu/MultimodalEnvironments/index.php/First_Example

3.1 Our Virtual Objects
Our second project took off where our last one left off, and evolved into a game we called “TerraWhip”.4 For virtual objects, the Anchor Point, Capture Zone, Blue Trace and Room are all back in play. These virtual objects are close relatives of those created for our first project. The biggest difference is simply that all the motion and interactions now happen in three-dimensional space. Discs that were two-dimensional Open GL renderings are now three-dimensional spheres, also rendered using Open GL. Similarly, the two-dimensional rectangle that defined the Room was transformed into a three-dimensional cube. Because these objects are so similar in design and functionality to their two-dimensional parents, we will not describe them in any further detail here. However, one additional type of virtual object was created for this project; it will be briefly introduced here and further explained in the interaction description.
Death Balls was the name we chose for the new
virtual objects for our game. As the not so subtle name
implies, these are the enemies or antagonists in the
game. They are small white Open GL spheres that are
periodically instantiated with positions initialized
outside the space defined by the Room. Their vectors are randomly initialized. They start out just floating around relatively passively (though they will still claim a life if they are allowed to touch the Anchor Point while in this state). A few seconds after instantiation, they become active: they are attracted to the Anchor Point, become more difficult to avoid, and must be actively destroyed.

3.2 Interaction Programming
The description of this program presented here
will focus on the nature of the interactions between the
user and the virtual objects. We will also address the
design decisions that affected the audio feedback. A
full description of the game play and program design
is beyond the scope of this paper. See the reference in footnote 4 to find out more about this project.
As described in the introduction to this project, we were very interested in taking advantage of the three-dimensional sensing system to let the user interact
with the virtual environment in a way that made sense
from a natural physiological perspective. We didn't
want the throwing, catching and other movements of
the user to be reduced to two-dimensional actions in
the virtual environment.

The first thing we did to implement this plan was
make all the existing objects three-dimensional. This
was the easy part of the project.

Next we had to expand all the programming to use x, y and z coordinates. For the Anchor Point and Capture Zone this was very simple. We simply had to add the additional z position parameter to the objects and use, as its data source, the z position data from the physical ball tracked by the computer vision system. For the other objects we had to alter the collision detection functions to take this extra dimension into consideration, but most of these adaptations from two-dimensional space to three-dimensional space were not all that difficult to implement. Before long we had a system up and running with the same functionality as our first project, only in three dimensions. One important change that we made for this system was to limit the length and width of the Room to keep all the walls within the visual space. The added visual complexity of the three-dimensional space, coupled with our plans to introduce more objects into that space, led us to the conclusion that we would be better off having visible room walls.
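For example, the two-dimensional disc overlap test generalizes to spheres simply by including the z term in the distance calculation. A sketch with hypothetical names follows; it is not the original Jitter code.

import math

# Sketch of extending collision detection from 2D to 3D: two spheres collide
# when the distance between their centers is less than the sum of their radii.
# The same function handles the 2D case if the z components are left at 0.

def spheres_collide(center_a, radius_a, center_b, radius_b):
    dx, dy, dz = (a - b for a, b in zip(center_a, center_b))
    return math.sqrt(dx * dx + dy * dy + dz * dz) < (radius_a + radius_b)

print(spheres_collide((0.5, 0.5, 0.5), 0.1, (0.55, 0.5, 0.48), 0.05))  # True: overlapping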
The basic interaction in our new space would be
familiar to anyone who had the opportunity to
experience our first project but would not be difficult
for a complete novice user to quickly comprehend and
become comfortable with. One was once again in
direct control of the Anchor Point and Capture Zone
and could catch and throw the Blue Trace which now
bounces around in a three dimensional Room. One
can now throw the ball down into the space as well as
to one side or the other. Catching the ball becomes
more difficult in this space since one must now be in
the correct place to intersect its path in three
dimensions, but overall the experience is very
stimulating and entertaining.

This is a good point to pause and reflect on the
level of complexity and naturalistic interaction that has
already been generated from a simple stream of three
numbers representing the position of one physical ball
in space.

From this starting point we added additional virtual objects and rules defining the relationships between these objects to create our game.

4 For a thorough description of this program, download the PDF file at: http://ame4.hc.asu.edu/MultimodalEnvironments/index.php/Second_Example

The basic rules defining the interactions between objects are as follows (a minimal code sketch of this logic appears after the list):

1. If a Death Ball (described earlier) collides with an Anchor Point (controlled by the player), the Anchor Point and Capture Zone are destroyed and the player loses one life.

2. If the Blue Trace (indirectly controlled by the player) collides with a Death Ball, the Death Ball is destroyed and the player is rewarded with a point.
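Here is the promised sketch of that rule logic, with illustrative object names and counters; the actual game logic lives in the Max-MSP/Jitter patch.

# Sketch of the two game rules applied to a list of collision events.
# Each event is a (object_a, object_b) pair of type names; score and lives
# are illustrative counters, not the project's actual data structures.

def apply_rules(collisions, score, lives):
    for a, b in collisions:
        pair = {a, b}
        if pair == {"death_ball", "anchor_point"}:
            lives -= 1                      # rule 1: player loses a life
        elif pair == {"death_ball", "blue_trace"}:
            score += 1                      # rule 2: Death Ball destroyed
    return score, lives

print(apply_rules([("blue_trace", "death_ball"), ("death_ball", "anchor_point")], 0, 3))
# -> (1, 2)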

While developing this project, Yves Klein managed
to get a functioning gravitational simulator system
working using the same programming tools that we
were using. We decided that we could adapt some of
his code to cause the Death Balls to be attracted to
the Anchor Point. The programming of this gravity system is described in depth in the next section. The addition of this actively attracted mode to the Death Balls made the game more challenging. To reward the
player for doing well we created another mode for the
Blue Trace. After the player successfully threw the
Blue Trace and destroyed a few Death Balls the Blue
Trace would be placed in “gravity mode”. In this mode
the Blue Trace would orbit around the Anchor Point
and could be whipped around to destroy the Death
Balls.

3.3 Programming of Sound
With all of the movement, collision events and
destruction of Anchor Points, Capture Zones and
Death Balls going on, some serious sound design was
in order to help the user make sense of it all and to
add some drama and entertainment value to the
project.

Figure 3
A complete description of the sound design is
provided in the documentation of the project. See
footnote 4 for this. We decided to limit the sound feedback to spatialized collision events, since there would be enough of these events to create a saturated sonic atmosphere. All of the sounds generated for this project, while distinctly different from one another, are produced not only by the same audio engine but by the same synthesizer. The synthesizer is a Max-MSP patch. It takes advantage of a Max object called “poly~”. A simple way to understand what poly~ allows is to imagine a piano with only one string and one key to strike. This one-stringed piano represents a simple synthesizer. Only one note can be played at a time. One can retune the string to get a different pitch, play it louder or softer, and hold notes for varying times, but one cannot play more than one note, even one of the same pitch, simultaneously. What poly~ does is take advantage of the existing structure of the piano and add more keys and strings so that one can play as many notes as one likes at once. More technically
described, poly~ loads a subpatch for each “voice”
desired. The number of voices available is determined
by an argument defined in the poly~ object and is
dynamically variable.

Using poly~ in our audio engine patch, we were able to create one synthesizer subpatch and instantiate voices for all the events that might occur simultaneously during game play.

For each collision event, the following information was sent into the poly~ synthesizer subpatch (a minimal sketch of such an event message follows the list):
1. x and y coordinates where the event occurred
(used to position the sound in the space)

2. A fundamental frequency (establishes a
primary pitch for the sound synthesis)

3. A frequency multiplier (creates a simple way
to create pitch intervals)

4. An envelope domain (determines the length of time over which the sound occurs)

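As promised above, here is a sketch of how such a collision event might be packaged before being handed to the synthesizer. The field names and example values are ours for illustration; in the actual project these parameters travel as Max-MSP messages routed into poly~.

from dataclasses import dataclass

# Sketch of the per-collision message handed to the poly~ synthesizer subpatch.
# Field names and example values are illustrative only.

@dataclass
class CollisionEvent:
    x: float                  # normalized x position of the collision
    y: float                  # normalized y position of the collision
    fundamental_hz: float     # primary pitch of the synthesized sound
    freq_multiplier: float    # simple way to create pitch intervals
    envelope_seconds: float   # length of time the sound occupies

# Hypothetical events loosely in the spirit of the sounds described in the text:
wall_hit  = CollisionEvent(0.1, 0.8, fundamental_hz=220.0, freq_multiplier=1.0,
                           envelope_seconds=0.1)
ball_kill = CollisionEvent(0.5, 0.5, fundamental_hz=440.0, freq_multiplier=1.5,
                           envelope_seconds=1.0)
print(wall_hit, ball_kill)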
With these few variables, a great many different
interesting sounds could be generated. An interesting
fact to note is that all the sounds use exactly the same
amplitude envelope. The distinct wavering sound that
is clearly heard in the sounds with longer envelope
domains becomes more of a timbral quality in the
briefer sounds.

Figure 4
The wall collision sounds were specifically
designed to be brief sounds with a sharp attack since
they would be occurring fairly frequently and would
often overlap. We wanted them to have hard, reflective sounds, like one hard object slamming into another hard surface. The two different wall
collisions were varied by frequency and a subtle
difference in duration to distinguish them from one
another.

The Death Ball Destroyed sound was customized to be brighter and more energetic. We wanted it to be a rewarding sound. It could last longer since it wasn't as likely to occur as frequently. Its frequency is twice as high as
the Blue Trace collision sound and occurs over a one
second duration, compared to a third of a second for
the Blue Trace collision.

Figure 5
The Player Destroyed sound was designed to
have an initial impact sound followed by a cartoon-like
dying sound. This sound actually uses two voices, one
for this first quick impact, which is closer to the sounds
of the wall collisions and a second for the slower
wavering death sound. This sound is a little reminiscent of a disc spinning down and falling flat, or an engine surging a few times with diminishing RPMs. This sound, which occurs over the longest duration, is the one that lets the listener perceive the shape of the amplitude envelope most clearly.
When the game was being played and all these
sounds were being triggered it was quite a cacophony,
but the individual events remained remarkably distinct
from one another.

3.4 Project 2, Interaction Summary
While this project was very successful in many
respects and represented many great technical steps
forward for our group, our final analysis of the end
product was that the complexity of the game-driven
interactions ultimately became a little overwhelming
and distracted from the more interesting aspects of the
project, most notably the really cool gravity model we
now had functioning. So we decided to simplify our
system again, reduce the number and complexity of
virtual object interactions and produce a program that
highlighted our gravity system and used its properties
in a novel way. The project that this line of thinking led
to will be described in the next section.

Figure 6
4 Gravity-Based Audio Mixing
The Dj-MIXER reuses some of the modules that we previously created, modified to provide a design environment. Among the modules we ported are the sound system, the gravitational simulator, and other supporting modules. What we added is a menu-selection user interface that allows the selection of sound clips (“Show”) and basic recording (“Rec”). Once selected, a sound clip can be associated with one of the gravitating balls. To select a sound clip, we first activate the “Show” menu by moving the control ball over it, which causes it to change color, and swinging the ball up and down along the z-axis. The menu changes to “Hide” and a list of sound clips appears.

Following the same principle, a sound clip is selected, and then we bring the control ball back to the menu, now labeled “Hide”, which, when activated, hides the list of sound clips.

Now we are ready to bounce a gravitating ball to assign the sound clip to it. As the balls orbit the central ball, the user can grab that central gravitational ball to agitate the sound balls in whichever direction is desired.

The result is an atmospheric sound that seems to dance around the user(s) in the space.

The recording menu “Rec” provides the means to preserve the music generated: selecting it once starts recording, and selecting it again stops recording.
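A rough sketch of how this hover-plus-swing menu activation might be detected, with entirely hypothetical thresholds (the actual Dj-MIXER logic is a Max-MSP/Jitter patch):

# Sketch of menu activation: the control ball must hover over the menu item
# in x/y while swinging up and down along z.  All thresholds are assumptions.

HOVER_RADIUS = 0.08      # how close (x, y) must stay to the menu item
SWING_RANGE = 0.15       # minimum z travel, within the window, to count as a swing

def menu_activated(samples, menu_xy):
    """samples: recent (x, y, z) positions of the control ball."""
    over_menu = all(abs(x - menu_xy[0]) < HOVER_RADIUS and
                    abs(y - menu_xy[1]) < HOVER_RADIUS for x, y, _ in samples)
    z_values = [z for _, _, z in samples]
    swinging = (max(z_values) - min(z_values)) > SWING_RANGE
    return over_menu and swinging

recent = [(0.2, 0.9, 0.3), (0.21, 0.9, 0.5), (0.2, 0.89, 0.32)]
print(menu_activated(recent, menu_xy=(0.2, 0.9)))   # True: hovering and swinging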

Figure 7
4.1 Interaction Programming
To create a visual sense of gravity, the entire environment is simulated by looking at two time frames, called "now" and "ago" (the previous time frame). The temporal distance between these two time frames is the screen refresh interval of 0.02 seconds. These two arrays contain the current positions of the balls and the positions of the balls from the previous time frame. The difference between the "now" and "ago" frames provides a vector for each ball indicating its motion. Those vectors are added to the "now" variables to create the new "now" positions. The old positions, prior to the addition of the vector, become the new "ago". In this way the balls are given the property of motion and behave as though they have inertia. Next, to give the visual effect of gravity, two other quantities were added: an attractive force based on the inverse square of the distance between two balls (gravity) and a repulsive force based on the inverse cube of that distance (anti-gravity).
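The inertia step on its own can be sketched as follows, in illustrative Python; the project itself updates these arrays in Max-MSP/Jitter every 0.02 seconds.

# Sketch of the "now"/"ago" inertia update used in the gravity simulation.
# The motion vector is the difference between the two frames; adding it to
# "now" produces the next frame, and the old "now" becomes the new "ago".

def inertia_step(now, ago):
    motion = [n - a for n, a in zip(now, ago)]            # per-axis motion vector
    new_now = [n + m for n, m in zip(now, motion)]        # advance by that vector
    new_ago = now                                          # previous frame slides back
    return new_now, new_ago

now, ago = [0.52, 0.40, 0.30], [0.50, 0.41, 0.30]
for _ in range(3):                                        # the ball keeps drifting
    now, ago = inertia_step(now, ago)
print(now)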

In the simulated gravity environment it was quickly discovered through experiment that the first of the two laws, the inverse square law for gravitational attraction, needed adjustment. The reason was that the ball would plunge into the user's ball too quickly and stick firmly to the center point, which made it impossible to whip it out. The constant derived through experimental trial and error that seemed to work best was in the vicinity of 0.001 to 0.003. This constant was used to attenuate the raw inverse square term by multiplication into more manageable and aesthetically pleasing behaviors. Its direction vector is negative and is used to diminish the distance between two balls along the radius between them.

Figure 8
Even with this modification there was another problem that had to be dealt with: an inactive user's ball. When the user's ball was not moving, the dependent ball would again plunge into the center of the user's ball and become once more immovable. To overcome this issue an additional force, anti-gravity, was needed. This function worked by multiplying a constant by a repulsive inverse cube law. The constant was determined through experimentation in a similar manner to the derivation of the inverse square constant: by trial and error. The inverse cube term falls off much more quickly than the inverse square law, so for distances greater than about two sphere radii the inverse square equation takes over. In the vicinity of about one sphere radius the effect of the inverse cube law is so strong that the behavior of the user's ball is essentially repulsive and no merging of the two balls is possible. Its direction vector is positive and is used to increase the distance between two balls along the radius between them.

Figure 9
The summation of these three vectors (motion, gravity, and anti-gravity) provides a reasonably believable visual simulation of a gravitational system.
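Putting the three contributions together, the following sketch shows a per-frame update combining motion, an attractive inverse-square term, and a repulsive inverse-cube term. The attraction constant is in the experimentally derived 0.001 to 0.003 range mentioned above, but the repulsion constant and everything else here are illustrative assumptions rather than the project's actual values.

import math

# Sketch of the full per-ball update: motion (inertia) plus an attractive
# inverse-square term toward the user's ball and a repulsive inverse-cube term
# away from it.  With these constants the repulsion dominates inside roughly
# G_REPEL / G_ATTRACT = 0.2 units of distance.

G_ATTRACT = 0.002        # within the 0.001-0.003 range described in the text
G_REPEL = 0.0004         # assumed value, for illustration only

def gravity_step(now, ago, user_pos):
    motion = [n - a for n, a in zip(now, ago)]
    radial = [u - n for u, n in zip(user_pos, now)]          # vector toward the user's ball
    dist = math.sqrt(sum(c * c for c in radial)) or 1e-6
    unit = [c / dist for c in radial]
    attract = [G_ATTRACT / dist**2 * c for c in unit]        # pulls toward the user's ball
    repel = [-G_REPEL / dist**3 * c for c in unit]           # pushes away at close range
    new_now = [n + m + a + r for n, m, a, r in zip(now, motion, attract, repel)]
    return new_now, now                                       # old "now" becomes new "ago"

now, ago = [0.8, 0.5, 0.5], [0.8, 0.5, 0.5]
now, ago = gravity_step(now, ago, user_pos=[0.5, 0.5, 0.5])
print(now)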

Figure 10
4.2 Project 3, Interaction Summary
The gravitational simulator system worked beautifully, the Sound Balls sounded great orbiting in the space, and it was easy to achieve varying effects by affecting the orbits with different combinations of movements and collisions. However, the use of the menu to select sounds to assign to the balls, while functional, felt awkward and inefficient. This form of user interface is very good for some things and not as good for others. It reminds us why we have mice and keyboards and Wacom tablets, and why we need to create additional forms of physical computing systems that are optimized for the particular tasks at hand.

Figure 11
5 Results
In the end, we learned numerous ways of leveraging a relatively small amount of data to produce a variety of complex systems. We used everything from simple collision rules that date back to the earliest video games to processor-taxing algorithms that localize audio events in a space, to create immersive multimodal environments. One of the discoveries along the way, which came as no great surprise, was that more action and interaction did not necessarily result in a more engaging environment. Often a relatively simple interaction made for a richer experience, probably in large part because the user could actually become immersed in the experience rather than overwhelmed by it.

6 Conclusions and Future Work
It was very interesting to discover how this gravitational environment could be used for such a variety of applications, from the more art- and education-oriented first project, to the gaming world with the second project, and finally to the creation of a design tool for musicians, DJs, and other nightclub environments in this last project.

We were successful in creating a design tool that
was fun, entertaining and functional as a multimodal
tool. In each one of these projects we explored
different means of interaction, keeping in mind the need for a simple interface that could easily be learned yet was powerful enough to be useful in a professional setting.

In the First project our focus was to develop the basic interface that we would be using throughout the different projects, and to develop the sonic means to materialize the virtual space with objects that utilize the Doppler effect and sound cues to help the user localize them. We established basic rules that we felt were simple enough to be generalized and reusable in a multitude of applications, yet still complex enough to provide rich content.
With the Second project we focused our attention
on the gravitational and anti-gravitational system and
ways to control the orbiting objects in a gaming
environment as well as simple ways to display
relevant information back to the user. In the game
TerraWhip we were able to use the same basic
elements for a challenging and entertaining game.
Finally, the Third project, the Dj-MIXER, presents a novel way to manipulate sounds in a 3D space, where one can hear the sounds gravitating around spatially and follow the complex patterns that the gravitational model can generate. The combination of the control that the users have with the more random behavior of the gravitational system provides an innovative way to create a jam session, and could easily be extended to a full-featured music design tool.
With the addition of a second control ball or the rotation of the primary ball, and a better tracking system, more complex behaviors could be implemented, which could establish this software as a professional tool for DJs, musicians and choreographers. It would also make the menu selection of the different items and actions more efficient. We envision simple ways to capture the objects and, with a quick twist of the wrist, as if opening a door, open the contents of an object for editing. This kind of interface would create worlds within worlds that could easily be visited and edited, and in which new entities could be created and deleted. Our menu system can also be extended to offer more of the features that would be expected in a professional design tool.

We plan to continue to build on the raw materials
that we've created in these three projects. We have
very functional physical models that, using simple rule sets, create dramatically active environments with naturalistic actions, environments that are stimulating and entertaining for the user as well as for spectators, across a multitude of applications.

7 Acknowledgments
Work like this never emerges from a vacuum. We relied on the building blocks laid down before us. We would like to acknowledge and thank all of the following for their contributions: all the artists and programmers at Cycling '74 (and the work from IRCAM that they built on); the visionaries behind OpenCV, for helping make possible the computer vision system that we almost, but don't quite, take for granted; the supporting infrastructure of SMALLab at A.S.U. and all the A.M.E. technicians, engineers, and professors who helped us in the development process; all our classmates who provided valuable criticism and ideas; and especially our professors, David Birchfield and Aisling Kelliher, for all their support and inspiration.

CR Categories: Multimodal Interactive 3D
Environment System for Design, Art, Gaming and
Education.
Keywords: gravity, DJ, sound, spatialization, gaming,
interaction, max-msp-jitter, Multimodal education,
Interactive Art.
