Three-dimensional Computation Visualization for
Computer Graphics Rendering Algorithms
David A. Goldman
Richard R. Eckert
Maxine S. Cohen
State University of New York at Binghamton
Computation visualization or algorithm animation is becoming
an increasingly popular and effective way of teaching, debugging, and analyzing
algorithms. Over the past ten years, many algorithm animation systems have
been produced. Proposed here is a new approach and framework for visualizing
three-dimensional algorithms or computations. Implemented on a prototype
algorithm animation system under development here, this framework, termed
the vector-guided view, produces insightful visualizations of three-dimensional
computation by effectively conquering the problems of 3-D scene navigation.
The creation of this framework was motivated by the desire to produce visualizations
of an increasingly large and complex set of rendering algorithms now ubiquitous
within the field of computer graphics. To show the potential of this framework,
a dynamic visualization of a recursive ray-tracing program has been created
here. A brief summary of the algorithm animation system being utilized
is also presented. In the future, it is hoped that this work may be used
to animate a large variety of programs, even beyond the scope of computer
graphics.
Key Words and Phrases: algorithm animation, program
visualization, computation visualization, simulation, three-dimensional,
rendering algorithms, ray tracing, scene navigation, computer science education.
The most significant problems that occur when creating animations
of a rendering algorithm are problems that are inherent to most three-dimensional
computation visualizations. A symbolic representation of a three-dimensional
computation (for example, projecting a vector into a 3-D scene) usually
necessitates a three-dimensional representation itself. If a computation
involves a cube or sphere, symbolically representing those objects as a
square or circle can be detrimental (at best) to the effectiveness of the
visualization. However, displaying information in three-dimensions poses
noteworthy problems. These problems stem from the obvious fact that when
projecting three-dimensional information onto a two-dimensional view plane
(i.e. a computer screen) information is lost. In the case of computation
visualization, this loss can be categorized into three groups:
Presented in this paper are techniques for visualizing and
animating three-dimensional graphics rendering algorithms. In particular,
a method for visualizing three-dimensional computations in general will
be given. This work may also be used to increase the effectiveness of animations
of simple 2-D computations or have applications within certain areas of
scientific data visualization; however, those applications are not pursued here. To
illustrate the usefulness of the techniques presented, a recursive ray-tracing
program has been animated using a prototype algorithm animation system
incorporating these new methods. The resulting animation, while being a
valuable tool by itself, is most important in that it shows the potential
of this approach to three-dimensional computation visualization. Finally,
since the methods devised here to animate 3-D rendering algorithms (and
3-D computations in general) are a superset of the methods used to animate
many other types of algorithms, it is hoped that this work may be incorporated
into existing general purpose systems yielding more flexible and robust
algorithm animation platforms.
Over the past ten years, algorithm animation has proven
to be a very useful tool for both educators and researchers alike. Its
power to convey detailed information about algorithms quickly and effectively
has been demonstrated on countless occasions. Marc Brown and Robert Sedgewick's
pioneering BALSA system, created over ten years ago, is still in use
today. Since its introduction, many advances have been made in this field
yielding other well-known systems such as AACE, ALADDIN, POLKA, and Zeus
[15, 18, 28, 6]. All of these systems strive to use the capabilities of
interactive multi-media and dynamic high-resolution graphics displays to
illustrate and analyze how particular algorithms function. They accomplish
this goal by first abstracting the fundamental operations occurring within
an algorithm and then by symbolically representing those operations (via
audio or video) as a dynamic interactive presentation.  This, in fact,
is the basic premise on which algorithm animation is based.
Here, we were interested in applying this technology to
visualize three-dimensional rendering algorithms. More specifically, an
effective way of quickly examining and comprehending the inner workings
of an increasingly large and complex set of rendering algorithms was desired.
Unfortunately, out of the many systems currently available, none was well
suited to animate this subset of algorithms. As will later be discussed,
this set of algorithms demands unique features from an algorithm animation
system. The inadequacy of existing systems is partially due to the fact
that research into the visualization of three-dimensional computations
(or algorithms) in general is still in its infancy. Hurdles, such as three-dimensional
scene navigation, have still not been overcome in previous work.
Proposed here are techniques intended to solve some of
these problems and create an environment better equipped to animate sophisticated
rendering algorithms. Most significant here, will be the introduction of
a new type of viewing framework which effectively illustrates three-dimensional
computation by overcoming the inherent problem of three-dimensional scene
navigation. This framework, termed here as the Vector-guided view, may
eventually prove useful in animating a large variety of 3-D and 2-D computations
even beyond the context of computer graphics algorithms. For now though,
the techniques developed here will be presented within the context of animating
the recursive ray tracing algorithm on a prototype algorithm animation
system termed 3D-AAPE (3-Dimensional graphics Algorithm Animation Prototype
Environment). This system is currently being developed in the
Human-Computer Interaction lab at this university.
Motivation: The value of this type of research
to computer graphics is unmistakable and will hopefully have a great impact
on how graphics algorithms are debugged, analyzed, and taught. Whether
you teach undergraduate courses on basic rendering techniques or you are
a pioneering researcher in this area, program visualization can be an invaluable
tool. As Stasko keenly states, "Animated visualizations help researchers
and designers debug programs and explore new variants of existing algorithms.
The graphical views of a program illustrate behaviors and characteristics
not evident during its initial design. Consequently, they promote the discovery
of alternate solutions to problems."  With this and other goals in
mind, a new approach to three-dimensional computation visualization will
now be summarized. The following sections will first describe the problem
context, and then define the conceptual model constructed. After this a
brief discussion of implementation and future work will be given.
Problems of the first category can become even more apparent
when displaying objects which are abstractions from an algorithm rather
than models of real world items. Rendering abstracted objects such as vectors
and line segments, which have no natural thickness, requires slightly different
considerations. Also, the subjective nature of all three of these categories
tends to cloud some of these problems further. In most situations, there
is rarely a single magic number or "perfect" parameter that yields the
best results. Finally, it must be remembered that the goal is to have an
accurate visualization of a dynamic process (i.e. algorithm) available
at a minimal cost to the user. At first glance, it may seem reasonable
to simply add user controls which alter viewing parameters for the scene
and allow the user to view the data (or computation) from any and all possible
viewpoints. While this approach may in fact be well suited for viewing
large amounts of static information, when that information becomes dynamic,
as in the case of algorithmic computation, it becomes impractical. In that
type of situation, the user would be required to search for an appropriate
viewpoint every time the scene changed as the algorithm progressed. This
is both time consuming and self-defeating since the user may not be able
to determine an appropriate view point for the same three reasons listed
above. Or, even worse, the user may find a viewpoint which he or she "believes"
is appropriate and inadvertently miss other vital information. In addition,
simply manipulating the viewpoint may not be enough to solve all three
problem categories. For example, one might have rectified categories one
and three, but still have objects obscuring the user's view. Thus, we are
left with difficulties for which the most obvious solutions are of no avail.
Limited perception of movement - Depending on the
user's viewpoint and the nature of the 3-D objects, it may be difficult
(if not impossible) to see objects moving toward or away from the user
(e.g. objects whose location or attribute change is parallel to the eye
vector for the scene).
3-D object interference - An object may remain partially
or completely hidden in the scene due to other objects (or parts of objects)
located "in front" of it.
View scale inadequacy - If the user is placed at viewpoints
too close to or too distant from the 3-D scene representation, vital information
(objects) may lie outside the viewing frustum or be so small that they
go unnoticed by the user.
Related Work: Recently, three-dimensional algorithm
animation has finally started to draw some attention and research. Marc
Brown and Marc Najork at Digital Equipment Corporation have used 3-D interactive
graphics in their Zeus system to enhance the effectiveness of algorithms
that are not inherently three-dimensional. They use this extra dimension
to represent additional information about an algorithm, such as "capturing
a history of a two-dimensional view". While the work presented in this
paper has definite applications here, this is not its primary focus. With
respect to scene navigation, Brown and Najork take the interactive approach,
implementing viewpoint changes via mouse controls. One interesting variation
here is the ability to "specify a momentum" by which the 3-D scene continually
rotates on its own.
Other noteworthy researchers in this area, John Stasko
and Joseph Wehrli at the Graphics, Visualization and Usability Center at
Georgia Tech, have also focused on similar types of visualization as well
as dealing with the visualization of algorithms which are inherently three-dimensional.
They have created a very impressive 2-D algorithm animation system termed
POLKA (Parallel program-focused Object-oriented Low Key Animation) which
builds upon experience gained with their previous system TANGO (Transition-Based
Animation Generation).  Combining path-based methods with frame-based
animation clock methods, the system integrated atomic animation primitives
in order to visualize parallel activities occurring within algorithms.
Their more recent work extends the system to 3-D animation by supplementing
graphics toolkit objects such as rectangles and circles with their 3-D
counterparts, cubes and spheres. However, they too are still working to
alleviate some of the problems related to three-dimensional scene navigation.
"In working with 3-D computation visualizations, we
have found one of the most challenging problems to be navigation control,
allowing the viewer to adjust the imagery to illuminate a particularly
informative angle or projection. As an animation runs, a viewer typically
wants to alter his or her view to examine different aspects of the presentation.
We have used both dials and scroll bars to support navigation control,
but we have been unhappy with each. Another possibility is to preprogram
a particular 'fly-by' viewing sequence that might be effective for presenting
a computation. We are wary of the loss of interactivity, however."
Inherent in many algorithms containing three-dimensional
computations (especially rendering algorithms) is the use of vectors. Whether
computing particle movement, determining normals to surfaces, or shooting
rays into a 3-D scene, vector manipulation is a frequent and important
task that many algorithms must perform. Unfortunately, mentally picturing
or imagining three-dimensional vectors and how they interact is not easy
work. The vector-guided view is designed to help the user by keeping track of (and sometimes
graphically displaying) the vectors that an algorithm is computing. This
view displays these vectors within the context of a graphic representation
of the algorithm's data. In the animation of the ray tracer here, this
context is a wire-frame view of the scene being rendered. For example,
in the ray tracing algorithm, the vector-guided view shows how a ray is
projected into a scene and then how it is reflected or refracted by opaque
or transparent objects in the scene. Unfortunately, because rays are reflected
in many different directions at different times, a single stationary viewpoint
would be very limiting and would even hide some vector operations. For
instance, if a vector is displayed that points directly toward the user's
current viewpoint, all the user sees is a single point. In this case, it
would be better for the viewer to be smoothly moved to a new location in
the scene, allowing him or her to see more clearly the new vector being
displayed. As one can plainly see, this algorithm exemplifies all the problems
outlined in the previous section. With this in mind, we now further define
our solution method.
THE VECTOR-GUIDED VIEW - CONCEPTUAL MODEL
The framework presented here attempts to solve the problem
of three-dimensional scene navigation for a large variety of three-dimensional
visualizations. As mentioned earlier, our initial experiments with it focus
on animating a 3-D rendering algorithm, recursive ray tracing. This algorithm
was chosen for two reasons. First, as mentioned above, it requires computations
which may be thought of as inherently three-dimensional, thus the necessity
for a 3-D visualization is clear. Second, its implementation is straight-forward
enough that the application of the model may be illustrated easily without
overwhelming the reader with the algorithm's complexities. Before the model
is actually defined, it is useful first to gain an understanding of some
terminology used here. Specifically, it is necessary to discuss the concept
of an "ideal viewpoint".
As used here, the ideal viewpoint is the location
within a three-dimensional visualization where all important information
may be seen clearly. This is where the user will gain the best perspective,
or view, of a three-dimensional visualization and is therefore the most
desirable location from which to view the 3-D data. The mere nature of
this definition makes it a very subjective term, since choices now need
to be made about where important information resides in a scene. Information
one person deems important may seem trivial to another. Indeed, there are
very few absolute right or wrong answers. To complicate things further,
even if conclusions can be made about where vital information is located,
it is, again, a non-exact science to derive where an ideal viewpoint exists
such that all of this important information is easily seen. For
these reasons, the formulation of an ideal viewpoint must be entrusted
to the algorithm's animation designer in hopes that reasonable (although
possibly not perfect) choices will be made. As algorithm animation system
designers, it is our job to provide tools which make those choices as effortless
and as easy to implement as possible.
The vector-guided view is an approach which allows the
algorithm animator to produce dynamic ideal (or at least nearly ideal)
viewpoints of a three-dimensional computation visualization. Thus as an
algorithm executes, a symbolic 3-D representation is created and altered
dynamically. Concurrently, as the scene changes, the user is smoothly guided
to new viewpoints where important information is best seen. As stated above,
these new viewpoints, while definitely useful and informative, may not
be absolutely perfect. For this reason, it is not suggested that user controls
for the scene's viewpoint be completely abandoned. One may consider these
viewpoints as "best guess" starting points to view a computation visualization.
The creation of this type of view basically involves four steps.
Four-step methodology for the vector-guided view:
Transparency: One final feature added to this framework was the
use of transparency. This was added to reduce conflicts when attempting
to alleviate all three categories of 3-D visualization problems outlined
in Section 2. Conflicts arise, for example, when trying to lessen conditions
one and three (limited perception of movement and view scale inadequacy).
If these conditions are reduced (for example, by placing the viewer orthogonal
to movement in the scene and at an appropriate viewing distance), it is
still possible for problems of the second category (3-D object interference)
to arise. In effect, we can have a good viewpoint except that other 3-D
objects or parts of objects are still obscuring important information in
the scene. In this case, the interfering objects are made partially
transparent to allow a clear view of what is occurring behind them while
still indicating their presence in the scene or computation. This feature
is attained by assigning and maintaining priorities for all objects present
in a visualization. A higher priority indicates that an object should not
be made transparent. These priorities can be maintained explicitly or automatically
by the system. Currently, automatic maintenance is achieved by keeping
track of which objects the current vectors are interacting with or associated
with (i.e., touching or indicating movement). Those objects then have their
priority raised above a predefined threshold. All objects obscuring the
current vector operation and having priorities below the threshold are
then made partially transparent. This scenario has worked well in preliminary
test cases which have only involved simple convex objects. However, problems
will occur when the objects in the scene become more complex and contain
concavities. In these instances, when the concavities of an object obscure
important information, we plan on making part or all of the given object transparent.
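The priority scheme described above can be sketched in a few lines of C. The struct fields, threshold constant, and alpha values below are hypothetical stand-ins for whatever state the system actually maintains:

```c
#define PRIORITY_THRESHOLD 10
#define OPAQUE_ALPHA  1.0f
#define GHOST_ALPHA   0.35f   /* partially transparent, still visible */

typedef struct {
    int   priority;   /* raised above the threshold while the current
                         vectors touch or indicate movement on the object */
    int   obscuring;  /* 1 if the object lies between the eye and the
                         current vector operation */
    float alpha;      /* opacity used when the object is drawn */
} SceneObject;

/* Apply the transparency rule: objects that obscure the current vector
 * operation and whose priority is below the threshold are ghosted;
 * everything else is drawn fully opaque. */
void update_transparency(SceneObject *objs, int n)
{
    for (int i = 0; i < n; i++) {
        if (objs[i].obscuring && objs[i].priority < PRIORITY_THRESHOLD)
            objs[i].alpha = GHOST_ALPHA;
        else
            objs[i].alpha = OPAQUE_ALPHA;
    }
}
```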
Abstract vectors - Extract a sequence of vectors Vi
such that i is the index of the vector extracted (indirectly
or directly) from the algorithm being animated or visualized. In the case
of a ray tracer, the vectors extracted are vectors that were computed directly
within the algorithm. However, this is not a requirement for vector-guided
view visualization in general. Many times it is possible to artificially
(or synthetically) abstract vectors from an algorithm. For example, the
visualization of a network protocol may involve five nodes (for instance).
If these five nodes are artificially assigned vertical timelines in 3-D
space, communication between nodes may be represented (or visualized) using
vectors which connect various nodes' timelines (see Fig. 1). Note, we are
using vectors as a visualization technique despite the fact that nowhere
in the algorithm are these vectors being directly computed. Although the
resulting visualization is somewhat synthetic in nature, it still may be
quite useful.  It should also be mentioned that there is no requirement
that the abstracted vectors be displayed within the visualization. In fact,
in certain situations it may prove useful to use these vectors for the
sole purpose of deriving viewpoints since displaying the vectors in certain
cases may only confuse or clutter the display.
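As an illustration of such synthetic abstraction, the network-protocol example above might derive its vectors as follows. The timeline layout (node k's vertical timeline at x = k * spacing, with y encoding time) and the function name are assumptions made purely for this sketch:

```c
typedef struct { double x, y, z; } Vec3;

/* Synthetic vector abstraction for the hypothetical network-protocol
 * example: a message sent by node `from` at time t_send and received
 * by node `to` at time t_recv becomes a vector connecting the two
 * nodes' timelines, even though no such vector is computed anywhere
 * in the protocol code itself. */
Vec3 message_vector(int from, int to, double t_send, double t_recv,
                    double spacing)
{
    return (Vec3){ (to - from) * spacing, t_recv - t_send, 0.0 };
}
```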
Define constraints - Define a sequence of one
or more constraints, Cj, (using one or more of the Vi)
on the viewpoints that will be calculated. Again, in the case of symbolically
viewing the operations of a recursive ray tracer, one possible constraint
could specify that the current viewpoint, Ek, always be orthogonal
to both the current projected ray, Vk, and the intersected surface
normal vector and/or reflected ray, Vk+n (where k+n is the index
of the vector spawned by a reflection of Vk from an object).
In fact, this constraint, together with the additional requirement that the current
viewpoint never differ by more than 90 degrees from the scene's up vector, formed
the two constraints used here to animate the ray tracer visualization. The second
constraint was added so that the scene was never viewed from underneath,
since this could have been disorienting to the viewer. One other constraint
specified was that the full length of the current projected ray should
be completely visible in the display window. This, in most cases, just
required an increase/decrease in the view distance (or a "zoom out"/"zoom
in") so that the vector was not clipped by the view volume. While all of
these particular constraints only require information obtained from two
or three vectors at most, it is possible to produce viewpoints which depend
on a considerably larger quantity of vectors.
Derive viewpoints - Derive and save the sequence of calculated viewpoints
(or eye points), Ek, (such that for each Ek, all
Cj hold true for all Vi) so that they may be traversed
or used in any order. In addition, save all associated vectors, Vi,
so that they may be displayed if desired. Note, in some instances, even
with all given constraints applied, several choices for a single Ek
may still exist. In these cases, a heuristic is used (this may be thought
of an additional pre-defined constraint) which chooses the Ek
that is closest to the previous Ek in the sequence. This serves
to eliminate, when possible, large rotations of the visualization scene
which can become "dizzying" to the viewer. The minimization of these large
rotations may also be accomplished, in many cases, simply by a careful
formulation of the constraints. In cases where multiple derived viewpoints
for a single Ek severely conflict (e.g., differ by more than
90 degrees), we are considering having the vector-guided view spawn additional
views and thus concurrently present the scene from these different viewpoints.
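The closest-viewpoint heuristic amounts to a minimum-distance search over the candidate eye points that survive the constraints. A minimal sketch in C (the names are illustrative):

```c
typedef struct { double x, y, z; } Vec3;

static double dist2(Vec3 a, Vec3 b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx*dx + dy*dy + dz*dz;
}

/* Tie-breaking heuristic: among the candidate eye points that satisfy
 * every constraint, pick the one closest to the previous eye point in
 * the sequence, minimizing the rotation the viewer experiences. */
Vec3 pick_closest(const Vec3 *candidates, int n, Vec3 prev)
{
    Vec3 best = candidates[0];
    double bd = dist2(best, prev);
    for (int i = 1; i < n; i++) {
        double d = dist2(candidates[i], prev);
        if (d < bd) { bd = d; best = candidates[i]; }
    }
    return best;
}
```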
Define additional transition parameters - Define additional parameters
(or constraints) which will facilitate the desired type of automatic smooth
transition between calculated viewpoints. The benefits of an incremental
display versus discrete views, have been argued by many researchers. For
this reason, the ability to provide smooth transitions between viewpoints
was deemed an important feature to be included within this model. Initial
experiments with the framework have used simple linear transitions from
one viewpoint to another. However, it may prove useful to change different
parameters of the viewpoint (e.g., camera or eye point) at different speeds.
For example, zooming in or out more quickly at the beginning of a transition
(before the view angle has been significantly altered) could perhaps yield
some benefit. There may, in fact, be some functions (exponential, spline,
or others) that give particularly pleasing transitions. However, this depends
on the algorithm being animated and most likely requires research into
the area of human perception, which is beyond the scope of this paper.
In any case, we do offer the capability of precisely specifying how one
wishes a transition to occur. Once these parameters are set, the system
considers the current animation speed and computes the necessary number
of frames required to generate a smooth transition.
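A simple linear transition of the kind used in the initial experiments might look like the following C sketch. The frame-count computation from the animation speed is an assumption about how such scheduling could work, not the system's actual code:

```c
typedef struct { double x, y, z; } Vec3;

/* Compute the in-between eye points for a simple linear transition
 * from `from` to `to`.  `out` must hold `nframes` entries; frame i
 * lies at parameter t = (i+1)/nframes, so the last frame coincides
 * with the destination viewpoint. */
void linear_transition(Vec3 from, Vec3 to, Vec3 *out, int nframes)
{
    for (int i = 0; i < nframes; i++) {
        double t = (double)(i + 1) / nframes;
        out[i].x = from.x + t * (to.x - from.x);
        out[i].y = from.y + t * (to.y - from.y);
        out[i].z = from.z + t * (to.z - from.z);
    }
}

/* Frames needed for a transition lasting `seconds` at the current
 * animation speed of `fps` frames per second (at least one frame). */
int frames_for_transition(double fps, double seconds)
{
    int n = (int)(fps * seconds + 0.5);
    return n > 0 ? n : 1;
}
```

Replacing the linear parameter t with an exponential or spline easing function, as speculated above, only changes the one line computing t.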
The framework outlined above was implemented on top of a
prototype algorithm animation system under development here. This prototype
system was written in C and created to run under Microsoft Windows™ version
3.1 and later on IBM PC compatibles. This particular platform was selected
primarily for its low cost and familiar user interface. Currently, a 486/66
DX with 8MB RAM and local bus graphics is more than adequate to support
the system with reasonable animation speeds. As in AACE's user interface,
tape-deck-style controls are the primary user interface for the system.
It was felt that this offered a familiar yet flexible way to control the
animations. The system supports algorithm execution in forward or reverse
(roll-back mode) by utilizing temporary event log files. It is primarily
an event-based system similar but not identical to systems such as BALSA.
To create a system specifically tailored to visualize computer graphics
rendering algorithms was our original objective. However, after additional
consideration, its use as a general purpose algorithm animation system
is now also being explored. A full length description and specification
of the system and its uniqueness is too lengthy to include here, but this
information will hopefully be made available in a future publication. For
now, just a few more brief highlights will be given.
IMPLEMENTATION AND APPLICATION
In addition to displaying the vector-guided view, the
system can also maintain several other views, including textual and actual
views. In the ray tracing animation created here, a text view is used to
display source code and highlight lines as they are executed. An actual
view maintains any output that the algorithm is writing to the screen or
graphics display. This view was built to display any graphics calls (putting
pixels, drawing lines, etc.) made within the algorithm. Additionally, the
view maintains an intelligent infinite undo feature to permit roll-back
or reverse execution of the program. Unlike some paint programs, this undo
feature replaces only the pixels that were altered by the specified graphics
calls and does not require that the entire view be redrawn. This feature
was needed to allow reverse execution at reasonable speeds.
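The pixel-level undo idea can be sketched as a log of overwritten pixels that is replayed in reverse. Everything below (names, the flat framebuffer representation) is a hypothetical reconstruction for illustration, not 3D-AAPE's implementation:

```c
#include <stdlib.h>

/* Each graphics call records the pixels it overwrites; rolling back
 * replays the log in reverse, restoring only the altered pixels
 * instead of redrawing the entire view. */
typedef struct { int x, y; unsigned old_color; } PixelRecord;

typedef struct {
    PixelRecord *recs;
    int          count, cap;
} UndoLog;

/* Write one pixel into a w-pixels-wide framebuffer, logging the
 * previous color first (stands in for the real graphics calls). */
void log_set_pixel(UndoLog *ulog, unsigned *fb, int w,
                   int x, int y, unsigned color)
{
    if (ulog->count == ulog->cap) {
        ulog->cap  = ulog->cap ? ulog->cap * 2 : 64;
        ulog->recs = realloc(ulog->recs, ulog->cap * sizeof *ulog->recs);
    }
    ulog->recs[ulog->count++] = (PixelRecord){ x, y, fb[y*w + x] };
    fb[y*w + x] = color;
}

/* Undo the last n pixel writes, most recent first. */
void undo_pixels(UndoLog *ulog, unsigned *fb, int w, int n)
{
    while (n-- > 0 && ulog->count > 0) {
        PixelRecord r = ulog->recs[--ulog->count];
        fb[r.y*w + r.x] = r.old_color;
    }
}
```

Because each record is a single pixel, rolling back a graphics call touches exactly the pixels that call altered, which is what makes reverse execution fast.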
All constraints and parameters for the vector-guided view
were specified and coded in C, although this is soon to change. The visual
programming of these constraints is a subject currently being investigated.
The input vectors needed by the view are collected by or passed to the
system via vector-guided view events transparently specified in
the algorithm's source code. It is also necessary that new viewpoints be
formulated ahead of time so that smooth transitions may be scheduled and
initiated before displaying the next ideal viewpoint. Furthermore, certain
constraints may require knowledge of abstracted vectors that will follow
the ones currently being processed. For these reasons, it is necessary
to internally run the algorithm ahead of the animation being displayed.
This "look ahead" type of execution is completely unperceived by the user
viewing the animation.
The vector-guided view events, which may be thought of
as records sent to the vector-guided view during algorithm execution and
used to animate the ray tracer, were classified into five categories:
Algorithm animation systems have made considerable progress
over the past ten years. Advancing from simple two-dimensional black and
white graphics displays to high performance three-dimensional color animations,
the field has matured considerably. However, with these new advances have
come new difficulties. The problem of three-dimensional scene navigation
is one such difficulty. The vector-guided view is one possible solution
to this problem, allowing three-dimensional animations to be navigated
automatically. Furthermore, the consistent event-based framework of the
3D-AAPE prototype should allow new types of views to be added easily.
Eye-ray events - One of these events is given to the
vector-guided view every time a new ray (vector) is calculated and projected
from the eye-point into the 3-D scene (through the view plane). It contains
or stores the normalized value of this ray (vector).
Intersection point events - One of these events is
given to the view when the closest intersection point of a ray is found
(i.e., the ray hits an object in the scene). It stores the location (in
3-D space) of the intersection.
Normal-vector events - This event informs the view
that a surface normal has just been computed. It stores the normalized
value of this surface normal.
Reflected or transparent ray events - These events
indicate that a reflection or transparency ray (vector) has been projected
from an object within the scene. This occurs if the object is reflective
or transparent. It stores the normalized value of this ray (vector).
Backtrack events - These events indicate when
a projected ray has not intersected any objects in the scene, thus one
level of recursion is ended. They do not store any 3-D or vector information.
The following diagram (Fig. 3) shows how these events
are generated by the algorithm and accepted by the system.
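The five event categories above suggest a tagged-union record. The following C sketch uses illustrative names, not the actual 3D-AAPE definitions:

```c
typedef struct { double x, y, z; } Vec3;

typedef enum {
    EV_EYE_RAY,        /* new ray projected from the eye point */
    EV_INTERSECTION,   /* closest intersection point of a ray found */
    EV_NORMAL,         /* surface normal just computed */
    EV_SECONDARY_RAY,  /* reflected or transparency ray projected */
    EV_BACKTRACK       /* ray missed everything; one recursion level ends */
} EventKind;

typedef struct {
    EventKind kind;
    union {
        Vec3 ray;    /* EV_EYE_RAY, EV_SECONDARY_RAY, EV_NORMAL: normalized vector */
        Vec3 point;  /* EV_INTERSECTION: 3-D location of the hit */
        /* EV_BACKTRACK carries no 3-D or vector data */
    } data;
} VgvEvent;
```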
One purpose of this ray-tracer visualization is to explain
how effects such as reflection and transparency are created in a ray-traced
scene. The results of the vector-guided view here may be summarized as
follows. An initial viewpoint is specified to be coincident with the rendered
scene's camera position. Thus, initially one sees a wire-frame view of
the scene exactly as it is to be rendered. Then, a ray is projected from
the scene's camera or eye position. This is pictured in the vector-guided
view as a line segment slowly growing (getting longer) from the scene's
view plane into the actual 3-D scene. Concurrent with this projection,
the vector-guided view is automatically rotating and zooming out so that
one may more clearly see this projection (or growth of the line segment).
If an intersection occurs and additional rays must be projected, they are
represented in a similar fashion and the view's rotation continues in order
to yield better perspectives. As levels of recursion are reduced, their
associated vector projections are gradually collapsed and the sequence
of ideal viewpoints is traversed in reverse back to the initial wire-frame
view. Additionally, surface normals (when computed) are displayed in red,
reflection vectors are white, and transparent vectors are represented in
green. All projections and displays are, of course, synchronized. For example,
only when a vector is shown being computed in the source code is it then
actually symbolically displayed in the vector-guided view window.
The actual ray-tracing program being visualized was written
in C and taken from an article by Roman Kuchkuda. It has only been modified
to write pixels to the screen (instead of to a file).
Figure 4. 3D-AAPE prototype screen.
An animation of a simple ray tracer is shown above. The
actual view of what the algorithm would display if it were executed outside
the system is shown on the top left. On the top right is the vector-guided
view showing an eye ray being reflected from the back plane in the scene.
On the bottom, the current section of source code that has just executed is displayed.
CONCLUSIONS AND FUTURE WORK
Three-dimensional graphics rendering algorithms are one
subset of sophisticated programs that unfortunately have not been given
much attention in the fields of algorithm animation or computation visualization.
The 3D-AAPE prototype and vector-guided view will hopefully change this
by giving a means by which one may successfully animate this category of algorithms.
Future work: Updating the prototype system to a
full-blown algorithm animation system implementing some additional features,
not mentioned here previously, is of the highest priority. This includes
adding to the system such features as a pre-processor and a visual programming
tool set. The benefits of implementing parts of the system in C++ rather
than C are also being examined. Upon completion of the next version, the
system will then be used to create a variety of animation for graphics
algorithms other than ray tracing. Also, new types of three-dimensional
views will be explored.
After we have used the system to animate a larger variety
of rendering algorithms, it is hoped that the framework may be expanded
and prove useful when animating other types of algorithms. Extensions allowing
for the simulation of parallel computations are also a possibility. The
ultimate goal will be to produce a general-purpose system well suited to
animate a large and diverse variety of algorithms. Undoubtedly a system
such as this could aid both researchers and educators alike, being useful
as both a debugging and analysis tool, as well as a tool for education.
Baecker, Ronald M., "An Application Overview of Program Visualization,"
Graphics, July, 1986,
Bentley, Jon L. and Kernighan, Brian W., "A System for Algorithm
Animation," Computing Systems, Winter, 1991, pp. 5-30.
Brown, Marc H., Algorithm Animation, MIT Press, Cambridge,
MA, 1988.
Brown, Marc H., "Exploring Algorithms Using Balsa-II," IEEE
Computer, May, 1988, pp. 14-36.
Brown, Marc H., "Perspectives on Algorithm Animation," Conference
Proceedings of CHI'88: Human Factors in Computing Systems, 1988, pp.
Brown, Marc H., "Zeus: A System for Algorithm Animation and
Multi-View Editing," 1991 IEEE Workshop on Visual Languages, October,
1991, pp. 4-9.
Brown, Marc H., "Animation of Geometric Algorithms: A Video
Review," 1992 ACM Symposium on Computational Geometry, 1992.
Brown, Marc H. and Hershberger, John, "Color and Sound in
Algorithm Animation," IEEE Computer, December, 1992, pp. 52-63.
Brown, Marc H. and Najork, Marc A., "Algorithm Animation
Using 3-D Interactive Graphics," Proceedings of the UIST'93, November,
1993, pp. 93-100.
Brown, Marc H. and Sedgewick, Robert, "A System for Algorithm
Animation," Proceedings of SIGGRAPH'84 (ACM Computer Graphics),
July, 1984, pp. 177-186.
Brown, Marc H. and Sedgewick, Robert, "Techniques for Algorithm
Animation," IEEE Software, January, 1985, pp. 28-39.
Cormen, Thomas H., Leiserson, Charles E. and Rivest, Ronald
L., Introduction to Algorithms, MIT Press, Cambridge, MA, 1990.
Cox, Kenneth C. and Roman, Gruia-Catalin, "Abstraction in
Algorithm Animation," 1992 IEEE Workshop on Visual Languages, September,
1992, pp. 18-24.
Foley, James, van Dam, Andries, Feiner, Steven, and Hughes,
John, Computer Graphics: Principles and Practice, 2nd Edition,
Addison Wesley, 1991.
Gloor, Peter A., "AACE - Algorithm Animation for Computer
Science Education," Proceedings of IEEE Workshop on Visual Languages,
'92, October, 1992, pp. 25-31.
Kuchkuda, Roman, "An Introduction to Ray Tracing," Theoretical
Foundations of Computer Graphics and CAD, Ed. R.A. Earnshaw, Springer-Verlag,
Berlin, 1988, pp. 1039-1060.
Duisberg, Robert A., "Visual Programming of Program Visualizations
- A Gestural Interface for Animating Algorithms," IEEE Computer Society
Workshop on Visual Languages, August, 1987, pp. 55-66.
Helttula, Esa, Hyrskykari, Aulikki and Raiha, Kari-Jouko,
"Graphical Specification of Algorithm Animation with ALADDIN," Proceedings
of the 22nd International Conference on System Sciences (IEEE),
1989, pp. 892-901.
Hudson, Scott E. and Stasko, John T., "Animation Support
in a User Interface Toolkit: Flexible, Robust, and Reusable Abstractions,"
Proceedings of the UIST'93, November, 1993, pp. 57-67.
Hyrskykari, Aulikki, and Raiha, Kari-Jouko, "Animation of
Programs without Programming," Proceedings of the Conference on Visual
Languages 1987, August, 1987.
Lieberman, Henry, "A Three-Dimensional Representation for
Program Execution," 1989 IEEE Workshop on Visual Languages, October,
1989, pp. 111-116.
Moher, Thomas G., "PROVIDE: A Process Visualization and Debugging
Environment," IEEE Transactions on Software Engineering, June, 1988,
Reiss, Steven P., "A Framework for Abstract 3-D Visualization,"
Symposium on Visual Languages '93, August, 1993, pp. 108-115.
Robertson, George G., Card, Stuart K., and Mackinlay, Jock
D., "Information Visualization Using 3-D Interactive Animation," Communications
of the ACM, April, 1993, pp. 57-71.
Stasko, John T., "Tango: A Framework and System for Algorithm
Animation," IEEE Computer, September, 1990, pp. 27-39.
Stasko, John T. and Wehrli, Joseph F., "Three-Dimensional
Computation Visualization," IEEE/CS Symposium on Visual Languages '93,
August, 1993, pp. 100-107.
Stasko, John T. and Kraemer, Eileen, "The Visualization of
Parallel Systems: An Overview," Journal of Parallel and Distributed
Computing, 1993, pp. 105-117.
Stasko, John T. and Kraemer, Eileen, "A Methodology for Building
Application-Specific Visualizations of Parallel Programs," Journal
of Parallel and Distributed Computing, 1993, pp. 258-264.