Cave Automatic Virtual Environment




From Wikipedia, the free encyclopedia

A Cave Automatic Virtual Environment (better known by the recursive acronym CAVE) is an immersive virtual reality environment in which projectors are directed at three to six of the walls of a room-sized cube. The name is also a reference to the allegory of the Cave in Plato's Republic, in which a philosopher contemplates perception, reality and illusion.


General characteristics of the CAVE

The CAVE is a 10’ × 10’ × 9’ theatre that sits inside a larger room measuring roughly 35’ × 25’ × 13’. The walls of the CAVE are rear-projection screens, and the floor is a down-projection screen. High-resolution projectors (the University of Illinois uses an Electrohome Marquee 8000) display images on each screen by projecting onto mirrors, which reflect the images onto the screens.

The user enters the CAVE wearing special glasses that make the 3-D graphics generated by the CAVE visible. With these glasses, users can see objects apparently floating in the air and can walk around them, viewing them properly from every side. This is made possible by electromagnetic sensors; the frame of the CAVE is made of non-magnetic stainless steel to interfere with them as little as possible. As a person moves around the CAVE, their movements are tracked by the sensors and the video adjusts accordingly. Computers control this aspect of the CAVE as well as the audio: speakers placed at multiple angles give the user 3-D audio to match the 3-D video. [1]
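The head-tracked rendering described above relies on recomputing an asymmetric ("off-axis") viewing frustum for each wall whenever the tracked eye moves. The sketch below is a minimal, illustrative version of that math for a single wall; the function name, coordinate convention (wall in the plane z = 0, eye at positive z), and near-plane value are assumptions for the example, not the API of any particular CAVE system.

```python
def off_axis_frustum(eye, screen_lo, screen_hi, near):
    """Asymmetric frustum edges (left, right, bottom, top) at the near
    plane for a wall lying in the plane z = 0, viewed from `eye`.

    eye       -- (x, y, z) tracked eye position; z > 0, looking toward -z
    screen_lo -- (x, y) lower-left corner of the wall
    screen_hi -- (x, y) upper-right corner of the wall
    near      -- near-plane distance
    """
    ex, ey, ez = eye
    # Similar triangles: scale the wall extents (relative to the eye)
    # down from the wall plane to the near plane.
    scale = near / ez
    left   = (screen_lo[0] - ex) * scale
    right  = (screen_hi[0] - ex) * scale
    bottom = (screen_lo[1] - ey) * scale
    top    = (screen_hi[1] - ey) * scale
    return left, right, bottom, top

# Eye centred in front of a 3 m x 3 m wall, 1.5 m back: the frustum is
# symmetric. Step to one side and it becomes asymmetric, which is what
# keeps projected objects stationary as the viewer walks around them.
print(off_axis_frustum((1.5, 1.5, 1.5), (0.0, 0.0), (3.0, 3.0), 0.1))
print(off_axis_frustum((2.0, 1.5, 1.5), (0.0, 0.0), (3.0, 3.0), 0.1))
```

The four returned values correspond directly to the parameters an OpenGL-style `glFrustum` call would take for that wall's projector.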


The first CAVE

The first CAVE was developed in the Electronic Visualization Laboratory at University of Illinois at Chicago and was announced and demonstrated at the 1992 SIGGRAPH. The CAVE was developed in response to a challenge from the SIGGRAPH 92 Showcase effort (and its chair James E. George) for scientists to create and show off a one-to-many visualization tool that utilized large projection screens. The CAVE answered that challenge, and became the third major physical form of immersive VR (after goggles 'n' gloves and vehicle simulators). Carolina Cruz-Neira, Thomas A. DeFanti and Daniel J. Sandin are credited with its invention. It has been used and developed in cooperation with the NCSA, to conduct research in various virtual reality and scientific visualization fields. CAVE is a registered trademark of the University of Illinois Board of Regents. The name was first licensed to Pyramid Systems and is currently licensed to Mechdyne Corporation, the parent company of Fakespace Systems (Fakespace Systems acquired Pyramid Systems in 1999). Commercial systems based on the concept of the CAVE are available from a handful of manufacturers.


A lifelike visual display is created by projectors positioned outside the CAVE and driven by the physical movements of a user inside it. Stereoscopic LCD shutter glasses convey a 3-D image: the computers rapidly generate a pair of images, one for each of the user's eyes, and the glasses are synchronized with the projectors so that each eye sees only the correct image. Because the projectors sit outside the cube, mirrors are often used to shorten the throw distance from the projectors to the screens. One or more computers, often SGI workstations, drive the projectors. Clusters of desktop PCs have become a popular way to run CAVEs because they cost less and can deliver better performance.
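Generating the stereo pair starts from the tracked head pose: the two eye positions are offset from the head centre along the head's "right" direction by half the interpupillary distance, and each eye then gets its own rendered image. A minimal sketch of that step, assuming a yaw-only head orientation and a typical adult IPD of 6.5 cm (both illustrative simplifications):

```python
import math

def eye_positions(head, yaw_deg, ipd=0.065):
    """Left- and right-eye positions from a tracked head pose.

    head    -- (x, y, z) tracked head centre, metres
    yaw_deg -- head yaw in degrees (0 = facing the front wall, -z)
    ipd     -- interpupillary distance in metres (0.065 is typical)
    """
    yaw = math.radians(yaw_deg)
    # Head's "right" direction in the horizontal plane; at yaw 0 the
    # head faces -z, so "right" is +x.
    right = (math.cos(yaw), 0.0, -math.sin(yaw))
    half = ipd / 2.0
    left_eye  = tuple(h - half * r for h, r in zip(head, right))
    right_eye = tuple(h + half * r for h, r in zip(head, right))
    return left_eye, right_eye
```

Each returned position would feed a separate viewer-centered projection, and the shutter glasses alternate which image each eye sees.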


Software and libraries designed specifically for CAVE applications are available, and there are several techniques for rendering the scene. Plain OpenGL suits simpler simulations, while scene graphs handle large scenes better. Three scene graphs are in popular use today: OpenSG, OpenSceneGraph, and OpenGL Performer. OpenSG and OpenSceneGraph are open source, while OpenGL Performer is a commercial product from SGI.

CAVELib is the original Application Programmer's Interface (API) developed for the CAVE™ system created at the Electronic Visualization Laboratory at the University of Illinois at Chicago. The software was commercialized in 1996 and further enhanced by VRCO Inc. CAVELib is a low-level VR software package that abstracts away window and viewport creation, viewer-centered perspective calculations, display to multiple graphics channels, multi-processing and multi-threading, cluster synchronization and data sharing, and stereoscopic viewing. Developers create all of the graphics for their environment, and CAVELib makes them display properly. The CAVELib API is platform-independent, enabling developers to create high-end virtual reality applications on Windows and Linux operating systems (IRIX, Solaris, and HP-UX are no longer supported). CAVELib-based applications are externally configurable at run-time, making an application executable independent of the display system.

VR Juggler is a suite of APIs designed to simplify the VR application development process. VR Juggler allows the programmer to write an application that will work with any VR display device, with any VR input devices, without changing any code or having to recompile the application. Juggler is used in over 100 CAVEs worldwide.

CoVE is a suite of APIs designed to enable the creation of reusable VR applications. CoVE provides programmers with an API to develop multi-user, multi-tasking, collaborative, cluster-ready applications with rich 2D interfaces using an immersive window manager and windowing API to provide windows, menus, buttons, and other common widgets within the VR system. CoVE also supports running X11 applications within the VR environment.

Equalizer is an open source rendering framework and resource management system for multipipe applications, ranging from single pipe workstations to VR installations. Equalizer provides an API to write parallel, scalable visualization applications which are configured at run-time by a resource server.

Syzygy is a freely distributed grid operating system for PC-cluster virtual reality, tele-collaboration, and multimedia supercomputing, developed by the Integrated Systems Laboratory at the Beckman Institute of the University of Illinois at Urbana-Champaign. This middleware runs on Mac OS, Linux, Windows, and IRIX. C++, OpenGL, and Python applications (as well as other regular computer applications) can run on it and be distributed for VR.

Avango is a framework for building distributed virtual reality applications. It provides a field/fieldcontainer-based application layer similar to VRML. Within this layer a scene graph (based on OpenGL Performer), input sensors, and output actuators are implemented as runtime-loadable modules (plugins). A network layer provides automatic replication/distribution of the application graph using a reliable multicast system. Applications in Avango are written in Scheme and run in the scripting layer, which provides complete access to fieldcontainers and their fields; in this way distributed collaborative scenarios, render-distributed applications, or both at once are supported. Avango was originally developed by the VR group at GMD, now the Virtual Environments Group at Fraunhofer IAIS, and was open-sourced in 2004.

CaveUT is an open source mutator for Unreal Tournament 2004. Developed by PublicVR, CaveUT leverages existing gaming technologies to create a CAVE environment. By using Unreal Tournament's spectator function CaveUT can position virtual viewpoints around the player's "head". Each viewpoint is a separate client that, when projected on a wall, gives the illusion of a 3D environment.
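The CaveUT approach of placing one spectator viewpoint per wall amounts to giving each wall's camera the same position as the player's head but a yaw offset of 90 degrees per wall. A small sketch of that geometry (the function name and the yaw-0-faces-minus-z convention are assumptions for illustration, not CaveUT's actual interface):

```python
import math

def wall_view_directions(base_yaw_deg=0.0):
    """Forward unit vectors for four spectator cameras co-located with
    the player's head, one per wall, at 90-degree yaw offsets."""
    dirs = {}
    for name, offset in (("front", 0), ("right", 90),
                         ("back", 180), ("left", 270)):
        yaw = math.radians(base_yaw_deg + offset)
        # Yaw 0 faces -z; positive yaw turns toward +x.
        dirs[name] = (math.sin(yaw), 0.0, -math.cos(yaw))
    return dirs
```

When each camera's output is projected onto its matching wall, the four images join seamlessly at the corners, producing the surrounding-environment illusion.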

Quest3D is a real-time 3D engine and development platform suitable for CAVE implementations.

Vrui and 3DVisualizer are software packages developed for the CAVE in the Keck Center for Active Visualization in the Earth Sciences; they have been publicly released and remain under development. Vrui (Virtual Reality User Interface) handles real-time rendering, head tracking, and related tasks, while 3DVisualizer provides volume visualization tools. LIDAR and terrain data viewers are also under development, as are a number of other projects that use Vrui as the underlying user interface.

inVRs is a framework that provides a clearly structured approach to the design of highly interactive and responsive virtual environments (VEs) and networked virtual environments (NVEs). It is developed following open-source principles (LGPL) and is easy to use with CAVEs and a variety of input devices.

Developments in CAVE research

The biggest issues researchers face with the CAVE are its size and cost. In response, researchers have developed a derivative of the CAVE system called the ImmersaDesk. With the ImmersaDesk, the user looks at a single projection screen instead of being completely enclosed, as in the original CAVE. The screen is placed at a 45-degree angle so that the user can look both forward and downward. At 4’ × 5’, it is wide enough to give the user the field of view needed for a proper 3-D experience. The 3-D images are viewed through the same glasses used in the CAVE. The system uses sonic hand tracking and head tracking, so a computer still processes the user's movements.

This system is much more affordable and practical than the original CAVE for some obvious reasons. First, one does not need to build a “room inside of a room”: the ImmersaDesk does not have to sit inside a pitch-black room large enough to accommodate it. It needs one projector and one projection screen instead of four, and a less expensive, less capable computer than the original CAVE requires. The ImmersaDesk is also attractive because, having been derived from the original CAVE, it is compatible with all of the CAVE's software packages, libraries, and interfaces. [2]


To produce an image that is neither distorted nor out of place, the CAVE must be calibrated before images are projected; it is the electromagnetic sensors that are actually being calibrated. A person puts on the special glasses needed to see the images in 3-D, and the projectors fill the CAVE with one-inch boxes spaced one foot apart. The person then takes an instrument called an “ultrasonic measurement device”, which has a cursor at its center, and positions the device so that the cursor is visually aligned with a projected box. This process can continue until almost 400 different boxes have been measured. Each time the cursor is placed inside a box, a computer program records the box's location and sends it to another computer. If the points are calibrated accurately, the images projected in the CAVE show no distortion. Calibration also lets the CAVE identify precisely where the user is located and track their movements, so the projectors can display images based on where the person walks inside the CAVE. [3]
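Once the grid of intended box positions and the corresponding measured sensor readings are collected, a correction mapping can be fitted between them. The source does not specify the mapping the CAVE software actually uses; the sketch below shows the simplest plausible case, a per-axis linear least-squares fit (real systems likely fit a richer spatial model), with all names and the 2%-scale example data invented for illustration.

```python
def fit_linear_correction(intended, measured):
    """Least-squares fit of measured = a * intended + b for one axis.
    `intended` and `measured` are equal-length coordinate lists."""
    n = len(intended)
    sx, sy = sum(intended), sum(measured)
    sxx = sum(x * x for x in intended)
    sxy = sum(x * y for x, y in zip(intended, measured))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def correct(reading, a, b):
    """Invert the fitted model: map a raw sensor reading back to the
    intended coordinate."""
    return (reading - b) / a

# Boxes one foot apart along one axis; pretend the sensor reads 2%
# large with a 0.05 ft offset.
intended = [float(i) for i in range(10)]
measured = [1.02 * x + 0.05 for x in intended]
a, b = fit_linear_correction(intended, measured)
```

After fitting, every subsequent tracker reading is passed through `correct` before it drives the projection, which is what keeps the displayed imagery undistorted.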


The concept of the original CAVE has been reapplied and is currently being used in a variety of fields. Many universities own CAVE systems.

CAVEs are used in many fields. Many engineering companies use them to enhance product development: prototypes of parts can be created and tested, interfaces can be developed, and factory layouts can be simulated, all before any money is spent on physical parts. This gives engineers a better idea of how a part will behave within the entire product.
