Automatic and Interactive Methodologies for 3D Scene Generation


Traditional approaches to designing and constructing 3D scenes are largely manual, and are difficult and tedious for non-expert users. Our work investigates both automatic and interactive methodologies for scene generation. One of our main goals is to allow users to easily construct and manipulate 3D scenes without requiring knowledge of 3D graphics or complex 3D authoring tools.


Real-time 3D Scene Generation Incorporating Natural Language Voice and Text

This work combines 3D graphics, voice, and natural language processing to allow 3D scenes to be sketched from user descriptions. The key idea is a system capable of creating scenes composed of objects from any collection of polygonal mesh models, without any additional per-object information, using simple descriptions given in voice or text.

L. Seversky and L. Yin. Real-time automatic 3D scene generation from natural language voice and text descriptions. In MULTIMEDIA '06: Proceedings of the 14th annual ACM international conference on Multimedia, pages 61-64, 2006.

Automatic scene generation using voice and text offers a unique multimedia approach to classic storytelling and human-computer interaction with 3D graphics. In this paper, we present a newly developed system that generates 3D scenes from voice and text natural language input. Our system is intended to benefit non-graphics domain users and applications by providing advanced scene production through an automatic system. Scene descriptions are constructed in real-time using a method for depicting spatial relationships between and among different objects. Only the polygon representations of the objects are required for object placement. In addition, our system is robust: it supports polygon models of varying quality, such as those widely available on the Internet.
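The paper does not spell out its language-processing pipeline here, but the first step of such a system can be sketched as reducing a description to an (object, relation, reference) triple by matching spatial relation phrases. This is a hypothetical illustration, not the authors' parser; the relation vocabulary and the toy phrases are assumptions.

```python
import re

# Hypothetical sketch, not the authors' parser: reduce a simple description
# to an (object, relation, reference) triple by matching relation phrases.
RELATIONS = ["on", "under", "above", "below", "next to",
             "in front of", "behind", "to the left of", "to the right of"]

def parse_description(text):
    """Return (object, relation, reference), or None if no relation matches."""
    # Try longer phrases first so "to the left of" is not shadowed by a
    # shorter relation contained within it.
    for rel in sorted(RELATIONS, key=len, reverse=True):
        pattern = (r"(?:the\s+|a\s+)?(\w+)\s+" + re.escape(rel) +
                   r"\s+(?:the\s+|a\s+)?(\w+)")
        m = re.search(pattern, text.lower())
        if m:
            return m.group(1), rel, m.group(2)
    return None

print(parse_description("the lamp on the table"))    # ('lamp', 'on', 'table')
```

A real system would handle richer grammar (multiple relations per sentence, adjectives, pronouns), but the triple is the interface between the language side and the geometric placement side.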

L. Seversky and L. Yin. Real-time spatial relationship based 3D scene composition of unknown objects. SIGGRAPH Poster Session, 2006.

Manual scene composition in 3D is a difficult task, and existing approaches attempt to construct scenes automatically [Coyne2001][Xu2002]. These methods depend heavily on explicit per-object knowledge that is used to determine placement. We present a method for automatically generating 3D scenes composed of unknown objects in real-time. Our method does not require any a priori knowledge of the objects; the objects are therefore considered unknown to our system. All necessary information is computed from each object's geometric representation, and the method is designed to support polygon models of varying quality. Spatial relationships and relative positioning of objects are a natural and effective way to compose scenes. Our method composes scenes by computing object placements that satisfy a desired spatial relationship such as on, under, next to, above, below, in front of, behind, and to the left or right of. To illustrate our placement algorithm and its ability to be used interactively, a real-time scene composition framework using text and voice natural language input is developed.
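To illustrate what satisfying such a relation can mean geometrically, here is a minimal sketch. It is not the paper's placement algorithm (which works on the polygon geometry itself): it reduces each object to its axis-aligned bounding box and computes a translation, assuming y is the up axis and vertices are (x, y, z) tuples.

```python
# Illustrative sketch, not the paper's placement algorithm: reduce each object
# to its axis-aligned bounding box and compute a translation that satisfies
# one spatial relation. Assumes y is the up axis.

def aabb(vertices):
    """Min and max corners of a list of (x, y, z) vertices."""
    mins = tuple(min(v[i] for v in vertices) for i in range(3))
    maxs = tuple(max(v[i] for v in vertices) for i in range(3))
    return mins, maxs

def place(obj_verts, ref_verts, relation, gap=0.0):
    """Translation (dx, dy, dz) that positions obj relative to ref."""
    o_min, o_max = aabb(obj_verts)
    r_min, r_max = aabb(ref_verts)
    # Start centered on the reference, then override the axis the relation fixes.
    t = [(r_min[i] + r_max[i]) / 2 - (o_min[i] + o_max[i]) / 2
         for i in range(3)]
    if relation == "on":                # object's bottom rests on reference's top
        t[1] = r_max[1] - o_min[1] + gap
    elif relation == "under":
        t[1] = r_min[1] - o_max[1] - gap
    elif relation == "left of":
        t[0] = r_min[0] - o_max[0] - gap
    elif relation == "right of":
        t[0] = r_max[0] - o_min[0] + gap
    elif relation == "in front of":
        t[2] = r_max[2] - o_min[2] + gap
    elif relation == "behind":
        t[2] = r_min[2] - o_max[2] - gap
    else:
        raise ValueError("unsupported relation: " + relation)
    return tuple(t)
```

Bounding boxes alone cannot tell, for example, whether an object placed "on" a table actually rests on its surface or hangs past an edge; handling such cases from the polygon representation alone is precisely what the paper's method addresses.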

Contact Information

Dr. Lijun Yin -- lijun aatt cs.binghamton ddott edu

Lee Seversky -- lee.seversky aatt binghamton ddott edu