Andreas Dietrich Enrico Gobbetti Sung-Eui Yoon
We present an overview of current real-time massive model visualization technology, with the goal of providing readers with a high-level understanding of the domain, as well as with pointers to the literature.
Interactive visualization and exploration of massive 3D models is a crucial component of many scientific and engineering disciplines and is becoming increasingly important for simulations, education, and entertainment applications such as movies and games. In all of these fields we are observing a data explosion: the quantity of information is growing exponentially. Typical sources of rapidly increasing massive data include the following:
• Large-scale engineering projects. Today, complete aircraft, ships, cars, etc. are designed purely digitally. Usually, many geographically dispersed teams are involved in such a complex process, creating thousands of different parts that are modeled at the highest possible accuracy. A prominent example is the Boeing 777 airplane shown in Figure 1a.
• Scientific simulations. Numerical simulations of natural real-world effects can produce vast amounts of data that need to be visualized to be scientifically interpreted. Examples include nuclear reactions, jet engine combustion, and fluid dynamics, to mention a few. Increased numerical accuracy as well as faster computation can lead to datasets of gigabyte or even terabyte size (Figure 1b).
• Acquisition and measurement of real-world objects. Apart from modeling and computing geometry, scanning real-world objects is a common way of acquiring model data. Improvements in measuring equipment allow scanning at sub-millimeter accuracy, which can result in millions to billions of samples per object.
• Modeling natural environments. Natural landscapes contain an incredible amount of visual detail. Even for a limited field of view, hundreds of thousands of individual plants might be visible. Moreover, plants are themselves highly complex structures, with countless leaves, complicated branchings, wrinkled bark, and so on. Modeling even some of these effects can produce excessive quantities of data. For example, the landscape model depicted in Figure 1d covers "only" a limited square area, yet still contains an enormous amount of geometry.
Handling such massive models presents important challenges to developers. This is particularly true for highly interactive 3D programs, such as visual simulations and virtual environments, with their inherent focus on interactive, low latency, and real-time processing.
In the last decade, the graphics community has witnessed tremendous improvements in the performance and capabilities of computing and graphics hardware. The question therefore naturally arises whether such a performance boost transforms rendering performance problems into memories of the past. A single standard dual-core 3 GHz Opteron processor delivers roughly 20 GFlops, and modern game consoles and graphics boards offer substantially more raw computing power. Nevertheless, dataset sizes have grown even faster than raw hardware performance.
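A back-of-envelope calculation illustrates the gap. The numbers below are illustrative assumptions, not figures from the article: a model on the order of a few hundred million triangles and a sustained GPU triangle throughput typical of the hardware of the time.

```python
# Back-of-envelope check (all numbers are illustrative assumptions):
# can a brute-force renderer push a massive model at interactive rates?

triangles = 350_000_000        # assumed model size (order of a full aircraft CAD model)
target_fps = 30                # desired interactive frame rate
tri_per_sec_needed = triangles * target_fps

gpu_tri_per_sec = 300_000_000  # assumed sustained GPU triangle throughput

print(f"required:  {tri_per_sec_needed:.2e} triangles/s")
print(f"available: {gpu_tri_per_sec:.2e} triangles/s")
print(f"shortfall: {tri_per_sec_needed / gpu_tri_per_sec:.0f}x")
```

Under these assumptions the brute-force approach falls short by more than an order of magnitude per frame, before even considering memory and bandwidth limits.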
As a result, massive datasets cannot be interactively rendered by brute-force methods. To overcome this limitation, researchers have proposed a wide variety of output-sensitive rendering algorithms, i.e., rendering techniques whose run time and memory footprint are proportional to the number of image pixels rather than to the total model complexity. In addition to requiring out-of-core data management, both for handling datasets larger than main memory and for letting applications explore data stored on remote servers, these methods must integrate techniques for filtering out, as efficiently as possible, the data that does not contribute to a particular image.
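A common realization of such filtering is hierarchical culling over a bounding-volume hierarchy: subtrees whose bounds cannot affect the current view are skipped wholesale. The sketch below is a minimal illustration under assumed data structures (the `AABB`, `Node`, and `cull` names are hypothetical, not from the article), using a box-shaped view volume in place of a full view frustum.

```python
# Minimal sketch of output-sensitive filtering: hierarchical culling
# over a bounding-volume hierarchy. Work done is roughly proportional
# to the visible portion of the scene, not to its total size.
# All names here are illustrative assumptions, not the article's API.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AABB:
    lo: Tuple[float, float, float]  # minimum corner
    hi: Tuple[float, float, float]  # maximum corner

    def intersects(self, other: "AABB") -> bool:
        # Boxes overlap iff they overlap on every axis.
        return all(self.lo[i] <= other.hi[i] and other.lo[i] <= self.hi[i]
                   for i in range(3))

@dataclass
class Node:
    bounds: AABB
    triangles: List[int] = field(default_factory=list)   # leaf payload
    children: List["Node"] = field(default_factory=list)

def cull(node: Node, view: AABB, visible: List[int]) -> None:
    """Collect triangles whose bounding volumes overlap the view volume."""
    if not node.bounds.intersects(view):
        return  # entire subtree filtered out without touching its contents
    if node.children:
        for child in node.children:
            cull(child, view, visible)
    else:
        visible.extend(node.triangles)
```

A real system would test against the six frustum planes (and typically combine this with occlusion and detail culling), but the traversal pattern, rejecting whole subtrees as early as possible, is the same.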
This article provides an overview of current massive model rendering technology, with the goal of providing readers with a high-level understanding of the domain, as well as with pointers to the literature. The main focus is on rendering of large static polygonal models, which are by far the main current test case for massive model visualization. We will first discuss the two main techniques employed in rendering massive models: rasterization and ray tracing (Section II). We will then illustrate how rendering complexity can be reduced by employing appropriate data structures and algorithms for visibility and detail culling, as well as by choosing alternate graphics primitive representations (Section III). We will further focus on data management (Section IV) and parallel processing issues (Section V), which are increasingly important on current architectures. The article concludes with an overview of how the various techniques are integrated into representative state-of-the-art systems, and a discussion of the benefits and limitations of the various approaches (Section VII).