Three-Dimensional Visualization

Overview

  • 6.1.1 Visualization
  • 6.1.1.1 Introduction

The progress made in hardware technology allows today’s computer systems to store very large amounts of data. Researchers at the University of California, Berkeley, estimate that every year about 1.5 exabytes (= 1.5 million terabytes) of data are generated, of which a large portion is available in digital form. It is possible that in the next three years, more data will be generated than in all of human history to date.

The data are often automatically recorded via sensors and monitoring systems. Even simple transactions of everyday life, such as paying by credit card or using the telephone, are typically recorded by computers. Many variables are usually recorded, resulting in data with a high dimensionality. The data are collected because people believe they are a potential source of valuable information, providing new insights or a competitive advantage (at some point). Finding valuable information hidden in the data, however, is a difficult task. With today’s data management systems, it is only possible to examine quite small portions of the data. If the data are presented textually, the amount of data that can be displayed is in the range of some 100 data items, but this is a drop in the ocean when dealing with datasets containing millions of data items. If there is no possibility to adequately explore the large amounts of data that have been collected because of their potential usefulness, the data become useless and the databases become “data dumps.”

Information visualization and visual data analysis can help to deal with the flood of information. The advantage of visual data exploration is that the user is directly involved in the data analysis process. A large number of information visualization techniques have been developed over the last two decades to support the exploration of large datasets. In this chapter, we give an overview of information visualization and visual exploration using a classification based on the data type to be visualized, the visualization technique, and the interaction technique (Berthold & Hand, 2002).

6.1.1.2 Benefits of Visual Data Exploration

For data analysis to be effective, it is important to include the human in the data exploration process and combine the flexibility, creativity, and general knowledge of the human with the enormous storage capacity and the computational power of today’s computers. Visual data mining aims at integrating the human into the data analysis process, applying human perceptual abilities to the analysis of large datasets available in today’s computer systems.

The basic idea of visual data mining is to present the data in some visual form, allowing the user to gain insight into the data, draw conclusions, and directly interact with the data. Visual data analysis techniques have proven to be of high value in exploratory data analysis. Visual data mining is especially useful when little is known about the data, and the exploration goals are vague. Since the user is directly involved in the exploration process, shifting and adjusting the exploration goals can be done in a continuous fashion as needed.

Visual data exploration can be seen as a hypothesis generation process; the visualizations of the data allow the user to gain insight into the data and come up with new hypotheses. The verification of the hypotheses can be done via data visualization but may also be accomplished by automatic techniques from statistics, pattern recognition, or machine learning. In addition to the direct involvement of the user, the main advantages of visual data exploration over automatic data analysis techniques are the following:

  • Visual data exploration can easily deal with highly nonhomogeneous and noisy data.
  • Visual data exploration is intuitive and requires no understanding of complex mathematical or statistical algorithms or parameters.
  • Visualization can provide a qualitative overview of the data, allowing data phenomena to be isolated for further quantitative analysis.

As a result, visual data exploration usually allows faster data exploration and often provides more interesting results, especially in cases where automatic algorithms fail. In addition, visual data exploration techniques provide a much higher degree of confidence in the findings of the exploration. These facts lead to a high demand for visual exploration techniques and make them indispensable in conjunction with automatic exploration techniques (Berthold & Hand, 2002).

6.1.1.3 Visual Exploration Paradigm

Visual data exploration usually follows a three-step process: overview, zoom and filter, and details-on-demand (also called the information-seeking mantra; see Shneiderman, 1996). First, the user needs to get an overview of the data. In the overview, the user identifies interesting patterns or groups in the data and focuses on one or more of them (zoom and filter). Then, to analyze the patterns, the user must drill down and access details of the data.

Visualization technology may be used for all three steps of the data exploration process. Visualization techniques are useful for showing an overview of the data, allowing the user to identify interesting subsets. In this step, it is important to keep the overview visualization while focusing on the subset using another visualization technique. An alternative is to distort the overview visualization in order to focus on the interesting subsets. This can be performed by dedicating a larger percentage of the display to the interesting subsets while decreasing screen utilization for uninteresting data. To further explore the interesting subsets, the user needs a drill-down capability to observe the details of the data. Note that visualization technology provides the base visualization techniques for all three steps and also bridges the gaps between them (Keim & Ward, 2002).

6.1.2 Three-Dimensional Visualization

Briefly stated, three-dimensional or 3D approaches try to create visualizations that are closer to real-world metaphors or to improve space usage by adding an extra dimension. The user is able to rotate and move 3D objects and navigate inside a 3D world. The use of 3D software visualization has the potential to aid in the development process. Three-dimensional software visualization may transform the way that knowledge gathering activities take place during software engineering phases.

Some approaches propose using a 2D layout seen in a 3D perspective with interaction limited to 2D, i.e., a 2.5D approach (Teyseyre & Campo, 2008).

In fact, there is an ongoing debate on 2D vs. 3D in the information visualization area. To identify and analyze the strengths and weaknesses of 3D/2D, we first review a categorization of 3D visualizations (Stasko & Wehrli, 1993):

  1. Augmented 2D views: These are typical 2D visualizations where the third dimension is added for aesthetic purposes. For example, Figure 6.1a shows a 3D presentation of a traditional 2D bar chart visualization of a sorting algorithm (Carson, Parberry, & Jensen, 2007).
  2. Adapted 2D views: These are 2D visualizations extended to 3D to encode additional information. For example, Figure 6.1b shows a 3D visual representation of a software release history that displays the structure of the system in 2D planes and uses the third dimension to display historical information (Gall, Jazayeri, & Riva, 1999).
  3. Inherent 3D application domain views: This category includes computations involving inherent 3D entities. For instance, Figure 6.1c represents a software system and its relationships using a metaball metaphor, i.e., a 3D modeling technique commonly used to represent complex organic shapes and structural relationships in biology and chemistry (Rilling & Mudur, 2002).

In general, the use of the third dimension in category 3 is not disputed in the literature. Moreover, recent research in specific domains shows that 2D and 3D presentations are useful for different task types, and, hence, combined 2D/3D displays are suggested. Yet the question of the benefits offered by 3D over 2D remains salient in the other categories.

Several authors state that when two dimensions are enough to show the information, it is not desirable to add a third; the extra dimension should be used only to visualize semantically richer datasets. Other authors, however, think 3D presentations better exploit the perceptual capabilities of the human visual system. They believe the inclusion of aesthetically appealing elements, such as 3D graphics and animation, can greatly increase a design’s appeal, intuitiveness, and memorability (Brath, Peters, & Senior, 2005). For example, when Irani and Ware compared 2D UML diagrams to geon diagrams (3D shaded solids), they found users could identify substructures and relationship types with much lower error rates for geon diagrams than for UML diagrams (Irani & Ware, 2003). In addition, 3D presentations provide greater information density than 2D ones (Robertson, Card, & Mackinlay, 1993). An experiment (Ware & Franck, 1996) suggests larger graphs can be interpreted if laid out in 3D and displayed with stereo and/or motion depth cues to support spatial perception. The extra dimension also helps to provide a clear perception of relations between objects by integrating local views with global views and by composing multiple 2D views in a single 3D view (Irani & Ware, 2003). Finally, the similarity of 3D graphics to the real world enables us to represent the world in a more natural way. Objects can be represented according to their associated real-world concepts; therefore, the interactions can be more powerful (ranging from immersive navigation to different manipulation techniques), and the animations can be more realistic (Teyseyre & Campo, 2008).

FIGURE 6.1 Three-dimensional view categorization: (a) 2D augmented view: a bubble sort visualization; (b) 2D adapted view: software release history; (c) Inherent 3D view: metaballs (Teyseyre & Campo, 2008).

There are some problems with 3D, however, such as intensive computation, more complex implementation, and user adaptation and disorientation. The first problem can be addressed using powerful and specialized hardware. Development complexity can be reduced using 3D toolkits and frameworks, 3D modeling languages, or 3D software visualization frameworks.

That leaves the problem of user adaptation. Most users have experience only with classical WIMP (Windows, Icons, Menus, Pointing) 2D desktop metaphors. Therefore, the interaction with 3D presentations and possibly the use of special devices demand considerable adaptation efforts (Teyseyre & Campo, 2008).

Furthermore, it is often difficult for users to understand 3D spaces and perform actions inside them. In particular, as a consequence of a richer set of interactions and more degrees of freedom, users may become disoriented. For example, Plaisant, Grosjean, and Bederson (2002) suggest 3D representations only marginally improve the screen space problem while increasing the complexity of interaction. When Cockburn and McKenzie (2002) evaluated the effectiveness of spatial memory in 2D and 3D, they found navigation in 3D spaces can be difficult, and even simple tasks can be problematic.

To overcome these limitations, 3D enhanced interfaces have been proposed. These interfaces might offer simpler navigation, more compelling functionality, safer movements, and less occlusion than 3D reality (Shneiderman, 2003). For instance, one way to reduce disorientation is to constrain user navigation with lateral or linear movements or to use physical laws such as gravity (Bowman & Hodges, 1995). Another possibility is to include automatic camera assistance during the transition phase from one focus object to the other. Several approaches have been proposed using landmarks to help users to orient in a 3D world (Teyseyre & Campo, 2008).

Finally, occlusion may distort the user’s perception of the data space when the information space is dense (Chuah, Roth, Mattis, & Kolojejchick, 1995). Occlusion is a serious problem because occluded objects are invisible to the user (Teyseyre & Campo, 2008).

To sum up, there is a vast literature on the advantages and disadvantages of 3D versus 2D with conflicting results. Table 6.1 summarizes 3D visualization strengths and weaknesses (Teyseyre & Campo, 2008). Ultimately, if used in ways that exploit their strengths while avoiding their weaknesses (Mullet et al., 1995), 3D visualizations have the potential to aid and improve the development process.

6.1.2.1 Software Visualization

It is a well-known fact that developing software systems is complex and requires a number of cognitive tasks, such as search, comprehension, analysis, and design, among others. Software visualization can be a helpful tool to enhance the comprehension of computer programs. In fact, in a recent survey of 111 researchers from software maintenance, reengineering, and reverse engineering, 40% of the respondents said software visualization was very necessary for their work and another 42% found it important but not critical (Koschke, 2002).

TABLE 6.1

Three-Dimensional Strengths and Weaknesses (Teyseyre & Campo, 2008)

Strengths

  • Greater information density.
  • Integration of local views with global views.
  • Composition of multiple 2D views in a single 3D view.
  • Enhanced perception of the human visual system.
  • Familiarity, realism, and real-world representations.

Weaknesses

  • Intensive computation.
  • More complex implementation.
  • User adaptation to 3D metaphors and special devices.
  • More difficult for users to understand 3D spaces and perform actions in them.
  • Occlusion.

The aim of software visualization is not to create impressive images but to create images that evoke the user’s mental images for better software comprehension (Diehl, 2007). Through visualization, engineers can obtain an initial perception of how software is structured, understand the software logic, and explain and communicate the development. Software visualization combines techniques from different areas, including software engineering, data mining, computer graphics, information visualization, and human-computer interaction (Teyseyre & Campo, 2008). More precisely, software visualization is a specialized area of information visualization that can be defined as:

a representation of computer programs, associated documentation and data, that enhances, simplifies and clarifies the mental representation the software engineer has of the operation of a computer system.

(Mili & Steiner, 2002)

Software visualization in 2D has been extensively studied, and many techniques for representing software systems have been proposed. However, there is still a demand for effective program comprehension techniques and methods.

Although the question of the benefits offered by 3D over 2D still remains to be answered, a growing area of research is investigating the application of 3D graphics to software visualization with good results. Researchers are trying to find new 3D visual representations to overcome some of the limitations of 2D and exploit 3D’s richer expressiveness. Three-dimensional software visualization has been studied in such areas as algorithm animation for educational purposes, debugging, 3D programming, requirements engineering, software evolution, cyberattacks, ontology visualization and semantic Web, mobile objects, visualization for reverse engineering, software maintenance, and comprehension at different levels of abstraction (source code, object-oriented systems, and software architectures), among others (Teyseyre & Campo, 2008).

6.1.2.2 Definition of 3D Visualization Software

Three-dimensional visualization software is used to view and interrogate 3D models and other deliverables created using mechanical computer-aided design (MCAD) software (Lifecycle Insights).

6.1.2.2.1 Capabilities Provided

MCAD software provides some combination of the following capabilities (Lifecycle Insights):

  • Mobile 3D visualization: While some activities in 3D visualization applications require a user to sit at a desk, many do not. Software providers have been active in moving their applications to mobile platforms like tablets and smartphones. As a result, engineering and non-engineering stakeholders alike can work on the go.

6.1.3 Visualization Techniques

There are many techniques to visualize data. In addition to standard 2D/3D techniques such as x-y (x-y-z) plots, bar charts, line graphs, and maps, there are a number of more sophisticated techniques. These correspond to basic visualization principles that may be combined to implement a specific visualization system (Keim & Ward, 2002).

6.1.3.1 Geometrically Transformed Displays

Geometrically transformed display techniques aim at finding “interesting” transformations of multidimensional datasets. The class of geometric display methods includes techniques from exploratory statistics, such as scatterplot matrices (Andrews, 1972; Cleveland, 1993), and techniques that can be subsumed under the term “projection pursuit” (Huber, 1985). Other geometric projection techniques include Prosection Views (Furnas & Buja, 1994; Spence, Tweedie, Dawkes, & Su, 1995), Hyperslice (Van Wijk & Van Liere, 1993), and the well-known parallel coordinates visualization technique (Inselberg & Dimsdale, 1990). The parallel coordinates technique maps the k-dimensional space onto the two display dimensions by using k axes that are parallel to each other (either horizontally or vertically oriented), evenly spaced across the display. The axes correspond to the dimensions and are linearly scaled from the minimum to the maximum value of the corresponding dimension. Each data item is presented as a chain of connected line segments, intersecting each of the axes at a location corresponding to the value of the considered dimensions (see Figure 6.2) (Keim & Ward, 2002).
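As a minimal sketch of the parallel coordinates mapping just described, the following computes, for each data item, the positions at which its chain of line segments intersects the k parallel axes, each axis linearly rescaled from the dimension’s minimum to its maximum. The sample data are invented for illustration, and the actual drawing of the polylines is omitted:

```python
def parallel_axis_positions(data):
    """For each data item, compute where its polyline intersects each of
    the k parallel axes: values are rescaled linearly so the dimension's
    minimum maps to position 0.0 and its maximum to position 1.0."""
    k = len(data[0])
    lo = [min(row[j] for row in data) for j in range(k)]
    hi = [max(row[j] for row in data) for j in range(k)]
    span = [h - l if h > l else 1.0 for l, h in zip(lo, hi)]  # guard constant dims
    return [[(row[j] - lo[j]) / span[j] for j in range(k)] for row in data]

# Three 3-dimensional items -> three polylines crossing three axes.
positions = parallel_axis_positions([[1, 10, 100], [2, 30, 50], [3, 20, 75]])
```

Drawing then amounts to plotting each row of `positions` against the axis indices 0..k−1 and connecting the points with line segments.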

6.1.3.2 Iconic Displays

Another class of visual data exploration techniques is the iconic display method. The idea is to map the attribute values of a multidimensional data item to the features of an icon. Icons can be arbitrarily defined; they may be little faces (Chernoff, 1973), needle icons as used in the Massive Graph Visualizer (MGV), star icons (Ward, 1994), stick figure icons (Pickett & Grinstein, 1988), color icons (Keim & Kriegel, 1994; Levkowitz, 1991), or TileBars (Hearst, 1995), for example. The visualization is generated by mapping the attribute values of each data record to the features of the icons. In the case of the stick figure technique, for example, two dimensions are mapped to the display dimensions, and the remaining dimensions are mapped to the angles and/or limb lengths of the stick figure icon. If the data items are relatively dense with respect to the two display dimensions, the resulting visualization presents texture patterns that vary according to the characteristics of the data and are, therefore, detectable by pre-attentive perception.


FIGURE 6.2 Parallel coordinates visualization (Keim & Ward, 2002).


FIGURE 6.3 Iris dataset, displayed using star glyphs positioned based on the first two principal components (from XmdvTool; see Ward, 1994).

Figure 6.3 shows an example of this class of techniques. Each data point is represented by a star icon/glyph, where each data dimension controls the length of a ray emanating from the center of the icon. In this example, the positions of the icons are determined using principal component analysis (PCA) to convey more information about data relations. Other data attributes could also be mapped to the icon position (Keim & Ward, 2002).
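The ray construction behind a star glyph can be sketched as follows: dimension i of a data item controls the length of a ray at angle 2πi/k around the icon center. The values are assumed to be pre-normalized to the range 0..1; XmdvTool’s actual normalization and layout are richer than this illustrative sketch:

```python
import math

def star_glyph_rays(values, center=(0.0, 0.0), radius=1.0):
    """values: one data item, each dimension already normalized to 0..1.
    Dimension i becomes a ray of length radius*values[i] at angle 2*pi*i/k.
    Returns the ray endpoints around the icon center."""
    k = len(values)
    endpoints = []
    for i, v in enumerate(values):
        angle = 2.0 * math.pi * i / k
        endpoints.append((center[0] + radius * v * math.cos(angle),
                          center[1] + radius * v * math.sin(angle)))
    return endpoints

# One 4-dimensional item -> four ray endpoints.
rays = star_glyph_rays([1.0, 0.5, 0.25, 0.5])
```

Rendering a dataset would repeat this per record, with `center` taken, e.g., from the first two principal components as in Figure 6.3.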

6.1.3.3 Dense Pixel Displays

The basic idea of dense pixel techniques is to map each dimension value to a colored pixel and group the pixels belonging to each dimension into adjacent areas (Keim, 2000a). Since dense pixel displays generally use one pixel per data value, the techniques allow the visualization of the largest amount of data possible on current displays (up to about 1,000,000 data values). If each data value is represented by one pixel, the main question is how to arrange the pixels on the screen. Dense pixel techniques use different arrangements for different purposes. When the pixels are arranged in an appropriate way, the resulting visualization provides detailed information on local correlations, dependencies, and hot spots (Keim & Ward, 2002).

Well-known examples are the recursive pattern technique (Keim, Kriegel, & Ankerst, 1995) and the circle segments technique (Ankerst, Keim, & Kriegel, 1996).

The recursive pattern technique is based on a generic recursive back-and-forth arrangement of the pixels and is particularly aimed at representing datasets with a natural order according to one attribute (e.g., time series data). The user may specify parameters for each recursion level and thereby control the arrangement of the pixels to form semantically meaningful substructures. The base element on each recursion level is a pattern of height h and width w, as specified by the user. First, the elements correspond to single pixels arranged within a rectangle of height h and width w, from left to right, then backward from right to left, then again forward from left to right, and so on. The same basic procedure is applied to all recursion levels; the only difference is that the basic elements on level i are the patterns resulting from the level (i-1) arrangements. Figure 6.4 shows a sample recursive pattern visualization of financial data (Keim & Ward, 2002). The visualization shows 20 years (January 1974-April 1995) of daily prices of the 100 stocks contained in the Frankfurt Stock Index (FAZ).
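The back-and-forth arrangement at a single recursion level can be sketched as a simple “snake” ordering. The full recursive pattern technique applies this arrangement at every level, with the level-i elements being the patterns produced at level (i-1); this one-level sketch is illustrative only:

```python
def snake_positions(n, w):
    """Back-and-forth ('boustrophedon') pixel arrangement used at each
    recursion level of the recursive pattern technique: fill a width-w
    rectangle left to right, then right to left on the next row, etc.
    Returns the (row, col) position of each of the n values in order."""
    positions = []
    for idx in range(n):
        row, col = divmod(idx, w)
        if row % 2 == 1:
            col = w - 1 - col  # reverse direction on odd rows
        positions.append((row, col))
    return positions

# 8 values in a width-4 rectangle: first row forward, second row backward.
order = snake_positions(8, 4)
# order == [(0,0), (0,1), (0,2), (0,3), (1,3), (1,2), (1,1), (1,0)]
```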

FIGURE 6.4 Dense pixel displays: recursive pattern technique (Keim & Ward, 2002).

The idea of the circle segments technique (Ankerst, Keim, & Kriegel, 1996) is to represent the data in a circle divided into segments, one for each attribute. Within the segments, each attribute value is again visualized by a single colored pixel. The arrangement of the pixels starts at the center of the circle and continues to the outside on a line orthogonal to the segment-halving line in a back-and-forth manner. The rationale of this approach is that close to the center, all attributes are close to each other, thus enhancing the visual comparison of their values. Figure 6.5 shows an example of circle segment visualization using the same data (this time 50 stocks) as Figure 6.4 (Keim & Ward, 2002).

FIGURE 6.5 Dense pixel displays: circle segments technique (Keim & Ward, 2002).

6.1.3.4 Stacked Displays

Stacked display techniques are tailored to present data partitioned in a hierarchical fashion. In the case of multidimensional data, the data dimensions to be used for partitioning the data and building the hierarchy have to be selected appropriately. An example of a stacked display technique is dimensional stacking (LeBlanc, Ward, & Wittels, 1990). The basic idea is to embed one coordinate system inside another coordinate system; i.e., two attributes form the outer coordinate system, two other attributes are embedded into the outer coordinate system, and so on. The display is generated by dividing the outermost level coordinate system into rectangular cells, and within the cells, the next two attributes are used to span the second-level coordinate system. This process may be repeated multiple times. The usefulness of the resulting visualization largely depends on the data distribution of the outer coordinates; therefore, the dimensions used to define the outer coordinate system have to be selected carefully. A rule of thumb is to choose the most important dimensions first.

FIGURE 6.6 Dimensional stacking visualization of drill hole mining data (Keim & Ward, 2002).

A dimensional stacking visualization of mining data with longitude and latitude mapped to the outer x and y axes, as well as ore grade and depth mapped to the inner x and y axes, is shown in Figure 6.6 (Keim & Ward, 2002).
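A minimal two-level sketch of dimensional stacking: two outer attributes are discretized into coarse bins that select a rectangular cell, and two inner attributes select a sub-cell within it. The equal-width binning helper is an assumption for illustration; any discretization could be substituted:

```python
def to_bin(value, lo, hi, bins):
    """Discretize an attribute value into one of `bins` equal-width bins."""
    if value >= hi:
        return bins - 1
    return int((value - lo) / (hi - lo) * bins)

def stacked_cell(outer_xy, inner_xy, inner_resolution):
    """Two-level dimensional stacking: the outer (x, y) bin pair selects a
    coarse cell; the inner (x, y) bin pair selects the sub-cell inside it.
    Returns the final (x, y) grid cell on the display."""
    (ox, oy), (ix, iy) = outer_xy, inner_xy
    nx, ny = inner_resolution
    return (ox * nx + ix, oy * ny + iy)

# Outer cell (2, 1), inner sub-cell (0, 3), inner grid of 4 x 5 sub-cells.
cell = stacked_cell((2, 1), (0, 3), (4, 5))
```

For the mining data of Figure 6.6, the outer pair would be longitude/latitude bins and the inner pair ore grade/depth bins.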

6.1.4 Goals of Visualization Techniques

6.1.4.1 Explorative Analysis

  • Starting point: Data without hypotheses about the data.
  • Process: Interactive, usually undirected search for structures, trends, etc.
  • Result: Visualization of the data, providing hypotheses about the data (Keim, 2000).

6.1.4.2 Confirmative Analysis

  • Starting point: Hypotheses about the data.
  • Process: Goal-oriented examination of the hypotheses.
  • Result: Visualization of the data, allowing the confirmation or rejection of the hypotheses (Keim, 2000).

6.1.4.3 Presentation

  • Starting point: Facts to be presented are fixed a priori.
  • Process: Choice of an appropriate presentation technique.
  • Result: High-quality visualization of the data presenting the facts (Keim, 2000).

6.1.5 Data Type to Be Visualized

In information visualization, the data usually consist of a large number of records, each consisting of a number of variables or dimensions. Each record corresponds to an observation, measurement, transaction, etc. Examples are customer properties, e-commerce transactions, and physical experiments. The number of attributes can differ from dataset to dataset: one particular physical experiment, for example, can be described by five variables, while another may need hundreds of variables. We call the number of variables the dimensionality of the dataset. Datasets may be one-dimensional, two-dimensional, or multidimensional, or they may have more complex data types such as text/hypertext or hierarchies/graphs. A distinction is sometimes made between dense (or grid) dimensions and dimensions that may have arbitrary values. Depending on the number of dimensions with arbitrary values, the data are sometimes also called univariate, bivariate, etc. (Keim, 2002).

6.1.5.1 One-Dimensional Data

One-dimensional data usually have one dense dimension. A typical example of one-dimensional data is temporal data. Note that one or more data values may be associated with each point in time. Examples are time series of stock prices (see Figures 6.4 and 6.5) and the time series of news data used in ThemeRiver (Keim, 2002).

6.1.5.2 Two-Dimensional Data

Two-dimensional data have two distinct dimensions. A typical example is geographical data, where the two distinct dimensions are longitude and latitude; x-y plots are a typical method to show two-dimensional data, and maps are a special type of x-y plot used to show two-dimensional geographical data. Examples are the geographical maps used in Polaris and in MGV. Although it seems easy to deal with temporal or geographic data, caution is advised. If the number of records to be visualized is large, temporal axes and maps quickly become cluttered and may not help in understanding the data (Keim, 2002).

6.1.5.3 Multidimensional Data

Many datasets consist of more than three attributes and do not allow a simple visualization as 2D or 3D plots. Examples of multidimensional (or multivariate) data are tables from relational databases, which often have tens to hundreds of columns (or attributes). Since there is no simple mapping of the attributes to the two dimensions of the screen, more sophisticated visualization techniques are needed. An example of a technique that allows the visualization of multidimensional data is the parallel coordinates technique (Inselberg & Dimsdale, 1990) (see Figure 6.2), which is also used in the scalable framework (see Figure 6.7). Parallel coordinates display each multidimensional data item as a polygonal line that intersects the dimension axes at the positions corresponding to the data values for the corresponding dimensions (Keim, 2002).

6.1.5.4 Text and Hypertext

Not all data types can be described in terms of dimensionality. In the age of the World Wide Web, important data types include text and hypertext, as well as multimedia Web page contents. These data types differ from others in that they cannot easily be described by numbers; therefore, most of the standard visualization techniques cannot be applied. In most cases, the data must first be transformed into description vectors before visualization techniques can be used. An example of a simple transformation is word counting (Nowell, Havre, Hetzler, & Whitney, 2001), often combined with PCA or multidimensional scaling (Keim, 2002).
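A minimal sketch of the word-counting transformation just mentioned: a text is reduced to a description vector of term counts over a fixed vocabulary, which could then be fed to PCA or multidimensional scaling. The vocabulary and the tokenization rule here are simplistic assumptions:

```python
import re
from collections import Counter

def term_count_vector(text, vocabulary):
    """Transform a document into a description vector of word counts over
    a fixed vocabulary, as a precursor to PCA or multidimensional scaling."""
    words = re.findall(r"[a-z]+", text.lower())  # crude tokenizer (assumption)
    counts = Counter(words)
    return [counts[term] for term in vocabulary]

vec = term_count_vector("The data are data.", ["data", "the", "are", "web"])
# vec == [2, 1, 1, 0]
```

Stacking one such vector per document yields the document-by-term matrix on which the dimensionality reduction operates.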

6.1.5.5 Hierarchies and Graphs

Data records often have some relationship to other pieces of information. Graphs are widely used to represent such interdependencies. A graph consists of a set of objects, called nodes, and connections between these objects, called edges. Examples are the email interrelationships among people, their shopping behavior, the file structure of a hard disk, and the hyperlinks in the World Wide Web. A number of specific visualization techniques deal with hierarchical and graph data (Keim, 2002).


FIGURE 6.7 Refinement of geographical granularity (Kreuseler, Lopez, & Schumann, 2001).

6.1.5.6 Algorithms and Software

Another class of data is algorithms and software. Coping with large software projects is a challenge. The goal of visualization here is to support software development by helping to understand algorithms (e.g., by showing the flow of information in a program), to enhance the understanding of written code (e.g., by representing the structure of thousands of source code lines as graphs), and to support the programmer in debugging the code (e.g., by visualizing errors). A large number of tools and systems support these tasks (Keim, 2002).

6.1.6 Visualization Operations

Visualization operations create renditions of given scenes or object systems. Their purpose is to facilitate the visual perception of object information. They can be scene-based or object-based (Udupa, 1999).

6.1.6.1 Scene-Based Visualization

In scene-based visualization, renditions are created directly from given scenes. Within this approach, two further subclasses may be identified: section mode and volume mode (Udupa, 1999).

6.1.6.1.1 Section Mode

Opinions differ on what constitutes a “section” and how this information is displayed. Natural sections may be axial, coronal, or sagittal; oblique or curved sections are also possible. Information is displayed as a montage, with the use of roam-through (fly-through) navigation, gray scale, and pseudo-color.

Figure 6.8 shows a montage display of the natural sections of a computed tomography (CT) scan (Udupa, 1999).

FIGURE 6.8 Montage display of a 3D CT scan of the head (Udupa, 1999).

Figure 6.9a demonstrates a 3D display-guided extraction of an oblique section from a CT scan of a pediatric patient’s head. This re-sectioning operation illustrates how visualization is needed to perform visualization itself. Figure 6.9b illustrates pseudo-color display with two sections from a brain magnetic resonance (MR) imaging study in a patient with multiple sclerosis. The two sections, representing approximately the same location in the patient’s head, were taken from 3D scenes obtained at different times and subsequently registered. The sections are assigned red and green hues. The display shows yellow (produced by a combination of the red and green hues) where the sections match perfectly or where there has been no change (e.g., in the lesions). At other places, either red or green appears (Udupa, 1999).

FIGURE 6.9 (a) Three-dimensional display-guided extraction of an oblique section from CT data obtained in a patient with a craniofacial disorder; (b) Pseudo-color display (Udupa, 1999).
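The red/green pseudo-color composition described above can be sketched directly: the two registered gray-scale sections are assigned to the red and green channels of a single image, so matching intensities combine toward yellow while mismatches remain red or green. Plain nested lists stand in for image arrays in this sketch:

```python
def pseudo_color(section_a, section_b):
    """Assign two registered gray-scale sections (same-shape 2D lists of
    0..255 intensities) to the red and green channels of one RGB image.
    Where the sections match, red + green reads as yellow; where they
    differ, the pixel leans red or green."""
    rgb = []
    for row_a, row_b in zip(section_a, section_b):
        rgb.append([(a, b, 0) for a, b in zip(row_a, row_b)])
    return rgb

# One matching pixel (yellow) and one pixel present only in section_b (green).
img = pseudo_color([[200, 0]], [[200, 255]])
```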

6.1.6.1.2 Volume Mode

In volume mode visualization, information may be displayed as surfaces, interfaces, or intensity distributions with the use of surface rendering, volume rendering, or maximum intensity projection (MIP). A projection technique is always needed to move from the higher-dimensional scene to the 2D screen of the monitor. For scenes of four or more dimensions, 3D “cross sections” must first be determined, after which a projection technique can be applied to move from 3D to 2D. Two approaches may be used: first, ray casting (Levoy, 1988) consists of tracing a line perpendicular to the viewing plane from every pixel in the viewing plane into the scene domain; second, voxel projection (Frieder, Gordon, & Reynolds, 1985) consists of directly projecting voxels encountered along the projection line from the scene onto the viewing plane (see Figure 6.10). Voxel projection is generally considerably faster than ray casting; however, either of these projection methods may be used with any of the three rendering techniques (MIP, surface rendering, volume rendering) (Udupa, 1999).
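For an axis-aligned viewing direction, the two projection approaches can be sketched side by side. This is a simplified illustration (orthographic view down the z axis, unit ray steps); both functions and the demo scene are assumptions for the sketch, and both should produce the same depth map:

```python
import numpy as np

def ray_cast_first_hit(scene, threshold):
    """Ray casting: for each pixel of the viewing plane, walk along the
    ray (here the z axis) and record the depth of the first voxel at or
    above `threshold`; -1 means the ray missed the object."""
    ny, nx = scene.shape[1:]
    depth = -np.ones((ny, nx), dtype=int)
    for y in range(ny):
        for x in range(nx):
            for z in range(scene.shape[0]):
                if scene[z, y, x] >= threshold:
                    depth[y, x] = z
                    break
    return depth

def voxel_project_first_hit(scene, threshold):
    """Voxel projection: sweep the scene far to near and project every
    above-threshold voxel onto the viewing plane, overwriting, so the
    closest voxel wins."""
    ny, nx = scene.shape[1:]
    depth = -np.ones((ny, nx), dtype=int)
    for z in range(scene.shape[0] - 1, -1, -1):  # farthest slice first
        mask = scene[z] >= threshold
        depth[mask] = z
    return depth

scene = (np.arange(3 * 4 * 4) % 7).reshape(3, 4, 4)
d1 = ray_cast_first_hit(scene, 5)
d2 = voxel_project_first_hit(scene, 5)
```

The voxel-projection version visits each slice once with vectorized writes, which hints at why it is generally faster than walking a ray per pixel.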

In MIP, the intensity assigned to a pixel in the rendition is simply the maximum scene intensity encountered along the projection line (see Figure 6.11a) (Brown & Riederer, 1992; Schreiner, Paschal, & Galloway, 1996). MIP is the simplest of all 3D rendering techniques. It is most effective when the objects of interest are the brightest in the scene and have a simple 3D morphology and a minimal gradation of intensity values. Contrast material-enhanced CT angiography and MR angiography are ideal applications for this method, and MIP is commonly used (Napel, Marks, & Rubin, 1992; Hertz et al., 1993). Its main advantage is that it requires no segmentation. However, the ideal conditions mentioned earlier frequently go unfulfilled, due (for example) to the presence of other bright objects, such as clutter from surface coils in MR angiography, bone in CT angiography, or other obscuring vessels that may not be of interest. Consequently, some segmentation eventually becomes necessary (Udupa, 1999).
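Because MIP keeps only the maximum intensity along each projection line, it reduces to a single array reduction for an axis-aligned view. A minimal sketch; the toy "angiography" scene is an assumption:

```python
import numpy as np

def mip(scene, axis=0):
    """Maximum intensity projection: each rendition pixel is the maximum
    scene intensity encountered along the projection line. No
    segmentation is required."""
    return scene.max(axis=axis)

angio = np.zeros((3, 2, 2))
angio[1, 0, 0] = 200.0   # a bright "vessel" voxel
angio[2, 1, 1] = 90.0
proj = mip(angio)        # 2D rendition
```

The sketch also makes MIP's limitation obvious: any brighter structure along the ray (bone, coil clutter) wins the maximum and obscures the vessel, which is why segmentation eventually becomes necessary.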


FIGURE 6.10 Schematic of projection techniques for volume mode visualization (Udupa, 1999).


FIGURE 6.11 Fuzzy connected segmentation: (a) Three-dimensional maximum intensity projection (MIP) rendition of an MR angiography scene; (b) MIP rendition of 3D fuzzy connected vessels detected in the scene in (a). Fuzzy connectedness has been used to remove the clutter that obscures the vasculature (Udupa, 1999).

In surface rendering (Goldwasser & Reynolds, 1987), object surfaces are portrayed in the rendition. A threshold interval must be specified to indicate the object of interest in the given scene. Clearly, speed is of the utmost importance in surface rendering because the idea is that object renditions are created interactively directly from the scene as the threshold is changed. Instead of thresholding, any automatic, hard, boundary- or region-based method can be used. In such cases, however, the parameters of the method will have to be specified interactively, and the speed of segmentation and rendition must be sufficient to make this mode of visualization useful. Although rendering based on thresholding can presently be accomplished in about 0.03-0.25 s on a Pentium 300 with the use of appropriate algorithms in software (Udupa, Odhner, & Samarasekera, 1994), more sophisticated segmentation methods (e.g., kNN) may not offer interactive speed.
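The threshold-interval specification described above amounts to a hard, region-based segmentation that can be recomputed interactively as the interval changes. A minimal sketch, with illustrative toy CT numbers:

```python
import numpy as np

def threshold_interval(scene, t_lo, t_hi):
    """Hard segmentation by a threshold interval: voxels whose intensity
    falls in [t_lo, t_hi] are taken to belong to the object of interest;
    the object's surface is then rendered from this binary scene."""
    return (scene >= t_lo) & (scene <= t_hi)

ct = np.array([[[100, 400], [1200, 80]]])   # toy intensities
bone = threshold_interval(ct, 300, 2000)    # a "bone-like" interval (assumed values)
```

Any other automatic hard segmentation method could replace this step, provided its parameters can be set interactively and it runs fast enough to keep the rendering loop responsive.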

The actual rendering process consists of three basic steps: projection, hidden part removal, and shading. These steps are needed to impart a sense of three dimensionality to the rendered image. Additional cues for three dimensionality may be provided by techniques such as stereoscopic display, motion parallax by rotation of the objects, shadowing, and texture mapping (Udupa, 1999).

If ray casting is used as the method of projection, hidden part removal is performed by stopping at the first voxel encountered along each ray that satisfies the threshold criterion (Hohne & Bernstein, 1986). The value (shading) assigned to the pixel in the viewing plane that corresponds to the ray is determined as described later. If voxel projection is used, hidden parts can be removed by projecting voxels from the farthest to the closest (with respect to the viewing plane) and always overwriting the shading value, in one of a number of computationally efficient ways (Frieder, Gordon, & Reynolds, 1985; Reynolds, Gordon, & Chen, 1987; Herman & Liu, 1977; Udupa & Odhner, 1991).

The shading value assigned to a pixel p in the viewing plane depends on the voxel v that is eventually projected onto p. The faithfulness with which this value reflects the shape of the surface around v largely depends on the surface normal vector estimated at v. Two classes of methods are available for this purpose: object-based and scene-based methods.

In object-based methods (Chen, Herman, & Reynolds, 1985; Gordon & Reynolds, 1985), the vector is determined purely from the geometry of the shape of the surface in the vicinity of v.

In scene-based methods (Hohne & Bernstein, 1986), the vector is considered to be the gradient of the given scene at v; i.e., the direction of the vector is the same as the direction in which scene intensity changes most rapidly at v. Given the normal vector N at v, the shading assigned to p is usually determined as [f_d(v, N, L) + f_s(v, N, L, V)] f_D(v), where f_d is the diffuse component of reflection, f_s is the specular component, f_D is a component that depends on the distance of v from the viewing plane, and L and V are unit vectors indicating the direction of the incident light and the viewing rays. The diffuse component is independent of the viewing direction but depends solely on L (as a cosine of the angle between L and N). It captures the scattering property of the surface, whereas the specular component captures surface shininess. The specular component is highest in the direction of ideal reflection R, whose angle with N is equal to the angle between L and N. This reflection decreases as a cosine function on either side of R. By weighting the three components in different ways, different shading effects can be achieved (Udupa, 1999).
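A minimal sketch of such a shading computation, with a Lambertian diffuse term, a cosine-lobe specular term around the ideal reflection R, and linear depth attenuation; the weights, exponent, and depth model are illustrative assumptions, not values from the source:

```python
import numpy as np

def shade(N, L, V, depth, k_d=0.6, k_s=0.3, max_depth=100.0):
    """Shading value of form [f_d + f_s] * f_D at a projected surface voxel.

    N, L, V are unit vectors: surface normal, light direction, and viewing
    direction. The diffuse term depends only on the angle between L and N;
    the specular term peaks along the ideal reflection R; f_D attenuates
    with the voxel's distance from the viewing plane.
    """
    diffuse = k_d * max(float(np.dot(N, L)), 0.0)
    R = 2.0 * float(np.dot(N, L)) * N - L    # ideal reflection of L about N
    specular = k_s * max(float(np.dot(R, V)), 0.0) ** 8
    f_D = 1.0 - depth / max_depth            # nearer voxels appear brighter
    return (diffuse + specular) * f_D

N = np.array([0.0, 0.0, 1.0])
L = np.array([0.0, 0.0, 1.0])                # light head-on
V = np.array([0.0, 0.0, 1.0])
s = shade(N, L, V, depth=0.0)
```

With the light, normal, and viewing direction aligned and zero depth, both reflection terms are at their maxima, giving s = k_d + k_s.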

In scene-based surface rendering, a hard object is implicitly created and rendered "on the fly" from the given scene. In scene-based volume rendering, a fuzzy object is implicitly created and rendered on the fly from the given scene; surface rendering thus becomes a special case of volume rendering. Furthermore, volume rendering in this mode is generally much slower than surface rendering, typically requiring 3-20 s even on specialized hardware rendering engines.

The basic idea in volume rendering is to assign an opacity from 0% to 100% to every voxel in the scene. The opacity value is determined on the basis of the objectness value at the voxel and how prominently we want to portray this particular grade of objectness in the rendition. This opacity assignment is specified interactively by way of an opacity function (see Figure 6.12), wherein the vertical axis indicates percentage of opacity. Every voxel is now considered to transmit, emit, and reflect light. The goal is to determine the amount of light reaching every pixel in the viewing plane. The amount of light transmitted depends on the opacity of the voxel. Light emission depends on objectness and, hence, on opacity: the greater the objectness, the greater the emission. Similarly, reflection depends on the strength of the surface: the greater the strength, the greater the reflection (Udupa, 1999).
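An opacity function in the spirit of the fuzzy thresholding diagram (Figure 6.12) can be sketched as a trapezoid over intensity, here with assumed break points t1 < t2 <= t3 < t4:

```python
def opacity(intensity, t1, t2, t3, t4):
    """Trapezoidal opacity function: opacity ramps from 0 to 1 between
    t1 and t2, stays at 1 between t2 and t3, and ramps back to 0 between
    t3 and t4. With t1 == t2 and t3 == t4 this degenerates to hard
    thresholding."""
    if intensity <= t1 or intensity >= t4:
        return 0.0
    if intensity < t2:
        return (intensity - t1) / (t2 - t1)   # rising ramp
    if intensity <= t3:
        return 1.0                            # plateau: fully opaque
    return (t4 - intensity) / (t4 - t3)       # falling ramp

a = opacity(150, 100, 200, 300, 400)          # on the rising ramp
```

The ramps are what make the assignment fuzzy: intermediate intensities receive intermediate opacities rather than an all-or-nothing membership.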

Like surface rendering, volume rendering consists of three basic steps: projection, hidden part removal, and shading or compositing. The principles underlying projection are identical to those described for surface rendering (Udupa, 1999).

Hidden part removal is much more complicated for volume rendering than for surface rendering. In ray casting, a common method is to discard all voxels along the ray from the viewing plane beyond a point at which the “cumulative opacity” is above a high threshold (e.g., 90%) (Levoy, 1990). In voxel projection, a voxel can also be discarded if the voxels surrounding it in the direction of the viewing ray have “high” opacity (Udupa & Odhner, 1993).

The shading operation, which is more appropriately termed compositing, is also more complicated for volume rendering than for surface rendering. Compositing must take into account all three components: transmission, reflection, and emission. We can start from the voxel farthest from the viewing plane along each ray and work towards the front, calculating the output light for each voxel. The net light output by the voxel closest to the viewing plane is assigned to the pixel associated with the ray. Instead of using this back-to-front strategy, we could also make calculations from front to back, and this has actually been shown to be faster (Udupa & Odhner, 1993).
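The back-to-front strategy can be sketched as a simple recurrence along one ray: each voxel transmits a (1 - opacity) fraction of the light arriving from behind it and adds its own contribution. The `(light, opacity)` pairing is a simplification in which `light` stands in for a voxel's combined emitted and reflected light:

```python
def composite_back_to_front(samples):
    """Back-to-front compositing along one ray.

    `samples` is ordered from the voxel farthest from the viewing plane
    to the closest; each sample is a (light, opacity) pair. The net light
    output by the closest voxel is the pixel value for this ray."""
    out = 0.0
    for light, alpha in samples:              # farthest voxel first
        out = out * (1.0 - alpha) + light * alpha
    return out

pixel = composite_back_to_front([(1.0, 0.4), (0.5, 0.5), (0.8, 0.2)])
```

The front-to-back variant computes the same sum in the opposite order while tracking accumulated opacity, which is what enables the early termination described above and makes it faster in practice.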


FIGURE 6.12 Diagram of fuzzy thresholding (Udupa, 1999).


FIGURE 6.13 Scene-based volume rendering with voxel projection. Rendition of knee CT data from Figure 6.14 shows bone, fat, and soft tissue (Udupa, 1999).


FIGURE 6.14 Graded composition and hanging togetherness. CT scan of the knee illustrates graded composition of intensities and hanging togetherness. Voxels within the same object (e.g., the femur) are assigned considerably different values. Despite this gradation of values, it is not difficult to identify the voxels as belonging to the same object (hanging togetherness) (Udupa, 1999).

In volume rendering (as in surface rendering), voxel projection is substantially faster than ray casting. Figure 6.13 shows the CT knee dataset illustrated in Figure 6.14 rendered with this method. Three types of tissue—bone, fat, and soft tissue—are identified (Udupa, 1999).

6.1.6.2 Object-Based Visualization

In object-based visualization, objects are first explicitly defined and then rendered. In difficult segmentation situations, or when segmentation is time consuming or involves too many parameters, it is impractical to perform direct scene-based rendering. The intermediate step of completing object definition then becomes necessary (Udupa, 1999).

6.1.6.2.1 Surface Rendering

Surface rendering methods take hard object descriptions as input and create renditions. The methods of projection, hidden part removal, and shading are similar to those described for scene-based surface rendering, except that a variety of surface description methods have been investigated using voxels, points, voxel faces, triangles, and other surface patches (Frieder, Gordon, & Reynolds, 1985; Reynolds, Gordon, & Chen, 1987; Udupa & Odhner, 1991). Therefore, projection methods that are appropriate for specific surface elements have been developed. Figure 6.15a shows a rendition, created with the use of voxel faces on the basis of CT data, of the craniofacial skeleton in a patient with agnathia (Udupa, 1999).

Figure 6.16 shows renditions of the bones of the foot created using the same method on the basis of MR imaging data (Udupa, 1999).


FIGURE 6.15 Object-based visualization of the skull in a child with agnathia: (a) Surface-rendered image; (b) Subsequent volume-rendered image preceded by the acquisition of a fuzzy object representation with use of fuzzy thresholding (see Figure 6.12) (Udupa, 1999).


FIGURE 6.16 Rigid object-based registration. Sequence of 3D MR imaging scenes of the foot allows kinematic analysis of the midtarsal joints. The motion (i.e., translation and rotation) of the talus, calcaneus, and navicular and cuboid bones from one position to the other is determined by registering the bone surfaces in the two different positions (Udupa, 1999).

6.1.6.2.2 Volume Rendering

Volume rendering methods take as input fuzzy object descriptions in the form of a set of voxels, wherein values for objectness and a number of other parameters (e.g., gradient magnitude) are associated with each voxel (Udupa & Odhner, 1993). Because the object description is more compact than the original scene, and additional information for increasing computation speed can be stored as part of the object description, volume rendering based on fuzzy object description can be performed at interactive speeds, even on personal computers, such as the Pentium 300, entirely in software. In fact, the rendering speed (2-15 s) is now comparable to that of scene-based volume rendering with specialized hardware engines.

Figure 6.15b shows a fuzzy object rendition of the dataset in Figure 6.15a. Figure 6.17a shows a rendition of craniofacial bone and soft tissue, both of which were defined separately using the fuzzy connected methods described earlier. Note that if we use a direct scene-based volume rendering method with the opacity function illustrated in Figure 6.12, the skin becomes indistinguishable from other soft tissues and always obscures the rendition of muscles (see Figure 6.17b) (Udupa, 1999).

6.1.6.3 Misconceptions in Visualization

Several inaccurate statements concerning visualization frequently appear in the literature. The following statements are seen most often (Udupa, 1999).

6.1.6.3.1 Surface Rendering Is the Same as Thresholding

Clearly, thresholding is only one—albeit the simplest—of the many available hard region- and boundary-based segmentation methods, the output of any of which can be surface rendered (Udupa, 1999).


FIGURE 6.17 Visualization with volume rendering: (a) Object-based volume-rendered image demonstrates bone and soft-tissue structures (muscles) detected earlier as separate fuzzy connected objects in a 3D craniofacial CT scene. The skin is essentially “peeled away” because of its weak connectedness to muscles; (b) Scene-based volume-rendered version of the scene in (a) was acquired with use of the opacity function (see Figure 6.12) separately for bone and soft tissue. The skin has become indistinguishable from muscles because they have similar CT numbers, thus obscuring the rendition of the muscles (Udupa, 1999).

6.1.6.3.2 Volume Rendering Does Not Require Segmentation

Although volume rendering is a general term and is used in different ways, the statement is false. The only useful volume rendering or visualization technique that requires no segmentation is MIP. The opacity assignment schemes illustrated in Figure 6.12 and described in the section entitled “Scene-Based Visualization” (Section 6.1.6.1) are clearly fuzzy segmentation strategies and involve the same problems encountered by any segmentation method. It is untenable to hold that opacity functions such as the one shown in Figure 6.12 do not represent segmentation while maintaining that the special case that results when t1 = t2 and t3 = t4 (corresponding to thresholding) does represent segmentation (Udupa, 1999).

6.1.6.3.3 The Term Volume Rendering May Be Used to Refer to Any Scene-Based Rendering Technique, as Well as Object-Based Rendering Techniques

The use of the term “volume rendering” varies widely; in one sense, it can even apply to the section mode of visualization. It is better to use volume rendering to refer to fuzzy object rendering, whether performed with scene-based or object-based methods, but not to refer to hard object rendering methods.

There are many challenges associated with visualization. First, preprocessing operations (and, therefore, visualization operations) can be applied in many different sequences to achieve the desired result. For example, the filtering-interpolation-segmentation-rendering sequence may produce renditions that are significantly different from those produced by interpolation-segmentation-filtering-rendering. With the large number of different methods possible for each operation and the various parameters associated with each operation, there are myriad ways of achieving the desired results. Figure 6.18 shows five images derived from CT data that were created by performing different operations. Systematic study is needed to determine which combination of operations is optimal for a given application. Normally, the fixed combination provided by the 3D imaging system is assumed to be the best for that application.

Second, the objective comparison of visualization methods becomes an enormous task in view of the vast number of ways we can reach the desired goal. Third, it can be challenging to achieve a realistic tissue display that includes color, texture, and surface properties (Udupa, 1999).
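The order-dependence of preprocessing sequences can be made concrete with a toy 1D example: smoothing then segmenting does not, in general, give the same result as segmenting then smoothing. Both operations below are deliberately simplistic stand-ins, not the methods discussed in the source:

```python
import numpy as np

def smooth(scene):
    """Toy 'filtering': three-point moving average with edge replication."""
    padded = np.pad(scene.astype(float), 1, mode="edge")
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

def segment(scene, threshold=100):
    """Toy hard segmentation by thresholding."""
    return scene >= threshold

signal = np.array([0, 0, 300, 0, 0])

a = segment(smooth(signal))                 # filter, then segment
b = smooth(segment(signal).astype(float))   # segment, then filter
```

Filtering first spreads the bright spike across its neighbors before thresholding, so the segmented object is three voxels wide; segmenting first yields a one-voxel object whose smoothed version never reaches the threshold anywhere.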


FIGURE 6.18 Preprocessing and visualization operations. Renditions from CT data created using five preprocessing and visualization operations (Udupa, 1999).

  • Three-dimensional model conversion to lightweight formats: 3D models in CAD applications can be exorbitantly large because they contain the full definition of how the geometry was created, often using parametric feature-based approaches. When applications in this technology category import such models, they convert them into lightweight models that are dramatically smaller and more responsive.
  • Three-dimensional model interrogation: Some non-engineering stakeholders only need to be able to interrogate 3D models to obtain the information they need to do their jobs. This includes taking measurements, creating cross sections, and checking other geometric characteristics. This set of capabilities is common for the applications in this technology category.
  • Markup and review: A critical activity in engineering is the design review. This involves engineering peers checking the validity of the design and looking for mistakes and errors. These applications enable engineers to do so by marking up the 3D model with highlights and annotations.
  • Procedure development and validation: Organizations such as manufacturing and service often need to do more than simply interrogate a 3D model. They must develop procedures that represent how the product will be manufactured on the shop floor or serviced in the field. Software providers for these applications have added capabilities that allow users to develop such procedures and then validate that they can, in fact, be completed.
  • Specialized visualization: Visualization technologies started out as generic tools that could be used in a wide variety of use cases. Since then, more and more organizations have voiced needs to produce specific types of deliverables, and software providers have enhanced their applications to accommodate them. Today, a wide range of specialized 3D visualization applications use organization-specific terminology.
 