Computer Graphics and Applications, 2003
We present the concept of volumetric depth-peeling. The proposed method is conceived to render interior and exterior iso-surfaces for a fixed iso-value and to blend them without the need to render the volume multiple times. The main advantage of our method over pre-integrated ...
Direct volume rendering (DVR) is a flexible technique for visualizing and exploring scientific and biomedical volumetric data sets. Transfer functions associate field values with colors and opacities; for complex data, however, they are often not sufficient for encoding all relevant information. We introduce a novel visualization technique termed texture-enhanced DVR to visualize supplementary data such as material properties and additional data fields. Smooth transitions in the underlying data are represented by coherently morphing textures within user-defined regions of interest. The framework seamlessly integrates into the conventional DVR process, can be executed on the GPU, is extremely powerful and flexible, and enables entirely novel visualizations.
2008
A multi-layer volume rendering framework is presented. The final image is obtained by compositing a number of renderings, each represented as a separate layer. This layer-centric framework provides a rich set of 2D operators and interactions, allowing both greater freedom and a more intuitive 2D-based user interaction. We extend the concept of compositing, traditionally understood in terms of the Porter and Duff compositing operators, to a more general and flexible set of functions. In addition to applying new functional compositing operators, the user can control each individual layer's attributes, such as its opacity, easily add layers to or remove them from the composition set, change their order in the composition, and export and import the layers in a format readily utilized in a 2D paint package. This broad space of composition functions allows for a wide variety of effects, and we present several in the context of volume rendering, including two-level volume rendering, masking, and magnification. We also discuss the integration of a 3D volume rendering engine with our 2.5D layer compositing engine.
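The layer-compositing idea above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the `Layer` class, `over`, and `composite` names are assumptions, and only the classic Porter-Duff "over" operator is shown as one instance of the generalized compositing functions the paper describes.

```python
# Hypothetical sketch of layer-centric compositing with premultiplied colors.
# Any function with the same (src, dst) -> rgba signature can replace `over`,
# which is the generalization the paper proposes.

from dataclasses import dataclass

@dataclass
class Layer:
    rgb: tuple          # premultiplied color (r, g, b)
    alpha: float        # layer opacity
    enabled: bool = True

def over(src, dst):
    """Porter-Duff 'over': src composited on top of dst (premultiplied)."""
    a = src[3] + dst[3] * (1.0 - src[3])
    rgb = tuple(s + d * (1.0 - src[3]) for s, d in zip(src[:3], dst[:3]))
    return (*rgb, a)

def composite(layers, op=over):
    """Fold the enabled layers bottom-to-top with the chosen operator."""
    result = (0.0, 0.0, 0.0, 0.0)
    for layer in layers:
        if layer.enabled:
            result = op((*layer.rgb, layer.alpha), result)
    return result
```

Because the operator is a parameter, reordering layers, toggling `enabled`, or swapping in a masking or multiplicative operator all reduce to changes in the argument list, which is what makes the 2D-style layer interaction cheap.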
2018
This paper reports on the implementation of a Direct Volumetric Rendering (DVR) and visualization technique in the context of a general-purpose, low-cost visualization software package with available source code. The technique employs a texture mapping strategy to combine scalar values of volumetric data into three-dimensional images. Relevant implementation issues of this technique are presented, and the results are discussed in the light of a target application, namely visualization in dentistry. The technique presented here, DVRT, has been shown to perform better than the ray casting DVR technique for the data sets of interest, and to produce very good results with shading.
Computers & Graphics, 2008
In this paper, we describe a volume rendering application for multimodal datasets based on 3D texture mapping. Our method takes as input two pre-registered voxel models and constructs two 3D textures. It renders the multimodal data by depth compositing view-aligned texture slices of the model. For each texel of a slice, it performs a fetch to each 3D texture and performs fusion and shading using a fragment shader. The application allows users to choose either emission and absorption shading or surface shading for each model. Shading is implemented by using two auxiliary 1D textures for each transfer function. Moreover, data fusion takes into account the presence of surfaces and the specific values that are merged, so that the weight of each modality in fusion is not constant but defined through a 2D transfer function implemented as a 2D texture. This method is very fast and versatile and it provides a good insight into multimodal data.
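The per-texel fusion step described above can be sketched in a few lines. All names here (`make_weight_lut`, `fuse_slice`) and the particular weighting rule are assumptions for illustration; the paper's actual 2D transfer function is an authored texture, not a formula.

```python
# Illustrative sketch: fuse two co-registered modality slices texel by texel,
# with the weight of modality A looked up in a 2D table indexed by the two
# sample values (standing in for the paper's 2D transfer-function texture).

import numpy as np

def make_weight_lut(n=256):
    """Toy 2D transfer function: weight of modality A grows with its
    value relative to modality B. A real application authors this table."""
    a = np.linspace(0.0, 1.0, n)[:, None]
    b = np.linspace(0.0, 1.0, n)[None, :]
    return a / np.maximum(a + b, 1e-6)

def fuse_slice(slice_a, slice_b, weight_lut):
    """Per-texel fusion of two modalities (values normalized to [0, 1])."""
    n = weight_lut.shape[0]
    ia = np.clip((slice_a * (n - 1)).astype(int), 0, n - 1)
    ib = np.clip((slice_b * (n - 1)).astype(int), 0, n - 1)
    w = weight_lut[ia, ib]              # weight of modality A, per texel
    return w * slice_a + (1.0 - w) * slice_b
```

On the GPU this lookup is a single texture fetch per fragment, which is why the non-constant weighting adds essentially no cost over fixed-weight fusion.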
2014
This paper presents an algorithm called surfseek for selecting surfaces on the most visible features in direct volume rendering (DVR). The algorithm is based on a previously published technique (WYSIWYP) for picking 3D locations in DVR. The new algorithm projects a surface patch onto the DVR image, consisting of multiple rays. For each ray the algorithm uses WYSIWYP, or a variant of it, to find the candidates for the most visible locations along the ray. Using these candidates the algorithm constructs a graph and computes a minimum cut on this graph. The minimum cut represents a highly visible but relatively smooth surface. In the last step the selected surface is displayed. We provide examples of the results on real-world datasets as well as artificially generated datasets.
Journal of WSCG, 2004
1. INTRODUCTION Volume rendering has become an important tool for scientific visualization in the last decade. The major focus in this area lies in the exploration of datasets obtained from Computed Tomography (CT), Magnetic Resonance Imaging (MRI) or simulations. Iso-...
ACM Siggraph Computer Graphics, 1988
A technique for rendering images of volumes containing mixtures of materials is presented. The shading model allows both the interior of a material and the boundary between materials to be colored. Image projection is performed by simulating the absorption of light along the ray path to the eye. The algorithms used are designed to avoid artifacts caused by aliasing and quantization and can be efficiently implemented on an image computer. Images from a variety of applications are shown.
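The absorption-along-the-ray projection described above reduces to the standard emission-absorption compositing recurrence. The sketch below is a minimal 1D illustration; the toy `shade` transfer function is an invented assumption, not the paper's shading model.

```python
# Minimal sketch of the emission-absorption ray integral: samples along a
# ray are composited back to front, each sample's emission attenuated by
# the material in front of it: C = c_i + (1 - a_i) * C_behind.

def shade(sample):
    """Toy transfer function mapping a scalar to (emission, opacity)."""
    opacity = min(1.0, max(0.0, sample))
    emission = opacity * sample       # brighter where denser (illustrative)
    return emission, opacity

def integrate_ray(samples):
    """Back-to-front compositing of samples ordered front (eye) to back."""
    color = 0.0
    for s in reversed(samples):
        c, a = shade(s)
        color = c + (1.0 - a) * color
    return color
```

Note that a fully opaque sample (`a == 1`) completely hides everything behind it, which is exactly the absorption behavior the shading model simulates.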
Computer Graphics Forum, 2009
Volumetric rendering is widely used to examine 3D scalar fields from scanners and direct numerical simulation datasets. One key aspect of volumetric rendering is the ability to provide shading cues to aid in understanding structure contained in the datasets. While shading models that reproduce natural lighting conditions have been shown to better convey depth information and spatial relationships, they traditionally require considerable (pre-)computation. In this paper, we propose a novel shading model for interactive direct volume rendering that provides perceptual cues similar to those of ambient occlusion, for both solid and transparent surface-like features. An image space occlusion factor is derived from the radiative transport equation based on a specialized phase function. Our method does not rely on any precomputation and thus allows for interactive explorations of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions. Unlike ambient occlusion methods, modifications to the volume, such as clipping planes or changes to the transfer function, are incorporated into the resulting occlusion-based shading.
Figure 1: From left to right: a) Visible male data set with occlusion of solid and transparent materials (3.4 FPS, 996 slices) b) CT scan of an engine block where a clipping plane was used to show the exhaust port (13.3 FPS, 679 slices) c) Bonsai data set of which complex features are exposed by our ambient occlusion approximation (4.
IEEE Transactions on Visualization and Computer Graphics, 2000
Visualization of volumetric data faces the difficult task of finding effective parameters for the transfer functions. Those parameters can determine the effectiveness and accuracy of the visualization. Frequently, volumetric data includes multiple structures and features that need to be differentiated. However, if those features have the same intensity and gradient values, existing transfer functions are limited in their ability to illustrate those similar features with different rendering properties. We introduce texture-based transfer functions for direct volume rendering. In our approach, the voxel's resulting opacity and color are based on local textural properties rather than individual intensity values. For example, if the intensity values of the vessels are similar to those on the boundary of the lungs, our texture-based transfer function will analyze the textural properties in those regions and color them differently even though they have the same intensity values in the volume. The use of texture-based transfer functions has several benefits. First, structures and features with the same intensity and gradient values can be automatically visualized with different rendering properties. Second, segmentation or prior knowledge of the specific features within the volume is not required for classifying these features differently. Third, textural metrics can be combined and/or maximized to capture and better differentiate similar structures. We demonstrate our texture-based transfer function for direct volume rendering with synthetic and real-world medical data to show the strength of our technique.
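The core idea above — classifying a voxel by a local textural statistic rather than its raw intensity — can be sketched with a single metric. Everything here is a hedged illustration: local variance is just one possible textural metric, and the threshold mapping stands in for a full texture-based transfer function.

```python
# Sketch: assign opacity from a local texture statistic (neighborhood
# variance) so that two regions with identical intensities but different
# textures receive different rendering properties. Brute-force, for clarity.

import numpy as np

def local_variance(volume, radius=1):
    """Per-voxel variance over a (2r+1)^3 neighborhood."""
    out = np.zeros(volume.shape, dtype=float)
    pad = np.pad(volume.astype(float), radius, mode='edge')
    for z, y, x in np.ndindex(volume.shape):
        block = pad[z:z + 2 * radius + 1,
                    y:y + 2 * radius + 1,
                    x:x + 2 * radius + 1]
        out[z, y, x] = block.var()
    return out

def texture_opacity(volume, threshold=0.01):
    """Opaque where the local texture is 'busy', transparent where smooth."""
    return (local_variance(volume) > threshold).astype(float)
```

A smooth region and a vessel-filled region with the same mean intensity get different variances, so they separate without segmentation, which is the benefit the abstract claims.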
Computer Graphics Forum, 2008
We propose a method for rendering volumetric data sets at interactive frame rates while supporting dynamic ambient occlusion as well as an approximation to color bleeding. In contrast to ambient occlusion approaches for polygonal data, techniques for volumetric data sets face additional challenges, since changing rendering parameters such as the transfer function or the thresholding can drastically alter the structure of the data set and thus the light interactions. Therefore, during a preprocessing step that is independent of the rendering parameters, we capture light interactions for all combinations of structures extractable from a volumetric data set. To compute the light interactions between the different structures, we combine this preprocessed information during rendering based on the rendering parameters defined interactively by the user. Our method thus supports interactive exploration of a volumetric data set while still giving the user control over the most important rendering parameters. For instance, if the user alters the transfer function to extract different structures from a volumetric data set, the light interactions between the extracted structures are captured in the rendering while still allowing interactive frame rates. Compared to known local illumination models for volume rendering, our method does not introduce any substantial rendering overhead and can be integrated easily into existing volume rendering applications. In this paper we explain our approach, discuss the implications for interactive volume rendering, and present the achieved results.
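The render-time combination step described above can be sketched abstractly. This is a speculative reading, not the paper's data structure: `pairwise` stands in for the preprocessed per-structure-pair light interactions, and `visibility` for the opacities the current transfer function assigns to each structure.

```python
# Speculative sketch: assemble a structure's occlusion at render time from
# precomputed pairwise interactions, weighted by how visible each potential
# occluder currently is under the user's transfer function.

def combined_occlusion(structure, pairwise, visibility):
    """pairwise[(s, t)]: preprocessed occlusion that structure t casts on s.
    visibility[t]: current opacity of structure t (0 = extracted away)."""
    occ = 0.0
    for (s, t), value in pairwise.items():
        if s == structure:
            occ += visibility.get(t, 0.0) * value
    return min(occ, 1.0)
```

Because the expensive pairwise terms are fixed at preprocessing time, editing the transfer function only changes the cheap `visibility` weights, which is what keeps the exploration interactive.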