Non-Iterative, Feature-Preserving Mesh Smoothing

Thouis R. Jones (MIT), Frédo Durand (MIT), Mathieu Desbrun (USC)
Figure 1: The dragon model (left) is artificially corrupted by Gaussian noise (σ = 1/5 of the mean edge length) (middle), then smoothed in a single pass by our method (right). Note that features such as sharp corners are preserved.
Abstract
With the increasing use of geometry scanners to create 3D models, there is a rising need for fast and robust mesh smoothing to remove inevitable noise in the measurements. While most previous work has favored diffusion-based iterative techniques for feature-preserving smoothing, we propose a radically different approach, based on robust statistics and local first-order predictors of the surface. The robustness of our local estimates allows us to derive a non-iterative feature-preserving filtering technique applicable to arbitrary triangle soups. We demonstrate its simplicity of implementation and its efficiency, which make it an excellent solution for smoothing large, noisy, and non-manifold meshes.

Keywords: mesh processing, mesh fairing, robust estimation, mesh smoothing, anisotropic diffusion, bilateral filtering.
1 Introduction

With geometry scanners becoming more widespread and a corresponding growth in the number and complexity of scanned models, robust and efficient geometry processing becomes increasingly desirable. Even with high-fidelity scanners, the acquired 3D models are invariably noisy [Rusinkiewicz et al. 2002; Levoy et al. 2000], and therefore require smoothing. Similarly, shapes extracted from volume data (obtained by MRI or CT devices, for instance) often contain significant amounts of noise, be it topological [Guskov and Wood 2001; Wood et al. 2002] or geometric [Taubin 1995; Desbrun et al. 1999], that must be removed before further processing. Removing noise while preserving the shape is, however, no trivial matter. Sharp features are often blurred if no special care is taken. To make matters worse, smoothing must often be applied to large scanned meshes that are not even guaranteed to be manifold.

1.1 Previous Work
A wide variety of mesh smoothing algorithms have been proposed in recent years. Taubin [1995] pioneered fast mesh smoothing by proposing a simple, linear and isotropic technique to enhance the smoothness of triangulated surfaces without resorting to expensive functional minimizations. Desbrun et al. [1999] extended this approach to irregular meshes using a geometric flow analogy, and introduced the use of a conjugate gradient solver that safely removes the stability condition, allowing for significant smoothing in reasonable time even on large meshes. Other improvements followed, such as a method combining geometry smoothing and parameterization regularization [Ohtake et al. 2000]. However, these efficient techniques are all isotropic, and therefore indiscriminately smooth noise and salient features: a noisy cube as input will become extremely rounded before becoming smooth. This lack of selectivity is limiting in terms of applications.

Feature-preserving surface fairing has also been proposed more recently [Desbrun et al. 2000; Clarenz et al. 2000; Meyer et al. 2002; Zhang and Fiume 2002; Bajaj and Xu 2003], mostly inspired by image processing work on scale-space and anisotropic diffusion [Perona and Malik 1990]. The idea behind these approaches is to modify the diffusion equation to make it non-linear and/or anisotropic. The curvature tensor determines the local diffusion, thus preserving (or even enhancing) sharp features. Although the results are of much higher quality, these methods rely on shock formation to preserve details, which affects the numerical conditioning of the diffusion equations. This can cause significant computational times, even after mollification of the data. Other researchers have proposed diffusion-type smoothing on the normal field itself [Taubin 2001; Belyaev and Ohtake 2001; Ohtake et al. 2002; Tasdizen et al. 2002]; fairing is achieved by first smoothing the normal field, and then evolving the surface to match the new normals. Here again, the results are superior to those from isotropic techniques, but with roughly similar computational cost as anisotropic diffusion on meshes.

Locally adaptive Wiener filtering has also been used with success for 3D meshes by Peng et al. [2001], and for point-sampled surfaces by Pauly and Gross [2001]. However, these methods rely on semi-regular connectivity or local parameterization, respectively. A different approach is taken by Alexa [2002], similar to anisotropic diffusion, though with larger neighborhoods used for filtering. This also results in a fast method, and avoids some of the limitations discussed below, but still relies on a connected mesh and iterative application.

The diffusion-based feature-preserving techniques are, in essence, all local and iterative. From very local derivative approximations, geometry is iteratively updated until the noise has been diffused sufficiently. Numerical tools, such as preconditioned conjugate gradient solvers or algebraic multigrid solvers, can be used to improve efficiency by making the iterations more stable. Nevertheless, the diffusion-type setting that is the basis of these approaches requires manifoldness, not always present in raw scanned data. In order to address the need for robust and fast feature-preserving smoothing, we propose to recast mesh filtering as a case of robust statistical estimation.
1.2 Robust Statistics
The field of robust statistics is concerned with the development of statistical estimators that are robust to the presence of outliers and to deviations from the theoretical distribution [Huber 1981; Hampel et al. 1986]. Naïve estimators such as least-squares give too much influence to outliers, because the error function or norm they minimize is large for data points far from the estimator (quadratic in the case of least squares). In contrast, robust estimators are based on minimizing an energy that gives low weight to outliers, as illustrated by the Gaussian robust norm in Fig. 2: after a certain distance from the estimator, controlled by a scale σ, an increasingly distant outlier has only limited effect.
Figure 2: Least-squares vs. Gaussian error norm (after [Black et al. 1998]). The plots contrast the least-squares norm ρ(x) = x²/2, which grows without bound, with a Gaussian error norm that levels off beyond the scale σ, bounding the influence of distant outliers.
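To illustrate the contrast numerically, the following sketch (ours, not from the paper) compares the least-squares norm with a Gaussian (Welsch-type) error norm; the exact normalization of the robust norm is our assumption.

```python
import numpy as np

def rho_ls(x):
    """Least-squares error norm: grows quadratically, so a single
    outlier can dominate the total energy."""
    return 0.5 * x**2

def rho_gauss(x, sigma=1.0):
    """Gaussian (Welsch-type) robust error norm: saturates beyond the
    scale sigma, so a distant outlier has bounded effect."""
    return (sigma**2 / 2.0) * (1.0 - np.exp(-x**2 / sigma**2))

residuals = np.array([0.1, 0.2, -0.15, 10.0])  # last value is an outlier
print(rho_ls(residuals).sum())     # dominated by the outlier (~50)
print(rho_gauss(residuals).sum())  # outlier contributes at most sigma^2/2
```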
Black et al. [1998] showed that anisotropic diffusion can be analyzed in the framework of robust statistics. The edge-stopping functions of anisotropic diffusion [Perona and Malik 1990] serve the same role as robust energy functions. Anisotropic diffusion minimizes such a function using an iterative method.

The bilateral filter is an alternative edge-preserving filter proposed by Tomasi and Manduchi [1998]. The output E(p) at a pixel p is a weighted average of the surrounding pixels in the input image I, where the weight of a pixel q depends not only on the spatial distance ||q − p||, but also on the signal difference ||I(q) − I(p)||:

E(p) = (1/k(p)) Σ_q I(q) f(q − p) g(I(q) − I(p)),   (1)

where the normalization factor k(p) is

k(p) = Σ_q f(q − p) g(I(q) − I(p)).   (2)

In practice, a spatial Gaussian f and a Gaussian influence weight g are often used. This dependence on the signal difference allows one to give less influence to outliers. Durand and Dorsey [2002] show that bilateral filtering is a robust estimator and that a Gaussian influence weight corresponds to minimizing a Gaussian error norm. They also show that bilateral filtering is essentially similar to anisotropic diffusion. However, the bilateral filter is a non-iterative robust estimator, or w-estimator [Huber 1981], which makes it more efficient than iterative schemes. In particular, this approach does not have to deal with shock formation at strong edges, and is therefore more stable than anisotropic diffusion. See also the work by Barash [2001] and Elad [2002].
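To make Equations (1) and (2) concrete, here is a brute-force 1D bilateral filter sketch; the Gaussian forms for f and g follow the note above, while the test signal and parameter values are our own illustrative choices.

```python
import numpy as np

def bilateral_1d(I, sigma_f=2.0, sigma_g=0.1):
    """Brute-force 1D bilateral filter: each output sample is a
    normalized average weighted by spatial distance (f) and signal
    difference (g), per Equations (1) and (2)."""
    n = len(I)
    out = np.empty(n)
    for p in range(n):
        q = np.arange(n)
        w = (np.exp(-((q - p)**2) / (2 * sigma_f**2)) *       # f: spatial weight
             np.exp(-((I[q] - I[p])**2) / (2 * sigma_g**2)))  # g: influence weight
        out[p] = np.dot(w, I) / w.sum()                       # 1/k(p) normalization
    return out

# A noisy step edge: the filter smooths the flats but keeps the jump.
x = np.linspace(0, 1, 100)
signal = (x > 0.5).astype(float) + 0.02 * np.random.randn(100)
print(bilateral_1d(signal)[:5])
```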
1.3 Contributions

In this paper, we propose a novel feature-preserving fairing technique for arbitrary surface meshes based on non-iterative, robust statistical estimations¹. Contrasting drastically with previous diffusion-based methods, our fast and stable approach relies on local robust estimations of shape. Moreover, our method does not require manifoldness of the input data, and can therefore be applied to triangle soup. One of our key insights is that feature-preserving smoothing can be seen as estimating a surface in the presence of outliers. The extension from existing robust statistics techniques to surface filtering is, however, far from trivial because of the nature of the data: in a mesh, the spatial location and the signal are one and the same. This makes the definition of outliers and the control of their influence challenging. We propose to capture the smoothness of a surface by defining local first-order predictors. Using a robust estimator, we find the new position of each vertex as a weighted sum of the predictions from its spatial neighborhood. We will show that our method treats points on opposite sides of a sharp feature as outliers relative to one another. This limits smoothing across corners, which preserves features.

¹ In a contemporaneous work, Fleishman et al. [2003] present a similar technique (cf. Section 4).

2 Mesh Filtering

We cast feature-preserving mesh filtering as a robust estimation problem on vertex positions. The estimate for a vertex is computed using the prediction from nearby triangles. Moving each vertex to a robust estimate of its position removes noise and smoothes the mesh while preserving features.
2.1 First-Order Predictors
To allow the proper definition of outliers, we must separate spatial location and signal. We capture surface smoothness using first-order predictors, i.e., tangent planes. In practice we use predictors based on triangles of the mesh, as they represent natural tangent planes to the surface. The surface predictor Π_q defined by a triangle q is just the tangent plane of q (see Fig. 3a). We use a method analogous to bilateral filtering for images [Tomasi and Manduchi 1998], but we form the estimate for the new position of a vertex p based on the predictions Π_q(p) from its spatially nearby triangles.
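As a concrete reading of the predictor, the sketch below projects a point onto a triangle's tangent plane; the helper names are ours.

```python
import numpy as np

def triangle_plane(v0, v1, v2):
    """Centroid and unit normal of triangle q: its tangent-plane predictor."""
    c = (v0 + v1 + v2) / 3.0
    n = np.cross(v1 - v0, v2 - v0)
    return c, n / np.linalg.norm(n)

def predict(p, c, n):
    """Pi_q(p): the projection of p onto the plane through c with normal n."""
    return p - np.dot(p - c, n) * n

c, n = triangle_plane(np.array([0., 0, 0]), np.array([1., 0, 0]), np.array([0., 1, 0]))
print(predict(np.array([0.2, 0.3, 0.5]), c, n))  # -> [0.2, 0.3, 0.0]
```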
Figure 3: (a) The prediction Π_q(p) for a point p based on the surface at q is the projection of p onto the plane tangent to the surface at q. Points across a sharp feature result in predictions that are farther away, and are therefore given less influence. (b) Noisy normals can lead to poor predictors. (c) Mollified normals alleviate this problem. Note that corners are preserved because points are not displaced by the mollification: only the normals are smoothed.

We employ a spatial weight f that depends on the distance ||p − c_q|| between p and the centroid c_q of q. We also use an influence weight g that depends on the distance ||Π_q(p) − p|| between the prediction and the original position of p. Finally, we weight by the area a_q of the triangles to account for variations in the sampling rate of the surface. The estimate p̂ for a point p on surface S is then:

p̂ = (1/k(p)) Σ_{q∈S} Π_q(p) a_q f(||c_q − p||) g(||Π_q(p) − p||),   (3)

where the normalization factor k(p) is the sum of the weights:

k(p) = Σ_{q∈S} a_q f(||c_q − p||) g(||Π_q(p) − p||).   (4)
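For reference, here is a direct (unoptimized) sketch of one filtering pass implementing Equations (3) and (4); it assumes face centroids, unit normals, and areas have been precomputed, and all function and variable names are ours.

```python
import numpy as np

def filter_vertices(verts, centroids, normals, areas, sigma_f, sigma_g):
    """One pass of the robust estimator: move every vertex to the
    weighted average of tangent-plane predictions (Equation (3)),
    normalized by k(p) (Equation (4))."""
    new_verts = np.empty_like(verts)
    for i, p in enumerate(verts):
        off = np.einsum('ij,ij->i', p - centroids, normals)  # signed distance to each plane
        pred = p - off[:, None] * normals                    # Pi_q(p): projections of p
        spatial = np.exp(-np.sum((centroids - p)**2, axis=1) / (2 * sigma_f**2))
        influence = np.exp(-off**2 / (2 * sigma_g**2))       # ||Pi_q(p) - p|| = |off|
        w = areas * spatial * influence                      # a_q f(.) g(.)
        new_verts[i] = (w[:, None] * pred).sum(axis=0) / w.sum()
    return new_verts
```

Note that, as in the text, all vertices are estimated from the original positions and then moved as a group.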
Gaussians are used both for the spatial weight f and for the influence weight g in this paper. Other robust influence weights could also be used, but Gaussians have performed well in our experiments, as well as in the work of others [Tomasi and Manduchi 1998; Durand and Dorsey 2002]. The amount of smoothing is controlled by the width σ_f of the spatial Gaussian and σ_g of the influence weight Gaussian. As can be seen in Fig. 3(a), predictions from across a sharp feature are given less weight because the distance between the prediction Π_q(p) and p is large, and is penalized by the influence weight g.

Filtering a mesh involves evaluating Equation (3) for every vertex and then moving them as a group to their estimated positions. Note that no connectivity is required beyond triangles: we simply use the Euclidean distance to the centroids of surrounding triangles to find the spatial neighborhood of a vertex. A wider spatial filter includes a larger number of neighbors in the estimate, and can therefore remove a greater amount of noise, or smooth larger features. The influence weight determines when the predictions of neighbors are considered outliers (by according them less weight), and thereby controls the size of features that are preserved in the filtering.

As shown by Black et al. [1998] and Durand and Dorsey [2002], the evaluation of Equation (3) corresponds to approximately minimizing

E(p) = Σ_{q∈S} ρ(||Π_q(p) − p||) a_q f(||c_q − p||),   (5)

where ρ is a robust error norm [Hampel et al. 1986] applied to the distance between a vertex and its predicted position. Robust error norms are bounded above by some maximum error, as discussed in Section 1.2. In our case, we seek to minimize a Gaussian error norm (see Fig. 2); the relation between ρ and g is g(x) ∝ ρ′(x)/x [Black et al. 1998; Durand and Dorsey 2002; Hampel et al. 1986].
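The relation g(x) ∝ ρ′(x)/x is easy to check numerically for the Gaussian norm; the norm's normalization constant below is our assumption.

```python
import numpy as np

sigma = 1.0
x = np.linspace(0.1, 3.0, 6)

rho = (sigma**2 / 2.0) * (1.0 - np.exp(-x**2 / sigma**2))  # Gaussian error norm
g = np.exp(-x**2 / sigma**2)                               # influence weight

# Numerical derivative of rho, divided by x, should reproduce g.
h = 1e-6
rho_prime = ((sigma**2 / 2) * (1 - np.exp(-(x + h)**2 / sigma**2)) - rho) / h
print(np.allclose(rho_prime / x, g, atol=1e-4))  # -> True
```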
2.2 Mollification

Our predictors are based on the orientation of the tangent planes, as defined by the facet normals. Since the normals are first-order properties of the mesh, they are more sensitive to noise than vertex positions (Fig. 3(b)). Even so, the robust estimator performs well; we can however significantly improve the estimate with mollification [Huber 1981; Murio 1993]. We mollify our estimators by smoothing the normals.

We first perform a pass of non-robust smoothing using Equation (3) without the influence weight, and with the simplest predictor, Π_q(p) = c_q, corresponding to simple Gaussian smoothing. We use a different width for the spatial filter during mollification, and in practice have always set this to σ_f/2. The normals of the mollified mesh are then copied to the facets of the original mesh before the robust filtering is performed. Notice that we do not alter the positions of the vertices at all: we only need to mollify the first-order properties (the normals), not the 0-order locations (see Fig. 3(c)).

Some normals might be improperly smoothed by mollification near corners. This is why it is important not to move the vertices during mollification, in order to preserve these features. Fig. 4 shows a comparison of filtering with and without mollification. Without mollification, the facet normals of the mesh are much noisier, resulting in less effective smoothing.
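A minimal sketch of the mollification pass as described: a non-robust Gaussian pass with the trivial predictor Π_q(p) = c_q and width σ_f/2, used only to recompute facet normals; helper names are ours.

```python
import numpy as np

def mollified_normals(verts, faces, centroids, areas, sigma_f):
    """Non-robust smoothing pass (Equation (3) with no influence weight
    and predictor Pi_q(p) = c_q, width sigma_f/2); the normals of the
    smoothed mesh are returned for use on the original facets."""
    s = sigma_f / 2.0
    smoothed = np.empty_like(verts)
    for i, p in enumerate(verts):
        w = areas * np.exp(-np.sum((centroids - p)**2, axis=1) / (2 * s**2))
        smoothed[i] = (w[:, None] * centroids).sum(axis=0) / w.sum()
    # Facet normals of the mollified mesh; vertex positions are unchanged.
    normals = np.cross(smoothed[faces[:, 1]] - smoothed[faces[:, 0]],
                       smoothed[faces[:, 2]] - smoothed[faces[:, 0]])
    return normals / np.linalg.norm(normals, axis=1, keepdims=True)
```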
2.3 Feature Preservation
The filtering method we have proposed preserves features through two combined actions. First is the use of a robust influence weight function, as discussed, while the second is our use of a predictor for vertex positions based on the tangent planes of the mesh. This predictor does not move vertices located at sharp features separating smooth areas of the mesh, since feature vertices are supported by the prediction from both sides (see the sketch below). Neither of these actions is sufficient alone (see the discussion of Fig. 7 below for examples of how the influence weight affects the filter), but together they provide excellent feature-preserving behavior.

Note the connection to bilateral filtering for images, which uses a prior of piecewise constant images. This is a special case of our formulation, corresponding to the predictor Π_q(p) = c_q. As well, the use of the existing mesh facets helps to simplify our formulation and its implementation, as they provide direct estimates for surface tangents.
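The two-sided support is easy to see numerically. In this small construction of ours, a vertex on a 90-degree edge lies on the tangent planes of faces on both sides, so every prediction returns the vertex itself and the corner does not move:

```python
import numpy as np

def predict(p, c, n):
    return p - np.dot(p - c, n) * n  # Pi_q(p), as in Section 2.1

p = np.array([0.0, 0.0, 0.0])  # vertex on a sharp 90-degree edge
# One adjacent face lies in the z = 0 plane, the other in the x = 0 plane.
preds = [predict(p, np.array([0.5, 0.5, 0.0]), np.array([0.0, 0.0, 1.0])),
         predict(p, np.array([0.0, 0.5, -0.5]), np.array([1.0, 0.0, 0.0]))]
print(preds)  # both predictions equal p itself -> the corner vertex stays put
```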
In essence, our technique also relates to ENO/WENO methods [Osher and Fedkiw 2002], a class of finite-difference-based, shock-capturing numerical techniques for hyperbolic PDE integration. In a nutshell, they seek to avoid dissipation of shocks, the equivalent of sharp features in our geometric setting. They base their local evaluation of differential quantities only on the local neighbors of similar field value. For example, the evaluation of a second derivative at a shock is not centered, as this would use information on both sides; in contrast, the evaluation is one-sided to prevent numerical dissipation. Robust statistics offers a principled framework to extend similar key concepts to geometry.

Figure 4: Isotropic filtering vs. our method. Notice that details such as the upper and lower lids and the eye are better preserved, while flat regions are equivalently smoothed.
3 Results
We demonstrate our results in Figs. 1 and 4-8. In each case, we use Gaussians for the spatial (f), influence weight (g), and mollification filters, with standard deviations of σ_f, σ_g, and σ_f/2, respectively. This choice for g corresponds to a Gaussian error norm. All meshes are rendered with flat shading to show faceting. Table 1 summarizes our results and the parameters used to generate them, given in terms of the mean edge length (||e||) of the particular mesh. The cost of each evaluation of Equation (3) depends on the number of facets that lie within the support of f, so the time to filter a mesh grows approximately as the size of the mesh times σ_f².

Fig. 4 shows a portion of a mesh from a 3D scan of a head. We show the original mesh, the result of isotropic smoothing by Desbrun et al. [1999], and our technique. We present this comparison to demonstrate the effectiveness of our approach for smoothing, even on noisy meshes with topological errors.

A comparison to the Wiener filtering approach of Peng et al. [2001] is shown in Fig. 6. The parameters for our method were chosen to visually match the smoothness in flat areas. Our method preserves features better for larger amounts of smoothing (compare 6(d) and 6(e)). Also, as noted previously, Wiener filtering requires resampling the input to a semi-regular mesh, and only operates on surfaces with manifold topology, while our method can be applied more generally, to non-regular and disconnected meshes. We estimate that their implementation would take about 15 seconds to filter this mesh on our machine, in comparison to 60 seconds (or more, depending on the smoothing parameters) for our technique.

In Fig. 10 we compare our method to anisotropic diffusion smoothing by Clarenz et al. [2000]. The original, noisy mesh is smoothed and smaller features removed by four iterations of diffusion. We have chosen the parameters of our method to match the result as closely as possible.
Model     Fig.    Verts.   Time    σ_f/||e||    σ_g/||e||
Dragon    1       100k     80 s    4   (14)     1   (4)
Head      4(d)    41k      16 s    1.5 (9.2)    0.4 (2.4)
Head      4(c)    41k      10 s    1.5 (0.9)    0.5 (0.3)
Face      6(c)    195k     82 s    2.7 (6.6)    0.4 (0.9)
Face      6(e)    195k     132 s   4   (9.9)    1.3 (3.3)
Bunny     7(b)    35k      11 s    2   (12)     0.2 (1.2)
Bunny     7(c)    35k      12 s    2   (1.2)    4   (24)
Bunny     7(d)    35k      23 s    4   (24)     4   (24)
Dog       10      134k     54 s    2.5 (8.1)    1   (3.3)
Dragon    8       100k     79 s    4   (14)     2   (7)

Table 1: Results on a 1.4 GHz Athlon with 2 GB of RAM. Times do not include the time to load meshes. The σ's are expressed as ratios of the mean edge length ||e||; the numbers in parentheses are in thousandths of the bounding box diagonal for the particular meshes.
Figure 5: (a) Values of σ_f and σ_g that yield the most accurate denoising for a mesh corrupted with Gaussian noise, as a function of the variance of the noise. (b) Evolution of the error for a given noise level as a function of σ_f and σ_g. All values are in terms of the mean edge length.

One benefit of anisotropic diffusion is the ability to iteratively enhance edges and corners. Our method is not able to perform such enhancement in a single pass, resulting in a slightly different overall appearance, particularly in the hair, and slightly more noise around edges in the model. We also show a false-color plot of the confidence measure k in Fig. 10.
Figure 6: Comparison of our method and Wiener filtering. Parameters for Wiener filtering from [Peng et al. 2001]; parameters for our method chosen to approximately match surface smoothness in flat areas. (Original and Wiener-filtered meshes courtesy of Jianbo Peng.)
Figure 7: The effect of varying the spatial and influence weight functions: noise removal (narrow spatial, narrow influence: σ_f = 2, σ_g = 0.2); smoothing of small features (narrow spatial, wide influence: σ_f = 2, σ_g = 4); smoothing of large features (wide spatial, wide influence: σ_f = 4, σ_g = 4). Filter widths σ_f, σ_g given in terms of the mean edge length in the mesh. (Mesh from the Stanford University Computer Graphics Laboratory 3D scanning repository.)

As can be seen, smoother areas (such as the cheek) and features bordered by smooth areas (such as the edges of the nose) have higher confidence, while curved areas or those near more complicated features have lower confidence. See also Fig. 8.

We show the effects of varying spatial and influence weight function widths in Fig. 7. For a wide spatial filter but narrow influence weight, the mesh is smoothed only in mostly flat areas. In the converse case, a narrow spatial filter and wide influence weight, small features are smoothed away but larger variations are kept. Finally, for a wide spatial filter and wide influence weight, only the largest and strongest features are preserved. See Fig. 8 for a similar example of our method used to remove all but the most salient features of a mesh.

In order to facilitate denoising with our approach, we have performed experiments to find good values for σ_f and σ_g to smooth a model corrupted with a given amount of noise, such as might be produced by a scanner. If the amount of noise can be quantified, by examining an area on the model known to be flat, for example, then the plot in Fig. 5(a) shows the optimal values for σ_f, σ_g from our experiments. These values have been found effective on several models. The surface plot in Fig. 5(b) shows how, for a particular (representative) noise level, the post-filtering error changes. As can be seen, the error is most sensitive to σ_f. We compute the error as the L² distance between the original mesh before corruption and the filtered mesh [Khodakovsky et al. 2000]. In other applications, our general approach has been to increase σ_f and σ_g together until the filtered mesh is sufficiently smooth for our goals. We then decrease σ_g until features or noise that we are trying to remove begin to reappear.

All of our results demonstrate the effectiveness of our technique at feature preservation, due to the combination of a robust influence weight function and a first-order predictor, as discussed in Section 2.3. In particular, the tips of the ears of the bunny are preserved, as are the head and extremities of the dragon. See also Fig. 9, part of a scan of an origami piece.

We apply our filtering method to a mesh corrupted with synthetic noise in Fig. 1. In the noisy mesh, each vertex is displaced along the normal by zero-mean Gaussian noise with σ_noise = 1/5 of the mean edge length. We filter the dragon mesh to recover an estimate of the original shape. For comparison, in the scanned mesh of Fig. 4 we estimate σ_noise ≈ 1/7. These results show the ability of our method to smooth even in the presence of extreme amounts of noise. Fig. 4 also indicates an area where our algorithm could be improved: where a feature and noise coincide (e.g. in the nose), it is difficult to correctly separate the two. In Fig. 1, we have aimed for a smoother reconstruction, but lose some details in the process.

We have applied two basic optimizations to our implementation, sketched below. We truncate the spatial filter at 2σ_f to limit the number of estimates that must be considered per vertex; this does not noticeably affect the results. We also group vertices and facets spatially for processing, to improve locality of reference.
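A sketch of how such spatial grouping might look, assuming a uniform grid with cell size 2σ_f so that each vertex gathers candidate facets only from adjacent cells; the structure and names are ours, not the paper's implementation.

```python
import numpy as np
from collections import defaultdict

def build_grid(centroids, cell):
    """Hash face centroids into a uniform grid with the given cell size."""
    grid = defaultdict(list)
    for i, c in enumerate(centroids):
        grid[tuple((c // cell).astype(int))].append(i)
    return grid

def nearby_faces(p, grid, cell):
    """Candidate faces near p: the 27 surrounding cells. With
    cell = 2*sigma_f, this conservatively covers the truncated filter."""
    base = (p // cell).astype(int)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                out.extend(grid.get((base[0]+dx, base[1]+dy, base[2]+dz), []))
    return out
```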
As presented, our method is not necessarily volume preserving. We have not encountered a mesh where this is an issue, and adjusting the mesh after filtering to preserve its volume is a straightforward extension [Desbrun et al. 1999].
4 Conclusions

We have developed a novel, fast feature-preserving surface smoothing and denoising technique, applicable to triangle soups. We use a first-order predictor to capture the local shape of smooth objects. This decouples the signal and the spatial location in a surface, and allows us to use robust statistics as a solid base for feature preservation. A robust error norm is applied to the predictors and the vertex positions, from which we derive an efficient and stable one-step smoothing operator. Mollification is used to better capture shape and to obtain more reliable detection of outliers and features. We have demonstrated our algorithm on several models, for both noise removal and mesh smoothing.

Contemporaneous with this work, Fleishman et al. [2003] have proposed an extension of bilateral filtering to meshes with similar goals as this work, but with a different approach. The main contrasts are that vertex normals are computed from the mesh to perform local projections, after which a vertex's updated position is computed as the bilateral filter of its neighborhood treated as a height field. In rough terms, their method is faster but requires a connected mesh. The speed increase is due to two factors: they do not mollify normals, and the density of triangles is roughly half that of vertices in a mesh. They require connectivity to estimate normals. They also apply their filter iteratively, while we have concentrated on a single-pass technique. There is also a fundamental difference in how the two methods form predictions for a vertex's filtered position. Our method projects the central vertex to the planes of nearby triangles, while that of Fleishman et al. projects nearby vertices to the plane of the central vertex. The relative costs and benefits of these two approaches merit further study.

There are several avenues for improvement of our method. The normalization factor k in Equation (3) is the sum of weights applied to the individual estimates from a point's neighborhood (see Fig. 10). It therefore provides a measure of the confidence that should be attached to the estimate of the point's new position, as noted by Durand and Dorsey [2002]. We have not made use of the confidence measure k in this work, but feel that it could be a valuable tool in future approaches. In particular, we believe that it could be used to detect areas where a good estimate could not be formed, as on the sharp features in Fig. 1. Such areas could be marked for special processing, perhaps by iterative filtering.

In our experience, the O(σ_f²) growth rate of our algorithm has not been a limiting factor. If it were to become so, a promising approach is to subsample the mesh by simplifying it with some fast method, and then filter the original mesh vertices based on the simplified version. Our method should also extend easily to out-of-core evaluation, since it does not require connectivity information and since the computations are spatially local. This would allow our method to be applied to extremely large models.

Finally, the extension of robust statistics to meshes suggests other possibilities for their application. The influence weight could include other data on the mesh, such as color. It should also be straightforward to extend our filter to other shape representations, such as volume data or point-sampled models [Zwicker et al. 2002]. In the latter case, where each sample includes a normal, the methods transfer directly, as should the results. We also plan to explore how robust statistics could be added to existing techniques for surface approximation, such as Moving Least Squares [Levin 2001], to improve their robustness and sensitivity to noise.

Figure 8: Original and smoothed dragon, and the confidence k for the smoothed dragon. Note that sharp features are preserved while other details are removed. (Mesh from the Stanford University Computer Graphics Laboratory 3D scanning repository.)

Acknowledgments

Special thanks to Udo Diewald, Martin Rumpf, Jianbo Peng, and Denis Zorin for providing us with their meshes and results, to Erik Demaine for the origami, to the Stanford 3D Scanning Repository, and to the SigDraft and MIT pre-reviewers for their feedback. This work was funded in part by the NSF (CCR-0133983, DMS-0221666, DMS-0221669, EEC-9529152).
References
Alexa, M. 2002. Wiener Filtering of Meshes. In Proceedings of Shape Modeling International, 51-57.

Bajaj, C., and Xu, G. 2003. Anisotropic Diffusion on Surfaces and Functions on Surfaces. ACM Trans. Gr. 22, 1, 4-32.

Barash, D. 2001. A Fundamental Relationship between Bilateral Filtering, Adaptive Smoothing and the Nonlinear Diffusion Equation. IEEE PAMI 24, 6, 844.

Belyaev, A., and Ohtake, Y. 2001. Nonlinear Diffusion of Normals for Crease Enhancement. In Vision Geometry X, SPIE Annual Meeting, 42-47.
Figure 9: On this 3D scan of an origami sculpture, our method preserves corners and edges while removing scanner noise.
Figure 10: Comparison of our method with anisotropic diffusion (four iterations) [Clarenz et al. 2000]; left to right: input mesh, anisotropic diffusion, our method, confidence k. Parameters for our method chosen to match as closely as possible. In the confidence plot, dark blue is high confidence, green and yellow lower confidence, and red least confidence. (Original and anisotropically smoothed mesh courtesy of Martin Rumpf.)
Black, M., Sapiro, G., Marimont, D., and Heeger, D. 1998. Robust Anisotropic Diffusion. IEEE Trans. Image Processing 7, 3, 421-432.

Clarenz, U., Diewald, U., and Rumpf, M. 2000. Anisotropic Geometric Diffusion in Surface Processing. In IEEE Visualization 2000, 397-405.

Desbrun, M., Meyer, M., Schröder, P., and Barr, A. H. 1999. Implicit Fairing of Irregular Meshes Using Diffusion and Curvature Flow. In Proceedings of SIGGRAPH 99, 317-324.

Desbrun, M., Meyer, M., Schröder, P., and Barr, A. H. 2000. Anisotropic Feature-Preserving Denoising of Height Fields and Bivariate Data. In Graphics Interface, 145-152.

Durand, F., and Dorsey, J. 2002. Fast Bilateral Filtering for the Display of High-Dynamic-Range Images. ACM Trans. Gr. 21, 3, 257-266.

Elad, M. 2002. On the Bilateral Filter and Ways to Improve It. IEEE Trans. on Image Processing 11, 10, 1141-1151.

Fleishman, S., Drori, I., and Cohen-Or, D. 2003. Bilateral Mesh Denoising. ACM Trans. Gr. (Proceedings of ACM SIGGRAPH).

Guskov, I., and Wood, Z. 2001. Topological Noise Removal. In Graphics Interface 2001, 19-26.

Hampel, F. R., Ronchetti, E. M., Rousseeuw, P. J., and Stahel, W. A. 1986. Robust Statistics: The Approach Based on Influence Functions. John Wiley and Sons. ISBN 0471-632384.

Huber, P. J. 1981. Robust Statistics. John Wiley and Sons.

Khodakovsky, A., Schröder, P., and Sweldens, W. 2000. Progressive Geometry Compression. In Proceedings of ACM SIGGRAPH 2000, 271-278.

Levin, D. 2001. Mesh-independent Surface Interpolation. In Advances in Computational Mathematics, in press.

Levoy, M., Pulli, K., Curless, B., Rusinkiewicz, S., Koller, D., Pereira, L., Ginzton, M., Anderson, S., Davis, J., Ginsberg, J., Shade, J., and Fulk, D. 2000. The Digital Michelangelo Project: 3D Scanning of Large Statues. In Proceedings of SIGGRAPH 2000, 131-144.

Meyer, M., Desbrun, M., Schröder, P., and Barr, A. H. 2002. Discrete Differential-Geometry Operators for Triangulated 2-Manifolds. In Proceedings of Visualization and Mathematics.

Murio, D. A. 1993. The Mollification Method and the Numerical Solution of Ill-Posed Problems. Wiley.

Ohtake, Y., Belyaev, A., and Bogaevski, I. 2000. Polyhedral Surface Smoothing with Simultaneous Mesh Regularization. In Geometric Modeling and Processing, 229-237.

Ohtake, Y., Belyaev, A., and Seidel, H.-P. 2002. Mesh Smoothing by Adaptive and Anisotropic Gaussian Filter. In Vision, Modeling and Visualization, 203-210.

Osher, S., and Fedkiw, R. P. 2002. Level Set Methods and Dynamic Implicit Surfaces. Springer-Verlag, NY.

Pauly, M., and Gross, M. 2001. Spectral Processing of Point-Sampled Geometry. In Proceedings of ACM SIGGRAPH 2001, 379-386.

Peng, J., Strela, V., and Zorin, D. 2001. A Simple Algorithm for Surface Denoising. In Proceedings of IEEE Visualization 2001, 107-112.

Perona, P., and Malik, J. 1990. Scale-Space and Edge Detection Using Anisotropic Diffusion. IEEE PAMI 12, 7, 629-639.

Rusinkiewicz, S., Hall-Holt, O., and Levoy, M. 2002. Real-Time 3D Model Acquisition. ACM Trans. Gr. 21, 3, 438-446.

Tasdizen, T., Whitaker, R., Burchard, P., and Osher, S. 2002. Geometric Surface Smoothing via Anisotropic Diffusion of Normals. In Proceedings, IEEE Visualization 2002, 125-132.

Taubin, G. 1995. A Signal Processing Approach to Fair Surface Design. In Proceedings of SIGGRAPH 95, 351-358.

Taubin, G. 2001. Linear Anisotropic Mesh Filtering. Tech. Rep. RC2213, IBM Research.

Tomasi, C., and Manduchi, R. 1998. Bilateral Filtering for Gray and Color Images. In Proc. IEEE Int. Conf. on Computer Vision, 836-846.
Wood, Z., Hoppe, H., Desbrun, M., and Schröder, P. 2002. Isosurface Topology Simplification. http://www.multires.caltech.edu/pubs/.

Zhang, H., and Fiume, E. L. 2002. Mesh Smoothing with Shape or Feature Preservation. In Advances in Modeling, Animation, and Rendering, J. Vince and R. Earnshaw, editors, 167-182.

Zwicker, M., Pauly, M., Knoll, O., and Gross, M. 2002. Pointshop 3D: An Interactive System for Point-Based Surface Editing. ACM Trans. Gr. 21, 3, 322-329.