\chapter{Conclusion}
\label{ch:Conclusion}
We designed and implemented a new data structure for embedding a meta mesh $\mathcal{M}$ into a base mesh $\mathcal{B}$ via an embedding $\Phi(\mathcal{M})$. This embedding is an injection from $\mathcal{M}$ to $\mathcal{B}$ and comes with functions to manipulate both while preserving the embedding. We implemented a series of atomic operations that allow arbitrary manipulation of $\Phi(\mathcal{M})$ while staying in a regular mesh context. Some operations require $\mathcal{B}$ to be changed as well, but all changes to $\mathcal{B}$ are tentative and are reverted as soon as $\Phi(\mathcal{M})$ allows it.
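To give a feel for the shape of this interface, the following is a minimal sketch; all names are hypothetical placeholders, not taken from our actual implementation:
\begin{verbatim}
// Hypothetical sketch of the data structure's interface; all names
// are illustrative placeholders, not our actual implementation.
class EmbeddedMesh {
public:
    // Atomic operations on the embedding Phi(M). Each preserves the
    // injection from M into B, refining B tentatively where needed
    // and reverting those refinements as soon as Phi(M) allows it.
    MetaHalfedge split(MetaEdge e);        // split an embedded meta edge
    MetaVertex   collapse(MetaHalfedge h); // collapse onto its target
    void         flip(MetaEdge e);         // flip between two meta faces
    void         relocate(MetaVertex v, BaseVertex target); // smoothing

private:
    BaseMesh  base; // B: carries the actual geometry
    MetaMesh  meta; // M: the abstract connectivity
    Embedding phi;  // Phi(M): maps meta elements onto paths in B
};
\end{verbatim}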
Further, we implemented Embedded Isotropic Remeshing on our data structure as a proof of concept for algorithms working on the embedding, as well as a means of initializing a good embedding. Our implementation of Embedded Isotropic Remeshing is functional and stable, but would require further fine-tuning to scale to large meshes. Note that this is most likely a limitation of our implementation of the algorithm, not of the underlying data structure itself, as our atomic mesh operations are all local.

Since Embedded Isotropic Remeshing is a newly designed variant of the Incremental Isotropic Remeshing algorithm, tailored to our data structure, we tested a series of different parameters over a variety of metrics to find which ones work best. This gives us insight into the performance of the algorithm as well as a set of healthy default parameters.

Taking a step back, our data structure is best suited to applications where a base mesh $\mathcal{B}$ is given and an abstracted, higher-level view (the meta mesh $\mathcal{M}$) is desired. Our mesh embeddings $\Phi(\mathcal{M})$ are easy to initialize and maintain, and they interface with algorithms the same way normal meshes do, through the atomic operations we defined. This gives our data structure a high degree of flexibility and ease of use for a broad range of mesh embedding or even surface-to-surface mapping applications.
In the future, our data structure could certainly be expanded upon in a few ways:
\begin{itemize}
\item Edge tracing\footnote{See Section \ref{subsec:RestrictedPathTracing}: Restricted Path Tracing.} could potentially be improved by considering methods such as those proposed in \cite{kraevoy2004cross} or \cite{bischoff2005snakes}, in order to trace through arbitrary meta faces without having to refine the underlying base mesh $\mathcal{B}$.
\begin{itemize}
\item Alternatively, an even more refined tracing method could proactively split exactly those edges of $E^\mathcal{B}$ that need to be split, doing away with pre-processing and speeding up the implementation.
\end{itemize}
\item Initialization of embeddings $\Phi(\mathcal{M})$ currently has a few limitations; for example each Voronoi region needs to be of disc topology. These limitations can currently be circumvented by simply initializing a larger $\Phi(\mathcal{M})$ and then decimating it until the desired $\Phi(\mathcal{M})$ is reached. A more elegant solution would be an expanded initialization method that can handle such edge cases.
\item Having multiple base meshes $\mathcal{B}_i$ is currently not supported, but would certainly be a possibility. Finding embeddings $\Phi_i(\mathcal{M})$ to embed one meta mesh $\mathcal{M}$ into many base meshes $\mathcal{B}_i$ should be possible through a simple expansion of our data structure; in principle it could already be done by running multiple embeddings with matching $\mathcal{M}$. This would extend the functionality of our data structure in the domain of surface-to-surface maps.
\end{itemize}
That's it. Happy remeshing :-)
\begin{figure}[hb]
\begin{center}
\includegraphics[width=0.77\textwidth]{img/FertilityMetaMesh2.png}
\end{center}
\end{figure}
\chapter{Evaluation}
\label{ch:Evaluation}
In Chapter \ref{ch:EmbeddedIsotropicRemeshing} we introduced Embedded Isotropic Remeshing, along with various options for several parts of the algorithm. Since it is not obvious which options are preferable, we ran a series of tests in order to compare those options based on certain criteria. For each iteration of Embedded Isotropic Remeshing we capture screenshots after every step and record the following data points:
\begin{itemize}
\item The iteration number.
\item The time this iteration took (in ms).
\item The number of faces $|F^\mathcal{B}|$ in the base mesh $\mathcal{B}$.
\item The number of faces $|F^\mathcal{M}|$ in the meta mesh $\mathcal{M}$.
\item The ratio of base faces per meta face $|F^\mathcal{B}|/|F^\mathcal{M}|$.
\item The average valence of meta vertices $V^\mathcal{M}$.
\item The standard deviation of the meta vertex valences from 6 (a sketch of how the valence metrics can be computed follows this list).\footnote{A regular mesh should have an average valence of 6, and our Embedded Isotropic Remeshing algorithm tries to minimize the average quadratic distance from 6 for the meta vertices $V^\mathcal{M}$.}
\item The average edge length.
\item The standard deviation from the average edge length.
\end{itemize}
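As an illustration of the valence metrics referenced above, the following generic sketch computes the average valence and the standard deviation from 6; it assumes an OpenMesh-style vertex interface and is not our exact logging code.
\begin{verbatim}
// Generic sketch, assuming an OpenMesh-style API; not our logging code.
#include <cmath>
#include <cstddef>
#include <utility>

template <typename Mesh>
std::pair<double, double> valence_stats(const Mesh& meta) {
    double sum = 0.0, sq_dev = 0.0;
    std::size_t n = 0;
    for (auto vh : meta.vertices()) {
        double val = static_cast<double>(meta.valence(vh));
        sum    += val;
        sq_dev += (val - 6.0) * (val - 6.0); // deviation from regular valence 6
        ++n;
    }
    return { sum / n,                  // average valence
             std::sqrt(sq_dev / n) };  // standard deviation from 6
}
\end{verbatim}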
This extensive dataset gives us a broad overview of the performance of the different approaches. In the following we focus on those metrics for which the approaches produce measurably different data.
\input{ch/Evaluation/TestData.tex}
\input{ch/Evaluation/Evaluation.tex}
\section{Evaluation}
\label{sec:Evaluation}
The data from Section \ref{sec:TestData} allows us to draw some conclusions in regards to our Embedded Isotropic Remeshing algorithm.
\begin{itemize}
\item Our implementation is still lacking in scalability. This is clearly apparent when comparing the elapsed time between meshes with 1,000 and 10,000 vertices. A mesh with 1,000 vertices averages around 5 minutes for 100 iterations, whereas a mesh with 10,000 vertices takes many hours in the average case and had to be cancelled without terminating in the worst case. This non-linear time scaling is caused by global operations run in each iteration of Embedded Isotropic Remeshing, putting our implementation in $O(n^2)$.
Without proof, we believe it should be possible to improve the implementation to run in $O(n\log{}n)$ by making as many operations as possible local instead of global. Throughout development we already improved the performance of Embedded Isotropic Remeshing by orders of magnitude several times by localizing operations.
\item Using heuristics to decide the order in which edges should be collapsed clearly leads to better results than choosing randomly. The heuristic we came up with is somewhat arbitrary and based on intuition; it could clearly be improved to reduce the number of worst cases.
\item The default slack variables $\alpha=\frac{4}{3}, \beta=\frac{4}{5}$ are too strict for curved edges. Better slack variables are closer to $\alpha=2, \beta=\frac{1}{2}$ and could be approximated empirically or derived mathematically. Another idea worth pursuing would be adaptively loosening the slack variables throughout a run to force convergence (a possible scheme is sketched after this list).
\item Limiting flips seems pointless, at least in the context we tested it in.
\item Out of the smoothing types we presented, Vertex Weight Smoothing is preferable by all of our metrics.
\end{itemize}
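The adaptive loosening mentioned above could, purely as an illustration, follow a linear schedule between the default and the loosened values; this is an untested sketch, not something we implemented:
\begin{verbatim}
// Untested sketch: linearly widen the slack interval [beta*T, alpha*T]
// over the course of a run, so that later iterations accept more edge
// lengths and splits/collapses taper off, forcing convergence.
struct Slack { double alpha, beta; };

Slack adaptive_slack(int iteration, int max_iterations) {
    double t = static_cast<double>(iteration) / max_iterations; // 0 -> 1
    return { 4.0 / 3.0 + t * (2.0 - 4.0 / 3.0),   // alpha: 4/3 -> 2
             4.0 / 5.0 - t * (4.0 / 5.0 - 0.5) }; // beta:  4/5 -> 1/2
}
\end{verbatim}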
The default values for many parameters of our Embedded Isotropic Remeshing algorithm were suboptimal throughout the testing period; they have now been updated in light of this new information.
\section{Test Data}
\label{sec:TestData}
We performed tests for four different sets of options:
\begin{enumerate}
\item \textbf{Heuristic vs. Random Collapses}: The order of collapses, as presented in Section \ref{sec:CollapseOrder}, can be decided in different ways. We compare randomized collapses with heuristically determined collapses in Section \ref{subsec:HeuristicVsRandomCollapses}.
\item \textbf{Slack Variables}: Embedded Isotropic Remeshing relies on slack variables $\alpha$ and $\beta$ to determine whether an embedded edge $e^{\Phi(\mathcal{M})}_i\in E^{\Phi(\mathcal{M})}$ should be collapsed or split. Section \ref{subsec:SlackVariables} presents tests performed with different values for $\alpha$ and $\beta$.
\item \textbf{Limiting Flips}: In Section \ref{sec:ImplementationDetails} we saw that Embedded Isotropic Remeshing sometimes creates ugly artifacts on $\mathcal{B}$ and spiralization of the edges of $\Phi(\mathcal{M})$. One possible approach to reducing spiralization is disallowing flips that would increase the flipped edge's length. Section \ref{subsec:LimitingFlips} compares a series of tests with unlimited flips vs. a series of tests with limited flips.
\item \textbf{Smoothing Types}: We presented different types of smoothing in Section \ref{sec:Smoothing}. In Section \ref{subsec:SmoothingTypes} we test and compare them.
\end{enumerate}
\input{ch/Evaluation/TestData/HeuristicVsRandomCollapses}
\input{ch/Evaluation/TestData/SlackVariables}
\input{ch/Evaluation/TestData/LimitingFlips}
\input{ch/Evaluation/TestData/SmoothingTypes}
\subsection{Heuristic vs. Random Collapses}
\label{subsec:HeuristicVsRandomCollapses}
The tests in this section were performed by running 100 iterations of Embedded Isotropic Remeshing with a target edge length of 0.2 on the ``fertility'' mesh with 10,000 vertices and the \textit{VertexDistance} smoothing type. The only difference between these test runs is the way in which collapse order was decided, specifically:
\begin{itemize}
\item \textbf{Random Collapses}: We shuffled the array of edges before each iteration of the algorithm.
\item \textbf{Heuristic 1}: We used the heuristic presented in Section \ref{sec:CollapseOrder} in Equation \ref{eq:CollapseHeuristic}:
\begin{equation}\nonumber
\textsc{ch}(h^{\mathcal{M}}_x) := (|v^{\mathcal{M}}_A|-4)\cdot w_{\mathcal{M}}(h^{\mathcal{M}}_x) - w_{\mathcal{M}}(h^{\mathcal{M}}_p) - w_{\mathcal{M}}(h^{\mathcal{M}}_{on})
\end{equation}
\item \textbf{Heuristic 2}: We slightly modified the above heuristic by adding a factor of $\frac{1}{2}$ before it:
\begin{equation}\nonumber
\textsc{ch}(h^{\mathcal{M}}_x) := \frac{1}{2}(|v^{\mathcal{M}}_A|-4)\cdot w_{\mathcal{M}}(h^{\mathcal{M}}_x) - w_{\mathcal{M}}(h^{\mathcal{M}}_p) - w_{\mathcal{M}}(h^{\mathcal{M}}_{on})
\end{equation}
\end{itemize}
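To make the difference between the two heuristics concrete, the sketch below evaluates both with a single factor; the accessors \texttt{from\_valence}, \texttt{w}, \texttt{prev} and \texttt{opposite\_next} are placeholders standing in for the quantities defined in Section \ref{sec:CollapseOrder}.
\begin{verbatim}
// Sketch only: from_valence(), w(), prev() and opposite_next() are
// placeholders for the quantities defined in Section CollapseOrder;
// factor = 1.0 gives Heuristic 1, factor = 0.5 gives Heuristic 2.
double collapse_heuristic(const MetaHalfedge& h, double factor) {
    return factor * (from_valence(h) - 4.0) * w(h)
           - w(prev(h))
           - w(opposite_next(h));
}
\end{verbatim}
Collapse candidates would then be ordered by this score before the \textsc{Collapses} step.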
In the following we first have a look at random collapses.
\vspace{-15pt}
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\captionsetup{type=figure}
\includegraphics[width=0.95\textwidth]{img/Random-iteration-time_ms.pdf}
\end{centering}
\vspace{-15pt}
%\captionof{figure}{Time / iteration.}
%\label{fig:RandomTime}
\end{minipage}\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\captionsetup{type=figure}
\includegraphics[width=0.95\textwidth]{img/Random-iteration-basefaces.pdf}
\end{centering}
\vspace{-15pt}
%\captionof{figure}{$|F^\mathcal{B}|$ / iteration.}
%\label{fig:RandomBaseFaces}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\captionsetup{type=figure}
\includegraphics[width=0.95\textwidth]{img/Random-iteration-metafaces.pdf}
\end{centering}
\vspace{-15pt}
%\captionof{figure}{$|F^\mathcal{M}|$ / iteration.}
%\label{fig:RandomMetaFaces}
\end{minipage}\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\captionsetup{type=figure}
\includegraphics[width=0.95\textwidth]{img/Random-iteration-basefaces_metafaces.pdf}
\end{centering}
\vspace{-15pt}
%\captionof{figure}{$|F^\mathcal{B}|/|F^\mathcal{M}|$ / iteration.}
%\label{fig:RandomBaseMetaFaces}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\includegraphics[width=0.95\textwidth]{img/Random-iteration-edgelength_avg.pdf}
\end{centering}
%\caption{Vertex Weight Smoothing: concept}
%\label{fig:vertexweights1}
\end{minipage}\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\includegraphics[width=0.95\textwidth]{img/Random-iteration-edgelength_sd.pdf}
\end{centering}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\includegraphics[width=0.95\textwidth]{img/Random-iteration-valence_avg.pdf}
\end{centering}
%\caption{Vertex Weight Smoothing: concept}
%\label{fig:vertexweights1}
\end{minipage}\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\includegraphics[width=0.95\textwidth]{img/Random-iteration-valence_sd.pdf}
\end{centering}
\end{minipage}
The previous page shows all the data we collect for each random run of our Embedded Isotropic Remeshing algorithm. Some of these random runs started taking too long and had to be terminated before 100 iterations could be completed. This can be seen in the plots, and is especially pronounced when looking at the elapsed time and the number of base faces per iteration. The general pattern is an initial increase in complexity as the base mesh $\mathcal{B}$ is split up and gains faces, followed by a steady decrease as the meta mesh is decimated and superfluous non-original base edges are collapsed.
However, stability is not guaranteed. When a mesh with an initial 10,000 faces explodes to over 1,000,000 faces, and time increases accordingly, this showcases problems in the underlying algorithm. Random collapses can work and terminate properly, but the worst case creates too many splits on $\mathcal{B}$ through spiralization and similar effects as shown in Section \ref{sec:ImplementationDetails}.
Excluding the worst cases, the ratio between base faces and meta faces $\frac{|F^\mathcal{B}|}{|F^\mathcal{M}|}$ oscillates slightly after about 20 iterations, as does the edge length. There is no convergence for two reasons:
\begin{itemize}
\item The slack variables $\alpha$ and $\beta$ may be set too strictly; widening them stops many collapses and splits from happening. Tests regarding this can be found in Section \ref{subsec:SlackVariables}.
\item Our algorithm for flips optimizes variance \textit{locally}, which means it may need many iterations to converge to a minimum (a sketch of such a local criterion follows this list). Implementing a \textit{global} flipping algorithm should decrease oscillation, but at a higher computational cost.
\end{itemize}
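For reference, a local valence-based flip criterion in the usual isotropic-remeshing style looks like the following sketch; whether an edge flips depends only on the four vertices around it, which is why many iterations may be needed to reach a minimum.
\begin{verbatim}
// Standard local criterion (sketch): flip only if the squared valence
// deviation from 6, summed over the four vertices around the edge,
// strictly decreases.
bool flip_improves_valence(int va, int vb, int vc, int vd) {
    // va, vb: valences of the edge endpoints (each loses one edge)
    // vc, vd: valences of the opposite vertices (each gains one edge)
    auto dev = [](int v) { return (v - 6) * (v - 6); };
    int before = dev(va) + dev(vb) + dev(vc) + dev(vd);
    int after  = dev(va - 1) + dev(vb - 1) + dev(vc + 1) + dev(vd + 1);
    return after < before;
}
\end{verbatim}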
\vspace{-15pt}
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\captionsetup{type=figure}
\includegraphics[width=0.95\textwidth]{img/COAverages-iteration-time_ms.pdf}
\end{centering}
\captionof{figure}{Time / Iteration.}
\label{fig:COAveragesTime}
\end{minipage}\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\captionsetup{type=figure}
\includegraphics[width=0.95\textwidth]{img/COAverages-iteration-basefaces.pdf}
\end{centering}
\captionof{figure}{$|F^\mathcal{B}|$ / Iteration.}
\label{fig:COAveragesBaseFaces}
\end{minipage}
\vspace{10pt}
As randomness and especially outliers introduce a lot of noise into our data, we average it over the 10 runs shown on the left, and compare it to averaged data of several runs using Heuristic 1, introduced in Equation \ref{eq:CollapseHeuristic}, which guides collapse order. To get an additional dataset to compare against, we modify Heuristic 1 slightly and call the modified version Heuristic 2. Obviously, Heuristic 1 is not perfect and would require a lot of testing and fine-tuning to perform optimally. The modified Heuristic 2 serves as a lower bound: whereas Heuristic 1 beats random collapses, Heuristic 2 performs worse than them.
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\includegraphics[width=0.95\textwidth]{img/COAverages-iteration-edgelength_avg.pdf}
\end{centering}
\captionof{figure}{Edge length / Iteration.}
\label{fig:COAveragesEdgelengthAvg}
\end{minipage}\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\includegraphics[width=0.95\textwidth]{img/COAverages-iteration-edgelength_sd.pdf}
\end{centering}
\captionof{figure}{Edge length SD / Iteration.}
\label{fig:COAveragesEdgelengthSd}
\end{minipage}
\vspace{10pt}
As the spikes in Figures \ref{fig:COAveragesTime} and \ref{fig:COAveragesBaseFaces} indicate runs that did not terminate, it can be seen that Heuristic 1 is more stable than collapsing randomly, but still suffers from some problems. Even slight modifications, like the one resulting in Heuristic 2, can drastically change behavior for the worse. Since scalability and stability on large meshes are important qualities, it would be worthwhile to spend more time improving the collapse order. Alternatively or additionally, changing the way in which the underlying base mesh $\mathcal{B}$ is split could improve performance considerably by reducing the number of base faces $|F^\mathcal{B}|$ and thus also the time spent per iteration.
Overall, the main advantage of using a heuristic can be clearly seen in Figures \ref{fig:COAveragesTime}-\ref{fig:COAveragesEdgelengthSd}: robustness. An average random run may perform on par with a run using Heuristic 1, but a worst case is much more likely. Seeing that the examples converged after around 20 iterations, yet many errors still occurred later, terminating earlier would also improve robustness.
Due to time constraints, the remaining tests were performed on meshes with around 1,000 vertices, compared to the roughly 10,000 vertices of the fertility mesh used in this section. This was necessary since our current implementation does not scale linearly with mesh size\footnote{In our implementation a few global operations remain, such as garbage collection. While base mesh pre- and post-processing have been mostly localized, we still sometimes call methods that iterate over the entire mesh and thus scale poorly. This could probably be improved, facilitating better scalability.}, and single runs on the fertility mesh took up to a few days, especially the outliers.
\subsection{Limiting Flips}
\label{subsec:LimitingFlips}
One variable in our Embedded Isotropic Remeshing algorithm is whether edge flips should be limited or not. Limiting flips means forbidding any edge flip on an embedded edge $e^{\Phi(\mathcal{M})}_x$ that would result in an increase in the length of the flipped edge $e^{\Phi(\mathcal{M})}_{x'}$ compared to its length before flipping $|e^{\Phi(\mathcal{M})}_{x'}|>|e^{\Phi(\mathcal{M})}_x|$. The idea is to reduce spiralization this way. Since spiraled edges are usually much longer than the average edge length, flipping edges into spirals would be forbidden by this rule.
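Expressed as a predicate, the rule reads as in this minimal sketch, where \texttt{length\_after\_flip} is a placeholder for tracing the would-be flipped edge through the base mesh:
\begin{verbatim}
// Minimal sketch of the flip-limiting rule; length_after_flip() is a
// placeholder for tracing the flipped embedded edge through B.
bool flip_allowed(const EmbeddedMesh& m, MetaEdge e, bool limit_flips) {
    if (!limit_flips) return true;                // unlimited flips
    return m.length_after_flip(e) <= m.length(e); // no length increase
}
\end{verbatim}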
We performed 20 test runs for each setting, over 100 iterations of isotropic remeshing each, with $T=0.2$, on a torus with 1,000 vertices. The 20 runs of each setting were then averaged to compare metrics while minimizing noise.
\vspace{7pt}
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\captionsetup{type=figure}
\includegraphics[width=0.95\textwidth]{img/LFAverages-iteration-time_ms.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{Time / Iteration.}
\label{fig:LFAveragesTime}
\end{minipage}\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\captionsetup{type=figure}
\includegraphics[width=0.95\textwidth]{img/LFAverages-iteration-basefaces.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{$|F^\mathcal{B}|$ / Iteration.}
\label{fig:LFAveragesBaseFaces}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\captionsetup{type=figure}
\includegraphics[width=0.95\textwidth]{img/LFAverages-iteration-metafaces.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{$|F^\mathcal{M}|$ / Iteration.}
\label{fig:LFAveragesMetaFaces}
\end{minipage}\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\captionsetup{type=figure}
\includegraphics[width=0.95\textwidth]{img/LFAverages-iteration-basefaces_metafaces.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{$(|F^\mathcal{B}|/|F^\mathcal{M}|)$ / Iteration.}
\label{fig:LFAveragesBaseMetaFaces}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\includegraphics[width=0.95\textwidth]{img/LFAverages-iteration-edgelength_avg.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{Edge length / Iteration.}
\label{fig:LFAveragesEdgelengthAvg}
\end{minipage}\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\includegraphics[width=0.95\textwidth]{img/LFAverages-iteration-edgelength_sd.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{Edge length SD / Iteration.}
\label{fig:LFAveragesEdgeLengthSd}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\includegraphics[width=0.95\textwidth]{img/LFAverages-iteration-valence_avg.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{Valence / Iteration.}
\label{fig:LFAveragesValenceAvg}
\end{minipage}\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\includegraphics[width=0.95\textwidth]{img/LFAverages-iteration-valence_sd.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{Valence SD / Iteration.}
\label{fig:LFAveragesValenceSd}
\vspace{7pt}
\end{minipage}
In Figures \ref{fig:LFAveragesBaseFaces}-\ref{fig:LFAveragesValenceAvg} there is barely any difference between allowing and limiting flips. The only discernible differences in those figures are some slight spikes in the number of base faces and the time per iteration when flips were limited.
In contrast, Figure \ref{fig:LFAveragesValenceSd} shows a pronounced difference in the standard deviation of the valence from the average between runs with limited vs. unlimited flips. This difference is easily explained: Flips are only performed when they would improve valence, but limiting flips forbids some of those flips. Thus, valence is sometimes less optimized locally, and this compounds towards a higher standard deviation compared to allowing all local improvements.\footnote{Note that the average valence stayed at exactly 6 throughout these runs, which is possible since the torus these runs were run on is a very regular, small mesh (1,000 vertices) of genus 1, as opposed to the genus-0 mesh used in the previous section, where the average valence cannot reach 6. Similarly, the fertility mesh in Section \ref{subsec:HeuristicVsRandomCollapses} is of genus 4, allowing the average to climb well above 6 as the number of meta vertices $V^\mathcal{M}$ decreases.}
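Euler's formula makes the footnote's observation precise: a closed triangle mesh of genus $g$ satisfies $|V|-|E|+|F| = 2-2g$ and $2|E| = 3|F|$, so the average valence is
\begin{equation}\nonumber
\frac{2|E|}{|V|} = 6 - \frac{12(1-g)}{|V|},
\end{equation}
which is exactly 6 for the genus-1 torus, stays below 6 for genus 0, and rises above 6 for the genus-4 fertility mesh as $|V^\mathcal{M}|$ shrinks.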
Since limiting flips gives no measurable benefit while increasing the standard deviation from the average valence, it is not advisable to do so. Limiting flips might perhaps improve performance in combination with \textit{random} collapse order, but it is preferable to determine collapse order heuristically, as seen in Section \ref{subsec:HeuristicVsRandomCollapses}.
\subsection{Slack Variables}
\label{subsec:SlackVariables}
During the \textsc{Collapses} and \textsc{Splits} steps of our Embedded Isotropic Remeshing algorithm, embedded meta edges $e^{\Phi(\mathcal{M})}_i\in E^{\Phi(\mathcal{M})}$ are split if their length exceeds $\alpha T$, or collapsed if their length falls below $\beta T$, where $T$ is the target edge length. It is clear that $\alpha\geq 1$ and $\beta\leq 1$, and for traditional meshes the optimal values are $\alpha=\frac{4}{3}$ and $\beta=\frac{4}{5}$ \cite{CGII15}.
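In code, this decision amounts to the following minimal sketch, with edge lengths measured along the embedding:
\begin{verbatim}
// Minimal sketch of the split/collapse decision with slack variables
// alpha and beta around the target edge length T.
enum class EdgeAction { Split, Collapse, Keep };

EdgeAction classify(double embedded_length, double T,
                    double alpha, double beta) {
    if (embedded_length > alpha * T) return EdgeAction::Split;
    if (embedded_length < beta  * T) return EdgeAction::Collapse;
    return EdgeAction::Keep;
}
\end{verbatim}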
Since embedded edges $e^{\Phi(\mathcal{M})}_i$ can be curved, it makes sense to try out wider slack intervals than $[\frac{4}{5}T,\frac{4}{3}T]$. We ran tests for three configurations:
\begin{itemize}
\item $\alpha=\frac{4}{3}, \beta=\frac{4}{5}$
\item $\alpha=\frac{3}{2}, \beta=\frac{2}{3}$
\item $\alpha=2, \beta=\frac{1}{2}$
\end{itemize}
For each of these configurations we executed 20 runs over 100 iterations each on a base mesh $\mathcal{B}$ in the form of a sphere with 1000 vertices. The 20 corresponding runs are each averaged to reduce noise, and then visualized.
\vspace{7pt}
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\captionsetup{type=figure}
\includegraphics[width=0.95\textwidth]{img/ABAverages-iteration-time_ms.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{Time / Iteration.}
\label{fig:ABAveragesTime}
\end{minipage}\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\captionsetup{type=figure}
\includegraphics[width=0.95\textwidth]{img/ABAverages-iteration-basefaces.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{$|F^\mathcal{B}|$ / Iteration.}
\label{fig:ABAveragesBaseFaces}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\captionsetup{type=figure}
\includegraphics[width=0.95\textwidth]{img/ABAverages-iteration-metafaces.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{$|F^\mathcal{M}|$ / Iteration.}
\label{fig:ABAveragesMetaFaces}
\end{minipage}\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\captionsetup{type=figure}
\includegraphics[width=0.95\textwidth]{img/ABAverages-iteration-basefaces_metafaces.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{$(|F^\mathcal{B}|/|F^\mathcal{M}|)$ / Iteration.}
\label{fig:ABAveragesBaseMetaFaces}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\includegraphics[width=0.95\textwidth]{img/ABAverages-iteration-edgelength_avg.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{Edge length / Iteration.}
\label{fig:ABAveragesEdgelengthAvg}
\end{minipage}\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\includegraphics[width=0.95\textwidth]{img/ABAverages-iteration-edgelength_sd.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{Edge length SD / Iteration.}
\label{fig:ABAveragesEdgeLengthSd}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\includegraphics[width=0.95\textwidth]{img/ABAverages-iteration-valence_avg.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{Valence / Iteration.}
\label{fig:ABAveragesValenceAvg}
\end{minipage}\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\includegraphics[width=0.95\textwidth]{img/ABAverages-iteration-valence_sd.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{Valence SD / Iteration.}
\label{fig:ABAveragesValenceSd}
\vspace{7pt}
\end{minipage}
Figures \ref{fig:ABAveragesTime}-\ref{fig:ABAveragesValenceSd} contrast the performances of our three settings for $\alpha$ and $\beta$. In this setting with 1000 base vertices $V^\mathcal{B}$ there were no runs that had to be terminated, as the overhead caused by global methods is much lower. Thus, outliers where base mesh size grows are quickly overcome.
The first point of note is that in Figure \ref{fig:ABAveragesTime}, the default run $\alpha=1.33, \beta=0.8$ took longer than the other two runs as expected, but $\alpha=1.5, \beta=0.66$ outperformed $\alpha=2, \beta=0.5$. The reason $\alpha=1.33, \beta=0.8$ performs worst is apparently that the overly restrictive slack variables cannot account for the curvature of embedded meta edges $e^{\Phi(\mathcal{M})}_i\in E^{\Phi(\mathcal{M})}$. That $\alpha=1.5, \beta=0.66$ outperforms the looser $\alpha=2, \beta=0.5$ could be interpreted as $\alpha=2, \beta=0.5$ giving \textit{too much slack}.
However, when looking at the other datapoints, it becomes apparent that out of our options $\alpha=2, \beta=0.5$ is clearly the most preferable. Isotropic Remeshing aims to optimize average edge length and valence to create a maximally regular mesh, and in these categories $\alpha=2, \beta=0.5$ is the clear winner. Figures \ref{fig:ABAveragesEdgelengthAvg}-\ref{fig:ABAveragesValenceSd} show much less oscillation and also much lower standard deviations for the average edge length and valence metrics when using $\alpha=2, \beta=0.5$. These improvements certainly warrant the slightly higher time per iteration compared to $\alpha=1.5, \beta=0.66$.
Note that these experiments were performed on a sphere to reduce the effects of uneven surfaces on edge lengths, but $\alpha$ and $\beta$ could certainly be adjusted to better fit other shapes of meshes as well.
\subsection{Smoothing Types}
\label{subsec:SmoothingTypes}
Lastly we look at the different types of smoothing, as presented in Section \ref{sec:Smoothing}. We compare 20 runs each over 100 iterations on the surface mesh of a cat for Vertex Weight Smoothing\footnote{See Section \ref{subsec:VertexWeightSmoothing}, Algorithm \ref{alg:vertexweights}.} and Vertex Distance Smoothing\footnote{See Section \ref{subsec:VertexDistanceSmoothing}, Algorithm \ref{alg:vertexdistance}.}.
We skip Forest Fire Smoothing because problems with it became apparent early in development: on meshes of genus 0 it causes the embedding to \textit{slip} off the surface and converge onto a single triangle, essentially vanishing. Because of this, our optimization efforts went into the other smoothing methods, and given the time constraints and two functional alternatives, rewriting Forest Fire Smoothing would not have been worthwhile.
With that out of the way, we again look at 20 averaged runs each for our smoothing types:
\vspace{7pt}
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\captionsetup{type=figure}
\includegraphics[width=0.95\textwidth]{img/STAverages-iteration-time_ms.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{Time / Iteration.}
\label{fig:STAveragesTime}
\end{minipage}\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\captionsetup{type=figure}
\includegraphics[width=0.95\textwidth]{img/STAverages-iteration-basefaces.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{$|F^\mathcal{B}|$ / Iteration.}
\label{fig:STAveragesBaseFaces}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\captionsetup{type=figure}
\includegraphics[width=0.95\textwidth]{img/STAverages-iteration-metafaces.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{$|F^\mathcal{M}|$ / Iteration.}
\label{fig:STAveragesMetaFaces}
\end{minipage}\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\captionsetup{type=figure}
\includegraphics[width=0.95\textwidth]{img/STAverages-iteration-basefaces_metafaces.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{$(|F^\mathcal{B}|/|F^\mathcal{M}|)$ / Iteration.}
\label{fig:STAveragesBaseMetaFaces}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\includegraphics[width=0.95\textwidth]{img/STAverages-iteration-edgelength_avg.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{Edge length / Iteration.}
\label{fig:STAveragesEdgelengthAvg}
\end{minipage}\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\includegraphics[width=0.95\textwidth]{img/STAverages-iteration-edgelength_sd.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{Edge length SD / Iteration.}
\label{fig:STAveragesEdgeLengthSd}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\includegraphics[width=0.95\textwidth]{img/STAverages-iteration-valence_avg.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{Valence / Iteration.}
\label{fig:STAveragesValenceAvg}
\end{minipage}\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{centering}
\includegraphics[width=0.95\textwidth]{img/STAverages-iteration-valence_sd.pdf}
\end{centering}
\vspace{-7pt}
\captionof{figure}{Valence SD / Iteration.}
\label{fig:STAveragesValenceSd}
\vspace{7pt}
\end{minipage}
With a single glance at Figures \ref{fig:STAveragesTime}-\ref{fig:STAveragesValenceSd} it is apparent that Vertex Weight Smoothing outperforms Vertex Distance Smoothing in every single metric. The clear recommendation is to use Vertex Weight Smoothing as the default smoothing type.