Commit 427974ec authored by Jan Schnathmeier

too much new stuff. bugfixes, improvements, chapter 3

parent f0c96783
Showing with 171 additions and 26 deletions
......@@ -37,7 +37,7 @@ Figure \ref{fig:schreiner3} shows how vertex optimization for a vertex $v$ betwe
The authors discuss several methods of measuring distortion but decided on that of \cite{sander2001texture} since it smoothly penalizes scale distortion. Figure \ref{fig:schreiner4} shows the effect of this optimization method in reducing distortion. Here, the edges of $M^1$ are meshed onto the surface of $M^2$, and at the close-up feature point it can be observed that distortion has been optimized. This technique can then be used to create low-distortion morphs between two meshes, as also seen in Figure \ref{fig:schreiner1} previously in Section \ref{subsec:SurfaceToSurfaceMapping}.
\begin{figure}[h]
\begin{figure}[ht]
\def\svgwidth{\textwidth}
{\centering
\input{img/Campen2014Quad1.pdf_tex}\par
......@@ -72,7 +72,7 @@ Since the topic of this section is geometric mesh embedding optimization, we are
The nodes of the embedding are then iteratively improved in this fashion. Since they should lie on base vertices which are discrete, and snapping a node to the nearest vertex may hamper convergence, the underlying base vertices are simply relocated along with the nodes gradient descent. As soon as the node no longer lies on the vertex or its surrounding patch, the vertex is snapped back to its original position in order not to change the underlying mesh. At last, after the node positions have been optimized, a well refined quad mesh is generated.
\begin{figure}[h]
\begin{figure}[ht]
\def\svgwidth{\textwidth}
{\centering
\input{img/Schmidt2019Surface1.pdf_tex}\par
......@@ -93,7 +93,7 @@ The nodes of the embedding are then iteratively improved in this fashion. Since
\label{fig:schmidt3}
\end{wrapfigure}
Lastly, there is the paper ''Distortion-Minimizing Injective Maps Between Surfaces'' by \cite{schmidt2019distortion}. Figure \label{fig:schmidt1} shows a parameterization of two meshes on the left, mapping onto one another in the middle, and distortion minimization on the right. This already reveals the parallels to the previously discussed \cite{schreiner2004inter}, a work upon which this method builds as well. The aim is to generate a map between two (parts of) disk-topology surfaces, and to minimize distortion on that map.
Lastly, there is the paper ``Distortion-Minimizing Injective Maps Between Surfaces'' by \cite{schmidt2019distortion}. Figure \ref{fig:schmidt1} shows a parameterization of two meshes on the left, mapping onto one another in the middle, and distortion minimization on the right. This already reveals the parallels to the previously discussed \cite{schreiner2004inter}, a work upon which this method builds as well. The aim is to generate a map between two (parts of) disk-topology surfaces, and to minimize distortion on that map.
Figure \ref{fig:schmidt3} visualizes how embeddings work using the method of Schmidt et al. Given input meshes $\mathcal{A}$ and $\mathcal{B}$, an inter-surface map $\Phi(\mathcal{A}):\mathcal{A}\rightarrow\mathcal{B}$ is created by embedding the connectivity of $\mathcal{A}$ into the surface of $\mathcal{B}$. Lastly $\mathcal{M}_{\mathcal{B}}$ on the right is the meta mesh, which contains the vertices of both $\mathcal{A}$ and $\mathcal{B}$ and an additional set of intersection vertices where edges of $\mathcal{A}$ and $\mathcal{B}$ meet.
......@@ -108,7 +108,7 @@ Figure \ref{fig:schmidt3} visualizes how embeddings work using the method of Sch
\label{fig:schmidt2}
\end{wrapfigure}
Whereas \cite{schreiner2004inter} optimizes distortion locally, \cite{schmidt2019distortion} use a global distortion optimization metric. Here, distortion is measured isometrically, since isometric measures are more sensitive towards misaligned geometric features; Figure \label{fig:schmidt2} shows a comparison of different energies. Those energies are symmetric Dirichlet (SD) \cite{schreiner2004inter}, as-rigid-as-possible (ARAP) \cite{liu2008local}, symmetric ARAP (SARAP) \cite{poranne2016simple} and lastly as-similar-as-possible (ASAP). Schmidt et al. use the symmetric Dirichlet energy to measure distortion.
Whereas \cite{schreiner2004inter} optimizes distortion locally, \cite{schmidt2019distortion} use a global distortion optimization metric. Here, distortion is measured isometrically, since isometric measures are more sensitive towards misaligned geometric features; Figure \ref{fig:schmidt2} shows a comparison of different energies. Those energies are symmetric Dirichlet (SD) \cite{schreiner2004inter}, as-rigid-as-possible (ARAP) \cite{liu2008local}, symmetric ARAP (SARAP) \cite{poranne2016simple} and lastly as-similar-as-possible (ASAP). Schmidt et al. use the symmetric Dirichlet energy to measure distortion.
In the implementation, the algorithm computes a 2D mesh overlay of meshes $\mathcal{A}$ and $\mathcal{B}$. Then, it calculates gradients for each vertex, and Hessian matrices using automatic differentiation. Specific feature vertices are picked out as landmark vertices, in order to constrain those points while optimizing the rest.
......@@ -8,6 +8,6 @@ As presented in \cite{CGII15}, the algorithm takes a mesh $\mathcal{M}$, a targe
\item \textbf{Edge Splits}: First, all edges that exceed the length $\alpha T$ are split. $\alpha$ is typically set to $\frac{4}{3}$.\footnote{Setting $\alpha=\frac{4}{3}$ ensures that an edge is split when it has length $T+\epsilon$ such that the resulting edges after the split have length $T-\epsilon$, proof in \cite{CGII15}} Setting it to $1$ would lead to oscillation, whereas setting it too high causes too fast convergence.
\item \textbf{Edge Collapses}: Similarly, any edges that fall below the length $\beta T$ are collapsed. $\beta$ is typically set to $\frac{4}{5}$.\footnote{Derived similarly to $\alpha$, see \cite{CGII15}} Just like $\alpha$, $\beta$ too can be fine-tuned: $\beta$ approaching $1$ causes oscillation, whereas approaching $0$ causes too fast convergence.
\item \textbf{Edge Flips}: Edge flips are performed wherever they improve the average deviation of vertex valence from 6. In a completely regular mesh, vertices have an average valence of 6 due to the Euler characteristic \cite{euler1758elementa}. Flipping an edge reduces the valence of the two vertices it previously connected by one each, and increases the valence of the two vertices it newly connects by one each. This can be used to greedily optimize valence locally.
\item \textbf{Tangential Smoothing}: Lastly, vertices are smoothed using normal and tangential smoothing. In our embedded meta mesh structure this does not work, since meta vertices have to lie on discrete positions (base vertices). As such, a different way of smoothing will be presented in Section \ref{sec:IsotropicRemeshing}.
\item \textbf{Tangential Smoothing}: Lastly, vertices are smoothed using normal and tangential smoothing. In our embedded meta mesh structure this does not work, since meta vertices have to lie on discrete positions (base vertices). As such, a different way of smoothing will be presented in Chapter \ref{ch:EmbeddedIsotropicRemeshing}.
\end{itemize}
These steps are repeated for a number of iterations and should converge towards a better mesh eventually. As can be deduced by the operations performed in each iteration, Incremental Isotropic Remeshing aims to optimize edge length, valence, and smoothness of a mesh. Implementing this on our data structure will require some modifications, but more on that in Chapter \ref{ch:EmbeddedIsotropicRemeshing}
These steps are repeated for a number of iterations and should converge towards a better mesh eventually. As can be deduced by the operations performed in each iteration, Incremental Isotropic Remeshing aims to optimize edge length, valence, and smoothness of a mesh. Implementing this on our data structure will require some modifications, but more on that in Chapter \ref{ch:EmbeddedIsotropicRemeshing}.
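To make the interplay of these steps concrete, the following is a minimal sketch of one such iteration in C++ against an OpenMesh-style triangle mesh; the exact iteration and cleanup details are illustrative and not taken from \cite{CGII15}.
\begin{verbatim}
// Sketch of one Incremental Isotropic Remeshing iteration (illustrative).
// Assumes status attributes (request_*_status) have been requested.
#include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>
typedef OpenMesh::TriMesh_ArrayKernelT<> TriMesh;

void remeshing_iteration(TriMesh& mesh, double T) {
  const double alpha = 4.0 / 3.0;                // split threshold factor
  const double beta  = 4.0 / 5.0;                // collapse threshold factor

  // 1. Edge splits: split edges longer than alpha*T at their midpoint.
  for (auto e = mesh.edges_begin(); e != mesh.edges_end(); ++e) {
    if (mesh.calc_edge_length(*e) <= alpha * T) continue;
    auto heh = mesh.halfedge_handle(*e, 0);
    auto mid = (mesh.point(mesh.from_vertex_handle(heh)) +
                mesh.point(mesh.to_vertex_handle(heh))) * 0.5;
    mesh.split(*e, mid);
  }

  // 2. Edge collapses: collapse edges shorter than beta*T (if legal).
  for (auto e = mesh.edges_begin(); e != mesh.edges_end(); ++e) {
    if (mesh.status(*e).deleted()) continue;
    auto heh = mesh.halfedge_handle(*e, 0);
    if (mesh.calc_edge_length(*e) < beta * T && mesh.is_collapse_ok(heh))
      mesh.collapse(heh);
  }

  // 3. Edge flips: flip an edge if that reduces the squared deviation of
  //    the four affected vertex valences from the regular valence 6.
  for (auto e = mesh.edges_begin(); e != mesh.edges_end(); ++e) {
    if (mesh.status(*e).deleted() || !mesh.is_flip_ok(*e)) continue;
    auto h0 = mesh.halfedge_handle(*e, 0), h1 = mesh.halfedge_handle(*e, 1);
    int a = int(mesh.valence(mesh.to_vertex_handle(h0)));
    int b = int(mesh.valence(mesh.to_vertex_handle(h1)));
    int c = int(mesh.valence(mesh.to_vertex_handle(mesh.next_halfedge_handle(h0))));
    int d = int(mesh.valence(mesh.to_vertex_handle(mesh.next_halfedge_handle(h1))));
    auto dev = [](int p, int q, int r, int s) {
      return (p-6)*(p-6) + (q-6)*(q-6) + (r-6)*(r-6) + (s-6)*(s-6);
    };
    if (dev(a-1, b-1, c+1, d+1) < dev(a, b, c, d))   // valences after vs. before
      mesh.flip(*e);
  }

  // 4. Smoothing: move each vertex towards the centroid of its neighbours
  //    (the full algorithm additionally projects back onto the surface).
  for (auto v = mesh.vertices_begin(); v != mesh.vertices_end(); ++v) {
    TriMesh::Point centroid(0, 0, 0);
    int n = 0;
    for (auto vv = mesh.vv_iter(*v); vv.is_valid(); ++vv) {
      centroid += mesh.point(*vv);
      ++n;
    }
    if (n > 0) mesh.set_point(*v, centroid / n);
  }

  mesh.garbage_collection();                     // purge collapsed elements
}
\end{verbatim}
On our embedded meta mesh the split, collapse and flip steps have to respect the embedding, and the smoothing step is replaced as discussed above; the sketch only illustrates the plain algorithm on an ordinary trimesh.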
......@@ -3,7 +3,7 @@
The first related work introduced in this section will be ``Lifted Bijections for Low Distortion Surface Mappings'' \cite{aigerman2014lifted} by Aigerman et al. In this paper, the authors present an algorithm that creates a bijective mapping between two surface meshes. The input is two topologically equivalent, oriented and boundaryless surface meshes $\mathbf{M} = (\mathbf{V}_{\mathbf{M}},\mathbf{E}_{\mathbf{M}},\mathbf{T}_{\mathbf{M}})$ and $\mathbf{N}=(\mathbf{V}_{\mathbf{N}},\mathbf{E}_{\mathbf{N}},\mathbf{T}_{\mathbf{N}})$, together with a sparse collection of point correspondence pairs $\mathcal{P}=\{(\mathbf{x}_{\mathbf{i}},\mathbf{y}_{\mathbf{i}})\}\subset\mathbf{M}\times\mathbf{N}$, $i=1,\dots,k$. Let the bijective mapping between those surface meshes be $\mathit{f}:\textbf{M}\rightarrow\textbf{N}$; the paper then suggests the following three-step algorithm to find a good mapping $\mathit{f}$ that interpolates the correspondences with low isometric distortion. Figure \ref{fig:aigerman1} also showcases the steps of the algorithm.
\begin{figure}[h]
\begin{figure}[ht]
\def\svgwidth{\textwidth}
{\centering
\input{img/Aigerman2014Lifted.pdf_tex}\par
......
......@@ -46,7 +46,7 @@ Another previously discussed approach is ''Consistent Mesh Parameterization'' \c
However, that is only a temporary solution, since allowing the embedding to persist in this form would leave it cluttered and looking less like a usual mesh. Thus, these duplicate edges are only traced tentatively, and then sorted by length into a priority queue. The longer an edge is, the earlier it is traced, which reduces the average edge length (tracing it later might increase its length due to obstacles). In this second tracing pass overlapping edges are no longer allowed, and tentative edges that violate this are retraced. Finally, the edges are straightened, and base edge splits are performed where necessary to ensure connectivity.
\begin{figure}[h]
\begin{figure}[ht]
\def\svgwidth{\textwidth}
{\centering
{\tiny \input{img/Livesu2020Obtaining1.pdf_tex}\par}
......@@ -95,7 +95,7 @@ Whereas the methods presented so far embed meta edges on base edges and perform
This approach easily solves the space problem shown in Figure \ref{fig:tracingspaceproblem} and discussed earlier in this section. It shares a certain similarity with the approach of Kraevoy et al., who initially traced over faces. Although the original geometry is preserved this way and no changes have to be made to the underlying base mesh itself, this type of embedding requires a complicated secondary data structure on top of the existing mesh data structure. Embedding meta edges on base edges is probably preferable in most applications in order to keep the data structure simple and the resulting embedding nicer to look at.
\begin{figure}[h]
\begin{figure}[ht]
\def\svgwidth{\textwidth}
{\centering
\input{img/Sharp2019Navigating1.pdf_tex}\par
......
......@@ -11,7 +11,7 @@ To start this section it is important to clear up the nomenclature.
The following methods create a surface-to-surface map via mesh segmentation.
\begin{figure}[h]
\begin{figure}[ht]
\vspace{-10pt}
\begin{centering}
\begin{subfigure}{.5\textwidth}
......@@ -42,7 +42,7 @@ The first surface-to-surface mapping method we will look at, \cite{praun2001cons
Figure \ref{fig:praun1} shows the mapping of a simple star-layout onto a horse using naive curve-tracing on the left, and the improved spanning-tree method on the right. Whereas the left side shows overlapping edges, no edge overlaps happen after using the improved method. What's interesting about this method is that the meta mesh is given as an input - see the star in Figure \ref{fig:praun1} in the middle between the horses - and then mapped onto the input surface mesh.
\begin{figure}[h]
\begin{figure}[ht]
\vspace{-10pt}
\begin{centering}
\begin{subfigure}{.5\textwidth}
......@@ -90,7 +90,7 @@ The remaining two steps of the algorithm are cross-parameterization and compatib
This method even works on complicated meshes of higher genus such as the ones that can be seen in Figure \ref{fig:kraevoy2}. The way in which edges are traced over faces is also especially interesting and will become relevant later as a point of comparison with our tracing methods for meta edges.
\begin{figure}[h]
\begin{figure}[ht]
\def\svgwidth{\textwidth}
{\centering
\input{img/Schreiner2004Inter1.pdf_tex}\par
......@@ -134,7 +134,7 @@ Given pairs of feature vertices, the shortest path between them is traced using
\end{itemize}
Many of these considerations are relevant to problems that reappear later during the triangulation and refinement of our embedded meta meshes. Note that \cite{schreiner2004inter} differentiates itself from the previously mentioned \cite{praun2001consistent} and \cite{kraevoy2004cross} in that it incrementally improves its meta mesh via decimation. This is an approach that we will also attempt with our data structure, along the lines of an Incremental Isotropic Remeshing algorithm as described in Section \ref{subsec:IncrementalIsotropicRemeshing}.
\begin{figure}[h]
\begin{figure}[ht]
\def\svgwidth{\textwidth}
{\centering
\input{img/Aigerman2014Lifted.pdf_tex}\par
......
\chapter{Embedding Representation}
\label{ch:EmbeddingRepresentation}
With the background work out of the way, it is time to look at our embedding. This chapter discusses the general structure of our embedding and its functionality. Our embedding is a bijection between a base mesh $\mathcal{B}$ and a meta mesh $\mathcal{M}$, represented via pointers between the two, and functions that manipulate both while honoring the bijection. In order to facilitate running mesh algorithms on the embedding, it also needs to offer sensible mesh operations. For this we present and implement a set of mesh operations that suffices to build arbitrarily complex operations and algorithms on top of. Lastly, we present how we initialize this structure in practice, as there are many different possible ways of doing so.
To summarize:
\begin{enumerate}
\item Section \ref{sec:DataStructure} presents how $\mathcal{B}$ and $\mathcal{M}$ are connected, both logically and in terms of data structures.
\item Section \ref{sec:Operations} discusses our choice of a minimal set of operations necessary to manipulate our meta mesh embedding arbitrarily.
\item Section \ref{sec:Initialization} demonstrates how our initialization of the embedding and the two meshes $\mathcal{B}$ and $\mathcal{M}$ works, and discusses some possible alternatives.
\end{enumerate}
\input{ch/EmbeddingRepresentation/DataStructure.tex}
\input{ch/EmbeddingRepresentation/Operations.tex}
\input{ch/EmbeddingRepresentation/Initialization.tex}
\section{Data Structure}
\label{sec:DataStructure}
In order to embed a meta mesh into a base mesh, further additions to the data structure are needed. To start with, here is a definition of the meta mesh components and how they embed into the base mesh:
\begin{itemize}
\item Each meta vertex is a base vertex (but not every base vertex has to be a meta vertex).
\item Each meta edge is a continuous sequence of base edges; meta edges are not allowed to intersect.
\end{itemize}
\begin{figure}
\def\svgwidth{\textwidth}
{\centering
\input{img/EmbeddingDataStructure.pdf_tex}\par
}
\vspace{-1pt}
\caption{Data structure of the embedded meta mesh.}
\label{fig:embeddingdatastructure}
\end{figure}
Figure \ref{fig:embeddingdatastructure} visualizes a meta edge and two meta vertices embedded into a base mesh. The base mesh is drawn with gray edges and red vertices, blue vertices denote base vertices that are also meta vertices, and one embedded meta halfedge is highlighted in black in the base mesh. The blue line is an implicit depiction of the embedded meta halfedge, drawn as a direct connection between two vertices rather than as a sequence of base edges.
Much like the halfedge data structure OpenMesh is based on, the embedding is represented via a series of pointers between halfedges and vertices.
\begin{enumerate}
\item Each meta vertex is connected to a base vertex.
\item Each base vertex \textit{can be} connected to a meta vertex if one lies on it; otherwise that pointer is in a special state denoting that the base vertex is not connected.
\item Each meta halfedge is connected to the \textit{first} base halfedge it consists of.
\item Each base halfedge that a meta halfedge lies on is connected to that meta halfedge.
\item Each base halfedge that lies on a meta halfedge points towards the next base halfedge in that sequence (if it is not the last).
\end{enumerate}
While it would be possible to add additional pointers between meta mesh and base mesh elements, such as face pointers, the cost of updating them outweighs the benefits. For example, if each base face pointed towards its meta face, every time a meta edge was traced the base faces inside the new meta faces would have to be visited in a breadth-first search. There would be little benefit in doing so in every step beyond the ability to quickly look up which meta face a base face lies in, since iteration over the mesh is fully possible via halfedges.
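The pointer scheme above could, for instance, be realized with mesh properties. The following is a minimal sketch assuming plain OpenMesh triangle meshes for both base and meta mesh; the property and function names are illustrative and not the actual implementation. The numbers in the comments refer to the pointer types listed above.
\begin{verbatim}
// Sketch of the five pointer types as OpenMesh properties (illustrative
// names; the actual implementation may differ).
#include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>
typedef OpenMesh::TriMesh_ArrayKernelT<> Mesh;   // base and meta mesh type

// Properties stored on the base mesh:
OpenMesh::VPropHandleT<OpenMesh::VertexHandle>   vp_meta_vertex;   // (2) meta vertex on this base vertex, invalid if none
OpenMesh::HPropHandleT<OpenMesh::HalfedgeHandle> hp_meta_halfedge; // (4) meta halfedge this base halfedge lies on
OpenMesh::HPropHandleT<OpenMesh::HalfedgeHandle> hp_next_base;     // (5) next base halfedge of the same meta halfedge

// Properties stored on the meta mesh:
OpenMesh::VPropHandleT<OpenMesh::VertexHandle>   vp_base_vertex;   // (1) base vertex this meta vertex lies on
OpenMesh::HPropHandleT<OpenMesh::HalfedgeHandle> hp_first_base;    // (3) first base halfedge of this meta halfedge

void attach_embedding_properties(Mesh& base, Mesh& meta) {
  base.add_property(vp_meta_vertex);
  base.add_property(hp_meta_halfedge);
  base.add_property(hp_next_base);
  meta.add_property(vp_base_vertex);
  meta.add_property(hp_first_base);
}
\end{verbatim}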
\begin{wrapfigure}[19]{r}{0.45\textwidth}
\begin{centering}
\includegraphics[width=0.45\textwidth]{img/CatMetaMesh.png}\par
\end{centering}
\caption{A base mesh with embedded meta mesh (left) and the meta mesh represented explicitly (right); corresponding edges share colors.}
\label{fig:catmetamesh}
\end{wrapfigure}
Note that meta faces are not connected to base faces, since the costs outweigh the benefits. Connecting base faces to their respective meta face would require iterating over many base faces every time a new meta edge is traced, whereas in applications so far it has rarely been necessary to look up the meta face of a base face. It is therefore cheaper to determine the meta face associated with a base face on demand, by searching outward from it until a meta edge is reached.
In this implementation the meta mesh has an explicit representation as a mesh, which is connected with its underlying base mesh via pointers, as seen in Figure \ref{fig:catmetamesh}. It would also be possible to represent the meta mesh entirely implicitly via properties on base mesh elements. It can however be argued that an explicit representation is worth it, since it makes the structure more transparent and operations on the meta mesh easier to apply. Another big advantage of an explicit meta mesh is the speed of mesh traversal: traversing from one meta vertex to another on the base mesh requires walking a potentially long sequence of base halfedges, whereas traversing meta mesh halfedges skips all of these intermediate steps. This increases the speed and convenience of operations on the meta mesh at the cost of the space required to represent the meta mesh as a second mesh connected to the base mesh.
\ No newline at end of file
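As an illustration of the pointer chains just described, the following hypothetical helper (reusing the illustrative property handles from the sketch above) collects the base halfedges that a given meta halfedge consists of; it is exactly this kind of walk that the explicit meta mesh allows us to skip during traversal.
\begin{verbatim}
// Collect the base halfedges an embedded meta halfedge consists of,
// following the "first base halfedge" and "next base halfedge" pointers.
#include <vector>

std::vector<OpenMesh::HalfedgeHandle>
base_halfedge_chain(const Mesh& meta, const Mesh& base,
                    OpenMesh::HalfedgeHandle meta_heh) {
  std::vector<OpenMesh::HalfedgeHandle> chain;
  OpenMesh::HalfedgeHandle bh = meta.property(hp_first_base, meta_heh);
  while (bh.is_valid()) {                 // an invalid handle ends the chain
    chain.push_back(bh);
    bh = base.property(hp_next_base, bh);
  }
  return chain;
}
\end{verbatim}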
\section{Initialization}
\label{sec:Initialization}
In order to work with meta mesh embeddings, we first need to initialize them in some way. The embedding consists of three components:
\begin{enumerate}
\item A base mesh $\mathcal{B}$
\item A meta mesh $\mathcal{M}$
\item Connectivity between $\mathcal{B}$ and $\mathcal{M}$, defining an embedding $\Phi$ of $\mathcal{M}$ on $\mathcal{B}$.
\end{enumerate}
It is clear that one of the input parameters has to be $\mathcal{B}$, but afterwards we are presented with a few options for further input parameters:
\begin{enumerate}
\item $\mathcal{B}, \Phi, \mathcal{M}$: With all parameters given, the embedding is ready and no further initialization needs to be done. On the other hand, this leaves the user with the difficult task of providing a pre-initialized embedding.
\item $\mathcal{B}, \mathcal{M}$: Giving just the two meshes $\mathcal{B}$ and $\mathcal{M}$ and then embedding $\mathcal{M}$ onto $\mathcal{B}$ yields an initialization similar to that of \cite{praun2001consistent}, and could be handled with a similar method for finding the embedding $\Phi$. However, designing a fitting mesh $\mathcal{M}$ in such a fashion is no trivial task and should be simplified further.
\item $\mathcal{B}, V_\mathcal{M}\subset V_\mathcal{B}$: Giving a subset $V_\mathcal{M}\subset V_\mathcal{B}$ of the vertices of $\mathcal{B}$ as an input implicitly defines the vertex embedding which is part of $\Phi$. The connectivity and actual edges of $\mathcal{M}$ can then be triangulated. This type of input is the easiest for a user, since all that needs to be given is a set of feature points $V_\mathcal{M}\subset V_\mathcal{B}$.
\begin{enumerate}[label=(\alph*)]
\item A special case of this input type is $V_\mathcal{M}=V_\mathcal{B}$, in which case $E_\mathcal{M}=E_\mathcal{B}$ is a trivial triangulation and thus $\mathcal{M}=\mathcal{B}$. The initialization then reduces to copying $\mathcal{B}$ (a minimal sketch of this case follows after this list).
\item A further simplification of this is automatically detecting or randomly assigning a set of feature points $V_\mathcal{M}$, then proceeding as usual.
\end{enumerate}
\end{enumerate}
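For the special case (a) above, the initialization can be written out directly. The following sketch reuses the illustrative property handles from Section \ref{sec:DataStructure}; it shows the intent rather than the actual implementation.
\begin{verbatim}
// Trivial initialization for V_M = V_B: the meta mesh is a copy of the
// base mesh and every element is embedded onto itself (illustrative only).
void initialize_trivial(Mesh& base, Mesh& meta) {
  meta = base;                                  // copy connectivity and geometry
  attach_embedding_properties(base, meta);

  for (auto v = base.vertices_begin(); v != base.vertices_end(); ++v) {
    base.property(vp_meta_vertex, *v) = *v;     // indices agree in the copy
    meta.property(vp_base_vertex, *v) = *v;
  }
  for (auto h = base.halfedges_begin(); h != base.halfedges_end(); ++h) {
    meta.property(hp_first_base, *h) = *h;      // a chain of length one ...
    base.property(hp_meta_halfedge, *h) = *h;
    base.property(hp_next_base, *h) = OpenMesh::HalfedgeHandle(); // ... ends here
  }
}
\end{verbatim}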
\section{Operations}
\label{sec:Operations}
Since we operate on a halfedge data structure \cite{kettner1998designing}, we require a sensible set of operations that allows for convenient implementation of algorithms on our structure. For conventional orientable 2-manifold meshes, a minimal and complete set of operations would be the following \cite{akleman2003minimal}:
\begin{enumerate}
\item \textbf{Vertex insertion}: Inserts a new meta vertex into the meta mesh; whereas in a conventional mesh this insertion could happen anywhere, we need to restrict it to the positions of base vertices.
\item \textbf{Vertex deletion}: Deletes a meta vertex.
\item \textbf{Edge insertion}: Inserts a meta edge. In conventional meshes edges are straight lines between the two vertices they connect, but a meta edge can take much more complex shapes due to it being a concatenation of base edges. Here we need to be careful to preserve edge order and to not cross edges. Finding a connecting sequence of base edges between two vertices while using neighboring meta edges as restrictions can be done via Dijkstra's algorithm \cite{dijkstra1959note} or an A* \cite{hart1968formal} tracing algorithm.
\item \textbf{Edge deletion}: Deletes a meta edge.
\end{enumerate}
All of those operations are sensible to have and need to be part of our implementation to make it workable, with edge insertion being especially interesting; but more on that in Section \ref{subsec:RestrictedPathTracing}, where we discuss the implementation in detail. As Akleman et al. show, this set of operations is minimal and complete, but in practice algorithms on meshes can be built better with an extended set of operations. We propose a set of four operations we call \textit{atomic mesh operations}.
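To make the tracing step concrete, the following is a minimal Dijkstra sketch over base vertices in which a caller-supplied predicate marks base edges that must not be crossed (e.g.\ because another meta edge already occupies them); the sector restriction mentioned above can be expressed through the same predicate. All names and details are illustrative, not the actual implementation.
\begin{verbatim}
// Dijkstra tracing of a meta edge between two base vertices; edges for
// which 'blocked' returns true are never crossed (illustrative sketch).
#include <algorithm>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

std::vector<OpenMesh::VertexHandle>
trace_path(const Mesh& base,
           OpenMesh::VertexHandle from, OpenMesh::VertexHandle to,
           const std::function<bool(OpenMesh::EdgeHandle)>& blocked) {
  const double INF = std::numeric_limits<double>::infinity();
  std::vector<double> dist(base.n_vertices(), INF);
  std::vector<OpenMesh::VertexHandle> prev(base.n_vertices());
  typedef std::pair<double, OpenMesh::VertexHandle> Item;
  std::priority_queue<Item, std::vector<Item>, std::greater<Item> > pq;

  dist[from.idx()] = 0.0;
  pq.push(Item(0.0, from));
  while (!pq.empty()) {
    Item top = pq.top(); pq.pop();
    if (top.first > dist[top.second.idx()]) continue;   // stale queue entry
    if (top.second == to) break;
    // relax all outgoing base halfedges that are not blocked
    for (auto voh = base.cvoh_iter(top.second); voh.is_valid(); ++voh) {
      if (blocked(base.edge_handle(*voh))) continue;
      OpenMesh::VertexHandle w = base.to_vertex_handle(*voh);
      double d = top.first + (base.point(w) - base.point(top.second)).norm();
      if (d < dist[w.idx()]) {
        dist[w.idx()] = d;
        prev[w.idx()] = top.second;
        pq.push(Item(d, w));
      }
    }
  }

  std::vector<OpenMesh::VertexHandle> path;
  if (dist[to.idx()] == INF) return path;               // no admissible path
  for (OpenMesh::VertexHandle v = to; v != from; v = prev[v.idx()])
    path.push_back(v);
  path.push_back(from);
  std::reverse(path.begin(), path.end());
  return path;
}
\end{verbatim}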
\begin{figure}[h]
\begin{centering}
\begin{subfigure}{.5\textwidth}
\def\svgwidth{0.9\linewidth}
{\centering
\input{img/ConceptCollapse.pdf_tex}\par
}
\caption{Halfedge collapse of AB halfedge.}
\label{fig:ConceptCollapse}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\def\svgwidth{0.9\linewidth}
{\centering
\input{img/ConceptRotation.pdf_tex}\par
}
\caption{Edge rotation of AB edge.}
\label{fig:ConceptRotation}
\end{subfigure}
\end{centering}
\end{figure}
\begin{enumerate}
\item \textbf{Halfedge collapse}: In this operation a halfedge AB is removed from the mesh, and all vertices connected to vertex A are then connected to vertex B. Afterwards, loops are removed to avoid double edges; see Figure \ref{fig:ConceptCollapse} for reference.
\item \textbf{Edge rotation}: The input is an edge AB, which is removed and then retraced by walking along the sector around it counter-clockwise (halfedge direction) from A and B and picking the next vertices respectively. See Figure \ref{fig:ConceptRotation}. \textit{In a trimesh this corresponds to flipping the edge AB, and in most of our applications this is the case}.
\end{enumerate}
\begin{figure}[h]
\begin{centering}
\begin{subfigure}{.5\textwidth}
\def\svgwidth{0.9\linewidth}
{\centering
\input{img/ConceptEdgeSplit.pdf_tex}\par
}
\caption{Edge split of AB edge on vertex C.}
\label{fig:ConceptEdgeSplit}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\def\svgwidth{0.9\linewidth}
{\centering
\input{img/ConceptFaceSplit.pdf_tex}\par
}
\caption{Face split into vertex B.}
\label{fig:ConceptFaceSplit}
\end{subfigure}
\end{centering}
\end{figure}
\begin{enumerate}[resume]
\item \textbf{Edge split}: For an input edge AB, insert a new meta vertex C on AB and connect it to all vertices in the adjacent faces of the AB edge. Note that C cannot be chosen arbitrarily but has to lie on the base mesh $\mathcal{B}$. See Figure \ref{fig:ConceptEdgeSplit}.
\item \textbf{Face split}: For an input base vertex A, create a meta vertex B and connect it to all vertices of the face it lies in. See Figure \ref{fig:ConceptFaceSplit}. Similar to the edge split, A cannot be an arbitrary point inside the face but \textit{must} be a base vertex to preserve the embedding (a sketch of the combined interface follows after this list).
\end{enumerate}
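Taken together, the four atomic operations could be exposed on the embedding through an interface along the following lines; this is a hypothetical sketch of the signatures, not our actual class.
\begin{verbatim}
// Hypothetical interface of the four atomic operations on the embedded
// meta mesh (signatures are illustrative only).
class EmbeddedMetaMesh {
public:
  // Remove meta halfedge A->B and reconnect A's neighbours to B,
  // removing duplicate edges afterwards.
  void halfedge_collapse(OpenMesh::HalfedgeHandle meta_heh);

  // Remove meta edge AB and retrace it between the next vertices of the
  // sectors around it (an edge flip in the trimesh case).
  void edge_rotation(OpenMesh::EdgeHandle meta_eh);

  // Split meta edge AB at base vertex C and connect C to all vertices of
  // the two adjacent meta faces.
  OpenMesh::VertexHandle edge_split(OpenMesh::EdgeHandle meta_eh,
                                    OpenMesh::VertexHandle base_c);

  // Insert a new meta vertex at base vertex A and connect it to all
  // vertices of the meta face A lies in.
  OpenMesh::VertexHandle face_split(OpenMesh::VertexHandle base_a);
};
\end{verbatim}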
These four \textit{atomic operations} are sufficient to represent other common mesh operations such as:
\begin{enumerate}[resume]
\item \textbf{Vertex Relocation}: Moving a vertex A to a new point B inside one of its adjacent faces. This can be done by first performing a face split at B, then a collapse of the AB halfedge (see the sketch at the end of this section).
\end{enumerate}
In fact, it can be shown that our four \textit{atomic operations} are complete for trimeshes, since they allow for arbitrary changes to connectivity and vertex positions. Any \textit{atomic operation} on a trimesh will still result in a trimesh, which is a very useful property since many algorithms run entirely on trimeshes.
If the desired context is a broader polymesh and not just a trimesh, this can be achieved by using operations from the complete and minimal set presented at the start of this section (e.g.\ by deleting one edge between two triangles we obtain a quadrilateral, and the mesh is no longer a trimesh). For convenience it also makes sense to implement a vertex deletion function which also deletes all adjacent edges, since edges with an unconnected end are usually undesirable.
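Using the hypothetical interface sketched above, the vertex relocation from the list above composes directly out of a face split followed by a halfedge collapse; the lookup helper is likewise hypothetical.
\begin{verbatim}
// Vertex relocation composed from atomic operations (illustrative):
// move meta vertex A to base vertex B inside one of A's adjacent meta faces.
void relocate_vertex(EmbeddedMetaMesh& em,
                     OpenMesh::VertexHandle meta_a,
                     OpenMesh::VertexHandle base_b) {
  // 1. Face split: insert a new meta vertex at base vertex B.
  OpenMesh::VertexHandle meta_b = em.face_split(base_b);
  // 2. Halfedge collapse: collapse the A->B meta halfedge so that all of
  //    A's neighbours are reconnected to B and A disappears.
  //    find_halfedge: hypothetical lookup of the meta halfedge from A to B.
  em.halfedge_collapse(em.find_halfedge(meta_a, meta_b));
}
\end{verbatim}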
......@@ -21,6 +21,6 @@ Embedding meta edges in this way is desirable because it is easily accomodated b
Our contribution is a data structure allowing the embedding of a simple meta mesh into a denser base mesh. This allows complex algorithms to be applied to the meta mesh and their changes to then be propagated back to the base mesh. A major distinction from existing state-of-the-art mesh embedding methods is our explicit representation of the meta mesh as its own mesh, as can be seen in Figure \ref{fig:SampleMetaMeshFertility} on the right. This makes it extremely easy to apply common mesh algorithms to the meta mesh, given that it has an implementation of basic mesh operations. On traditional meshes or base meshes there exist several mesh-modifying operations that are used extensively as the building blocks of algorithms: edge collapses, edge flips, edge splits, face splits and vertex relocation. Given that algorithms on meshes often only manipulate the mesh using those atomic operations, implementing them enables the meta mesh to be manipulated by arbitrary mesh algorithms without much additional work.
As implementing such a data structure and operations on it is the main focus of the thesis, it would be remiss not to implement at least one mesh algorithm as well in order to test the stability and efficiency of the implementation. For that purpose the chosen algorithm is Incremental Isotropic Remeshing (details in Section \ref{sec:IsotropicRemeshing} as it can be used as a bench mark for the atomic operations while also potentially proving useful in later use cases e.g. to initialize an embedding by refining the meta mesh using Incremental Isotropic Remeshing. Since the algorithm will require some changes to be applied to a meta mesh and there are different choices to be made while designing it, a few of those options will be compared by testing their runtime and fidelity against each other.
As implementing such a data structure and operations on it is the main focus of the thesis, it would be remiss not to implement at least one mesh algorithm as well in order to test the stability and efficiency of the implementation. For that purpose the chosen algorithm is Incremental Isotropic Remeshing (details in Chapter \ref{ch:EmbeddedIsotropicRemeshing}), as it can be used as a benchmark for the atomic operations while also potentially proving useful in later use cases, e.g.\ to initialize an embedding by refining the meta mesh using Incremental Isotropic Remeshing. Since the algorithm will require some changes to be applied to a meta mesh and there are different choices to be made while designing it, a few of those options will be compared by testing their runtime and fidelity against each other.
The implementation of an Incremental Isotropic Remeshing algorithm on our embedded meta mesh structure has the additional advantage of being useful in establishing a good initial meta mesh. Whereas manually choosing feature points to be triangulated is cumbersome and randomly choosing feature points could lead to unsatisfactory results, Incremental Isotropic Remeshing opens up a third option for initialization. By first selecting the entire base mesh as the meta mesh, we start with the highest-precision representation of the base mesh. From there on the meta mesh can be iteratively remeshed, decreasing in size with each iteration, until it arrives at the desired precision. While more time consuming than a random triangulation, this should deliver a significantly better triangulation, since remeshing algorithms optimize factors such as average edge length and vertex valence.
\subsection{Base Mesh Refinement}
\label{sec:BaseMeshRefinement}
\label{subsec:BaseMeshRefinement}
\subsection{Restricted Path Tracing}
\label{sec:RestrictedPathTracing}
\label{subsec:RestrictedPathTracing}
\subsection{Retracing}
\label{sec:Retracing}
\label{subsec:Retracing}
\subsection{Vertex Relocation}
\label{sec:VertexRelocation}
\label{subsec:VertexRelocation}
\subsection{Edge Rotation}
\label{sec:EdgeRotation}
\label{subsec:EdgeRotation}
\subsection{Edge Split}
\label{sec:EdgeSplit}
\label{subsec:EdgeSplit}
\subsection{Face Split}
\label{sec:FaceSplit}
\label{subsec:FaceSplit}
\subsection{Halfedge Collapse}
\label{sec:HalfedgeCollapse}
\label{subsec:HalfedgeCollapse}
......@@ -54,8 +54,5 @@
\lineheight{1}%
\setlength\tabcolsep{0pt}%
\put(0,0){\includegraphics[width=\unitlength,page=1]{img/Campen2014Quad1.pdf}}%
\put(1.02328089,-1.09528967){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}\end{tabular}}}}%
\put(0.88559589,-1.20973075){\color[rgb]{0.1372549,0.12156863,0.1254902}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}\end{tabular}}}}%
\put(0.54784612,-1.2299613){\color[rgb]{0.1372549,0.12156863,0.1254902}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}\end{tabular}}}}%
\end{picture}%
\endgroup%