Generate Polygon and Partition into 4 Equal Areas

Class: CS 6491 Computer Graphics (Graduate)
Instructor: Jarek Rossignac
Date: Fall 2014
Language: Java/Processing
TA'ed: No
Code: N/A

This also started life as a project for the Graduate Graphics class, although I expanded it due to my interest in the topic. The purpose of the assignment was to divide a polygon into 4 equal areas using minimum-length cuts. This is, of course, an analog of the optimization version of the knapsack problem, which is NP-hard. I expanded my solution to handle planar polygons of any configuration.

I made the simplifying assumption that the shortest half-area cut would be part of the final solution, and I also did not attempt to find cuts that terminate at an interior junction point within the polygon (like the true minimum cuts for an equilateral triangle).

First I randomly generate a polygon, either convex or concave, and then find the two closest edges whose endpoints form a sub-polygon that encloses the "half-cut", the rationale being that such a cut would probably be the smallest. That is not always the case, but it worked pretty well in practice. I then repeat the process on each piece produced by the original cut.

Generating the planar polygon was actually pretty tricky - determining how to make sure there were no loops or crossings was a good exercise.

Here are videos of the project code running (with links if the videos don't play automatically)

Square and Triangles
Irregular, Randomly Generated Convex and Concave Planar Polygons
Intermediate Keyframes via Bezier Interpolation

Specifically, the problem presented in this project was to determine the minimum cut required to split a polygon into 4 regions of equal area. By minimum cut we mean the shortest edge (or set of edges) we need to introduce into the polygon to accomplish the partition. Although the cut does not need to be connected, all of my cuts were connected, and not every partition required 3 cuts to reach the desired 4-equal-area layout (the square, for example, needed only 2). Possible applications of this type of algorithm include manufacturing, where a cutting head or etching laser must be moved across a template in the minimum number of moves while guaranteeing a result, as well as routing algorithms that require load balancing (i.e. “equal area”) while guaranteeing minimum travel routes (i.e. “min cut length”).

The challenges inherent in this problem are twofold: geometric and combinatorial. The geometric challenges include choosing a storage scheme for the polygon that balances accessibility against storage requirements; calculating the required quantities (cut position, length, geometry and resulting area) accurately and with enough generality to apply to as large a range of polygons as possible; and, as in any computational geometry work, handling the floating-point errors and inaccuracies inherent in a discrete representation of what is a continuous structure. The combinatorial challenges are those shared by any problem reducible to an optimization version of SAT (in effect, NP-hard problems), one of which is the difficulty of proving optimality in polynomial time.

My approach was greedy: I found the minimum cut that partitioned the polygon in half, then repeated that procedure on each resulting half. I chose to restrict cuts to straight lines, but one benefit of the greedy approach is that it can generate interior junctions as needed, since each resulting half-polygon is treated as a full polygon and the first cut is handled no differently than subsequent ones. I generated random polygons with no loops and with vertices kept at least some delta distance apart; convex and concave polygons were equally likely.
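
To make the shape of that greedy strategy concrete, here is a minimal sketch of the recursion, assuming a hypothetical halveByMinCut routine (not part of my actual project code) that returns the two roughly half-area pieces of a polygon:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Sketch of the greedy strategy described above: cut the polygon roughly in half,
// then recurse on each half. "Halver" is a hypothetical stand-in for the minimum
// half-area cut search, not the project's actual routine.
public class GreedyEqualAreaSplit {

    // A polygon is just an ordered list of 2D points (x, y) here.
    interface Halver extends Function<List<double[]>, List<List<double[]>>> {}

    // Split 'poly' into 'pieces' parts of (approximately) equal area.
    // Assumes 'pieces' is a power of two, so every level halves every piece.
    static List<List<double[]>> split(List<double[]> poly, int pieces, Halver halveByMinCut) {
        List<List<double[]>> result = new ArrayList<>();
        if (pieces <= 1) {
            result.add(poly);
            return result;
        }
        // Find the shortest cut that splits this piece (approximately) in half...
        List<List<double[]>> halves = halveByMinCut.apply(poly);
        // ...then treat each half exactly like the original polygon.
        for (List<double[]> half : halves) {
            result.addAll(split(half, pieces / 2, halveByMinCut));
        }
        return result;
    }
    // e.g. split(polygon, 4, myHalver) yields the 4 pieces, with the first cut
    // treated no differently from the later ones.
}
```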

Simpler versions of this problem include limiting the polygon to a simple shape such as a square or triangle (where a solution can be derived analytically, or via calculus as a minimization problem relating area to perimeter/cut length), as well as limiting the number of splits to just 2 (i.e. 1 cut). For example, finding the minimum cut that partitions an equilateral triangle (the simplest case) into two equal areas starts from the triangle area formula (½ * base * height), where the height is base * sin(60). Define a cut across the triangle parallel to the base, at distance height' from the peak vertex; by similar triangles the length of the cut is height'/sin(60), and the area of this new, smaller triangle is ½ * cutlen * cutlen * sin(60). Setting this equal to ¼ * base * height of the original triangle (which is ½ its area) yields cutlen * cutlen = 2/sqrt(3) * ½ * base * height = base * height * 1/sqrt(3), for a result of cutlen = sqrt(base * height * 1/sqrt(3)). Since every side is the same length, this is not only the cut length required to cut the triangle in half, it is the minimum such cut length. If the triangle were not equilateral, the same process would work using the smallest-angle vertex as the top vertex in the calculation; alternatively, the equations could be set up to be vertex-independent, using variables for the vertex angles and for what are defined as base and cutlen, solving cutlen in terms of base for each vertex, and taking the smallest of the 3 solutions.
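
As a quick sanity check of that derivation, the following snippet (illustrative only, with the side length set to 1) plugs the numbers in and confirms that the triangle above the cut holds exactly half of the original area:

```java
// Numeric sanity check of the equilateral-triangle derivation above.
// The side length (base) is set to 1 here purely for illustration.
public class HalfAreaTriangleCheck {
    public static void main(String[] args) {
        double base = 1.0;
        double height = base * Math.sin(Math.toRadians(60));   // height of the equilateral triangle
        double fullArea = 0.5 * base * height;

        // cutlen = sqrt(base * height * 1/sqrt(3)), from setting the small triangle's area to half
        double cutLen = Math.sqrt(base * height / Math.sqrt(3));

        // Area of the smaller, similar triangle whose base is the cut
        double cutArea = 0.5 * cutLen * cutLen * Math.sin(Math.toRadians(60));

        System.out.printf("cut length       = %.6f (side length = %.6f)%n", cutLen, base);
        System.out.printf("area above cut   = %.6f%n", cutArea);
        System.out.printf("half of the area = %.6f%n", 0.5 * fullArea);
    }
}
```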

My solution, as I stated earlier, involves finding the minimum cut length that “gets close” to cutting the polygon in half. I do this by casting rays through a particular point, which starts at the COM (center of mass) of the polygon, sweeping the ray direction from 0 to pi to find the smallest cut that gets closest to half the area (sorting first on whether the areas are within some delta of a half split, and then on the size of the cut). I then check how close the two areas are to equal, and if they are not “close enough” I incrementally move the ray-cast point toward the COM of the larger piece and repeat the process until the areas are sufficiently close. I also determine whether a particular cut passes through a pair of edges that can hold the half-area cut, by taking the areas of the sub-polygons bounded by the endpoints of those two edges; this gives min and max area bounds for any cut that crosses them.

That bound alone could also have been used to build, all at once, lists of the edge pairs able to hold cuts for the ¼, ½ and ¾ area partitions of the polygon, and then to determine the closest pair in each of the three groups, i.e. between which edges each optimal cut would reside. Once the closest pair of edges that could hold a cut capturing ¼, ½ or ¾ of the polygon is known, the actual cut within those edges can easily be found by interpolating between the edge endpoints, and can then be further minimized by rotating the cut so that it is perpendicular to both edges (if the edges are parallel) or by rotating it about its midpoint to form the base of the isosceles triangle described by the two edges (i.e. so that the cut’s incident angles to the two edges are equal, summing to 180 minus the angle between the edge vectors). These were the mechanisms I used in my project, although I never finished the (much simpler) edge-pairing algorithm just described; all of my results were generated with the ray-casting approach.
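
The “closeness to half area first, then cut length” ordering described above can be written as a simple comparator; the Cut class and the delta value below are illustrative stand-ins rather than the project’s actual data structures:

```java
import java.util.Comparator;

// Sketch of the candidate ordering described above: cuts whose areas land within some
// delta of an exact half split are preferred, with ties broken by cut length.
// The Cut class and the delta value are illustrative, not the project's data structures.
public class CutRanking {

    static class Cut {
        final double areaError;  // |area of one piece - half of the total area|
        final double length;     // length of the candidate cut segment
        Cut(double areaError, double length) { this.areaError = areaError; this.length = length; }
    }

    static Comparator<Cut> ranking(final double delta) {
        return Comparator
                .comparing((Cut c) -> c.areaError > delta)     // within-delta cuts sort first
                .thenComparingDouble((Cut c) -> c.length)      // then prefer the shorter cut
                .thenComparingDouble((Cut c) -> c.areaError);  // then the closer-to-half cut
    }

    public static void main(String[] args) {
        Cut a = new Cut(0.002, 5.0);   // slightly worse area split, shorter cut
        Cut b = new Cut(0.001, 7.0);   // slightly better area split, longer cut
        // With delta = 0.01 both are "close enough" to half, so the shorter cut wins.
        System.out.println(ranking(0.01).compare(a, b) < 0);   // prints: true
    }
}
```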

I organized my mesh in a variant of the described GEL structure, which I defined as a class. I added components describing vertices, edges and loops (although I did not use loops), as well as the functionality required to generate the random mesh, remove loops and self-intersections, space vertices sufficiently, and scale the mesh to fill the screen. Vertices can also be added or deleted, and the mesh can be deformed by dragging, moved, or grown/shrunk. I calculate the area of the mesh using the 2D outer product, which Jarek denotes the det product (or rotated dot product), applied to every vertex and its neighbor relative to the origin, taking half the sum of these products over all vertices. I reorient every mesh so that the result of this calculation is positive, giving a “right hand” polygon. I find intersections by checking whether the endpoints of each of two segments lie on the same side of the other, again using the det product of the vector describing one segment’s direction with the vectors from that segment’s start point to each of the other segment’s endpoints.
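
Here is a small sketch of that area and orientation bookkeeping using the det (rotated dot) product; the method names are my own rather than those of the GEL class:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sketch of the area and orientation bookkeeping described above, using the 2D
// "det" (rotated dot) product. Names and representation are illustrative only.
public class PolyArea {

    // det product of vectors (ax, ay) and (bx, by): ax*by - ay*bx
    static double det(double ax, double ay, double bx, double by) {
        return ax * by - ay * bx;
    }

    // Signed area: half the sum of det products of consecutive vertices (shoelace-style).
    // Positive for counter-clockwise vertex order in x-right, y-up coordinates.
    static double signedArea(List<double[]> poly) {
        double sum = 0;
        for (int i = 0; i < poly.size(); i++) {
            double[] a = poly.get(i);
            double[] b = poly.get((i + 1) % poly.size());
            sum += det(a[0], a[1], b[0], b[1]);
        }
        return 0.5 * sum;
    }

    // Reorient so the signed area comes out positive, as described in the text.
    static void makeOrientationPositive(List<double[]> poly) {
        if (signedArea(poly) < 0) Collections.reverse(poly);
    }

    public static void main(String[] args) {
        List<double[]> square = Arrays.asList(
                new double[]{0, 0}, new double[]{2, 0}, new double[]{2, 2}, new double[]{0, 2});
        System.out.println(signedArea(square));   // prints 4.0 for this 2x2 square
    }
}
```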

To deloop the mesh, I find intersections and iteratively “undo” them: by reflecting a vertex or an edge that crosses another edge to the other, “legal” side of that edge; by swapping adjacent vertices that form a “bowtie”; or by swapping the endpoint vertices of edges that cross (i.e. form an X in the mesh) so that the result is | |: with two edges e1 and e2, swap e1.n.v ↔ e2.v and e1.v ↔ e2.n.v. This only works if the vertices are ordered consistently, so I reorder them into the correct direction before beginning this process.
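
Below is a sketch of one way to iterate that kind of uncrossing on a plain list of vertices. Rather than the GEL edge swap itself, it uses the standard segment-reversal (2-opt style) move, which achieves the same | | result for the crossing pair; the representation and names are illustrative only:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sketch of an uncrossing pass on a plain vertex list. Where two edges cross (form an X),
// it reverses the run of vertices between them (a standard "2-opt" style move) so the X
// becomes | |. This is a stand-in for the GEL-based endpoint swap described above; the
// pass is repeated because removing one crossing can expose or create another.
public class Deloop {

    static double det(double ax, double ay, double bx, double by) { return ax * by - ay * bx; }

    // True if segments (a,b) and (c,d) properly cross, i.e. each segment's endpoints lie
    // strictly on opposite sides of the other segment (the det-product side test).
    static boolean cross(double[] a, double[] b, double[] c, double[] d) {
        double d1 = det(b[0] - a[0], b[1] - a[1], c[0] - a[0], c[1] - a[1]);
        double d2 = det(b[0] - a[0], b[1] - a[1], d[0] - a[0], d[1] - a[1]);
        double d3 = det(d[0] - c[0], d[1] - c[1], a[0] - c[0], a[1] - c[1]);
        double d4 = det(d[0] - c[0], d[1] - c[1], b[0] - c[0], b[1] - c[1]);
        return d1 * d2 < 0 && d3 * d4 < 0;
    }

    // Repeatedly remove self-intersections until none remain (or a pass limit is hit).
    static void uncross(List<double[]> poly, int maxPasses) {
        int n = poly.size();
        for (int pass = 0; pass < maxPasses; pass++) {
            boolean changed = false;
            for (int i = 0; i < n; i++) {
                for (int j = i + 2; j < n; j++) {
                    if (i == 0 && j == n - 1) continue;   // these two edges share vertex 0
                    if (cross(poly.get(i), poly.get(i + 1 == n ? 0 : i + 1),
                              poly.get(j), poly.get(j + 1 == n ? 0 : j + 1))) {
                        // Reversing v[i+1..j] reconnects the crossing edges as (v[i], v[j])
                        // and (v[i+1], v[j+1]), which no longer cross each other.
                        Collections.reverse(poly.subList(i + 1, j + 1));
                        changed = true;
                    }
                }
            }
            if (!changed) return;   // no crossings found in this pass
        }
    }

    public static void main(String[] args) {
        // A "bowtie" square: (0,0) -> (2,2) -> (2,0) -> (0,2) crosses itself once.
        List<double[]> poly = Arrays.asList(
                new double[]{0, 0}, new double[]{2, 2}, new double[]{2, 0}, new double[]{0, 2});
        uncross(poly, 100);
        for (double[] p : poly) System.out.println(p[0] + ", " + p[1]);   // now a simple square
    }
}
```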