How web designers, architects, and engineers use computer graphics, and what makes a good topology structure.
When the fax machine was invented, people around the world were stunned. How is it possible to send images over telecommunication lines?
This astonishment came from a common misunderstanding about digital images. The reality is that, much like a fax machine transmits simply encoded information in the form of bytes, digital images as we know them are just compressed bytes that encode instructions for how your computer screen should render a look-alike image.
The fact is that your computer screen can only display colors as a 2D grid of pixels, but there are a few ways of giving it instructions for what color to paint on each pixel.
We often use the word ‘image’ when describing a raster graphic created via a digital camera. However, in computer graphics, you have a few additional forms of graphics.
2D and 3D graphics programs use complex trigonometry, rendering, and image-processing techniques to create and visualize digital graphics. However, despite the complexity of their math and algorithms, they are built on surprisingly simple concepts and structures.
In this post, I’ll attempt to simplify it even more, while still exploring and explaining all the necessary details, keeping in line with the SelfCAD mantra of “Simple enough for beginners and powerful enough for professionals.” Are you intrigued? Let’s get started!
What is a Pixel?
Pixels are tiny dots; they are the smallest points a digital screen can color and visualize. Most computer monitors can choose from upwards of 16,000,000 colors, and each pixel can have a different color. Such a rich color palette can display amazing photography. With the help of image processing, you can enhance any image to look even better than the original snapshot, and with the help of 2D and 3D graphics editors, you can create amazing photorealistic 2D and 3D digital images that look better than a picture taken with the best photography equipment.
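The “16,000,000 colors” figure comes from standard 24-bit color, where each pixel stores three 8-bit channels (red, green, and blue). As a quick sketch of the arithmetic:

```python
# 24-bit ("true color") pixels store one byte per channel:
# 256 levels of red x 256 of green x 256 of blue.
def palette_size(bits_per_channel: int = 8, channels: int = 3) -> int:
    return (2 ** bits_per_channel) ** channels

print(palette_size())  # 16777216, i.e. the "16,000,000 colors" above
```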
What is a Raster Image, and how does your computer use it?
The most common raster image file formats are PNG and JPG. They are different file formats that compress and compact thousands, or sometimes millions, of pixels. Each pixel has an XY position and RGB color information. Digital cameras and raster image editors compile the pixel data into small image files so they take less time to load and occupy less space in memory and on the hard drive.
It’s important to use standard file types that all computers know how to open and process. PNG and JPG are standard compression algorithms that produce files with the corresponding file extensions, and all modern computers know how to open and process them.
When you load an image into your computer, the computer unpacks and reads the instructions for each pixel and then paints (renders) the same colors on your computer screen. This means that you never actually see the original image itself; the image file is only used to give instructions for what your computer should render on your screen.
How do digital screens work?
Every digital screen has a set number of pixels. Each pixel has an XY position and accepts an RGB color. The screen size and the pixel density determine how many pixels your screen has, which is referred to as your “screen resolution”. For example, my laptop has a screen width of 1536 pixels and a screen height of 864 pixels (1536x864), which means I have a total of 1,327,104 available pixels on my screen.
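As a sketch, the resolution arithmetic above (and the aspect ratio mentioned earlier) can be computed directly; the function name is my own:

```python
import math

def resolution_info(width: int, height: int):
    """Return the total pixel count and the simplified aspect ratio."""
    divisor = math.gcd(width, height)
    return width * height, f"{width // divisor}:{height // divisor}"

print(resolution_info(1536, 864))  # (1327104, '16:9')
```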
The question is, what happens when there is a mismatch between the number of pixels in the image and the number of pixels on the screen? Well, it turns out that computers are great at downscaling (making an image smaller), so loading a large image on a small screen (e.g. a smartphone) will look perfectly fine, but loading a small image onto a large screen is a big problem. There are some advanced algorithms to upscale (enlarge) an image by filling in the missing pixels, but it’s not so simple and is not standard across devices. Therefore, in most cases you will end up seeing a pixelated image, because when you make the image bigger, you end up with empty pixels. This is one of the big challenges for web developers: you need to load nice, pixel-perfect images while at the same time loading the page as fast as possible. Therefore, most developers will create a few image sizes and load the correct size based on the screen size.
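To make the pixelation problem concrete, here is a minimal sketch of the simplest upscaling strategy, nearest-neighbor, which just repeats existing pixels. Real devices use smarter interpolation, but the blocky result below is exactly the pixelation described above:

```python
def upscale_nearest(pixels, factor):
    """Upscale a 2D grid of pixel values by repeating each pixel
    `factor` times horizontally and vertically (nearest-neighbor)."""
    result = []
    for row in pixels:
        stretched = [value for value in row for _ in range(factor)]
        for _ in range(factor):
            result.append(list(stretched))
    return result

# A 2x2 "image" becomes a blocky 4x4 one - no new detail is invented.
print(upscale_nearest([[1, 2], [3, 4]], 2))
```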
It is also tedious to create digital images using just pixels. In the real world we often use bigger building blocks when making objects, so why not do the same in the digital world? These challenges created the need for Scalable Vector Graphics (SVG).
Vector graphics are another way of giving your computer instructions for what to paint on each pixel of the screen (viewport). However, instead of giving pixel-by-pixel instructions, they give instructions using primitives (building blocks). Vector graphics can describe large parts with far fewer details. This means that the visual size of the image has no direct correlation to the size of the file; it all depends on the number of primitives used.
Scalable Vector Graphics (SVG) is the standard vector file format used across modern web browsers and graphic applications. However, instead of directly describing what pixels to color, they use a markup language to describe larger shapes and your computer mathematically translates them into pixel-by-pixel instructions.
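For example, a 100x100 red square needs 10,000 pixel instructions as a raster image, but only one `<rect>` element as SVG markup. A sketch that builds such an SVG document as a string (the helper function is my own):

```python
def red_square_svg(size: int) -> str:
    """Build a minimal SVG document: one rect element describes the
    whole square, no matter how many pixels it will cover on screen."""
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{size}" height="{size}">'
        f'<rect width="{size}" height="{size}" fill="red"/>'
        '</svg>'
    )

print(red_square_svg(100))
```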
Scaling - Vector Vs Raster
Scaling an SVG just changes its width and height parameters and does not change the file size at all, whereas scaling a raster image means adding or removing pixels, so the file size gets larger or smaller depending on the image size.
Primitives are widely used in programming and in computer graphics, and you can also make your own primitives (building blocks) as needed. When it comes to SVG graphics, you have six types of primitives:
1. A Vertex - This is a similar concept to a pixel in raster images. It is the smallest point you can use to describe any drawing in vector graphics. A vertex is sometimes called a vector, because besides its XY position and RGB color, it can also include a direction, and a point with a direction is referred to as a vector.
2. Lines - Lines are a 2nd-level primitive: they take the first primitive (the vertex) and extend it into a new type of building block by combining two vertices (the plural of vertex) and drawing a line between them.
3. Rectangle - A Rectangle is the 3rd level. It is made of four lines.
4. Arcs and Circles - These shapes are also made of vertices, so they are 2nd-level, but they differ from lines because they use trigonometric functions to create circular shapes from just a few vertices.
5. Splines - Splines are a 3rd-level primitive made of a set of arcs. You can approximate any shape using a set of lines, but splines are different because they use math functions to smooth out all the arcs and display them as one continuous spline.
6. Custom shapes - You can create any custom shape using a combination of all the above types and describe it as a path. Paths are the way you start drawing complex shapes.
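As a sketch of the last point, here is the path syntax SVG actually uses: M moves to a point, L draws a line, A draws an arc, and Z closes the loop, so a single `d` attribute can mix several primitive types. (The shape itself, a rectangle with one rounded corner, is just an illustration.)

```python
# Path commands: M = move to, L = line to, A = elliptical arc, Z = close.
path_data = "M 10 10 L 90 10 L 90 70 A 20 20 0 0 1 70 90 L 10 90 Z"

svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
    f'<path d="{path_data}" fill="none" stroke="black"/>'
    '</svg>'
)
print(svg)
```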
The main benefits of using SVG for websites
SVG is the best option for drawing basic images on websites because you can draw large parts with just a few lines. For example, a line just needs two vertex points, and it does not matter how long the line is; you can change the position of the points at any time to make it larger or smaller. In the case of raster images, you need to describe each pixel of the line, so a bigger line means many more pixels. Hence, the file gets larger.
The main benefit of using SVG for designers
Using lines, arcs, and splines allows you to quickly and smoothly trace any image, and makes editing basic shapes and outlines much faster and more accurate. In general, vector graphics make it much easier to draw shapes, while raster is much better suited to creating pixel-perfect, photorealistic images.
Why developers love SVG
In the Online 3D Modeling post, I explained how each browser has a rendering engine. That engine is responsible for rendering both raster and vector graphics (in the browser). Browsers have added and standardized the above primitives. The most intriguing parts are the circles, arcs, and splines, because the browser automatically does all the math. Developers love this, as it significantly reduces programming time. This is not the case when it comes to 3D graphics.
3D modeling software
3D objects have two parts to them: the basic 3D object is similar to vector graphics, and the visualization/rendering is more like raster graphics.
Rendering itself is also divided into two parts: the first part takes primitives as instructions, similar to vector graphics, and the second part renders pixel by pixel, similar to raster images.
When you visualize a 3D object using just basic colors, it’s best to use a fast primitive based rendering engine. However, when creating photo-realistic renderings, you need to use a much slower pixel-by-pixel rendering engine.
3D modelers specialize in creating digital 3D objects and often do not need an advanced rendering engine. This is especially the case when designing for 3D printing. Conversely, many designers specialize in rendering other people’s objects and do not do 3D modeling on their own.
Nowadays, most 3D modeling applications also include a rendering engine. However, photorealistic rendering is a topic of its own, and for the sake of this post, we will focus just on 3D modeling and the basic primitive-based rendering parts.
The most basic, first-level primitive is a vertex (as is the case with SVG). The single second-level primitive is a line, while triangles (and in some systems also rectangles) are the 3rd-level primitives (called faces). Other basic shapes are already custom shapes, like an SVG path.
You can technically describe very complex scenes using just vertices. Case in point: 3D laser and LiDAR scanners create a point cloud, which is just a set of vertices, similar to pixels in a raster image. However, all standard 3D modeling applications and 3D modeling exports use other primitives to create and modify 3D objects. You rarely see someone working directly on a point cloud; most 3D scanners come with software to convert the point cloud into a mesh (a 3D polygon object) that you can work on using standard 3D modeling software and tools.
To describe a line you need just two vertices, the same as in vector graphics.
You can use 3D modeling applications to create 2D shapes like rectangles, triangles, etc. However, the purpose of 3D modeling is to create 3D shapes, and because the standard way of describing 3D shapes is a mesh of triangles (called a polygon mesh), these triangles have become the face of any 3D object. Hence, we refer to them as faces.
From a technical point of view, the rendering engine can only color triangles (some also support rectangles), so any other shape, even a 2D one like a circle, needs to be triangulated (converted into a set of triangles). That’s why they are already custom shapes.
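A minimal sketch of what “triangulating” means. For a convex polygon, the simplest approach is a triangle fan from the first vertex; real triangulators (e.g. ear clipping) also handle concave shapes, so treat this only as an illustration:

```python
def triangulate_fan(indices):
    """Fan-triangulate a CONVEX polygon given its vertex indices in
    order. Returns (i, j, k) triangles the rendering engine can color."""
    first = indices[0]
    return [
        (first, indices[i], indices[i + 1])
        for i in range(1, len(indices) - 1)
    ]

# A quad becomes two triangles, a hexagon becomes four, and so on:
print(triangulate_fan([0, 1, 2, 3]))  # [(0, 1, 2), (0, 2, 3)]
```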
Face3 vs Face4
Some shapes, like a circle, are much better off using triangles (called Face3), but designers often prefer rectangles (Face4) when editing other types of 3D polygon shapes.
STL (a standard 3D File exchange format) and some other 3D Polygon mesh file formats only support triangles.
To accommodate designers while still supporting standard 3D file exchange formats, the SelfCAD editor supports both options (Face3 and Face4) and has tools to convert between them. However, SelfCAD will automatically convert all Face4 faces into triangles when exporting to a file format without Face4 support.
2D and 3D shapes
If you use SelfCAD or other professional CAD applications, you will find many additional 2D and 3D primitives. However, they are created and supported as part of each 3D modeling application’s own logic, not as a built-in feature of your computer or browser.
Why does it matter? Well, the main difference is that you no longer have consistency between applications, so you may see different options in each CAD application. You may also have issues transferring objects between different types of modeling applications. This is important to understand when we discuss polygon and other types of modeling applications later.
What is a Polygon?
Polygon comes from Greek: poly- means "many" and -gon means "angle". Any 2D shape made of just straight lines is a polygon, as long as it lies flat on a single plane (called planar) and forms a closed shape (a closed loop), meaning you can start at one point, loop around all the lines (edges), and end back at the starting point.
A triangle, made of three lines, is the smallest possible polygon, because you can’t make a closed loop with just two lines (edges). There is no maximum, as long as the shape meets the requirements. This means that the faces of a 3D object are also polygons, hence the name “polygon mesh”.
Edges vs Lines
You can draw a single line, or a multitude of lines, without ever creating a shape, but when you use lines to make a shape, we often refer to the outer lines as edges. Think about standing on the edge.
Edges VS Arcs
Arcs are a crucial part of 3D modeling, and they extend into two additional primitives (circles and splines). From a user perspective, you can create an arc using just 3 vertices, but some arcs consist of 4 vertices, mainly for modification purposes, as the additional vertex is used to help deform the arc. Nevertheless, the smooth-looking arc, and the splines and circles that extend it using sets of arcs, are all just visual; from a technical perspective, each arc must be converted into a set of edges/lines.
I already described above how SVG incorporates arcs, splines, and circles as built-in primitives, how this is a great advantage for consistency across applications, and how it speeds up development. However, the fact that 3D modeling applications need to create them manually also adds some flexibility: most 3D modeling applications will allow you to set the number of segments of an arc.
The technical definition of ‘segment’ is a separate part or section of a divided figure. In the case of arcs, the number of segments is the same as the number of edges or lines. An arc with many segments/lines/edges will look smooth, while fewer segments reduce its roundness. An arc with just 1 segment is a simple line; as you add more segments, each line is positioned along the circular function and the arc becomes more rounded.
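The segment idea above is easy to sketch in code: to approximate a circle with N segments, you place N vertices along the trigonometric (circular) function and connect them with straight edges. The function below is illustrative:

```python
import math

def circle_vertices(cx, cy, radius, segments):
    """Place one vertex per segment along the circular function.
    More segments = smoother circle; fewer = visible flat facets."""
    return [
        (cx + radius * math.cos(2 * math.pi * i / segments),
         cy + radius * math.sin(2 * math.pi * i / segments))
        for i in range(segments)
    ]

# 6 segments gives a hexagon; 64 segments looks like a smooth circle.
print(circle_vertices(0, 0, 1, 6))
```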
This added flexibility is very helpful in design. If you want to achieve the same drawing in SVG, you need to create it manually from a set of lines and painstakingly draw each line segment, while 3D software automates it all for you.
Arcs and Splines in 3D
You can create polygons using arcs, splines, and circles, but technically they are not called polygons until they are converted into lines. SelfCAD’s resolution, round object, and simplification tools convert them all into lines, and the Fill Polygon function uses a triangulation algorithm to generate the faces needed by the rendering engine and to create a polygon mesh.
Area of a polygon
The inside part (the infill) of any polygon is the area of the polygon. With vector graphics, and likewise with 3D modeling applications, you simply describe the outer edges and fill parameters, and the rendering engine draws the lines and fills in the entire area of the polygon.
Profiles and Sketches
Artists often speak of “sketching a profile” when making an outline of an object, for example, sketching the profile of someone's face. The act of drawing is referred to as sketching, and the drawing itself is referred to as a profile. It also means that both terms (profile and sketch) are used for a shape without a fill.
3D modeling software borrows many concepts from the designer and artist communities, so any object without an infill is referred to as a sketch or a profile; they are one and the same. In SelfCAD you have a “3D Sketch” tool that allows you to make any sketch, while the output of the sketch is called a profile, in line with the artistic definition; some 3D modeling applications, however, refer to the output as a sketch as well.
Wireframe vs Mesh mode
I just described how the rendering engine draws lines as well as infill, and I said that an object without an infill is just a profile. That is true when the object is missing the faces that make up the infill. For example, a bigger primitive like a complex path, or even a simple circle, that is missing faces (not yet triangulated) is only a profile. However, once you have a triangulated object, you can choose how to see it from three available modes.
In Mesh mode, you only see the infill without the edges. This is best for visualization, when you want to see a smooth object.
In Wireframe mode, you see only edges. This is very helpful in surface modeling, when you need to inspect the inside of an object.
Wireframe Plus Edge mode
Wireframe plus edges mode is great for visualizing the complete object and is the best way to clearly see the topology structure.
Concave vs Convex Polygon
A convex polygon has no angles pointing inwards. A polygon with an internal angle between edges of greater than 180° is a concave polygon. To remember the difference, think of the "cave" in “concave”.
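The mnemonic can be checked mechanically: walk around the polygon and look at the turn direction at each corner. If every turn bends the same way, no internal angle exceeds 180° and the polygon is convex. A small sketch (the vertices are assumed to be listed in order around the shape):

```python
def is_convex(poly):
    """True if every turn around the polygon bends the same way.
    `poly` is a list of (x, y) vertices in order, last point not repeated."""
    n = len(poly)
    turn_signs = set()
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        cx, cy = poly[(i + 2) % n]
        # 2D cross product: positive = left turn, negative = right turn.
        cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
        if cross != 0:
            turn_signs.add(cross > 0)
    return len(turn_signs) <= 1

print(is_convex([(0, 0), (1, 0), (1, 1), (0, 1)]))          # True (square)
print(is_convex([(0, 0), (4, 0), (4, 4), (2, 1), (0, 4)]))  # False (dent)
```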
Self Intersecting polygons
To be a valid polygon, you need to be able to start from any vertex and trace around the entire polygon until you come back to the starting point, without missing any vertex and without passing over any vertex more than once. This means that self-intersecting polygons are invalid polygons. To fix a self-intersecting polygon, you need to remove some of the edges.
In the above example, I drew a rectangle and then drew a circle from the center of the top edge. This is a common way designers create complex polygons; you just need to trim the extra parts.
In most 3D modeling applications you have a separate “Trim” command that, once activated, allows you to click on the edges you want to trim away. In SelfCAD you can simply activate face mode to select complete sections, or edge mode to select just parts, and then delete them. This adds much more flexibility and makes it convenient for designers to reuse the same selection methods throughout the entire design process.
What is a Path?
Think about walking on a path. A path is technically any loop of lines. Every face and every polygon is also a path, but not every path is a polygon, and not every polygon is a face. A path is also not limited to lines; it can use other primitives, like arcs and splines, as well.
Plane and Planar and Scene
In 2D graphics you only have a single drawing surface (called a plane), but when it comes to 3D, the 3D modeling editor emulates the real world. Just as you can have many drawing surfaces/planes in the physical universe, you can also create many 2D drawing planes in a 3D modeling application. The entirety of all the 2D and 3D objects, including all the planes, is called the “scene”; the scene is the virtual universe.
If a piece of geometry in the scene, for example a circle, is positioned directly on a single plane, it’s called “planar”, while if some vertices do not touch the plane, for example in a 3D spline, it is no longer planar geometry.
Path vs Plane and Surface
I described above how each face and polygon is also a path. However, they all differ from planes and surfaces in the sense that a path only describes a loop of edges, while planes and surfaces can contain any type of drawing and do not have to be closed.
Plane vs surface
The main difference is that a plane is always 2D, while a surface can be 3D. They also differ in that a plane is not limited in the type of drawings on it - you can simply scribble, and you can have many open or closed paths on a single plane - while a surface needs to be connected. Think about the surface of a sofa: the cushion of a sofa is clearly not planar, yet we call its non-connecting parts different surfaces.
Some CAD applications limit you to drawing only on 2D planes. In SelfCAD, the Freehand drawing tool is limited to plane drawings: you can create many planes, but you can’t connect them directly in a single drawing. The fact that it is kept to planar geometry allows it to perform real-time (very fast, while drawing) cutting (boolean) operations. This tool is great for tracing images and for quickly designing complex patterns.
With the 3D Sketch in SelfCAD you can freely draw on as many planes as needed and you can draw connections between planes or between other drawings (Drawing in thin air) as well as directly drawing on other 2d and 3D geometries.
3D Sketch, when used in conjunction with Loft, revolve, and SelfCAD’s unique Follow Path tools, empowers 3D Modeling professionals and designers to create intricate objects very quickly and accurately.
Contour vs Silhouette
A contour is an outline of a shape or image, without the goal of creating any sort of path. A typical image has many contours. Contour lines are often used to describe elevation in topography.
A silhouette is almost the polar opposite of contour lines. A silhouette describes an entire object in an obscured way that removes all details. Think about the shadow of an object: the shadow can show the entire object (not just the outline), but without much detail.
Polygon Modeling applications
When creating and editing 3D objects, especially intricate polygon meshes, you need the flexibility to select and edit all the above primitives and topology parts. Most CAD software creates different modes, and many create different interfaces, for working with faces, edges, profiles, etc. That is one of the contributing factors to why CAD software is so complicated.
SelfCAD is revolutionizing 3D modeling with a very intuitive, context-based workflow that can select and edit all of the above using a single, simple, user-friendly interface.
Introduction to 3D Modeling
Before diving into 3D modeling, let's move away for a moment from the digital world to focus on the tangible objects of our physical universe.
We live in a three-dimensional universe, which means you can describe the position of everything, even very small objects like a grain of sand, in three dimensions. So how can one- or two-dimensional objects exist in a three-dimensional universe?
GPS vs 3D CAD positioning systems
SelfCAD, like most CAD applications, uses an XYZ (Cartesian coordinate) system, while GPS uses latitude, longitude, and elevation (spherical coordinates) as its 3D positioning system. There are a few other positioning systems as well. However, every object in the physical universe, and in a virtual 3D CAD scene, has a three-dimensional position. So what makes an object one-dimensional, two-dimensional, or three-dimensional?
Obviously it can’t be related to positioning, so what is it? Well, to explain that, we first need to understand measurements. The two basic ways of measuring an object are volume and size. You can also measure weight, density, etc., but they are not related to this topic.
How to measure an object's volume?
One of the most ancient, clever, yet sophisticated methods of measuring irregular objects (and one we still use today) is based on water displacement. You simply submerge the object completely underwater and measure the change in the water level.
Suppose you submerge a stone in a tube that holds 20 mL of water and the water level rises to 40 mL; this indicates that the object has a volume of 20 mL.
Now think for a second, what if you have a cavity inside the object, will it change the measurements?
Well, it all depends. If the object is still closed - for example, replace the solid stone with a balloon that is empty inside but still closed from the outside - it will still displace the same amount of water, so the volume stays the same regardless of its inner density.
However, if you measure a hollow tube that lets water enter inside, you only count the outer walls as volume, since this is how much water it displaces.
This leaves us with an interesting observation: a hollow object is measured by its wall thickness, while a closed (solid) object is measured by its total volume, which also includes its empty space, regardless of its inner density. This understanding is key to understanding the topology structure of a 3D object and how it relates to 3D printing.
How to measure the size of an object?
To measure an object's size, you need to measure the absolute distance between two points. It does not matter at which point you start or in which direction you go; you simply count the number of units between any two arbitrary points.
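Measuring the absolute distance between two points is just the Pythagorean theorem, in whatever unit the coordinates use. A minimal sketch:

```python
import math

def distance(p, q):
    """Absolute distance between two points; works for 2D and 3D
    coordinates alike, in whatever unit the points are given."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(distance((0, 0), (3, 4)))  # 5.0
```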
What is a one-dimensional object?
If you can’t measure the size of an object because it has only one point (unit) of reference, you can’t measure any distance, and hence you get a one-dimensional object. For example, one pixel, one vertex, or a single grain of salt are all 1D objects.
Which measuring Units do we use?
You can technically measure everything in 3D; you may just have to use a smaller unit of measurement. So technically, one-dimensional and two-dimensional objects are classified on a relative basis, according to the specific unit of measurement of your choice.
What makes a two-dimensional object?
If, however, you make a connection between two objects, like drawing a line using two vertices, you have a two-dimensional object, because you can already measure the object's distance/size.
If you add a second connecting line, regardless of its direction (the lines can be parallel or perpendicular to each other), you still have a two-dimensional object, because all you can measure is size and distance. The same is true if you keep adding many more lines, so long as they do not form a closed loop.
Creating an Area?
Once you have a minimum of three lines, you can create a closed loop (a triangle); if you add another line, you can make it a rectangle, and so on. As soon as you get any closed shape, you gain an area, as you can now measure the entire enclosed area of the shape.
Let’s say you have a 100-pixel by 100-pixel rectangle. If you measure its outer perimeter, you get a size of 400 pixels, but if you measure the area, you get 10,000, because you can fill the inner area with up to 10,000 pixels.
You can use any unit; it can be 100 cm by 100 cm, and the result would be the same: you get 10,000 cm², because it’s an absolute measurement, but relative to the units of your choice.
Area vs Volume
In some sense, an area is similar to a volume (described above). The main difference is that an area is still two-dimensional, so it can’t hold anything within its boundaries, while a 3D enclosed shape can already hold water within its boundaries; hence it can displace water, and so it already has a volume.
What makes an object three dimensional?
First off, every object with a volume is 3D, but you can also have 3D objects that are open on one or more sides, so they will not displace water when fully submerged and hence do not have a volume. So what makes them 3D?
Well, the technical definition is any object whose length, width, and height you can measure. We have already established that 1D has no measurements, and that once you can measure a length (X) you have a two-dimensional object; we have also discussed that adding a 2nd direction (Y) still keeps it 2D. However, once you add a 3rd direction (Z), the object becomes 3D. Why these inconsistencies?
Well, it all depends on how many planes you use. You can draw height and width (XY) on a single plane, but if you need to add a 3rd dimension (depth), you already need an additional plane. This means that an object becomes 3D when you need more than one plane to position all of its geometry.
Can we get to higher dimensions?
Well, in theory, yes, but we still can’t fully grasp what makes an object four-dimensional, and certainly, modern CAD software is limited to 3D. You can keep adding planes in any direction and at any angle, and it will still be limited to 3D measurements: height, width, and depth (XYZ).
Understanding XYZ planes
As in basic math, XYZ are just variables that mean nothing special; you can replace them with any other set of letters or numbers. But consistency across applications is crucial, especially when you need to import and export files between applications, and unfortunately not all systems use the same terminology.
Height and width (XY) in 2D
Most 2D applications use a Cartesian coordinate system and describe the horizontal direction (which runs left-to-right across the screen) as X, or width, and the vertical direction (which runs up and down the screen) as Y, or height. There’s no strict technical definition of what height, width, and depth mean, and you could technically use any combination of two to measure 2D, but the standards are height/width and XY.
Height, width, and depth (XYZ) in 3D
GPS and other devices often use different 3D coordinate systems, but when it comes to CAD software, almost all professional 3D modeling applications use a Cartesian coordinate system, describe the axes in terms of height, width, and depth, and use the XYZ labels. However, there’s no clear agreement on the directions.
Everyone agrees to keep X/width as the horizontal direction, the same as in 2D, but some assign the vertical direction to Z and depth to Y, while others keep XY as in 2D and just use Z for depth, as this is the new axis.
If you think from the perspective of drawing on a screen, it makes sense to keep everything the same as with 2D drawings and just add the new direction/axis as depth/Z. But if you look at it from the perspective of tangible 3D objects, the height is usually the 3rd dimension; hence the discrepancy.
The SelfCAD editor uses Y as height by default, but it allows you to change this at any time. SelfCAD’s slicer uses Z as height, since this has become the standard for 3D printing. SelfCAD automatically flips these axes when moving from the editor into the slicer, so from a user perspective there is no difference, and nothing needs to be done when slicing a 3D object in SelfCAD.
In some cases, when you import objects from one application into another, you will need to flip these axes manually. It's not a big deal; a simple rotation will do it.
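As a sketch, the Y-up vs Z-up conversion is just a matter of which coordinate plays the role of height. The function name here is my own, and a production converter may also need to negate one axis to preserve the handedness of the coordinate system:

```python
def y_up_to_z_up(x, y, z):
    """Reinterpret a Y-up (editor-style) point as Z-up (slicer-style)
    by swapping the height axis. Note: a plain swap mirrors the model;
    real converters often also negate one axis to keep it right-handed."""
    return (x, z, y)

print(y_up_to_z_up(1.0, 2.0, 3.0))  # (1.0, 3.0, 2.0)
```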
Circular vs Rectangular functions
Circular objects like a circle, an arc, or a 3D cylindrical object all have one thing in common: the number of edges determines the smoothness of their circular outline.
When it comes to rectangular shapes, like a rectangle or a 3D cube, the amount of detail (inner faces) does not affect the outline, and the shape will look the same in mesh mode.
The only reason to add or remove details from a non-circular shape is for organic modeling and sculpting techniques; adjusting the amount of detail determines the behavior of many other 3D modeling tools.
What is a watertight mesh?
A watertight mesh is an object that can hold water without leaking, even if you rotate the object in any direction.
In the above examples, the first cube counts its entire cubic volume as a watertight mesh. The 2nd cube has no volume at all, as all the water will leak out when it is flipped over. The 3rd cube ignores the leaky part in the center and counts the entire wall thickness as its volume.
Exploded vs joint vertices and edges
To create a basic cube, you need a minimum of 6 rectangular polygons, and each polygon has 4 lines, which means we should have 6 * 4 = 24 edges; and since each edge needs two vertices, we should have 24 * 2 = 48 vertices. In the example below, you can see it has 6 faces and 24 edges, as expected, but how come it only has 24 vertices?
This cube has been opened/exploded, so let me show an example of a closed cube to compare.
You can see how a cube with the same 6 faces now has only 12 edges and only 8 vertices. How can that be?
Well, the answer is simple, you can share and reuse any vertex for more than one line and you can reuse any line for more than one face.
In the first, open cube example, you can’t share edges between faces because the faces are not connected to each other, so you need to create all 24 edges. But each edge shares a vertex with its neighboring edge, so you do not need to create all 48 vertices; by sharing, we cut the count in half and only need 24 vertices.
In the closed cube example, we also share edges with the neighboring faces, which reduces the edge count by half, to 12. And the way a cube is structured, we can further reduce the vertex count to only 8, one vertex for each of the 8 corners of the cube, because each corner vertex is shared by 3 faces, and 3 * 8 = 24; hence we cover all 24 vertex uses while still having only 8 vertices in total.
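The sharing arithmetic above can be verified with a small sketch: represent the closed cube as an indexed mesh (faces store vertex indices) and count the unique edges as unordered index pairs. The vertex/face layout below is one possible indexing, chosen for illustration:

```python
# 8 shared corner vertices, indexed so that index = 4x + 2y + z.
cube_vertices = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# 6 quad faces (Face4), each referencing 4 of the shared vertices.
cube_faces = [
    (0, 1, 3, 2), (4, 5, 7, 6),  # x = 0 and x = 1 sides
    (0, 1, 5, 4), (2, 3, 7, 6),  # y = 0 and y = 1 sides
    (0, 2, 6, 4), (1, 3, 7, 5),  # z = 0 and z = 1 sides
]

def count_unique_edges(faces):
    """Count edges, treating edge a-b and edge b-a as the same edge."""
    edges = set()
    for face in faces:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            edges.add((min(a, b), max(a, b)))
    return len(edges)

# 6 faces x 4 edges = 24 edge references, but only 12 unique shared edges.
print(len(cube_vertices), count_unique_edges(cube_faces))  # 8 12
```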
Edge and vertex sharing is the default in most 3D modeling applications, but you can also split them to duplicate all edges and vertices; that is called exploded geometry. However, exploded geometry is considered non-watertight, as water can leak out between the individual, unconnected vertices and edges.
What is a non-manifold mesh?
Watertight and manifold are often used as synonyms, but non-manifold geometry also includes other problematic structures, such as self-intersecting faces, inner faces, and flipped normals. I’ll discuss them all in detail in my upcoming "3D Modeling Tools and Techniques" blog post.
SelfCAD has a Magic Fix tool that will automatically fix and repair most non-watertight and non-manifold issues.
Register now, and try out SelfCAD for free!
Do you have any questions about the article? Have any thoughts that you'd like to share with others? Do you think something should be added or changed? Take part in the discussion in the thread dedicated to this article by clicking on the button down below!