What Is 3DGS To Mesh? How Gaussian Splatting Becomes Real 3D Models

What is 3DGS to Mesh? Learn how Gaussian Splatting converts volumetric data into editable polygon meshes, and when to use it over photogrammetry.

onehuang · Mar 3, 2026

TL;DR: What Is 3DGS to Mesh and Where Can You Use It?

  • What it is: 3DGS to Mesh converts volumetric Gaussian Splatting data into a traditional polygon mesh (.OBJ, .FBX, .GLTF) that can be used in standard 3D workflows.

  • Best for: Challenging surfaces such as glossy, reflective, transparent, or low-texture objects where traditional photogrammetry struggles.

  • Where it can be used: The resulting mesh can be imported into Blender, Maya, Unity, Unreal Engine, Godot, or prepared for 3D printing.

  • The result: Highly photorealistic 3D assets that are editable, printable, and ready for real-time or production pipelines.

Example of KIRI Engine 3DGS to Mesh 2.0 converting volumetric Gaussian Splatting data into a clean polygon mesh surface.

Introduction: Why 3DGS to Mesh Matters

For 3D artists, game developers, and spatial creators, digitizing real-world objects has traditionally relied on photogrammetry. While effective for highly textured, matte surfaces, traditional methods struggle with complex lighting, reflective surfaces, and featureless objects.

Recently, 3D Gaussian Splatting (3DGS) introduced a method for modeling a continuous 3D radiance field, capturing complex light behaviors and volume. However, raw volumetric 3DGS data cannot be natively manipulated in standard rendering pipelines. This is where 3DGS to Mesh technology bridges the gap, converting volumetric captures into standard formats usable in 3D software and game engines.

Comparison between traditional photogrammetry reconstruction (left) and 3DGS to Mesh output (right).

This guide explains what 3DGS to Mesh is, how the technology works, and when you should choose it over traditional scanning methods.

What is 3DGS to Mesh?

3DGS to Mesh (3D Gaussian Splatting to Mesh) is the process of converting a volumetric Gaussian Splatting reconstruction into a traditional polygon mesh. To understand how this works, consider this mental model:

  • Traditional Photogrammetry is like guessing the 3D shape of an object by looking at flat photographs from different angles. If the object is a shiny mirror or transparent glass, the photos show different reflections, which confuses the algorithm and creates broken geometry.

  • 3D Gaussian Splatting (3DGS) is like capturing a 3D hologram. It records how light behaves throughout a volume of space, faithfully capturing reflections, specular highlights, and transparencies.

  • 3DGS to Mesh is the process of casting a solid, workable mold (a polygon mesh) from that "hologram," so the object can be rigged, animated, and edited in traditional 3D software.

Reflection removal and surface normal prediction improve mesh reconstruction on glossy surfaces.

How 3DGS to Mesh Works

Unlike standard 3DGS, which uses millions of microscopic, colored "splats" to represent a scene volumetrically, a polygon mesh uses vertices, edges, and faces to define a solid surface.

A 3DGS to Mesh algorithm evaluates the density of the Gaussians and estimates the surface geometry from that density field. It essentially locates the solid boundaries within the volume and wraps a geometric skin (topology) around them.
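The boundary-finding idea can be sketched in a few lines of Python. The single-Gaussian density field, the 0.5 threshold, and the ray-marching step size below are illustrative assumptions, not KIRI Engine's actual algorithm; real mesh extraction runs a marching-cubes-style pass over a full 3D grid of samples.

```python
import math

# Toy density field: a single isotropic Gaussian "splat" at the origin.
# A real 3DGS scene sums millions of anisotropic Gaussians.
def density(x, y, z, sigma=1.0):
    r2 = x * x + y * y + z * z
    return math.exp(-r2 / (2.0 * sigma ** 2))

def find_surface_along_ray(threshold=0.5, start=5.0, steps=1000):
    """March along the +x axis toward the splat and locate where the
    density first crosses the threshold -- a 1D version of isosurface
    extraction (full mesh extraction does this over a 3D grid)."""
    prev_x, prev_d = start, density(start, 0, 0)
    for i in range(1, steps + 1):
        x = start * (1 - i / steps)
        d = density(x, 0, 0)
        if prev_d < threshold <= d:
            # Interpolate between the two samples, as marching cubes does.
            t = (threshold - prev_d) / (d - prev_d)
            return prev_x + t * (x - prev_x)
        prev_x, prev_d = x, d
    return None

surface_x = find_surface_along_ray()  # where density first reaches 0.5
```

The analytic crossing for this field is at x = sqrt(2 ln 2) ≈ 1.177, which the marching loop recovers to within its step size.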

The resulting output provides a clean polygon structure with accurate surface normals and UV-mapped textures. This allows creators to bake complex lighting and color details directly into standard texture maps (such as diffuse or base color maps).

Comparison between traditional photogrammetry (left) and 3DGS to Mesh reconstruction (right) on a low-texture object.

The Standard 3DGS to Mesh Pipeline:

  1. Capture: Record a video or take photos of the object from all angles.

  2. Reconstruction: The software processes the images to create a volumetric 3D Gaussian Splatting field.

  3. Mesh Extraction: The algorithm estimates the surface density and generates a raw polygon topology.

  4. Texture Projection: The original radiance and color data are baked onto the mesh's UV map.

  5. Export: The asset is exported as a standard 3D file (.OBJ/.GLTF) for immediate use in your engine of choice.

To see the complete process in action, you can watch the full 3DGS to Mesh workflow video.

3DGS to Mesh model opened in standard 3D software after export (.OBJ/.GLTF).

Research Collaboration and Open-Source Contribution

KIRI Engine was among the first applications to bring 3D Gaussian Splatting-to-Mesh workflows into a production environment, making this emerging technology accessible to creators and developers.

Real-world 3D Gaussian Splatting capture during KIRI Engine 3DGS to Mesh research collaboration.

This capability was developed through close collaboration between the KIRI Engine engineering team and researcher Chongjie Ye and his team at The Chinese University of Hong Kong, Shenzhen, who are actively advancing mesh reconstruction methods for Gaussian Splatting.

As part of this collaboration, the mesh-generation algorithm developed by Chongjie Ye and his team has been open-sourced, with support from KIRI Engine to help bring the technology into real-world production workflows. This allows developers and researchers to experiment with Gaussian-based mesh reconstruction and integrate it into their own pipelines.

Explore the open-source Gauss Studio Gaussian Splatting Mesh repository on GitHub

Expert Insight: How the Mesh Is Actually Generated

To better understand the underlying reconstruction approach, KIRI Engine CEO Jack sat down with researcher Chongjie Ye to discuss the challenges of generating usable geometry from volumetric representations.

KIRI Engine CEO Jack discussing 3DGS to Mesh reconstruction methods with researcher Chongjie Ye at CUHK Shenzhen.

👉 Watch the full technical discussion here:

3DGS to Mesh – Technical Discussion with Chongjie Ye (YouTube)

As explained in the discussion, one key advancement is that mesh extraction is not simply performed as a post-processing step.

Instead, mesh generation can be integrated directly into the Gaussian Splatting optimization process itself, allowing the system to better estimate surface structure from the volumetric representation.

Example demonstrating how integrated mesh generation reduces floating artifacts compared to photogrammetry.

As discussed by Chongjie Ye, this approach incorporates techniques such as:

  • Reflection handling, which helps reduce floating artifacts on reflective surfaces.

  • Surface normal prediction, which improves alignment between Gaussian representations and physical object boundaries.

These advances help make Gaussian-based mesh reconstruction significantly more practical for real-world 3D asset creation.
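To make the normal-prediction idea concrete, one classical way to obtain a surface normal from any volumetric density field is the normalized negative gradient of the density. The sketch below uses a single toy Gaussian and central finite differences; the learned normal prediction discussed above is considerably more sophisticated, so treat this purely as an illustration of where normals come from in a volume.

```python
import math

def density(x, y, z, sigma=1.0):
    # Toy field: one isotropic Gaussian splat at the origin.
    return math.exp(-(x * x + y * y + z * z) / (2.0 * sigma ** 2))

def surface_normal(x, y, z, h=1e-4):
    """Estimate a surface normal as the normalized negative gradient of
    the density field, computed with central finite differences."""
    gx = (density(x + h, y, z) - density(x - h, y, z)) / (2 * h)
    gy = (density(x, y + h, z) - density(x, y - h, z)) / (2 * h)
    gz = (density(x, y, z + h) - density(x, y, z - h)) / (2 * h)
    norm = math.sqrt(gx * gx + gy * gy + gz * gz)
    # Density decreases outward, so the negative gradient points out of the surface.
    return (-gx / norm, -gy / norm, -gz / norm)

n = surface_normal(1.5, 0.0, 0.0)  # point on the +x side of the splat
```

For a point directly along the +x axis, symmetry makes the normal point straight outward along +x, which is a quick sanity check on the sign convention.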

Quick Comparison: Photogrammetry vs. 3DGS to Mesh

| Feature | Traditional Photogrammetry | 3DGS to Mesh |
| --- | --- | --- |
| Best Used For | Matte, highly textured, diffuse surfaces requiring high geometric accuracy | Challenging surfaces such as reflective, glossy, transparent, or low-texture objects |
| Core Technology | Multi-view stereo reconstruction using feature matching and camera pose estimation | Radiance field optimization and density estimation from volumetric Gaussian representations |
| Handling Thin Structures | May struggle with thin or low-texture structures depending on feature visibility | Often captures thin structures more completely due to volumetric reconstruction |
| Output Format | Point cloud, polygon mesh, and texture maps | Gaussian splats converted to polygon mesh and texture maps |

Decision Guide: Which Method Should You Use?

Choosing the right scanning method depends on the material of your subject and the requirements of your 3D pipeline.

When to use Traditional Photogrammetry

  • The object has a highly detailed, matte, and non-reflective surface (e.g., stone, wood, rusted metal, historical ruins).

  • You are working with legacy scanning pipelines that only accept basic image-to-point-cloud data.

Comparison of traditional photogrammetry and 3DGS to Mesh on a matte, non-reflective surface.

When to use 3DGS to Mesh

  • The object is glossy, reflective, or transparent (e.g., vehicles, polished electronics, wet surfaces).

  • The subject lacks distinct texture details (e.g., smooth plastics, unpainted walls).

  • You need to capture complex, thin structures (like plant leaves or wireframes).

  • You need a highly photorealistic asset ready for game engines without the need for extensive manual retopology of broken reflective areas.

Comparison showing 3DGS to Mesh with reflection detection (left) versus traditional photogrammetry reconstruction (right).

When NOT to use 3DGS to Mesh

  • Extreme Micro-Geometry: If you need sub-millimeter geometric accuracy (such as scanning a highly worn coin or precise industrial parts for CAD measurement), photogrammetry or laser scanning (LiDAR) remains superior. 3DGS excels at visual photorealism, but its mesh surface estimation may smooth over extreme micro-details.

  • Completely Diffuse Landscapes: For massive outdoor environments with diffuse, non-reflective surfaces, both photogrammetry and 3DGS can produce excellent results. Photogrammetry often provides highly accurate mesh geometry with efficient processing for large datasets. 3DGS, on the other hand, typically delivers superior visual realism, especially in complex lighting conditions, but mesh extraction may require additional processing depending on the workflow.


Frequently Asked Questions (FAQ)

Does 3DGS to Mesh replace photogrammetry?

No. They are complementary technologies. Photogrammetry remains standard for heavily textured, matte environments, while 3DGS to Mesh provides a solution for difficult lighting, reflections, and featureless surfaces that traditional methods fail to reconstruct. For a deeper explanation, watch this video comparing 3DGS to Mesh and photogrammetry in real-world scenarios.

Comparison of photogrammetry and 3DGS to Mesh on a smooth, low-texture electronic device.

Can I edit 3DGS meshes in Blender or Maya?

Yes. Because the 3DGS to Mesh process converts volumetric data into a standard polygon format (such as .OBJ, .FBX, or .GLTF), you can import the model directly into Digital Content Creation (DCC) software like Blender, Maya, or ZBrush.

From there, you can perform standard workflows such as retopology, sculpting, UV editing, and rigging. The edited mesh can then be exported for use in game engines, rendering pipelines, or 3D printing.

3DGS to Mesh 2.0 model ready for editing in DCC software such as Blender or Maya.

Can I 3D print a model generated from 3DGS to Mesh?

Yes. The 3DGS to Mesh process generates a standard polygon mesh that can be exported to formats such as .STL and used for 3D printing.

Depending on the scan, you may need to clean up the mesh in software like Blender or Meshmixer to ensure it is watertight before printing.
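The watertight requirement has a precise meaning: in a closed triangle mesh, every edge is shared by exactly two faces. The short check below (my own sketch, not a substitute for repair tools like Blender or Meshmixer) counts edge occurrences to detect holes before printing.

```python
from collections import Counter

def is_watertight(faces):
    """A triangle mesh is watertight (a closed surface) only if every
    edge is shared by exactly two faces -- the basic test mesh-repair
    tools run before 3D printing."""
    edges = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edges[(min(u, v), max(u, v))] += 1  # count each undirected edge
    return all(count == 2 for count in edges.values())

# A tetrahedron (4 triangles) is closed; removing one face opens a hole.
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
closed = is_watertight(tetra)      # True
holed = is_watertight(tetra[:-1])  # False: three edges now border only one face
```

An edge counted once borders a hole; an edge counted three or more times indicates non-manifold geometry, which most slicers also reject.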

Can I use 3DGS meshes directly in game engines?

Yes. Once the Gaussian Splatting data is converted into a standard polygon mesh (such as an .OBJ or .GLTF file), it functions like any standard 3D model. It can be imported directly into game engines like Godot, Unreal Engine, and Unity.

Do 3DGS to Mesh exports include textures?

Yes. The 3DGS to Mesh pipeline projects the color and radiance data from the original Gaussian splats onto the generated geometry, creating standard UV maps and texture files alongside the 3D model.
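The projection step can be reduced to its simplest form: for each point on the mesh, look up the color of the nearest splat. Real texture baking writes into a UV-mapped image and blends many splats, so the nearest-neighbor lookup and toy data below are simplifying assumptions that only convey the core idea.

```python
def bake_vertex_colors(vertices, splats):
    """Assign each mesh vertex the color of the nearest Gaussian splat --
    a deliberately simplified stand-in for UV-space texture baking."""
    def dist2(p, q):
        # Squared Euclidean distance (no sqrt needed for comparisons).
        return sum((a - b) ** 2 for a, b in zip(p, q))
    colors = []
    for v in vertices:
        nearest = min(splats, key=lambda s: dist2(v, s["pos"]))
        colors.append(nearest["color"])
    return colors

splats = [
    {"pos": (0.0, 0.0, 0.0), "color": (255, 0, 0)},   # red splat
    {"pos": (10.0, 0.0, 0.0), "color": (0, 0, 255)},  # blue splat
]
vertex_colors = bake_vertex_colors([(1.0, 0.0, 0.0), (9.0, 0.0, 0.0)], splats)
# The first vertex sits nearest the red splat, the second nearest the blue one.
```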

Cleaner surface geometry from 3DGS to Mesh enables accurate UV mapping and texture projection.

Implementing 3DGS to Mesh in Your Workflow

The transition from volumetric data to usable geometry opens up new possibilities for digitizing real-world objects, especially those that are difficult to capture with traditional photogrammetry.

Once converted to mesh, these assets can be edited in tools like Blender or Maya, imported into game engines such as Unity, Unreal Engine, or Godot, or prepared for 3D printing.

For creators looking to experiment with this workflow, KIRI Engine provides a 3DGS to Mesh processing pipeline accessible via web browser and mobile devices.

Start creating your own models using the KIRI Engine 3DGS to Mesh pipeline.

KIRI Engine 3DGS to Mesh workflow accessible via web browser and mobile devices.