1. The Bottleneck of 3D Content Creation
In modern game production, 3D asset creation has become one of the biggest time sinks in the entire pipeline.
From environment modeling and character sculpting to texture baking and lighting optimization, the traditional geometry-based workflow is slow, expensive, and difficult to scale.
Over the past few years, photogrammetry and Neural Radiance Fields (NeRF) have emerged as the two main approaches to “reality capture.”
Photogrammetry reconstructs surfaces through feature-point matching, while NeRF learns the radiance field of a scene using neural networks.
![](/webresources/Blog/1762761589064.33.23.png)
Yet both have clear limitations:
Photogrammetry struggles with reflective or texture-poor surfaces like glass or metal.
![](/webresources/Blog/1762761966030.17.43.png)
NeRF requires lengthy training and slow inference, making it impractical for real-time editing or rendering.
![](/webresources/Blog/1762762035658.28.43.png)
3D Gaussian Splatting (3DGS) offers a new balance.
It reconstructs the world through light itself—retaining NeRF-level realism while achieving near real-time performance on modern GPUs.
![](/webresources/Blog/1762762208579.45.05.png)
2. The Core Principle: Rebuilding the World with Light
3DGS represents a scene as millions of 3D Gaussians, each carrying position, scale, orientation, color, and opacity.
When rendered, these Gaussians are projected and blended in screen space (splatting) to form a continuous light-field image.
This representation requires no topology or UV mapping, only light.
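The splatting step can be illustrated with a minimal sketch: for a single pixel, depth-sort the Gaussians that cover it and alpha-composite them front to back. This is a heavy simplification of the real tile-based GPU rasterizer, and all names and values below are illustrative:

```python
import numpy as np

def composite_splats(colors, alphas, depths):
    """Front-to-back alpha compositing of Gaussian splats covering one pixel.

    colors : (N, 3) RGB of each splat,
    alphas : (N,) opacity after the 2D Gaussian falloff is applied,
    depths : (N,) distance from the camera.
    """
    order = np.argsort(depths)            # sort front-to-back
    pixel = np.zeros(3)
    transmittance = 1.0                   # how much light still passes through
    for i in order:
        pixel += transmittance * alphas[i] * colors[i]
        transmittance *= (1.0 - alphas[i])
        if transmittance < 1e-4:          # early termination, as real renderers do
            break
    return pixel

# Two splats: a mostly opaque red one in front of an opaque green one.
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
alphas = np.array([0.8, 1.0])
depths = np.array([1.0, 2.0])
print(composite_splats(colors, alphas, depths))  # red dominates: [0.8, 0.2, 0.0]
```

Because each pixel only accumulates a sorted, weighted sum, the whole process maps cleanly onto GPU parallelism, which is where the real-time performance comes from.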
Its light-based formulation provides three critical advantages:
Topology-Free – No need for retopology or UV unwrapping.
Photorealistic Lighting – Retains the captured scene’s global illumination and reflections.
Real-Time Rendering – Cost scales linearly with point count and parallelizes fully on the GPU.
In practice, 3DGS sits between photogrammetry and NeRF—achieving realistic fidelity without sacrificing real-time performance inside game engines.
![](/webresources/Blog/1762762329595.33.14.png)
3. How 3DGS Is Used in Game Development
1️⃣ Photoreal Scene Reconstruction
3DGS allows developers to generate interactive environments or levels directly from real photos or scans.
A single video or photo set can become a playable level, enhanced with traditional polygonal assets such as characters or props.
It’s ideal for rapid prototyping, VR experiences, and cinematic backdrops that demand realism without long modeling cycles.
2️⃣ Rapid Asset Generation
By capturing real-world locations, developers can quickly create usable in-game assets—buildings, vegetation, props, and terrain—at a fraction of the traditional cost.
This shortens environmental-art production while preserving visual authenticity.
![](/webresources/Blog/1762763175734.48.28.png)
3️⃣ AR / VR and Immersive Worlds
With KIRI’s Mesh-Inclusive 3DGS, light-field data can coexist with traditional meshes.
Captured environments are not just viewable—they’re interactive and can include collisions and physics, making them ideal for mixed-reality and XR applications.
![](/webresources/Blog/1762763093348.07.45.png)
4. The Engine Integration Challenge
Despite its strengths, 3DGS faces a structural challenge:
Modern game engines were built around meshes, not light fields.
Core systems in Unreal, Unity, or Godot rely on vertices and faces:
• Material systems depend on normals and UVs.
• Animation depends on bones and skinning.
• Collision and physics depend on geometric boundaries.
![](/webresources/Blog/1762762473270.14.08.png)
3DGS, by contrast, is a cloud of light particles with no inherent topology.
To integrate it into production, an intermediate system must bridge light-field data with mesh-based pipelines.
5. KIRI’s Solution — Mesh-Inclusive 3DGS
KIRI Engine introduces a fully engineered solution: Mesh-Inclusive 3DGS.
This approach generates both Gaussian points and a synchronized mesh version within the same coordinate space, enabling light and geometry to coexist.
![](/webresources/Blog/1762762525633.45.10.png)
The principle is simple:
Gaussians handle lighting; Meshes handle interaction.
This hybrid architecture lets developers:
Render realistic lighting and reflections using 3DGS.
Use the generated mesh for collisions, physics, and animation.
Achieve real-time hybrid rendering in Blender or Unreal Engine.
The result: 3DGS becomes not just a visual layer but an editable, interactive asset class.
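To illustrate the “same coordinate space” idea, here is a hypothetical container (not KIRI’s actual file format) that keeps the splat cloud and a proxy mesh in sync under a single transform, so the renderer and the physics system always agree on where the asset is:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MeshInclusiveAsset:
    """Hypothetical pairing of a splat cloud with a proxy mesh.

    Both live in the same coordinate space: the splats feed the
    renderer, the mesh feeds collision and physics.
    """
    splat_positions: np.ndarray   # (N, 3) Gaussian centers
    splat_opacities: np.ndarray   # (N,)
    mesh_vertices: np.ndarray     # (V, 3) collision/physics proxy
    mesh_faces: np.ndarray        # (F, 3) vertex indices

    def transform(self, matrix: np.ndarray) -> None:
        """Apply one 4x4 transform to both representations, keeping them in sync."""
        def apply(pts):
            homo = np.hstack([pts, np.ones((len(pts), 1))])
            return (homo @ matrix.T)[:, :3]
        self.splat_positions = apply(self.splat_positions)
        self.mesh_vertices = apply(self.mesh_vertices)

asset = MeshInclusiveAsset(
    splat_positions=np.zeros((2, 3)),
    splat_opacities=np.ones(2),
    mesh_vertices=np.zeros((3, 3)),
    mesh_faces=np.array([[0, 1, 2]]),
)
move = np.eye(4)
move[:3, 3] = [1.0, 0.0, 0.0]   # translate the whole asset +1 on X
asset.transform(move)
```

The design point is that there is only one transform path: gameplay code moves the asset once, and light field and collision geometry can never drift apart.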
![](/webresources/Blog/1762762573793.05.45.png)
6. Toolchain & Production-Ready Workflow
To make 3DGS practical in everyday game production, KIRI built a full cross-platform pipeline—from capture to export—bridging mobile, cloud, and desktop tools.
1️⃣ Crop Modes
In KIRI Engine’s 3DGS workflow, the mobile app is more than a capture tool—it’s the first stage of semantic cleanup and asset refinement.
Users can shoot, reconstruct, crop, and upload models entirely on mobile, performing boundary optimization before cloud upload.
Three core crop modes streamline on-device editing:
Sphere – Quickly isolate a subject and remove background points.
Plane – Cut away ground, walls, or table surfaces with a single swipe.
Brush – Manually refine edges and clean up stray points.
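Conceptually, these crop modes are simple point-set filters over the Gaussian centers. A minimal sketch with hypothetical helpers (not the app’s actual code):

```python
import numpy as np

def crop_sphere(points, center, radius):
    """Sphere mode: keep only points inside a sphere around the subject."""
    return points[np.linalg.norm(points - center, axis=1) <= radius]

def crop_plane(points, normal, offset):
    """Plane mode: drop everything below a plane (e.g. the ground).

    Keeps points where n . p + offset >= 0.
    """
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    return points[points @ n + offset >= 0.0]

cloud = np.array([[0, 0, 0], [0, 0.5, 0], [0, -1, 0], [5, 5, 5]], dtype=float)
subject = crop_sphere(cloud, center=np.zeros(3), radius=2.0)   # drops the far stray point
cleaned = crop_plane(subject, normal=[0, 1, 0], offset=0.5)    # drops points below y = -0.5
```

The brush mode would work the same way, just with a user-painted selection mask instead of an analytic boundary.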
![](/webresources/Blog/1762762674148.16.48.png)
Real-time previews (WYSIWYG) and GPU-accelerated spatial partitioning deliver low-latency editing.
Lightweight uploads reduce cloud workload by 30–40%, turning 3DGS from an R&D prototype into a real-time, production-ready asset flow.
2️⃣ Cloud Reconstruction & Export
Once uploaded, KIRI’s cloud engine performs high-precision Gaussian reconstruction and optimization.
It transforms multi-view imagery and cropped point data into a photometrically consistent, geometrically stable light field.
Supported export formats include:
.splats – Full radiance field data.
.ply / .obj – Compatible with Blender, Unreal, and Unity.
Mesh-Inclusive packages – Containing both light field and mesh data for physics and collision.
Optimized cloud processing compresses model size by 50–60%, and the reconstructed data can be sent directly into the KIRI Blender add-on or external tools.
The result is a seamless loop of mobile capture → cloud optimization → desktop editing, enabling instant, production-ready assets.
3️⃣ Unified Lighting Response System
In the latest version of the 3DGS pipeline, KIRI has implemented a Unified Lighting Response System—fully compatible with PBR rendering.
![](/webresources/Blog/1762762826242.19.27.png)
Gaussian points now share the same lighting, shadow, and reflection paths as standard materials in the engine, unifying light fields and meshes under a single shading logic.
This allows:
Real-time response to Directional, Point, and Spot lights.
Full reflection and shadow interaction with mesh materials.
Physically consistent results without any custom shaders.
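As a rough sketch of what “sharing the same lighting path” means for a directional light, each Gaussian can evaluate the same Lambertian N·L term a mesh material would. This is an illustration of the idea, not KIRI’s actual shading code:

```python
import numpy as np

def shade_gaussians(base_colors, normals, light_dir, light_color, ambient=0.1):
    """Lambertian response of Gaussian points to one directional light.

    base_colors : (N, 3) albedo-like color of each Gaussian,
    normals     : (N, 3) per-Gaussian orientation (the shortest axis of an
                  anisotropic Gaussian works as a normal proxy),
    light_dir   : direction pointing *toward* the light.
    """
    L = np.asarray(light_dir, dtype=float)
    L /= np.linalg.norm(L)
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    diffuse = np.clip(n @ L, 0.0, 1.0)[:, None]        # the N.L term
    return base_colors * (ambient + diffuse * np.asarray(light_color))

colors = np.array([[1.0, 1.0, 1.0]])
normals = np.array([[0.0, 1.0, 0.0]])
lit = shade_gaussians(colors, normals, light_dir=[0, 1, 0], light_color=[1, 1, 1])
# facing the light head-on: ambient 0.1 + diffuse 1.0 per channel
```

Because the same light parameters drive both splats and meshes, a light moved in the editor changes both representations consistently.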
3DGS assets are no longer a separate render layer—they are treated as first-class rendering primitives within the pipeline, marking the shift from visual demo to true engine-level content format.
4️⃣ Blender Add-on & Real-Time Editing
KIRI’s Blender 3DGS Add-on v4 turns Blender into a complete Gaussian editing and rendering suite.
It supports real-time viewport preview, HQ/LQ render switching, and seamless compositing with mesh assets.
Powered by a Proxy-based real-time rendering system, users can edit in Edit Mode for fast cleanup, then switch to Render Mode for instant previews.
Low-quality (LQ) mode ensures responsive workflow; high-quality (HQ) mode delivers clean, production-grade output.
With Combine with Native Render, 3DGS layers automatically merge with mesh renders—enabling a smooth edit → render → composite pipeline.
Creators can import, refine, color, and render 3DGS assets entirely within Blender—achieving a truly “edit-to-final” experience.
![](/webresources/Blog/1762762871079.jpg)
7. Industry Integration & Future Outlook
Today, 3DGS functions as part of a hybrid pipeline—light fields for visual fidelity, meshes for logic and physics.
As standardization progresses, future engines may adopt 3DGS as a native render primitive, alongside Meshes, Volumes, and Particles.
The next evolution—4D Gaussian Splatting—extends this concept into time.
By adding temporal attributes to each Gaussian, 4DGS captures dynamic lighting and deformation, paving the way for XR, virtual production, and AI-generated scenes built directly on real-world motion.
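The core idea can be sketched in a few lines: attach a temporal attribute to each Gaussian and evaluate its center as a function of time. A linear velocity is the simplest possible choice and purely illustrative; published 4DGS methods learn far richer per-Gaussian deformation fields:

```python
import numpy as np

# Each Gaussian gets a static center plus a temporal attribute (here: velocity).
centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
velocities = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 0.0]])

def centers_at(t):
    """Evaluate each Gaussian's center at time t (linear motion model)."""
    return centers + velocities * t

print(centers_at(0.5))   # the first Gaussian has risen to y = 0.5
```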
8. Conclusion — Turning Light into a Creative Language
3D Gaussian Splatting isn’t here to replace modeling—it’s here to expand it.
It shifts creators from sculpting geometry to capturing light, turning reality itself into editable, game-ready material.
Through its Mesh-Inclusive system, engine plug-ins, and mobile-to-cloud workflow, KIRI Engine has transformed 3DGS from a research curiosity into a scalable production technology, empowering developers to build worlds faster—and with more freedom—than ever before.
When both light and time become part of the creative palette,
we’re not just modeling scenes anymore—
we’re reconstructing imagination itself.
✳️ Appendix — KIRI at GDC 2024
At GDC 2024 (San Francisco), KIRI Engine CEO Jack delivered a session titled
“Using 3D Gaussian Splatting in Game Development — What the Internet Doesn’t Tell You.”
![](/webresources/Blog/1762762931304.21.42.png)
Learn More: https://www.youtube.com/watch?v=zTwHmxfKvOs&t=894s
The talk presented, for the first time, a full view of 3DGS in live game pipelines, including:
The architecture of Mesh-Inclusive 3DGS;
Real-time rendering inside Unreal Engine;
Integrated mobile-to-cloud workflow;
Early research on 4DGS (dynamic light fields).
This session marked the moment 3D Gaussian Splatting was formally recognized as a viable, real-time production technology for modern games.