Gaussian Splatting

This project implements and compares Gaussian Splatting against photogrammetry and NeRFs in Unreal Engine 5, with the goal of creating real-time, hyper-realistic radiance-field 3D renders for object and environment building.
Project Overview
As technology progresses, there is a growing need for efficient, hyper-detailed 3D models. The aim of this research is to find the most efficient way to create these renders while comparing each method's viability for building environments, objects, and a range of other comparable workflows. The research will also examine how each method affects hardware performance, both during creation and during viewing: frame rate, render time, and GPU/CPU temperatures will be measured (a minimal measurement sketch is shown below). A section of this research also aims to find innovative applications for these technologies, for example in mapping: delivering efficient, hyper-realistic street views that are true 3D models rather than 360° images.
w/ Prof. Nick Heitzman
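
As a concrete starting point for the hardware measurements mentioned above, the sketch below polls CPU load and GPU temperature/utilization at a fixed interval. It assumes an NVIDIA GPU and the `pynvml` and `psutil` Python packages; the sampling interval and output format are placeholders, not a finalized benchmarking protocol.

```python
import time
import psutil   # CPU utilization
import pynvml   # NVIDIA Management Library bindings (assumes an NVIDIA GPU)

def log_hardware_metrics(interval_s: float = 1.0, samples: int = 10) -> None:
    """Print CPU load, GPU utilization, and GPU temperature every interval_s seconds."""
    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
    try:
        for _ in range(samples):
            cpu_pct = psutil.cpu_percent(interval=None)
            gpu_util = pynvml.nvmlDeviceGetUtilizationRates(gpu).gpu
            gpu_temp = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
            print(f"CPU {cpu_pct:5.1f}% | GPU {gpu_util:3d}% | GPU temp {gpu_temp}C")
            time.sleep(interval_s)
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    log_hardware_metrics()
```

Frame rate itself would be captured inside Unreal Engine (for example with its built-in stat commands) rather than from a script like this.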
Gaussian splatting is a technique for visualizing complex 3D scenes. The process begins with a point cloud produced by structure from motion (SfM): given a sequence of overlapping images, SfM recovers the camera poses and a sparse point cloud of the scene. Because those image sequences come from the real world, they are what give the render its hyper-realistic quality. At each point, an ellipsoidal primitive is placed with variable position, size, specularity, transparency, and other attributes. These ellipsoids are the Gaussian splats.

Each splat also carries spherical-harmonic coefficients. The reference images capture the scene from multiple angles, and those view-dependent observations lend themselves to spherical harmonics, which let a splat look different when viewed from different directions.

The optimization of this technique comes from variable density that depends on the location and complexity of objects. The sky, for instance, needs only a low density of splats, while an object such as a tree needs a high count and therefore many Gaussians.

This technology is the basis of the research, but it is not the only option for creating hyper-realistic 3D objects; photogrammetry and NeRFs work in fundamentally different ways to create the same effect. Photogrammetry implements the structure-from-motion idea discussed earlier: once the camera positions and matched points are combined, a 3D mesh can be reconstructed from the point locations and then textured to create a hyper-realistic model.

NeRFs, or neural radiance fields, are neural networks that use a multilayer perceptron rather than structure from motion. Once trained, these systems can be applied in different scenarios. This form of 2D-to-3D modeling marches rays through the scene for each pixel and builds up a 3D shape from comparisons across the 2D input images.

Minimal code sketches of the SfM step, the spherical-harmonic color model, photogrammetric meshing, and NeRF ray rendering follow below.
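
To ground the structure-from-motion step, the sketch below drives COLMAP, one widely used open-source SfM tool, from Python to turn a folder of images into camera poses and a sparse point cloud. COLMAP is an assumption here (the project could use any SfM pipeline), and the paths are placeholders.

```python
import subprocess
from pathlib import Path

def run_sfm(image_dir: str, workspace: str) -> None:
    """Run COLMAP's standard sparse-reconstruction pipeline:
    feature extraction -> feature matching -> incremental mapping."""
    ws = Path(workspace)
    ws.mkdir(parents=True, exist_ok=True)
    db = ws / "database.db"
    sparse = ws / "sparse"
    sparse.mkdir(exist_ok=True)

    # Detect SIFT features in every input image.
    subprocess.run(["colmap", "feature_extractor",
                    "--database_path", str(db),
                    "--image_path", image_dir], check=True)
    # Match features between all image pairs.
    subprocess.run(["colmap", "exhaustive_matcher",
                    "--database_path", str(db)], check=True)
    # Recover camera poses and a sparse point cloud.
    subprocess.run(["colmap", "mapper",
                    "--database_path", str(db),
                    "--image_path", image_dir,
                    "--output_path", str(sparse)], check=True)

run_sfm("images/", "colmap_out/")  # hypothetical input/output folders
```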
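The view-dependent color that spherical harmonics provide can be made concrete with a low-order real SH expansion. The sketch below evaluates a degree-1 expansion for a single splat given a viewing direction; the coefficient layout and number of SH bands are illustrative assumptions, not the exact parameterization of any particular implementation.

```python
import numpy as np

# Real spherical-harmonic basis constants for bands 0 and 1.
SH_C0 = 0.28209479177387814   # Y_0^0
SH_C1 = 0.4886025119029199    # coefficient of the three linear band-1 terms

def sh_color(coeffs: np.ndarray, view_dir: np.ndarray) -> np.ndarray:
    """Evaluate a degree-1 SH expansion of RGB color.

    coeffs   -- (4, 3) array: one RGB coefficient per basis function
    view_dir -- (3,) vector from the splat toward the camera
    """
    x, y, z = view_dir / np.linalg.norm(view_dir)
    basis = np.array([SH_C0, SH_C1 * y, SH_C1 * z, SH_C1 * x])
    # +0.5 recenters around gray, a common convention; clip to valid RGB.
    return np.clip(basis @ coeffs + 0.5, 0.0, 1.0)

# A single hypothetical splat: mostly green, brighter when seen from +x.
coeffs = np.zeros((4, 3))
coeffs[0] = [0.1, 0.6, 0.1]   # band-0 (view-independent) color
coeffs[3] = [0.3, 0.3, 0.3]   # band-1 term varying with the x view component
print(sh_color(coeffs, np.array([1.0, 0.0, 0.0])))   # viewed from +x
print(sh_color(coeffs, np.array([-1.0, 0.0, 0.0])))  # viewed from -x
```

The same splat returns two different colors for the two opposite viewing directions, which is exactly the view-dependent behavior described above.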
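For the photogrammetry path, the meshing step (point cloud to surface) can be sketched with Open3D's Poisson surface reconstruction. Open3D and the input filename are assumptions for illustration; commercial photogrammetry tools perform an equivalent step internally, and texturing is omitted here.

```python
import open3d as o3d

# A dense point cloud, e.g. exported from a multi-view-stereo step (hypothetical file).
pcd = o3d.io.read_point_cloud("dense_points.ply")
pcd.estimate_normals()  # Poisson reconstruction needs oriented normals

# Fit a watertight triangle mesh to the points; higher depth = finer detail.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("mesh.ply", mesh)
```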
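Finally, the ray-based rendering that NeRFs use can be illustrated with the standard volume-rendering quadrature: sample points along a ray, query the network for density and color, and alpha-composite front to back. The sketch below stands in for the network with a placeholder function; in a real NeRF, `query_field` would be the trained MLP.

```python
import numpy as np

def query_field(points: np.ndarray):
    """Stand-in for a trained NeRF MLP: returns (density, rgb) per sample.
    Here: a soft sphere of radius 1 at the origin, colored by position."""
    r = np.linalg.norm(points, axis=-1)
    density = 10.0 * np.maximum(0.0, 1.0 - r)       # (N,)
    rgb = np.clip(points * 0.5 + 0.5, 0.0, 1.0)     # (N, 3)
    return density, rgb

def render_ray(origin, direction, near=0.5, far=4.0, n_samples=64):
    """Composite color along one ray with the standard NeRF quadrature:
    alpha_i = 1 - exp(-sigma_i * delta_i), weighted by accumulated transmittance."""
    t = np.linspace(near, far, n_samples)
    delta = np.append(np.diff(t), 1e10)                   # spacing between samples
    points = origin + t[:, None] * direction              # (N, 3) sample positions
    sigma, rgb = query_field(points)
    alpha = 1.0 - np.exp(-sigma * delta)                  # per-sample opacity
    trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1]))  # transmittance T_i
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)           # final RGB for this pixel

print(render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))
```

Repeating this per pixel, with the density and color coming from the trained network, is how a NeRF turns comparisons of 2D images into a renderable 3D shape.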