Orisphere aims to be an editor and embeddable renderer for LÖVE, for the algorithmic generation of 3D shapes, materials, animations and effects using analytic Signed Distance Functions (2D and 3D) and related techniques.
The primary purpose of this project is to develop a simple methodology that I can use to easily experiment with 3D rendering and explore simple, complex, weird, mysterious, grim or eerie aesthetics.
Beyond the interactive rendering of the editor, I have three major use cases for the project (in order of ascending priority).
The first is the creation of what I call audiomations: non-interactive audiovisual experiences where Orisphere would produce the visuals (models, maps, animations, etc.) and Oriquartz the audio (music, atmospheres, sound effects, etc.). This is similar to what people create on Shadertoy, although Shadertoy places less emphasis on audio and non-interactivity. Even if the end goal is a non-interactive experience, which allows pre-rendering to a high-quality video, the interactive rendering of the editor remains very important for the creative process (e.g. the ability to quickly iterate, explore and experiment).
The second is the Playground, a game built on this project that extends the same editing methodology for gameplay purposes. Each level would be an interactively rendered world SDF, making this case much more performance-sensitive than the first, but the hope is that the editor will provide the tools to design a scene intelligently, keeping the number of primitives, operations and overall cost low enough for a real-time experience (in addition to rendering quality options).
The third is the creation of isometric games. Thanks to the static isometric view, tiles and effects can be rendered and baked at runtime using Orisphere (e.g. when loading the game, when loading a map, or each frame for more dynamic elements). As in the first case, this allows great creative freedom and visual quality while remaining an interactive experience with good performance.
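To make the underlying idea concrete, here is a minimal CPU-side sketch of an SDF and sphere tracing, the raymarching technique these use cases rely on. This is purely illustrative (Orisphere would generate shader code rather than evaluate SDFs like this); all names are hypothetical.

```python
import math

def sd_sphere(p, r):
    """Signed distance from point p to a sphere of radius r at the origin."""
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - r

def raymarch(origin, direction, sdf, max_steps=128, eps=1e-4, max_t=100.0):
    """Sphere tracing: advance along the ray by the distance to the scene."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sdf(p)
        if d < eps:
            return t          # hit: d is (nearly) zero at the surface
        t += d                # safe step: nothing is closer than d
        if t > max_t:
            break
    return None               # miss

# A ray starting at z = -3 aimed at a unit sphere hits at t = 2.
t = raymarch((0.0, 0.0, -3.0), (0.0, 0.0, 1.0), lambda p: sd_sphere(p, 1.0))
```

Because the scene is a single distance function, the same `sdf` callable can later serve normals, shadows and ambient occlusion, which is what motivates the "global SDFs" constraint below.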
The core ideas and constraints for the project are as follows:
- Analytic / algorithmic: The editing process is non-destructive and uses abstract constructs. No concrete external data should be involved in producing the shapes, materials or final rendering. For simplicity and quality, the process should also avoid baking as much as possible.
- Global SDFs: The editor generates code describing the full scene, which allows the use of elegant SDF rendering techniques such as "fake" ambient occlusion or soft shadows.
- Scene graph: The editor works on a scene graph and other structures to manage objects and related properties (e.g. animations, scene documentation). The entire scene or "project", or parts of it, can be exported to and imported from Lua objects (as a format).
- Abstractions: To simplify the editing process and to be more amenable to automatic optimizations and other transformations, editing should not involve writing GLSL directly. For low-level computations, which are especially useful to control various properties from global parameters (e.g. animations), a simple visual dataflow language can be created. This also makes the output (a model, or a level in the Playground case) more portable and safer to share.
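The constraints above can be sketched together: a scene graph whose nodes are abstract, non-destructive constructors that compile down to one global distance function. This is a hypothetical Python illustration (the real editor would emit shader code and export the graph as Lua tables); the smooth minimum is the well-known polynomial blend popularized by Inigo Quilez.

```python
import math

# Each constructor returns a distance function; the "scene graph" is just
# the nesting of these constructors, so it stays editable and exportable.
def sphere(r):
    return lambda p: math.dist(p, (0.0, 0.0, 0.0)) - r

def translate(node, offset):
    return lambda p: node(tuple(p[i] - offset[i] for i in range(3)))

def union(a, b):
    return lambda p: min(a(p), b(p))

def smooth_union(a, b, k):
    # Polynomial smooth minimum: blends the two surfaces over a band of width k.
    def f(p):
        da, db = a(p), b(p)
        h = max(k - abs(da - db), 0.0) / k
        return min(da, db) - h * h * k * 0.25
    return f

# Two overlapping unit spheres, smoothly merged into one global SDF.
scene = smooth_union(
    translate(sphere(1.0), (-0.5, 0.0, 0.0)),
    translate(sphere(1.0), (0.5, 0.0, 0.0)),
    k=0.3,
)
d = scene((0.0, 0.0, 0.0))  # inside both spheres, so the distance is negative
```

Exporting such a graph to a Lua table would preserve exactly this structure, node by node, which is what keeps the format portable and safe to share.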
Considered rendering techniques:
- Physically based rendering, which should be heavily based on the work done on the Filament renderer. Materials with normal, base color, roughness, metallic, reflectance and emission components.
- Direct lighting (directional, point, spot, etc.) with soft shadows. For punctual lights, also try an "analytic" approach, that is, use the same SDF editing capabilities to represent non-overlapping lights (domain repetition of lights).
- Generation of environment maps (cubemaps) for "sky" rendering and ambient lighting using IBL¹ and "fake" ambient occlusion. More precisely, the generation of stylized spherical environment light textures using noise, gradients, etc.
- Global illumination with Radiance Cascades (in world space, exploiting the SDF raymarching; see this video by Alexander Sannikov). Maybe there is also a way to handle rough reflections and refraction with this or a similar technique.
- Volumetric lighting / shadows.
- Volumetric glow (see Volumetric Raymarching by Xor).
- Bloom.
- Dithering to increase perceptual color depth.
- Super sampling anti-aliasing.
- Generate multiple functions, especially for interactive rendering. One could compute only the distance, to find the intersection point, and could also be used for normals, shadows and ambient occlusion. Another could compute only the materials and would be evaluated afterwards, without raymarching.
- SDF bounding volumes.
- Use of 2D SDFs to make symbols and patterns, for modeling (extrusion) and texturing. E.g. for a coin, cape, flag, magic circle, etc.
- Generation of "physics" animations (no physics engine involved). E.g. the wave motion of a flag.
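Of the techniques above, soft shadows show particularly well why a global SDF is valuable. The classic trick (after Inigo Quilez) marches from a surface point toward the light and tracks the minimum of k·d/t, where d is the scene distance and t the distance travelled: rays that pass close to geometry get a darkened penumbra for free. A hedged CPU-side sketch, with illustrative names and parameters:

```python
import math

def soft_shadow(p, light_dir, sdf, k=8.0, t_min=0.02, t_max=10.0):
    """March from p toward the light; return 1.0 for lit, 0.0 for occluded,
    and an in-between penumbra factor when the ray grazes geometry."""
    res = 1.0
    t = t_min
    while t < t_max:
        d = sdf(tuple(p[i] + t * light_dir[i] for i in range(3)))
        if d < 1e-4:
            return 0.0                # the shadow ray hit the scene
        res = min(res, k * d / t)     # closest approach, scaled by distance
        t += d
    return res

# Scene: a unit sphere at the origin, light coming from straight above.
sdf = lambda q: math.dist(q, (0.0, 0.0, 0.0)) - 1.0
lit = soft_shadow((5.0, -1.0, 0.0), (0.0, 1.0, 0.0), sdf)   # far to the side
dark = soft_shadow((0.0, -3.0, 0.0), (0.0, 1.0, 0.0), sdf)  # under the sphere
```

The same shadow-ray march reuses the one global distance function, so no shadow maps or baked data are involved, in line with the project's analytic constraint.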
The editor will be under the GPL and the embeddable renderer under the MIT license.
The work of Inigo Quilez and the Shadertoy community is an important learning resource for this project.
¹ Image Based Lighting