Blender to UE5: The Complete Export Pipeline



heavy Blender geometry vs. exported Unreal render. image: Sarah Page



You modeled something in Blender. You exported it. It’s 100× too big, the normals are wrong, and there are phantom bones called “hand_end” cluttering your skeleton. We’ve all been there.

The Blender-to-UE5 FBX pipeline works well once you know the exact settings — but the defaults fight you, and UE 5.4+ made things worse by replacing the legacy FBX importer with the Interchange Framework. This guide documents every critical setting for Blender’s FBX exporter and UE 5.6’s Interchange import dialog, covers static and skeletal mesh workflows, explains what transfers for materials, and walks through the most common import failures.

This guide targets Blender 4.x and Unreal Engine 5.6. Earlier UE5 versions use a different import dialog (the legacy FBX importer) — the Blender-side settings still apply but the UE5 import options will look different.

Prerequisites: Basic Blender and UE5 familiarity. We’ll be specific about where every setting lives.

Why scale breaks and how to fix it before you export


The single most common Blender-to-UE5 problem is scale mismatch. UE5 uses centimeters internally — 1 Unreal Unit equals 1 cm. Blender defaults to meters, where 1 Blender Unit equals 1 meter. That’s a 100× difference, and it causes objects to import at the wrong size, skeletons to scale incorrectly, and physics assets to report bones as “too small.”

Where to check in Blender: Properties panel → Scene Properties (the cone/film strip icon) → Units section. You’ll see Unit System (default: Metric) and Unit Scale (default: 1.0).

You have two approaches, and which one you choose depends on what you’re exporting.

Approach A — Keep Unit Scale at 1.0 (best for static meshes). Leave Blender’s defaults alone. Model at real-world meter scale — a door is about 2.0 BU tall. The FBX exporter writes unit-conversion metadata, and UE5 applies the 100× conversion automatically on import.

Approach B — Set Unit Scale to 0.01 (best for skeletal meshes). This makes 1 BU equal 1 centimeter, eliminating the unit conversion at export. This avoids a persistent class of bugs where the skeleton gets written with incorrect scale metadata. The trade-off: Blender’s viewport grid becomes tiny and some add-ons behave unexpectedly at non-1.0 unit scales.
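The arithmetic behind the two approaches can be sketched in a few lines (`bu_to_unreal_units` is a hypothetical helper for illustration, not part of Blender's or Unreal's API):

```python
# A minimal sketch of the unit math: UE5 works in centimeters (1 UU = 1 cm),
# and Blender's Unit Scale maps 1 BU to unit_scale meters.
def bu_to_unreal_units(length_bu, unit_scale=1.0):
    """Convert a Blender length to Unreal units (centimeters)."""
    meters = length_bu * unit_scale
    return meters * 100.0

# Approach A: unit_scale = 1.0, a 2.0 BU door imports as 200 UU (2 m)
print(bu_to_unreal_units(2.0, 1.0))     # 200.0
# Approach B: unit_scale = 0.01, so the same door is modeled as 200 BU
print(bu_to_unreal_units(200.0, 0.01))  # 200.0
```

Either way the door ends up 200 Unreal units tall; the difference is whether the 100× factor lives in FBX metadata (Approach A) or in your modeling scale (Approach B).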

How to tell you got it wrong: Your mesh appears in UE5 at the right visual size, but the Transform section in the Details panel shows Scale values of (100, 100, 100) instead of (1, 1, 1). It looks fine, but collision, sockets, physics, and animation will all break because the transform hierarchy is polluted.

Blender FBX export settings: every checkbox that matters


Open File → Export → FBX (.fbx) in Blender. The exporter writes FBX 7.4 binary format — there’s no version dropdown to worry about. Here’s every setting that affects your UE5 import, organized by panel.

[SCREENSHOT: Blender’s FBX export dialog with the Transform panel expanded, showing the recommended settings. Caption: “Blender’s FBX export settings panel. Three settings here — Apply Scalings, Smoothing, and Add Leaf Bones — prevent about 80% of UE5 import headaches.”]

Transform panel

Scale: Leave at 1.0. Changing this directly scales all geometry and can break skeletons.

Apply Unit: Check this. It tells the exporter to factor in your scene’s Unit Scale when writing FBX metadata.

Apply Scalings: Set to FBX Units Scale. This is the most important dropdown in the entire dialog. It writes unit conversion to FBX metadata, letting UE5 handle the math. The default “All Local” also works for static meshes but can cause skeleton scale issues. Avoid “FBX All.”

Forward / Up: Leave at defaults: -Z Forward and Y Up. UE5 handles the coordinate conversion automatically.

Apply Transform: Leave unchecked for skeletal meshes — it’s labeled “EXPERIMENTAL” for a reason. It bakes the coordinate-system conversion directly into vertex data, which breaks armatures, skinning, and animations. For static meshes only, you can optionally enable it to get clean (0,0,0) rotation on import.



Include panel

Set Object Types to only what you need: check Mesh for static meshes, or Armature and Mesh for skeletal meshes. Uncheck Camera, Light, Empty, and Other unless you have a specific reason.

Check Selected Objects to export only your current selection rather than the entire scene.

Geometry panel

Smoothing: Set to Face. This prevents the “No smoothing group information was found” warning on import. Use “Normals Only” instead only if you need exact custom normals from a Data Transfer modifier — it triggers the warning but is technically more accurate.

Tangent Space: Check this. UE5 needs tangent and binormal vectors for correct normal map rendering.

Apply Modifiers: Check this. Applies all modifiers except Armature before export. Caveat: uncheck this if you need to preserve shape keys — you can’t have both applied modifiers and shape keys in the same FBX export.

Triangulate Faces: Leave unchecked. UE5 triangulates on import anyway.

Armature panel (skeletal meshes only)

Add Leaf Bones: Uncheck this. It’s enabled by default, and it’s the source of those phantom “hand_end,” “foot_end” bones that clutter your skeleton, break retargeting, and cause merge errors. You need to manually uncheck this on every single export unless you save a custom export preset (File → Export → FBX, set your settings, then click the + button next to the preset dropdown).

Primary Bone Axis / Secondary Bone Axis: Leave at defaults (Y / X). Blender bones align along local +Y while UE5’s Mannequin uses X-forward — this is cosmetic and doesn’t affect skinning or animation playback.

Only Deform Bones: Check this. It excludes IK controllers and helper bones that aren’t needed at runtime.

Animation panel

Bake Animation: Check for animated assets, uncheck for static meshes. Baking converts constraints, drivers, and IK into per-frame keyframes — required because UE5 can’t read Blender’s constraint systems.

Key All Bones: Check this. Forces at least one keyframe per bone, preventing UE5 from dropping “static” bones.

NLA Strips / All Actions: These control which animations export. “All Actions” exports every Action in the file as a separate take. “NLA Strips” exports each NLA strip as a take. To export a single specific animation, uncheck both and set the desired Action as active in the Action Editor.

Force Start/End Keying: Check this to prevent animation popping at clip boundaries.

Path Mode

Set to Copy and click the small box icon next to the dropdown to embed textures inside the FBX. This ensures texture files travel with the FBX rather than relying on file paths that break when you move things around.

Static mesh workflow: origins, collisions, and lightmap UVs


Apply transforms before export. Select your mesh, press Ctrl+A → All Transforms. This zeros out rotation and normalizes scale to (1,1,1). Unapplied transforms — especially non-uniform scale — cause distorted collision, broken normals, and physics bugs in UE5. Do this every time, for every mesh.

Set the origin point deliberately. Your object’s origin in Blender becomes its pivot point in UE5. For props, use Right-click → Set Origin → Origin to Geometry (centers the pivot) or Origin to 3D Cursor (for a floor-level pivot). For modular environment pieces, place the origin at a corner aligned to Blender’s world origin.

Custom collision meshes. UE5 recognizes collision meshes via strict naming prefixes. The collision mesh must be a separate object in Blender, exported in the same FBX:
  • UCX_MeshName — Convex hull collision (most common). For multiple pieces: UCX_MeshName_00, UCX_MeshName_01, etc.
  • UBX_MeshName — Box collision.
  • USP_MeshName — Sphere collision.
  • UCP_MeshName — Capsule collision.

The MeshName portion must exactly match the render mesh’s object name (case-sensitive). In UE 5.6’s Interchange dialog, expand Assets → Collision and check Import Collisions. If collision still isn’t recognized, try parenting UCX objects to the render mesh in Blender’s outliner (Ctrl+P → Object) before export — Interchange may require this hierarchy.
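Because the naming rules are strict and case-sensitive, a small pre-export check can catch mistakes before they cost you an import cycle. This is a hypothetical validator, not something Blender or UE provides:

```python
import re

# UE collision prefixes; an optional numeric suffix (_00, _01, ...) is allowed.
_COLLISION = re.compile(r"^(UCX|UBX|USP|UCP)_(?P<mesh>.+?)(?:_\d+)?$")

def collision_target(obj_name):
    """Return the render-mesh name a collision object points at, or None."""
    m = _COLLISION.match(obj_name)
    return m.group("mesh") if m else None

def check_collisions(object_names):
    """Report collision objects whose target render mesh is missing."""
    names = set(object_names)
    return [n for n in object_names
            if collision_target(n) and collision_target(n) not in names]

print(check_collisions(["Crate", "UCX_Crate_00", "UCX_crate_01"]))
# ['UCX_crate_01'] -- lowercase 'crate' does not match 'Crate'
```

In Blender you could feed it `[obj.name for obj in bpy.data.objects]` before hitting export.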

Lightmap UVs. UE5 needs a non-overlapping UV layout in UV Channel 1 for baked lightmaps. Your options: create a second UV map in Blender using U → Lightmap Pack; let UE5 auto-generate them post-import in the Static Mesh’s Build Settings; or skip them entirely if you’re using Lumen (UE5’s default dynamic GI), which doesn’t need lightmap UVs at all.

Skeletal mesh workflow: root bones, orientation, and animation


The root bone requirement. UE5 requires a single root bone at the top of your skeleton hierarchy. A subtle trap: rename your armature object away from Blender’s default “Armature.” UE5 can confuse the armature node with an actual bone, interpreting it as a second root.

Exporting animations — separate files (recommended). Export one FBX with armature + mesh (animation unchecked) for the skeletal mesh. Export additional FBXs per animation containing only the armature with Bake Animation enabled and both NLA Strips and All Actions unchecked. In UE5, import the mesh first, then import each animation FBX and assign it the existing Skeleton.

All-in-one FBX. Check “All Actions” to export every Action as a separate animation take. Simpler but produces larger files.

Root motion. Animate the root bone’s translation (not the armature object’s location). In UE5, enable Root Motion on the Animation Asset or Animation Blueprint.

What transfers for materials (and what doesn’t)


Blender’s FBX exporter reads only from the Principled BSDF node. The values that actually transfer: Base Color (value and connected Image Texture), Roughness (value and texture), Metallic (value and texture), Specular Intensity, Alpha (value and texture), Normal Map (only through a Normal Map node), IOR, and Transmission.

What doesn’t transfer: procedural textures (Noise, Voronoi, etc.), node math (Color Ramp, Math, Mix Shader), Emission, Subsurface Scattering, Clearcoat, and Bump nodes. A subtle gotcha: if a Roughness or Metallic value comes from a separate Value node rather than being set directly on the Principled BSDF’s own field, it often exports as a default value (frequently 1.0). Plug values directly into the shader fields.

The recommended production workflow: Use Principled BSDF with descriptive material names. For simple materials, let UE5 auto-import as a starting point. For anything complex or procedural, bake to image textures in Blender (Cycles baking), export textures as PNG/TGA, and build materials from scratch in UE5 using Material Instances.

UE5 import: what the Interchange dialog actually shows you


UE 5.6 uses the Interchange Framework as its FBX importer. If you’ve seen older tutorials referencing “Convert Scene Unit,” “Normal Import Method,” or “Auto Generate Collision” in a detailed import dialog — those are from the legacy FBX importer that was the default before UE 5.4. The Interchange dialog is much simpler.

[SCREENSHOT: UE 5.6 Interchange FBX import dialog showing Assets, Materials, and Textures sections. Caption: “UE 5.6’s Interchange import dialog. If you’re looking for Normal Import Method or Convert Scene Unit from older tutorials — they’re not here anymore.”]

When you drag an FBX into the Content Browser, here’s what you see:

Top of the dialog: A Use Pipeline Defaults button and a Preview button.

Assets section: Contains Offset Uniform Scale (leave at 1.0 unless your scale is wrong) and a Collision subsection with an Import Collisions checkbox and Fallback Collision Type dropdown.

Materials section: Contains Material Search Location, which controls how UE5 matches FBX materials to existing materials in your project.

Textures section: Contains an Import Textures checkbox.

That’s it.

What’s missing (and why it matters). Normal Import Method, Build Nanite, Generate Lightmap UVs, and Import Mesh LODs are all absent — handled silently by pipeline defaults. The critical one is normals: in UE 5.4/5.5, Interchange defaulted to recomputing normals, silently destroying authored shading. Whether 5.6 fixed this is version-dependent. If your mesh has shading artifacts after import that weren’t there in Blender, recomputed normals are the likely culprit.

To change hidden defaults, try editing the pipeline asset directly: enable Show Engine Content in the Content Browser settings, then search for the Interchange pipeline assets. Alternatively, fix normals post-import in the Static Mesh’s Build Settings.

The 10 most common import problems and how to fix them


1. Mesh is 100× too big or too small. Verify Apply Unit is checked and Apply Scalings is set to FBX Units Scale. If scale is still wrong, adjust Offset Uniform Scale in the import dialog. For skeletal meshes, try Blender’s Unit Scale at 0.01.

2. “No smoothing group information” warning. Set Smoothing to Face instead of Normals Only in the FBX Geometry panel.

3. Broken normals and shading artifacts. Likely Interchange recomputing your normals. Export with Smoothing set to Face, Tangent Space checked. If it persists, edit pipeline defaults or fix post-import in Build Settings. Verify normals behavior in your specific 5.6 build.

4. Mesh appearing inside-out. In Edit Mode, select all (A), then Mesh → Normals → Recalculate Outside (Shift+N). Check for negative scale values and apply them (Ctrl+A → Scale). Use the Face Orientation overlay to diagnose before export.

5. Pivot point in the wrong place. Set the object’s origin deliberately before export via Right-click → Set Origin.

6. Collision meshes not recognized. Verify naming exactly matches UCX_[RenderMeshName] (case-sensitive). Check Import Collisions in the dialog. Try parenting UCX objects to the render mesh before export.

7. Extra leaf bones in skeleton. Uncheck Add Leaf Bones in the FBX Armature panel. Enabled by default — save a preset.

8. Animations not playing or scaled wrong. Animate the root bone (not the armature object) for root motion. Verify Bake Animation and Key All Bones are checked. Apply all transforms on the armature.

9. Duplicate or broken materials. Remove unused material slots, give each material a unique name. For production, build materials in UE5 instead.

10. Wrong animation exported. Uncheck both NLA Strips and All Actions. Select only the desired Action in the Action Editor. Set the frame range to match.

Quick reference cheat sheet


If you remember nothing else from this guide:

Blender FBX export (static meshes):
  • Apply Scalings: FBX Units Scale
  • Smoothing: Face
  • Apply Modifiers: on
  • Selected Objects: on

Blender FBX export (skeletal meshes):
  • Apply Scalings: FBX Units Scale
  • Apply Transform: off
  • Add Leaf Bones: off (uncheck this!)
  • Only Deform Bones: on
  • Bake Animation: on (for animated assets)
  • Key All Bones: on

UE 5.6 Interchange import dialog:
  • Offset Uniform Scale: 1.0 (don’t touch unless scale is wrong)
  • Import Collisions: on (under Assets → Collision)
  • Import Textures: on

Before every export from Blender: apply all transforms (Ctrl+A → All Transforms), set origin to desired pivot point, clean up unused material slots, and verify face orientation (enable the overlay — look for red faces).

Save your export settings as a preset. The defaults fight you — especially Add Leaf Bones and Apply Scalings — and reconfiguring them every export is how mistakes happen.
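The same settings can also be driven from a script instead of a preset, via `bpy.ops.export_scene.fbx`. The parameter names below are believed to match Blender 4.x's exporter (e.g. `add_leaf_bones`, `apply_scale_options`), but verify them against your build's Python API before relying on this:

```python
# This guide's skeletal-mesh settings as keyword arguments for
# bpy.ops.export_scene.fbx. Names believed correct for Blender 4.x; verify.
UE5_SKELETAL_FBX = {
    "use_selection": True,
    "object_types": {"ARMATURE", "MESH"},
    "apply_unit_scale": True,                  # Apply Unit
    "apply_scale_options": "FBX_SCALE_UNITS",  # Apply Scalings: FBX Units Scale
    "bake_space_transform": False,             # Apply Transform: off
    "use_mesh_modifiers": True,
    "mesh_smooth_type": "FACE",                # Smoothing: Face
    "use_tspace": True,                        # Tangent Space
    "add_leaf_bones": False,                   # the big one
    "use_armature_deform_only": True,          # Only Deform Bones
    "bake_anim": True,
    "bake_anim_use_all_bones": True,           # Key All Bones
    "bake_anim_force_startend_keying": True,
    "path_mode": "COPY",
    "embed_textures": True,
}

# Inside Blender's Python console or a pipeline script:
# import bpy
# bpy.ops.export_scene.fbx(filepath="//character.fbx", **UE5_SKELETAL_FBX)
```

A script like this makes the export reproducible across a team in a way a local preset can't.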

Next steps


Once your assets are importing cleanly, you’ll probably want to look into automating the pipeline. Epic’s Send to Unreal addon (community-maintained) lets you push assets from Blender to a running UE5 editor with one click, skipping the manual export/import cycle entirely. For skeletal meshes specifically, Auto-Rig Pro (~$40 on Blender Market) has a UE5 preset that handles bone orientation and scale issues automatically.

If something in this guide didn’t match your setup, or if UE 5.6 has changed behavior since we wrote this — reply to this post or drop us a message. The Interchange Framework is still evolving, and we’ll update this guide as Epic stabilizes the pipeline. The Blender-side settings here are battle-tested, and getting those right solves about 80% of import problems before you ever open UE5.





Understanding Nanite: When to Use It and When to Skip It


A practical guide for indie developers making informed decisions about UE5’s virtualized geometry system


Nanite Virtual Geometry by Epic Games

We’ve seen a lot of confusion in the indie dev community about Nanite. Marketing materials make it sound like a magic “enable and forget” optimization, and we bought into that early on. After testing it across multiple projects and digging into the actual documentation, here’s what we wish someone had told us from the start: Nanite excels with high-poly photogrammetry on high-end hardware, but traditional LODs often outperform it for typical game assets and broader hardware targets.

This isn’t a knock on Nanite — it’s genuinely impressive technology. But it’s a specialized tool, not a universal upgrade. Let’s break down what it actually does, where it shines, and when you should skip it entirely.

What Nanite Actually Does


Nanite is a virtualized geometry system that replaces traditional LOD (Level of Detail) workflows. Instead of manually creating 4–5 simplified versions of each mesh, Nanite breaks your high-poly source mesh into 128-triangle clusters organized in a hierarchical structure. During rendering, it intelligently swaps these clusters based on camera distance — different parts of the same mesh can display at different detail levels simultaneously, with seamless transitions and zero popping.
Nanite Triangles visualization mode showing dynamic level of detail in an example scene with a Nanite mesh and a Nanite spline. Source: Unreal Engine 5 documentation (Epic Games)
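The 128-triangle clustering is easy to reason about numerically. A back-of-envelope sketch (not engine code, and it ignores the hierarchy levels built above the leaves):

```python
import math

def nanite_leaf_clusters(triangles, cluster_size=128):
    """Leaf clusters Nanite would carve a mesh into, ignoring the
    coarser hierarchy levels built on top of them."""
    return math.ceil(triangles / cluster_size)

print(nanite_leaf_clusters(1_000_000))  # 7813 leaf clusters for a 1M-tri mesh
```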

The real performance trick is Nanite’s software rasterizer. Hardware rasterizers waste significant GPU cycles on sub-pixel triangles, outputting 2×2 pixel quads even when only 1 pixel is actually needed. That makes them roughly 25% efficient at best for tiny triangles. Nanite’s software approach handles these micro-triangles about 3x faster — in Epic’s demos, over 90% of geometry was software rasterized.
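The "roughly 25% efficient" figure falls directly out of the 2×2 quad granularity; a toy calculation (hypothetical helper, not engine code):

```python
def quad_shading_efficiency(pixels_covered, quads_touched):
    """Hardware rasterizers shade whole 2x2 quads, so 4 pixels of work
    are done per quad even when a triangle covers only one of them."""
    return pixels_covered / (quads_touched * 4)

# A sub-pixel triangle: 1 visible pixel, but a full quad gets shaded
print(quad_shading_efficiency(1, 1))  # 0.25, the "roughly 25%" figure
```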

Nanite also uses a visibility buffer that records only triangle IDs before shading. Materials evaluate per-pixel only for visible geometry, eliminating overdraw entirely for material costs. Occlusion culling is two-pass: Nanite first draws what was visible last frame, builds a depth buffer from it, then tests previously occluded geometry against that buffer.

The key tradeoff: Nanite shifts your performance bottleneck from geometry to materials and memory. With traditional rendering, material complexity is one factor among many. With Nanite, material cost becomes the primary concern — we’ve seen 200–300% performance gains from simplifying materials on distant foliage in large Nanite scenes.

Hardware Requirements and Platform Reality


Before committing to Nanite, understand where it actually works.

Minimum GPU requirements:
  • NVIDIA: GTX 900 series (Maxwell) works; RTX 2000+ (Turing) recommended for full async compute benefits
  • AMD: RX 400/500 (Polaris) with limitations; RX 5000+ (RDNA) recommended
  • Intel: Arc A-series supported; older Intel GPUs unsupported

VRAM: 6GB minimum, 8GB+ recommended for development, 16GB+ optimal for complex scenes.

Nanite requires DirectX 12 with Shader Model 6.6 (or Vulkan with 64-bit atomics). On unsupported hardware, it falls back to auto-generated LOD meshes with a console warning — which may or may not match your quality expectations.

Console support:
  • PS5 and Xbox Series X: Full Nanite support, typically hitting 30–60fps with dynamic internal resolutions around 800–1800p upsampled via TSR
  • Xbox Series S: Supports Nanite but with major compromises — 540–1080p internal resolution, no Lumen reflections, reduced foliage density. Its 4 TFLOPS GPU and 10GB usable RAM limit capability significantly

Steam Deck: Nanite technically runs on the Deck’s RDNA 2 architecture, but expect 30–40fps at low-medium settings with mandatory FSR/XeSS upscaling. Epic’s Matrix City Sample runs sub-20fps. Some community members report Nanite buildings rendering invisible due to Proton driver limitations. If Steam Deck is a target platform, test early and often.

Mobile (iOS/Android): Not supported. No plans for support. Mobile GPUs lack the compute capability, memory bandwidth, and thermal headroom. Use traditional LODs and baked lighting instead.

What UE 5.5 Changed


UE 5.5 addressed one of Nanite’s biggest limitations: animated characters.

Nanite Skeletal Mesh Support (Beta): You can now enable Nanite on skeletal meshes with r.Nanite.AllowSkinnedMeshes=1. This is still beta — expect some rough edges — but it opens up virtualized geometry for characters, not just environments.

Performance improvements:
  • 20% faster masked material rasterization through a sliding window vertex cache
  • Runtime-adjustable streaming pool size (useful for graphics settings menus)
  • Reserved resource allocation that prevents memory spikes during pool resizes
  • Dynamic quality target adjustment when overcommitted, distributing detail more uniformly

Editor workflow fixes:
  • Resolved flickering Nanite selections with TSR/TAA
  • Added proper selection through occlusion
  • New Fallback Target setting for explicit control over fallback mesh generation — choose between Relative Error or Percent Triangles targeting
  • Texture Color Painting as an alternative to vertex painting (scales with texture resolution rather than vertex count)

Experimental additions: Spline mesh support via r.Nanite.AllowSplineMeshes=1, and continued landscape improvements.

What Meshes Work (And What Doesn’t)


Supported mesh types:
  • Static meshes (primary use case)
  • Geometry collections (Chaos destruction)
  • Skeletal meshes (5.5+, beta)
  • Landscapes (experimental)
  • Foliage via instanced static mesh
  • Spline meshes (experimental)

What doesn’t work:
  • Morph targets
  • Mesh decals (require translucent blend)
  • Translucent materials entirely — the visibility buffer assumes single depth per pixel, and transparency breaks this model

Material constraints: Only Opaque and Masked blend modes are supported. Translucent materials render with the default material and log warnings.

Here’s the catch with masked materials: they’re substantially more expensive than opaque. Epic’s own guidance notes that masked-out pixels cost nearly as much as drawn pixels. Traditional billboard-style foliage cards can actually be slower with Nanite than without.

World Position Offset (WPO): Supported but expensive. Nanite meshes with WPO split into smaller clusters with individual bounds. You must clamp displacement with Max World Position Offset Displacement to prevent culling artifacts. More vertices = higher cost. Epic bakes their foliage animations to textures rather than calculating per-vertex WPO because of this overhead.

Common Misconceptions


We’ve made these mistakes ourselves, so we’re sharing them to save you time.

“Nanite should be enabled everywhere”

Real testing contradicts this. We’ve seen a 5,000-triangle mesh with handcrafted LODs outperform the same mesh under Nanite, even with Nanite reducing it to 4,000 rendered triangles. In Epic’s own Lyra sample, Nanite added 9–11% GPU overhead on lowest settings and 20% on the high preset. Multiple developers report City Sample vehicles on flat landscapes running at 40fps with Nanite versus 80+ fps without.

“Nanite = automatic optimization”

Nanite shifts your bottleneck, it doesn’t eliminate it. With traditional rendering, geometry is often your constraint. With Nanite, material cost explodes into the primary concern. If you’re not profiling materials, you’re likely leaving performance on the table.

“Nanite works for everything”

Nanite has base overhead regardless of scene content. Even with zero visible Nanite meshes, the instance cull pass consumes budget. Low-poly meshes add overhead without payoff. The sweet spot is genuinely high-poly assets (millions of triangles) that would be impossible to handle with traditional LODs.

Profiling and Visualization Tools


Access visualization modes via View Modes → Nanite Visualization in the viewport.

Key visualization modes:
  • Mask: Green = Nanite geometry, Red = non-Nanite. Use this to quickly identify what’s actually using Nanite.
  • Triangles/Clusters: Display rendered geometry and cluster groupings at current LOD level.
  • Overdraw: Shows overdraw amount including masked-out pixels — critical for diagnosing foliage performance issues.
  • Evaluate WPO: Green = using World Position Offset (expensive), Red = not using WPO.
  • Raster Bins: Reveals material batching. Many bins = overhead.

Console commands:
  • NaniteStats — Real-time culling statistics overlay
  • NaniteStats Primary — Main view stats
  • NaniteStats VirtualShadowMaps — Shadow performance stats
  • r.nanite.showmeshdrawevents 1 — Use with Unreal Insights to see which materials consume time
  • r.Nanite.Visualize.Advanced 1 — Low-level debugging options

Critical note: Always profile packaged builds, not editor. Performance differs significantly between them.

Decision Framework for Indies


Here’s our honest take on when Nanite makes sense for small teams.

Use Nanite when:
  • Your assets genuinely have millions of triangles (photogrammetry, high-end scans)
  • You’re targeting only high-end PCs and current-gen consoles
  • You’re doing film/virtual production with relaxed frame targets
  • You’re building arch viz with controlled viewing conditions
  • You’ve tested and confirmed performance wins over LODs for your specific content

Skip Nanite when:
  • Targeting Steam Deck, lower-end PCs, or Xbox Series S performance mode
  • 60fps is a hard requirement with complex scenes
  • Assets are under ~100K triangles per mesh
  • Heavy masked material or WPO usage (animated foliage)
  • Shipping timeline is tight — battle-tested LODs are safer
  • Small team without bandwidth to learn Nanite’s edge cases

The practical test:
  1. Enable Nanite on your target assets
  2. Profile your packaged build (not editor)
  3. Disable Nanite and compare
  4. Make your decision based on actual numbers, not assumptions

Many developers report better performance without Nanite for typical game assets. The documentation claim that “Nanite meshes typically render faster” is true specifically for high-poly assets that would suffer severe quad overdraw without virtualized geometry — not for general-purpose game meshes.
Epic Games’ Valley of the Ancient sample project demonstrating Nanite and Lumen rendering a 2×2 km photorealistic environment composed of Quixel Megascans assets, showcasing high-poly photogrammetry running on next-generation console hardware.

Performance Numbers Worth Knowing


Some concrete figures to help calibrate expectations:

Memory footprint: Nanite averages about 14.4 bytes per input triangle — a 1-million-triangle mesh is roughly 14MB. This is actually 7.6x smaller than a traditional static mesh with 4 LODs. However, LOD generation roughly doubles storage, and streaming pool defaults may need tuning via r.Nanite.Streaming.StreamingPoolSize.
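That per-triangle average makes footprint estimates one line of arithmetic (a sketch using the 14.4 bytes/triangle figure quoted above):

```python
BYTES_PER_TRIANGLE = 14.4  # Epic's reported average for Nanite mesh data

def nanite_footprint_mib(triangles):
    """Rough on-disk/streaming footprint of a Nanite mesh in MiB."""
    return triangles * BYTES_PER_TRIANGLE / (1024 * 1024)

print(round(nanite_footprint_mib(1_000_000), 1))  # 13.7, i.e. "roughly 14MB"
```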

Render timing: Epic’s PS5 demo showed approximately 2.5ms to cull and rasterize all Nanite meshes, well within 60fps budgets. But this scales with screen resolution — Nanite’s visibility pass is fill-rate bound, which is why console games run at 720–1200p internal with TSR upscaling rather than native 4K.

What makes Nanite slow: Material bin overhead from many unique materials (especially masked foliage), WPO evaluation cost, closely stacked surfaces defeating HZB occlusion efficiency, and high instance counts saturating VRAM. Root clusters for ALL instances must remain in memory even when distant — this catches people off guard with heavily instanced foliage.

Quick Troubleshooting


A few issues we’ve run into that might save you debugging time:

“My Nanite mesh is invisible or flickering”: Check that your mesh doesn’t use unsupported materials (translucent, mesh decals). Also verify you’re not hitting VRAM limits — use stat Nanite to monitor streaming budget.

“Performance is worse with Nanite enabled”: This is common with lower-poly meshes or heavy masked material usage. Profile with NaniteStats and check if material bins are the bottleneck. Consider if the mesh actually benefits from virtualized geometry.

“Foliage performance tanked after enabling Nanite”: Masked materials are expensive. Check the Overdraw visualization mode — you’ll often see high overdraw from masked-out alpha. Epic recommends using opaque materials with geometric detail instead of alpha cards where possible.

“WPO animations cause culling issues”: Set Max World Position Offset Displacement on your mesh to clamp the maximum displacement. Without this, Nanite can't accurately calculate bounds and may cull geometry incorrectly.

“Steam Deck shows invisible buildings”: Known Proton driver issue with some Nanite content. Test on actual hardware early if Deck is a target platform. Consider providing a non-Nanite fallback path in graphics settings.

The Bottom Line


For indie developers, Nanite is a specialized tool for specific problems, not a universal upgrade. It’s incredible for photogrammetry-heavy environments on high-end hardware. It’s often counterproductive for stylized games, low-poly assets, or when targeting a broad hardware range.

Profile early. Test on target hardware. Don’t assume it helps until you’ve measured it.









Lumen Lighting for Indies: Good Results Without Melting Your GPU


Lumen is one of Unreal Engine 5’s headline features — real-time global illumination that responds instantly to changes, no more waiting for light builds, no more “LIGHTING NEEDS TO BE REBUILT” warnings haunting your viewport. But there’s a catch: Lumen was designed for high-end hardware, and if you’re targeting the GTX 1060s and RX 580s common among indie developers and their audiences, you need to understand what you’re working with.

Here’s the honest truth: Lumen completely disables itself at Low and Medium quality presets, and Epic’s official minimum spec is a GTX 1070. This guide covers how to push Lumen’s limits on constrained hardware — and when baked lighting is actually the smarter choice.

We’ve been working through these tradeoffs ourselves while developing Dreamless Kingdom. Not every project needs fully dynamic global illumination, and sometimes accepting Lumen’s limitations early saves you from painful optimization work later.

What Lumen Actually Does


Lumen handles two things: Global Illumination (GI) and Reflections. Both use the same underlying system but sample differently.

The pipeline works like this: every ray follows the same path through Screen Traces → Software/Hardware Ray Tracing → Skylight fallback. Screen traces check against your depth buffer first, catching fine contact detail cheaply. Rays that miss pass to either software ray tracing (using distance fields) or hardware ray tracing (using actual geometry). Finally, anything that misses everything samples your skylight.
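The fallback ordering can be sketched as a simple chain. Toy code, not engine source; the tracers here are stand-ins that return a hit or None:

```python
def lumen_trace(ray, screen_trace, scene_trace, skylight):
    """Try each tracer in Lumen's order (screen traces first, then the
    software/hardware scene trace); fall through to the skylight."""
    for tracer in (screen_trace, scene_trace):
        hit = tracer(ray)
        if hit is not None:
            return hit
    return skylight(ray)

# A ray the screen trace misses but the distance-field trace catches:
hit = lumen_trace("ray",
                  screen_trace=lambda r: None,
                  scene_trace=lambda r: "wall",
                  skylight=lambda r: "sky")
print(hit)  # wall
```

The ordering is the point: cheap screen-space data is consumed first, and the skylight only fills in rays that missed everything.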

The key innovation is the Surface Cache — a pre-computed, simplified representation of your scene captured through projected “cards” on mesh surfaces. Instead of evaluating expensive material shaders at every ray hit, Lumen just looks up values from this cache. This is what makes real-time performance possible, but it’s also why Lumen struggles with thin walls and very small objects.

Understanding this helps you optimize: Lumen’s cost comes from tracing rays through your scene, shading them from the Surface Cache, and then filtering the noisy results into clean lighting. Each step has knobs you can turn.

For GI, Lumen samples your scene at roughly 1/16th screen resolution using hemispherical sampling — meaning it’s checking light coming from all directions above each surface. Reflections use importance-sampled GGX based on surface roughness, concentrating rays where they matter most. Both systems share the Surface Cache and tracing pipeline, which is why you can’t really optimize one without affecting the other.

The City Sample scene with Lumen enabled (left) versus disabled (right). Without Lumen, interiors go flat black and there’s no light bounce from the sky. The difference is dramatic — but so is the performance cost. Source: City Sample Project Documentation — Epic Games



Hardware vs. Software Ray Tracing


Lumen offers two tracing modes, and choosing correctly matters a lot for performance.

Software ray tracing is the default. It uses Mesh Distance Fields — 3D volume textures that store the distance to the nearest surface for each mesh in your scene. Rays “sphere trace” through these fields, efficiently skipping empty space. This mode works on any GPU that supports UE5, handles overlapping geometry well (great for kitbashing), and achieves 60 FPS on current-gen consoles when properly optimized.

What Lumen actually “sees” when tracing rays. This debug view (Show → Visualize → Mesh Distance Fields) reveals the simplified geometry representation used for software ray tracing. Whiter areas mean more ray march steps. If your mesh looks blobby or missing here, that’s why lighting behaves strangely. Source: Mesh Distance Fields — Epic Games

The tradeoff: distance fields can’t capture features thinner than their voxel resolution, and they don’t support skeletal mesh GI.

Hardware ray tracing uses DirectX Raytracing (DXR) to trace against actual triangles via a BVH acceleration structure. It offers two sub-modes:
  • Surface Cache mode: Fast. Traces rays but samples lighting from the pre-computed cache. Similar quality to software RT for most scenes.
  • Hit Lighting mode: Expensive. Evaluates full materials at every hit point. SIGGRAPH data from The Matrix Awakens showed Hit Lighting at 11.54ms versus Surface Cache at 2.44ms for reflection tracing on PS5 — nearly 5x slower.

Hardware RT requires RTX 2000 series or newer (NVIDIA) or RX 6000 series or newer (AMD). For GTX 1060/RX 580 users, hardware RT simply isn’t available.

Our recommendation: Unless you’re targeting RTX 3060+ exclusively, stick with software ray tracing. Even RTX 2060 users often get better results from software RT due to the 6GB VRAM limitation.

The Settings That Actually Matter

Quality Presets: A Critical Reality Check


Here’s something the marketing doesn’t emphasize:

  • Low: Lumen disabled (falls back to SSGI/DFAO)
  • Medium: Lumen disabled (falls back to SSGI/DFAO)
  • High: Lumen enabled, targets a 4ms budget (60 FPS)
  • Epic: Lumen enabled, targets an 8ms budget (30 FPS)

Lumen completely disables below High preset. If your target hardware can’t sustain High settings, you’re already in baked lighting territory — Lumen isn’t a fallback option, it’s all or nothing.

Key CVars for Optimization


The single most impactful setting is r.Lumen.ScreenProbeGather.DownsampleFactor. This controls probe density: the default is 16 at Epic quality and 32 at High. Values range from 4–64; higher means better performance but lower quality. For aggressive optimization, try 48.
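To see why this setting dominates, here’s a toy model that assumes one probe per downsampled pixel (the engine’s actual probe placement is more involved, so treat the numbers as illustrative):

```python
import math

def probe_count(width, height, downsample_factor):
    """Rough screen-probe count, assuming one probe per downsampled pixel.
    Illustrative model of DownsampleFactor, not the engine's exact logic."""
    return math.ceil(width / downsample_factor) * math.ceil(height / downsample_factor)

print(probe_count(1920, 1080, 16))  # Epic-quality default
print(probe_count(1920, 1080, 48))  # aggressive setting
```

Under this model, raising the factor from 16 to 48 at 1080p cuts the probe count roughly ninefold, which is why it’s the first knob to reach for.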

Stochastic interpolation (r.Lumen.ScreenProbeGather.StochasticInterpolation=1) provides roughly 30% performance improvement with minimal quality loss. AMD specifically recommends this setting.

Spatial filter passes (r.Lumen.ScreenProbeGather.SpatialFilterNumPasses) defaults to 3. Reducing to 1-2 saves time at the cost of slightly noisier results—often acceptable when TSR temporal filtering cleans things up anyway.

For reflections, r.Lumen.Reflections.MaxRoughnessToTrace (default 0.4) controls which surfaces get traced rays versus cheaper approximations. Lowering to 0.3 reduces work. Setting r.Lumen.Reflections.MaxRoughnessToTraceForFoliage=0 disables reflection tracing on foliage entirely—a smart optimization since grass and leaves rarely need sharp reflections.

An Aggressive Optimization Profile


For pushing Lumen to its limits:

ini
r.Lumen.HardwareRayTracing=0
r.Lumen.ScreenProbeGather.DownsampleFactor=48
r.Lumen.ScreenProbeGather.StochasticInterpolation=1
r.Lumen.ScreenProbeGather.SpatialFilterNumPasses=1
r.Lumen.ScreenProbeGather.ShortRangeAO=0
r.Lumen.Reflections.MaxRoughnessToTrace=0.3
r.Lumen.Reflections.MaxRoughnessToTraceForFoliage=0
r.Lumen.Reflections.RadianceCache=1
r.ScreenPercentage=67

Combined with TSR upscaling from 67% internal resolution, you might hit playable framerates on GTX 1070/1080 hardware. On GTX 1060-class cards, expect 15–25 FPS regardless — the hardware is below minimum spec.

Common Problems and How to Fix Them

Light Leaking Through Walls


The number one Lumen complaint, and almost always a mesh construction problem. Lumen’s Distance Fields can’t accurately represent geometry thinner than about 10cm.

Light bleeding through an enclosed box — the most common Lumen complaint. The culprit is usually thin geometry. Source: Lumen Technical Details — Epic Games

Fixes:
  • Make walls at least 10cm thick using closed, watertight meshes
  • Enable Two-Sided Distance Field Generation in Static Mesh Build Settings for thin meshes you can’t change
  • Increase Distance Field Resolution Scale to 1.5–2.0 on problem meshes
  • Separate walls, floors, and ceilings into individual meshes rather than importing entire rooms as single objects

If you’re using hardware RT and seeing leaking specifically with angled geometry, try r.LumenScene.DirectLighting.HardwareRayTracing=0—there are known driver issues in some configurations.

Flickering and Temporal Noise


Screen-space reflection artifacts often appear as flickering edges or vanishing highlights, particularly on shiny surfaces near screen edges. The fix:

ini
r.Lumen.Reflections.ScreenTraces=0
r.Lumen.ScreenProbeGather.ScreenTraces=0

This disables the SSR overlay on Lumen, eliminating most flickering artifacts. You’ll lose some fine detail in reflections, but the stability is usually worth it.

If you’re seeing noise specifically at shadow edges (from Virtual Shadow Maps), try:

ini
r.Shadow.Virtual.SMRT.RayCountLocal=8
r.Shadow.Virtual.SMRT.SamplesPerRayLocal=4

For Nanite-specific shadow noise, increase Fallback Triangle Percent or set Fallback Relative Error to 0 in the Static Mesh Editor. This gives the shadow system cleaner geometry to work with.

Dark Interiors


This one’s tricky because it’s often physically correct — Lumen accurately simulates how enclosed spaces receive less bounced light. Don’t fight the physics; work with it:
  • Add emissive materials to light fixtures and enable Emissive Light Source on the mesh instance
  • Use subtle fill lights with the Indirect Lighting Intensity multiplier cranked up
  • Increase Lumen Scene Detail in Post Process Volumes to capture smaller contributing meshes
  • Set up Auto Exposure properly with appropriate EV100 min/max values for interior/exterior transitions

The temptation is to crank up light intensities, but that just creates harsh contrast. Subtle emissives and properly configured exposure work better.

Performance Hitches


Stuttering usually comes from two sources: Surface Cache updates when scenes change rapidly, or Global Distance Field rebuilds when static meshes move.

Debug with r.GlobalDistanceField.Debug.LogModifiedPrimitives=1 to identify which objects are triggering rebuilds. Common culprits: actors with incorrect Mobility settings, Nanite meshes without proper setup (Lumen is dramatically faster with Nanite enabled), and lots of overlapping instances.

When to Skip Lumen Entirely


Lumen isn’t always the right choice. Use baked lighting instead when:

Targeting constrained hardware:
  • Steam Deck (1.6 TFLOP GPU struggles to hit 20 FPS in Lumen demos)
  • GTX 1060/RX 580-tier cards (below minimum spec)
  • Mobile platforms (Lumen isn’t supported at all)
  • Nintendo Switch

Needing guaranteed performance:
  • Fighting games and competitive titles requiring locked 60 FPS (Tekken 8 uses baked lighting for this reason)
  • VR projects (Epic explicitly states Lumen isn’t VR-ready)

Working with static environments:
  • Arch viz with controlled camera paths
  • Levels where lighting doesn’t need to change at runtime

Real-world testing shows 30+ FPS gains when switching identical scenes from Lumen to baked lighting. One developer reported a stylized village jumping from 20–25 FPS to 47–50 FPS on an RTX 3090. Dynamic GI is expensive.

What About Hybrid Approaches?


Here’s a critical finding that trips up many developers: Lumen disables all static/baked lighting contributions when enabled. You cannot mix Lumen GI with baked lightmaps in the same render path — Epic’s documentation explicitly states that precomputed lighting is hidden when Lumen is active.

What IS possible: using scalability settings to automatically switch between Lumen (high-end) and baked lighting (low-end). This requires designing and testing two separate lighting scenarios since scenes optimized for Lumen look significantly different without dynamic GI.
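That tier switch can be expressed in a project’s scalability config. A hedged sketch, following the [GroupName@Level] section convention from the engine’s BaseScalability.ini; the specific cvar values here are illustrative assumptions, not Epic’s shipped defaults:

```ini
; DefaultScalability.ini (sketch): per-tier GI overrides
; Levels: 0=Low, 1=Medium, 2=High, 3=Epic

[GlobalIlluminationQuality@1]
; Medium tier (assumption): fall back to screen-space GI
r.DynamicGlobalIlluminationMethod=2

[GlobalIlluminationQuality@3]
; Epic tier: full Lumen
r.DynamicGlobalIlluminationMethod=1
```

Remember that the two tiers need separate lighting passes in practice, since a scene tuned for Lumen rarely looks right when it falls back to SSGI.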

GPU Lightmass: The Baked Alternative


UE5’s GPU Lightmass offers fast baking via hardware ray tracing — build times approaching distributed Swarm builds with real-time preview during baking.

Setup: Enable the GPU Lightmass plugin, Virtual Texture Support, and Hardware Ray Tracing support — then disable Lumen. The two systems don’t play together.

The workflow tradeoff is clear: Lumen requires no build time but costs 2–8ms every frame; baked lighting requires upfront builds but costs nearly nothing at runtime.

For environments where lighting is mostly static with a few dynamic elements (a character walking through a pre-lit scene, for example), baked GI with real-time direct lighting often looks better and runs faster than trying to squeeze Lumen onto underpowered hardware. The “fully dynamic everything” approach sounds appealing but isn’t always the smartest path.

Hardware Reality Check


Let’s be direct about what to expect:

Expected FPS at 1080p:
  • GTX 1060 6GB: 15–25 FPS. Not viable, below minimum spec
  • RX 580 8GB: 15–30 FPS. Marginal at best
  • Steam Deck: 20–35 FPS. Low settings only
  • RTX 2060: 30–45 FPS. Software RT preferred
  • RTX 3060: 40–60 FPS. Fully viable

The GTX 1060 and RTX 2060’s shared limitation isn’t compute power — it’s the 6GB VRAM. Lumen scenes frequently require 8GB minimum, with 12–16GB recommended for complex environments.

Even an RTX 2080 Ti achieves only 20–25 FPS in demanding interior scenes with multiple lights. The hardware ceiling is real.

Practical Recommendations


If you’re on GTX 1060/RX 580: Disable Lumen. Use r.DynamicGlobalIlluminationMethod=2 (SSGI) and r.ReflectionMethod=0 (screen-space reflections). Your hardware is below minimum specification, and forcing Lumen wastes development time you could spend on content. SSGI combined with good ambient lighting and strategic fill lights can still look great.

If you’re on GTX 1070/1080: Software Lumen at High settings (not Epic) may work. Apply the aggressive optimization profile, render at 67% internal resolution with TSR, and design scenes with Lumen’s costs in mind — fewer overlapping instances, simpler materials, strategic emissives. Test your heaviest scenes frequently and have a fallback plan.

If you’re on RTX 2060: Software Lumen preferred over hardware RT due to VRAM constraints. The 6GB limit hurts more than the first-gen RT cores help. Monitor your VRAM usage in editor and be prepared to simplify if you’re hitting the ceiling.

If you’re on RTX 3060+: Full Lumen capability. Hardware RT becomes viable, though Software RT often provides comparable quality at lower cost. This is the tier where Lumen really shines, so take advantage of it — but remember your players may not all have equivalent hardware.

If you’re targeting Steam Deck: Disable Lumen. Successful UE5 games on Deck specifically disable Lumen and use classic GI modes to hit their performance targets. The Deck’s 16GB unified memory sounds generous until you realize it’s shared between CPU and GPU.

The Bottom Line


Lumen is a genuine workflow revolution — instant lighting iteration, physically accurate bounced light, no more painful light builds. But those benefits come with a GTX 1070 floor that excludes a significant portion of indie developers and their audiences.

The practical approach: use Lumen during development on capable hardware for the workflow benefits, but build in scalability fallbacks using baked lighting from the start. Games targeting broad hardware compatibility should treat Lumen as a high-end enhancement rather than baseline expectation.

Profile early, test on target hardware, and don’t assume Lumen helps until you’ve measured it. The best lighting solution is the one your players can actually run. Sometimes the old ways work better.






Gaussian Splatting: A Complete Student Guide to 3D Capture in 2026

Your smartphone is now a photorealistic 3D scanner. Here’s everything you need to know to start using it.

Gaussian splat captured from within Baldur’s Gate 3

If you’re reading this on a phone, here’s the fastest possible version: Download Scaniverse (free), walk around something interesting, tap process, and you’ll have a photorealistic 3D scene in 60 seconds. Then open SuperSplat in your browser to clean it up. That’s it. You’re splatting.

For everyone who wants to understand why this works, what tools exist, and how to push the technology further — read on. This guide is organized as a series of waypoints you can jump between.

How Gaussian splatting actually works
3D Gaussian Splatting won the SIGGRAPH 2023 Best Paper Award, and it represents a genuine paradigm shift in how we capture and render real-world scenes: photorealistic results at 100+ frames per second — something impossible with previous neural rendering techniques.

The core idea: represent a scene using millions of 3D Gaussian distributions — soft, ellipsoid-shaped blobs in 3D space. Think of it like painting with spray cans in three dimensions, where each spray creates a fuzzy dot that can be stretched and rotated. Millions of these blobs combine to create photorealistic scenes.

Each Gaussian primitive carries learnable parameters: position in 3D space (x, y, z), a covariance matrix defining shape and orientation, opacity, and color encoded through spherical harmonics — mathematical functions that capture how color changes depending on viewing angle, enabling realistic specular highlights.
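During rendering, each Gaussian is projected to 2D and its contribution falls off with distance from its center according to the covariance. A minimal sketch of that falloff, with the symmetric inverse covariance packed as three scalars (an illustrative simplification of the projected form):

```python
import math

def splat_weight(dx, dy, inv_cov):
    """Falloff of a projected 2D Gaussian at offset (dx, dy) from its center:
    exp(-0.5 * d^T * Sigma^-1 * d). inv_cov = (a, b, c) packs the symmetric
    inverse covariance [[a, b], [b, c]]. Sketch only, not production code."""
    a, b, c = inv_cov
    return math.exp(-0.5 * (a * dx * dx + 2 * b * dx * dy + c * dy * dy))
```

The covariance is what lets a single blob stretch along a wall edge or flatten against a floor; an isotropic inv_cov like (1, 0, 1) gives a plain circular falloff.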

The pipeline works in three stages:

Stage 1 — Structure-from-Motion (SfM). Software like COLMAP analyzes your input images and estimates camera positions while generating a sparse point cloud.

Stage 2 — Optimization. A Gaussian is initialized at each point, then iteratively refined by comparing rendered images against ground truth photos. The loss function balances pixel-level accuracy with structural similarity.

Stage 3 — Adaptive Density Control. Large Gaussians that cover too much area get split. Regions where detail is missing get additional clones. This typically runs every 100 iterations until convergence around 30,000 iterations.

The magic happens during rendering through tile-based rasterization. The screen is divided into 16×16 pixel tiles, Gaussians are sorted by depth using GPU-accelerated radix sort, and each pixel blends colors front-to-back using alpha compositing. Because this uses standard GPU rasterization rather than expensive ray-marching, it achieves real-time performance of 100–200+ FPS at 1080p.
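The front-to-back blending step can be sketched for one pixel and one color channel, assuming the splats arrive pre-sorted nearest-first:

```python
def composite_front_to_back(splats):
    """Blend depth-sorted splat contributions for one pixel, nearest first.
    Each entry is (color, alpha). Minimal single-channel sketch."""
    color, transmittance = 0.0, 1.0
    for c, a in splats:
        color += transmittance * a * c   # light that survives everything in front
        transmittance *= (1.0 - a)       # remaining see-through fraction
        if transmittance < 1e-4:         # early exit once the pixel is opaque
            break
    return color
```

The early exit is part of why splatting is fast: once accumulated opacity saturates, everything behind that pixel is skipped entirely.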
Why this beats NeRFs
Neural Radiance Fields (NeRFs) encode scenes implicitly within neural network weights, requiring hundreds of network evaluations per pixel. Beautiful results, but at 1–10 FPS — far too slow for interactive applications. NeRF training also takes hours to days.

Gaussian splatting’s explicit representation changes the equation entirely. Training completes in 7–45 minutes, matching or exceeding NeRF quality. The tradeoff is storage: NeRFs produce compact 10–50MB models while Gaussian splats require 500MB–1.5GB per scene. For anyone building interactive experiences, the speed advantage far outweighs the storage cost — and modern compression formats like SPZ can reduce file sizes by 90%.


Free tools that get you started today
The best news for students: several excellent free options exist, with different tradeoffs between ease of use and flexibility.
Scaniverse — the fastest path to your first splat
Platform: iOS and Android | Cost: Completely free, no subscriptions | Processing: On-device

Scaniverse from Niantic is the most accessible entry point. It processes Gaussian splats on-device in 60–90 seconds without uploading your data anywhere, and exports standard PLY files compatible with all other tools. No account required for basic functionality.

Open the app → select Gaussian Splat mode → walk around your subject in a spiral pattern → tap to process. Done. This makes Scaniverse ideal for learning capture techniques without technical overhead. You can iterate on multiple captures within a single coffee break.
Quick verdict: If you’ve never made a splat before, start here. Period.
PostShot — professional-grade local processing
Platform: Windows (NVIDIA RTX 2060+) | Cost: Free tier with paid export (€17/mo for PLY) | Processing: Local GPU

PostShot from Jawset offers the most polished desktop experience. The free tier provides unlimited image/video input, training up to 4K resolution, and live preview during training — you watch your scene materialize in real-time.

PostShot excels for students learning the optimization process because you can observe how Gaussians densify and refine during training. It also imports camera alignments from RealityCapture, enabling a workflow where you use RealityCapture’s superior (and now free for educational use) camera pose estimation, then train the splat in PostShot.
Quick verdict: Best for understanding how the technology works. The live training preview is genuinely educational.
Polycam — cloud processing with editing tools
Platform: iOS, Android, and web | Cost: Free tier (150 images, GLTF export); Pro ~$9/mo with student discount | Processing: Cloud

Polycam processes captures on cloud servers, meaning you can create splats without a powerful local GPU. The free tier supports up to 150 images per capture with GLTF export. Full PLY export and unlimited captures require Pro at $17.99/month — though a 50% student discount brings this to roughly $9/month.

The value proposition is integrated editing tools (cropping, exposure adjustment) and a web-based workflow that works from any device.
Quick verdict: Best if you don’t have a GPU and need more than phone-based processing.
Luma AI — a shifting landscape
Luma AI pioneered consumer-friendly Gaussian splatting but has suspended direct splat processing in their mobile app as of late 2024, redirecting resources toward AI video generation. Their infrastructure remains valuable: free Unreal Engine plugin for importing splats, WebGL library for web embedding, and a dashboard for managing previously created captures.
Quick verdict: Don’t rely on Luma for new splat creation, but consider their ecosystem for delivery.
Open-source tools for deeper learning
Nerfstudio is the premier open-source framework for neural rendering research. After installation, you can train Gaussian splats using the Splatfacto method:
shell
ns-process-data images --data /path/to/images --output-dir processed/
ns-train splatfacto --data processed/
ns-export gaussian-splat --load-config outputs/.../config.yml --output-dir exports/

Nerfstudio requires an NVIDIA GPU (8GB+ VRAM recommended), Python 3.10, and comfort with command-line interfaces. Steeper learning curve, but complete control over training parameters and access to cutting-edge research methods.

gsplat, the CUDA-accelerated backend powering Nerfstudio, offers 4x less memory usage and 10% faster training than the original implementation. For AMD GPU users, a ROCm port exists.

The original INRIA implementation remains valuable as the reference standard. Setup is more involved — requiring COLMAP, CUDA 12, specific Python versions, and compilation of custom CUDA kernels — but understanding this codebase provides deep insight into the algorithm.
SuperSplat — the essential free editor
Regardless of which tool creates your splats, SuperSplat is indispensable. This free, browser-based editor from PlayCanvas lets you clean up “floaters” (stray Gaussians from motion blur or insufficient coverage), crop unwanted background, adjust colors, and compress files. Runs entirely in your browser — no installation — and can reduce file sizes by 70–90% through compressed PLY export.

Every student workflow should include a SuperSplat cleanup pass before final export.


When paid options make sense
Free tools handle most student projects. Here’s when paid subscriptions earn their keep:

Polycam Basic (~$9/mo with student discount) — when you need more than 150 images per capture (complex scenes require 200–500+), want PLY export without switching tools, or value the integrated editing workflow.

PostShot Indie (€17/month) — when you’re committed to local processing and need exportable files for game engines or web publishing. The live training preview alone accelerates learning significantly.

RealityCapture — now free for users under $1M annual revenue. Provides industry-leading camera alignment that’s dramatically faster than COLMAP. Doesn’t create Gaussian splats directly but exports alignment data that PostShot and Nerfstudio can import. For complex captures where COLMAP struggles, this workflow often succeeds.


Integrating splats into game engines and the web
Unreal Engine 5
UE5 lacks native Gaussian splatting support but has a growing plugin ecosystem. XVERSE 3D-GS (XScene) is the recommended starting point — free under Apache 2.0, compatible with UE 5.1–5.5, and built on Niagara for seamless VFX integration.

For advanced features (custom LOD, collision generation), the Akiya 3D Gaussians Plugin (~$99) removes Niagara’s particle limits and enables dynamic lighting interactions. The Volinga Plugin targets professional production with ACES color support, HDR, and proper shadow integration.

Performance varies significantly with splat count and GPU capability. An RTX 3070 typically achieves 30–100 FPS depending on scene complexity.
Blender
KIRI 3DGS Render v4.0 (free, Apache 2.0) provides the most complete Blender integration. Import PLY files, switch to Render mode for real-time preview, and use sphere or plane selection tools to edit individual Gaussians. The add-on supports animation keyframing, so you can animate camera paths through splat scenes directly in Blender.
Unity
UnityGaussianSplatting by Aras Pranckevičius (free, MIT license) supports Unity 6+ with Built-in, URP, and HDRP pipelines. Critical requirement: enable Vulkan or D3D12 in Project Settings (D3D11 doesn’t work). Performance benchmarks: an RTX 3080Ti renders 6.1 million splats at 147 FPS. VR support is built-in for Quest 2/3/Pro, HTC Vive, and Vision Pro.
Web publishing
For web delivery, SuperSplat hosting provides the simplest path — edit your splat in the browser, export as HTML Viewer, and host on GitHub Pages, Netlify, or Vercel (all free). The result is a shareable URL that works on any modern browser.

Self-hosted viewers include antimatter15/splat (WebGL 1.0, excellent mobile support), gaussian-splatting-webgpu (faster GPU sorting, requires Chrome), and GaussianSplats3D for Three.js integration.


Capture techniques that actually work
The quality of your Gaussian splat depends overwhelmingly on capture quality. This section alone will save you hours of frustration.

Coverage is paramount. Aim for 70–80% overlap between adjacent photos and capture from multiple heights — low, mid, and high angles. For a typical object, 100–200 photos using a spiral pattern works well; room-scale scenes need 200–500+. When uncertain, take more photos. Extra coverage rarely hurts; insufficient coverage guarantees failure.

Lock your exposure settings. Auto-exposure causes flicker between frames that confuses the SfM algorithm. On smartphones, tap to lock focus and exposure before beginning your capture walk. On cameras, use manual mode: shutter speeds of 1/500s or faster to eliminate motion blur, apertures of f/8–f/11 for maximum depth of field, and the lowest ISO your lighting allows.

Lighting consistency matters more than brightness. Overcast days produce excellent results because lighting remains constant as you move. Avoid changing light conditions, strong shadows that shift with your position, or mixed indoor/outdoor lighting.

Smartphones work excellently. Modern computational photography produces high-quality results. iPhone 13 Pro or later, Google Pixel 7+, and Samsung Galaxy S22+ all produce professional-quality input. Stick to the standard/main lens — ultra-wide distortion confuses the algorithms.

Know what challenges the algorithm. Mirrors, glass, transparent objects, featureless white walls, and moving elements (people walking through your scene) all create artifacts. Moving objects produce characteristic “floaters” — stray Gaussians floating in space — that require cleanup in SuperSplat.


Hardware requirements: what students actually need
For mobile workflows (Scaniverse, Polycam): any recent smartphone suffices. iPhone 11+ or modern Android handles capture and on-device processing fine.

For desktop training, GPU is the critical constraint. The original implementation recommends 24GB VRAM (RTX 3090 or 4090). But students can work around this:
  • RTX 3060 (12GB): Viable for most scenes with reduced iterations (7,000–15,000)
  • RTX 2060 (6–8GB): Limited to smaller scenes; use --data_device cpu to reduce VRAM at the cost of speed
  • Integrated graphics: Training isn’t feasible — use cloud-based or mobile tools instead

OpenSplat can train on CPU (~100x slower) and supports AMD GPUs and Apple Metal, enabling Mac users to train locally without CUDA.

For viewing splats, requirements are much lower — a GTX 1060 or integrated graphics can display pre-trained models.

Expect 50MB–1.5GB file sizes per scene, with complex outdoor environments at the higher end. Fast SSD storage improves training speed significantly.


File formats and the emerging standard
PLY (Polygon File Format) is the current industry standard for Gaussian splats, storing position, scale, rotation, opacity, and spherical harmonics for each Gaussian. Uncompressed files are large (~248 bytes per Gaussian), but nearly all tools support PLY import/export.
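The ~248-byte figure follows directly from the per-Gaussian layout used by the original INRIA implementation, assuming every property is stored as a 32-bit float (a sketch; exact layouts can vary by exporter):

```python
# Per-Gaussian float counts in the common 3DGS PLY layout (sketch):
FLOATS_PER_GAUSSIAN = {
    "position": 3,
    "normal": 3,     # stored for compatibility, unused by most renderers
    "sh_dc": 3,      # base RGB color (degree-0 spherical harmonics)
    "sh_rest": 45,   # 15 higher-order SH coefficients x 3 color channels
    "opacity": 1,
    "scale": 3,
    "rotation": 4,   # quaternion
}

bytes_per_gaussian = 4 * sum(FLOATS_PER_GAUSSIAN.values())  # 4 bytes per float
print(bytes_per_gaussian)                                   # 248
print(1_000_000 * bytes_per_gaussian / 1e6, "MB per million Gaussians")
```

The SH coefficients dominate the footprint, which is why dropping them (as compressed PLY does) or quantizing them (as SPZ does) yields such large savings.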

SPZ (Splat Zip), open-sourced by Niantic under MIT license, achieves 90% compression through fixed-point quantization and column-based organization — a 250MB PLY becomes approximately 25MB. Scaniverse exports SPZ natively, and converter tools exist for other sources.

Compressed PLY from PlayCanvas/SuperSplat offers approximately 4x compression by dropping spherical harmonics data, trading some view-dependent color accuracy for smaller files.

glTF standardization arrived in August 2025 when Khronos officially added 3DGS to the glTF ecosystem via KHR_gaussian_splatting extensions. This signals the technology’s maturation toward industry-standard interoperability, though tool support is still emerging.

For format conversion, 3dgsconverter (Python CLI) and gsbox (cross-platform CLI) handle most conversions, while SuperSplat can export to multiple formats through its browser interface.


Current limitations you should understand
Despite rapid progress, Gaussian splatting has meaningful constraints you should plan around.

Reflective and transparent surfaces remain challenging. While 3DGS handles reflections better than photogrammetry, mirrors and glass still produce artifacts. Research like 3DGS-DR (SIGGRAPH 2024) addresses deferred reflection handling, but these aren’t yet in consumer tools.

Thin structures — power lines, fences, antennas — exhibit visual artifacts because Gaussians struggle to represent long, linear features accurately.

Editing capabilities are limited. You can select and delete Gaussians, transform groups, and merge multiple splats, but boolean operations, semantic editing, and text-driven modifications remain research topics. SuperSplat provides selection and deletion tools; true DCC-grade editing doesn’t exist yet.

VR requires careful optimization. Achieving mandatory 72+ FPS at VR headset resolutions with wide field-of-view and stereo rendering demands aggressive optimization. Target under 400,000 Gaussians for standalone Quest performance.


Where the technology is heading
Gaussian splatting is evolving rapidly across multiple fronts.

4D Gaussian Splatting now enables dynamic scene capture. Methods like 4D-GS (CVPR 2024) achieve 82 FPS at 800×800 for temporal sequences, with training times around 8 minutes.

Compression and streaming standards are crystallizing. MPEG has opened an explorations track on Gaussian Splat Coding, signaling eventual formal standards. The SPZ format is emerging as a de facto standard for compressed delivery.

Alternative kernels beyond Gaussians — including Deformable Beta Splatting for sharper edges and Gabor Splats for textured surfaces — are active research areas that may improve reconstruction quality.

Semantic integration with language models (LangSplat, Feature 3DGS) enables text-driven queries and manipulation of splat scenes, pointing toward editing workflows where you describe changes rather than manually selecting Gaussians.

Industry analysts describe Gaussian splatting as approaching a “JPEG moment for spatial computing” — 2023 proved the speed, 2024 added geometric accuracy and mobile support, 2025 brought standardization, and 2026 should see production-grade workflows mature. Students building expertise now will be well-positioned as the technology becomes ubiquitous.


Getting started: your first splat in 30 minutes
For the quickest possible start:
  1. Download Scaniverse (free) on your smartphone
  2. Find a textured, static object — a statue, plant, or piece of furniture works well
  3. Capture by walking slowly in a spiral pattern, varying height
  4. Process on-device (~60 seconds)
  5. Export as PLY from the app
  6. Open SuperSplat in your browser
  7. Clean up any floaters using the selection tools
  8. Export as compressed PLY or HTML viewer

From there, experiment with larger scenes, different tools, and game engine integration as your projects require.


Essential resources
The foundational paper: “3D Gaussian Splatting for Real-Time Radiance Field Rendering” — Kerbl et al., SIGGRAPH 2023 (project page · GitHub repo)

Tools & editors: Scaniverse · PostShot · Polycam · SuperSplat Editor · Nerfstudio · RealityCapture

Open-source repos: INRIA original implementation · gsplat (CUDA backend) · UnityGaussianSplatting · OpenSplat

Learning & community: Awesome 3D Gaussian Splatting (curated paper/tool list) · Radiance Fields (news & platform reviews) · r/gaussian_splatting · Radiance Fields Discord · Jonathan Stephens tutorials (YouTube)

Formats & specs: SPZ format (Niantic) · PlayCanvas Gaussian Splatting docs · Khronos glTF



The barrier to entry for photorealistic 3D capture has never been lower. Your smartphone, a free app, and a browser-based editor are all you need to begin exploring this technology that’s reshaping how we represent and interact with 3D space.

If this was useful, share it with a student or colleague working in 3D. And if you want to see what I’m building with these tools — installations, speculative dioramas, the places where physical and digital overlap — subscribe and stick around.



Extracting worlds: the ethics and aesthetics of treating game assets as raw material
Filtered Point Cloud Extraction from Elden Ring | By Hyperdense


Eighty-seven percent of video games released before 2010 are now commercially unavailable — critically endangered cultural artifacts locked behind proprietary walls, decaying on obsolete media, vanishing faster than American silent films. This startling figure from the Video Game History Foundation crystallizes the central tension in contemporary debates about game extraction and transformation: Are commercial game worlds proprietary products to be consumed and discarded, or cultural commons to be preserved, studied, and reimagined?

The question matters urgently because artists, modders, preservationists, and scholars are already treating games as raw material — ripping textures, extracting 3D models, reverse-engineering engines, building new meaning from corporate creations. They do so in a legal gray zone where “tolerated infringement” is the operative reality, where the same act that earns Cory Arcangel a Whitney Biennial invitation could earn a fan developer a $12 million lawsuit from Nintendo. What follows is a map of this contested territory.

The enclosure of the digital commons


The most useful philosophical framework for understanding game asset extraction comes from James Boyle’s concept of the “second enclosure movement.” Just as the English commons were privatized through enclosure acts in the 18th century, Boyle argues we are witnessing an enclosure of “the intangible commons of the mind” — the raw material of cultural creation being fenced off by ever-expanding intellectual property regimes.

Video games exemplify this enclosure perfectly. They are built from shared cultural resources — mythologies, visual languages, musical traditions, narrative conventions — yet corporate ownership transforms them into private property accessible only on licensed terms. Lawrence Lessig’s distinction between “Read-Only” and “Read/Write” culture captures the stakes: RO culture produces passive consumers; RW culture enables creators who “blur boundaries” and make new meaning from existing materials. Current copyright law, Lessig argues, is “antiquated for digital media” since every use of a creative work in digital contexts involves copying.

McKenzie Wark’s Gamer Theory offers complementary framing. She describes “gamespace” — the game-like logic that increasingly structures contemporary life — arguing that games are not escapist entertainment but “the form in which the present can be felt and, in being felt, thought through.” If games reveal the algorithmic logic governing existence, then extracting and transforming their elements is not theft but critical inquiry. Wark herself practiced this openness by releasing Gamer Theory under Creative Commons.

The enclosure framing gains urgency from preservation realities. The Video Game History Foundation’s 2023 study found that less than 3% of games released before 1985 remain commercially available. Game Boy libraries show only 5.87% availability; even the relatively recent PlayStation 2 catalog is 88% out of print. As Frank Cifaldi of VGHF puts it: “Imagine if the only way to watch Titanic was to find a used VHS tape and maintain your own vintage equipment.” This is the reality for most video game history.

Artists who treat games as found objects


A robust tradition of artists has already answered the extraction question through practice. Cory Arcangel stands as perhaps the most celebrated figure — his Super Mario Clouds (2002) hacked an NES cartridge to display only scrolling sky and clouds, stripping the game to pure abstraction. Exhibited at the Whitney Biennial in 2004, the work treats commercial software as readymade raw material. Crucially, Arcangel shares his source code freely, framing the work as participatory: “Here’s what I made, and here’s the source code.”

The net art collective JODI (Joan Heemskerk and Dirk Paesmans) pioneered extreme game modification as deconstructivism. Their SOD (1999) replaced all Wolfenstein 3D graphics with abstract black-and-white geometries; their Untitled Game series created closed cubes with swirling patterns from intentional Quake engine glitches. The 1999 Webby Award winners now have work in MoMA and ZKM collections.

More recent practitioners have used extraction explicitly for critique. Joseph DeLappe’s dead-in-iraq (2006–2011) typed the names of killed U.S. soldiers into America’s Army chat, transforming the military recruitment game into a memorial. Georgie Roxby Smith’s 99 Problems [WASTED] repeatedly killed a female GTA V avatar to expose gendered game violence. Peggy Ahwesh’s She Puppet layered Tomb Raider footage with texts from Fernando Pessoa and Sun Ra to explore “women, virtual bodies, role-playing, identity issues.”

These artists inherit the appropriation art tradition of Sherrie Levine, who re-photographed Walker Evans’ Depression-era images as her own, and Richard Prince, whose re-photographed Instagram posts sold for $100,000. The legal outcomes have been mixed — Prince won fair use protection for 25 of 30 works in Cariou v. Prince (2013), though the 2023 Supreme Court decision in Andy Warhol Foundation v. Goldsmith significantly narrowed transformative use doctrine, requiring that artists demonstrate a “fundamentally different purpose” rather than merely adding new meaning.

The legal architecture of tolerated infringement


The legal reality governing game extraction is less a coherent framework than a patchwork of accommodations, gray zones, and selective enforcement. Understanding this architecture reveals both the risks and the spaces where transformative work occurs.

Foundational protections exist for reverse engineering and personal modification. In Sega v. Accolade (1992), the Ninth Circuit held that disassembly for interoperability is fair use “as a matter of law” when it’s “the only way to gain access to the ideas and functional elements” of software. Sony v. Connectix (2000) extended this to PlayStation emulation. Most significantly for everyday modding, Lewis Galoob v. Nintendo (1992) established that temporary alterations for personal enjoyment do not create infringing derivative works — “Having paid Nintendo a fair return, the consumer may experiment with the product and create new variations of play.”

However, commercial distribution changes everything. In Micro Star v. FormGen (1998), user-created Duke Nukem 3D levels were found to infringe the game’s “story” — not the code, but the narrative arc. The court reasoned that game characters’ adventures constitute a copyrightable story, and user levels creating new adventures constitute unauthorized sequels. This case haunts anyone distributing mods commercially.

The DMCA’s anti-circumvention provisions create additional hazards. Nintendo’s 2024 lawsuit against the Yuzu Switch emulator settled for $2.4 million not primarily on copyright grounds but on DMCA § 1201 claims — the emulator enabled decryption of copy protection. This theory potentially bypasses fair use entirely, since circumvention itself is the violation regardless of ultimate use.

The 2024 Copyright Office ruling rejecting remote access exemptions for game preservation illustrates the current legal ceiling. Even for scholarly research at accredited institutions, the Copyright Office determined that “preserved video games would be used for recreational purposes” — revealing that cultural access itself threatens corporate interests. An ESA lawyer stated there is “no combination of limitations” the industry would support for remote access.

What emerges is a legal landscape where much derivative game content exists as “tolerated infringement” — technically illegal but practically permitted due to enforcement economics, community relations, and marketing benefits. This tolerance can be withdrawn at any time, and publishers like Nintendo demonstrate that legal exposure is real.

The developer spectrum: from collaboration to control


Game studios range from open collaboration to aggressive control, a spectrum that shows different philosophies about the ownership of creative work are not only possible but commercially viable.

Bethesda represents maximum openness: releasing the Creation Kit (the same tools developers use) for free, with Todd Howard declaring “the sky’s the limit” for mods. Bethesda sees extended game lifespan — Skyrim’s decade-plus relevance — as proving the business case for supporting transformation.

CD Projekt Red evolved toward openness as a “final goodbye” to The Witcher 3, releasing REDkit in 2024 and hiring talented modders into professional roles. Community response: “This is not a modding tool, this is an engine, you guys are crazy.”

Sega explicitly tolerates fan works: “We usually have no issue with y’all using our blue boy to hone your art and dev skills” as long as works aren’t monetized. Sega famously hired Sonic fan developers to create the acclaimed Sonic Mania.

Nintendo occupies the opposite pole: AM2R (Metroid fan remake) taken down after 15 years’ development; 379 fan games removed from Game Jolt in December 2020; the Garry’s Mod community forced to remove all Nintendo-related content in April 2024. Nintendo’s patent attorney Koji Nishiura frames emulation itself as potentially illegal “depending on how it’s used.”

Developer Tim Schafer articulated preservation-friendly reasoning in 2002: “Most of the creative teams behind all those games have long since left the companies that published them, so there’s no way the people who deserve to are still making royalties off them. So, go ahead — steal this game!” Against this, the ESA argues preservation exemptions would “obliterate the market for reboots, remakes, and relaunches.”

Player labor and the co-creation of game worlds


Critical theory offers frameworks for understanding players not as passive consumers but as co-creators whose labor generates value appropriated by corporations.

Tiziana Terranova’s foundational essay “Free Labor: Producing Culture for the Digital Economy” (2000) describes internet creative labor as “simultaneously voluntarily given and unwaged, enjoyed and exploited.” This captures precisely the modder’s condition: investing creative energy that extends game lifespan and corporate profits without compensation.

Julian Kücklich’s concept of “playbor” names the convergence of play and labor in gaming communities, particularly modding. The line between entertainment and exploitation is deliberately blurred; player creativity generates value captured by companies.

Alexander Galloway reframes games fundamentally: “If photographs are images and films are moving images, then videogames are best defined as actions.” Games exist only through player action — players co-constitute the work. This ontological shift implies players have legitimate claims over experiences they help create.

Ian Bogost’s procedural rhetoric framework positions games as persuasive arguments made through processes rather than words or images. If games encode ideological arguments in their mechanics — Animal Crossing teaching capitalist debt cycles through gameplay — then modifying those mechanics is counter-argument, not theft.

Academic research confirms modders understand themselves as creators. Hector Postigo found modders “develop a specific rationale and set of norms rooted in Jenkins’ concept of a ‘moral economy’ to justify their appropriations.” Tom Welch argues queer mods constitute “affectively necessary labour” responding to inadequate representation — politically necessary creative work building what commercial products fail to provide.

Death of the author in the age of corporate immortality


Roland Barthes’ “Death of the Author” has been applied to games with revealing complications. The core argument — that reader interpretation should take primacy over authorial intention, that once created “the author dies” — seems especially applicable to games designed for active player participation.

Games scholars note that games are “designed such that players will have as much opportunity to create their own stories and validate their own interpretations of the fiction as possible.” Game designers create frameworks for meaning-making rather than fixed meanings — theoretically opening space for derivative work.

But unlike traditional texts, game companies retain power “from the afterlife.” When Sony/EverQuest banned a player for creating fan fiction, it demonstrated that while authorial intent may be theoretically dead, corporate power enforces interpretive control through technological and legal means. The author may be dead, but the rights-holder is immortal.

This creates a fundamental asymmetry. Commercial games draw freely from shared cultural resources — mythology, genre conventions, visual languages developed over centuries of collective creation — then fence off their outputs as private property. The extraction question is really about reciprocity: if corporations can take from the commons, why can’t artists take back?

The doujinshi alternative: what tolerance looks like


Japan’s doujinshi culture offers a vision of alternative relations between rights-holders and derivative creators. Fan-made comics using copyrighted characters thrive in a legal environment with no fair use doctrine — technically infringing but practically flourishing through cultural norms and industry tolerance.

Publishers view doujinshi as talent pipeline and free promotion. The “shinkokuzai” system means most copyright offenses require victim complaint for prosecution — and companies rarely complain. Prime Minister Abe stated doujinshi should be treated as parodies, not piracy.

This tolerance is real but conditional: keep sales small-scale, don’t compete commercially, respect “IP dignity.” When creators violate these norms — as in the 1999 Pokemon erotic doujinshi arrest — enforcement occurs. The system runs on enforcement discretion rather than legal rights, meaning creators depend on corporate goodwill that can be withdrawn.

The doujinshi model reveals that vibrant derivative culture is economically and culturally compatible with commercial interests — but also that framing this as tolerance rather than rights leaves creators perpetually precarious.

What remains uncertain


The legal landscape continues shifting. The 2023 Warhol decision narrowed transformative use; future cases may narrow it further. The ESA’s absolute opposition to preservation access suggests no voluntary resolution. AI training on game assets raises entirely novel questions about extraction and transformation.

Meanwhile, artists continue working in gray zones — tolerated until they aren’t, legal until challenged. The fundamental question remains unresolved: Are commercial game worlds private property to be protected or cultural material to be transformed? The answer will shape what becomes possible for digital creativity — and what disappears into the enclosure.


Elden Ring — Leyndell Royal Capital





Speculative design: A comprehensive field guide for 2026

Liam Young’s Planet City

Speculative design has transformed from an academic niche into a global practice reshaping how industries, governments, and cultural institutions imagine futures. This field — encompassing design fiction, critical futures, worldbuilding, and experiential scenarios — now influences everything from climate policy to AI ethics to urban planning. The discipline’s growth reflects a broader cultural shift: as accelerating technological and environmental change makes the future increasingly uncertain, designers have become essential guides for navigating possibility.

This guidebook maps the contemporary landscape of speculative design for educators, practitioners, and curious newcomers. It surveys the studios creating immersive future scenarios, the theorists shaping discourse, the aesthetic movements visualizing alternative worlds, and the educational pathways into this expanding field.


Superflux: Experiential futures at scale


Superflux, founded by Anab Jain and Jon Ardern in London in 2009, has become the field’s most visible practice. Both founders trained under Dunne & Raby at RCA and were awarded Royal Designers for Industry status in 2022 — the first practitioners recognized specifically for speculative design. Their work creates visceral encounters with possible futures through immersive installations.

The studio’s signature project, Mitigation of Shock (2017–ongoing), transforms gallery spaces into lived-in apartments from London circa 2050, complete with DIY hydroponics, insect farms, and survival manuals — materializing climate adaptation futures visitors can physically inhabit. Their Refuge for Resurgence (Venice Biennale 2021) staged a multispecies banquet imagining coexistence after ecological crisis. Clients range from Google AI and DeepMind to the UN Development Programme and the UK Cabinet Office.

Website: superflux.in


A modular, adaptable and fully circular exhibition that charts the multifarious business transformations paving the way for positive climate action today.

Near Future Laboratory: Design fiction’s methodological home


Near Future Laboratory, founded by Julian Bleecker in 2005 and now operating from Venice Beach with collaborators including Nicolas Nova, Fabien Girardin, and Nick Foster, established design fiction as a rigorous professional practice. Bleecker’s 2009 essay “Design Fiction: A Short Essay on Design, Science, Fact and Fiction” remains foundational, and the studio’s Manual of Design Fiction (2022) has become the canonical practitioner reference.

Their most influential artifact, the TBD Catalog (2014), presents 166 mundane consumer products from a “normal, ordinary, everyday near future” — AI-by-the-hour lawyers, dream-recording pillows, IoT toilets — demonstrating how design fiction makes abstract technological trajectories tangible through familiar objects. The catalog’s 10th anniversary edition appeared in 2024.

Website: nearfuturelaboratory.com

A prototyping-based research program exploring the potential of artificial intelligence in everyday life



Extrapolation Factory: Democratizing futures


Extrapolation Factory, founded by Elliott Montgomery and Chris Woebken in Brooklyn in 2012, pioneered participatory approaches that bring futures thinking to non-expert publics. Their breakthrough 99¢ Futures project placed speculative products in actual dollar stores across New York, transforming everyday retail into a space for public futures discourse.

The studio’s Extrapolation Factory Operator’s Manual provides eleven futures modeling tools designed for collaborative workshops. They won the Lexus Design Award Grand Prix in 2018 and have partnered with UNICEF, Walker Art Center, and TED Active. Their work demonstrates that speculative design need not remain in galleries — it can meet people where they already are.

Website: extrapolationfactory.com



Tellart: Strategic speculation for institutions


Tellart, operating since 2000 with headquarters now in Amsterdam, bridges speculative design and institutional transformation. Their exhibitions for the World Government Summit in Dubai directly inspired the creation of the UAE’s Museum of the Future. At COP28, their Dinner in 2050 project used AI-powered speculative dining to help delegates imagine climate-resilient food futures.

The studio describes its approach as “life-centric” — designing not just for humans but for all living systems. Clients include the V&A, World Economic Forum, and Van Gogh Museum.

Website: tellart.com


Sheikh Mohammed bin Rashid al Maktoum interacts with the Future of Education exhibit



Automato: Algorithmic everyday life


Automato.farm, formed in Shanghai in 2015 by Simone Rebaudengo, Matthieu Cherubini, Saurabh Datta, and Lorenzo Romagnoli, explores how algorithmic systems shape domestic life. Their Addicted Products imagines appliances with their own needs; Ethical Things creates a smart fan that crowdsources moral decisions from online workers.

The collective’s work has appeared at Vitra Design Museum, MAK Vienna, and Triennale Milano, winning the IoT Award 2016 for Design Fiction. Rebaudengo also serves as an associate of Near Future Laboratory.

Website: automato.farm

Objective Realities

Emerging collectives reshaping the field


Nonhuman Nonsense (Berlin/Stockholm), founded by Leo Fidjeland and Linnea Våglund, represents a new generation centering posthuman perspectives. Their Pink Chicken Project proposes genetically modifying all chickens pink using gene drive, creating a visible geological marker of the Anthropocene. Work has been presented at the UN Convention on Biological Diversity.

Marshmallow Laser Feast (London) creates multi-sensory immersive experiences exploring perception and nature. Projects like We Live in an Ocean of Air (Saatchi Gallery) and In the Eyes of the Animal (VR forest from animal perspectives) won the Wired Innovation Award for Experience Design.

Protopia Futures, led by Monika Bielskyte, advocates for proactive prototyping of inspiring, livable futures as alternatives to both dystopia and naive utopianism — she has traveled to over 100 countries researching diverse cultural approaches to futures.

Marshmallow Laser Feast — Of The Oak


Liam Young and Unknown Fields Division


Liam Young, an Australian architect based in Los Angeles, has been called “the man designing our futures” by the BBC. He runs both the MA in Fiction and Entertainment at SCI-Arc and the nomadic research collective Unknown Fields Division (with Kate Davies).

Unknown Fields organizes expeditions to extreme landscapes — rare earth mining sites, radioactive lakes, e-waste cities — documenting the hidden geographies behind consumer technologies. Young’s speculative film Planet City (2020) imagines all ten billion humans concentrated in a single hyper-dense metropolis, surrendering the rest of Earth to wilderness. His work resides in collections at MoMA, the Met, and the V&A.

In the Robot Skies (2016) was the first narrative film shot entirely by drones; Where the City Can’t See (2016) was the first fiction film using LIDAR scanning. Young represents architecture-as-worldbuilding — using the tools of film, documentary, and speculative narrative rather than traditional practice.

Website: https://liamyoung.org/


Alex McDowell and World Building Institute


Alex McDowell, the production designer behind Minority Report (2002), Fight Club, and Man of Steel, transformed film production design into a legitimate futures methodology. His work on Minority Report — collaborating with MIT scientists to imagine 2054’s gestural interfaces, personalized advertising, and predictive policing — influenced over 100 actual technology patents.

McDowell now directs the World Building Media Lab at USC, training students in narrative-driven world creation. His “World Building” approach treats environments as design substrates from which multiple stories can emerge, applicable to film, games, corporate strategy, and policy planning.

Website: https://worldbuilding.usc.edu/

Foundational figures

Speculative Everything

Anthony Dunne and Fiona Raby established speculative design as a recognized discipline through their teaching at RCA’s Design Interactions program (2005–2015) and their book Speculative Everything (2013). They coined “critical design” and popularized “speculative design,” developing the A/B Manifesto contrasting problem-solving (A) with problem-finding (B) approaches. Now University Professors at The New School/Parsons in New York, they continue shaping the field through their Designed Realities Studio. Their forthcoming book Not Here, Not Now (MIT Press, 2025) extends their thinking beyond future-oriented framing.

Bruce Sterling, science fiction author and cultural critic, coined “design fiction” in his 2005 book Shaping Things, defining it as “the deliberate use of diegetic prototypes to suspend disbelief about change.” His intellectual bridging of science fiction and design practice legitimized speculation as a professional methodology.

Julian Bleecker operationalized design fiction through Near Future Laboratory, creating the practical frameworks and artifacts that moved the concept from theory to practice. His original essay remains the field’s most-cited methodological text.

Stuart Candy pioneered “experiential futures” — making possible futures tangible through immersive scenarios rather than just objects. His framework treats futures as things to be lived rather than merely described. The card game The Thing From The Future (co-created with Jeff Watson) has become the field’s most widely-used imagination tool, with over 3.7 million possible prompts. Candy has held positions at Carnegie Mellon, Parsons, and currently serves as Outstanding Visiting Professor at Tec de Monterrey.

Contemporary practitioners across disciplines


Anab Jain (Superflux) has developed “More-Than-Human Centred Design,” expanding speculation beyond anthropocentric frames. She holds a professorship at the University of Applied Arts, Vienna, and received an Honorary Doctorate from University of the Arts London.

James Auger, now at École normale supérieure Paris-Saclay, produced some of critical design’s most provocative works through Auger-Loizeau, including Carnivorous Domestic Entertainment Robots — devices that trap and digest flies using microbial fuel cells. His scholarly writing, particularly “Speculative Design: Crafting the Speculation” (2013), formalized the field’s methodology.

Alexandra Daisy Ginsberg explores synthetic biology’s implications for design, authoring Synthetic Aesthetics (MIT Press, 2014) and creating Designing for the Sixth Extinction — speculative organisms engineered to support endangered species. Her PhD, Better, critically examined synthetic biology’s uncritical optimism.
Alexandra Daisy Ginsberg — Resurrecting the Sublime


Theorists and critics


Cameron Tonkinwise has become the field’s most cited critic, questioning speculative design’s political limitations and calling for more engaged practice. Bruce and Stephanie Tharp introduced “discursive design” as a unifying framework for critical, speculative, and adversarial approaches. Matt Malpass provided systematic taxonomies in Critical Design in Context (2017). Jeffrey and Shaowen Bardzell brought critical design into HCI discourse through influential CHI papers.

Emerging voices expanding the field


Deepa Butoliya develops “Post Normal Design” frameworks emphasizing decolonial and intersectional perspectives. Johanna Hoffman applies speculative futures to urban resilience. Rewa Wright (Māori/Aotearoa) brings Indigenous and cyberfeminist approaches to speculative practice. Phil Balagtas founded the Design Futures Initiative and Speculative Futures meetup network, building global community infrastructure.
Design Beyond the Center: Stories of Jugaad, Resilience, and Collective Knowledge



Exhibitions that defined the discipline


United Micro Kingdoms (Dunne & Raby, 2012–2013), commissioned by London’s Design Museum, remains the most ambitious speculative design exhibition ever mounted. It imagined the UK fractured into four self-contained “super-shires” — Digitarians, Bioliberals, Anarcho-evolutionists, and Communo-nuclearists — each with distinct political and technological systems materialized through films, objects, maps, and transportation designs.

Mitigation of Shock (Superflux, 2017–ongoing) pioneered the “future apartment” format, creating fully-realized domestic spaces visitors can inhabit. The installation — featuring DIY food production, water recycling, and survival protocols — has traveled from Barcelona to Singapore, demonstrating how experiential futures can viscerally communicate climate adaptation challenges.

Refuge for Resurgence (Superflux, Venice Biennale 2021) staged a banquet table where humans sit alongside foxes, pigeons, and insects, materializing multispecies futures thinking in architecturally-scaled installation.


Design fiction artifacts


The TBD Catalog (Near Future Laboratory, 2014) demonstrated design fiction’s power through mundane consumer products: AI services, quantified-self devices, and IoT appliances presented with the deadpan ordinariness of actual mail-order catalogs. The 10th anniversary edition (2024) remains the canonical design fiction artifact.

Minority Report’s production design (Alex McDowell, 2002) established film as a legitimate space for speculative prototyping. The gestural interface created for the film led to actual development contracts and patents, demonstrating how “diegetic prototypes” can influence real technology trajectories.

Her (Spike Jonze, 2013) provides another model: its near-future world is populated with believable everyday technologies — wireless earbuds, ambient computing, AI assistants — integrated so seamlessly into narrative that they feel inevitable rather than spectacular.
Her (2013)


Games as speculation tools


The Thing From The Future (Stuart Candy and Jeff Watson, 2014), a 108-card imagination game, has been played at UNESCO Youth Forums, SXSW with U.S. mayors, and in corporate strategy sessions worldwide. Players generate prompts combining future Arcs (Grow, Collapse, Discipline, Transform), Terrains, Objects, and Moods, then describe artifacts from those possible worlds. The Association of Professional Futurists recognized it as Most Significant Futures Work in 2015.

Tabletop games like The Quiet Year (Avery Alder), The Ground Itself (Everest Pipkin), and Microscope (Ben Robbins) have developed worldbuilding as collaborative play, influencing how speculative designers think about participatory futures.
The Thing from the Future — Speculative Futures card game


Design fiction: Making futures tangible


Design fiction creates artifacts from possible worlds to provoke discussion about futures. Key principles include:
  • Diegetic prototyping: Objects that exist within story worlds, functioning as “totems through which a larger story can be told.” The term comes from film theorist David Kirby, who studied how science consultants create believable technologies for cinema.
  • The future mundane: Focusing on everyday objects rather than spectacular technologies, because the most telling futures emerge in banal details — not flying cars, but how people buy groceries.
  • Discursive spaces: Design fictions succeed not by predicting futures but by creating spaces for conversation about possibilities.

The speculative–critical distinction


Critical design (coined by Anthony Dunne, 1999) uses design to “challenge narrow assumptions, preconceptions and givens about the role products play in everyday life.” It’s more attitude than method — employing satire, deadpan humor, and provocation.

Speculative design specifically concerns future scenarios and “what if?” questions. The Dunne & Raby A/B Manifesto contrasts these orientations: A (affirmative design) solves problems, provides answers, and serves industry; B (speculative design) finds problems, asks questions, and serves society.

The distinction has blurred in practice. Related terms now include discursive design, adversarial design, interrogative design, and design for debate — all sharing a commitment to design as inquiry rather than solution.


Worldbuilding methodologies


Worldbuilding — creating coherent fictional realities — draws on frameworks from transmedia studies:
  • Klastrup and Tosca’s Three Dimensions: Topos (setting, geography, physical/social laws), Ethos (moral framework), and Mythos (central stories and legends)
  • Negative capability: Strategic gaps that invite audience imagination
  • Transmedia scalability: Designing worlds that can sustain multiple stories across platforms

Mark J.P. Wolf’s Building Imaginary Worlds (2012) provides the theoretical foundation; Alex McDowell’s World Building Institute applies these principles to film, games, and strategic futures.


Futures studies methods


Speculative designers draw extensively on strategic foresight:
  • The Futures Cone (Joseph Voros): Visual representation distinguishing possible, plausible, probable, and preferable futures
  • Scenario planning: Developing multiple narrative futures, typically using two-axis frameworks creating four contrasting scenarios
  • Horizon scanning: Systematic detection of emerging trends and “weak signals”
  • The Futures Wheel (Jerome Glenn, 1971): Mapping cascading consequences from central changes

These methods provide structured approaches for imagining what speculative design then materializes.


Object-oriented ontology and design


Object-Oriented Ontology (OOO), developed by Graham Harman, rejects human exceptionalism: objects exist independently of human perception or access, and no object can be fully grasped. Timothy Morton’s “hyperobjects” — phenomena so distributed in time and space they transcend localization, like climate change or nuclear radiation — have become crucial conceptual tools for designers grappling with planetary-scale challenges.

New materialism (Karen Barad, Jane Bennett) emphasizes material agency: things are not passive recipients of human intention but have “positive, productive power of their own.” These philosophical frameworks support the shift from human-centered to more-than-human design.
Timothy Morton’s Hyperobjects

Aesthetic movements visualizing alternative futures

Wakanda — Black Panther (2018)


Solarpunk: The optimistic resistance


Solarpunk envisions sustainable futures where renewable technology integrates with nature and community. The aesthetic emerged around 2014 through Tumblr artists and gained momentum as resistance to “dystopia fatigue.” Visual characteristics include Art Nouveau influences, organic architecture (think Frank Lloyd Wright meets vertical forests), and technology-nature synthesis.

Key references include Studio Ghibli films, Boeri Studio’s Bosco Verticale in Milan, and Wakanda’s design in Black Panther. Solarpunk functions as both aesthetic movement and activist toolkit — proposing not just images but practices for preferable futures.

Lunarpunk offers the “yin to solarpunk’s yang” — nocturnal, spiritual, individualistic futures featuring bioluminescence, fungi, and dark flowing garments. The 2023 anthology Bioluminescent formalized this emerging aesthetic.

Hopepunk, coined by Alexandra Rowland in 2017, emphasizes “weaponized optimism” — characters fighting for positive change through kindness and collective action rather than cynicism. It entered Collins English Dictionary in 2022.

Afrofuturism and its expansions


Afrofuturism, theorized by Mark Dery in 1993, explores African diaspora culture’s intersection with technology and speculation. Rooted in Sun Ra’s cosmic jazz performances and Octavia Butler’s visionary fiction, it found mainstream visibility through Black Panther (2018).

Contemporary Afrofuturist designers and artists include Wangechi Mutu, Cyrus Kabiru, Lauren Halsey, and musicians from Janelle Monáe to Erykah Badu. The Smithsonian’s National Museum of African American History and Culture mounted “Afrofuturism: A History of Black Futures” as a major exhibition.

Writer Nnedi Okorafor distinguishes Africanfuturism as specifically rooted in African (rather than diasporic) perspectives — a reminder that these categories continue evolving.

Indigenous Futurisms


Indigenous Futurisms, formalized by Grace Dillon’s anthology Walking the Clouds (2012), express Indigenous perspectives on time, technology, and speculation. Key concepts include non-linear temporality (past/present/future interconnected) and the recognition that post-apocalyptic scenarios are already reality for communities that survived colonization.

Practitioners include Skawennati (Mohawk), Elizabeth LaPensée (Anishinaabe/Métis), Santiago X (Coushatta/Chamoru), and Dennis Numkena (Hopi), described as “the first Indigenous Futurist in architecture.” Institutions like the Center for Native Futures (Chicago) and Initiative for Indigenous Futures support this expanding practice.
Walking the Clouds — Grace Dillon


Gulf Futurism


Gulf Futurism, coined by Sophia Al-Maria and Fatima Al Qadiri in 2012, examines how Western cyberpunk futures are already manifest in the Persian Gulf’s petro-capitalist present. The concept captures the “quantum leap” between Bedouin nomadic life and hyper-modern consumer culture within a single generation.

Al-Maria’s work includes “Black Friday” (Whitney Museum) and the memoir The Girl Who Fell to Earth. The aesthetic emphasizes gleaming skyscrapers, air-conditioned mega-malls, and the contradictions between luxury development and exploited labor.

Hauntology and liminal aesthetics


Hauntology, adapted from Jacques Derrida by critic Mark Fisher, describes cultural stagnation — society “haunted” by lost futures and unable to generate genuinely new forms. The aesthetic manifests in Ghost Box Records artists sampling 1960s British public information films and in the pervasive sense that contemporary culture recycles rather than invents.

Liminal space aesthetics emerged from internet culture, particularly the 2019 “Backrooms” phenomenon: images of empty transitional spaces — abandoned malls, vacant hotels, deserted pools — evoking nostalgia and unease. These aesthetics provide visual languages for temporal disjunction and the “slow cancellation of the future” Fisher diagnosed.

AI-generated aesthetics


The explosion of generative AI tools (Midjourney, DALL·E, Stable Diffusion) since 2022 has created distinct aesthetic tendencies: hyper-detailed rendering, cinematic lighting, dreamlike juxtapositions — and a concerning tendency toward generic retrofuturism that may narrow rather than expand speculative imagination.

Practitioners debate whether AI accelerates speculation (enabling rapid visualization) or undermines it (defaulting to familiar futures). Ethical concerns include training data sourced without consent and the risk of what critic Boris Müller calls “algorithmic kitsch” — visually impressive but intellectually empty futures.

Educational pathways into speculative practice

Veshak by Ina Chen | SCI-Arc Fiction and Entertainment


Graduate programs


SCI-Arc Fiction and Entertainment (Los Angeles), directed by Liam Young, offers a 1-year MS embedded in LA’s entertainment industry. Students work with Framestore, Disney Imagineering, and Netflix, with graduate work premiering at Sundance and Tribeca. This program most directly trains speculative designers for film, games, and immersive media.

Royal College of Art (London) remains historically central, though Design Interactions closed when Dunne & Raby departed in 2015. Current programs include Design Futures MDes and Information Experience Design MA, continuing critical and speculative traditions through faculty like John V Willshire and connections to alumni like Superflux’s founders.

Carnegie Mellon’s PhD in Transition Design integrates speculative methods within a framework for addressing “wicked problems” and catalyzing sustainable transitions. Faculty include Terry Irwin, Cameron Tonkinwise, and Gideon Kossoff.

Parsons School of Design offers a Futures Studies and Speculative Design Certificate (online, ~$1,760), accessible professional training in scenario planning and design fiction with faculty including Elliott Montgomery from Extrapolation Factory.

Design Academy Eindhoven houses the Critical Inquiry Lab and Social Design MA, emphasizing research-based design for public activation. IAAC/ELISAVA Barcelona’s Master in Design for Emergent Futures combines speculation with digital fabrication and Fab Academy integration.


Workshops and alternative education


The School of Critical Design (founded by J. Paul Neeley) offers online courses from self-paced basics to advanced masterclasses, making speculative methods accessible globally. CIID Summer Schools provide 5-day intensives on topics like “Designing Ethical Futures.”

The Speculative Futures network, founded by Phil Balagtas through the Design Futures Initiative, operates 70+ local chapters worldwide — Berlin’s chapter alone has 2,000+ members. Quarterly meetups, workshops, and the annual PRIMER Conference provide peer learning outside academic structures.

Residencies


Eyebeam (Brooklyn) has supported artists working with technology since 1998; their 2025/2026 “Speculating on Plurality” program offers $4,000 stipends for emerging artists. Microsoft Research Artist in Residence brings artists into science-technology environments. Google X maintains internal speculative design practice using design fiction for “moonshot” technologies — Nick Foster, Near Future Laboratory co-founder, serves as Head of Design.

Essential texts

  • Speculative Everything (Dunne & Raby, MIT Press, 2013): The foundational text defining the field
  • The Manual of Design Fiction (Near Future Laboratory, 2022): Practical methodology guide
  • Discursive Design (Bruce & Stephanie Tharp, MIT Press, 2018): Expanded theoretical framework
  • Building Imaginary Worlds (Mark J.P. Wolf, Routledge, 2012): Worldbuilding theory
  • Synthetic Aesthetics (Ginsberg et al., MIT Press, 2014): Design and synthetic biology
  • Designs for the Pluriverse (Arturo Escobar, Duke, 2018): Decolonial design approaches


Journals and magazines


Design Issues (MIT Press) and She Ji (Tongji/Elsevier) publish academic speculative design research. Journal of Futures Studies bridges foresight and design communities. Dezeen, Core77, and CLOT Magazine cover speculative projects for broader audiences. Ding Magazine offers futures-focused interviews and essays.


Conferences and festivals


Ars Electronica (Linz) and Transmediale (Berlin) are primary venues for art-technology-speculation intersection. Dutch Design Week (Eindhoven) and Milan Design Week feature speculative exhibitions. Academic venues include CHI and DIS (ACM conferences), Design Research Society Conference, and Nordes (Nordic Design Research).

Climate and ecological futures


Climate speculation has become central to contemporary practice. Superflux’s “Mitigation of Shock” and “Invocation for Hope” make climate adaptation viscerally experiential. Design Earth (Rania Ghosn and El Hadi Jazairy) produces architectural fictions like The Planet After Geoengineering graphic novel. The shift from dystopian warning to “active hope” narratives reflects the field’s evolution toward inspiring collective action.

AI and algorithmic futures


As AI capabilities accelerate, speculative designers increasingly explore algorithmic implications. Near Future Laboratory’s AI Designed Fictions Research Studio prototypes everyday AI artifacts; Automato’s work imagines domestic life shaped by machine logic. The emergence of generative AI as both subject and tool of speculation creates recursive challenges the field is still working through.


Biotechnology and synthetic biology


Alexandra Daisy Ginsberg’s “Designing for the Sixth Extinction” imagines synthetic organisms supporting endangered ecosystems. The Synthetic Aesthetics project brought designers and biologists together to prototype biological computers and bacterial textiles. Central question: what does it mean to “design nature”?


Post-human and more-than-human


Superflux’s “More-Than-Human Centred Design” framework, influenced by OOO and new materialism, expands speculation beyond human needs to encompass other species, ecosystems, and objects. This represents the field’s most significant theoretical development since Dunne & Raby’s original critical design framework.


Critical perspectives and ongoing debates


The field faces persistent critiques. Cameron Tonkinwise and others question speculative design’s political efficacy: does provocative gallery work actually change anything, or merely comment? Critics identify Eurocentric bias — the field emerged from London institutions and remains concentrated in wealthy Western contexts. Calls for decolonial approaches, diverse futures representation, and engagement with Global South perspectives continue reshaping practice.

The relationship between speculation and action remains contentious. Some practitioners emphasize design fiction’s discursive value — opening conversations, not providing solutions. Others push toward “speculative activism” that connects imagination to material change.

The arrival of generative AI raises new questions about human creativity, authorship, and whether algorithmic tools will expand or constrain speculative imagination. The field’s next chapter will likely be shaped by how practitioners navigate these tensions between critique and construction, gallery and public, human and posthuman perspectives.


Speculation as essential practice


Speculative design has matured from a provocative academic experiment into essential infrastructure for navigating uncertain futures. Its methods — design fiction, experiential scenarios, diegetic prototyping — are now deployed by governments exploring policy options, corporations imagining product trajectories, and cultural institutions engaging publics with climate, AI, and biotechnology challenges.

The field’s most significant contribution may be its insistence that futures are designed — that the images, objects, and narratives shaping collective imagination are not neutral but carry values, assumptions, and political implications. In making this visible, speculative design creates space for intervention: if futures are designed, they can be redesigned.

For educators, this survey provides entry points across theory, practice, and aesthetics. For practitioners, it maps the contemporary landscape and its debates. For anyone curious about how design might help navigate accelerating change, speculative design offers both methods and examples — not predictions of what will happen, but provocations about what could.

The future remains unwritten. Speculative design helps us draft.
