
3D Modeling & Art Pipeline

The definitive reference for all 3D modeling, scanning, rendering, and texture workflows used across VPW table builds. Organized by workflow stage from asset acquisition through final VPX integration.

Quick Start

  1. Scan at 600 DPI in TIFF format using an HP 4670/4600 scanner. Verify dimensions against physical measurements (divide pixel count by DPI)
  2. Blender collections: VLM.Bake for static objects, VLM.Lights for all lights, VLM.Movables for animated objects. At least one light collection must use "Solid" bake mode
  3. Nestmap size must be 8192 (8K). Set render ratio to 25% for test renders, 100% for final
  4. 31-character name limit on VPX primitives -- keep component names at most 6 characters. Prefix Blender modifiers with NoExp to skip them during export
  5. After every Blender import: re-apply reflection/refraction probes, re-check "Hide parts behind" on playfield, verify timer objects are not on VLM.Visual layer
  6. Set render probe roughness to 0 for best performance (up to 40% FPS improvement)
  7. For plastics, use PNG with alpha channel -- JPG kills transmission even with correct material settings

Playfield Scanning and Photography

Flatbed Scanner Workflow

The HP 4670 (or 4600) flatbed scanner at 600 DPI produces high-quality playfield scans in TIFF format. Stitching software (CK/PTGui) handles combining multiple scanner passes into a single playfield image. The scanner excels at playfield surfaces but struggles with plastics due to their transparency and depth (first documented during the Earthshaker build by friscopinball).

For oversized playfields requiring multiple passes, borgdog built a jig to move a regular scanner in a grid pattern, producing stitched results reaching approximately 20K resolution raw before downscaling (learned during Jokerz). Scanner stitches can introduce skew -- EBisLit's Guns N' Roses stitch had visible left-side droop at the slingshots. PTGui software generally produces better stitches than manual Photoshop alignment.

Hardtop overlays (plastic overlays for refinishing playfields, approximately $350 from outsideedgeproducts.com) provide another source of high-quality art. The artwork is printed on the underside, so scan quality is high. At 150 DPI, scans produce excellent VPX-ready assets (discovered during Jokerz). After scanning, the hardtop can be resold to recover most of the cost.

Scanning a brand-new CPR (Classic Playfield Reproductions) playfield yields pristine results with no signs of wear, as demonstrated on Earthshaker.

Playfield Photography and Stitching

Camera-based playfield capture offers different tradeoffs than flatbed scanning. Cameras can capture the entire playfield in a single shot for alignment reference, but introduce barrel/pincushion lens distortion and perspective flattening (detailed comparison on Guns N' Roses).

Barrel distortion correction: Photoshop can auto-correct lens distortion via Filter > Lens Correction if the photo's EXIF data contains the lens model (documented on Special Force).

Calibration trick (from mlager8 during Guns N' Roses): Photograph a printed grid pattern perpendicular to the camera. Use Photoshop's lens distortion filter to straighten the grid. Apply the same correction to all subsequent images taken with the same lens and zoom setting.

Best practice: Use a low-resolution full-playfield photo as the alignment reference, then stitch high-resolution scans to match it.

Real table teardown photography (from benji084 during Police Force): Print a sheet of crosshairs spaced 2 inches apart, cut a rectangular window in the center, and use it as a "picture frame" for each photo. The crosshairs provide reference points for correcting skew, rotation, and distance when digitizing the images later.

DPI Verification and Dimension Validation

Always verify scan DPI against known physical dimensions. Divide pixel count by DPI to confirm actual size -- for example, a 12194x27629 pixel scan at 600 DPI gives 20.32"x46.04", confirming the known 20.25"x46" dimensions (first documented on Breakshot by niwak).

Standard dimensions to verify against:

  • Williams/Bally standard: 20.25" x 46"
  • Williams/Bally widebody: 23" x 46"
  • Bally widebody (Black Rose era): 20.25" x 46"
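As a sanity check, the pixels-to-inches arithmetic can be wrapped in a small helper (function names and the tolerance value are illustrative, not part of any toolkit):

```python
def scan_size_inches(width_px, height_px, dpi):
    """Convert scan pixel dimensions to physical inches (pixels / DPI)."""
    return width_px / dpi, height_px / dpi

def matches_known_size(scan_in, known_in, tolerance=0.15):
    """True if both scan dimensions fall within tolerance (inches) of a known size."""
    return all(abs(s - k) <= tolerance for s, k in zip(scan_in, known_in))

# Breakshot example: a 12194x27629 px scan at 600 DPI comes out near 20.32" x 46.05",
# matching the standard Williams/Bally 20.25" x 46" playfield.
size = scan_size_inches(12194, 27629, 600)
print(matches_known_size(size, (20.25, 46.0)))
```

The same check against the 23" widebody dimensions would fail, which is exactly the kind of mismatch that exposed the wrong "internet wisdom" sizes.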

Critical lesson from Guns N' Roses: The "internet wisdom" for GnR's playfield size (25" x 51.75", stated everywhere including VPX dimension manager) was wrong. Actual measured size was 23" x 46". Validation process: compare width/height ratios across 3 independent sources, verify with physical measurement. Never trust playfield dimensions from online databases without verification. Use circular features (pop bumper holes, post holes) as a circularity check -- if they render as ovals, the dimensions are wrong.

Playfield dimension verification workflow (from Cirqus Voltaire): Export the VPX table and import into Blender to read exact dimensions. Compare against known physical dimensions where 50 VP units = 1.0625 inches. Check that the playfield image aspect ratio matches the VPX table width/height ratio -- a mismatch means the image is being stretched.
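The 50 VP units = 1.0625" convention converts directly between the two unit systems (helper names are illustrative):

```python
VP_UNITS_PER_INCH = 50 / 1.0625  # toolkit convention: 50 VP units = 1.0625 inches

def inches_to_vp(inches):
    """Physical inches to Visual Pinball units."""
    return inches * VP_UNITS_PER_INCH

def vp_to_inches(units):
    """Visual Pinball units back to physical inches."""
    return units / VP_UNITS_PER_INCH

# A standard 20.25" x 46" Williams/Bally playfield expressed in VP units:
print(round(inches_to_vp(20.25), 1), round(inches_to_vp(46), 1))
```

Comparing these converted values against the table dimensions read back from a Blender import is the same ratio check described above.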

Scan Color Correction

Playfield scans come out flat and desaturated compared to real tables. Color correction should primarily bring black levels down to an eye-pleasing point without crushing detail (from bord1947 on Medieval Madness). When debating saturation, always reference real machine photos or in-person experience rather than videos, which compress color significantly (learned on Defender).

Selective Gaussian blur removes JPEG compression artifacts from cabinet artwork: use GIMP's Selective Gaussian Blur filter, duplicate the layer, apply the filter, then delete detail sections from the blurred layer to preserve sharp edges (from sixtoe on Judge Dredd).

Playfield softening simulates clearcoat diffusion: duplicate the playfield image layer, apply 4px Gaussian blur, set blurred layer opacity to 50%. Clear the insert holes in the blurred layer first, or black borders will appear around inserts (from Judge Dredd).


AI Upscaling

Multiple AI upscaling techniques have been used across VPW builds. Redbone615's workflow (documented on Earthshaker) uses chaiNNer with multiple AI models layered in Photoshop: realesrgan-x4plus-anime_x4 and UltraMix_Restore are favorites, with up to 6 models overlaid, erasing parts of one layer to let better results from another show through. Working Photoshop files can reach 2GB. GnR nestmaps were upscaled to 14976x16384 with no FPS impact.

The waifu2x neural network upscaler (designed for anime/cartoon images) works well for pinball plastic artwork and playfield graphics (used by sixtoe on No Fear). However, a proper high-resolution scan is always preferable to AI upscaling.

Tomate80 used AI upscaling on playfield scans and plastic images before applying them as textures on Last Action Hero, improving visual clarity beyond what the original scan resolution provided.


Playfield Art: Redraw vs Scan

Two philosophies compete in VPW table development. Brad1X created full playfield and plastic redraws for Fish Tales using Adobe Illustrator, traced from high-resolution scans. The redraw approach produces perfectly clean vector art that scales to any resolution. For toolkit tables, redraws provide better source material since the toolkit renders at very high resolution where scan artifacts become visible.

The counter-argument: some builders prefer scan authenticity. Tomate80 noted on Earthshaker: "reality isn't perfect... some flaws make things look more realistic, less video game." The consensus is that recreated text is acceptable if matched to the original, but scan imperfections can add character.

Bad Cats provides a notable example: the playfield and all plastics were completely redrawn in vector art by brad1x using Adobe Illustrator, then exported to Photoshop for final compositing and saved as PNGs. This is credited as "re-drawn" rather than "remaster" to distinguish from simple filter/upscale work.


Blender Toolkit Workflow

Collection Organization

The VPX Lightmapper (VLM) toolkit in Blender organizes objects into specific collections that control bake behavior (first documented on Haunted House, refined on Iron Man):

  • VLM.Bake: Objects whose surfaces receive baked lighting (playfield, ramps, plastics). Sub-collections for Parts (non-moving objects), underPF (objects visible through transparent playfield areas), and separate playfield collections if different depth biases are needed.
  • VLM.Lights: All light objects. Inserts in one collection (Split mode), GI per string (Grouped, with shadow-casting lights in separate Split collection), Flashers in one collection (Split), Room light collection (Solid render mode).
  • VLM.Movables: Animated objects (targets, flippers, gates, spinners). Use the hide checkbox (eye icon) to mark movable toys.

Critical rule: At least one light collection must have "Solid" bake mode selected. Without a "Solid" light scenario (typically the Room/environment light), no bakemaps are generated -- only lightmaps (discovered on Haunted House).

Batch Export Settings

Key settings discovered through trial and error across multiple builds:

  • Nestmap size: Must be set to 8192 (8K), not 256. At 256, the export produces approximately 100 tiny nestmap images instead of properly packing textures.
  • Render ratio: Use 25% for quick 1K test renders (2-3 hours), 50% for 2K iteration renders, 100% for final 4K. The ratio and Blender's main render percentage are multiplicative (from Last Action Hero).
  • Export mode: Use "Hide" not "Remove All" to preserve original objects in the VPX file.
  • Max lights: Set to 12-16 to prevent GPU out-of-memory errors during batch renders. Setting to 0 (unlimited) can crash even powerful GPUs (from Spectrum).

Naming Conventions and Limits

VPX has a 31-character maximum on primitive/lightmap names. Names longer than 31 characters are silently truncated, causing duplicate name conflicts, lightmap assignment failures, and script reference errors (first documented on Haunted House, confirmed on Iron Man).

Toolkit lightmap names follow the pattern: LM_<light collection>_<light name>_<part collection>_<part name>. Keep all component names at most 6 characters. Insert lights use the pattern: p32on (lit state), p32off (unlit state), p32bulb (physical bulb light) where the number corresponds to the ROM lamp number (from Earthshaker).

Flasher naming convention: lowercase "f" prefix + "1" prepended to the solenoid callback number. Example: solenoid callback 9 becomes f109 (standardized by sixtoe on Police Force).
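The naming rules above can be validated programmatically before a batch export. This sketch assumes the two-digit zero-padding implied by the f109 example; all helper names are illustrative:

```python
VPX_NAME_LIMIT = 31  # names longer than this are silently truncated by VPX

def lightmap_name(light_coll, light, part_coll, part):
    """Toolkit pattern: LM_<light collection>_<light name>_<part collection>_<part name>."""
    return f"LM_{light_coll}_{light}_{part_coll}_{part}"

def check_name(name):
    """True if the name survives VPX's 31-character limit without truncation."""
    return len(name) <= VPX_NAME_LIMIT

def flasher_name(callback_num):
    """Flasher convention: lowercase 'f' + '1' prepended to the solenoid
    callback number (zero-padded per the f109 example)."""
    return f"f1{callback_num:02d}"

print(flasher_name(9))  # -> f109
print(check_name(lightmap_name("Flash", "f109", "Parts", "ramp1")))
```

Keeping component names at 6 characters or fewer leaves comfortable headroom under the 31-character cap even for the longest lightmap patterns.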

Nestmap Architecture

Toolkit rendering generates multiple images per object -- one for each lighting scenario (bakemap/base, GI-on, GI-splits, individual flasher contributions). For Bad Cats, 134 separate images exist just for the playfield. These images are trimmed and packed into 8K nestmaps by the toolkit algorithm (explained by benji084 on Earthshaker).

Two rendering methods exist: (1) "Camera" render -- overhead snapshot, used for most objects including plastics, shared lightmaps mean fewer draw calls. (2) "Unwrapped baked projection" -- for moving objects and playfield, requires manual UV unwrapping, generates separate lightmaps per object.

Nestmap packing affects resolution: objects that are visually prominent (close to the camera/player) should get proportionally more texture space than objects in peripheral areas (from Diner). GnR's effective playfield width was only 2050px in the nestmap despite 8K nestmaps (discovered on Earthshaker by redbone615).

Import and Post-Import Workflow

After Blender batch completes, integration into VPX requires (from Godzilla and No Fear):

  1. Import VLM materials from another toolkit table if the toolkit does not assign them automatically
  2. Apply VLM.Bake.Solid to opaque elements, VLM.Bake.Active to transparent elements
  3. Set up movable objects using _Animate methods
  4. Hide all collidable physics primitives and VPX light objects
  5. Assign render/refraction probes (these get lost on import and must be re-assigned)

Post-import checklist (mandatory after every Blender batch import, documented on No Fear):

  1. Re-apply "Hide parts behind" checkbox on playfield
  2. Re-apply reflection probe to playfield (strength 0.2, not 1)
  3. Re-apply refraction probes to ramp layers
  4. Set playfield image to the actual playfield image for correct ball reflections
  5. Verify timer objects are not on the VLM.Visual layer (they get deleted on import)

NoExp Modifier Convention

When adding modifiers in Blender for toolkit objects, name bevel and subdivision modifiers with the prefix NoExp (e.g., NoExp_Bevel) to prevent them from being applied during toolkit export (from niwak on Iron Man). Without this, a single screw with subdivision modifier can balloon a Blender file from 250MB to 1.25GB (discovered on Last Action Hero).
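The toolkit's prefix check amounts to a simple name filter. A minimal sketch outside Blender, using plain strings rather than bpy modifier objects (the function name is illustrative):

```python
def exportable_modifiers(modifier_names):
    """Keep only modifiers that should be applied at export time;
    anything prefixed NoExp is skipped, mirroring the toolkit convention."""
    return [m for m in modifier_names if not m.startswith("NoExp")]

# The NoExp-prefixed bevel and subdivision stay viewport-only:
print(exportable_modifiers(["NoExp_Bevel", "Solidify", "NoExp_Subdivision"]))
```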


Blender Ramp Modeling and Geometry Cleanup

When preparing VPX ramp meshes in Blender (documented on Congo by flupper1):

  1. Merge by distance (M > by distance) to weld unconnected vertices from VPX export
  2. Remove internal faces that corrupt material rendering
  3. Add subdivision modifier for smoother curves, but add a bevel modifier BEFORE subdivision (per Flupper's tutorial ramp workflow)
  4. Clean UV mapping after geometry fixes

VPX-exported ramp geometry always has unmerged/unwelded polygons -- this is a VPX export limitation, not a modeling error.

Ramps can be modeled parametrically in CAD, allowing easy adjustment when real measurements arrive later (from Bram Stoker's Dracula). Ramp height can be derived from the machine manual: find the length of mounting posts listed in the parts table, then subtract the ramp wall height (from Bram Stoker's Dracula).

Ramp UV unwrap requirement: If a ramp has a solidify modifier, the UV unwrap must be done AFTER applying the modifier -- otherwise the bake only renders the edges (discovered on Creature from the Black Lagoon).


Texture Baking and EXR HDR Pipeline

EXR DWAA Compression

Toolkit-generated EXR lightmap files use DWAA compression. Compression levels 20-25 approximate Blender's default output quality. Level 45 is too aggressive -- it loses HDR brightness data visible as dimmer light reflections (from Fish Tales). The toolkit saves EXR as 16-bit half-float with DWAA, which compresses very small because lightmaps are mostly alpha (dark areas) with small areas of light data.

When post-processing EXR files (e.g., fixing lightmap artifacts in Photoshop), the workflow must preserve HDR data: save uncompressed from Photoshop, reopen in Blender's image editor, then re-save with correct DWAA settings. Photoshop's own EXR compression produces different results than Blender's.

HDR Pipeline Benefits

EXR floating-point lightmaps enable true HDR rendering in VPX (documented on Fish Tales): flashers rendered at full brightness retain detail across the entire brightness range as VPX fades them. High dynamic range means bright flasher values above 1.0 are preserved, revealing additional detail as brightness decreases -- impossible with 8-bit images that clip at 1.0. VPX converts all textures to GPU float variants internally, so EXR files avoid a lossy conversion step.

Lightmap Post-Processing Risks

Post-processing toolkit lightmap EXR files carries significant risks (from Fish Tales): per-pixel alignment can break (even 1-pixel shifts cause visible seams), HDR information can be lost if the editing software clips values above 1.0, and format conversion can introduce artifacts. Blender's Texture Paint mode is the safest tool for small fixes because it operates on the EXR natively.


Ambient Occlusion Shadow Techniques

Two approaches exist for playfield ambient occlusion shadows, with significant tradeoffs discovered during the Hook build:

Option 1 -- Baked into playfield image:

  • Multiply the AO render onto the playfield texture in Photoshop/GIMP at the desired opacity (approximately 80% multiply)
  • Simpler, no performance cost, no depth bias issues
  • Cannot be tied to GI state (shadows remain visible when GI is off)
  • Any object repositioning requires a new AO render

Option 2 -- Separate flasher layer:

  • AO image as a separate flasher primitive overlaid on the playfield
  • Can be tied to GI: PlayfieldShadow.opacity = 110 - gilevel * 100 (visible when GI off, fades when GI on)
  • Causes depth bias clipping artifacts at high opacity values
  • Two stacked alpha layers (PF + AO flasher) cause VPX rendering issues with ramp decals

Recommendation: The baked approach is more reliable. The dynamic GI-linked flasher approach was attempted on Hook but reverted after causing too many visual artifacts. Metallica achieved good results with the flasher approach, but it requires careful depth bias management.

AO render tips:

  • Render at 4K, resize to 2K for final use
  • Convert to a black-and-white shadow image and tune contrast so center areas have no grey
  • Re-render AO whenever objects are repositioned
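The GI-linked opacity formula from Option 2 is a straight linear fade. Sketched in Python (the original lives in the table's VPX script):

```python
def playfield_shadow_opacity(gilevel):
    """AO flasher opacity linked to GI level (Hook/Metallica approach):
    full shadows (110) with GI off, nearly gone (10) at full GI."""
    return 110 - gilevel * 100

print(playfield_shadow_opacity(0.0))  # GI off: shadows fully visible
print(playfield_shadow_opacity(1.0))  # GI on: shadows fade out
```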


Render Probes

Refraction Probes

Refraction probes are created in VPX's Render Probe Manager (not toolkit side). Key rules established across multiple builds:

  • One refraction probe per overlapping depth layer -- overlapping transparent objects at different heights each need their own probe (per niwak on Police Force)
  • Non-overlapping transparent objects can share a single probe (better for performance)
  • Default thickness of 10 is too much -- values of 1-4 work better for most plastics (from Medieval Madness)
  • Refraction adds a "dirty look" that may or may not be desirable

Reflection Probes and Roughness

Render probe roughness (blur function) causes significant performance issues including ball jitter. Setting roughness to 0 for all render probes fixes performance -- up to 40% FPS improvement was measured on Fish Tales. The blur function has known bugs: white flashing on playfield and ghosting in VR (GitHub issue #939).

For VR specifically, non-zero roughness on playfield reflection probes causes white flashing artifacts (fine on desktop). The fix is to create two render probes -- one with roughness for desktop, one with zero roughness for VR (from Breakshot and Fish Tales).

Stern playfields have an "orange peel" surface texture, so roughness of 1-2 on the reflection probe looks more realistic than mirror-like settings (from Metallica).

Performance Impact

Niwak's top 3 performance recommendations for toolkit tables (from Police Force):

  1. Limit non-opaque parts -- transparent/refractive objects are expensive
  2. Limit bake groups -- fewer texture atlas pages means fewer draw calls
  3. Limit active lights -- every active light that casts shadows adds overhead


Plastics Modeling

Bevel Workflow

Plastics rendering requires compositing depth/thickness effects. For Batman DE, benji084 described a baking approach in Octane/C4D: apply texture as an emissive material with camera directly overhead, no reflective sheen on the top face. Bevels are vector-painted with a separate material for gradual falloff from clear plastic (top) to rough/refractive (edge). UVs for bevels must be unwrapped/relaxed so they lay flat around the UV of the top/cap.

Flupper1's plastics workflow uses Inkscape SVG tracing: trace the plastic shape as SVG paths, import into Blender, extrude to give thickness, apply textures (from Judge Dredd). This produces cleaner geometry than freehand modeling. The same SVG-to-Blender approach was used on Simpsons Pinball Party.

Plastics Material Settings

For realistic plastics (from Bad Cats): use static rendering off, place hotspot lights under the plastic with transmit = 1 so they glare through. Two light elements per bulb achieves great-looking plastics. Brighter lights wash out plastic colors -- if the original texture is slightly darker, it looks more colorful when lit.

Blender plastics must use PNG with alpha channel -- JPG kills transmission even if the material has transmission settings (from Scared Stiff).

Transparent Plastic Shadow Fix

Transparent plastics in the toolkit's Blender scene cast opaque shadows by default, which darkens the playfield beneath them. Niwak's fix (from Iron Man) uses Blender's shader node system: add a Light Path node, connect the Is Shadow Ray output to a Mix Shader, mix between the normal plastic shader and a Transparent BSDF. When a shadow ray hits the plastic, it passes through transparently instead of casting a dark shadow.


Camera Matching and Blueprint Overlay

Blueprint Overlay Method

To verify playfield image alignment (from Fish Tales and Bram Stoker's Dracula): overlay the machine's original blueprint drawing (from the manual) on top of the playfield image in an image editor. Set the blueprint layer to 50% opacity. Use perspective correction to align the blueprint to the playfield scan -- straight lines and known positions (screw holes, switch locations) provide reference points. Guide pin holes drilled in the playfield correspond to blueprint marks and serve as high-precision alignment anchors.

Blueprints from manuals are not always 100% accurate (especially for pre-CAD era machines) -- trust the scan over the blueprint when they conflict.

For toolkit builds, sixtoe recommended creating blueprint renders from the toolkit -- a top-down view showing wall positions. Make two blueprints (upper layer and lower layer) because ramps hide walls and objects underneath them (from Jurassic Park).


3D Scanning and Photogrammetry

Polycam and Phone-Based Scanning

Polycam phone app (even older iPhones) can produce high-quality 3D scans of pinball toys. For best results: place the piece on a lazy susan with good lighting and rotate slowly while scanning (first documented on Godzilla by ebislit). The resulting mesh can be decimated significantly in Blender (100K tris down to 4.6K) while retaining visual quality, then baked with normal maps for VPX import.

Scans serve as valuable dimensional references even when polygon counts and formats are unsuitable for direct import (noted by tomate80 on Jurassic Park).

Qclone App

Qclone (iOS app) was used for photogrammetry of the Mr. Bigglesworth figurine on Austin Powers. The app provides a printable grid for reference, displays a sphere of capture angles to complete, and can merge two scan sessions. Free to use, $15 to unlock 3D file export. For best results: object must be immobile, camera moves around it, avoid shadows, take 30-40 photos from all directions.

Scan-to-Game-Ready Model Pipeline

The full pipeline (documented on Iron Man):

  1. Scan: Photogrammetry scan of physical toy
  2. Clean mesh: Remove scan artifacts, fill holes
  3. Poly reduction: Flupper reduced the Iron Monger from 2.5 million triangles to approximately 8,000 while preserving visual quality
  4. UV unwrap and texture bake: Transfer high-poly detail to low-poly via normal maps and baked textures
  5. Import to VPX: As OBJ with texture maps

The key insight is that aggressive poly reduction (300x) is viable for pinball toys because they are viewed from a fixed camera angle at distance, so detail loss is imperceptible.

Instant Meshes (external software) produced dramatically better decimation results than Blender's Decimate modifier for 3D scanned toys -- reducing a LOTR model from 180MB/high-poly to 3MB/30K polygons while maintaining shape (from Lord of the Rings). The critical limitation: UV maps are lost during the process. Workaround: use Blender Cycles' high-to-low poly baking to transfer textures from the original high-poly to the decimated low-poly.


Polygon Reduction

Blender Workflow

For reducing polygon count on complex meshes (from Bride of Pinbot by niwak):

  1. Merge by distance to clean up coincident vertices
  2. Limited dissolve at very low angle (possibly 0) to remove unnecessary edge loops -- check "all boundaries" carefully as it can break the model
  3. Reset normals
  4. Mark sharp edges manually or use auto-detection
  5. Add small bevel modifier (clamp disabled) for edge definition
  6. Add weighted normals modifier for better shading

Decimate modifier often breaks complex organic shapes -- limited dissolve preserves topology better. For VPX: faces must be flat and convex (tris and quads preferred). Locknuts were reduced from 17K to 780 polys using this workflow.

Flupper's remesh technique for massive toy poly reduction (from SpongeBob): Apply Solidify modifier first (prevents remesh from creating holes), then Remesh modifier, then Decimate modifier. This pipeline can reduce a 100K+ triangle toy to approximately 2K triangles.

Blender Export Rules

When exporting geometry from Blender to VPX (from tomate80 on Judge Dredd): never apply the subdivision surface modifier -- it exists only to make the render look smooth, and applying it inflates the polygon count until the mesh is impossible to handle in VPX. The solidify modifier MUST be applied before export (otherwise the mesh arrives in VPX as a flat plane with no volume), but do not save the Blender file after applying destructive modifiers.


Color Management and Tonemapping

Tony McMapFace Tonemapper

Niwak developed "Tony McMapFace" as an alternative tonemapper to replace Blender's default Filmic, which produces oversaturated results (documented on Breakshot). Install by replacing files in Blender's datafiles/colormanagement directory. All toolkit tables benefit from this tonemapper.

With Tony McMapFace (and later AGX), physically correct 2700K bulb temperature works correctly for GI, whereas Filmic required overriding to 3000-3200K to avoid excessive yellow (from Jurassic Park).

LUT Limitations

LUTs have fundamental limitations for VPX color correction (per niwak on Breakshot): (1) Applied AFTER tonemapping, which loses HDR information, (2) Applied to the final composited result rather than just the object albedo, (3) Corrections affect both playfield texture and light contributions equally, which is not physically correct. Proper saturation adjustment should happen in the source material before rendering.

Build table assets with full saturation and use LUTs to reduce it for the desired look. LUTs can reduce dynamic range and saturation but cannot expand them (from Congo). Starting with desaturated assets permanently limits visual options.

Practical approach (from skitso on Hook): Use a warmer LUT when GI is on and a colder LUT when GI is off, swapped via script. However, LUT swapping causes issues in VR -- disable when VRroom is enabled.
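The GI/VR selection logic can be sketched as follows (the original is table script; LUT names and the function are hypothetical):

```python
def select_lut(gi_on, vr_room_enabled):
    """Skitso's GI-linked LUT swap: warmer LUT with GI on, colder with GI off.
    LUT swapping causes issues in VR, so fall back to one fixed LUT there."""
    if vr_room_enabled:
        return "lut_neutral"  # no swapping when VRroom is enabled
    return "lut_warm" if gi_on else "lut_cold"

print(select_lut(gi_on=True, vr_room_enabled=False))
```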

Environment Image Selection

The environment/room image significantly affects table appearance (from Bad Cats). A contrasty environment image gives nice metal and insert reflections but can leave the table looking dark in unlit areas. Flupper's shinyenvironment3blur4 avoids glare issues from custom HDRIs (from Star Trek TNG). The artist_workshop_8k HDRI was identified as particularly good for pinball rendering (from Lord of the Rings).

When selecting HDRI environment textures (from Stargate): use indoor scenes, avoid HDRIs with bright visible point light sources, set HDRI strength low (approximately 0.3), and desaturate to prevent color tinting white plastics.

HDRI Saturation Control

When using a colorful HDRI environment image for rendering, add a Hue/Saturation node to reduce saturation and prevent the environment from "staining" the entire table with its dominant color (from Beat the Clock).


Playfield Mesh Setup and Gotchas

The playfield mesh primitive must be named exactly playfield_mesh -- VPX treats this specially as the replacement physical playfield surface (from Blood Machines and Congo). Key properties:

  • Any area with no faces becomes transparent, automatically creating visual holes for kickers/scoops
  • Must be imported at position 0,0 with scale 100 -- VPX overrides all physics, material, and visibility settings
  • If accidentally nudged from 0,0, all inserts and lights misalign with the playfield image
  • DisableLighting can be enabled on the mesh if using pre-rendered playfield lighting
  • The mesh is rendered as Static, meaning BlendDisableLighting must be set in the editor (cannot be changed at runtime via script)

Loop cuts in the playfield mesh can cause balls to stick at standstill, particularly in the flipper area (discovered on Batman DE). When a ball is nearly stopped, it can "pick up" the loop cut edge and get trapped. Avoid loop cuts in the flipper area entirely.

Cutting holes in the playfield mesh for rollover switches can allow balls to fall through (from Batman DE). In VPX, rollover switches are pure software sensors with no physical attributes. Fix: make rollover switch holes transparent in the playfield texture instead of cutting actual geometry.

The playfield texture's alpha channel does not work in VPX -- you cannot use it to make parts of the playfield transparent to show insert primitives underneath (discovered on Batman DE).


VR Room Creation

VR rooms are separate mesh files loaded alongside the table in VR mode. Key considerations first documented on Fish Tales and refined on Big Bang Bar:

  • Room meshes should be low-poly with baked textures -- they are peripheral scenery, not the focus
  • Room lighting should be baked into textures (no dynamic room lights for performance)
  • The room floor must align with the VR floor plane
  • The room must be rotated 6 degrees away from the player to compensate for the table slope (from SpongeBob)
  • VR far plane can be set to 2000-3000 to limit draw distance
  • Room file sizes can reach 75MB+ due to animation frames (Fish Tales' animated water)

VR blocker walls technique (first used on Breakshot): Copy the cabinet primitive, make it fractionally smaller, and flip normals so they face inward. This prevents VR players from seeing through the cabinet to the VR room floor through gaps like the plunger lane. Purely visual, not collidable.

Sixtoe's VR philosophy: "Fake reflections look bloody awful in VR, always." Prefer material quality (proper roughness, subtle dirt/wear textures) over reflection effects. VR players notice material authenticity more than reflection accuracy because they can lean in close to examine surfaces (from Indiana Jones).


Pivot Points and Object Orientation

The pivot point for VPX primitives corresponds to xyz 0,0,0 in Blender (from F-14 Tomcat). When creating animated objects that need to rotate around a specific point, the object's origin in Blender must be placed at the desired rotation axis.

VPX primitives have two rotation systems (from Blood Machines): ObjRotX/Y/Z (object-space rotation applied first) and RotX/Y/Z (world-space rotation applied second). These interact -- changing ObjRot changes which axis Rot operates on. For animated primitives, typically set the static orientation with ObjRot and animate with Rot.

Drop target orientation rule (from Hang Glider and Goonies): Target primitives must face the drain at RotZ=0. Use RotZ for playfield positioning, not ObjRotZ. Roth's drop target physics code calculates hit angles from RotZ -- using ObjRotZ causes all hit angle calculations to be incorrect.

Insert mesh pivot should be at the top surface so Z position stays at 0, avoiding the need to adjust Z offset every time Z scale is changed (from Austin Powers).