Houdini building and landscape project using PDG to generate a set of unique buildings, terrain tools to
blend the buildings and roads with the landscape, and exploring USD/MaterialX as much as possible to render with Karma and to send to Unreal5. Nov 2023
For RSTLSS: simulated metaverse fashion GLTFs, avatars and ThreeJS viewers, collaborating with other artists and programmers. Made with Marvelous Designer, Houdini,
Maya, Substance, Photoshop, Javascript, ThreeJS and Python. 2022-2025
Mojo: tools to create 10,000 unique mojos, including Unity-based preview and assembly tools and color palette tools, and finally
rendering 10,000 movies using Unity's HDRP. 2022
Project for a client to make a VR environment based on a TV show as a demo for their client. I did some procedural modeling with Houdini and used Unity for the
lighting, materials, scene assembly and VR optimizations. Sep 2021
I've always been very interested in using computers to make art. At first I wrote programs to make
images, then animated images, then interactive images.
Along the way I've learned to use many 3D applications and how to program with them. Having a 3D
programming environment like Maya really frees up
the creative process for me, and in turn, I help create and refine content production pipelines by
combining my experience using the art tools with programming for them.
I started programming graphics and games in the 80s using BASIC and Assembly. I was inspired by Zork
and wrote a text-game-engine. These days most of my programming is Python, C# or Javascript.
I use Cg, C, C++ and SQL on the job, too.
Applications:
My favorite art applications are the ones I can program with. First came Swivel, Stratavision,
Photoshop and Lingo programming with Director in 1991.
After that came several Alias jobs until Maya came out; since then most of my work has
been with Maya. I have also used and programmed scripts or plugins for Houdini, Substance, Softimage, Renderman,
Motivate, Photoshop, After Effects, Shake, Mari, Modo, Unity, and Unreal4.
I use and really like ZBrush but haven't programmed for it yet :-)
Procedural
I like to create 3D content with programming, either by building tools or by making
procedural models.
These curtains were made procedurally in Houdini and imported to Unreal5 with Alembic. All of the layout was done in Houdini as well.
Aug 2021
I worked on a procedural city project using Houdini 16, Substance Designer, Unreal Engine 4.17 and Houdini Engine, which allows me to author procedural assets in Houdini and then rebuild them
inside of Unreal with new parameters. My goal here is a procedural city that is drivable in a video game, so the drivable surfaces (collision surfaces) are very important.
Nov 11 2017
On MLB: The Show I created procedural modeling tools to build the inside of the player's
mouth and the eyelashes, and, for the stadium team, tools to place seats and outfield walls.
I wrote a Maya script to export models as an FBX file and a text-format map for UDK. My test
case was this procedurally built wall.
The block is instanced in Maya and UDK. (2011).
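A minimal sketch of this kind of export in Maya Python. The helper name and the text map format below are placeholders for illustration, not UDK's actual map format:

```python
# Sketch of a Maya export script: selected meshes go to one FBX file,
# plus a simple text map of transforms for placement in the engine.
# Assumes the stock fbxmaya plugin; the map format is a made-up placeholder.
import maya.cmds as cmds

def export_selection(fbx_path, map_path):
    cmds.loadPlugin("fbxmaya", quiet=True)

    # Export the current selection to a single FBX file.
    cmds.file(fbx_path, force=True, type="FBX export",
              exportSelected=True, preserveReferences=True)

    # Write a text map: one line per transform with world-space
    # translation and rotation, so instances can be rebuilt in the engine.
    with open(map_path, "w") as f:
        for node in cmds.ls(selection=True, type="transform"):
            t = cmds.xform(node, query=True, worldSpace=True, translation=True)
            r = cmds.xform(node, query=True, worldSpace=True, rotation=True)
            f.write("%s %f %f %f %f %f %f\n" % (node, t[0], t[1], t[2], r[0], r[1], r[2]))

export_selection("C:/export/wall.fbx", "C:/export/wall_map.txt")
```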
This is a procedurally built structure with physics constraints and an earthquake. Bullet
Physics and Maya. (2014)
This is a procedurally built city in Maya and rendered in Mental Ray. This was to be used in
backgrounds for a TV concept I was working on.
It also had roads that could be exported to a game engine (Shockwave) and driven on.
(2003)
This tool allowed the artist to define the path of the road with one curve, build the road,
sidewalk, curb, guardrail and gutter from parameters, then adjust the curve or the
parameters and see realtime feedback. (2011)
I used this to make roads for this Unity game sketch about driving in LA.
Google Chrome has disabled NPAPI plugins so Unity webplayer will not work with
Chrome. :-(
You can view this example with Internet Explorer.
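A minimal sketch of the road-from-one-curve idea in Maya Python: sweep a parametric cross-section along the artist's path curve and keep construction history live so edits to the curve or the parameters update the surfaces immediately. The parameter names are illustrative, not the original tool's:

```python
# Build road and sidewalk surfaces by extruding profile curves along a path
# curve. Construction history stays live, so moving the path CVs or changing
# the widths gives realtime feedback, as described above.
import maya.cmds as cmds

def build_road(path_curve, road_width=8.0, sidewalk_width=2.0, curb_height=0.15):
    # Flat cross-section for the road surface.
    road_profile = cmds.curve(degree=1, point=[(-road_width / 2, 0, 0),
                                               (road_width / 2, 0, 0)])
    road_surface = cmds.extrude(road_profile, path_curve,
                                extrudeType=2,           # tube: extrude along path
                                useComponentPivot=1,
                                fixedPath=True,
                                useProfileNormal=True)[0]

    # Raised cross-section for the curb and sidewalk at the road edge.
    walk_profile = cmds.curve(degree=1, point=[
        (road_width / 2, 0, 0),
        (road_width / 2, curb_height, 0),
        (road_width / 2 + sidewalk_width, curb_height, 0)])
    sidewalk = cmds.extrude(walk_profile, path_curve,
                            extrudeType=2, useComponentPivot=1,
                            fixedPath=True, useProfileNormal=True)[0]
    return road_surface, sidewalk

build_road("roadPathCurve")
```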
Modeling Tools
Eyes and Face Rig
On MLB: The Show, making an exact likeness of a real player was a major selling
point of our game. We created a curve-based tool for remeshing 500k-triangle scans to game
resolution.
Artists would place feature points on the face and then shrink-wrap the game mesh to the scan; I also had an
experimental script to find the feature points automatically.
There was also a blendshape-based tool for players we did not have a scan for.
MLB Face Curves scan remeshing tool
MLB Blendshape Tool
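A minimal sketch of the shrink-wrap step, not the production tool: snap every vertex of the low-res game mesh to the closest point on the high-res scan, assuming the two meshes are already roughly aligned by the artist-placed feature points. The mesh names are hypothetical:

```python
# Closest-point shrink wrap using Maya's Python API 2.0.
import maya.api.OpenMaya as om

def shrink_wrap(game_mesh_name, scan_mesh_name):
    sel = om.MSelectionList()
    sel.add(game_mesh_name)
    sel.add(scan_mesh_name)
    game_fn = om.MFnMesh(sel.getDagPath(0))
    scan_fn = om.MFnMesh(sel.getDagPath(1))

    points = game_fn.getPoints(om.MSpace.kWorld)
    for i, p in enumerate(points):
        # Closest point on the scan surface to this game-mesh vertex.
        closest, _face_id = scan_fn.getClosestPoint(p, om.MSpace.kWorld)
        points[i] = closest
    game_fn.setPoints(points, om.MSpace.kWorld)

shrink_wrap("gameHeadShape", "scanHeadShape")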
Road Tool
This tool allowed the artist to define the path of the road with one curve, build the road,
sidewalk, guardrail and gutter from parameters, then adjust the curve or the
parameters and see realtime feedback.
Models
Here are some props and characters that I created in Maya for a TV
show concept that I was trying to pitch to Cartoon Network around 2003.
All of these images were made with Maya 4 in 2003.
For current models, see the VR and Character sections.
Character
Character rigging for Wavedash's Icons
I rigged Zhurong (the character with a sword and cape, above left) and Xana (the tall character, below left) for Wavedash's fighting game Icons and supported the offsite animators.
I also wrote shaders for the characters and wrote rigging, animation transfer and pipeline tools in Python and C#.
Character Experiments with Unreal 4 (2016)
This VR story is a work in progress. The story is about a girl, dog, and bear playing cards around a
campfire.
I have made the switch to Unreal for many different reasons: it looks and performs better, and I like
the Unreal dev tools better; they're more complete. I have a workflow from Maya's new Time Editor
to the Unreal Sequencer. I am having a lot of fun animating. Currently I am animating card tricks in
Maya and Unreal blueprints.
https://www.youtube.com/watch?v=yrmI0Xpmb-4
He's trying to understand why he would give up one blackberry for the chance of winning two in the
future.
This clip is the bear reaching for a blackberry. 2016
VFX
Procedurally built city, wet streets, fog, and exhaust flames that fire with the same timing pattern as a real V8. Oculus DK headset, Unreal 4. 2016.
Physics
This is a procedurally built structure (Python in Maya) rigged with Bullet Physics (Dynamica) and an earthquake.
(2014)
Tracked vehicles created by script with Bullet Physics and Maya (2014)
I wrote some Maya scripts and Unity scripts to be able to author destructibles in Maya and
export to Unity. (2014)
I wrote a PyMEL wrapper to be able to do this cabin explosion in script using FumeFX,
FractureFX and Maya (2014). I did this by monkey-patching new methods onto the generic PyMEL node.
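A minimal sketch of the monkey-patching idea: attach a new method directly to one of PyMEL's node classes so every matching PyNode gains it. The explode_setup helper and the node name below are hypothetical illustrations, not the actual FumeFX/FractureFX wrapper:

```python
import pymel.core as pm

def explode_setup(self, magnitude=1.0):
    """Hypothetical helper: nudge the node as a stand-in for wiring it
    into a fracture/explosion setup."""
    self.translateY.set(self.translateY.get() + magnitude)
    return self

# Patch the method onto the Transform class; every transform PyNode
# in the scene now has .explode_setup().
pm.nodetypes.Transform.explode_setup = explode_setup

cabin = pm.PyNode("cabin_GRP")
cabin.explode_setup(magnitude=2.0)
```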
AR/MR Stuff
Handtracking and physical Controllers
02/2021 I was experimenting with using hand tracking on the Quest2 and a MIDI controller. Pressing the
physical button aligns the virtual controller to the fingertip that pressed the button.
Multiple 3d cameras
07/2020 This experiment was to visualize when part of the body is occluded and the app knows which parts to fill in. Uses Unity's VFX Graph and 3 Kinect time-of-flight cameras.
Mixed Reality Demo with Google ARCore
6/2018 This is an experiment with Google ARCore and Unity 2018. My goal was "the floor is made of lava", but in a
more unique way. To me, one of the most important things about making mixed reality is occlusion. ARCore detects the ground (and ARCore 1.2 now detects vertical planes, too), so I
place geometry on the ground that occludes the other parts, such as the lava river and the volcano underground. I haven't seen this done before, so I
thought it would be a fun experiment.
My other goal was to use Houdini to do some real-time VFX. The crumbling floor, the lava river's flow map and the particle-fluid lava fountain were made with Houdini.
This is mostly done (6/1/2018); there are some things I want to improve, like the resolution/appearance of the fluid sims and the underground volcanic lightning, and I still want to add a video example.
(Fixed Link 10/21/2021) Download the APK for Android 7.0+ if your phone supports Google ARCore: chrisrogers3d.graphics/lavagin.apk (70 megs). You also
need ARCore from the Google Play store.
Made with Unity 2018.1; I used Maya, Arnold (for a lightmap), ZBrush, Houdini, Photoshop and Substance Painter.
Augmented/Mixed reality experiment using Mapbox's Unity SDK and Google ARCore.
(3/2018) This experiment uses Android's ARCore, GPS and 3D maps (from Mapbox's Unity SDK) to do "world scale" augmented reality.
To fine-tune the user's position and orientation, I let the user adjust the GPS location with buttons
and then align known landmarks (in this case Sutro Tower and the antennas on Twin Peaks) to correct the direction. The buildings
are rendered as a mask so I can show these characters being occluded by the buildings. I could get good occlusion results
if I fine-tuned the location. This experiment got me interested in computer vision again.
This was another hack-a-thon project at Anki. It's called Drawing with Vector; the idea is that you
start a drawing on a piece of paper, Vector scans it and uses Sketch-RNN
to "decide" how to finish the drawing, then Vector, with a pen attachment, draws it.
It's tough to see what is going on in this video, so I will try to explain. I only got the first 3 steps to work before the end of the hack-a-thon.
On my laptop there is a live representation of the scene in Maya(!) in the background, showing Vector's location, the undistorted view from the camera projected on the ground, and the localized piece of paper with its corner markers.
The overlaid window to the left is Vector's raw camera view, and the green
window to the right is Vector's internal quadtree map of the environment from the Vector SDK. Green is a place Vector can go according to the range finder, and
the red squares are the fiducial markers in the corners of the paper. When Vector finds one, that corner of the paper is localized in Maya, until all 4 markers localize the entire piece of paper.
Vector localizes (finds and orients) the piece of paper by recognizing the fiducial markers on the corners. I am driving Vector (0:34)
instead of letting Vector explore and find it, to save time. In the map on the right side of the laptop in the video, the markers appear as red squares. (0:58 - 1:14)
I make the first line or two with a pen on the paper.
Vector then scans the paper (1:16-1:30) by driving over all of it and making images that are overlaid
to make a complete image. To scan it, the camera distortion must be removed. I originally tried the technique in
OpenCV with bad results. My innovation was creating a UV map to undistort each image. I created this by
overlaying the original image and the scanned image in Maya, and using Maya's UV brush manipulation tools to align the
scan to the original with close to per-pixel accuracy. It's also very fast to undistort using OpenGL on Vector.
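A minimal sketch of applying a precomputed per-pixel lookup to undistort a camera frame, assuming the UV map authored in Maya has been baked out as two float32 arrays (one for x, one for y) the same size as the image; the file names are placeholders:

```python
# Undistort a frame with a precomputed lookup table using OpenCV's remap.
import cv2
import numpy as np

def undistort_with_uv_map(frame, map_x_path, map_y_path):
    # map_x[y, x] / map_y[y, x] give the source pixel to sample for each
    # output pixel; assumed to be exported from the Maya UV edit.
    map_x = np.load(map_x_path).astype(np.float32)
    map_y = np.load(map_y_path).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)

frame = cv2.imread("vector_frame.png")
flat = undistort_with_uv_map(frame, "map_x.npy", "map_y.npy")
cv2.imwrite("vector_frame_undistorted.png", flat)
```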
The lines in the bitmap are made into a vector drawing, which is fed to the Sketch-RNN network. I think I used 'potrace' to vectorize the bitmap on the laptop.
Sketch-RNN delivers the rest of the drawing as vectors.
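A minimal sketch of the bitmap-to-vector step using the potrace command line tool; the exact pipeline on the laptop may have differed, and the file names are placeholders:

```python
# Vectorize a black-and-white scan with potrace via subprocess.
import subprocess

def vectorize(bitmap_path, svg_path):
    # potrace wants a bitmap input (PBM/PGM/PPM/BMP) and can emit SVG,
    # which is easy to convert into the stroke list Sketch-RNN expects.
    subprocess.run(["potrace", bitmap_path, "--svg", "-o", svg_path], check=True)

vectorize("scan.pbm", "scan.svg")
```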
Vector draws on the paper using a pen attachment on the lift, so the pen can be raised and lowered.
OpenCV and Unity on Android
This is an experiment (1/2017) at animating characters in a mobile app using computer vision as the input. I'm using Unity5.5 and OpenCV on Android for the app.
The incidental background music is Talvin Singh and the plastic character that I have attached an ARUCO marker to is Heathrow the Hedgehog by Frank Kozik.
TensorFlow and Object Recognition (4/2018)
People using machine learning with computer vision are starting to use synthetic 3D data sets instead of,
or in addition to, real photographs. I recently created a set of images of billiard balls and followed a tutorial on
how to retrain a TensorFlow network to recognize my objects. I didn't get good results; I will try again.
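A minimal sketch of the retraining idea using tf.keras transfer learning; the tutorial I followed at the time used a different script, so treat this as a stand-in. It assumes the synthetic renders are sorted into one folder per class (the "billiards" directory name is a placeholder):

```python
# Retrain an ImageNet-pretrained backbone on a custom image set.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "billiards", image_size=(224, 224), batch_size=32)
num_classes = len(train_ds.class_names)

# Frozen MobileNetV2 backbone; only the new classification head is trained.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes),
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```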
Using a phone as a 3D input device: proof-of-concept (12/2015)
After seeing Tiltbrush and not having any 3D input devices of my own, I started an experiment to
see if it was possible to use
image tracking on a phone to extract the phone's position in 3D space to be used as an input for
an application. This
proof-of-concept uses Vuforia's image tracking, Unity5, Unity's new low level network interface
(NetworkTransport), and a Nexus6p.
It sort of worked, but it was noisy and had a limited tracking space. Besides sending the
position, there is a UI on the phone with sliders.
Arduino Stuff
This project was to determine the input latency for Unity with various game controllers for a Super Smash Brothers-type game on PC. An Arduino controlled a relay and had a
photoresistor as an input. In Unity, I press a GUI start button on the PC screen, and that sends a message to the Arduino program. Although the main goal was to test different controllers' latency,
these experiments have the Arduino send a button-press signal over USB (simulating the game controller button press, instead of using the 3D-printed solenoid I made to attach to a physical controller).
So... the signal is sent over USB to Unity, and Unity then flashes the screen from black to white. The photoresistor, taped to the monitor with packing tape
and also hooked up to the Arduino, sends a signal when the screen lights up. I measured the difference between a Unity build and the editor, and it was 90 ms versus 60 ms. That's about
2 frames at 60 fps. Human response time is at best 200 ms. I thought the editor's speed advantage came from being multithreaded, but I didn't pursue the issue.
VR Stuff
Daydream in the neighborhood (11/2015)
This is photographed running on a Nexus6P. With this shader, fog density is influenced by the absolute X world position to make the fog denser.
Inner Richmond Demo
If you've spent much time in the Inner Richmond in the summer, this should look familiar.
Several custom shaders. This is just a sketch of an
environment to see how the fog, clouds and sun felt in a VR environment. I thought it was
effective.
Be Bots
I started on the idea 10 years ago as an animated show.
You are Blue, a kid who is the leader of a robot jazz band. He rides a Schwinn Stingray to the
jazz club, Chez Bot.
If you follow the YouTube link, you can watch these videos at 1080p60. For the most part they
all run at 90fps when I am not capturing video at 60fps at the same time.
Bass Sequencer in space
This is an experiment with a string bass and a music
sequencer, like a drum machine but the third dimension is the note. This is triggering midi
notes,
I was experimenting with controlling Ableton from a VR interface.
Vive Drums
This is a drum kit for the band. Not having physical feedback with drums
is weird, and the Vive controllers aren't balanced like a drumstick, but it's an experiment.
Vive Bass
Early bass demo with very unrealistic note reproduction (i.e. what should be
an A on the fretboard is not an A). I liked the different sounds I could assign to the bow
(pluck, regular bow and a sustaining bow)
interacting with the string.
Drum Sequencer in VR
This video is a drum sequencer in 3D with a "gaze-interaction"
interface for devices that only have a direction to look and a button, like Google Cardboard,
although this is running on a PC with a DK2.
Drawings
I still draw often, usually abstract designs. I like to draw portraits, too.
The recent sketches are 1/4 sheet of regular typing/printer paper and a Prismacolor Ebony pencil.