Natalie Villarreal | Selected Works

Natalie Ann Villarreal
by natalieavill on 1 Jun 2023 for Rookie Awards 2023

A selection of personal projects from my final terms at Gnomon School of Visual Effects, Games & Animation.

Hello! Welcome.

My name is Natalie Villarreal, and I'm a graduate of the 2-year DP Program at Gnomon School of VFX, class of Summer 2022. I am a 3D Generalist, Lighter and 2D Compositor. Through my time at Gnomon I found that, above all else, I love creating beautiful images and telling stories... I like to have a hand in each part of the pipeline and to get as close to the final pixel as I can. This year has been both rewarding and challenging, filled with all sorts of projects. I've chosen a selection of three projects from 2022 to present in this year's Rookie Awards, and I hope you enjoy them as much as I loved making them.

I'll begin with my 2022 graduate reel.

Enjoy!

Blue Bayou

'Blue Bayou' is a short film made by me with the help of Dakota Smith for the HD Digital Filmmaking and Matchmoving & Integration courses at Gnomon School of VFX.

SYNOPSIS
A little bot radio spends his days in ignorant bliss. Things are not as they seem, but does it even matter?

THE TEAM
I am a 3D generalist and compositor, and Dakota Smith is a talented FX artist.

INSPIRATION
The story began with a song and a dream: a place called "Blue Bayou" where things weren't as they seemed, set to the song by Linda Ronstadt. As only dreams can, it left me with an impression so powerful that I knew I had to capture that feeling in my next project.

CHOOSING THE TEAM
When it came time to pick a team, there was no question that I wanted to work with Dakota Smith, a fellow Gnomon student. We had the same creative sensibilities and I trusted his judgement completely. The first day of class I pulled him aside, played him the song and explained my idea--Dakota was sold.

RULES
We were warned that many student-made VFX-integrated films fail or struggle for a number of reasons, including trying to take on too much, poor quality footage, relying too much on compositing, and even group in-fighting over creative differences and poorly-defined roles.

Dakota and I spent a lot of time outlining rules to follow to ensure our success: I was the Director with final say on any creative/story decisions, and Dakota was our VFX Supervisor with veto power over anything that would compromise our bottom line. We played to our strengths, meaning we'd stay away from heavy animation or modeling. We didn't use actors; given our experience, we knew including them could lower the quality of the film. The CG would interact with the live-action environment as little as possible (e.g. touching the floor, picking things up). No greenscreen would be used, and we'd shoot in an accessible location.

STORY
The story was written over many meetings at a coffee shop. We'd start with the Blue Bayou dream and iterate from there; no idea was too weird to consider. We both wanted to tackle heavy subjects in our story, so we would imagine something completely out-there, whittle it down to a short story, then start again. We ended up with about 6 storylines and chose the one that made the most sense for our skill level.

STORYBOARDING
My storyboards were not beautiful, but they were essential. I roughed them out first in my notebook, then in Photoshop. From there, we mapped out where we'd film in the apartment, and used that to plan the shooting days.

I created an animatic with the storyboards below:

PREPARATION
I did a ton of research about how to make a short film and on the common mistakes that students make. I also learned all about the filmmaking/VFX pipeline and the roles involved. This was a crucial part of my preparation, and the project would not have been as successful without it.

We shot everything in my apartment where we could have complete control over the environment. It was important that we had the flexibility to shoot and reshoot whenever we needed… we had to allow for mistakes.

We spent a significant amount of time designing the file system. We were building our own pipeline for managing and sharing files, so the system had to be intuitive for both of us. This system proved vital down the line and I am grateful we spent so much time designing it at the start.
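To give a sense of what I mean, below is a rough Python sketch of a shot-based folder layout in the spirit of what we built. The project name, shot codes and department folders here are placeholders for illustration, not our actual structure.

# Sketch of a predictable per-shot folder tree so both artists always know where files live.
# All names below are hypothetical.
from pathlib import Path

PROJECT_ROOT = Path("BlueBayou")                      # placeholder project root
DEPARTMENTS = ["plates", "tracking", "maya", "fx", "renders", "comp"]

def make_shot_dirs(shot_code: str) -> None:
    """Create one folder per department under a shot, e.g. BlueBayou/shots/BB_010/comp."""
    for dept in DEPARTMENTS:
        (PROJECT_ROOT / "shots" / shot_code / dept).mkdir(parents=True, exist_ok=True)

if __name__ == "__main__":
    for code in ["BB_010", "BB_020", "BB_030"]:       # hypothetical shot codes
        make_shot_dirs(code)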

PRODUCTION
For shooting, I took on the role of cinematographer and lighter, and Dakota the role of VFX supervisor/grip. I shot almost everything on a Sony PXW-FS5 camera with a Sony E PZ Variable Lens, and the equipment was limited to what the school offered. We improvised a lot, like using bedsheets as diffusers & reflectors and a longboard for tracking moves.

I relied mostly on natural light, using fills/keys when necessary. The biggest lighting obstacle came from managing the exposure from the large window. To solve this, I exposed for the shadows and took under-exposed plates for backup. Dakota took an HDRI after every setup and made sure each take stuck as close to our rules as possible.

In total, we shot around 2 hrs and 33.5 GB of footage across roughly 3 days.

DESIGN
The design of the robot was mostly driven by two factors:
      · He had to be charming or the story wouldn't work.
      · He had to be simple to work with, i.e. no complex animations or moving parts.

The actual model existed from a previous project and we improved on it with the help of Shawn Juan, a fellow Gnomon student with a background in industrial design. We kept his face impassive—I wanted to try and tell the story without relying on expressions or "cutesy" animations—he has no mouth, just two big eyes…almost void-like. To aid in the integration, we chose not to give the bot any "feet" that could touch the ground and limited his interactions with his surroundings.

One of my favorite design changes was the shape of his profile, which was inspired by Wagner Moura's portrayal of Pablo Escobar in the TV show "Narcos". The actor's posture makes his belly stick out in a way that evokes the sort of solemn dejection I was looking for.

RIGGING & ANIMATION
For this part we received help from our fellow student animators, Jayne Lynn, Alex White and Jean Oquendo. Jayne rigged the bot and provided a few animations, and Alex and Jean provided a lot of feedback to help loosen up our stiff animations. I wanted the animation style to be robotic but bouncy without being "cartoonish"...and this took some time to get right.

MATCH-MOVE
The tracking phase is when we learned exactly where the issues were in our footage. In the end, our adherence to strict rules about filming helped make this process easier. If I could go back, I would capture Macbeth charts & gray/chrome balls, be more diligent about noting the camera specs for each shot, and set aside a day for planning the setups.


LIGHTING & RENDERING
In Maya, I constructed a simple floorplan of my apartment and furniture to use as light blockers. We used the HDRIs for getting reflections and overall light direction, but the biggest help came from using an area light with a still of the window applied as the texture. We found that this provided the most accurate lighting.
We had 25+ shots to render, over 4000 frames, plus iterations and re-renders. The entire film would have taken countless hours to render. To cut down on render time, we rendered on the GPU at 1280x720 and reformatted the shots in post. This was essential to getting the shots rendered in time.
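Going back to that window light for a moment: below is a minimal maya.cmds sketch of the idea, an area light whose color is driven by a still photo of the window. The node names and file path are placeholders, and the actual scene used V-Ray lights rather than this stock Maya area light.

# Minimal sketch: drive an area light's color with a still of the real window.
# Names and paths are placeholders; the production setup used V-Ray lights.
import maya.cmds as cmds

# createNode on a shape type also creates its transform and returns the shape.
light_shape = cmds.createNode("areaLight", name="windowAreaLightShape")

# File texture holding a still photo of the window from the plate.
window_tex = cmds.shadingNode("file", asTexture=True, name="windowStill_file")
cmds.setAttr(window_tex + ".fileTextureName", "sourceimages/window_still.jpg", type="string")

# Connect the photo to the light's color so the bounce color matches the plate.
cmds.connectAttr(window_tex + ".outColor", light_shape + ".color", force=True)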

WAREHOUSE
I wanted the warehouse to have the sinking feeling of the "top men" warehouse scene at the end of Raiders of the Lost Ark, with a 'David Lynch' vibe. The biggest challenge came from making the warehouse seem both endless and filled. To get the 'endless' look, I used mirror shaders on the walls, and after a bit of tweaking the result worked out great. To fill the space, I used higher-res assets for the foreground shelves and low-res assets for the midground. For the rest of the shelves, I created maps from a render of a high-res section of shelves and applied those to plain cubes. This saved me a ton of work and render time.
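For the far shelves specifically, the idea is simple enough to sketch with maya.cmds: a plain cube with a baked shelf render mapped onto it. Everything here (names, dimensions, texture path) is a placeholder rather than the actual setup.

# A low-cost stand-in for a distant shelf block: one cube, one baked texture.
import maya.cmds as cmds

cube, _ = cmds.polyCube(width=4, height=6, depth=1, name="shelf_card_geo")

# Simple shader driven by the baked shelf render.
shader = cmds.shadingNode("lambert", asShader=True, name="shelf_card_mtl")
tex = cmds.shadingNode("file", asTexture=True, name="shelf_bake_file")
cmds.setAttr(tex + ".fileTextureName", "sourceimages/shelf_bake.exr", type="string")
cmds.connectAttr(tex + ".outColor", shader + ".color", force=True)

# Assign the shader to the cube via a new shading group.
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name="shelf_card_SG")
cmds.connectAttr(shader + ".outColor", sg + ".surfaceShader", force=True)
cmds.sets(cube, edit=True, forceElement=sg)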

COMPOSITING
We composited everything in Nuke using a combination of beauty, shadow, reflection, crypto and depth passes.

From the start, I wanted to use aspect ratio to tell the story. I took a lot of inspiration from films like "A Ghost Story", "Mommy", and "La La Land". We went with a 1.55:1 frame with rounded corners and stretched it to 2.35:1 during the transition to the warehouse.
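For anyone curious about the numbers, here is the simple arithmetic behind that switch, assuming a 1920-wide delivery frame (the actual project resolution may have differed).

# How much picture height each aspect ratio keeps at a given width.
FRAME_WIDTH = 1920  # assumption: delivery width

def active_height(aspect: float, width: int = FRAME_WIDTH) -> int:
    """Visible picture height for a given aspect ratio at the given width."""
    return round(width / aspect)

for aspect in (1.55, 2.35):
    print(f"{aspect}:1 -> {FRAME_WIDTH}x{active_height(aspect)} visible picture")
# 1.55:1 -> 1920x1239 visible picture
# 2.35:1 -> 1920x817 visible picture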

EDITING
Editing & Sound were some of the most challenging parts of making this film. This is where any holes in our story became giant chasms, and the film underwent two major story changes as a result. There were countless ways to show the viewer the whole of the bot's reality through editing and sound, and we tried many. I ended up having to step away from the edit entirely for a number of weeks to shake off all of my ties to our previous storylines. For the final edit, I opened myself up to using mistakes and happy accidents to add to the surreal-ness of the ending.

Here is where I have to mention Miguel Ortega. This film would not be the same without his guidance and harsh critiques.

COLOR GRADING
We went with a warm, dreamy feel for the apartment scenes and a cold, eerie feel for the warehouse scenes. This was the fun part of the compositing process and we allowed ourselves to go nuts with glows and godrays.

When Dakota and I finally strung all the shots together, we ended up with a robot in every imaginable shade of avocado green. It became apparent that no matter how "neutral" you think you can stay, everyone's eye is fallible, especially when you've been staring at the same shot for hours. To fix this, we matched every shot in Nuke against the same reference frame.
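The matching itself was done by eye in Nuke, but the underlying idea can be sketched in a few lines of NumPy: nudge each shot's average color toward the chosen reference frame with a per-channel gain. This is purely an illustration, not the grade setup we actually used.

# Illustration only: balance a shot toward a reference frame with per-channel gains.
import numpy as np

def match_mean_color(shot: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Scale each RGB channel of `shot` so its mean matches `reference` (float images, HxWx3)."""
    gain = reference.reshape(-1, 3).mean(axis=0) / (shot.reshape(-1, 3).mean(axis=0) + 1e-8)
    return shot * gain

# Usage: balanced = match_mean_color(shot_frame, reference_frame)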

CONCLUSION
The story underwent countless re-writes, even up until the final edit 6 months later. Our original story had a lot of confusing shots...we were trying to say too much. After the initial 20 weeks, I put the project away for a month or two to consider what I was really trying to communicate. Eventually, my instructor and mentor, Miguel Ortega, advised me to strip it down to the essentials and let the song, editing and cinematography do the rest. In the end I had to cut a number of scenes and rearrange a bunch of shots, but it was worth it.

The biggest thing I learned throughout this whole process is that you must be ready to sacrifice any idea/shot if it's not serving your story. Story is paramount, all the cool tricks come second.

THANDIWE NEWTON

I created this character likeness for Tran Ma's Texture 4 class at Gnomon. This was my first human sculpt and I wanted to take on the challenge of doing a likeness. Capturing a true likeness is very difficult and I learned a lot along the way.

SCULPTING
Throughout this entire project, I spent hours upon hours collecting as many reference images as I could. Getty Images was a huge help for gathering high-quality photos from recent events. I would gather them all in a single location and also split them off based on what they were good for: higher-res, skin/makeup, expression, etc. In Zbrush, I would cycle through each image as I sculpted, never spending too much time in one POV or on one image. I would also mark up the images in Photoshop, calling out things like the direction of her hair, distinct wrinkles/textures on her skin, or the planes of her face.

I utilized Zbrush spotlights and cameras to line up my sculpt with a particular reference. I would sculpt for a while from that POV, then start again with a different image. This method of sculpting was slow and I often felt as though I was going overboard, but I think in the end all of the time spent was worth it.

Take a look at the video below for a progression of the sculpt:

POSING
Breaking symmetry was scary and I tackled it slowly. I always kept backups to revert to if something went wrong, and this helped a lot. I began with her expression and head orientation, using layers to toggle back and forth between expressions until I was satisfied with one. Then, I posed the entire model in Maya and exported the posed version back to Zbrush. I also exported an animation of her moving from A-pose to sitting and used that to pose the clothing in Marvelous.

TEXTURE & DISPLACEMENT
I did most of the texturing in Mari using nodes; I found them more intuitive than layers and felt like I had more control. I had a lot of fun doing the texturing, especially combing through the references and identifying all of the different tones and textures in her skin.

For the albedos, I used maps from Texture XYZ combined with a few custom alphas and lots of hand-painting. Below is an example of the type of guide I used to outline what values/tones to paint and where to paint them.

To determine roughness, I used Tran's method of rendering out her skin at roughness increments of 0.05, from 0.00 to 0.50. Then, I combined them all in Photoshop and used masks to figure out how much of each level I wanted. I then used that as a guide for painting the roughness masks in Mari.
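The wedge itself is easy to automate; below is a hypothetical maya.cmds version of it. The shader name, attribute and render camera are placeholders, and cmds.render() here just stands in for however the wedge frames are actually batch-rendered with your renderer of choice.

# Hypothetical roughness wedge: render the same frame at 0.00-0.50 in 0.05 steps.
import maya.cmds as cmds

SKIN_SHADER = "skin_mtl"        # placeholder shader node name
ROUGHNESS_ATTR = ".roughness"   # placeholder attribute being wedged
RENDER_CAM = "renderCamShape"   # placeholder render camera

for step in range(11):
    value = step * 0.05
    cmds.setAttr(SKIN_SHADER + ROUGHNESS_ATTR, value)
    image_path = cmds.render(RENDER_CAM)   # render the current frame with the current renderer
    print(f"roughness {value:.2f} -> {image_path}")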

For finer details, I sculpted the larger displacements in Zbrush and used Texture XYZ multimaps to capture the fine displacement details.

HAIR
The hair was done using Yeti. I decided to go with Yeti over XGen because I found the node interface more intuitive. I began by modeling NURBS curves to capture the overall placement and direction of each clump, and used those as guides in Yeti. For the bun, I used a simple torus to capture the look. All of the hair was modeled in a separate file and cached out, which helped keep the file size down.

SHADING
The shading was done with a series of layered shaders: VRayAlSurface for the skin and VRayHairNextMtl for the hair. For the dress, I used VRayFur combined with the facing ratio to capture the feel of velvet.
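The facing-ratio wiring is a standard Maya utility-node trick; here's a stripped-down maya.cmds sketch of it, using a plain Lambert as a stand-in for the V-Ray dress material. The color values are arbitrary placeholders.

# Facing-ratio falloff: brighter "sheen" color at grazing angles, darker color facing the camera.
import maya.cmds as cmds

sampler = cmds.shadingNode("samplerInfo", asUtility=True, name="velvet_samplerInfo")
blend = cmds.shadingNode("blendColors", asUtility=True, name="velvet_blend")
shader = cmds.shadingNode("lambert", asShader=True, name="velvet_preview_mtl")  # stand-in for the V-Ray material

# facingRatio is 1 where the surface faces the camera and falls toward 0 at grazing angles.
cmds.connectAttr(sampler + ".facingRatio", blend + ".blender", force=True)

cmds.setAttr(blend + ".color1", 0.10, 0.02, 0.15, type="double3")  # facing color (deep violet)
cmds.setAttr(blend + ".color2", 0.60, 0.45, 0.75, type="double3")  # grazing "sheen" color

cmds.connectAttr(blend + ".output", shader + ".color", force=True)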

Final Comp

CONCLUSION
I first have to thank Tran Ma and Miguel Ortega for all of their guidance throughout this project. The end result would not be the same without it. Tran offered training, techniques and advice that streamlined this whole process, and Miguel's critiques were constructive and direct.

In the end, I think the reason the sculpt came out like it did is that I didn't give up... every time something didn't look right, I buckled down and fixed it. That was Tran's advice from the start—likenesses are hard, but the key to getting close is to not give up, no matter what.

Lotus Temple

The Lotus Temple is my first project in Unreal Engine, and I made it for the Cinematics for Virtual Production course at Gnomon using Unreal 5. This was an excellent chance to get familiar with the software and workflows. Cinematography and lighting are two of my favorite disciplines in filmmaking and I chose to explore them with the Lotus Temple.

CONCEPT & REFERENCE
As with any project, I spent a good amount of time gathering and re-gathering references. I knew I wanted to make a magical temple with a central character and water feature. I started with the main structure and a boat, then built everything out from there. I found myself improvising more than I usually do, and it helped to stay flexible for the sake of learning/experimenting. 

MODELING
I wanted to maximize my time experimenting with cameras, lights, post-process volumes, Sequencer, and Movie Render Queue, so I tried to fill out the scene quickly with what I already had available. This meant opting for pre-made assets wherever possible. In the scene, most of the 'fillers', like foliage and decorations, were made using Megascans and decals. The building, boat, clothing, and interior sculpture were all modeled & textured by me.

CHARACTER
The base mesh for the model is one that I purchased online. I rigged the character using Mixamo and animated her in Maya. Then, I exported an .fbx of the animation and imported it into Marvelous. In Marvelous, I began by fitting all the clothing on the T-posed mesh, then used the animated .fbx to get the clothing into position. From there, I exported the animation as an Alembic and brought it back into Maya to finish the texturing & shading.
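For anyone unfamiliar with that round-trip, here's roughly what it looks like sketched with maya.cmds. The paths and node names are placeholders, and the exact export options I used aren't reproduced here.

# 1) Export the animated character as FBX for Marvelous Designer.
import maya.cmds as cmds

cmds.loadPlugin("fbxmaya", quiet=True)
cmds.select("character_grp")                       # placeholder group name
cmds.file("exports/character_anim.fbx", force=True,
          options="v=0", type="FBX export", exportSelected=True)

# 2) After simulating the clothing in Marvelous, bring its Alembic back into Maya.
cmds.loadPlugin("AbcImport", quiet=True)
cmds.AbcImport("exports/clothing_sim.abc", mode="import")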

CINEMATICS & LIGHTING
Each camera is set up to simulate real-world camera movements. I did this by using rigs (like tracks and cranes) to keep the movement grounded in reality.
For the lighting, I played a lot with shadows, using leaves and branches to add depth to the scene. I used a post-process volume to capture the overall fog and added fog cards where necessary.

POST-PRODUCTION
For compositing, I experimented with options in the Movie Render Queue, like stencils and passes, as well as UnrealReader to get the passes into Nuke. In the end, I found UnrealReader slightly clunky to use and stuck with the Movie Render Queue.

Thank you so much for taking the time to review this selection of my work. This has been an incredibly rewarding year full of creativity and learning... I absolutely love what I do and am looking forward to whatever the future will bring!

If you like what you see, you can find more at the links below. Please feel free to contact me for more information; I'm happy to discuss whatever you have in mind!

email: [email protected]
website: natalieavillarreal.com
vimeo: vimeo.com/natalieavillarreal
linkedin: linkedin.com/natalieavillarreal
instagram: @natalieavill

