Daniel Albarracin - FX Reel

Daniel Albarracin Cortes
by DanielAlbarracin on 17 May 2024 for Rookie Awards 2024

Over the past months, I've dedicated myself to crafting personal projects that try to strike a balance between my artistic and technical skills. Now, I'm thrilled to share them with you in this entry, and I hope you enjoy the results!


Animations and Camera

In this shot, a top-down camera was used, so the first thing I did was set up the terrain dimensions and create a camera animation that matched the reference. Then, using characters imported from Mixamo with static animations, I configured the transformations so that the impact would happen at the origin.

Ground Preparation

To achieve an accurate rigid body simulation, it was necessary to fracture the geometry first. The ground was divided into two vertical layers, each with its own fracturing system. In the upper layer, fractures were differentiated based on their proximity to the point of impact and on whether they were part of the area that the right character breaks with his footsteps. The bottom layer was fractured with a higher level of detail, as it was closer to the impact.

Constraints Setup

The constraints in this shot were very easy to configure, since all the pieces were made of the same material. A distinction was made between the constraints connecting the first level of fractures and those connecting the second level, in order to produce different fracture behaviors. Similarly, a proximity-to-impact attribute was used to add more randomization to the fractures.

Finally, I grouped the constraints of the pieces that were going to be deformed by the character's steps to give them a different behavior.
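As a rough illustration of this proximity-based randomization, a primitive wrangle over the constraint network might look like the sketch below. This is not the exact setup from the shot; the parameter names and falloff values are my own.

```
// Primitive Wrangle on the constraint network (a sketch, not the exact
// production setup; parameter names and values are illustrative).
vector impact = chv("impact_pos");                 // assumed impact position

// Midpoint of the two-point constraint primitive.
vector p0 = point(0, "P", primpoint(0, @primnum, 0));
vector p1 = point(0, "P", primpoint(0, @primnum, 1));
float  d  = distance(0.5 * (p0 + p1), impact);

// Weaken constraints near the impact and add per-constraint variation.
float falloff = smooth(chf("inner_radius"), chf("outer_radius"), d);
f@strength *= fit(falloff, 0, 1, 0.1, 1.0);        // fragile near the impact
f@strength *= fit01(rand(@primnum), 0.8, 1.2);     // per-constraint variation
```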

Simulation Forces

To drive movement in the simulation, I created a vector attribute pointing in the direction each piece would travel and randomized it per piece based on its distance to the center. A vector perpendicular to this one was used as the rotation axis for the angular velocity.
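In spirit, the setup resembles this point wrangle over the packed pieces (a sketch with assumed parameter names, not the production code):

```
// Point Wrangle over the packed RBD pieces (sketch; parameters assumed).
vector center = chv("impact_center");
vector dir    = normalize(@P - center);           // outward travel direction
dir.y += chf("lift");                             // small upward bias

float d     = distance(@P, center);
float speed = fit(d, 0, chf("max_dist"), chf("max_speed"), chf("min_speed"));
speed      *= fit01(rand(@ptnum), 0.7, 1.3);      // per-piece randomization
v@v         = normalize(dir) * speed;

// Angular velocity around an axis perpendicular to the travel direction
// (degenerate if dir is exactly vertical; fine for a ground-plane impact).
vector axis = normalize(cross(dir, {0,1,0}));
v@w         = axis * speed * chf("spin_scale");
```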

Final Results

The upper layer is the most visible, where the pieces break and are propelled away. However, the lower layer provides a sinking deformation that helps generate a sense of depth and power in the impact, as well as serving as collision geometry for the upper layer. The deformation of the footprints uses the same principles described for the lower layer.

Secondary Rigid Body Simulations

To add more details to the fracture and make the effect more interesting, two different debris simulations were used along with a particle simulation that filled the gaps and generated a greater variety of piece sizes in the final result.

Pyro Simulations

A significant number of rigid body simulations are accompanied by volumes, and this case is no different. So, based on the rigid body pieces, I implemented two different volume simulations to recreate the main shockwave.

And you might be wondering why I used two different simulations for the same shockwave... Well, to achieve better control over the final result, I differentiated between a first simulation with greater divergence, aiming for higher fidelity to the velocities of the pieces, and another with lower divergence and dissipation, where velocities decrease to create those lingering smoke remnants that add detail at the center of the impact.

Finally, I added a ground-level shockwave with higher velocities serving as coverage for the main shockwave, which also interacts with the exterior fire columns.

Final Results of the Ground

Dust Details from the Footsteps

I also included dust simulations in the footprint crevices of one of the characters to add more detail.

Character Simulations

More volume simulations were employed to recreate speed trails on each character.

Fire Columns

The smoke columns were created from certain parts of the assets placed on the ground surface, with a preroll added to ensure proper results once the shot starts.

Procedural Floor Shading

For the ground shading I needed to employ a plethora of texture projections and noise layers to achieve a similar result to the reference and ensure adequate visual complexity in ground patterns. Additionally, these projections were exported as AOVs in order to continue working with them in Nuke. Below, I share the setup used for the ground shading.

Rendering the Shot

This shot was rendered using Karma as the render engine, which required exporting the different FX layers and geometries to Solaris, as well as creating or importing cameras and lights. Below, I present the configuration used in this shot, which is very similar to the other shots I'll describe next.

Compositing the Shot

Finally, to enhance each shot, I always go through Nuke (using an ACES workflow) to composite all the layers, aiming to add more cohesion between them and give the shot a final look that closely matches the references. Again, below, I share the setup of this shot, which has a very similar layout to the setups used for the following shots.

Environment

For the environment creation in this shot, I simply used a grid with different level variations created from masks, along with some small rock scatters. The airplane model had to be prepared with a series of boolean operations and blast nodes to match the appearance of the airplane in the reference.

Pyro Simulations

The majority of the time spent on this shot went into creating the fire simulations and trying to match the reference as closely as possible. To recreate the fires, I divided the airplane into different parts to have control over each simulation individually. Subsequently, I added variation to the emitter and velocities, and inside the solver I played with wind forces and turbulence to achieve an interesting result.

Once I had an adequate outcome for one of the main fire simulations, the same setup was used to create the remaining ones, only tweaking the emitter animations and microsolver parameters.

Custom Scatter Field

To achieve a better result in the fire shading I decided to create my own scatter fields, allowing me to manipulate different intensities and distances for each fire simulation.
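The idea can be sketched with a volume wrangle like the one below, binding a pre-created scatter VDB alongside the simulated temperature field. The falloff and intensity parameters are illustrative, not my production values.

```
// Volume Wrangle (sketch): derive a custom scatter field from temperature.
// Assumes a "scatter" volume was created beforehand and bound together
// with the simulated "temperature" field.
f@scatter = pow(clamp(f@temperature, 0.0, 1.0), chf("falloff"))
          * chf("intensity");
```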

Ember Simulations

For the ember simulations I used particles. To address those bouncing on the ground, I employed velocity and bounce-count randomizations to achieve a believable result. Additionally, to ensure their movement aligned with the wind, I created an animated velocity field instead of using the pyro simulations themselves, which gave me greater control over the movement of the particles.
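A minimal version of that bounce randomization could live in a POP wrangle after a POP Collision Detect node, along these lines (besides the standard hittotal and id attributes, the names are my own):

```
// POP Wrangle after a POP Collision Detect node (sketch).
// "hittotal" and "id" are standard; "prev_hits" is my own bookkeeping.
if (i@hittotal > i@prev_hits)                         // new bounce this frame
{
    i@prev_hits = i@hittotal;
    v@v *= fit01(rand(@id + i@hittotal), 0.3, 0.7);   // lose random energy
    if (i@hittotal > int(fit01(rand(@id), 2, 5)))     // random bounce budget
        i@dead = 1;                                    // retire the particle
}
```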

All Simulations Together

Rendering and Compositing the Shot

Environment 

I took some Megascans assets to create this environment and placed some vegetation and trees using loops to scatter geometries and choose between different model variations and sizes. This idea can be seen in the final picture of the process view below.

Animations

The animation process was entirely done using rigid body simulations. For the helicopter rotors, only cone twist constraints were necessary to achieve a good result. On the other hand, for the Mustang animation I had to go through the entire process of fracturing, creating different types of constraints (such as glue, soft, hard, cone twist and slider), and performing post-simulation deformation of the remaining pieces that were not simulated. This was essential to obtain a correct vehicle animation that responded appropriately to surfaces and included a suitable suspension system.

You can learn more about this process in the following video.

Main Explosion 

The explosion is composed of 10 different layers, 5 of solids and 5 of volumes.

The solid layers are particle simulations (sand and rocks). I used a similar emitter for all of them to create variation in velocities and emission clusters.

For volume simulations the process has always been the same (a sketch follows the list below):

1. Obtain an emission geometry (either by using a custom emitter, as with the solids, or by starting from the simulated rocks or sand particles).

2. Create the necessary attributes for the simulation and rasterize them.

3. Implement velocities to add breaks and other interesting behaviors to the simulation.
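As a rough sketch of steps 2 and 3, a point wrangle run before a Volume Rasterize Attributes SOP could look like this; the noise and parameter choices are illustrative, not my production settings:

```
// Point Wrangle on the emission points, before Volume Rasterize Attributes
// (sketch; values are illustrative).
// Step 2: create the attributes that will be rasterized.
f@density     = fit01(rand(@ptnum),     0.5, 1.0);
f@temperature = fit01(rand(@ptnum + 1), 0.2, 1.0);
f@pscale      = chf("base_pscale") * fit01(rand(@ptnum + 2), 0.5, 1.5);

// Step 3: inherit the particle velocity and add noise for breakup.
vector n = noise(@P * chf("noise_freq") + @Time);
v@v     += (n - 0.5) * chf("noise_amp");
```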

All Simulations Together

Car Interaction Simulations

Additionally, I added different simulations to recreate the interaction of the car with the ground, including dust and rock simulations.

The first simulation is emitted from the ground, and I used the proximity of the car to the surface to determine the emission time and location. Instead of using that emitter directly, a SOP solver helped me create a fading trail for a more accurate emitter.
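The fading-trail idea reduces to a few lines inside the SOP solver, something like this sketch (attribute and parameter names assumed):

```
// Point Wrangle inside the SOP solver (sketch; names assumed).
// Input 0: ground points fed back from the previous frame.
// Input 1: the car geometry for the proximity test.
if (nearpoint(1, @P, chf("emit_radius")) >= 0)
    f@emit = 1.0;              // activate points currently under the car
f@emit *= chf("fade");         // e.g. 0.9 leaves a trail that fades out
```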

Two additional dust simulations were added using the car's wheels and its rear bumper as emitters.

For the rocks jumping from the ground during the car's journey, I preferred to use particle simulations instead of rigid bodies, as I didn't need very precise collisions, and the angular velocity could be faked using velocity and pscale attributes.
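Faking the tumbling could be as simple as a POP wrangle like the following sketch, assuming an orient attribute was initialized on the emitted rocks:

```
// POP Wrangle (sketch): fake tumbling without a rigid body solve.
// Assumes p@orient was initialized on emission.
vector axis = normalize(cross(v@v, {0,1,0}));       // perpendicular to travel
float  rate = length(v@v) / max(f@pscale, 0.01)     // small rocks spin faster
            * chf("spin_scale");
vector4 dq  = quaternion(rate * f@TimeInc, axis);
p@orient    = qmultiply(dq, p@orient);
```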

Finally, for the rocks that interact with the small cliff during the car's jump, I used a simple rigid body simulation that allowed me to obtain a nice motion and collisions.

Vegetation Wire Simulations

Usually, I would have done this type of simulation using Vellum, but in this case I wanted to give the wire solver a try to simulate the interaction of plants with the shockwave and the collision with the vehicle.

Missile Simulation

For the smoke simulation of the missile launched by the helicopter, I created an emitter at the rear part of the missile and deformed it to follow the missile's animation until the impact moment.

For the initial frames of the missile launch, I also wanted to create a fire simulation. However, since it would only be visible for 2-3 frames and didn't require much detail, as it wasn't a close-up element, I opted to create an animated geometry shaped like a fire trail, rasterize it, and render it as an emission volume to cast some light on nearby geometries.

TOPs

Something I haven't mentioned yet, but which I've consistently used in all my projects, is the tasks (TOPs) context in Houdini. It allows me to visualize different results by automatically varying parameters using TOP nodes. Below, I present a very simple setup for iterating over the divergence attribute and the positions of the points scattered on the emitter for one of the explosion layers. Subsequently, I compile a final MP4 video with a montage of the different flipbooks for each variation.

Rendering and Compositing the Shot

Motivation and Idea

While searching for plant and tree growth algorithms online, I stumbled upon a scientific research article titled 'Modeling and visualization of leaf venation patterns'. In it, various patterns of veining in leaves were described, as well as explanations of the Space Colonization Algorithm.

Intrigued by this information, I sought out interesting real-life leaf references with the idea of implementing a tool that would allow the creation of a wide variety of leaves resembling those found in icy climates.

The goal was to apply the techniques discussed in the article to create vein patterns on the leaves.

Inside the HDA

The HDA builds the leaf from scratch and ultimately delivers a fully customizable result in terms of shape, size, vein patterns, and a wide range of elements characteristic of icy climates.

HDA Controls

The HDA's menu is divided into three sections:

1. Main Leaf Controls.

2. Growth Algorithm (for venation patterns).

3. Ice details.

This differentiation, along with the variety of options available in each section, allows the user to configure the leaf easily, resulting in endless combinations.

Space Colonization Algorithm 

To explain the Space Colonization Algorithm simply, there are two types of elements: attractor nodes and vein nodes. Attractor nodes, as the name suggests, influence the vein nodes to obtain a growth direction. Vein nodes are the ones that end up forming the new branches or veins.

The process could be summarized as follows:

1. Identification of the closest vein node to each attractor node.

2. Obtaining a growth direction for each vein node by averaging the directions from its associated attractor nodes, and creating a new vein node one step along this direction.

3. Removal of the attractor nodes that now have a vein node too close to them.

4. Obtaining new attractor nodes. If they fall closer than a certain margin to the existing attractor nodes, they are discarded.

In my project, the algorithm was replicated using VEX inside a SOP solver, and the steps mentioned before were slightly modified to adapt to the project's needs.
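Without reproducing my exact network, the core of one growth iteration looks something like the detail wrangle below. It is a simplified sketch of steps 1 and 2; the attractor cleanup of step 3 would run in a separate wrangle over the attractor stream.

```
// Detail Wrangle inside the SOP solver (simplified sketch).
// Input 0: the vein points grown so far; input 1: the attractor points.
float grow_step = chf("grow_step");

int    nveins = npoints(0);
vector dir[];  resize(dir, nveins);
int    hits[]; resize(hits, nveins);

// Step 1: associate each attractor with its closest vein node and
// accumulate a direction on that node.
for (int a = 0; a < npoints(1); a++)
{
    vector apos  = point(1, "P", a);
    int    vnear = nearpoint(0, apos);
    if (vnear < 0) continue;
    dir[vnear] += normalize(apos - point(0, "P", vnear));
    hits[vnear]++;
}

// Step 2: spawn a new vein node along each averaged direction and keep
// the branch connectivity as a polyline.
for (int v = 0; v < nveins; v++)
{
    if (hits[v] == 0) continue;
    vector vpos  = point(0, "P", v);
    int    newpt = addpoint(0, vpos + normalize(dir[v]) * grow_step);
    addprim(0, "polyline", v, newpt);
}
```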

Division in Two Solvers

I decided to split the growth patterns into two separate solvers: one that generates the main structure of the veins, and another that creates the secondary veins using random points from the output of the first solver. This allows for better control over the patterns, as well as more independence between the overall structure and the smaller veins.

HDA Breakdown

Masks for Shading

Some masks were created during the development of the HDA and used in the shading stage.

Branches HDA

The leaves were the fundamental part of this project, and in my case I already had a reference I wanted to match, so I placed the branches and leaves manually. However, I took this opportunity to implement a simple branch creation tool with the same icy-climate characteristics as the leaf HDA, and I built a small setup combining both HDAs to obtain any combination of branches and leaves in case I want to use it in another project.

Shading of the Leaves

As I said before, I used the masks generated by the HDAs in order to shade the leaves and branches, and also exported them as AOVs to use them in the compositing stage.

Rendering and Compositing the Shot

General Idea of the System

The creation of the creature began with a simple curve simulating the path of movement for the symbiote. Subsequently, multiple layers of curves, each with their own animations, were generated and integrated to enhance the visual complexity of the creature's body. Finally, these curves were transformed into spheres, and VDB techniques were employed to achieve a more intricate mesh.

VEX Main Parts Overview

The core of the creature is the main structure, formed by clusters of curves. All curves were created from the same base curve, and each has its own animations and attributes, but curves in the same cluster remain related. These clusters introduce additional movement patterns, so whenever the creature encounters an obstacle, the clusters attempt to separate, mimicking the creature's adaptation to new surfaces.

The attachments can be understood as the legs or tentacles of the creature. Like the curves of the core, they all stem from the same base curve. Each attachment possesses attributes to initiate the extension and detect when retraction is necessary.

Finally, the attachment extensions are small bifurcations located at the end of the legs to add some interest to these areas of the creature.

Meshing Process

To create a renderable geometry of the symbiote, I used VDB operations, with the pscale attribute of the points as the basis.

Wetmap

To add some interaction between the symbiote and the environment, I used a SOP solver to create a very basic geometry, which I rendered as fluid to simulate a kind of slime that the creature leaves stuck to the surfaces.
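Conceptually, the wetmap part of the solver boils down to something like this sketch (attribute and parameter names assumed; the actual setup also handled the geometry rendered as slime):

```
// Point Wrangle inside the SOP solver (sketch; names assumed).
// Input 0: surface points carrying the wetmap, fed back each frame.
// Input 1: the symbiote geometry for the proximity test.
if (nearpoint(1, @P, chf("wet_radius")) >= 0)
    f@wet = 1.0;                   // fully wet while the creature touches
else
    f@wet *= 1.0 - chf("decay");   // otherwise dry out gradually
```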

Shading of the Symbiote

For the creature shading, I used some attributes created with VDB operations, such as the gradient and curvature.

Rendering and Compositing the Shot

Environment

For the environment of this project, I drew on various references I found online of rivers in snowy climates. I started by creating a very basic blocking to get an idea of the dimensions and positions of the elements, and then I modeled the main assets (terrain and bridge). I used some rock and branch assets from Megascans.

However, all the assets would be covered with snow, and instead of relying on finding the appropriate assets online, I decided to create a snow placement tool to be able to give the layout the appearance I desired.

Below these images, I show some tests done with the tool.

Trees and Other Assets

Some tree assets I found online already had snow geometry on their surfaces, so I didn't need to pass them through the tool to make them usable. However, for others, it was necessary. The tool works for all types of surfaces, so there was no problem customizing the appearance of the assets that needed it.

Flip and Whitewater Simulations

The workflow for the flip simulations began with the preparation of the collision geometry. Subsequently, an initial state of the river was created, and boundaries with custom velocities were added to the simulation to emit and collect the fluid at the limits of the simulation. Finally, the particles were converted to geometry after the simulation.

For the whitewater, I used the results of the flip simulation and created its emission based on the vorticity of the river flow. Within the simulation itself, the goal was to get a whitewater that maintained an interesting aesthetic.

Two Rivers in One

If you watched the shot and didn't realize that this river is actually formed by "two different rivers", it means the trick has worked properly and the result is nice.

I needed to maintain a high-resolution simulation if I wanted interesting details close to the camera, but to avoid wasting processing power on areas far from the camera, I decided to divide everything into two parts, using the end boundary of the far river as the starting point for the close-up river, and then merging the meshes of both fluids post-simulation.

The far river has one fifth the resolution of the river close to the camera.

Wetmap

Finally, to add some interaction between the river and the rocks, I decided to create a wetmap that was used in the shading stage to slightly modify the appearance of the materials.

Environmental Simulations

The work didn't end here. It was necessary to recreate weather conditions in order to have a cohesive result.

Snowflakes were created using different particle simulations, and their properties were determined by their proximity to the camera, allowing me to have better control over the movement and sizes I wanted in each area.
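One way to wire camera distance into the particle properties is a wrangle like the one below; the camera path, ranges, and the dragscale attribute are assumptions for illustration:

```
// Point Wrangle (sketch): drive snowflake size and drag by camera distance.
// "/obj/render_cam" and the ranges below are assumptions.
matrix cam    = optransform("/obj/render_cam");
vector campos = cracktransform(0, 0, 0, {0,0,0}, cam);   // camera position

float blend = smooth(chf("near_dist"), chf("far_dist"),
                     distance(@P, campos));               // 0 near, 1 far
f@pscale    = fit(blend, 0, 1, chf("near_scale"), chf("far_scale"));

// Custom attribute (assumed) that a drag force later reads to make
// distant flakes drift more uniformly than close-up ones.
f@dragscale = fit(blend, 0, 1, 3.0, 0.5);
```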

Two volume simulations were created to add some atmosphere to the shot and achieve a more interesting result.

Rendering and Compositing the Shot

In this case, I decided to use Mantra instead of Karma to render this shot, and everything was separated into layers to be composited later in Nuke.

Environment

To create the mountains, I decided to use Houdini's heightfield tools and save the masks for the shading stage.

Airplane Animation

The airplane animation was created using a curve, and the main idea was to make the airplane cross the screen from left to right, gradually approaching the valley formed between the two mountains.

Simulated Clouds Creation

For the clouds, as I mentioned earlier, I used pyro simulations instead of static VDBs, which gave me the freedom to choose the exact formation point where I wanted my cloud to stop growing. This approach also provided me with the typical characteristics of cumulus clouds, achieved through divergence attributes and microsolvers inside the simulation.
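As an illustration (not my exact setup), capping the growth could be done by fading a divergence attribute on the source points above a chosen height before rasterizing it for the solver:

```
// Point Wrangle on the cloud source points (illustrative sketch).
// Fade divergence to zero above a formation height so the cloud billows
// below the cap and stops growing past it.
float cap = chf("formation_height");
f@divergence = chf("base_divergence")
             * (1.0 - smooth(cap - chf("fade_band"), cap, @P.y));
```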

TOPs to Create Variations

But this shot would be composed of hundreds of clouds, and obviously simulating and placing them one by one was not feasible. So, once again, TOPs helped me generate different variations of three different types of clouds (flat-bottom cumulus, rounded cumulus and stretched cumulus).

Instancing and Viewport Visualization

Finally, to position all the clouds and visualize them in the viewport without causing program crashes, they were converted to geometry and their characteristics were randomized in a loop. Additionally, to avoid placing clouds in non-visible areas, a camera frustum was used to remove unnecessary copy points.
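The frustum culling itself fits in a few lines of a point wrangle over the copy points; the camera path and padding parameter here are assumptions:

```
// Point Wrangle over the cloud copy points (sketch; camera path assumed).
vector ndc = toNDC("/obj/render_cam", @P);
float  pad = chf("padding");     // keep clouds partially entering frame
if (ndc.x < -pad || ndc.x > 1 + pad ||
    ndc.y < -pad || ndc.y > 1 + pad ||
    ndc.z > 0)                   // NDC z is negative in front of the camera
    removepoint(0, @ptnum);
```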

Once I was happy with the results, a simple toggle button would make the clouds render ready.

Rendering and Compositing the Shot

