Tech Log
A written log documenting my workflow and technical approach to sound designing for linear and non-linear audio-visual materials, from initial ideas and early sketches to the final work.

Project Files: Dolby project files (download for high quality)

Linear

Online Production Log

For my sound design assignment I will be creating audio for both linear and non-linear audio-visual materials. My linear project focuses on designing sound for the film Deep Impact. Released in 1998, the film is based on the possibility of a comet impacting Earth and the events that would follow. It arrived at a time when the public was particularly interested in asteroid impacts, thanks to scientific discussion and media coverage of threats from space. Armageddon, starring Bruce Willis, was released the same year, adding to this cultural moment as both films explored similar themes of global disaster.

I would use this film as an opportunity to explore spatial sound and mixing in Dolby Atmos, as it contains scenes well suited to designing audio with movement to create an immersive feel.

I would begin my project by watching the film and selecting an appropriate scene, examining the original sounds for a rough idea of my own sound selection. I would then remove the original audio to begin sound designing from scratch. Audio technology and sound design techniques in the late 90s were not as advanced as today, which is apparent when watching the film now. Thinking back to my younger self, I remember paying little attention to the audio, as visual technology was the main attraction at the time, with CGI and other advancements emerging. Recognising this, I would choose the scene that offered the most potential for my audio-to-visual assignment.

 

Management of Planning/Production

After finding and examining the scene I wanted to use, I would plan a storyboard for it and split it into segments for better management and efficiency. This gives me a workflow that leads to steady progression, with less chance of losing track or becoming unmotivated, and breaking the work into small tasks lets me plan the sounds for each visual event more carefully. I would create a rough sketch and write up notes for ideas to use with the visuals and events in the film.

Storyboard/Scene

Scene 1


The scene begins with a large meteor breaking through the Earth's atmosphere. I imagine the sound of an explosion as the meteor breaks through, along with sizzling noises as it rushes down to Earth. Although there is no sound in space, many audio-visual productions, from films to nature documentaries, exaggerate sounds to create a sense of immersion. For example, overhead shots of pods of dolphins in the BBC's Our Planet series are captured with a drone, and while there is ambience from the splashes and water, it is drowned out by the noise of the drone's blades. Likewise, when macro lenses capture animals too small to register any usable sound, the BBC's sound design team creates audio for the visuals to bring the scene to life. The scene is short and sets the stage for the beginning of the end.


The next scene shows a couple on a motorbike driving fast along a road, passing vehicles while the meteor descends in the sky behind them. This scene would involve the noise of a motorbike cruising with slight acceleration, with the whoosh of wind heard as vehicles pass by in the background. I would also need to include the meteor, with sizzling noises layered over rumble. Many objects in the scene are moving, such as the meteor and the cars, so I would make those sounds dynamic, with movement of their own. Another part of this scene is a traffic jam full of people witnessing the meteor descending.

Scene 2

The traffic jam scene would involve a mix of noises to give the sense of a commotion: layers of crowd noise, the intermittent honking of cars and trucks, and the noise of the meteor passing overhead. I would take the scene's transitions into account and ensure the sounds stay dynamic and match the visuals in relation to factors such as distance and what is on screen. The next part of this scene shows a couple on a beach awaiting their fate, watching the meteor impact the ocean, which leads to the final impact.

Scene 3


With the meteor impacting the Earth, we see a large tsunami approaching, seagulls being pushed by the force of the wind, and the wave destroying two oil rigs. The sound of the tsunami would start low and gradually increase in volume as it approaches the front of the screen. I would plan how to create this movement: in the static mix I would use fade-ins and fade-outs to create the rise in volume, while in the dynamic Dolby Atmos mix I would assign the sound to an object and use automation to further enhance its movement through space.

Scene 4


The scene transitions back to the couple we previously saw on the beach, now witnessing the incoming tsunami. The sound of the ocean swelling, heavy wind, and surging waves would make this scene very dramatic, and spatial audio would take the visuals to a level they couldn't reach back in 1998 when the film was made. I would use layers of heavy rumble, assign the tsunami sounds to an object in Dolby Atmos, and move them towards the screen to give the visuals an impactful presence. From this point onwards the tsunami dominates the audio, so I would ensure it is neither monotone nor constant, giving it movement by automating levels and sound objects to match the direction and motion of the wave.

Scene 5

The scene transitions to New York City, where the silence is disrupted by the approaching roar of the tsunami. The wave arrives with force, destroying the Statue of Liberty and then the city itself. This scene would involve a mixture of other sounds added to the tsunami mix, ranging from metal impacts to layers of crashing noises. Some of the crashes would need to feel "heavy" to suit the collapsing buildings, meaning low-end focused, while some high-end content would need to be present to cover the sound of glass and metal.

Scene 6

The film ends by showing the damage done and the aftermath of what's left of the city. Deep water has taken over, with the Statue of Liberty's head seen bouncing along what was once a road. The sounds for this scene would need a submerged quality, which I could create with heavy reverb. At one point the statue's head bounces off the ground, for which I would need to design a metallic clunking noise.

Sound Design

(Key Techniques/Layers)

I started my project by adding marker points in the film to keep track of events. I would then begin designing the meteor's sound using a synthesizer called Serum, although any synthesizer would do as long as it had a noise generator for the rumble and sizzle. I would mute the oscillators, as I only needed the noise generator, and use pink noise as my starting sound, lowering its pitch to create a low rumble. I would load an EQ to visualise the low-end information and begin boosting the low end, then add saturation and some drive for distortion. Following this, I would insert a compressor to bring the dynamics back to a point where the sound was more balanced rather than purely low-end focused. This would introduce slight artefacts and high-end information, which suited the large meteor sound I was targeting. I would place another EQ to cut and control the low end and boost some of the mid frequencies to shape the sound further.

With the low-end rumble finished, I would design another rumble to layer with the first, slightly higher in pitch. Starting from the same patch, I would introduce the first oscillator alongside the noise generator, lowering its octave as the initial sound was not to my liking. I would then route the noise generator into the first oscillator to create FM synthesis, with the oscillator as the carrier and the noise generator as the modulator. This causes a wobble effect on the carrier, whose rate depends both on the noise generator's waveform and on how much FM I introduce.

This sound would initially be a sizzle, but with an EQ cutting out all the high frequencies it formed into a second rumble focused on the mids rather than the low end. I would then send the audio to an aux track and load a large reverb on the parallel signal, set fully wet so only processed audio passes through. Once the sound was created, I would begin layering it in the DAW timeline, ensuring the audio syncs with the visual cues.
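The noise-modulated FM idea above can be sketched outside the synth as well. The following is a minimal Python/numpy illustration, not part of my DAW workflow; the carrier frequency, FM depth, and filter coefficient are made-up values chosen only to show the wobble effect of a noise modulator on a low sine carrier.

```python
import numpy as np

SR = 48000
DUR = 2.0
t = np.arange(int(SR * DUR)) / SR

rng = np.random.default_rng(0)
noise = rng.standard_normal(t.size)

# crude one-pole low-pass to darken the white noise into a pink-ish modulator
lp = np.empty_like(noise)
acc = 0.0
for i, x in enumerate(noise):
    acc += 0.01 * (x - acc)
    lp[i] = acc

carrier_hz = 55.0   # low fundamental for the rumble (illustrative)
fm_depth = 40.0     # how many Hz the noise pushes the carrier around

# FM: the filtered noise modulates the carrier's instantaneous frequency
phase = 2 * np.pi * np.cumsum(carrier_hz + fm_depth * lp) / SR
rumble = np.sin(phase)

# long fade-in so the rumble swells in rather than starting abruptly
fade = np.minimum(t / 1.0, 1.0)
out = rumble * fade
```

The stronger the FM depth, the more irregular the wobble, which matches how increasing the FM amount in the synth roughened the carrier.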

I would continue layering rumbles throughout the scene where needed, whether the meteor was on screen or in the background. I would align the audio takes and add fade-ins where required so the audio comes in smoothly, as some sounds would otherwise start abruptly. For example, I needed the rumbles to start low in volume and gradually rise, giving the meteor a sense of movement, which I achieved with a long fade-in. I would also adjust the tension of the fade-in, which controls its midpoint, letting the rumble fade in less linearly. Finally, I would add an EQ with a low-pass filter to remove some high frequencies, as I needed the rumble to sound deeper.
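The fade tension control can be thought of as bending a straight fade line into a curve. Here is a rough Python/numpy sketch of the idea (the power-curve formula is my own stand-in, not how the DAW necessarily implements tension):

```python
import numpy as np

def fade_in(n, tension=1.0):
    """Fade-in curve of n samples. tension > 1 pulls the midpoint down
    (a slower, more dramatic start), tension < 1 pushes it up,
    and 1.0 gives a straight linear fade."""
    x = np.linspace(0.0, 1.0, n)
    return x ** tension

rumble = np.ones(48000)                      # stand-in for a rumble layer
swell = rumble * fade_in(rumble.size, 3.0)   # slow, dramatic swell
```

With tension at 3.0 the curve stays low for most of its length before rising quickly, which is the behaviour I wanted for the meteor's approach.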

At one point in the scene, the comet is seen from the perspective of a man in a jeep, flying past as a reflection in the jeep's window. I would put myself in that perspective and imagine how the sound would behave in such an environment. This helped me think outside the box, consider which sounds I would expect to hear and how they would move, and identify small details such as the door closing and how loud to set the meteor in relation to the camera's proximity to it.

My next task was to find audio for the motorbike scene and use a sample of a motorbike accelerating away. Once I layered the audio on the timeline, I would time-stretch the audio to match the motorbike's visual speed, as the original audio was too fast, and the visuals of the motorbike seemed like it was at a cruising speed. This also lowered the pitch of the motorbike's acceleration noise, giving the sense of a gradual acceleration. I would next use the audio of a car passing by, which has a distinct whoosh noise usually heard when passing an object at speed. Similar to the motorbike audio, I would time-stretch the audio to fit the visuals of the cars and also the motorbike's audio. I would add one more layer of audio, which would be that of a cruising truck. Although there is no truck in this scene, the audio was perfect to use in the background at a low volume to accompany the sound of the car and motorbike engines, giving a sense of locomotion.
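The pitch drop I describe is characteristic of a "tape style" stretch, where resampling lengthens the clip and lowers its pitch together. A minimal Python/numpy sketch of that behaviour (a real DAW time-stretch tool offers other modes that preserve pitch; the 440 Hz tone is just a stand-in for the engine sample):

```python
import numpy as np

def slow_down(audio, factor):
    """Naive tape-style stretch: resampling by `factor` makes the clip
    `factor` times longer and drops its pitch by the same ratio
    (no formant or pitch correction)."""
    n_out = int(len(audio) * factor)
    positions = np.linspace(0, len(audio) - 1, n_out)
    return np.interp(positions, np.arange(len(audio)), audio)

sr = 48000
t = np.arange(sr) / sr
engine = np.sin(2 * np.pi * 440 * t)   # stand-in for the motorbike sample
cruise = slow_down(engine, 1.5)        # 1.5x longer, pitch down ~7 semitones
```

A 1.5x stretch lowers the pitch by roughly seven semitones, which is why the accelerating sample ended up sounding like a cruise.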

The next scene would be the build-up of traffic and crowds on the highway, which I would begin by layering samples of car horns. Some of the horn samples I would duplicate, altering the duplicates' pitch and time-stretching them to create variations. I would layer two samples of crowd noise and set their levels according to the scene's dominant sound, ensuring they sit in the background.

For the next scenes, the sound design would involve a range of elements—particularly those associated with water, wind, explosions, impacts, and the destruction of various materials. This section, which I refer to as the beginning of the end, is action-packed and required impactful audio to support the visuals.

The scene begins at a beach where a couple witnesses the meteor approaching the ocean. Once again, my approach involved placing myself in the scenario, imagining strong winds with a deep rumble in the background and a sizzling, high-pitched texture gradually increasing in volume as the meteor descends toward Earth. I would keep the low rumble present throughout the background and layer in a recording of strong winds through trees, manipulating it to simulate ocean waves and crashing wave crests.

To reflect the aggressive water movement visually, I raised the volume of the wave sounds above all others, making it the dominant element. I added a fade-in to control how suddenly this audio enters, depending on the scene’s pacing. For example, in a moment where the water parts as the meteor passes over it, I applied a fast fade-in delayed slightly in relation to the meteor’s rumble and sizzle to maintain realism in how the physics and sound would unfold.

For the impact moment, where the meteor crashes into the ocean, I aimed for a highly impactful sound to reflect the immense force. I used a long fade-in on all ambient sounds, and at their peak volume, I inserted a short, heavy kick sound timed precisely with the meteor’s impact. This emphasized the moment visually and aurally.

In the following scene, I designed the sound of an oil rig being destroyed by the tsunami. My approach began with considering the structural composition of the rig—primarily metal—and imagining how metal would sound under deep water or during a heavy collision. Based on this, I selected a percussive one-shot with a hollow character that fit the context well.

I applied a reverb plugin to make the sound feel larger and less dry. Then, I added EQ to remove high-end frequencies, aiming for a darker tone. I used a pitch-shifting plugin to lower the pitch, ensuring the formant and audio quality were maintained—avoiding artifacts often introduced when stretching audio. To complete the chain, I added a pitch-shifter with delay emulating the Eventide H3000 Factory unit, further enhancing the sound character I was targeting.

My next task was to design the sounds for the various types of water and destruction-related noises featured in the film scenes. I began by using an audio recording of underwater ambience, which I mixed with previously created meteor-related layers, such as rumbles. I also layered in ice cracking sounds, time-stretching the audio to simulate the sound of destruction or large structures collapsing, such as buildings.

To enhance the effect, I duplicated the audio and adjusted the formant and pitch, layering it with the original to create a larger, fuller impact—as a single layer alone wouldn't convey the weight of destruction. I also included breaking glass layered subtly underneath, keeping the volume low so it would sit within the mix rather than dominate it.

Since I had routed most of my destruction-related sounds through an aux track with a large reverb on the insert, I bounced a recording of only the reverb tail, which I then duplicated and reused at key points to intensify the tsunami and mass water movement scenes.

My next objective was to design audio for the crowd fleeing the incoming tsunami. I used three different crowd recordings, ranging from stadium audiences to regular public gatherings, layering them to simulate a panicked, chaotic atmosphere. The tsunami water sounds were introduced with a gradual fade-in, beginning with deep rumbling, followed by the high-frequency textures of breaking waves.

Once the full film had been sound designed and all elements were properly layered, I reviewed the project and made final adjustments. I then completed a static mixdown and consolidated all files in preparation for Dolby Atmos spatial audio mixing.

To begin mixing in Dolby Atmos I would require a DAW that supports it natively. For that reason I would use Logic Pro, as it features Dolby Atmos as standard.

I would import my stems from FL Studio into Logic Pro along with the film's visuals, ensuring they align and stay in sync. To preserve my original stereo mix, I would create an alternative project named to correspond to the Dolby Atmos mix. I would then set the Logic Pro project to be mixed in spatial audio (Dolby Atmos) with a surround format of 7.1.2. Lastly, I would set the Atmos output to Binaural, since I was mixing on headphones. With these settings in place I was ready to commence mixing in Dolby Atmos.

 

I would next audition the sounds to get an idea of how to creatively mix the audio to the visuals in spatial audio. I would set certain audio as objects, which would let me control them in space and time, giving realism and immersion to the scenes. With audio as objects I am able to automate it, further enhancing the listener's feeling of being in the scene.

I would take this opportunity to experiment with expressive MIDI control and try out MIDI driven by infrared hand tracking. I routed MIDI to hand gestures, specifically the X and Y axes, so I could control the object panner in Logic Pro. With this setup I could assign the object panner to my hand movement and the tracked nodes on my hand, which in turn created expressive, fluid movements that were converted into the audio movement I had assigned in Logic Pro's 3D object panner. This helps bring the sounds and scene to life and gives the production an immersive feel overall. I would assign MIDI channel 1, driven by my hand moving on the Y axis, to an object's elevation in the 3D panner; MIDI channel 2 to left and right movement (X axis); and my forward and backward movements (Z axis) to MIDI channel 3.
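The axis-to-channel routing above can be sketched as a small mapping function. This is a hypothetical Python illustration of the idea, not the actual hand-tracking software's API; the CC number and the 0.0-1.0 tracker range are assumptions of mine.

```python
def hand_to_cc(pos):
    """Map a normalised hand position (x, y, z, each 0.0-1.0) to three
    7-bit MIDI CC messages, one channel per axis, mirroring the routing
    described above: channel 1 <- Y (elevation), channel 2 <- X
    (left/right), channel 3 <- Z (front/back). CC number 1 is an
    arbitrary illustrative choice."""
    x, y, z = (min(max(v, 0.0), 1.0) for v in pos)   # clamp to tracker range
    to7bit = lambda v: int(round(v * 127))
    return [
        {"channel": 1, "cc": 1, "value": to7bit(y)},  # elevation
        {"channel": 2, "cc": 1, "value": to7bit(x)},  # left/right
        {"channel": 3, "cc": 1, "value": to7bit(z)},  # front/back
    ]

msgs = hand_to_cc((0.5, 1.0, 0.0))   # hand centred, fully raised, pulled back
```

Keeping one axis per MIDI channel is what lets each gesture be learned by a separate panner parameter without the three movements interfering with each other.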


To control the parameters of the 3D panner in Logic Pro I would need to pair the three movements/hand gestures with parameters in the panner. This is done by moving a parameter so that Logic learns the movement and which parameter it is associated with.

With all hand gestures assigned, I would begin experimenting with my hand movements and manipulating the audio's motion in relation to the film's visuals. This consumed a lot of my time, as I first had to calibrate and master my hand movements in relation to the sound object in the scene being edited. Once I was confident in my approach, I would study the scenes in sections and find ways of manipulating the audio in a Dolby environment.

My next task was to record the audio's movement as automation, done by arming a track's automation setting to Latch. I would prepare my hand position, and the moment I pressed play I would perform the audio's movements while watching the film. Once a scene was done I would stop recording and set the track's automation back to Read to review the outcome. When I was happy with the recorded automation, I would find other audio used for the same visual and copy and paste the automation, saving me the time of recording extra takes and re-aligning the sounds to the visual object. Before moving on to the next section of scenes, I would practise my hand movements in relation to the movement in the visuals, such as the comet's path. This was tricky, as I would at times confuse myself over whose perspective the sound's movement should follow: the viewer, the scene itself, characters in the film, or a mixture of all three? With this in mind, I tried to balance my movement design to serve both perspectives. Once I had finished recording and editing the automation, I would review my work and fine-tune it.

With the creative stage complete, I would focus on the critical side of the project. This involved checking the levels of my audio and balancing everything with panning or volume adjustments. I would also analyse the sounds' movement in the Dolby Atmos master plugin in Logic Pro to understand how all the audio objects move in relation to each other and the film's visuals. As I was creating audio for a film, I had to adhere to the loudness standard for film, TV, and commercial work of -18 LUFS. I inserted a loudness meter on my master chain, placed after the Dolby Atmos plugin, to measure the average loudness (integrated LUFS), and a limiter before the Atmos plugin to ensure optimal levels without distortion, helping to maintain clarity and good dynamic range in my mix. After adjusting the limiter and getting my master to hit -18 throughout the film, I saved my project and prepared to bounce my Dolby master in the appropriate format, ADM BWF (Audio Definition Model Broadcast Wave Format), which is used for Dolby mixes and includes all the metadata for the Atmos renderer. I noticed that I couldn't render the project if my limiter was not correctly set to 7.1.2. With my project rendered as an ADM BWF file, I checked whether I was happy with my overall results and reflected on areas I could improve in future spatial audio projects.
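The level-matching step above amounts to measuring the mix's loudness and applying a gain offset to reach the target. Here is a deliberately simplified Python/numpy sketch: plain RMS stands in for integrated LUFS, whereas a real meter applies ITU-R BS.1770 K-weighting and gating (a library like pyloudnorm does this properly), so treat this as illustration only.

```python
import numpy as np

TARGET = -18.0   # integrated loudness target used in the mix above

def rough_loudness_db(audio):
    """Plain RMS level in dBFS, a crude stand-in for integrated LUFS
    (no K-weighting, no gating)."""
    rms = np.sqrt(np.mean(np.square(audio)))
    return 20.0 * np.log10(max(rms, 1e-12))

def gain_db_to_target(audio, target=TARGET):
    """How many dB of gain bring the rough measurement to the target."""
    return target - rough_loudness_db(audio)

# stand-in mix: a quiet 100 Hz tone, one second at 48 kHz
mix = 0.25 * np.sin(2 * np.pi * 100 * np.arange(48000) / 48000)
adjusted = mix * 10 ** (gain_db_to_target(mix) / 20.0)
```

In practice the limiter then catches any peaks the gain offset pushes too high, which is why it sits before the meter reading is re-checked.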

Summary

This being my first ever attempt at working in Dolby Atmos, I found the project very challenging but also very interesting, as I was learning new skills and techniques for making use of spatial audio in my work. I struggled with the automation process, as I was not too familiar with Logic Pro's automation, and I could hear sections in the film where the automation continued where it shouldn't, causing some scenes to lose immersion. I also struggled to choose the right direction for audio objects to travel and revisited some areas to record new automation. Once I had a workflow and process going, I found it easy to progress and took the time to shape the sound movements better. I found it quite difficult to monitor the audio in my headphones, as the binaural effect confused my decisions, specifically around the low end. I feel I held back on post-processing because I was new to mixing in spatial audio; the only processing was a limiter on my master track. Thankfully, I had carried out a static mix prior to mixing in Dolby, which gave me a starting point. As I grow more confident mixing in Dolby, I will definitely push myself to further understand and work with spatial audio.

Non-Linear

For my non-linear audio-visual assignment I chose to sound design audio for an interactive logo based on the Deep Impact film's cover. I chose resynthesis as my form of sound design, later adding effects to process the audio. When the logo is clicked, the sound is triggered and plays. My goal was to create a robotic sound containing both low and high frequency information, as I had learnt in my previous assignment not to focus too much on low-end sounds, which sounded bland.

My first task was to load a basic synthesiser in which I could use white noise. I would experiment with the noise and add a low-pass filter to the sound. I then had the idea of automating the filter to create a sweep as my first layer. I would draw an automation that rises over time and descends at the end, giving a transition sound. I would aim to make this sweep slightly long so it would accompany the other sounds I would later create. Once I was happy with the sound, I would consolidate it into a WAV file with the sweep effect baked in.
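The automated filter sweep above can be sketched in Python/numpy with a one-pole low-pass whose cutoff follows the drawn automation. The cutoff range, the 80/20 rise/fall split, and the filter design are my own illustrative assumptions, not the synthesiser's actual filter.

```python
import numpy as np

SR = 48000
n = int(SR * 1.5)                      # a slightly long sweep, as in the text
rng = np.random.default_rng(1)
white = rng.standard_normal(n)

# cutoff automation: rise for 80% of the sweep, then fall back at the end
rise = int(n * 0.8)
cutoff = np.concatenate([
    np.linspace(200.0, 8000.0, rise),
    np.linspace(8000.0, 200.0, n - rise),
])

# one-pole low-pass whose coefficient follows the cutoff automation
sweep = np.empty(n)
acc = 0.0
for i in range(n):
    a = 1.0 - np.exp(-2.0 * np.pi * cutoff[i] / SR)
    acc += a * (white[i] - acc)
    sweep[i] = acc
```

Rendering the automation into the audio like this mirrors consolidating the track to a WAV with the sweep baked in.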

With my first sound formed, I would begin designing the next one using a synthesizer with resynthesis capabilities and FM synthesis. I would load the audio I had designed into the synthesizer, resynthesise it, and use it as my modulator, affecting my chosen carrier, a sine wave. I would then manipulate the sound further by transposing it lower and shaping it with the ADSR settings to start slowly and last longer. This mixed well with my first sound, so I moved on to adding another partial/sound source in the synthesizer. This second sound would be higher in frequency, intended to be layered with the others. I would also carry out FM synthesis by introducing the modulator into the second partial's carrier and assigning an automation to the FM ratio parameter. Layered with the other sounds, this noise gave the whole composition a sci-fi, technological character.

With my sound finally formed, I routed all the sounds into a bus track and began processing the audio with effects such as delay and compression to finalise it. I inserted a delay plugin on the aux track, affecting the audio routed into it, and a compressor to control the dynamics. Lastly, I inserted a dynamic resonance suppressor, as the sound was too harsh in the high end. Once complete, I listened to my final sound and consolidated it as my final task.

Summary

This project allowed me to dive deeper into the world of sound design over visuals, presenting new creative and technical challenges. One of the tasks I enjoyed was recording live sounds, such as during Storm Darragh, which brought heavy rain and wind, and creating sounds for the gears with metal objects, processing the recordings with effects to take the dry sounds to a whole new level. I found the synthesis tasks quite challenging, as I would at times be lost thinking of a sound to match a visual, such as the sounds for the fairies. I also struggled to find a use for new tools I intended to use, such as hand-gesture MIDI controllers. Other challenges included aligning footsteps and the voicings of characters.

Making a remix of the famous Saria's Song (the Lost Woods theme) from the Zelda OST for the woods scene was both rewarding and complex, as it needed to feel nostalgic yet unique. Overall, this project pushed me to further test my abilities and creativity in tasks such as field recording, synthesis, and post-production to create an immersive and cohesive audio experience for the visuals.
