

Online Production Log
For my sound design assignment I will be producing audio for both linear and non-linear audio-visual materials. My linear project focuses on designing sound for a fan-made animation based on The Legend of Zelda: Majora's Mask, a well-known and loved video game released in 2000. The game is known for its dark themes and is highly praised in the gaming world for its immersive storytelling and for atmospheric sound design and music suited to its lore. The animation reimagines the game's characters, environments and lore, which I saw as an ideal opportunity for my linear production. In video games and animation, audio plays a key role in building immersive worlds and emotional connection. Games often develop a unique, instantly recognizable soundscape, such as the iconic music and sounds of Super Mario or, in my case, of The Legend of Zelda. With this in mind, my sound selections and creations should align with the animation, reinforcing the identity of the game and strengthening the viewer's connection to Zelda's world and story.
I would begin the project by watching the animation and removing the original audio before starting to sound design, ensuring I incorporated synthesis techniques, sampling, field recording, audio editing and electroacoustic approaches.
Management of Planning/Production
I began by watching the animation to get a feel for the story, then planned a storyboard and divided it into segments of scenes. This way I could plan the story's events and sounds against the visuals. I created a rough sketch and write-up with cues and notes for sound ideas to sync with the visuals and events.

Storyboard/Scene
Scene 1

The animation opens in darkness, with the subtle introduction of a mask slowly emerging from the shadows. The scene is brief, with the mask fading out just as quickly as it appeared, setting a mysterious and foreboding tone.

We transition to a church where a lone man sits at a piano, seemingly lost in thought. The atmosphere is heavy, with the sound of intense rain pouring outside creating a melancholic backdrop. As the man begins to hear faint whispers emanating from the mask, a sense of unease builds. His expression shows sadness and fear, which prompts him to play a few piano notes and chords. The sound, resonating in the stillness of the church, triggers a mechanical response, causing massive gears within the church to slowly begin turning, as if marking the passage of time. The scene is filled with tension, as the man's actions seem to have a deeper, almost supernatural connection to the events unfolding around him.
Scene 2

The scene transitions to a dense, rain-soaked forest, where the heavy downpour creates an intense, almost suffocating atmosphere. Two fairies flutter through the rain and as they fly deeper into the woods they come across an imp sitting alone by a tree, drenched and lonely.
The red fairy, drawn to his isolation, approaches him. The imp reaches out his hand toward the fairy, marking the beginning of an unexpected friendship. This quiet interaction highlights the connection between the characters, offering a brief moment of warmth amidst the cold and relentless rain.
Scene 3

The imp and the fairies are shown enjoying each other's company in the forest, a big shift from the imp's earlier loneliness and from the bleak atmosphere of the previous scene. They run and play together, laughing and bonding over their shared experiences. The imp discovers a jewel, plays with a frog he finds and dances with the fairies as he plays his flute. Their carefree enjoyment adds a sense of warmth and connection; in contrast to the previous scene, this one is filled with fun and happiness, reinforced by the familiar Lost Woods song from Zelda, which adds to the nostalgic and magical feel of the setting.
Scene 4

The scene transitions to the man we previously saw in the church, now walking alone in the woods. He carries a heavy bag with numerous masks attached to it and, with camping gear on his back, appears to be on a journey. As he walks, a strange noise from the forest catches his attention. Initially he seems calm, even a little cheerful, but as he listens more closely the sound begins to unsettle him. His expression shifts to caution and uncertainty as he turns around, searching for the source of the noise. The whispers remind him of the eerie voices he encountered earlier, deepening his unease. The tension builds as the scene ends, leading into the next, where the man sets up camp for the night, still troubled by the sounds.
Scene 5

The scene shifts to the man’s campsite, where he is still haunted by the whispers. This time the voices affect him deeply, causing him to lose consciousness and collapse to the ground. The imp then approaches the campfire, noticing the unconscious man and the mask nearby. Unlike the man, the imp doesn’t hear the whispers at first. As the man slowly regains consciousness, he sees that the mask has grown tentacles and that the campfire is now engulfing the area. As the whispers intensify, the imp begins to hear them too. Initially scared, his curiosity takes over and he picks up the mask. The moment he wears it, the voices stop and the mask takes control. The imp screams in agony as the mask overtakes him.

The animation ends abruptly, leaving an unsettling sense of uncertainty. The once joyful and carefree moments with the imp and fairies are now overshadowed by the terrifying transformation. As the imp screams in agony, the screen fades to darkness. The abrupt ending leaves the story open, leaving the audience to wonder what happens next.
Sound Design
(Key Techniques/Layers)

I started my project by composing the intro song using my MIDI keyboard and Native Instruments Kontakt in the DAW FL Studio. I selected the Jade Ethnic Orchestra library, which offers a huge collection of Asian ethnic instruments, to create a dark and atmospheric melody for the intro. After establishing the melody, I added a vocal-like instrument from Kontakt called Voice of Connie, playing a single note, and enhanced it with layered reverb and delay effects to give it a ghostly vocal quality. To build the piece further, I layered horns and other elements to match the mood of the visual scene.
Once the song was complete, I imported it into Reaper to synchronize it with the video. I carefully sliced and aligned the song to match key changes in the animation, so that when the music changed the visual transition happened at the same moment, helping the scene changes feel cohesive with the music. To finish, I applied a fade-out at the end of the scene, providing a smooth transition into the next part of the animation.

My next focus was designing the sound effects for the church, specifically the mechanical noises of the gears. To achieve this I set up three microphones: two positioned on either side to capture the stereo field and one in the centre to record the direct sound. This configuration allowed me to capture a fuller, more dynamic sound, and by manipulating the recordings from the side microphones and blending them with the centre mic I could achieve various sonic textures.
The microphones I used can emulate the frequency response of well-known microphones. For the side mics I selected the AKG D112 emulation for its excellent low-end response, which was crucial for capturing the weighty resonance of the objects I would be recording. For the centre mic I opted for a shotgun microphone emulation to capture a focused, directional sound.
To enhance the clarity of the recording, I built a processing chain. The first element was a noise gate, which removed unwanted background noise by cutting the signal whenever the input fell below a set threshold, ensuring a clean recording. Next I applied the Eventide Crystals plugin, a granular synthesis effect I use to create a darker, more atmospheric tone with subtle high-frequency artifacts, adding depth and texture to the sound. Finally, I incorporated Valhalla’s reverb plugin to enhance the overall atmosphere, making the gear noises feel as though they resonated within a large reverberant room.
After configuring the chain, I conducted microphone tests to ensure the levels were balanced and the desired effect was achieved. This careful setup and processing allowed me to produce a rich, immersive sound design for the scene.
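The gate stage in that chain can be sketched in a few lines of DSP code. This is a minimal illustrative gate, not the actual plugin used, and the threshold and timing values are assumptions:

```python
import numpy as np

def noise_gate(signal, threshold=0.05, attack=0.001, release=0.05, sr=44100):
    """Hard-knee noise gate: mutes the signal when it falls below the
    threshold, with smoothed opening/closing so the gating avoids clicks."""
    atk = np.exp(-1.0 / (attack * sr))    # fast smoothing when opening
    rel = np.exp(-1.0 / (release * sr))   # slower smoothing when closing
    out = np.empty_like(signal)
    gate = 0.0
    for i, x in enumerate(signal):
        target = 1.0 if abs(x) > threshold else 0.0
        coeff = atk if target > gate else rel
        gate = coeff * gate + (1.0 - coeff) * target
        out[i] = x * gate
    return out

# One second of low-level noise with a louder tone burst in the middle:
# the gate silences the quiet tails and keeps the burst.
np.random.seed(0)
sr = 44100
t = np.arange(sr) / sr
noise = 0.01 * np.random.randn(sr)
burst = 0.5 * np.sin(2 * np.pi * 440 * t) * ((t > 0.4) & (t < 0.6))
gated = noise_gate(noise + burst)
```

The smoothed gate value is what keeps the transitions click-free, mirroring why a real gate has attack and release controls rather than a hard on/off switch.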

I began recording sounds using heavy metal objects placed on a table. I started by experimenting with these items to discover the range of sounds they could produce, aiming to create effects such as gate-opening noises and the movement of heavy mechanical parts. By varying the force and speed of my movements, I was able to generate a diverse selection of sounds that aligned with the scene.
Once I had a range of sounds, I selected the most fitting ones and began recording. During this process, I adjusted the microphones’ polar patterns to optimize the recordings for post-production. I also ensured that the captured audio was clean and clear, allowing for flexibility during editing.
After recording, I refined the sounds in post-production by applying audio effects and enhancing specific characteristics to better match the scene. With the final set of sounds completed and synced to the video, I moved on to the next stage of my sound design project.

My next task was to design the sound for a scene featuring heavy rain. Taking advantage of a recent storm (Storm Darragh), I recorded the sound of rain and wind hitting my studio roof, as the unfinished ceiling provided a natural acoustic setting for capturing the raindrops.
I used the same microphones I had set up earlier, but this time each was routed through a different preamp to enhance the tonal character of the recording and give each capture its own colour. The centre mic went to an SSL VHD preamp with some harmonic drive; the left mic went to an SSL Super Analogue preamp with some EQ and slight compression; and the right mic went to a Camden 500 preamp with its "Mojo" control, which adds saturation similar to the SSL VHD preamp but more low-end focused. After verifying there were no phasing issues, I recorded several takes and selected the best one for the scene.
To synchronize the rain sounds with the video, I adjusted the volume to match the scene, i.e. louder for close-up shots and quieter for distant shots or transitions. I also processed the recordings to enhance their realism, adding reverb for spatial depth and creating a parallel chain routed through an LA-3A compressor.
The LA-3A is a solid-state opto-compressor that blends features of the LA-2A and the 1176. Like the LA-2A, it uses the T4 optocell for smooth, program-dependent compression and a slow release. However, instead of tubes, it has a solid-state design with a transformer, giving it a faster attack time than the LA-2A. This makes it punchier and better suited to transient-rich material while retaining a warm, natural character, bridging the gap between the vintage warmth of the LA-2A and the speed of the 1176.
With this knowledge in mind, I subtly compressed the dynamics of my rain sounds, making them fuller and more immersive while preserving their natural texture.
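As an illustration of the compression behaviour described, here is a minimal feed-forward compressor sketch. It is not a model of the LA-3A's optical circuit; the threshold, ratio and timing values are illustrative assumptions, chosen only to show the fast-attack / slow-release envelope idea:

```python
import numpy as np

def compress(signal, threshold_db=-20.0, ratio=4.0,
             attack=0.002, release=0.3, sr=44100):
    """Feed-forward compressor sketch: a fast-attack, slow-release envelope
    follower drives gain reduction above the threshold by the given ratio."""
    atk = np.exp(-1.0 / (attack * sr))
    rel = np.exp(-1.0 / (release * sr))
    thr = 10 ** (threshold_db / 20)       # threshold as linear amplitude
    env = 1e-9
    out = np.empty_like(signal)
    for i, x in enumerate(signal):
        level = abs(x)
        coeff = atk if level > env else rel   # attack up fast, release slowly
        env = coeff * env + (1 - coeff) * level
        # above threshold, pull the level back toward thr by the ratio
        gain = (thr / env) ** (1 - 1 / ratio) if env > thr else 1.0
        out[i] = x * gain
    return out

# A loud 220 Hz tone well above the threshold gets pulled down
sr = 44100
t = np.arange(sr // 2) / sr
loud = 0.9 * np.sin(2 * np.pi * 220 * t)
squashed = compress(loud, sr=sr)
```

The slow release is what keeps the gain steady between waveform peaks, which is the "smooth, program-dependent" character described above.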
For additional detail, I layered raindrop sounds into the church scene to simulate droplets falling in a large hall. By randomly placing and panning these sounds across the stereo field I achieved a realistic sense of space. I also EQ'd the rain's higher frequencies to darken its tone and blend it seamlessly into the scene. With these steps complete, the rain audio effectively enriched the atmosphere of the video.
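The random placement and panning across the stereo field can be sketched as follows. The synthetic droplet is a stand-in for the recorded raindrop samples, and the density, decay and equal-power pan law are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
sr = 44100
length = 3 * sr
stereo = np.zeros((length, 2))   # [samples, (left, right)]

def droplet(sr):
    """One synthetic droplet: a short decaying sine burst at a random pitch.
    (Stand-in for the actual recorded raindrop samples.)"""
    n = int(0.03 * sr)
    t = np.arange(n) / sr
    freq = rng.uniform(1500, 4000)
    return np.sin(2 * np.pi * freq * t) * np.exp(-t * 120)

# Scatter 60 droplets at random times and random stereo positions
for _ in range(60):
    d = droplet(sr)
    start = int(rng.integers(0, length - len(d)))
    pan = rng.uniform(0.0, 1.0)              # 0 = hard left, 1 = hard right
    l_gain = np.cos(pan * np.pi / 2)         # equal-power pan law
    r_gain = np.sin(pan * np.pi / 2)
    stereo[start:start + len(d), 0] += d * l_gain
    stereo[start:start + len(d), 1] += d * r_gain
```

The equal-power pan law keeps perceived loudness roughly constant as a droplet moves across the field, which is why the scattered drops read as a space rather than a volume wobble.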

My next task was to use my synthesis skills to create sounds for the fairies. I began by opening Logic Pro on my MacBook Pro and routing its audio through an audio interface to inputs 3 and 4 of my patchbay, which leads to my PC and FL Studio. I also used Google Chrome Remote Desktop to view my MacBook screen on my PC, allowing me to use Logic Pro alongside FL Studio. This workflow keeps me efficient by using only what I need from Logic Pro, specifically the ES1 synthesizer for subtractive synthesis.
I also used hardware and software that converts hand gestures and movements into MIDI messages. This is a technique I use to manipulate sound in a more natural, responsive and creative way, whether controlling modulation, LFOs, oscillators or any other parameter I assign, to form expressive sounds. For instance, I assigned an up-and-down hand movement to MIDI message 1 and a left-and-right hand movement to MIDI message 2, which I would later use to control my LFO rate and filter.
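Conceptually, the gesture-to-MIDI mapping reduces to scaling a normalized hand position into a Control Change message. This sketch is a hypothetical stand-in for the hand-tracking software's output, and the CC numbers chosen are assumptions:

```python
def gesture_to_cc(value, cc_number, channel=0):
    """Map a normalized gesture position (0.0-1.0) to a 3-byte MIDI
    Control Change message. A real setup would send these bytes to a
    virtual MIDI port read by the DAW."""
    value = min(max(value, 0.0), 1.0)       # clamp the tracked position
    data = int(round(value * 127))          # 7-bit controller value
    status = 0xB0 | (channel & 0x0F)        # 0xB0 = Control Change, ch. 1
    return bytes([status, cc_number & 0x7F, data])

# Up/down hand position mapped to CC 1 (mod wheel),
# left/right mapped to CC 74 (commonly filter cutoff)
msg_updown = gesture_to_cc(0.5, 1)      # hand at mid-height
msg_leftright = gesture_to_cc(1.0, 74)  # hand fully to the right
```

Whatever parameter the DAW assigns to those CC numbers (LFO rate, cutoff, etc.) then follows the hand in real time.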
I then began designing a high-frequency, fairy-like sound in Logic Pro's ES1 synthesizer. Starting with the main wave oscillator rather than the sub oscillator, I added resonance for more high-frequency artifacts. My envelope settings used a medium attack and decay with high sustain and a long release to achieve a smooth, ethereal tone. I added a chorus effect for texture and used a triangle-wave LFO, controlled by my hand gestures, to modulate the sound dynamically: up-and-down movements controlled the LFO while left-and-right movements adjusted the filter cutoff.
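The patch above can be approximated in a rough subtractive-synthesis sketch: an oscillator, an ADSR amplitude envelope, and a low-pass filter whose cutoff is swept by a triangle LFO (standing in for the hand-gesture control). The waveform, frequencies and envelope times are illustrative, not the exact ES1 settings:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr                                   # one second

# Oscillator: a high-pitched triangle wave for the fairy tone
freq = 1760.0
osc = 2 * np.abs(2 * ((t * freq) % 1) - 1) - 1

def adsr(n, sr, a=0.1, d=0.1, s=0.8, r=0.4):
    """Linear ADSR: medium attack/decay, high sustain, long release."""
    env = np.ones(n) * s
    na, nd, nr = int(a * sr), int(d * sr), int(r * sr)
    env[:na] = np.linspace(0, 1, na)
    env[na:na + nd] = np.linspace(1, s, nd)
    env[-nr:] *= np.linspace(1, 0, nr)
    return env

# Triangle LFO sweeping the filter cutoff between 2 kHz and 8 kHz
lfo = 2 * np.abs(2 * ((5 * t) % 1) - 1) - 1              # 5 Hz, range -1..1
cutoff = 5000 + 3000 * lfo

# One-pole low-pass whose coefficient tracks the modulated cutoff
out = np.zeros_like(osc)
y = 0.0
for i in range(len(osc)):
    alpha = 1 - np.exp(-2 * np.pi * cutoff[i] / sr)
    y += alpha * (osc[i] - y)
    out[i] = y
out *= adsr(len(out), sr)
```

In the real patch the LFO rate and cutoff were live-controlled by hand gestures; here they are fixed curves so the sketch stays self-contained.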
Once I achieved the desired sound, I identified the key that suited the fairies and recorded multiple takes. Post-processing involved adding reverb to soften harsh frequencies and delay to enhance the spatial effect. I finished by aligning the audio with the visuals, layering the sounds and adjusting volumes through automation to ensure they fit seamlessly into the scene.

My next task involved designing the sound for the dark rumbling noises in my film. I began by using a synthesizer with a noise oscillator to create the foundation of the sound. To make the noise fuller I added resonance and applied reverb which enhanced the sound to give it a more immersive and expansive environmental feel.
To achieve the desired dark, scary, and horror-like tone, I filtered out the higher frequencies and focused on boosting the lower frequencies. This was further enhanced by layering mid and lower frequencies, a technique I had learned for creating thicker kicks in certain genres. After the reverb, I applied an EQ to remove any remaining high frequencies, refining the darker tonal quality.
To add movement and dynamics, I created an automation clip for the filter on the noise oscillator. By adjusting the automation, I tailored the sound to fade in gradually and come to an abrupt stop, aligning with the visual cues. At the final stage I added a phaser effect to give the sound a supernatural, impactful feel.
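The automated, filtered-noise idea can be sketched as follows; the ramp length, cutoff range and cue point are assumptions standing in for the automation clip drawn in the DAW:

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 44100
n = 2 * sr
noise = rng.standard_normal(n)           # the noise oscillator

# Automation curve: ramp in over ~1.8 s, then stop dead at the cue point,
# mirroring the gradual fade-in and abrupt stop described above
cue = int(1.8 * sr)
ramp = np.minimum(np.arange(n) / cue, 1.0)
ramp[cue:] = 0.0

# Low-pass the noise so only the dark rumble remains; the cutoff itself
# follows the automation, sweeping from 80 Hz up to 400 Hz
cutoff = 80 + 320 * ramp
out = np.zeros(n)
y = 0.0
for i in range(n):
    alpha = 1 - np.exp(-2 * np.pi * cutoff[i] / sr)
    y += alpha * (noise[i] - y)
    out[i] = y * ramp[i]
```

Driving both the level and the cutoff from the same curve means the rumble brightens slightly as it swells, then cuts to silence exactly on the visual cue.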
Once the sound was finalized, I recorded the output and strategically placed it in various moments throughout the scene to enhance the visual impact creating a cohesive auditory experience.

My next task focused on creating sounds for the movements depicted in my visuals. I began by sourcing footstep noises recorded on various materials, ensuring they matched the specific environment and context of the scene. I sliced and edited the audio to fit and complement the visuals, fading sounds in or out to align with the scene’s progression.
To avoid repetition and preserve realism, I made sure to vary the sounds used so they never felt out of place. Where I had to reuse a sound, I took extra care to manipulate it, introducing subtle variations to differentiate each instance.
The movements in my visuals ranged from footsteps and jumping to carrying objects and interacting with the environment. I made sure my sound selection was appropriate and consistent with the visuals, and every sound I chose was carefully aligned and layered to match the timing of the on-screen action, helping to create a seamless auditory experience.

I often revisited areas and time-stretched or adjusted the formant of the sounds I had created or sourced to better suit the visuals, from the diamond-shining sound used in one scene to the gear noises in the church. Some parts of the film required me to perform a melody to fit the visuals, ranging from a short piano melody at the start to a remake of the Lost Woods song, a famous Zelda OST track. The use of familiar music not only enhanced the visuals but also provided a deeper emotional connection for the audience. Once I had formed the melodies, I processed them with reverb to create a sense of space and atmosphere, making the performances feel more immersive. I also adjusted volume levels in response to the visual elements, ensuring the audio dynamics matched the flow of each scene, and fine-tuned the performances in the MIDI note editor.

For the vocal elements in my film I used a microphone routed through an SSL Super Analogue preamp, applying some compression and EQ to shape the sound. I carefully scanned through the film to identify moments requiring vocalization, and for each of these I recorded multiple takes to capture variations that would best suit the visuals. For the mask moments I had planned to incorporate whispers, so I recorded several takes and layered them for a sense of depth. Each layer was panned to achieve a spatial effect, and I also adjusted the pitch and formant of each recording to add variation, ensuring every vocal had its own sonic character.
For a darker and more sinister sound effect, I lowered the octave of certain voicings which further enhanced the sound. I applied reverb to make them sound larger and more atmospheric. Additionally I used a pitch-shifting effect on the screams of the Imp mask scene to create a more unsettling sound.
I also added a granular echo effect by Soundtoys called Crystallizer, which introduced a phasing, warping quality to the voice, amplifying the intensity of the moment. Once all the vocal elements were recorded and processed, I placed them within the scenes, ensuring they aligned with the visuals.
Summary
This project allowed me to dive deeper into the world of sound design for visuals, which presented me with new creative and technical challenges. One of the tasks I enjoyed most was recording live sounds, such as during Storm Darragh, which brought heavy rain and wind, and creating sounds for the gears with metal objects, then processing the recordings with effects to take the dry sounds to a whole new level. I found the synthesis tasks quite challenging, as I was at times lost trying to think of a sound to match the visuals, such as the sounds for the fairies. I also struggled at first to find a use for new tools I intended to work with, such as the hand-gesture MIDI controller. Other challenges included aligning footsteps and character voicings.
Making a remix of the famous Saria's Song (the Lost Woods song) from the Zelda OST for the woods scene was both rewarding and complex, as it needed to feel nostalgic yet unique. Overall, this project pushed me to further test my abilities and creativity in field recording, synthesis and post-production to create an immersive, cohesive audio experience for the visuals.

For my non-linear audio-visual assignment I chose to challenge myself by creating a sound design element for an interactive logo. I wanted to design a sound which would be triggered when the Majora’s Mask logo (which I had previously worked with in my linear sound design project) is pressed.
To achieve this I decided to step out of my comfort zone by using Native Instruments FM8, a synthesizer plugin I hadn’t fully explored before. Having experience with FM synthesis using FL Studio’s Sytrus helped, but FM8 was a different beast, with additional routing capabilities and a more complex operator structure. My goal was to create a unique, engaging sound that would complement the Zelda theme of Majora's Mask while pushing my skills further in sound synthesis and design.

For my first operator, A, I chose a sine waveform and used the envelope settings to create a slow attack and a long release for the initial sound.
For the next operator, B, I routed it to the output just to hear the sound I was shaping. I adjusted the pitch by tweaking the ratio setting (for example, a 2:1 ratio gives an octave difference), tuning it by ear to find a sound similar to operator A but at a lower octave.
Once I had achieved the desired sound I would switch the waveform to a square wave and further adjust operator B’s envelope settings, giving it a shorter release and faster attack compared to operator A. I would also keep operator A routed to the output so I could hear how both sounds worked together.
Once I had formed the sounds for both operator A and operator B I would start FM synthesis by routing operator A into operator B by 30%, then route operator B to the output.
After experimenting with this FM synthesis setup I realized that FM8’s routing matrix was more sophisticated compared to FL Studio’s Sytrus. This is because FM8 allows for creating loops within the routing matrix which opens up more creative possibilities.
With this new understanding, I would route operator B back into operator A by a small amount, which gave the sound a resonant metallic growl. The routing would loop between operator A and operator B with operator B being the final destination where its envelope setting would cause the sound to swell when a note was played and held.
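The A/B loop can be approximated as two phase-modulated sine operators with a small feedback term; FM8, like most FM synths, implements phase modulation under the hood. The frequencies, the modulation index and the feedback amount below are illustrative, not the exact patch values:

```python
import numpy as np

sr = 44100
n = sr                          # one second
f_carrier = 220.0               # operator B (routed to the output)
ratio = 0.5                     # operator A an octave below, as tuned by ear
f_mod = f_carrier * ratio
index = 0.3 * 2 * np.pi         # roughly "30%" modulation depth (assumed scaling)
feedback = 0.1                  # small amount of B fed back into A

out = np.zeros(n)
prev_b = 0.0
pa = pb = 0.0
for i in range(n):
    # operator A, nudged by a little of B's previous output (the loop)
    a = np.sin(pa + feedback * prev_b)
    # operator B, modulated by A: basic two-operator FM
    b = np.sin(pb + index * a)
    out[i] = b
    prev_b = b
    pa += 2 * np.pi * f_mod / sr
    pb += 2 * np.pi * f_carrier / sr
```

The one-sample feedback path is what produces the resonant, metallic growl: the loop keeps re-injecting B's output into A's phase, enriching the spectrum as a held note swells.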
Next, I would work on operator C by routing it to the output and raising the octave using the ratio setting. I would set the waveform to a triangle wave.
In the routing matrix I would introduce operator X (which is the noise generator) by routing operator C into it. I would adjust the cutoff frequency of operator X so that only the lower frequencies of the generated noise would affect operator C, eliminating any harsh hissing sounds for a clearer effect.
Because the low rumble noises were similar to a wobble, this setup created a lo-fi vinyl-style effect on operator C. I would then lower the ratio to bring the sound down to a lower octave.
Moving on to operator E, I would route it to the output initially to hear the sound. I would set the ratio high to achieve a high-pitched sound. Another great feature of FM8 compared to FL Studio's Sytrus is the ability to draw custom envelope patterns by right-clicking points on the envelope graph.
With this knowledge in mind, I would create an envelope that makes operator E pulse when pressed. I would test this by routing operator E between operator C and operator X.
However, the sound was too high-pitched so I decided to route operator C into operator E with both routed to the output in parallel.
In this chain, the sound from operator C would first pass into operator E, then from operator E to the output. At the same time, operator C would be routed directly to the output, both signals being affected by operator X (the noise generator). This routing created a sound that resembled chirping birds or an alien spaceship.
After this experiment, I revisited the sound I originally created with operator A routed into operator B, looping back into operator A and then into operator B. I first experimented by introducing operator Z (the filter) into that routing chain but decided against it, as I preferred the low-end content coming from operator B. Instead I routed the alien chirping sound I had created from operator C, operator E, and operator X into operator Z (the filter). I also modified the envelope of operator C to create a pulsing sound.
I added a bit of operator A into operator C by accident but liked the resulting sound, so I kept it in the chain.

With my sound finally formed, I began the task of recording various performances. Playing at a lower octave or a higher octave produced different tonal results and holding a note for a longer duration also affected the sound.
To capture these variations I recorded three takes. After reviewing my recordings, I selected the best take and began further designing my sound. I started by applying EQ, cutting out some inaudible lower-end frequencies and removing a harsh resonance. Next, I duplicated the recording and made it unique so that any modifications wouldn’t affect the original. On this new take, I raised the pitch while lowering the formant, resulting in a more droning sound. I then added reverb to the sound, followed by a delay. To enhance it further, I applied a granular echo effect using Soundtoys’ Crystallizer, which introduced a subtle warping effect. I used this effect sparingly to maintain balance in the overall sound. Once this process was complete, I recorded the final formed sound and layered it with the original dry sound for added depth and texture. To conclude, I faded out the layered sounds at a point I felt was most suitable, creating a cohesive and polished result.
Summary
This non-linear project pushed me to explore FM synthesis in far more depth than before. Working in FM8 rather than my familiar Sytrus forced me to learn a more complex operator structure, and discovering features such as feedback loops in the routing matrix and drawable envelope patterns opened up creative possibilities I hadn't anticipated; some, like the accidental routing of operator A into operator C, ended up shaping the final sound. Designing a single interactive trigger sound also demanded a different mindset from linear work, as the sound had to be engaging on its own and respond to how it was played. Overall, the project strengthened my synthesis, recording and post-production skills while producing a sound that complements the Majora's Mask theme.