Sound Design for Music
A complete beginner's guide to creating your own sounds
Last Updated: November 2023 | 3067 words (15–17 minute read)
We may earn commissions from purchases made through our links.
So you’ve been using presets all this time and now you want to start designing your own sounds for your music?
In this guide you’ll learn everything you need to know to get started with sound design for music production.
We’ll go over the absolute basics all the way up to a step-by-step guide to designing your own sounds.
Let’s get right into it…
So how does sound work, exactly? Simply put, sound is a form of energy that is created when air vibrates.
When any physical object starts to vibrate, it creates a disturbance in the surrounding air molecules, causing them to vibrate as well. These vibrations then travel through the air in waves. When these waves reach our ears, they then cause our eardrums to vibrate.
Our eardrums then send signals to our brain, which interprets them as “sound.” The pitch of a sound is determined by the speed of the vibrations (i.e. the wave’s “frequency”), while the volume or intensity is determined by the amplitude (i.e. the “size”) of the waves.
What is Sound Design in Music?
Sound design is the practice of manipulating these air molecule vibrations to “create” a specific signal that our brain will interpret as a particular “sound.”
So, for example, you can “design” a sound, from nothing, that your brain will interpret exactly as a “piano.” Or you can “design” a sound that is lush and full similar to an orchestral string section, but more “electronic” sounding.
Those are just a couple of examples, and they can be done in a variety of ways using a bunch of different tools we all have inside our DAWs (digital audio workstations).
You can design sounds to precisely mimic real-world instruments, or get much more experimental with it.
The world of sounds is your oyster.
— Related Article: Guide to Sound Selection for Beat Makers —
What do Sound Designers Do?
A sound designer is someone who understands how audio works well enough to create whatever sound they want from nothing more than an audio source (whether acoustic – i.e. physically created – or synthesized) and some sound manipulation tools.

Sound designers can work for companies or individuals, providing the sounds they need for their projects. They can also be independent entrepreneurs or music producers themselves.
There are a lot of different applications of sound design, music obviously being one. Other applications include film/TV, theatre, video games and much more. We’ll touch on these other areas of sound design later.
How to Design Sounds
Below, we’ll get into the nitty gritty of how to design sounds yourself. It may seem intimidating, but the important thing is to just take the concepts and try things out for yourself.
You’ll learn a lot more from doing than from just reading or watching a video.
As mentioned above, a sound’s pitch (how high or low a sound is) depends on its frequency (how fast a wave is “oscillating” – i.e. vibrating back and forth).
Frequency is measured in Hertz (Hz) which means cycles (i.e. “oscillations”) per second. Humans are able to hear frequencies that oscillate from 20 Hz up to 20,000 Hz (20 kHz).
Almost all sounds, however, are made up of many different frequencies combined. The only sound made up of a single frequency is the “sine wave” – you’ve probably heard it before – and it’s the basis of all sound synthesis.
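To make this concrete, here’s a rough Python sketch (the names `SAMPLE_RATE` and `sine_wave` are just for illustration) that generates one second of that pure, single-frequency sine wave at 440 Hz:

```python
import math

SAMPLE_RATE = 44_100  # samples per second (CD-quality audio)

def sine_wave(freq_hz, duration_s, amplitude=1.0):
    """Generate a pure tone: a single frequency with no overtones."""
    n_samples = int(SAMPLE_RATE * duration_s)
    return [
        amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
        for i in range(n_samples)
    ]

tone = sine_wave(440, 1.0)  # concert A: 440 oscillations per second
```

Change `freq_hz` and the pitch changes; change `amplitude` and the volume changes – exactly the frequency/amplitude relationship described earlier.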
Overtones and Harmonics
The strongest frequency within a sound is known as the “fundamental” frequency. That’s the “note” that you’re able to hum or sing along to. The rest of the frequencies within that sound are called the “overtones.”
The only sound that doesn’t have any overtones is, you guessed it, a sine wave (since it’s a single frequency).
There are two types of overtones – harmonic and inharmonic.
Harmonic overtones are ones that “go along with” the fundamental frequency (i.e. add to the “pitch” of a sound). They are exact multiples of that initial fundamental frequency.
Inharmonic overtones are not exact multiples of the fundamental – they don’t go along with the fundamental’s pitch. Instead, these overtones add to the tone, texture and “timbre” (i.e. overall character) of the sound.
Sounds themselves can be predominantly harmonic or inharmonic, depending on which overtones are most present. A plucked guitar string is a harmonic sound, while a crash cymbal on a drum set is an inharmonic one.
But even then, most sounds in nature – even drum sounds – are a mixture of harmonic and inharmonic overtones.
And when we’re creating sounds of our own, we can manipulate the harmonics of sound to achieve the results we’re after.
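As a rough illustration of how harmonic overtones stack on top of a fundamental (the function name is made up, and giving the nth harmonic an amplitude of 1/n is just one common recipe), here’s a Python sketch:

```python
import math

SAMPLE_RATE = 44_100

def harmonic_tone(fundamental_hz, n_harmonics, duration_s):
    """Sum sine waves at exact integer multiples of the fundamental.

    With the nth harmonic given an amplitude of 1/n, the tone
    approaches a bright, buzzy sawtooth as harmonics are added.
    """
    n_samples = int(SAMPLE_RATE * duration_s)
    out = []
    for i in range(n_samples):
        t = i / SAMPLE_RATE
        out.append(sum(
            math.sin(2 * math.pi * fundamental_hz * n * t) / n
            for n in range(1, n_harmonics + 1)
        ))
    return out

pure = harmonic_tone(110, 1, 0.25)   # fundamental only: a plain sine wave
rich = harmonic_tone(110, 12, 0.25)  # 12 harmonics: same pitch, richer timbre
```

Both tones have the same pitch (110 Hz), but the second has a completely different timbre – that’s the harmonics at work.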
Step 1 – Identify What Sound You’re Trying to Make
When you’re designing your own sounds, you can just experiment with things and see what you come up with.
But you could also try and identify exactly what type of sound you want to make. Are you trying to create something that sounds like a piano or guitar? A vocal chop? Do you want a synth pad that acts like a string section? Are you making a kick or snare drum?
If you try to identify what you’re trying to create, you can approach the sound design process more methodically – manipulating the audio in a way that gets you the right result.
Determining the Timbre
Most sounds that you hear will have multiple layers to them. They’re also often heavily processed to give them that loud, stand-out quality. But they all start with the basic building blocks of sound. That’s why it’s important you familiarize yourself with these building blocks.
When you’re designing sounds you can start from pure synthesis (using single “waveforms”) or from a recording of an “acoustic” sound (i.e. an instrument, voice, etc.).
Get to know how these various waveforms sound – sine waves, square waves, triangle waves and sawtooth waves. Once you memorize them, you’ll be able to hear them in the various synth sounds you come across in the wild.
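If you want to see how those four waveforms differ mathematically, here’s an illustrative Python sketch of a simple oscillator that can produce all of them (the function names are hypothetical):

```python
import math

def waveform_sample(shape, phase):
    """One sample of a basic waveform; phase runs from 0.0 to 1.0 per cycle."""
    if shape == "sine":
        return math.sin(2 * math.pi * phase)
    if shape == "square":
        return 1.0 if phase < 0.5 else -1.0   # jumps between two levels
    if shape == "sawtooth":
        return 2.0 * phase - 1.0              # ramps from -1 up to +1, then resets
    if shape == "triangle":
        return 4.0 * abs(phase - 0.5) - 1.0   # falls to -1 mid-cycle, rises back
    raise ValueError(f"unknown shape: {shape}")

def oscillator(shape, freq_hz, duration_s, sample_rate=44_100):
    """A basic oscillator: step the phase forward at the given frequency."""
    n_samples = int(sample_rate * duration_s)
    return [
        waveform_sample(shape, (freq_hz * i / sample_rate) % 1.0)
        for i in range(n_samples)
    ]
```

The jagged shapes (square, sawtooth) are rich in harmonics and sound buzzy; the smoother shapes (sine, triangle) sound rounder and purer.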
— Related Buyers Guide: The Best Studio Headphones Comparison —
Also get to know how different instruments sound – brass, keys, strings, woodwinds, drums/percussion, voice, etc. Can you tell the difference between a guitar, a sitar and a tumbi? Get familiar.
Once you can recognize the characteristics of these fundamental/basic sound sources, you’ll have a better idea of how other more complex sounds are put together when you first hear them. You’ll be able to recognize patterns and put things together in your mind.
Step 2 – Choose the Right Sound Design Tool
You have a few options available to you when you’re designing sounds. You can record audio and edit it, you can sample audio and manipulate it, or you can synthesize (i.e. “create”) sounds from scratch using analog circuitry or digital software (i.e. synthesizers).
Sampling & Recording
You can start with any source of audio you like. The easiest thing to do is use a microphone or recording device to capture a sound in physical reality.
Grab an instrument (or anything, really) and start using it to generate a sound. Capture it into an audio editor on your computer or into a sampler (hardware or software).
This approach is best for creating a sound that has its basis in real life – something you want to sound like a “natural” sound or instrument, but more unique.
Synthesis

When you have a hardware or software synthesizer, you start with single waveforms (sine/square/triangle/saw) and then layer and manipulate them to create something unique.
Every synthesis session starts with the all-important oscillator. An oscillator is a component in a synth that generates a waveform. A synth can have one or many oscillators that you’re able to use as the basis of your sound.
Other components of the synth will then let you manipulate those waveforms to come up with more and more complex sounds.
Synthesis is best when you are trying to create a sound that is otherworldly or more electronic in nature.
— Related Buyers Guide: The Best Synth VST Plugins Compared —
Hybrid Approaches

You’re not limited to using just one of these tools when you’re designing sounds.
Music and beat-making software has become powerful enough that you’re able to combine audio sampling with synthesis to create sounds using a hybrid approach.
Try out various approaches and you’ll be able to come up with some really interesting results.
Step 3 – Design and Shape the Sound
Now that you’ve got a sound source, it’s time to bend and twist and shape it into the exact sound you want.
If you’re using an audio editor or a sampler as the starting point for your sound design, you’ll now be able to manipulate the audio you recorded in various ways. You can slice it up and re-arrange it, shorten it, loop it, etc. You can also alter the frequencies or have the audio play in reverse.
There are limitless ways you can play with audio and create something extraordinary. Think of yourself like a butcher or surgeon, cutting up and patching up this audio file in new and unique ways.
You can also layer this audio file with other audio, and add special effects to it. Your audio editor or sampler will have instructions on how to do all these various things and more. RTFM.
Now that you’ve got your source sounds – whether sampled or synthesized – it’s time to start manipulating the harmonics and frequencies to turn them into unique and interesting audio experiences.
ADSR and AHD Envelopes
One of the ways you’re able to manipulate sounds is through what’s known as an “envelope” – i.e. how the sound plays back.
There are usually two types of envelopes you’ll find – ADSR (Attack, Decay, Sustain, Release) and AHD (Attack, Hold, Decay). ADSR is the most common.
Every sound has these four characteristics to the way it plays back:
- Attack – how long it takes the sound to reach its highest volume level
- Decay – how long it takes the sound to fall from that peak to its “normal” volume level
- Sustain/Hold – the “normal” level the sound holds at while a note/key is held down
- Release – how long it takes the sound to fade out once the key/note is let go
For example, a lot of synth pads are sounds that have a very long attack time – when you hit the note, it gradually rises in volume until it reaches a peak.
On the other hand, when you pluck a guitar string, that sound hits its peak almost immediately, but the resonance of the string after it’s been plucked takes a while to fade out completely (i.e. a slow or long “release”).
By manipulating this envelope – and changing the way a sound source “plays back” – you can completely alter the sound itself. Try messing around with an ADSR envelope on a software synth in your DAW to see how it affects various types of sounds.
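Here’s a simplified sketch of how an ADSR envelope could be computed in Python – a toy model, not how any particular synth implements it (it assumes the note is held longer than the attack and decay phases combined, and all times are in seconds):

```python
def adsr_gain(t, note_length, attack, decay, sustain_level, release):
    """Volume multiplier (0.0-1.0) at time t for a note held note_length seconds.

    Assumes note_length is longer than attack + decay.
    """
    if t < attack:                          # ramp up from silence to peak
        return t / attack
    if t < attack + decay:                  # fall from peak toward the sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain_level)
    if t < note_length:                     # hold steady while the key is down
        return sustain_level
    frac = (t - note_length) / release      # fade out after the key is released
    return max(0.0, sustain_level * (1.0 - frac))
```

A long `attack` gives you a pad-like swell, while a near-zero `attack` with a long `release` behaves like the plucked guitar string described above.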
Effects Processing

The next biggest thing that will impact your sounds is your use of effects processing.
Effects are devices (hardware or software) that are used to alter a sound source in various ways. The way you apply effects to your sounds will determine exactly how they are perceived by your listeners.
Every sound you hear in your favorite song has had insane amounts of effects added to it. That’s very obvious if you just listen to a bare recording of an instrument or one of the basic waveforms we mentioned earlier.
Without effects, they sound pretty dull and boring.
There are a lot of different types of effects you can use to achieve different things. Again it’s good to experiment with these different modules/devices to see exactly how they impact a particular sound.
- Filters and EQs – these are used to boost or cut different frequencies within a particular sound – can make things brighter or darker, muffled or sizzling, etc.
- Low Frequency Oscillators (LFOs) – used to add a sense of movement to a sound by automatically sweeping some parameter of the sound (volume, pitch, filter cutoff, etc.) back and forth
- Waveshapers – allow you to shape a waveform more precisely to get weird/unique textures
- Distortion – used to intentionally degrade, deform or destroy a waveform for effect – results in a harsher sound
- Saturation – used to enhance or generate more of the harmonics of a sound – results in a “richer” texture to the sound
- Compressors – used to control the peaks in amplitude (loudness) or to shape a sound – can make things punchy or louder, etc
- Reverb – used to add a sense of “space” and ambience to a sound – like a sound coming from a bathroom, etc.
- Delays – used to repeat a sound to generate an “echo” like effect
- Flangers, Chorus, Phasers – used to “modulate” a sound source (move things back and forth) creating a unique texture
- Resamplers and Bit Crushers – used to degrade the “sample rate” or the “bit depth” of a sound (how accurately a sound is being reproduced)
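As one hands-on example from the list above, here’s a toy bit crusher in Python (the function name and parameters are made up for illustration) that degrades both the bit depth and, crudely, the sample rate of a signal:

```python
def bit_crush(samples, bit_depth=4, downsample=4):
    """Degrade a signal (values in -1.0..1.0) in two ways at once:

    - bit depth: snap each sample to one of 2**bit_depth amplitude levels
    - sample rate: hold every Nth sample, discarding the detail in between
    """
    half_levels = (2 ** bit_depth) / 2
    out = []
    held = 0.0
    for i, sample in enumerate(samples):
        if i % downsample == 0:  # crude resampling: sample-and-hold
            held = round(sample * half_levels) / half_levels  # quantize amplitude
        out.append(held)
    return out

crushed = bit_crush([0.0, 0.3, 0.6, 0.9, 1.0, 0.2], bit_depth=2, downsample=2)
```

Notice how the smooth input gets flattened into a blocky staircase – that lost detail is exactly the gritty, lo-fi character bit crushers are known for.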
It’s important to just play around with these various effects on different types of sounds so you get an idea of exactly what each one does.
That’ll make it easier to choose what’s best for the particular type of sound you’re trying to create.
Mixing, Blending & Layering
We mentioned the idea of layering sound before, but it’s not something you should brush off.
Layering is HUGELY important in sound design. If you want to create sounds like your favorite artists, then you’ll need to get good at layering and blending things together.
The idea here is to choose different sound layers that aren’t exactly alike – they should add to each other, instead.
If you layer two of the same sine waves on top of each other, for example, all that does is make the sound louder. It doesn’t change the actual character or texture of the sound.
What you’re trying to do with layering is to alter the sound itself.
So choosing layers that have different timbres and textures is a much better approach. It will take a lot of trial and error to get good at this, but it’s an essential skill to have. It’s also why it’s important to be able to tell the difference between various timbres of sounds.
Finally, mixing will become a big part of the layering process. What is the ideal mix of all the layers in your sound? How loud should layer A be compared to layers B and C? Should you compress all your layers together to gel them? These are all decisions you’ll make when you’re working with various layers in sound design.
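Here’s a simple sketch of that mixing decision in code – each layer gets its own gain, and the sum is scaled back only if it would clip (a naive approach; real mixing also involves EQ, compression and more):

```python
def mix_layers(layers, gains):
    """Sum several lists of samples, each scaled by its own gain,
    then normalize so the combined peak can't clip past 1.0."""
    length = min(len(layer) for layer in layers)
    mixed = [
        sum(gain * layer[i] for layer, gain in zip(layers, gains))
        for i in range(length)
    ]
    peak = max(abs(s) for s in mixed)
    if peak > 1.0:  # only scale down if the sum would clip
        mixed = [s / peak for s in mixed]
    return mixed
```

Adjusting the `gains` list is the “how loud should layer A be compared to layers B and C” decision described above.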
Experimentation is Key
What we’ve gone over in this article is the knowledge you need to get started with sound design. There are tutorials out there you can use to learn how to design specific sounds, but we thought this was a better approach.
If you understand these basics, and apply them to some hardcore experimentation, you’ll become a much better sound designer than someone who just knows how to create a handful of different sounds.
Get into your DAW or hardware of choice and just start messing around. Now that you know the various parts of the sound design process, you’ll know what all those knobs on your gear are for.
Mess around. Break shit. Just experiment. Have fun and treat it like play.
Use all the tools you have and see what happens. The more you do this, the more you’ll be able to do things with intention and methodically when it comes time to do it for your next song or beat.
— Related Article: How to Make Your Own Beats —
Other Uses/Applications of Sound Design
There’s more to sound design than just music, even though that is our primary purpose here at Deviant Noise.
The art of sound design reaches beyond into various other creative fields as well:
- Film & Television – recreating realistic sounds and dialogue in movies and shows
- Advertising & Media – using interesting sounds in commercials or podcasts
- Theatre – recreating realistic sounds in a live environment
- Video Games – sound effects, music, dialogue during gameplay
- Technology – sounds in hardware, apps and programs
The basic tools and techniques are the same across these disciplines, although some areas have much more in-depth processes to create their sounds.
You can use the principles in this article to delve deeper into sound design in any of these fields, if you’re interested.
Frequently Asked Questions
Is sound design a good career?

Yes, becoming a sound designer can be a good career in several different fields, including film/TV, advertising and video game production. Sound design for music is a bit more difficult to turn into a career, but it is still possible.

Is sound design important?

Yes, sound design is a very important skill and task. Using the best possible sounds in your projects is important to conveying the feeling and emotion you’re hoping to get across. Whether you need a realistic sound or an otherworldly sound, good sound design can help you achieve the desired effect. A badly designed sound, on the other hand, can take away from the experience.

Is sound design hard to learn?

Sound design can be a difficult skill to really master. There are a lot of different parts to the sound design process, and it takes a lot of experimentation to really begin to understand how sound works. Once you do, however, you’re able to design any sound you can possibly think of.
Learning how to design sounds can be intimidating – especially with all the terminology and jargon that gets thrown around. But don’t let that stop you.
You’ll get better and better the more you try things out. Let your imagination and creativity run wild here – you never know what you might come up with.
All that’s left to do now is to roll up your sleeves and start messing around with the tools you already have. Everything I talked about in this guide is available in all major music making software options out there.
I hope this beginner’s guide to sound design helped you understand the basics of how everything works. There are a lot of rabbit holes you can jump down to get more details on all of the tools and techniques mentioned.
But I recommend you just go out, play around a little and start to design your own sounds.
Related Music Production Articles
- How to Arrange Music
- Review of Magix Music Maker Software
- Splice Review
- Studio Monitor Placement Guide