|
A filter is a component within an audio system that operates as a frequency-dependent amplifier, designed to pass, attenuate, or boost a range of frequencies. There are different types of filters.
|
|
An EQ is what we typically call multiple filters working together within one component.
|
|
The center frequency is the apex of a filter, for filters that are created by determining a center frequency.
|
|
The cut-off frequency is the frequency at which a filter takes effect, for filters that are created by defining a cut-off frequency.
|
|
The steepness of a filter (its slope) is defined in ±dB per octave, e.g., -6 dB/octave or -12 dB/octave.
|
|
High Q equals a steep slope; low Q equals a gradual slope. Q can also be referred to as resonance.
|
|
Function: A high-pass filter passes frequencies above the cut-off frequency without any change; frequencies below the cut-off frequency are attenuated.
Settings: Cut-off frequency; Slope or Q
|
|
Function: A low-pass filter passes frequencies below the cut-off frequency without any change; frequencies above the cut-off frequency are attenuated.
Settings: Cut-off frequency; Slope or Q
|
|
Function: Pass only a band of frequencies. Settings: Center frequency; Slope, Q, or bandwidth
|
|
Function: Attenuate a band of frequencies around a center frequency. Settings: Center frequency; Slope, Q, or bandwidth; Gain
|
|
Function: Amplify a band of frequencies around a center frequency. Settings: Center frequency; Slope, Q, or bandwidth; Gain
|
|
Function: Amplify or attenuate the frequencies below a cut-off frequency. Settings: Cut-off frequency; Slope or Q; Gain
|
|
Function: Amplify or attenuate the frequencies above a cut-off frequency. Settings: Cut-off frequency; Slope or Q; Gain
|
|
Many EQ components group the single-band components we just discussed into modules that let you control multiple filter bands, which can consist of different types of filters. Parametric EQ? A filter for which you can control the frequency, gain, and bandwidth.
|
A Note about Filters in Logic
|
The default “Channel EQ” does not correct the phase distortion that results from the filter algorithms; this may or may not be significant depending on the type of sound you are filtering. The “Linear Phase EQ” has the same functionality, but it corrects any phase distortion. This both requires more CPU resources and can cause latency, so it is best saved for a mastering EQ, not used on every channel.
|
+/- 10 dB is perceived as about twice or half as loud, respectively.
|
|
|
Normalization is a process in various domains that fixes some primary feature of the data at hand to be equal in measure to some parameter. For audio, we normalize the amplitude of an audio file by increasing the gain by a fixed amount for the entire file, such that the loudest amplitude in the audio file becomes the loudest possible amplitude that can be represented by the audio system (100%).
|
The standard dynamic processing functions are:
|
Noise Gate, Expander, Compressor, Limiter
|
|
With multiband compression you can compress your low, mid, and high frequencies separately. Noise gate purpose: mute an audio signal when it is below a certain threshold. Parameters: Threshold (the point below which gain reduction occurs); Ratio (the extent of the gain reduction); Knee (a softening of the threshold point); temporal controls (Attack, Release, Hold).
|
|
Threshold: the point above or below which the amplitude will be changed. Ratio: how much the amplitude is changed above or below the threshold.
|
|
Ratio is defined by two numbers, expressed as (change in amplitude of input) : (change in amplitude of output), where the output side is always 1; e.g., 10:1. With ratios of 100:1 or infinity:1 you effectively turn a compressor into a limiter.
|
|
After the threshold is passed, the gain is reduced over a certain amount of time, which is defined by the attack time.
|
|
Likewise, the gain reduction is not immediately stopped after the input signal moves below the threshold; the amount of time it takes for the gain reduction to return to 0 is defined by the release time.
|
|
Sometimes it is best to have a delay period before the release occurs; this is defined as the hold time.
|
|
Knee: a softening of the threshold point.
|
|
Make-up gain: gain added to the signal after the compression occurs so that the signal peaks are not too quiet.
|
What are Global Parameters?
|
Global Parameters are settings that affect the behavior of all tracks, not just one track. Where are the parameter settings that affect only one track? In Logic, Global Parameters are controlled by “global tracks,” which are accessible and configurable at the top of the arrange view.
|
Hiding and showing different global tracks?
|
Contextual-click in the global tracks region and check or uncheck each global track. How do you contextual-click? What is a contextual click?
|
|
You add markers at locations in your project that are significant, in order to help you navigate your project more efficiently. Markers could be used for: demarcating sections of a song (chorus/verse/intro/etc.); showing cue points for movie or film music; showing where different tracks start for mastering; reminders to yourself about what you did. Anything else? Let’s add some markers.
|
|
Signature is short for “time signature.” The time signature is a setting that alters the number of beats per measure and/or the number of divisions per beat. Let’s add some time signature changes.
|
|
Tempo is the rate at which you feel the beat, the pulse of the music. We already noted that you can change the tempo of the music in the transport. You can either change the tempo for the entire project or automate the tempo here. What is automation? Creating an envelope for any parameter over time.
|
|
Remember ADSR. Dictionary.com: “4. Geometry. a curve or surface tangent to each member of a set of curves or surfaces.” An automation envelope is lines connecting data points that are mapped to a number range appropriate for controlling a (sonic) parameter. What are we going to automate? Volume, panning, effect parameters.
|
|
Showing and hiding: “A” is for automation; there is also a graphical button. Entering automation with the mouse: add and move points with the pointer tool; draw lots of points with the pencil tool; to delete a point, click on an existing point with the pointer tool. Other automation tools: the automation select tool and the automation curve tool.
|
|
Each track in Logic has automation modes that you can select: Off, Read, Touch, Latch, Write, Trim. Let’s see how to set the automation mode on any track. These settings allow you to: ignore all automation data on the track (Off); play back the audio with the automation data affecting the playback (Read); or record automation data in real time from an external controller, such as the mouse (Touch, Latch, Write, Trim).
|
Automation Record Mode: Touch
|
“If a channel strip or an external (touch-sensitive) automation controller is touched, the existing track automation data of the active parameter is replaced by any controller movements—for as long as the fader or knob is touched. When you release the controller, the automation parameter returns to its original (recorded) value.” “Touch is the most useful mode for creating a mix, and is directly comparable to “riding the faders” on a hardware mixing console. It allows you to correct and improve the mix at any time, when automation is active.”
|
Automation Record Mode: Latch
|
“Latch mode basically works like Touch mode, but the current value replaces any existing automation data after releasing the fader or knob, when Logic Pro is in playback (or record) mode.” “To finish, or to end parameter editing, stop playback (or recording).”
|
Automation Record Mode: Write
|
“In Write mode, existing track automation data is erased as the playhead passes it.” “If you move any of the Mixer’s (or an external unit’s) controls, this movement is recorded; if you don’t, existing data is simply deleted as the playhead passes it.” It should be called ERASE. Avoid!
|
|
You can automate many parameters of many effects: add an effect as an insert on a track; show automation in the arrange view; look at the menu of automatable parameters; select the parameter and automate it. Showing multiple automation lanes at one time: show automation, then click on the arrow in the track header in order to fold out an additional lane (repeat as needed).
|
Why is automation important? – Volume
|
Real sounds change volume over time. If we have a loop, a sample, or anything that is stagnant in volume, we may want it to sneak in or out of our mix, so we need to be able to change volume over time. We can be super creative with volume automation: have similar volume automation on distinct samples in order to create an amplitude-envelope motive; take a realistic sound and make it crazy with volume automation in order to avoid realism. Anything else?

Many real sound sources move in space, so we will want to move sound sources. Sometimes we attend to sound sources in motion over sound sources in a fixed location, so we can draw attention to a sound by panning it. We can be creative here too: we can move things unrealistically; we can have panning motives; we can link certain panning motives to certain sounds (maybe high sounds move faster); we can link certain panning motives to certain functions of sound within a phrase or texture (so when a sound ends, it pans outward, for example).

What happens to the spectrum of a real-world sound source when the sound source increases or decreases in amplitude? Review: what is the spectrum of a sound? To imitate this effect you can manipulate the EQ (we will cover EQ later in the semester) or add distortion when a sound increases in volume.

What happens to the spectrum of a real-world sound source when the source moves away from you? Amplitude decreases (volume automation), and high frequencies decrease in amplitude more quickly than low frequencies (some type of filter or EQ).

Also, we will want to link structural moments (sectional divisions, climactic moments, calm moments) in our work with specific spectral changes. We don’t want to apply effects universally or randomly; we want them to change and vary over time. Examples: Kanye West, Love Lockdown; Muse, Take a Bow; Lady Gaga, Telephone.
|
|
Purpose: Compression is employed to decrease the dynamic range of an audio signal.
|
|
Limiters and compressors have basically the same functionality; the distinction is the purpose for which they are employed and designed (the Pro Tools compressor is called Compressor/Limiter). Purpose: Limiters are employed to define the maximum amplitude that can be achieved for an audio signal. They are often employed to avoid clipping, but CANNOT do so absolutely, because they cannot respond to signals instantaneously. Parameters: Threshold; Ratio (typically 100:1 or infinity:1); attack times as fast as possible (often undefined); release times vary, but are typically short.
|
|
De-esser: gain-reduction compression for the sibilant range of speech (approx. 8-10 kHz). Ducking: you can reduce the gain of one signal based on the amplitude of another signal (think about the radio). Maximizer: pump it up.
|
|
“A side chain is effectively an alternate input signal—usually routed into an effect—that is used to control an effect parameter. As an example, you could use a side-chained track containing a drum loop to act as the control signal for a gate inserted on a sustained pad track, creating a rhythmic gating effect of the pad sound.” (Logic Manual glossary)
|
What do you do with side chaining?
|
Two basic applications of side chaining. Ducking: accomplished by setting the side chain of a compressor on a track to the audio signal from another track. What this does is reduce the gain of the audio on the track with the compressor whenever the audio on the other track is above the threshold. Gating: you can also “turn on” the audio on a track based on the audio from another track. This is accomplished by setting a side chain to control a noise gate: when the audio on the other track is above the threshold for the noise gate, the audio on the track with the noise gate will play. A sketch of the ducking idea follows below.
|
|
With a blender you mix together different ingredients in different amounts. This is like an audio mixer: the ingredients are the inputs (or the audio on the channels in the DAW), and the amounts are the amplitudes. Unlike a mixer for food, audio mixers also deal with routing.
|
|
You determine which settings you can see in the mixer via the View menu. There is a channel strip for each track. There is an output track for each distinct output specified in the outputs of the channel strips. Then there is the master output. You can also write notes about each track, which you need to do for your projects to tell me information.
|
|
Groups allow you to control features of multiple tracks, such as volume, at the same time. The group area is located below the output area. Let’s see this in Logic.
|
Organizational Technique: Track Stacks (Main View)
|
Track Stacks help you organize multiple tracks into one folder. A stack can simply be an organizational tool: a Folder Stack. A stack can also be a tool to sum all of the audio tracks into a single submix: a Summing Stack.
|
Working in the Mixer View: Audio FX Inserts and Channel Settings
|
You can add effect processes (these are like guitar pedals, or auto-tune, or an echo, etc.) in the mixer view. You will be required to play with effects, but note that there is another course in MAT that teaches effects processing in detail; we will only discuss filters and dynamics processing in this class. There are also legacy channel settings that you can play with, which add effects designed to assist with certain types of goals, such as helping a vocal track sound great.
|
Working in the Mixer View: Selecting Tracks
|
To the right we see that three channel strips are selected, indicated by the fact that they are lighter gray. To select one channel strip, simply click it at the bottom near the name. To select additional channel strips, hold Shift and click them. Selecting additional channel strips allows you to change the volume, pan, input, output, sends, send volume, and other settings all at the same time. It also allows you to delete all the channels at the same time.
|
|
Working in the Sample Editor: Destructive vs. Non-Destructive Editing
|
Remember, to open the sample editor you double-click on a sound file in the main view. The actions that you do in the main view are called non-destructive editing because the edits do not change the audio file that is stored on the hard drive. The changes you make in the sample editor are called destructive edits because they do change the file on the hard drive. Logic Pro X really tries to protect you from destructive editing: you must turn on that option under Preferences > Advanced > Allow Destructive Editing.
|
Working in the Sample Editor: Change Gain and Normalize
|
We are only going to learn about two destructive edits that are available in the sample editor. They are both available in the Functions menu. Note that when using the built-in Apple Loops, Logic will prevent you from doing destructive edits. The “Change Gain…” option within the Functions menu allows you to increase or decrease the amplitude of the audio file on disk. This can be helpful for sound files that are just too quiet or too loud for how you want them to function in your project.
|
Mixing Basics: Mixing vs. Scaling
|
We will deal with changing the volume over time later in the semester. When two audio signals from different tracks occur at the same time, their simultaneous amplitudes add together. Remember that we can think of how a computer stores audio as numbers from -1 to 1, with 0 being no air pressure, 1 being the maximum pressure that can be represented, and -1 being the minimum pressure that can be represented. This means that when signals mix together (which can be many signals if we have many tracks), we have to make sure that clipping does not occur on the output track of our project. You are graded on this, so don’t let it happen.

Note that signals add together all the time without clipping. The example that I just showed involved two signals at the same frequency. Clipping is more likely to occur when the signals that are mixed together contain strong amplitude in the same frequency region or regions. In sum, when signals are mixed together, their simultaneous amplitudes (the samples) are added together.

Scaling a signal is multiplying the signal by a constant or smoothly changing value, in order to turn the volume of a signal up or down. In a car or on a phone/iPod, you scale the audio signal with the volume knob. What scales signals in Logic? The faders on the channel strips. Let’s see how this works.

Math reminders: remember that when you multiply any number by 0 you get 0, and when you multiply any number by 1 you get that number. Therefore, we can think of the range of the value that we scale audio signals by as going from 0 to 1, or higher if we want to increase the gain.

When two signals are mixed together, the output consists of both sounds. When a signal is scaled, the output consists of the original sound only, but at a different amplitude (unless the gain on the fader is set to unity, which is no change). In sum, scaling a signal involves multiplying the signal by a smoothly changing or constant value that is typically positive and between 0 and 1. In DAWs the 0-to-1 range is typically converted to decibels, displayed as negative infinity to 0 or slightly higher; in Logic it is negative infinity to +6 dB.
|
|
Sometimes more is less. Masking occurs when sounds have similar frequency components. When this is the case, only the louder of any two similar frequency components is audible to the listener; the quieter of the two is covered, i.e., masked. Therefore, having many audio files playing simultaneously is often counterproductive and leads to a messy, unclear mix. Of course, there are creative reasons to have a messy amount of density in any frequency region. Note that our ears are best able to comprehend multiple auditory events if they are within the range of the human voice, approximately 150 to 350 hertz.
|
Mixing Basics: Depth and Location
|
We can define depth as the perceived distance between the listener and the audio components in the mix. Even when you are not consciously thinking about it, you have an understanding of the distance of sounds from your person, even if they are in a recording. Mixes that sound “flat” lack depth. Depth is achieved via many different mixing practices. Stereo signals with little difference between the right and left channels will lack depth. Having different audio signals at different amplitudes increases the depth of the mix. If you have largely mono signals, panning is very important to a sense of depth. Having different amounts of reverb on different components of a mix increases our sense of depth. (What is reverb?) Having a distinct EQ on different sounds increases the perceived depth of the mix. (What is EQ?) The same basic features affect what we perceive as the location of the audio signal. Note that panning is more effective, meaning it results in more precise localization of the audio signal, when you pan mono, as opposed to stereo, audio files. What is really happening when we pan stereo files in Logic? How does Pro Tools handle panning stereo files?
|
|
Mastering a mix is an art of its own. Note that the term mastering generally applies to an entire album. The most significant parts of mastering are: adding EQ to the output track; adding dynamics processing to the output track; adding reverb to the output track. We will cover EQ and dynamics processing later in the semester. Let’s look at the Channel EQ in Logic now so that you can learn how to see what frequencies are present in your mix and how loud they are.
|
|
The primary type of software we will learn and use in this class is a DAW: a Digital Audio Workstation. A digital audio workstation is a hardware or software device designed primarily for recording, editing, manipulating, and reproducing digital audio. It provides multitrack MIDI and audio playback engines.
|
Primary Windows of a Modern DAW
|
Edit (Pro Tools), Main Window (Logic and Live), and Sequence (Digital Performer) are all basically the same thing. This is the primary window for editing, arranging, and sequencing the audio clips that will constitute your project.
|
|
Another window that automatically attaches itself to the Main window in Logic, but is common to all DAWs, is the Sample Editor, which is also referred to as a Sound File Editor in other programs. Note that by default the audio file editor doesn’t allow any changes; you must allow destructive editing in the audio preferences. To open the audio file editor, you double-click on a sound file in the Main view.
|
|
With the sample editor you can really zoom in and see a sound file up close. There are also a variety of editing functions available in the sample editor that are not available in the Main view. When you edit a file here, it is changed forever on the disk! All instances of the file in your project will be replaced with the edits that occur in the sample editor!
|
|
All DAWs have a mixer window, and thankfully that is what they all call it! To open the mixer window in Logic, either press Command + 2 or choose Window menu > Mixer. The mixer window has one channel strip for each track in your project. A channel strip is a vertical graphic display of the settings available in the mixer that affect all sound files on that track.
|
How do you know if a file is stereo or mono?
|
We can look in the project audio window. Alternatively, before we import a file, we can select the file in the Mac’s Finder and press Command + I. This opens the Mac’s file inspector, which will have the channel format information available for .wav and .aif files.
|
Then we can use the pointer tool to:
|
Select a file; move it; delete it; copy it (Command + C then Command + V to paste, or hold Option and drag the selected file to the location where you want the copy); select multiple files. The pointer tool can accomplish different functions depending on its location relative to a sound file in the Main view: when you are in the lower left or lower right corner of a sound file, the graphic for the pointer tool will change and you can trim the sound file by clicking and dragging; you can compress or expand the audio file by holding the Option key and dragging from the same position from which you can trim; you can loop the sound file by moving the pointer tool to the upper right corner of the sound file and dragging to the right. Now I’ll demonstrate the pointer tool in Logic.
|
|
Click a region to erase it.
|
|
|
|
This useful tool cuts regions where you click it. It is good to set this as the secondary tool. Let’s look at this tool in Logic.
|
|
Joins separate regions into a single region.
|
|
Click a region (or regions) to solo them during playback, which mutes the other regions.
|
|
Mute any regions by clicking them.
|
|
Click and drag over an area to make that area fill the entire Main window. We will navigate the Main window in more effective ways.
|
|
What is a click/pop? How do we avoid these? Note that in Logic’s preferences you can make the pointer tool take on the fade tool’s functions depending on its location relative to a sound file. Let’s look at Logic’s fade tool and the editing preferences (fade regions).
|
Setting Tempo and Controlling the Grid
|
To change the tempo, drag on the tempo indication in the transport bar. You can also change the meter, and you can change the format of the LCD’s appearance.
|
|
Snapping is when the start of a sound file has its movement limited, such that files begin at points along the grid.
|
|
Vibrations of air pressure: oscillating compressions (greater pressure) and rarefactions (less pressure).
|
|
Points farther from the center line represent perceptually louder sounds.
|
|
The more rapidly the air pressure oscillates between rarefactions and compressions, the higher humans perceive the pitch to be. The more slowly the air pressure oscillates between rarefactions and compressions, the lower humans perceive the pitch to be.
|
|
We measure frequency in cycles per second: 1 hertz is one cycle per second.
|
|
|
A perfectly periodic cycle produces a single frequency.
|
|
Real sounds consist of many frequencies that occur simultaneously at different amplitudes, so their waveforms look more complex.
|
How do we perceive frequency?
|
We do not perceive frequency linearly: 20 Hz to 40 Hz represents a greater difference in pitch (one octave) than 40 Hz to 60 Hz (a perfect fifth). All doublings of frequency are perceived equally.
|
|
Like frequency, amplitude is not perceived as changing equally when there are equal changes in air pressure: the perceived difference between .25 and .5 is greater than the perceived difference between .5 and .75. Similar to frequency, greater amplitudes require greater changes in order to be perceived as the same difference. Unlike frequency, the manner in which we commonly measure amplitude, decibels (dB), corrects for the non-linear manner in which we hear. Decibel ranges you will encounter: 0 to 135 dB (SPL); -90 to 0 dB; -90 to +24 dB. +10 or -10 dB is perceived as about twice as loud or twice as quiet, respectively.
|
|
Since sounds result from multiple sine tones at different amplitudes that change rapidly, we do not see the frequencies present in real sounds in amplitude-time representations. The spectrum of a sound, defined as the relative amplitudes of all the frequency components present in a sound at any given time, requires a different graphical representation. Spectrum really determines what a sound sounds like. Two instruments or two people realizing the same exact pitch at the same exact volume still sound different. Why? The fundamental frequency is the same; the amplitude of each singer is the same; what differs is the relative amplitudes of the overtones.
|
|
Another very important feature of how sounds evolve is how their amplitude changes over time. Rather than describe all the minute changes, these changes are averaged. What is an envelope? What is an amplitude envelope? Oversimplification: ADSR (Attack, Decay, Sustain, Release). What is a transient or attack transient?
|
How does a microphone work?
|
A microphone transduces (converts energy) acoustical pressure (waves in the air) to electrical energy (oscillations in voltage). These oscillations in electrical energy can be graphed in the same exact manner as acoustical energy. To store sound in an analog format, magnetic particles are displaced on tape by an incoming waveform in a theoretically continuous manner.
|
|
After we transduce acoustical energy to electrical energy, we can convert the analog signal to a digital format. Then we can store, reproduce, combine, and manipulate digital audio signals. The same graphs still apply; dB typically runs from -90 (or negative infinity) to 0, and amplitude resolution is measured in bits.
|
Digital Audio: Sampling Rate
|
A digital audio system encodes, stores, and reproduces audio by taking or recalling rapid snapshots of the amplitude of an audio signal. It can only do this every so often, not continuously: it is discrete. The speed at which the samples are recorded is called the sampling rate. The sampling rate is measured in hertz (the number of samples per second). Common sampling rates are 44.1 kHz, 48 kHz, 88.2 kHz, and 96 kHz. The sampling rate determines the highest frequency a digital audio system can correctly represent. The highest frequency that can possibly be represented is the sampling rate / 2; this frequency is called the Nyquist frequency. Since the upper range of hearing is about 20 kHz, 44.1 kHz is sufficient, because 44.1 kHz / 2 = 22,050 Hz. If 44.1 kHz is sufficient, why are there higher sampling rates? Spatial location detail. Remember: the sampling rate determines the highest frequency that can possibly be represented.

Since everything in a digital system is discrete, digital audio systems also cannot record all amplitudes, only certain amplitudes. These points can be represented by a horizontal grid. The bit depth determines the number of horizontal lines, i.e., the amplitudes that can be represented by the digital audio system. Common bit depths are 16, 24, and 32. These do not refer to the number of lines, but are exponents of a binary system: 16-bit = 2^16 = 65,536; 24-bit = 2^24 = 16,777,216. The bit depth determines the dynamic range of the digital audio system. Dynamic range is the difference between the quietest sound that can be represented (silence) and the loudest sound. The higher the bit depth, the wider the dynamic range: 16-bit audio has a dynamic range of 96 dB; 24-bit audio has a dynamic range of 144 dB.
|
|
Clipping occurs when the incoming signal exceeds the maximum amplitude that can be represented by the digital audio system. This causes audio artifacts that are generally not desired: audio that would be smooth becomes a flat line at the negative and positive poles of the digital representation. Aliasing occurs when the incoming signal contains frequencies that exceed the Nyquist frequency. High frequencies require faster sampling rates; if the frequency is too high, it will be digitally reproduced as a lower frequency. Aliasing is avoided in digital audio systems via built-in anti-aliasing filters. An anti-aliasing filter eliminates frequencies that approach the Nyquist frequency before they are digitally represented. Remember to associate sampling rate with frequency; likewise, aliasing should be associated with frequency.
|
Digital Audio: Quantization
|
This type of error occurs when a sample is shifted in amplitude from its initial position to a point that can be represented by the digital audio system. This is called quantization error, and it is only notable when it is audible.
|
Digital Audio: Sound Files
|
A digital audio sound file is a file stored in a digital format that represents a digital audio signal over time. There are three primary parameters that determine the quality of the sound file. Sampling rate: 44.1 kHz (CD-quality audio); 48 kHz (Digital Audio Tape and many movie formats); 88.2 kHz; 96 kHz. Bit depth: what does the bit depth determine? The dynamic range. Channels: Mono (a monophonic sound file stores and reproduces one stream of digital audio); Stereo (a stereophonic sound file stores and reproduces two streams); Quad (a quadraphonic sound file stores and reproduces four streams); 5.1 (a standard surround sound format that stores five streams of digital audio, plus a distinct stream that is only for low-frequency sounds). Standard file formats (uncompressed): .wav (Waveform Audio File Format); .aif or .aiff (Audio Interchange File Format). Compressed audio formats: FLAC (Free Lossless Audio Codec); .mp3 (MPEG-1/2 Audio Layer III); .aac (Advanced Audio Coding).
|
What is a signal? Basic Signal Flow
|
A signal is a function (the representation of the compressions and rarefactions of air) that describes the attributes of a phenomenon (acoustical energy). We are dealing with both analog and digital audio signals. What is signal flow? Signal flow is the path that a signal takes. The basic terms related to signal flow are: Source: the starting point of the signal, e.g., a microphone (analog audio) or a sound file (digital audio). Input: a point along a signal path that accepts a signal into any component of the system. Output: a point along a signal path that sends a signal out of any component of the system.
|
|
Meters provide a visual representation of the amplitude of an audio signal at a point within a signal path.
|
Frequency Response Curves
|
Nothing in an audio system is benign. This means that basically all components have some effect on an audio signal that passes through them. This effect is described by graphing the change in amplitude that occurs at any given frequency. This graph, which is in the amplitude-frequency domain, is called a frequency response curve. Audio components are imprecise enough that a change at any frequency of less than + or - 3 dB is considered perfect (flat).
|
|
Wet is processed; dry is not.
|
|
|
What is the Nyquist frequency for each of these sampling rates?
|
Bit depth: 16 bits (CD-quality audio): 2^16 = 65,536 levels. 24 bits: 2^24 = 16,777,216 levels. 32 bits: 2^32 = 4,294,967,296 levels.
|
What is phase distortion? What is phase?
|
Phase distortion occurs when certain frequencies shift in phase while others do not. Sometimes this is desirable…
|