Who Is T Pain And What Is Auto Tune

Posted By admin On 04.05.20

Auto-Tune is one of the most widely used plug-ins in music production. This tutorial shows you the power within this amazing audio processor.

In the 22 years since its inception (1997), Auto-Tune has been the industry standard for tuning vocals, and for good reason. From my own personal experience, it's still my go-to tuning software, as it keeps up with my workflow and does exactly what I need it to do. There are many other tuning programs available, but none has proven better to me. In the past 20 years, I've never had a single negative comment, or even anyone notice that I've used tuning software, which is exactly as it should be. There are many people out there wanting to blame the tools for their work sounding robotic or unnatural. I may take some heat for saying so, but this doesn't have to be the case if you learn how to use your tools properly; pay attention to what the settings do. If something doesn't sound right, keep tweaking until it does. It's as simple as that. That said, there is a limit to how much tuning or editing you CAN do to a less-than-perfect performance. A common saying in the industry comes to mind: "You can't polish a turd." I could probably write an entire book on tuning vocals, but the intent here is to give you an inside look at the most commonly used parameters and show you how to use Auto-Tune more effectively.

The Correction Modes In Auto-Tune

There are two correction modes in Auto-Tune: Auto Mode, also known as "lazy mode," and Graphical Mode. Auto Mode runs in real time, analyzing the audio as it passes through and deciding on the fly how to correct it. Adjusting your settings can help it do a better job of tuning, but nothing replaces your own ears in deciding what needs to be tuned and what does not. The only time I personally use Auto Mode is when I have several songs that need to be mixed in a very short amount of time, and there simply is not enough time, or budget, to properly tune the tracks. Graphical Mode is a bit more involved, but yields MUCH better results! It works like this: you capture ("Track Pitch") the performance once into the plug-in, so it can be analyzed, displayed, and edited (the same is true of most other professional tuning software). Then you choose which notes are to be tuned, and how, and which are to be left alone. This is far superior to every single bit of audio being automatically adjusted. By the way, if what you are trying to achieve with Auto-Tune is the T-Pain or Cher effect, use Auto Mode with a very fast Retune Speed, and you can skip the rest of this article.

Auto Mode

Auto Mode is the default mode when opening Auto-Tune. It is designed to automatically analyze audio as it passes through, and to tune everything that passes through up or down to the nearest note. With that being said, there are some very important settings to pay attention to, as they will help you get much better results. By paying attention to the settings that follow, you can minimize Auto-Tune's attempts to tune things that should not be tuned, such as vibrato and notes that are intentionally slurred from one note to another.

Input Type: This basic setting helps Auto-Tune focus on specific frequency ranges based upon the type of content you are trying to tune. Always start here!

  • Soprano - For high or female voices
  • Alto/Tenor - For normal voices
  • Low Male - For Barry White
  • Instrument - For violins, violas, and other monophonic instruments
  • Bass Inst - For lower-pitched instruments (and yes, it is quite common to tune a bass guitar)

Scale: Setting the scale to the actual key of your song will help minimize errors in automatic tuning. Chromatic is the default scale, and probably the most popular, but setting the proper key of your song narrows the tuning choices from twelve notes down to the seven within a given key. For example, say you have a song in the key of C, which has no sharps or flats, and a singer sings a little sharp while trying to hit a C. If the note sung is closer to C#, a Chromatic scale will make Auto-Tune pull the note up to C#, resulting in an improperly tuned note. With the scale set to C Major in this same scenario, the singer would have to sing past C# before Auto-Tune would err and correct up to a D. This is another great starting point for Auto Mode usage. As you can see from the picture to the right, there are many other scales to choose from; Auto-Tune is used worldwide, and many alternate tunings and scales are available for those who need them.
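
The snapping logic described above can be sketched in a few lines of Python (my own illustration, not Antares code; the function names and scale table are assumptions):

```python
import math

A4 = 440.0  # reference tuning, Hz

def freq_to_midi(freq):
    """Convert a frequency in Hz to a (fractional) MIDI note number (69 = A4)."""
    return 69 + 12 * math.log2(freq / A4)

def midi_to_freq(midi):
    return A4 * 2 ** ((midi - 69) / 12)

# Pitch classes (0 = C) allowed in each scale; Chromatic allows all twelve.
SCALES = {
    "chromatic": set(range(12)),
    "C major": {0, 2, 4, 5, 7, 9, 11},
}

def snap(freq, scale="chromatic"):
    """Snap a detected frequency to the nearest allowed note in the scale."""
    midi = freq_to_midi(freq)
    allowed = SCALES[scale]
    # Search nearby integer notes and keep the closest one in the scale.
    candidates = [n for n in range(round(midi) - 6, round(midi) + 7)
                  if n % 12 in allowed]
    target = min(candidates, key=lambda n: abs(n - midi))
    return midi_to_freq(target)
```

With a sharp C at 272 Hz (C4 is about 261.6 Hz, C#4 about 277.2 Hz), the chromatic scale snaps it up to C#, while C Major correctly pulls it down to C, exactly the scenario described above.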

Retune Speed: This is one of the most important settings to pay attention to, as it sets how fast Auto-Tune will tune a note, similar to a glide or fade time from non-tuned to fully tuned processing. Setting a very fast time will remove any variations in pitch, but can yield some very unnatural results. But then again, this is a big part of creating the T-Pain/Cher effect. If this is what you are looking for, absolutely start here with a very fast time!
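
Conceptually, Retune Speed behaves like a glide time. Here is a toy one-pole model (my own sketch under that assumption, not the actual Antares algorithm) showing how a slow speed eases into the target pitch while a speed of zero snaps instantly:

```python
def retune(detected_hz, target_hz, speed_ms, n_frames, frame_ms=1.0):
    """Glide the output pitch from the detected value toward the target.
    speed_ms = 0 snaps immediately -- the T-Pain/Cher effect."""
    out, current = [], detected_hz
    for _ in range(n_frames):
        if speed_ms == 0:
            current = target_hz
        else:
            # Each frame, close a fixed fraction of the remaining distance.
            step = min(1.0, frame_ms / speed_ms)
            current += step * (target_hz - current)
        out.append(current)
    return out
```

A fast setting flattens every pitch variation into a step; a slow one preserves the natural movement, which is why the setting matters so much.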

Humanize: This allows sustained notes to have a slower retune speed than shorter-duration notes. Typically you would start with a setting of 0 while dialing in the Retune Speed, making sure all notes that need tuning are being tuned; then adjusting Humanize will keep sustained notes from sounding overly tuned, while the shorter notes are still tuned quickly enough.

Natural Vibrato: This is independent of your pitch settings and is used solely to tame the natural vibrato of a performance. Leaving it at its default setting of 0 will not affect the original vibrato; adjusting it will limit the amount of vibrato allowed. Once again, this is independent of the pitch controls.

Targeting Ignores Vibrato: Turning this on can help control what Auto-Tune tries to tune and what it ignores. If you have a track with a lot of vibrato, try turning it on and see if it helps. It is typically used on a lead vocal, allowing the natural vibrato to be ignored. Backing vocals typically shouldn't have as much vibrato, so for those, minimizing vibrato is usually preferred.

Target Notes Via MIDI: This is quite fun to play with, along with fast Retune Speeds. When engaged, Auto-Tune does nothing until a MIDI note arrives from a keyboard or MIDI track; it then tunes the audio to whatever MIDI notes are present. You can play in a melody from a MIDI device, and the track will be tuned to what you play.
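
The behavior can be sketched frame by frame (a minimal illustration; the function names and frame-based structure are my own assumptions, not the plug-in's internals): audio passes through untouched until a MIDI note is held, at which point each frame is retuned to that note's frequency.

```python
A4 = 440.0  # standard reference tuning, Hz

def note_freq(midi_note):
    """Frequency in Hz of a MIDI note number (69 = A4)."""
    return A4 * 2 ** ((midi_note - 69) / 12)

def midi_targeted_tune(detected_freqs, held_notes):
    """For each audio frame, retune to the held MIDI note;
    with no note held (None), the audio passes through untouched."""
    return [f if note is None else note_freq(note)
            for f, note in zip(detected_freqs, held_notes)]
```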

Graphic Mode

Graphic Mode is the mode you will use most often when quality is the primary concern. The advantage: Graphic Mode allows you to specify which notes are to be tuned and which are not, along with independent settings for each note, instead of the global settings applied to every note passing through in Auto Mode. Ready to get started?

Correction Mode to Graph: Pretty self-explanatory, slide or click the correction mode from Auto to Graph.

Options

Click on the options button next to correction mode to get here:

Enter buffer seconds: The default here is 240 seconds, which is 4 minutes of audio; a five-minute song would require 300 seconds. There's no need to set a really high buffer amount, as it uses much more of your system's RAM. The max setting of 14400 would yield 4 hours on one track! If any of you actually need that much, I'd like to know what project you are working on.
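
The arithmetic is simply song length in minutes times 60 (a trivial sketch; the function name is mine):

```python
def buffer_seconds(song_minutes):
    """Track Pitch buffer length (in seconds) needed for a song of the
    given length in minutes; the factory default of 240 covers 4 minutes."""
    return int(song_minutes * 60)
```

The 14400-second maximum works out to a 240-minute (4-hour) track.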

Default Retune Speeds: Having learned a bit about Retune Speed from Auto Mode, you can set the default retune speeds for the various tuning objects I will discuss shortly; this is where you set those defaults.

Track Pitch in Auto-Tune

The first thing we need to do is capture, or "Track Pitch", our audio track into Auto-Tune so that it can analyze it, draw a graphic representation of the audio pitches, and respond appropriately. This gives Auto-Tune the time not only to respond quickly, but also to ramp in tuning before a note needs to be tuned, which is impossible in Auto Mode, since Auto Mode runs only in real time. So to get started:

  • Click on the “Track Pitch” button: It will turn “Red” when enabled to track pitch.
  • Play the track: Play your song from beginning to end, or section by section. As long as all the information that needs to be tuned is tracked in, you can then proceed.
  • Turn off the “Track Pitch” button: Self-explanatory, but necessary to start tuning.

Decisions decisions!

You have two options now for tuning: you can draw or auto-create lines/curves, or notes. The difference is that notes are typically easier to work with, treating an area of audio as a single block, while a line or curve lets you shape the bends between specific notes with a little more intent.

The Tools

There are a few tools to start with here and I’ll describe them briefly from left to right.

  • The Line Tool is used to draw multi-segment lines on the pitch graph. It is typically used when you want to hold a straight pitch, or bend evenly from one pitch to another.
  • The Curve Tool is used when you would like to free-hand draw in pitch correction. I personally find this one quite difficult to use.
  • The Note Tool is used to draw notes. These are constrained to specific pitches and cannot vary off of them. I tend to use these more often than the line tool.
  • The Arrow Tool is the most commonly used tool, as it is how you select and edit existing lines or notes.
  • The Scissors Tool is used to cut existing lines or notes into separate pieces for individual editing. I typically use this when notes or lines have been generated automatically, and need to be separated. We’ll take a look at automatically generating lines or notes shortly.
  • The Magnifying Glass is used for zooming. Simply click and drag a box around what you would like to zoom into, and release to zoom.
  • The I-Beam Tool is used to select an area of time to edit within, or to generate data between. This is also a commonly used tool.
  • The Hand Tool is used to move the display. Click and hold on an area of the screen, and then drag the screen to an area you would like to see. I find the scrolling functions on apple mice work quite nicely for this same purpose, so this one doesn’t get used much.

Manual Editing/Drawing of Lines and Notes in Auto-Tune

In this example above, after capturing (Track Pitch) a vocal into Auto-Tune, I selected the Line Tool, and then clicked on “Snap to Note” which forces any segments of a line to snap to a specific note. Upon clicking the last segment, it must be double-clicked to end the line. After drawing this line, it is still selected, and retune speed can be set for this line independently of other lines. If it is not selected for some reason, using the Arrow Tool, click on the line to re-select it, and then you can adjust the retuning speed. The advantage of using the Line tool is that, as shown, the bend from one note to another can be drawn in as well.

In the example below, I selected the Note Tool and drew in some notes. I've found that drawing notes so they start and end where the audio is on key, or crossing through the desired pitch, gives the best results. The advantage of working with notes is that they can be moved from one pitch to another much more easily than a line.

Automatically creating Lines and Notes in Auto-Tune

Select an area: Using the I-Beam Tool, select an area in which you wish to generate notes or lines/curves. Personally, I like to select the duration of the entire song, and then fix the points that are not created to my satisfaction, rather than manually creating each event one by one.

Down at the bottom of the plug-in, next to "Track Pitch", are the options for "Make Curve" and "Make Notes", which is how we auto-create notes or line curves.

Make Curve: Clicking the Make Curve button will automatically draw a curved line, exactly matching the pitches captured by the Track Pitch function earlier. As you can see to the right, there are green lines overlapping the detected pitches, and anchor points on either side of each detected event. These anchor points can be moved independently by clicking and dragging each one up or down. This is particularly useful when a note starts in key but drifts sharp or flat as it is held out. You need to use the Arrow Tool to manipulate these points.

In the example below, an area was first selected using the I-Beam Tool, then using the Arrow Tool, the Curves were moved up together to another pitch, keeping all the bending between notes still intact. If only part of a curve or line is to be moved, the line can be separated into two segments by clicking at the desired split point using the Scissors Tool. Now the segments can be individually manipulated.

In the example below, the "Make Notes" button was pressed after selecting the same area as described above. The advantage of working this way is that the only things being tuned or manipulated are the notes being sustained; the bending in between notes is left alone. I find it particularly advantageous to modify these notes using the Arrow Tool. What I've found to give the best results is to drag the edges of each note to a crossing point, where the original audio is on, or crossing through, the correct pitch. By starting and stopping the tuning process on points that are already in tune, I get much more transparent tuning, and less "T-Pain"-sounding tuning.

Hopefully this is enough to get you started in Auto-Tuning, and has shed some light onto the mysterious world of tuning. Honestly, Auto-Tune has saved so many projects from bankruptcy, and allowed thousands of productions to keep amazing performances that, in the past, would have been performed over, and over, and over, and over again, until finally in key. Did anyone happen to think about the feeling, or emotion, left in a recording that an artist has just finished singing for the 150th time? Yes, it may finally be perfectly in tune, but is the emotion of the singer still representing the initial idea of the song, and convincing the listeners that this is a happy song? I think Elvis left the building about 145 takes back. My point is, if a take sounds and feels great, but has a few pitch problems here and there, it's worth tuning vs. beating the life out of a part until it is performed technically correctly.

Until next time, happy tuning!

Mihai BoloniCreative Director & Avid Expert Pro Tools instructor
Mihai has made it his life's work to help others in the audio industry. Mihai gained experience as an audio engineering instructor at Full Sail in the early 2000s and joined ProMedia in 2002. Since then, he has become one of Avid's leading, most experienced, and most in-demand instructors worldwide, with clients who come to him from all over the world. Corporate clients include MTV, PBS, NBC, Telemundo, The Voice's Chief Engineer Mike Bernard, the Atlanta Public School System, countless professors from leading universities, CNN, Turner Broadcasting, and top producers, artists, and engineers in leading studios and record labels. For over 20 years, Mihai has continued to work as an Audio Engineer, Record Producer, Songwriter (ASCAP), Dog Lover, Record Label Owner, and Expert Level AVID Certified Pro Tools Instructor.

Promedia Training offers Pro Tools Training, from beginner to advanced, including Avid Pro Tools Certification and is an official Avid Training Facility.
Now you have access to our most popular 101 course through our exclusive ONLINE PRO TOOLS TRAINING, with 20 learning videos, your own Pro Tools session, and a bonus drum loop library, as our top instructor goes through it step by step so you can follow along. Learn recording, editing, and mixing in Pro Tools and take your music production to the next level.

Perfect for singers, songwriters, musicians, producers, and aspiring engineers.

If you don't have PRO TOOLS you can download the FREE VERSION HERE

Auto-Tune — one of modern history’s most reviled inventions — was an act of mathematical genius.

The pitch correction software, which automatically calibrates out-of-tune singing to perfection, has been used on nearly every chart-topping album for the past 20 years. Along the way, it has been pilloried as the poster child of modern music’s mechanization. When Time Magazine declared it “one of the 50 worst inventions of the 20th century”, few came to its defense.

But often lost in this narrative is the story of the invention itself, and the soft-spoken savant who pioneered it. For inventor Andy Hildebrand, Auto-Tune was an incredibly complex product — the result of years of rigorous study, statistical computation, and the creation of algorithms previously deemed to be impossible.

Hildebrand’s invention has taken him on a crazy journey: He’s given up a lucrative career in oil. He’s changed the economics of the recording industry. He’s been sued by hip-hop artist T-Pain. And in the course of it all, he’s raised pertinent questions about what constitutes “real” music.

The Oil Engineer

Andy Hildebrand was, in his own words, “not a normal kid.”

A self-proclaimed bookworm, he was constantly derailed by life’s grand mysteries, and had trouble sitting still for prolonged periods of time. School was never an interest: when teachers grew weary of slapping him on the wrist with a ruler, they’d stick him in the back of the class, where he wouldn’t bother anybody. “That way,” he says, “I could just stare out of the window.”

After failing the first grade, Hildebrand's academic performance slowly began to improve. Toward the end of grade school, the young delinquent started pulling C's; in junior high, he made his first B; as a high school senior, he was scraping together occasional A's. Driven by a newfound passion for science, Hildebrand "decided to start working [his] ass off" — an endeavor that culminated in an electrical engineering PhD from the University of Illinois in 1976.

In the course of his graduate studies, Hildebrand excelled in his applications of linear estimation theory and signal processing. Upon graduating, he was plucked up by oil conglomerate Exxon, and tasked with using seismic data to pinpoint drill locations. He clarifies what this entailed:

“I was working in an area of geophysics where you emit sounds on the surface of the Earth (or in the ocean), listen to reverberations that come up, and, from that information, try to figure out what the shape of the subsurface is. It’s kind of like listening to a lightning bolt and trying to figure out what the shape of the clouds are. It’s a complex problem.”

Three years into Hildebrand’s work, Exxon ran into a major dilemma: the company was nearing the end of its seven-year construction timeline on an Alaskan pipeline; if they failed to get oil into the line in time, they’d lose their half-billion dollar tax write-off. Hildebrand was enlisted to fix the holdup — faulty seismic monitoring instrumentation — a task that required “a lot of high-end mathematics.” He succeeded.

“I realized that if I could save Exxon $500 million,” he recalls, “I could probably do something for myself and do pretty well.”

A subsurface map of one geologic strata, color coded by elevation, created on the Landmark Graphics workstation (the white lines represent oil fields); courtesy of Andy Hildebrand

So, in 1979, Hildebrand left Exxon, secured financing from a few prominent venture capitalists (DLJ Financial; Sevin Rosen), and, with a small team of partners, founded Landmark Graphics.

At the time, the geophysical industry had limited data to work off of. The techniques engineers used to map the Earth's subsurface resulted in two-dimensional maps that typically provided only one seismic line. With Hildebrand as its CTO, Landmark pioneered a workstation — an integrated software/hardware system — that could process and interpret thousands of lines of data, and create 3D seismic maps.

Landmark was a huge success. Before retiring in 1989, Hildebrand took the company through an IPO and a listing on NASDAQ; six years later, it was bought out by Halliburton for a reported $525 million.

“I retired wealthy forever (not really, my ex-wife later took care of that),” jokes Hildebrand. “And I decided to get back into music.”

From Oil to Music Software

An engineer by trade, Hildebrand had always been a musician at heart.

As a child, he was something of a classical flute virtuoso and, by 16, he was a “card-carrying studio musician” who played professionally. His undergraduate engineering degree had been funded by music scholarships and teaching flute lessons. Naturally, after leaving Landmark and the oil industry, Hildebrand decided to return to school to study composition more intensively.

While pursuing his studies at Rice University’s Shepherd School of Music, Hildebrand began composing with sampling synthesizers (machines that allow a musician to record notes from an instrument, then make them into digital samples that could be transposed on a keyboard). But he encountered a problem: when he attempted to make his own flute samples, he found the quality of the sounds to be ugly and unnatural.

“The sampling synthesizers sounded like shit: if you sustained a note, it would just repeat forever,” he harps. “And the problem was that the machines didn’t hold much data.”

Hildebrand, who’d “retired” just a few months earlier, decided to take matters into his own hands. First, he created a processing algorithm that greatly condensed the audio data, allowing for a smoother, more natural-sounding sustain and timbre. Then, he packaged this algorithm into a piece of software (called Infinity), and handed it out to composers.

A glimpse at Infinity's interface from an old handbook; courtesy of Andy Hildebrand

Infinity improved digitized orchestral sounds so dramatically that it uprooted Hollywood’s music production landscape: using the software, lone composers were able to accurately recreate film scores, and directors no longer had a need to hire entire orchestras.

“I bankrupted the Los Angeles Philharmonic,” Hildebrand chuckles. “They were out of the [sample recording] business for eight years.” (We were unable to verify this, but The Los Angeles Times does cite that the Philharmonic entered a 'financially bleak' period in the early 1990s).

Unfortunately, Hildebrand’s software was inherently self-defeating: companies sprouted up that processed sounds through Infinity, then sold them as pre-packaged soundbanks. “I sold 5 more copies, and that was it,” he says. “The market totally collapsed.”

But the inventor’s bug had taken hold of Hildebrand once more. In 1990, he formed his final company, Antares Audio Technology, with the goal of innovating the music industry’s next big piece of software. And that’s exactly what happened.

The Birth of Auto-Tune

A rendering of the Auto-Tune interface; via WikiHow

At a National Association of Music Merchants (NAMM) conference in 1995, Hildebrand sat down for lunch with a few friends and their wives. Out of the blue, he posed a rhetorical question — "What needs to be invented?" — and one of the women half-jokingly offered a response:

“Why don’t you make a box that will let me sing in tune?”

“I looked around the table and everyone was just kind of looking down at their lunch plates,” recalls Hildebrand, “so I thought, ‘Geez, that must be a lousy idea’, and we changed the topic.”


Hildebrand completely forgot he’d even had this conversation, and for the next six months, he worked on various other projects, none of which really took off. Then, one day, while mulling over ideas, the woman’s suggestion came back to him. “It just kind of clicked in my head,” he says, “and I realized her idea might not be too bad.”

What “clicked” for Hildebrand was that he could utilize some of the very same processing methods he’d used in the oil industry to build a pitch correction tool. Years later, he’d attempt to explain this on PBS’s NOVA network:

'Seismic data processing involves the manipulation of acoustic data in relation to a linear time varying, unknown system (the Earth model) for the purpose of determining and clarifying the influences involved to enhance geologic interpretation. Coincident (similar) technologies include correlation (statics determination), linear predictive coding (deconvolution), synthesis (forward modeling), formant analysis (spectral enhancement), and processing integrity to minimize artifacts. All of these technologies are shared amongst music and geophysical applications.'

At the time, no other pitch correction software existed. To inventors, it was considered the "holy grail": many had tried, and none had succeeded.

The major roadblock was that analyzing and correcting pitch in real time required processing a very large amount of sound wave data. Others who'd attempted to create such software had used a technique called feature extraction, where they'd identify a few key "variables" in the sound waves, then correlate them with the pitch. But this method was overly simplistic, and didn't consider the finer minutiae of the human voice. For instance, it didn't recognize diphthongs (when the human voice transitions from one vowel to another in a continuous glide), and, as a result, created false artifacts in the sound.

Hildebrand had a different idea.

As an oil engineer, when dealing with massive datasets, he’d employed autocorrelation (an attribute of signal processing) to examine not just key variables, but all of the data, to get much more reliable estimates. He realized that it could also be applied to music:

“When you’re processing pitch, you add wave cycles to go sharp, and subtract them when you go flat. With autocorrelation, you have a clearly identifiable event that tells you what the period of repetition for repeated peak values is. It’s never fooled by the changing waveform. It’s very elegant.”
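
The idea Hildebrand describes can be sketched in a few lines (a simplified, brute-force illustration of autocorrelation pitch detection, not his optimized implementation): correlate the signal with a delayed copy of itself at every candidate lag, and take the lag where they line up best as the pitch period.

```python
def detect_period(signal, sample_rate, fmin=70.0, fmax=1000.0):
    """Estimate the fundamental frequency by autocorrelation: find the lag
    at which the waveform best matches a delayed copy of itself."""
    lo = int(sample_rate / fmax)          # shortest lag to consider
    hi = int(sample_rate / fmin)          # longest lag to consider
    n = len(signal)
    best_lag, best_score = lo, float("-inf")
    for lag in range(lo, min(hi, n // 2) + 1):
        # Correlation of the signal with itself shifted by `lag` samples.
        score = sum(signal[i] * signal[i + lag] for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return sample_rate / best_lag          # estimated fundamental in Hz
```

Because every sample contributes to every lag's score, the estimate is robust to a changing waveform, which is exactly the property Hildebrand calls "elegant" above; the cost is the mountain of multiply-adds discussed next.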

While elegant, Hildebrand’s solution required an incredibly complex, almost savant application of signal processing and statistics. When we asked him to provide a simple explanation of what happens, computationally, when a voice signal enters his software, he opened his desk and pulled out thick stacks of folders, each stuffed with hundreds of pages of mathematical equations.

“In my mind it’s not very complex,” he says, sheepishly, “but I haven’t yet found anyone I can explain it to who understands it. I usually just say, ‘It’s magic.’”


The equations that do autocorrelation are computationally exhaustive: for every one point of autocorrelation (each line on the chart above, right), it might’ve been necessary for Hildebrand to do something like 500 summations of multiply-adds. Previously, other engineers in the music industry had thought it was impossible to use this method for pitch correction: “You needed as many points in autocorrelation as the range in pitch you were processing,” one early-1990s programmer told us. “If you wanted to go from a low E (70 hertz) all the way up to a soprano’s high C (1,000 hertz), you would’ve needed a supercomputer to do that.”

A supercomputer, or, as it turns out, Andy Hildebrand’s math skills.

Hildebrand realized he was limited by the technology, and instead of giving up, he found a way to work within it using math. “I realized that most of the arithmetic was redundant, and could be simplified,” he says. “My simplification changed a million multiply adds into just four. It was a trick — a mathematical trick.”
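
Hildebrand has never published his exact trick, but the flavor of this kind of redundancy removal can be shown with a standard example (my own illustration, not his method): a running autocorrelation that reuses the previous window's sum instead of recomputing it, turning O(window) multiplies per output into O(1).

```python
def sliding_autocorr(signal, lag, window):
    """Running autocorrelation at one lag over a sliding window.
    Naive recomputation costs `window` multiplies per step; the running
    update costs two: drop the oldest product, add the newest."""
    scores = []
    acc = sum(signal[i] * signal[i + lag] for i in range(window))
    scores.append(acc)
    for start in range(1, len(signal) - window - lag + 1):
        acc -= signal[start - 1] * signal[start - 1 + lag]              # oldest term out
        acc += signal[start + window - 1] * signal[start + window - 1 + lag]  # newest term in
        scores.append(acc)
    return scores
```

Most of the arithmetic in adjacent windows overlaps, so almost all of it is redundant — the same insight, in spirit, that let Hildebrand collapse a million multiply-adds into a handful.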

With that, Auto-Tune was born.

Auto-Tune’s Underground Beginnings

Hildebrand built the Auto-Tune program over the course of a few months in early 1996, on a specially-equipped Macintosh computer. He took the software to the National Association of Music Merchants conference, the same place where his friend’s wife had suggested the idea a year earlier. This time, it was received a bit differently.

“People were literally grabbing it out of my hands,” recalls Hildebrand. “It was instantly a massive hit.”

At the time, recording pitch-perfect vocal tracks was incredibly time-consuming for both music producers and artists. The standard practice was to do dozens, if not hundreds, of takes in a studio, then spend a few days splicing together the best bits from each take to create a uniformly in-tune track. When Auto-Tune was released, says Hildebrand, the product practically sold itself.

With the help of a small sales team, Hildebrand sold Auto-Tune (which also came in hardware form, as a rack effect) to every major studio in Los Angeles. The studios that adopted Auto-Tune thrived: they were able to get work done more quickly (doing just one vocal take, through the program, as opposed to dozens) — and as a result, took in more clients and lowered costs. Soon, studios had to integrate Auto-Tune just to compete and survive.

Images from Auto-Tune's patent

Once again, Hildebrand dethroned the traditional industry.

“One of my producer friends had been paid $60,000 to manually pitch-correct Cher’s songs,” he says. “He took her vocals, one phrase at a time, transferred them onto a synth as samples, then played it back to get her pitch right. I put him out of business overnight.”

For the first three years of its existence, Auto-Tune remained an “underground secret” of the recording industry. It was used subtly and unobtrusively to correct notes that were just slightly off-key, and producers were wary of revealing its use to the public. Hildebrand explains why:

“Studios weren’t going out and advertising, ‘Hey we got Auto-Tune!’ Back then, the public was wary of the idea of ‘fake’ or ‘affected’ music. They were critical of artists like Milli Vanilli [a pop group whose 1990 Grammy Award was rescinded after it was found out they’d lip-synced over someone else’s songs]. What they don’t understand is that the old method, doing hundreds of takes and splicing them together, was its own form of artificial pitch correction.”

This secrecy, however, was short-lived: Auto-Tune was about to have its coming out party.

The “Coming Out” of Auto-Tune

When Cher’s “Believe” hit shelves on October 22, 1998, music changed forever.

The album’s title track -- a pulsating, Euro-disco ballad with a soaring chorus -- featured a curiously roboticized vocal line, where it seemed as if Cher’s voice were shifting pitch instantaneously. Critics and listeners weren’t sure exactly what they were hearing. Unbeknownst to them, this was the start of something much bigger: for the first time, Auto-Tune had crept from the shadows.

In the process of designing Auto-Tune, Hildebrand had included a “dial” that controlled the speed at which pitch corrected itself. He explains:

“When a song is slower, like a ballad, the notes are long, and the pitch needs to shift slowly. For faster songs, the notes are short, and the pitch needs to change quickly. I built in a dial where you could adjust the speed from 1 (fastest) to 10 (slowest). Just for kicks, I put in a ‘zero’ setting, which changed the pitch the exact moment it received the signal. And what that created was the ‘Auto-Tune’ effect.”
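The behavior Hildebrand describes can be sketched as a glide toward the target pitch, with the speed setting controlling how gradually the correction is applied. This toy model (a hypothetical illustration, not Antares' implementation) shows why the "zero" setting produces the robotic snap:

```python
import math

def retune(detected_hz, target_hz, speed, dt=0.001):
    """One correction step: glide the detected pitch toward the target.
    A 'speed' of 0 snaps instantly (the robotic effect); larger values
    glide more slowly, preserving natural transitions between notes.
    A toy model, not Antares' actual algorithm."""
    if speed == 0:
        return target_hz  # instantaneous jump: the "Auto-Tune effect"
    # the time constant grows with the speed setting, so higher settings
    # pull the pitch in more gradually on each step
    alpha = 1.0 - math.exp(-dt / (0.01 * speed))
    return detected_hz + alpha * (target_hz - detected_hz)

# A voice singing 30 cents flat of A4 (440 Hz):
flat = 440.0 * 2 ** (-30 / 1200)
print(retune(flat, 440.0, speed=0))   # snaps straight to 440.0
print(retune(flat, 440.0, speed=10))  # moves only slightly toward 440
```

At speed 0 every sample lands exactly on pitch, so natural slides and vibrato become audible stair-steps between notes, which is the signature sound of "Believe."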

Before Cher, artists had used Auto-Tune only supplementally, to make minor corrections; the natural qualities of their voices were retained. But on the song “Believe”, Cher’s producers, Mark Taylor and Brian Rawling, made a decision to use Auto-Tune on the “zero” setting, intentionally modifying the singer’s voice to sound robotic.


Cher’s single sold 11 million copies worldwide, earned her a Grammy Award, and topped the charts in 23 countries. In the wake of this success, Hildebrand and his company, Antares Audio Technologies, marketed Auto-Tune as the “Cher Effect”. Many people in the music industry attributed the artist’s success to her use of Auto-Tune; soon everyone wanted to replicate it.

“Other singers and producers started looking at it, and saying ‘Hmm, we can do something like that and make some money too!’” says Hildebrand. “People were using it in all genres: pop, country, western, reggae, Bollywood. It was even used in an Islamic call to prayer.”

The secret of Auto-Tune was out — and its saga had just begun.

The T-Pain Debacle

In 2004, an unknown rapper with dreads and a penchant for top hats arrived on the Florida hip-hop scene. His name was Faheem Rashad Najm; he preferred “T-Pain.”

After recording a few “hot flows,” T-Pain was picked out of relative obscurity and signed to Akon’s record label, Konvict Muzik. Once discovered, he decided he’d rather sing than rap. He had a great singing voice, but in order to stand out, he needed a gimmick -- and somewhat fortuitously, he found just that. In a 2014 interview, he explains:

“I used to watch TV a lot [and] there was always this commercial on the channel I would watch. It was one of those collaborative CDs, like a ‘Various Artists’ CD, and there was this Jennifer Lopez song, ‘If You Had My Love.’ That was the first time I heard Auto-Tune. Ever since I heard that song — and I kept hearing and kept hearing it — on this commercial, I was like, ‘Man, I gotta find this thing.’”

T-Pain — who is capable of singing very well naturally — decided to use Auto-Tune to differentiate himself from other artists. “If I was going to sing, I didn’t want to sound like everybody else,” he later told The Seattle Times. “I wanted something to make me different [and] Auto-Tune was the one.” He contacted some “hacker” friends, found a copy of Auto-Tune floating around on the Internet, and downloaded it for free. Then, he says, “I just got right into it.”

An old Auto-Tune pamphlet; courtesy of Andy Hildebrand

Between 2005 and 2009, T-Pain became famous for his “signature” use of Auto-Tune, releasing three platinum records. He also earned a reputation as one of hip-hop’s most in-demand cameo artists. During that time, he appeared on some 50 chart-toppers, working with high-profile artists like Kanye West, Flo Rida, and Chris Brown. During one week in 2007, he was featured on four different Top 10 Billboard Hot 100 singles simultaneously. “Any time somebody wanted Auto-Tune, they called T-Pain,” T-Pain later told NPR.

His warbled, robotic application of Auto-Tune earned him a name. It also earned him a partnership with Hildebrand’s company, Antares Audio Technologies. For several years, the duo enjoyed a mutually beneficial relationship. In one instance, Hildebrand licensed his technology to T-Pain to create a mobile app with app development start-up Smule. Priced at $3, the app, “I Am T-Pain”, was downloaded 2 million times, earning all parties involved a few million dollars.

In the face of this success, T-Pain began to feel he was being used as “an advertising tool.”

“Music isn’t going to last forever,” he told Fast Company in 2011, “so you start thinking of other things to do. You broaden everything out, and you make sure your brand can stay what it is without having to depend on music. It’s making sure I have longevity.”

So, T-Pain did something unprecedented: He founded an LLC, then trademarked his own name. He split from Antares, joined with competing audio company iZotope, and created his own pitch correction brand, “The T-Pain Effect”. He released a slew of products bearing his name — everything from a “T-Pain Engine” (a software program that mimicked Auto-Tune) to a toy microphone that shouted, “Hey, this ya boy T-Pain!”

Then, he sued Auto-Tune.


T-Pain vs. Auto-Tune: click to read the full filed complaint

The lawsuit, filed on June 25, 2011, alleged that Antares (maker of Auto-Tune) had engaged in “unauthorized use of T-Pain’s name” on advertising material. Though the suit didn’t state an exact amount of damages sought, it did stipulate that the amount was “in excess of $1,000,000.”

Antares and Hildebrand instantly counter-sued. Eventually, the two parties settled the matter outside of court and signed a mutual non-disclosure agreement. “If you can’t buy candy from the candy store, you have to learn to make candy,” T-Pain later told a reporter. “It’s an all-out war.”

Of course, T-Pain did not succeed in his grand plan to put Auto-Tune out of business.

“We studied our data to see if he really affected us or not,” Hildebrand tells us. “Our sales neither went up nor down due to his involvement. He was remarkably ineffectual.”


For Auto-Tune, T-Pain was ultimately a non-factor. More pressing, says Hildebrand, was Apple, which acquired a competing product in the early 2000s:

“We forgot to protect our patent in Germany, and a German company, [Emagic], used our technology to create a similar program. Then Apple bought [Emagic] and integrated it into their Logic Pro software. We can’t sue them; it would put us out of business. They’re too big to sue.”

But according to Hildebrand, none of this matters much: Antares’ Auto-Tune still owns roughly 90% of the pitch correction market share, and everyone else is “down in the ditch”, fighting for the other 10%. Though Auto-Tune is a brand name, it has entered the rarefied stratum of products, like Photoshop, Kleenex, and Google, that have become catch-all verbs. Its ubiquitous presence in headlines (for better or worse) has earned it a spot as one of Ad Age’s “hottest brands in America.”

Yet, as popular as Auto-Tune is with its user base, it is widely detested by the listening public, largely as a result of T-Pain and his imitators over-saturating modern music with the effect.

Haters Gonna Hate

A few years ago, in a meeting, famed guitar-maker Paul Reed Smith turned toward Hildebrand and shook his head. “You know,” he said, disapprovingly, “you’ve completely destroyed Western music.”

He was not alone in this sentiment: as Auto-Tune became increasingly apparent in mainstream music, critics began to take a stand against it.

In 2009, alternative rock band Death Cab For Cutie launched an anti-Auto-Tune campaign. “We’re here to raise awareness about Auto-Tune abuse,” frontman Ben Gibbard announced on MTV. “It’s a digital manipulation, and we feel enough is enough.” This was shortly followed by Jay-Z’s “D.O.A. (Death of Auto-Tune)” — a Grammy-winning song that dissed the technology and called for an industry-wide ban. Average music listeners are no less vocal: combing the comments section of any Auto-Tuned YouTube video reveals (in proper YouTube form) dozens of virulent, hateful opinions on the technology.

Hildebrand at his Scotts Valley, California office

In his defense, Hildebrand harkens back to the history of recorded sound. “If you’re going to complain about Auto-Tune, complain about speakers too,” he says. “And synthesizers. And recording studios. Recording the human voice, in any capacity, is unnatural.”

What he really means to say is that the backlash doesn’t bother him much. For his years of work on Auto-Tune, Hildebrand has earned himself enough to retire happy — and with his patent expiring in two years, that day may soon come.

“I’m certainly not broke,” he admits. “But in the oil industry, there are billions of dollars floating around; in the music industry, this is it.”

He gestures toward the contents of his office: a desk scattered with equations, a few awkwardly placed awards, a small bookcase brimming with Auto-Tune pamphlets and signal processing textbooks. It’s a small, narrow space, lit by fluorescent ceiling bulbs and a pair of windows that overlook a parking lot. On a table sits a model ship, its sails perfectly calibrated.

“Sometimes, I’ll tell people, ‘I just built a car, I didn’t drive it down the wrong side of the freeway,'” he says, with a smile. “But haters will hate.”

A version of this article previously appeared on December 14, 2015.
