We all kind of understand wavetables, I think. We tend to use them in place of normal oscillators within a subtractive synth, and like normal oscillators, we know they can produce different waveshapes for different base sounds which we then alter with filters and effects. We can choose the shapes we use, so instead of the usual sine, triangle, saw, and square, we can use basically any shape we want. And instead of switching waveforms, we can morph between them. It’s kinda like using a waveshaper, but with much more complex results. But… do we really understand wavetables?
First, it helps to know that the concept of a wavetable has changed significantly since its introduction in the late 1970s. Initially it was just a table of single-cycle waveforms that would be played on loop at a set speed to produce a note. This is more or less what's referred to as table lookup synthesis, and is similar to wave sequencing, where a sound is composed of multiple single-cycle waves strung together (not to be confused with the Korg Wavestation's idea of wave sequencing). Changing the waveform was a harsh, instant switch, and modulating between waveforms generally required that adjacent waves not differ much, so as to lessen the stepping involved. This is more or less how most wavetables work even today: just a single line of waves, all independent. It's most commonly seen in software like Serum, Vital, and Massive, and was how even the first PPG synths worked.
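To make that original table-lookup idea concrete, here's a toy oscillator sketch in Python. The names and sizes are my own illustration, not any particular synth's: one stored cycle, looped at a frequency-dependent rate, with no interpolation at all.

```python
def make_saw(size=256):
    # One cycle of a rising sawtooth, stored as a lookup table.
    return [2.0 * i / size - 1.0 for i in range(size)]

def table_osc(table, freq, sr=48000, n=480):
    # Classic table-lookup oscillator: loop one stored cycle,
    # advancing the read phase by a frequency-dependent increment.
    size = len(table)
    inc = freq * size / sr        # table samples to advance per output sample
    phase = 0.0
    out = []
    for _ in range(n):
        out.append(table[int(phase) % size])  # truncating lookup, no interpolation
        phase += inc
    return out

samples = table_osc(make_saw(), freq=440.0)
```

The truncating lookup is exactly why early hardware sounded gritty at low table sizes: the read position snaps to the nearest stored sample.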
Then interpolation came along. Interpolation allows for the mathematical calculation of waves as they transition from one to the next. The simplest method is the crossfade. This is what synths like the later PPGs, Plaits, and newer Waldorfs like the Iridium use, and it's formally called multiple wavetable synthesis. There are other interpolation algorithms as well, including FFT/spectral morphing; phase interpolation, which can be "glitched" to work similarly to quantised/stepped crossfading (this is what the E350/352/330/370 use); and, in the case of more unique wavetables such as those used in scanned synthesis, interpolation calculated in real time based on input parameters. Interpolation allowed for smooth transitions between even completely different frames, which enables everything from complete sound morphing to dynamics control and pseudo-filtering similar to pulsar synthesis, all through a single wavetable.
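The crossfade case is simple enough to sketch. A minimal linear version might look like this (the frame layout and names are my own, assuming all frames are the same length):

```python
def crossfade_frames(frames, position):
    # Linear crossfade between adjacent wavetable frames.
    # `position` in [0, len(frames)-1] selects where in the table we
    # are; fractional positions blend the two neighbouring waves.
    lo = int(position)
    hi = min(lo + 1, len(frames) - 1)
    t = position - lo
    return [(1.0 - t) * a + t * b for a, b in zip(frames[lo], frames[hi])]

# Two toy 4-sample frames: a ramp and a square.
ramp = [-1.0, -0.5, 0.5, 1.0]
square = [-1.0, -1.0, 1.0, 1.0]
halfway = crossfade_frames([ramp, square], 0.5)  # the 50/50 blend of the two shapes
```

Sweeping `position` over time is what produces the characteristic wavetable "morph"; the stepped variants mentioned above simply quantise `position` before the blend.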
It’s worth noting at this point that there are multiple axes a wavetable can exist on and interpolate between. One-dimensional tables are what you get on the Iridium, Vital, Serum, etc., where it’s just a string of waves. Two-dimensional tables use an XY grid, which is what Synthesis Technology tends to use (with Z being a straight scroll through all the waves); similarly, the 4ms SWN uses a sphere with an XY grid wrapped around it. Three-dimensional tables use an XYZ cube, as in Plaits. I’ve yet to see more than three dimensions, however.
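For the 2-D case, morphing along both axes at once is commonly done with bilinear interpolation: blend along X, then blend those blends along Y. A sketch, assuming a row-major grid of equal-length frames (again, my own naming, not any specific module's implementation):

```python
def lerp_frames(a, b, t):
    # Linear blend between two equal-length frames.
    return [(1.0 - t) * p + t * q for p, q in zip(a, b)]

def bilinear_morph(grid, x, y):
    # grid[row][col] is one frame; x picks a column position, y a row
    # position, and fractional parts blend the four nearest frames.
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(grid[0]) - 1)
    y1 = min(y0 + 1, len(grid) - 1)
    tx, ty = x - x0, y - y0
    top = lerp_frames(grid[y0][x0], grid[y0][x1], tx)
    bottom = lerp_frames(grid[y1][x0], grid[y1][x1], tx)
    return lerp_frames(top, bottom, ty)
```

A 3-D cube like Plaits' just adds one more blend along Z (trilinear interpolation).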
There’s also a variety of methods for creating wavetables. The most common is likely sample-derived wavetables, where you feed a sample to a wavetable interpreter, which splits the sample up in various ways to create frames that can then optionally be interpolated and morphed through. There’s also the option to draw waveforms as keyframes and have the interpreter generate the interpolations between them. Yet others are generated and morphed through in real time based on a parametric algorithm, as with scanned synthesis, which uses a model of a struck string to determine the morphing, with settable “hammer” shapes (similar to exciters in modal synthesis) determining the actual waveforms generated.
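The sample-derived approach can be caricatured in a few lines. Real interpreters pitch-track and window each slice so frames loop cleanly; this sketch (sizes arbitrary, assuming the sample is at least one frame long) just chops evenly spaced windows:

```python
def sample_to_wavetable(sample, frame_size=256, num_frames=8):
    # Crude sample-derived wavetable: take num_frames evenly spaced
    # frame_size windows out of a longer sample to use as frames.
    stride = max(1, (len(sample) - frame_size) // max(1, num_frames - 1))
    frames = []
    for i in range(num_frames):
        start = min(i * stride, len(sample) - frame_size)
        frames.append(sample[start:start + frame_size])
    return frames
```

Feed the resulting frames to a crossfading oscillator and sweeping through them replays a smeared, morphable version of the source sound's evolution.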
As an interesting note, due to the generally single-cycle nature of wavetables, there are other synthesis methods that are related to wavetables but work very differently. Most interesting is waveguide synthesis, likely better known through Karplus-Strong and modal synthesis, both of which make use of delay lines and filters to emulate acoustic propagation through various materials. The delay line runs at audio rate, and the filters remove frequencies over time, often per delay iteration in a feedback loop, to settle on parameter-defined modes. While the actual method is different, the end result is similar to a wavetable with a smoothly varying shape.
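Here's a minimal Karplus-Strong pluck to show the delay-line-plus-filter idea; this is the textbook algorithm with a simplified decay, not any particular module's implementation:

```python
import random

def karplus_strong(freq, sr=48000, n=4800, decay=0.996):
    # A delay line seeded with noise; each sample is fed back through
    # a two-point average (a crude lowpass) so high frequencies die
    # off first, leaving the pitch set by the delay length.
    random.seed(1)                       # deterministic noise for the example
    size = int(sr / freq)                # delay length in samples sets the pitch
    line = [random.uniform(-1.0, 1.0) for _ in range(size)]
    out = []
    for i in range(n):
        s = line[i % size]
        nxt = line[(i + 1) % size]
        line[i % size] = decay * 0.5 * (s + nxt)   # filtered feedback
        out.append(s)
    return out

pluck = karplus_strong(220.0, n=2000)
```

The contents of `line` at any instant are, in effect, a single wavetable frame whose shape the feedback filter reshapes on every pass, which is why the result resembles a smoothly morphing wavetable.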
Curiously, sample-based synthesis has often been confused with wavetable synthesis. It’s easy to see why on the surface: both play back existing audio in various ways to create sound. But sample-based synthesis differs from wavetables in that there is (usually) no morphing between samples, and the samples are much longer and/or contain the entire sound’s information rather than just the initial waveform(s). Most sample-based synthesizers will also layer multiple sounds; the Roland D-50, Korg Wavestation, and Yamaha SY77, for instance, use different parts of samples to build a cohesive sound. For example, you could play a closed hi-hat along with a synthetic brass sound to create a complete, and more realistic, trumpet sound. This also touches on multisampling, but I feel sampling really deserves its own post, so I’ll leave it here for now.
Speaking of confusion, people will often refer to anything that has multiple modes that sound different (such as Mutable Instruments’ Plaits module) or anything that can change waveshape (such as MI’s Tides) as wavetable synthesis. While the end result may sound the same, these are very different and distinct from wavetables. For those two examples, Plaits models a different synthesis method per mode, and Tides is a waveshaping device that merely acts like a 2-dimensional wavetable.
As a final note, I would like to mention wavetable lookup distortion, aka window shaping. In this scenario, we use phase distortion (not to be confused with phase modulation synthesis, which is also known as FM synthesis despite not actually modulating frequency) to shape input audio based on a single frame of a wavetable. This effectively remaps the phase (or in some implementations, the amplitude) of the input to match the wavetable. For example, a perfect saw would have no effect, while a triangle wave will act as a full-wave rectifier. The wavetables available for this purpose are often non-interpolated, as the sound is altered via an “amount” or “gain” control rather than by the wavetable changing, though each wave in the table does have a unique sound. It’s like having several distortions in one!
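A hedged sketch of the amplitude-remapping variant of this idea (the function name, indexing scheme, and `amount` blend are my own illustration; real implementations differ):

```python
def window_shape(signal, frame, amount=1.0):
    # Use one wavetable frame as a transfer curve: each input sample
    # (assumed in [-1, 1]) indexes into the frame, and `amount` blends
    # the dry input with the shaped result.
    size = len(frame)
    out = []
    for x in signal:
        idx = int(round((x + 1.0) * 0.5 * (size - 1)))  # map [-1, 1] to a table index
        out.append((1.0 - amount) * x + amount * frame[idx])
    return out

# A rising ramp frame is the identity transfer curve, so it leaves the
# input (nearly) untouched, matching the "perfect saw has no effect"
# observation above.
ramp = [2.0 * i / 255 - 1.0 for i in range(256)]
```

Swapping in the other frames of a non-interpolated table gives you a different transfer curve, and hence a different distortion, per frame.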
As you can see, wavetables have a long, complicated history, and are an equally complicated, though very fulfilling, means of synthesis. I highly suggest exploring the topic more deeply as your tools allow.