Audio Sample Rate in HTML5/JavaScript

This series has so far covered signal processing geared toward audio. Everything covered, and most of what I will cover, could be done using analogue audio equipment (and some people prefer it that way). Nevertheless, you are reading this on a digital device (I think; it would be strange if you weren’t) and all the demonstrations in the series will be in the digital domain. For that reason, I will discuss some of the things that distinguish digital signal processing from analogue signal processing, starting with sample rate. This is a deep topic so I will only cover the basics.


Analogue vs Digital Signals

Analogue signals are continuous and digital signals are discrete. This means that you can inspect any 2 points in an analogue signal and the amplitude could be different. Analogue storage such as cassette tape or vinyl is a continuous representation of a signal, but it has limitations which mean it is generally not of as good quality as digital storage. Storing a truly continuous signal is not possible in the digital domain, which is where sampling comes in.

You might have heard of sampling in relation to sound recording, but it is also the term for creating a digital representation of any type of analogue signal. Disregarding the coloration introduced by the hardware components, there are 2 qualities of digitally sampled signals which affect the fidelity: the first is the sample rate and the second is the bit depth. This post will concentrate on the sample rate.


Conversion between Digital and Analogue Audio

Below are the basic steps for recording audio digitally and listening to this digitally recorded audio.

Capturing Digital Audio

Vibrations create fluctuations in air pressure known as sound. A microphone captures a representation of this in the form of an analogue signal. An ADC (Analogue to Digital Converter) makes a digital signal representation of this analogue signal.

Playing Back Digital Audio

A digital device generates a stream of samples from a digital audio file. A DAC (Digital to Analogue Converter) converts the digital signal to an analogue signal. A loudspeaker then vibrates to create the fluctuations in air pressure which are heard as sound.


Sample Rate

The sample rate of a digital signal is the number of times per second (Hz) that the signal amplitude is captured and stored. 44.1kHz and 48kHz are common sample rates for audio equipment today. It is possible to go above this, but unnecessary as we will see. For the latter rate, the ADC takes 48,000 snapshots (samples) of the signal per second.

The canvas below shows a sine wave in the time domain sampled at discrete intervals. Each vertical line is the point in time at which an individual sample of the signal is taken. The amplitude of the sample is equal to the amplitude of the wave at that point.

Between each of the 48,000 samples there could be a great deal of lost information but, in the context of audio signals, this does not matter. The Nyquist–Shannon sampling theorem states that to make a perfect digital reconstruction of a signal, the signal must contain only frequencies below half of the sample rate. This guarantees more than 2 samples of every wave cycle in the signal. Half of 44.1kHz is 22.05kHz, which is well above the upper limit of human hearing.
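To make this concrete, here is a short plain-JavaScript sketch of sampling. The 440Hz tone and 1-second duration are arbitrary choices for illustration:

```javascript
// Sample one second of a 440Hz sine wave at 48kHz.
// Both the tone frequency and the duration are arbitrary
// choices for illustration.
var sampleRate = 48000; // samples per second (Hz)
var frequency = 440;    // tone frequency (Hz)

var samples = [];
for (var n = 0; n < sampleRate; n++) {
    // t is the time of the nth sample in seconds.
    var t = n / sampleRate;
    samples.push(Math.sin(2 * Math.PI * frequency * t));
}

// One second of audio yields sampleRate samples.
console.log(samples.length); // 48000
```

Each entry in `samples` is one amplitude snapshot; the signal between snapshots is simply not stored.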


Downsampling and Upsampling

Downsampling is the process of reducing the sample rate of a digital signal; upsampling is the process of increasing it.
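A minimal sketch of downsampling by an integer factor in plain JavaScript (keeping every Nth sample, known as decimation). Note that a proper downsampler would low-pass filter the signal first to avoid aliasing, which this sketch omits:

```javascript
// Downsample by keeping every Nth sample (decimation).
// A real implementation would apply an anti-aliasing low-pass
// filter first; this sketch skips that step for brevity.
function decimate(input, factor) {
    var output = [];
    for (var i = 0; i < input.length; i += factor) {
        output.push(input[i]);
    }
    return output;
}

// Halving the sample rate of a short buffer:
var halved = decimate([0.0, 0.5, 1.0, 0.5, 0.0, -0.5], 2);
// halved is [0, 1, 0]
```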

It is worth noting that the frequency range of a downsampled signal will be reduced, but the frequency range of an upsampled signal will not be increased. The upsampled signal is capable of holding a greater range of frequencies than before, but since those frequencies were not present in the original, they cannot be created by this process.

Sample Rate Reduction Demonstration

The demonstration below uses the visualiser first introduced in this post. Touching/clicking the visualiser at different points across the screen will adjust the sample rate to simulate downsampling. At the far left of the screen the signal is at the full sample rate set by the Web Audio API. The sample rate decreases continuously towards the right of the screen. The new sample rate is shown in the top left corner.

The red wave is the input (unaffected) signal. The blue wave is the downsampled output signal.


The Sample Rate Reduction Effect

You should notice that as the sample rate decreases, the highest audible frequencies of the original audio signal disappear, making the audio sound muffled. You may also notice additional artefacts that are not in the original signal. There are 2 reasons for this.

This effect is only ‘pseudo’ downsampling because the actual sample rate never changes. Instead, the effect processor takes a sample from the input and then holds that sample for the next N outputs. Usually, after a signal is produced by a DAC it is passed through a reconstruction filter to remove frequencies above the Nyquist frequency. This smooths out the signal so that only frequencies from the original signal are present. Because the effect processor comes before the input to the DAC (in the PC sound card), the held samples are seen as legitimate amplitudes, meaning that additional frequencies are introduced into the signal.
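The hold behaviour described above can be sketched in a few lines of plain JavaScript. This is a simplified standalone version, not the demo's actual processor, and the hold factor of 4 is an arbitrary choice:

```javascript
// Pseudo sample rate reduction: take every Nth input sample
// and repeat it for N output samples (sample-and-hold).
function sampleAndHold(input, holdFactor) {
    var output = [];
    var held = 0;
    for (var i = 0; i < input.length; i++) {
        if (i % holdFactor === 0) {
            held = input[i]; // take a fresh sample
        }
        output.push(held);   // repeat it until the next one
    }
    return output;
}

var result = sampleAndHold([1, 2, 3, 4, 5, 6, 7, 8], 4);
// result is [1, 1, 1, 1, 5, 5, 5, 5]
```

The stair-stepped output is what introduces the extra frequencies: the flat runs and sudden jumps are not in the original signal.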

We can show that this effect is occurring using the visualiser. Switch the source to the 300Hz sine wave and then reduce the sample rate. The Nyquist–Shannon sampling theorem states that a sample rate above 600Hz is enough to capture all the required information in the signal, yet you will notice distortion in the wave as soon as it is downsampled. Passing the signal through a reconstruction filter after the downsampling would remove this distortion, and the red and blue waves would then look exactly the same at any sample rate down to 600Hz.

I will look at this topic again later in this series when discussing filters.

Back to the additional artefacts.



The other cause of additional artefacts is a phenomenon known as aliasing. Aliasing occurs when the sampled signal contains frequencies at or above the Nyquist frequency (half of the sample rate).
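When a tone above the Nyquist frequency is sampled anyway, it appears ‘folded’ back to a lower frequency. A minimal sketch of that folding arithmetic (plain JavaScript, not code from the demonstration; the example frequencies are arbitrary):

```javascript
// Compute the frequency an input tone appears at after sampling
// at sampleRate, by folding it into [0, sampleRate / 2].
function aliasFrequency(frequency, sampleRate) {
    var f = frequency % sampleRate;
    return f > sampleRate / 2 ? sampleRate - f : f;
}

// A 300Hz tone sampled at only 400Hz (Nyquist = 200Hz)
// aliases to 100Hz.
console.log(aliasFrequency(300, 400)); // 100
```

Crucially, a genuine 100Hz tone sampled at 400Hz produces the same samples, which is why an alias cannot be distinguished from a real signal after the fact.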

We can use the visualiser with the 300Hz sine wave again to demonstrate the effect of aliasing. The screenshot below shows the visualiser playing the 300Hz sine wave with maximum sample rate reduction. The purple wave has been added as a rough estimation of the signal output from a reconstruction filter. There is no sign (pun intended) of the 300Hz wave. There is also no evidence that the new alias signal is an alias at all; it could have been in the original signal. The downsampling has compromised the signal and it can never be restored.

To avoid aliasing when sampling, a low-pass filter cutting all frequencies above the Nyquist frequency is applied before the ADC.
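As a very rough illustration of the idea, the sketch below averages each pair of samples before discarding half of them. A 2-point moving average is a crude low-pass, far gentler than anything a real converter would use:

```javascript
// Crude anti-aliasing before decimation: average each pair of
// samples (a 2-point moving average, a very gentle low-pass),
// then keep only the averaged values, halving the sample rate.
// A real ADC uses a much steeper filter near the Nyquist frequency.
function lowPassThenDecimate(input) {
    var output = [];
    for (var i = 0; i + 1 < input.length; i += 2) {
        output.push((input[i] + input[i + 1]) / 2);
    }
    return output;
}

var out = lowPassThenDecimate([0, 1, 0, -1, 0, 1]);
// out is [0.5, -0.5, 0.5]
```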

Sample Rate Aliasing


You might recognise this effect as having a similar quality to some VSTs like Ableton Live’s Redux effect.

I will keep this effect processor handy for use in other demonstrations later.

Change Log

  • Changes to the sine wave graph to demonstrate sampling
  • Sample rate reduction effect


Source Code (click to expand)

I have only included the source for the sample sine wave canvas and the downsampling effect as the other files have been covered on other pages.


<!DOCTYPE html>
<html>
<head>
    <meta charset="UTF-8">
    <title>Sine and Cosine</title>
    <script src="Scripts/com/littleDebugger/namespacer.js"></script>
    <script src="Scripts/com/littleDebugger/daw/dsp/generator/sineWave.js"></script>
</head>
<body>
    <canvas id="canvas" height="750" width="1580">
        Browser does not support canvas
    </canvas>
    <script src="Scripts/sineGraph.js"></script>
</body>
</html>



// js file for SineGraph.html

// Reference the sineWave generator.
var sineWaveGenerator = com.littleDebugger.daw.dsp.generator.sineWave;

// Set up the canvas.
var canvas = document.getElementById("canvas");
var ctx = canvas.getContext("2d");
var noneCroppingHeight = canvas.height - 2;

// Set the grid colours.
var axisColor = "Black";
var divideColor = "grey";

// Variables for the sample rate demonstration.
var sampleFrequency = 30;
var samples = [];

// Draw a horizontal line on the canvas.
// <y> The point on the Y-axis where the line should be drawn.
// <color> Color of the line.
function drawHorizontalLine(y, color) {
    ctx.beginPath();
    ctx.strokeStyle = color;
    ctx.moveTo(0, y);
    ctx.lineTo(canvas.width, y);
    ctx.stroke();
}

// Draw grid.
drawHorizontalLine(canvas.height / 4, divideColor);
drawHorizontalLine(canvas.height / 4 * 3, divideColor);
drawHorizontalLine(canvas.height / 2, axisColor);

// Draw a wave.
// <waveOffset> The offset of the wave.
// <color> Color of the wave.
var drawWave = function (waveOffset, color) {
    ctx.beginPath();
    ctx.strokeStyle = color;
    var sineWaveGeneratorInstance = sineWaveGenerator(waveOffset, canvas.width);

    for (var i = 0; i < canvas.width + 1; i++) {
        // Get sine wave samples at 2 cycles across the canvas.
        var amplitude = sineWaveGeneratorInstance.getSample(2) * 0.9;
        var y = canvas.height - ((amplitude * noneCroppingHeight / 2) + (canvas.height / 2));
        ctx.lineTo(i, y);

        // Record a sample point every sampleFrequency pixels.
        if ((i + 1) % sampleFrequency == 0) {
            samples.push({
                x: i,
                y: y
            });
        }
    }
    ctx.stroke();

    ctx.strokeStyle = 'black';
    samples.forEach(function (sample) {
        // Draw the sample line.
        ctx.beginPath();
        ctx.moveTo(sample.x, canvas.height / 2);
        ctx.lineTo(sample.x, sample.y);

        // Draw the circle.
        ctx.arc(sample.x, sample.y, 10, 0, 2 * Math.PI);
        ctx.stroke();
    });
};

// Draw the sampled wave (this call was cut off in the original
// listing; the arguments here are assumed).
drawWave(0, axisColor);

// So the iframes don't overflow on mobile devices.
var setDimensions = function () {
    canvas.style.width = (window.innerWidth - 20) + "px";
    var height = window.innerHeight > window.innerWidth ?
        window.innerWidth :
        window.innerHeight;
    canvas.style.height = (height - 30) + "px";
};

window.onresize = function () {
    setDimensions();
};

setDimensions();


// Pseudo sample rate reduction effect.
com.littleDebugger.daw.dsp.sampleRateReduction = function (bitReductionControl) {

    var currentPos = 0;
    var pow = 0;

    // Process audio buffer.
    // <inputBuffer> The buffer to be processed.
    // <outputBuffer> The processed buffer.
    return function (inputBuffer, outputBuffer) {
        var reductionFraction = bitReductionControl.value;

        for (var sample = 0; sample < inputBuffer.length; sample++) {
            // Take a fresh input sample every reductionFraction samples;
            // otherwise keep holding the previous one.
            if (currentPos >= reductionFraction) {
                pow = inputBuffer[sample];
                currentPos = 0;
            }

            outputBuffer[sample] = pow;
            currentPos++;
        }
    };
};