JavaScript Audio Processing Series

I am going to start this blog with a series of articles that combine two of my passions: computer programming and sound engineering.

There are a few HTML5/JavaScript synthesisers and drum machines about, but nothing that uses the browser to its full capacity. The goal of this series will be to push the browser's audio capabilities to the limit.


Web Audio API

HTML5 introduced the Web Audio API. It was possible to play audio in the browser before HTML5, but only through plugins such as Flash or QuickTime [1]. HTML5 provides the video and audio elements, and the Web Audio API provides many well-designed modular components for generating waveforms, modulation, effects and many other digital signal processing (DSP) features.
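To give a flavour of those components, here is a rough sketch of a high-level patch (assuming a browser that provides AudioContext and that audio playback has been allowed to start): an oscillator node feeds a gain node, and all of the DSP happens inside the nodes.

// A minimal sketch of the high-level approach: one oscillator and one gain
// node produce a quiet 440 Hz sawtooth tone. (Older browsers may expose the
// context as webkitAudioContext instead.)
var context = new AudioContext();

var oscillator = context.createOscillator();  // generates the waveform
oscillator.type = 'sawtooth';
oscillator.frequency.value = 440;             // frequency in Hz

var gain = context.createGain();              // controls the output level
gain.gain.value = 0.25;

oscillator.connect(gain);                     // oscillator -> gain -> speakers
gain.connect(context.destination);

oscillator.start();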

Not Web Audio API

The API is all well and good, but it is not what I want for this project. It would be a bit misleading for me to include Web Audio API in the name of the series. The problem with the components is also their unique selling point: they are very high-level.

This means that, when calling an API function, a caller knows what result to expect, but how that result is produced is hidden away in a black box. This does not help us to actually understand the digital signal processing involved.

All I want is something that allows me to manipulate buffers and then translate those buffers into sound. That is it. Fortunately, the Web Audio API provides this lower-level functionality, mainly through the ScriptProcessorNode.
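As a rough sketch of what that looks like (assuming a browser that supports ScriptProcessorNode), the onaudioprocess callback hands us a raw output buffer that we can fill with whatever samples we like, in this case a simple sine wave:

// A minimal sketch of the buffer-based approach: the node asks for 4096
// samples at a time and we fill the output buffer with a 440 Hz sine wave.
var context = new AudioContext();
var processor = context.createScriptProcessor(4096, 1, 1); // buffer size, input channels, output channels

var frequency = 440;
var phase = 0;

processor.onaudioprocess = function (event) {
    var output = event.outputBuffer.getChannelData(0); // Float32Array of raw samples
    for (var i = 0; i < output.length; i++) {
        output[i] = Math.sin(phase) * 0.25;
        phase += 2 * Math.PI * frequency / context.sampleRate;
        if (phase > 2 * Math.PI) {
            phase -= 2 * Math.PI; // keep the phase from growing without bound
        }
    }
};

processor.connect(context.destination);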

I will use standard programming operators, functions, and data structures as much as possible. The Web Audio API components will only be utilised when necessary.

What hardware can you find in a traditional studio?

Many individual hardware components make up a traditional studio. These components also exist in digital studios, but they are usually all rolled into a single application, so where one component ends and another begins is not always obvious. A traditional studio offers much less flexibility when it comes to splitting up functionality.

Budding producers who don't know physical studios well may not even realise that the components in their digital audio workstation (DAW) exist as individual pieces of hardware.

The following list contains some of the components found in a studio. Throughout this series, I will compare the analogue hardware components with their digital counterparts and code them in JavaScript.

  • Sequencer
  • Sampler
  • Drum Machine
  • Mixing Desk (Mixer)
  • Effects Processor
  • Audio Editor
  • Synthesizer


Why am I writing this blog?

I am passionate about computer programming and sound engineering. I have wanted to write audio software for a long time, and I am now ready to give it a go. This series will give me a platform to talk about what I already know and to document the development process. It has also been several years since I learned about DSP, so this series will serve as a useful refresher for me.


Why read this blog?

This series will have something for everyone, no matter your background or technical experience. If you are coming from a programming background, then learning about digital signal processing might be new and interesting for you.

If you are coming from a sound engineering background, then you will learn how your trusty audio hardware and software does what it does.

There will be an interactive workshop in each post. If you have no experience in either field, you can still enjoy playing around with the apps and workshops created in this series.

About the blog

I am a C# ASP.NET developer by day. I wanted a platform that could be integrated into, and exhibited from, my blog, which made HTML5/JavaScript the obvious choice. The workshops and apps written for this series will all be in HTML5/JavaScript.

I will include the related code in each post and will make it available on GitHub as a stand-alone web project.

The code will be commented to explain exactly what is happening, and the comments will also cover the design and architectural decisions.

Computer programming is the focus of this blog and this series. The nature of the topic means that I will delve into some of the theoretical aspects of DSP, sound synthesis, and possibly even a little music theory.


Let’s get cracking.


References

[1] Web Audio API Documentation
