Recreating Legendary 8-bit Games Music With the Web Audio API
How chiptune sound was produced on classic consoles, and how to emulate it in the browser with Tone.js.

8-bit music, or chiptune, is synthesized electronic music used in old computers, consoles and arcade machines. It's part of the culture for people who grew up in the late 70s and 80s, and it is still very popular among geeks. The sound was produced by the PSG – Programmable Sound Generator – a sound chip that synthesized several waveforms and often noise. The waveform generator could produce two or three simple waveforms at a time, chosen from pulse, square, triangle and sawtooth waves, and the chip also had a pseudo-random noise generator (PRNG).
There's a very interesting video on YouTube by The 8-Bit Guy, How Oldschool Sound/Music worked, where he explains how the sound was created on different computers, and how different approaches and tricks let game developers create legendary music with very limited hardware. I really encourage you to watch that video if you want to understand how 8-bit music is produced. It was very interesting to me, so I decided to do some more research and try to emulate the legendary 8-bit games music using the Web Audio API. First, I'm going to dig into how it works, and then we'll see how we can do that.
Let's take a look at what I have created, and read on to see what it's all about and how to build it.
There's a lot of JavaScript code there, because all of the music is played by code and there is JS code for every note that you hear. That's why I've also created a CodePen Project, so that you can browse the organized files.
Let's take a look at the Commodore 64. It used the MOS Technology 6581/8580 Sound Interface Device (SID), which featured three independently programmable voices, each able to produce a triangle, sawtooth, variable-width pulse or noise waveform.
The startup screen of the Commodore 64 is instantly recognizable, and people like me first saw it in the GTA Vice City intro.
The NES works differently from the Commodore 64. You don't choose which oscillator uses which waveform. It supports a total of 5 sound channels, but each channel does exactly what it is hard-wired to do: there are two pulse channels, one triangle channel, one noise channel and one PCM sample channel.
Interestingly enough, the Nintendo Game Boy uses two square channels, a programmable custom wave channel and a noise channel.
Then a more advanced sound chip from Yamaha came to the market. It was used in the Ad Lib sound card, which could play up to 9 oscillators at the same time, and while synthesizers took great advantage of that, video game music didn't become much better, because no matter how many pulse waves you play at once, it still sounds like a pulse wave. Game creators realized instead that each oscillator can dynamically change its waveform type, varying between pulse, triangle or noise, to produce richer sounds. That, for example, is what Commodore 64 game developers did. Since we can create as many oscillators as we need with the Web Audio API, we don't have to do that. However, I find the NES approach very interesting for this project, and I'm going to use it. I'll create 4 oscillators (we don't need the digital samples in our soundtracks) and make each of them play a separate track. This will sound very close to original Nintendo Entertainment System game music.
The Web Audio API supports the sine, square, triangle and sawtooth waves out of the box. You can hear how the different waveforms produce different sounds.
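If you want to hear these for yourself, here is a minimal sketch using the plain Web Audio API (the pitch, spacing and note length are arbitrary choices of mine):
var ctx = new (window.AudioContext || window.webkitAudioContext)();
// Play each built-in waveform for 0.4 s, half a second apart.
['sine', 'square', 'triangle', 'sawtooth'].forEach(function(type, i) {
    var osc = ctx.createOscillator();
    osc.type = type;           // one of the four built-in waveforms
    osc.frequency.value = 220; // the same pitch (A3) for a fair comparison
    osc.connect(ctx.destination);
    osc.start(ctx.currentTime + i * 0.5);
    osc.stop(ctx.currentTime + i * 0.5 + 0.4);
});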
However, you may have noticed that none of those consoles had a sine wave. It was introduced after those consoles were released and produces a much milder tone. What about the pulse wave? The Web Audio API does not support the pulse wave out of the box, but there's a repo that lets us create a pulse oscillator. It uses a WaveShaperNode to transform a sawtooth wave into a pulse wave.
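The rough idea behind that trick looks like this (a sketch of the approach rather than the repo's actual code, with an arbitrary duty-cycle value):
var ctx = new AudioContext();
// A sawtooth ramps linearly from -1 to 1 once per period. The WaveShaper
// outputs +1 while the ramp is below a threshold and -1 above it, turning
// the ramp into a pulse whose duty cycle is set by the threshold.
var saw = ctx.createOscillator();
saw.type = 'sawtooth';
var shaper = ctx.createWaveShaper();
var curve = new Float32Array(256);
var width = 0.25; // fraction of each period spent "high"
for (var i = 0; i < curve.length; i++) {
    curve[i] = (i / (curve.length - 1)) < width ? 1 : -1;
}
shaper.curve = curve;
saw.connect(shaper);
shaper.connect(ctx.destination);
saw.start();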
Good news! We don't have to do that ourselves, because we are going to use Tone.js, which gives us the ability to use the pulse waveform.
I decided to use Tone.js for this project for a simple reason: Tone.js accepts pitch-octave notation, so instead of asking it to play a sound with a 440Hz frequency (which we could), we can simply tell it to play the A4 note; it also has tempo-relative timing. Let's see how it works. We are going to use the triggerAttackRelease method, which triggers a note with an attack and a release. This means that the note will start and stop smoothly. For more information about attack and release, read about the ADSR here.
We call it like this:
var synth = new Tone.Synth().toMaster()
synth.triggerAttackRelease('C4', '4n', '8n')
synth.triggerAttackRelease('E4', '8n', '4n + 8n')
This will schedule two notes. One thing we should know is that it uses absolute positioning to place the notes on the "timeline". The C4 is a quarter note (the 4n indicates that) and will start with an eighth-note "offset". The second note is an E4, which is an eighth note, and will start with a quarter-note + eighth-note "offset": that is, the delay before the first note plays, plus the length of the first note. '1m' is also helpful in timing: it stands for 1 measure in Tone.js.
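For instance, scheduling a note at the start of the second measure is as simple as:
// '1m' places this half note one full measure into the timeline.
synth.triggerAttackRelease('G4', '2n', '1m');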
I tried playing the Pac-Man intro using this technique, so I found the sheet music and wrote the notes and timing down into code. I ended up with a huge amount of code and half an hour spent calculating and double-checking the timing, and this part shows only the bassline:
synthBass.triggerAttackRelease('B1', '8n + 16n')
synthBass.triggerAttackRelease('B2', '16n', '8n + 16n')
synthBass.triggerAttackRelease('B1', '8n + 16n', '4n')
synthBass.triggerAttackRelease('B2', '16n', '4n + 8n + 16n')
synthBass.triggerAttackRelease('C2', '8n + 16n', '2n')
synthBass.triggerAttackRelease('C3', '16n', '2n + 8n + 16n')
synthBass.triggerAttackRelease('C2', '8n + 16n', '2n + 4n')
synthBass.triggerAttackRelease('C3', '16n', '2n + 4n + 8n + 16n')
synthBass.triggerAttackRelease('B1', '8n + 16n', '1m')
synthBass.triggerAttackRelease('B2', '16n', '1m + 8n + 16n')
synthBass.triggerAttackRelease('B1', '8n + 16n', '1m + 4n')
synthBass.triggerAttackRelease('B2', '16n', '1m + 4n + 8n + 16n')
synthBass.triggerAttackRelease('F#2', '8n', '1m + 2n')
synthBass.triggerAttackRelease('G#2', '8n', '1m + 2n + 8n')
synthBass.triggerAttackRelease('A#2', '8n', '1m + 2n + 4n')
synthBass.triggerAttackRelease('B2', '8n', '1m + 2n + 4n + 8n')
OK… The first thing I don't like is the amount of code it takes to play a 6-second composition. So the first step in refactoring this would logically be creating an array of objects containing each note, its length and its position; then we could loop through it and call triggerAttackRelease on each iteration. That kind of makes it easier, but… what if we make a mistake somewhere in a note position? Then we have to change all of the subsequent notes, and it takes a lot of time to calculate the positions in the first place. Also, using 1/8 or 1/4 for eighth and quarter notes seems more natural to me. And it quickly becomes very annoying to write 4n + 8n for dotted notes (a dotted quarter note in this case). So I think it would be cool if we could write 1/4. for a dotted quarter note.
Now that we know our challenges, let’s see how Tone.js works and how we can solve all of the problems.
Tone.js is a complex framework with a lot of great features for scheduling and applying sound effects, but I'll try to show how all of the features we need work as simply as possible.
Remember that we want to emulate the Nintendo Entertainment System sound. It has five voices: 2 pulse, 1 triangle, 1 noise and PCM samples (which we don't need for this project).
First, we need to create the synthesizers, one per voice. Tone.js comes with several instruments, i.e. different types of synthesizers. For example, Tone.MonoSynth is composed of one oscillator, one filter, and two envelopes. There are several types of synths, but we need the simple Tone.Synth, which consists of one oscillator and an ADSR envelope. That's what we need. However, we might want to change the default value for release, as the notes in chiptunes usually end rapidly, while the default 1-second release would make the notes end rather smoothly. The Synth defaults look like this:
oscillator: {
    type: "triangle"
},
envelope: {
    attack: 0.005,
    decay: 0.1,
    sustain: 0.3,
    release: 1
}
One other super helpful feature for this project is Tone.js's support for the pulse oscillator wave type. The Web Audio API comes with sine, square, triangle and sawtooth waves out of the box, but Tone.js comes with a PulseOscillator which lets us use the pulse waveform. If you read about Synth in the documentation, you can see it is composed of a Tone.OmniOscillator routed through a Tone.AmplitudeEnvelope. And the OmniOscillator aggregates Tone.Oscillator, Tone.PulseOscillator, Tone.PWMOscillator, Tone.FMOscillator, Tone.AMOscillator, and Tone.FatOscillator into one class. So we can use the pulse wave with Synth.
More good news: Tone.js comes with a NoiseSynth as well, which will be our noise generator. So we have to create three Synths and one NoiseSynth to get all of the NES voices we need:
var pulseOptions = {
    oscillator: {
        type: "pulse"
    },
    envelope: {
        release: 0.07
    }
};

var triangleOptions = {
    oscillator: {
        type: "triangle"
    },
    envelope: {
        release: 0.07
    }
};

var squareOptions = {
    oscillator: {
        type: "square"
    },
    envelope: {
        release: 0.07
    }
};

var pulseSynth = new Tone.Synth(pulseOptions).toMaster();
var squareSynth = new Tone.Synth(squareOptions).toMaster();
var triangleSynth = new Tone.Synth(triangleOptions).toMaster();
var noiseSynth = new Tone.NoiseSynth().toMaster();
In the options we set the wave type, and we also change the release value of the amplitude envelope. If we take a look at a note from the Pac-Man intro theme, we can see that the selected note has a duration of 1.142 seconds, of which 0.070 seconds is the release time (see the pink section).
Now that we have the oscillators, let's see how to play a song composed of many different notes. Tone.js has very advanced functionality for scheduling playback. Why do we need it? Using the native JavaScript clock by calling setTimeout() doesn't work well for creating music, because it's not precise enough, and if the timing is off by just a few milliseconds, the song can sound very bad, especially if it's playing loops or drum patterns. The Web Audio API comes with its own very precise clock, and Tone.js makes it even better by making it possible to think in musical timing instead of milliseconds. They call it the Transport. Tone.Transport is the master timekeeper, allowing application-wide synchronization of sources, signals and events along a shared timeline. There's one tricky point when you try to play a tune: even if you've scheduled the playback of some notes, it won't start until the Transport is started. We can start it by calling Tone.Transport.start();
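Here is what scheduling against the Transport looks like in its simplest form (the note and times here are arbitrary examples):
// Nothing scheduled here will sound until Tone.Transport.start() is called.
Tone.Transport.schedule(function(time) {
    synth.triggerAttackRelease('C4', '8n', time); // fires one measure in
}, '1m');
Tone.Transport.start();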
Tone.Event provides a scheduled callback for a single or repeatable event. It's the base class for musical events. Tone.Loop creates a looped callback at a specified interval. Tone.Pattern arpeggiates between the given notes in a number of patterns. Tone.Part is a collection of Tone.Events which can be started/stopped and looped as a single unit. Tone.Part is what we need for playing our songs. Here's how it works:
var part = new Tone.Part(function(time, note){
    // "note" is the event object below; its keys match what the callback reads
    synth.triggerAttackRelease(note.name, note.duration, time, note.velocity);
}, [
    {"time": 0, "duration": "8n", "name": "C3", "velocity": 0.9},
    {"time": "8n", "duration": "8n", "name": "C4", "velocity": 0.5},
    {"time": "4n", "duration": "8n", "name": "C5", "velocity": 0.5}
]);
Part will loop through the array of notes and pass each object of note values to triggerAttackRelease. Velocity indicates how hard you hit the note and is optional.
Now, to play the part, we have to call part.start(0); but we also have to start the Tone.Transport to hear the music: Tone.Transport.start('+0.1', 0);
The first thing we can do to play the music is to find the sheet music and put all the notes, their durations and so on into objects, then play them. That's a very fun thing to do if you can read sheet music, but it takes a lot of time.
The second thing is converting MIDI files to JSON and automatically grabbing the notes, their durations and so on. This is a huge shortcut, although we'll have to clean up the result a little to make it playable. There's actually a lib called Tonejs/MidiConvert that converts MIDIs to a Tone.js-friendly JSON. This sounds awesome. The downside of this method is that after a MIDI gets parsed, we end up with an array of notes like this:
{"duration":0.12916666666666998, "name":"F4", "time":38.19999999999999, "velocity":1},
Since the duration and time are given in absolute seconds instead of musical timing, we can't control the tempo. But it's OK, and most importantly, it's fast.
Actually, we’ll use both approaches and I’ll teach you to do both.
I decided to create my own format for converting sheet music to code. I also decided to write a function that would calculate the note position automatically for me.
First, let's convert dotted notes to normal notes. This is very simple: replace each dotted note with two plain notes, like 1/4. = 1/4 + 1/8:
function filterDots(value) {
value = value.replace('1/32.', '1/32 + 1/64')
.replace('1/16.', '1/16 + 1/32')
.replace('1/8.', '1/8 + 1/16')
.replace('1/4.', '1/4 + 1/8')
.replace('1/2.', '1/2 + 1/4');
return value;
};
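For example, filterDots('1/8.') returns '1/8 + 1/16'.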
So how can we make it calculate the time automatically? Well, the offset value for the current note shall be the (offset + length) of the previous note. Imagine 4 quarter notes: if a 1/4 note starts at the zero position, then the second note starts at the 1/4 position, the third one at the 1/4 + 1/4 position and the last one at the 1/4 + 1/4 + 1/4 position. But we should "round" those numbers to get 1/2 + 1/4. Otherwise, we end up with this:
8n + 16n + 16n + 8n + 16n + 16n + 8n + 16n + 16n + 8n + 16n
while the "rounded" value is 2n + 4n + 8n + 16n. Much nicer, considering how long that string would become if we played the whole tune.
It actually was tricky, but I found a pretty neat solution (I'm sure there are many ways to do it). As I worked with a lot of fractions here, I decided to use the math.js library. Here's the basic idea: we sum up all of the little time segments and get one fraction. Then, if the numerator is even, we divide it by two; if not, we subtract 1 from the numerator and then divide by two, but we stash a fraction that equals 1/denominator. That way our smallest possible fraction "pops out". As the denominator is always 64, 32, 16, 8, 4 or 2, we can keep dividing it by two without double-checking.
function filterTime(time) {
    time = time.replace('m', ''); // '1m' becomes plain '1', i.e. one whole note
    time = time.split(' + ');
    // Sum all segments into a single fraction; starting the total at 1 lets
    // whole measures be peeled off at the end
    var sum = time.reduce(function(total, element){
        return math.add(math.fraction(total), math.fraction(element));
    }, 1);
    var str = [];
    while(sum.d != 1) {
        if(sum.n % 2 != 0) {
            // odd numerator: the smallest fraction 1/denominator "pops out"
            str.push('1/' + sum.d);
            sum.n = (sum.n - 1) / 2;
        } else {
            sum.n = sum.n / 2;
        }
        sum.d = sum.d / 2;
    }
    if(sum.n > 1) str.unshift(sum.n - 1 + 'm'); // the remainder above our initial 1 is whole measures
    return str.join(' + ');
}
As an example, let's run the following line through the function:
1/8 + 1/16 + 1/16 + 1/8 + 1/16 + 1/16 + 1/8 + 1/16 + 1/16 + 1/8 + 1/16 + 1/16 + 1/8 + 1/16 + 1/16 + 1/8 + 1/16 + 1/16 + 1/8 + 1/16
which sums up to 27/16. So this is what happens:
27/16 =
26/16 + 1/16 =
13/8 + 1/16 =
12/8 + 1/8 + 1/16 =
6/4 + 1/8 + 1/16 =
3/2 + 1/8 + 1/16 =
2/2 + 1/2 + 1/8 + 1/16 =
1 + 1/2 + 1/8 + 1/16 = 1m + 2n + 8n + 16n
The function returns a string of fractions joined with pluses. It does not support triplets yet, but that doesn't hinder this project, so I'll come back to it later.
OK, so now I also want to be able to use 1/4, 1/8 etc. instead of 4n or 8n. However, these have to be converted to the values that are supported by Tone.js, so I have one more function for that:
function filterValue(value) {
if(value == 0) return value;
value = value.replace('1/64', '64n')
.replace('1/32', '32n')
.replace('1/16', '16n')
.replace('1/8', '8n')
.replace('1/4', '4n')
.replace('1/2', '2n');
return value;
}
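For example, filterValue('1/4 + 1/8') returns '4n + 8n', which Tone.js understands.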
To recap the pipeline: we first convert each dotted note length into two separate note lengths (in music, a dotted 1/8 equals 1/8 + 1/16), and after that manipulation we convert the fractions to the 2n, 4n, 8n syntax.
So now, the huge amount of code required for the Pac-Man bassline is as short as this:
var pacManBassline = [
['B1', '1/8.'], ['B2', '1/16'], ['B1', '1/8.'], ['B2', '1/16'],
['C2', '1/8.'], ['C3', '1/16'], ['C2', '1/8.'], ['C3', '1/16'],
['B1', '1/8.'], ['B2', '1/16'], ['B1', '1/8.'], ['B2', '1/16'],
['F#2', '1/8'], ['Ab2', '1/8'], ['Bb2', '1/8'], ['B2', '1/8'],
];
We've automated so much that all we have to do now is create an array of note and note-length pairs. Now we have to put all of the filters together. Remember that .triggerAttackRelease accepted four params? Note name, note duration, note time and note velocity. So we have to provide those. I've created a function filterNotes that takes an array of notes (like pacManBassline) and converts it to an array of note objects ready to be played:
function filterNotes(notes) {
var timeline = 0;
var part = {notes:[], length:''};
var last = notes.length - 1;
notes.forEach(function(note, index) {
note[1] = filterDots(note[1]);
note[2] = (index > 0) ? filterTime(timeline) : 0;
timeline = (timeline) ? timeline + ' + ' + note[1] : note[1];
part.notes.push({"name": note[0], "duration": filterValue(note[1]), "time": filterValue(note[2]), "velocity": 1}); // convert duration and time to Tone.js syntax
})
part.length = filterValue(filterTime(notes[last][2] + ' + ' + notes[last][1]));
return part;
}
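To tie it together, here is a rough sketch (my own glue code, not necessarily how the demo wires it up) of feeding the converted bassline into a Tone.Part:
// Convert the bassline and hand the prepared note objects to a Tone.Part.
var bass = filterNotes(pacManBassline);
var bassPart = new Tone.Part(function(time, note) {
    triangleSynth.triggerAttackRelease(note.name, note.duration, time, note.velocity);
}, bass.notes);
bassPart.start(0);
Tone.Transport.start('+0.1', 0);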
We don't need to care about velocity that much, so we just put 1s everywhere; however, you could work a little more and give root notes more velocity and the rest of the notes less.
Let's get the MIDI of the Legend of Zelda overworld theme and convert it to JSON. Here's a website I found with MIDI and sheet music for Zelda. While you can run MidiConvert locally, you can also use it online by drag-and-dropping a MIDI file and receiving the output. We get a huge object containing 2 tracks with a lot of notes and some additional info. If you take a look at the notes, you'll see that three notes are played at the same moment: on the right-hand part (top) there are two notes played at once, and on the left-hand part (bottom) there is a single note. The top two notes will be played by the pulse and square waves; the bottom one is the bass, and we will use the triangle waveform to achieve that bass sound. In the MidiConvert output there are two tracks, the first one for the right-hand (top) notes, the second one for the left-hand (bottom) notes. The notes property contains all of the notes of the right hand, but we need to separate the top two notes, so that one of them is played by the pulse synth and the other one by the square synth:
"notes": [
{
"name": "D4",
"midi": 62,
"time": 0.0038548535156249994,
"velocity": 0.6692913385826772,
"duration": 0.40206122167968744
},
{
"name": "A#4",
"midi": 70,
"time": 0.0038548535156249994, // same time value as the D4 note
"velocity": 0.7480314960629921,
"duration": 0.40206122167968744
},
...
];
Also, we don't need the midi value. So I decided to create a script in another Pen to separate the tracks and return them formatted. My initial idea was to push all the odd notes to one array and the even notes to another, but I ended up making it universal, because there can be 3 or even 4 notes played at the same moment for one hand. So I check: if the time value of a note equals the time value of a previous note, I push it to another track; but if the note's time value is unique, I push it to the first track:
var notes = [/* paste the notes here */];
var track1 = [];
var track2 = [];
var track3 = [];
var track4 = [];
var temp = {};
notes.forEach(function(note, index) {
    temp = {
        'duration': note.duration,
        'name': note.name,
        'time': note.time,
        'velocity': note.velocity
    }
    // Check the farthest simultaneous note first; otherwise a third or
    // fourth note sharing the same time would also land in track2.
    if(notes[index-3] != undefined && note.time == notes[index-3].time) {
        track4.push(temp);
    } else if(notes[index-2] != undefined && note.time == notes[index-2].time) {
        track3.push(temp);
    } else if(notes[index-1] != undefined && note.time == notes[index-1].time) {
        track2.push(temp);
    } else {
        track1.push(temp);
    }
})
console.log(JSON.stringify(track1));
This way I obtain the correct notes to use in my project. Then I create a songs array, where I set the notes for each separate track as well as the length of the song, which we will use to stop the Tone.Transport after the playback.
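That stopping step could look roughly like this (a sketch; song.length stands for the per-song length value just mentioned):
// Stop the Transport once the song has fully played out.
Tone.Transport.schedule(function(time) {
    Tone.Transport.stop(time);
}, song.length);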
For example, in the Legend of Zelda track I noticed that the drum pattern mimics the bass track, while Super Mario Bros has a separate, swingy drum track with a lot of triplets. So I simply copied all of the Zelda bass notes to the drum (noise) track, but for Mario I had to convert the drum track as well. Be aware that the triggerAttackRelease() method does not accept a note name on NoiseSynth, so you can comment out the note.name line when converting the drum track.
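In other words, the noise part's callback drops the pitch argument, something like:
// NoiseSynth is unpitched: triggerAttackRelease takes only
// duration, time and velocity.
noiseSynth.triggerAttackRelease(note.duration, time, note.velocity);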
I thought it would be cool to display the waves for each track separately; that way it becomes clear how the waveform shape affects the sound. Good news: the Web Audio API gives us that ability, and Tone.js comes with a great Analyser. We have to create a waveform analyser for each synth and then bind the analyser to listen to that synth. The analyser will return the values of the waveform, which we will use as points, drawing lines between them on a canvas. Each analyser will have its own canvas, which will display the analysis information.
First we need to create an analyser: var pulseAnalyser = new Tone.Analyser("waveform", 1024);
Then we have to bind it to the synth using the .fan() method, like this: var pulseSynth = new Tone.Synth(pulseOptions).fan(pulseAnalyser).toMaster();
Then I took a function from the Tone.js examples for drawing the wave on a canvas and modified it a little. Just a little knowledge of canvas is enough to create the visualization. Most of all, I wanted to add gradients to the waves. The following code shows how to add the visualization for one synth; the same is done for the other 3 synths:
var pulseAnalyser = new Tone.Analyser("waveform", 1024);
var pulseContext = document.querySelector('#pulse').getContext('2d');
var canvasWidth = 182, canvasHeight = 60;
var pulseGradient = pulseContext.createLinearGradient(0, 0, canvasWidth, canvasHeight);
pulseGradient.addColorStop(0, '#322982');
pulseGradient.addColorStop(1, '#B742CB');
function drawWave(context, values, gradient){
    context.clearRect(0, 0, canvasWidth, canvasHeight);
    context.beginPath();
    context.lineJoin = "round";
    context.lineWidth = 1;
    context.strokeStyle = gradient;
    context.moveTo(0, (values[0] / 255) * canvasHeight);
    for (var i = 1, len = values.length; i < len; i++){
        var val = values[i] / 255;
        var x = canvasWidth * (i / len);
        var y = val * canvasHeight;
        context.lineTo(x, y);
    }
    context.stroke();
}
function visualize(){
    requestAnimationFrame(visualize);
    drawWave(pulseContext, pulseAnalyser.analyse(), pulseGradient);
}
visualize(); // kick off the animation loop
I tried to show as much code as possible in this article, but the idea was to show you the logic of how to recreate 8-bit game music with the Web Audio API using Tone.js. If you want to see the whole code, I've created a CodePen project where all of the JavaScript files are separated, so take your time and see how everything works together.
I hope you enjoyed this project as much as I did. It's a tribute to the developers who could create such great games with so little system resources. It was fun playing the games; now it's fun to understand how they were built. If you are interested in how the sound was created, also search the internet for how the graphics were created. There's so much rock and roll there.
For any questions contact me @greghvns
Many thanks.