An HTML5-based audio player with React.js

One of the motivations for me to move this site away from Blogger to a different platform was so I could add a set of pages dedicated to my music, with an audio player embedded on each page. Even better, the player could have a visual design consistent with the rest of the site.

Having been involved in the responsive redesign project at StarNow, where I implemented an HTML5-based audio player for member profiles (among other things), I had a sense of what needed to be done to implement my own for this site. I had also learnt some of the technical gotchas from hosting potentially multiple players on a single page or switching tracks within the same player.

After discovering an interesting post about React.js versus other JavaScript MVC frameworks, I had a brainwave to try out React.js, since it was apparently simpler and more efficient. This is significant, because I hadn't bought into the hype surrounding other JS MVC frameworks ever since a bad experience with a Sencha Touch project left me wary. If I'm going to invest time and effort in learning something, I need to understand the pain point it is trying to solve and the actual benefits I should get out of using it.

Given this background, I decided to implement an HTML5-based audio player using React.js and see how that compared to implementing the entire thing using jQuery alone.

This player also differs from the StarNow player in a few other ways, which I'll touch on later.

Firstly, there is a bit of jQuery to encapsulate the player as a plugin:

$.fn.audioPlayer = function (initParams) {

	//Global / 'static' state
	var rDom = React.DOM;
	// ...other variables...

	return this.each(function () {
		//Player instance-specific state
		var mp3Source = $(this).data('srcMp3');
		var oggSource = $(this).data('srcOgg');
		var trackTitle = $(this).data('trackTitle');

		//React.js component defined and rendered here
	});
};

...which is called by a simple bootstrap script:

$(function () {
	//Selector here is illustrative - it matches each player's containing div
	$('.audio-player').audioPlayer();
});
Next, the React component class is defined and rendered inside the containing div element for each player instance. In the render function, I decided against using JSX syntax (which allows templating with XML syntax directly within JavaScript), as that would require additional scripts to compile it to plain JavaScript before execution.
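For contrast, here is a sketch of what the equivalent render function might look like in JSX - purely illustrative, since avoiding exactly this compile step was the point:

```jsx
render: function () {
	return (
		<div className="playerHost">
			<audio ref="audioObject">
				<source src={this.state.srcMp3} type="audio/mpeg" />
				<source src={this.state.srcOgg} type="audio/ogg" />
			</audio>
		</div>
	);
}
```

Instead, the player builds the same tree with plain function calls via React.DOM.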

var player = React.createClass({
	getInitialState: function () {
		return {
			title: trackTitle,
			srcMp3: mp3Source,
			srcOgg: oggSource,
			isPlaying: false,
			trackLength: 0,
			volume: 0.7,
			isMute: false
		};
	},
	// ...other event handlers and functions...
	render: function () {
		return rDom.div({ className: 'playerHost' },
			rDom.audio({ ref: 'audioObject' },
				rDom.source({
					src: this.state.srcMp3,
					type: 'audio/mpeg'
				}),
				rDom.source({
					src: this.state.srcOgg,
					type: 'audio/ogg'
				})
			)
			// ...rest of the player markup elements...
		);
	}
});

React.renderComponent(player({}), this);

Of particular note, the component state defined in getInitialState is referenced in the rendering code, e.g. this.state.srcMp3.

Since the player is interactive, the audio DOM element needs to be referenced from within other functions, which is why a 'ref' attribute is defined on the audio element. Here's how it is actually used in the play function, as an example:

play: function () {
	this.refs.audioObject.getDOMNode().play();
	this.setState({ isPlaying: true });
},
// ...other functions...

The beauty of React is that changes to the internal state do not cause elements to be replaced by a different instance if that part of the DOM hierarchy remains the same, so the audio element reference is still valid. Changing the volume does not reset the playback position, let alone require reloading of the track. This aspect of React is similarly useful when maintaining the canvas state for the playback bar and volume bar.
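As a sketch of how this plays out, a hypothetical setVolume handler (my own illustration, not taken from the actual player) can mutate the volume property on that same, still-valid DOM node while updating component state - the audio element's src and playback position are never touched:

```javascript
//Hypothetical handler - illustrates the pattern, not from the actual player
setVolume: function (newVolume) {
	//Same DOM node survives the re-render, so playback continues uninterrupted
	this.refs.audioObject.getDOMNode().volume = newVolume;
	this.setState({ volume: newVolume });
},
```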

One hurdle I did face was binding events to the audio element. With divs, spans and canvases, I could add an onClick attribute in the render function; however, this did not work for the audio element. Instead, I had to wire it up manually in the componentDidMount function. The same applied to binding the window resize event to enable responsive resizing (given that the canvases had to be redrawn at the new dimensions):

componentDidMount: function () {
	var audioElement = this.refs.audioObject.getDOMNode();
	audioElement.volume = this.state.volume;
	//Set track/volume bar resolutions and draw
	//Bind events
	audioElement.addEventListener('progress', this.updateProgress);
	audioElement.addEventListener('timeupdate', this.updateProgress);
	audioElement.addEventListener('ended', this.handleMediaEnd);
	window.addEventListener('resize', this.handleResize);
},

Similarly, the events are unbound at the component's end of life:

componentWillUnmount: function () {
	var audioElement = this.refs.audioObject.getDOMNode();
	audioElement.removeEventListener('progress', this.updateProgress);
	audioElement.removeEventListener('timeupdate', this.updateProgress);
	audioElement.removeEventListener('ended', this.handleMediaEnd);
	window.removeEventListener('resize', this.handleResize);
},
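The updateProgress handler bound above isn't shown in full; at its heart it only needs the fraction of the track played (to size the canvas playback bar) and a formatted time string. Here is a self-contained sketch of those calculations - the function names are my own, not from the actual player:

```javascript
//Fraction of the track played, guarding against an unknown/zero duration
function playedFraction(currentTime, duration) {
	if (!duration || duration <= 0) {
		return 0;
	}
	return Math.min(currentTime / duration, 1);
}

//Format a time in seconds as m:ss for display
function formatTime(totalSeconds) {
	var minutes = Math.floor(totalSeconds / 60);
	var seconds = Math.floor(totalSeconds % 60);
	return minutes + ':' + (seconds < 10 ? '0' : '') + seconds;
}
```

The handler would read currentTime and duration off the audio element and feed the results into the canvas drawing code.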

So this player turned out to be a success; I didn't have to abandon the idea and revert to implementing it entirely in jQuery.

In the end, the two approaches were fairly similar in this case. There were still several functions to implement for the player interactions that are so easily taken for granted, though my job was simpler for not having to fade out a volume slider nor worry about providing a fallback to Flash - both factors unrelated to the technology I used. With jQuery, my references to DOM elements would be stored in the per-player closure instead of being accessed via 'this.refs' as often. On the other hand, the markup generation and event binding code is a little cleaner in React than what I'd typically write with jQuery alone.
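For comparison, generating the same markup with jQuery alone tends to look something like this (a sketch of the general style, not the StarNow code):

```javascript
//jQuery-only equivalent (sketch): build the elements, then wire events by hand
var audio = $('<audio>')
	.append($('<source>').attr({ src: mp3Source, type: 'audio/mpeg' }))
	.append($('<source>').attr({ src: oggSource, type: 'audio/ogg' }));
$('<div>').addClass('playerHost')
	.append(audio)
	.appendTo(this);
audio.on('timeupdate', updateProgress);
```

Workable, but the element construction and event wiring are interleaved by hand, whereas React keeps the markup in one declarative render function.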

Here's an example of the final result, with more of my music in the Music section: