Streaming Audio on the Web with NodeJS

About a year ago I wrote a post about how to stream audio from a radio server using NodeJS, and since then I’ve made some upgrades to the code that made the app much more scalable and performant, which I’d like to share in this post. Unlike that post, however, I’ll try to cover everything you might need regarding streaming audio in general: encoding and decoding, keeping a global stream and multiple clients, dealing with real time, and an updated take on streaming from a radio server.

First of all: why use Node? Well, one easy answer is: because it’s simple. Node is very well known for making streaming extremely straightforward, and now, thanks to a few modules, audio is no different. Although subject to discussion, Node also scales well with streams, or at least well enough to get you started, which you will be soon enough, considering Node is just JavaScript, and you do know JavaScript, right?

Before we start I’d like to give a big shout out to @TooTallNate, who, as well as authoring most of the modules I’ll talk about here (and being a Node legend), helped me a lot (and continues to help) whenever I get stuck with all of this audio mess. Thanks man, without you I couldn’t have gotten this far.

Okay, so let’s start off by tackling the biggest problem regarding streaming audio on the web: you can’t just stream any type of binary data and expect the HTML5 audio tag to figure everything out by itself (at least not yet). We need to supply it with properly compressed audio data, and to do that we need a way to decode the incoming binary data into a raw PCM stream (the most uncompressed form of audio) and then encode it into whatever format we want, such as MP3 or OGG (for this post I’ll stick to MP3 only).

Decoding/Encoding MP3 with node-lame

Lame is an open-source MP3 encoder/codec; it is the de-facto tool for doing any sort of manipulation with MP3 files. Until some time ago, the only way to decode/encode MP3 in Node was to manually spawn a child process for Lame and feed data into its standard input (stdin) and read from its standard output (stdout), which sorta works…sorta. But we don’t have to worry about that anymore, because we now have node-lame, which provides native bindings to the Lame codec that simply get the job done: you feed it raw PCM data (which could come from, say, a radio server, or from a file you decoded using another codec) and it gives you back a stream of valid MP3 data with which you can do whatever you want, be it stream directly to clients, write to a file, etc. And if you’re worried about OGG, Mr. TooTallNate took care of that as well with node-ogg.

Even though the lib repos contain a lot of code examples, I’ll paste bits of how I use node-lame in my radio website (which I’ll talk about later):

  var icecast = require("icecast"), // I'll talk about this module later
      lame = require("lame");

  var encoder = lame.Encoder({channels: 2, bitDepth: 16, sampleRate: 44100});
  encoder.on("data", function(data) {
    sendData(data);
  });
  var decoder = lame.Decoder();
  decoder.on('format', function(format) {
    decoder.pipe(encoder);
  });

  var url = 'http://stream.pedromtavares.com:10000';
  icecast.get(url, function(res) {
    res.on('data', function(data) {
      decoder.write(data);
    });
  });

  var clients = []; // consider that clients are pushed to this array when they connect

  function sendData(data){
    clients.forEach(function(client) {
      client.write(data);
    });
  }

Considering you have an array of client response objects to write to (which you can manage using something like express), those lines of code are all you need to successfully stream audio data from a radio server directly to an HTML5 audio tag, and node-lame makes it that much simpler.
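
In case you’re wondering how that clients array could be managed, here’s a minimal sketch using Node’s plain http module (the port and headers are just illustrative, and an express route handler would work the same way):

  var http = require('http');

  http.createServer(function(req, res) {
    // send the headers the audio tag expects before any MP3 data arrives
    res.writeHead(200, {'Content-Type': 'audio/mpeg'});
    clients.push(res);
    // stop writing to clients that have disconnected
    req.connection.on('close', function() {
      clients.splice(clients.indexOf(res), 1);
    });
  }).listen(8080);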

Keeping a Global Stream and Multiple Clients

Chances are that when you’re dealing with streaming audio on the web, it’s gonna involve serving multiple clients. So far I have dealt with two options regarding this:

  1. You create a personalized child stream for each user connection all coming from a parent stream, and close that stream once the user is gone.
  2. You create a global stream and feed that directly to all clients by adding and removing them from a collection where the data gets streamed to.

As you might have guessed based on the previous code example, I prefer number 2. The reason is that creating a personalized stream with encoder/decoder instances for each user, although guaranteeing a perfect audio experience (since it’s being served to that user only), is way too expensive when it comes to both RAM and CPU. When I chose option 1, my VPS box would easily cap all processors at 100% once my stream got to about 50 listeners, so imagine what would happen if there were more.

With a global stream you only have to worry about one instance of the decoder and encoder, and leave it to Node to serve 128kbps to all of your listeners without losing audio integrity (so far I have managed 70 with no issues). There are still problems with this approach though, such as having to keep the decoder/encoder write streams open all the time, allocating a lot of memory due to instances of Buffer objects, and since these are allocated outside of the V8 memory heap, it’s up to the operating system to clear them.

The Challenge of Real Time Streaming

Keeping things in real time for the users is hard when you’re not in control of what’s being streamed, like the case where you want to stream audio files. Depending on your machine, Node will finish reading an audio file in about 5 seconds or so, and if you’re piping that directly to the client response objects, a couple of problems occur:

  • If you’re streaming a sequence of files, such as a playlist, Node will read the entire playlist much faster than your users will hear it, meaning that they’ll be fed ‘old’ buffered data which can be garbage collected at any time. If you’re also informing what song is currently playing on the playlist, what users hear and what the playlist says is playing will get desynced very fast.
  • Two users will be hearing completely different parts of the song if one happens to connect a few seconds after the other, since the reading speed is much faster than ‘hearing speed’.

To solve this problem, we need to constantly pause and resume the file reading so it never gets too far ahead nor too far behind (causing pauses in the audio). In terms of code, this is as easy as creating a custom buffer which will only accept a certain amount of data, then pausing the stream for a determined period of time (like a second). Not surprisingly, TooTallNate also built a small module to encapsulate this logic called node-throttle. But we still need to know how much data we need to stream per second, also known as the audio file’s bitrate. Most files have a bitrate of 128kbps, but bitrates vary way too much and we can’t always count on them being the same unless we have 100% certainty about the files we’re gonna serve, and remember, if we throttle the wrong number of bytes per second, the live stream will get desynced.

Bundled with FFmpeg (an open-source multimedia package containing various libraries and programs) comes a tool called ffprobe, which does exactly what we need: it analyzes media files and returns every sort of information about them, including the bitrate. The gotcha of using ffprobe is that you need an entire, valid MP3 file for it to work, meaning that if you don’t have the whole file on your server for it to analyze, you’re out of luck. Since ffprobe is a command-line tool, we can simply spawn it as a child process and parse its stdout for the information we need, which is exactly what node-ffprobe does.

With node-throttle and node-ffprobe combined, you can simulate a real-time stream pretty decently. Of course it will never be 100% accurate, but, at least in my experience, it gets about 90% there (the stream desyncs by about 10-20 seconds), which is already good enough for me :)

  var throttle = require('throttle'),
      fs = require('fs'),
      probe = require('node-ffprobe');

  var track = 'track.mp3';
  probe(track, function(err, probeData) {
    var bit_rate = probeData.format.bit_rate;
    var currentStream = fs.createReadStream(track);
    var unthrottle = throttle(currentStream, (bit_rate/10) * 1.4); // this multiplier may vary depending on your machine
    currentStream.on('data', function(data){
      decoder.write(data); // consider the decoder instance from the previous example
    });
  });

Streaming From a Radio Server (SHOUTcast/Icecast)

Something that most people aren’t aware of is how useful a radio server can be for streaming audio. For starters, radio servers like SHOUTcast and Icecast have been around for a long time, so there are a lot of tools that connect to and support them, such as DJing software like VirtualDJ and Traktor, and broadcasting software such as OS X’s Nicecast, which can broadcast the output of any application running on your computer (like Skype or iTunes), any audio device (like your built-in microphone), or even system audio. This means that with the help of a radio server, you can easily stream live (in real time) from your microphone, a Skype conference (podcasts, anyone?), your iTunes songs and even your DJ mixes, if you happen to be one (like me o/).

For my particular case, I chose SHOUTcast because they provide a server package for you to install on your machine. The installation was extremely easy (they even have a wiki for it) and I got it up and running in no time on my VPS at http://stream.pedromtavares.com.

As you might have noticed from the previous examples, the code for proxying a radio server is really simple with the help of node-icecast, which streams everything coming from the radio server as raw audio data (which you can decode and re-encode with node-lame) and, as its README states, you even get to handle ‘metadata’ events, which usually contain the current track information, along with other properties of your broadcast. With the help of WebSockets, you can set up a complete radio website with current track updates in no time:

  var icecast = require("icecast");

  var url = 'http://stream.pedromtavares.com:10000';
  icecast.get(url, function(res) {
    res.on('data', function(data) {
      decoder.write(data); // consider the decoder instance from the first example
    });
    res.on('metadata', function(metadata) {
      var track = icecast.parse(metadata).StreamTitle;
      publishToClients(track); // use your pub/sub lib of choice (I use Juggernaut) to publish tracks to all connected clients
    });
  });

With a radio stream you don’t need to worry about throttling requests or analyzing files since you will be manually feeding the radio server with data in real time anyway, so things just work.

How I Applied All of This

I started messing with this audio stuff with Node about a year and a half ago, and since then I have been maintaining my own radio website. It has two modes:

  1. When there is a DJ connected to the radio server, it proxies the radio server.
  2. When there is no DJ connected, it streams tracks (in real time) from ex.fm, and since they provide such a killer API, I also give users the ability to build their own playlists to be played.

I’m keeping all the code open source in a GitHub project if you want to check it out, and although there is much more code there, the core of it has already been explained here.

That’s it! I’ll update this post with any other awesome tools related to audio streaming using Node that I manage to find and apply, but I hope what I gathered so far has been of some help. Cheers!

Using NodeJS to Stream a Radio Broadcast

Update

It has now been almost a year since I originally wrote this article, and thus, the codebase for the project I show here has changed considerably, as well as the tools used. For an updated take on the subject of audio streaming with NodeJS, please refer to this post.

If you still wish to continue reading, refer to the ‘pre-express’ branch on GitHub, which reflects exactly what I explain in this post.


So recently I was invited to compete in Node Knockout (similar to RailsRumble) by one of Brazil’s best JS developers. The thing was, I had never done NodeJS in my life, and the competition was only a few weeks away, so for the sake of not slowing the team down I decided to write an app to get familiar with it.

Background

I started out with the NodeJS Peepcode screencast, which lays down a pretty cool app that I immediately hooked up to my Rails side project. A week later I started looking into ways to broadcast a stream of music from VirtualDJ, since I’m kind of an amateur DJ myself and wanted a way to play stuff to my friends over the internet. Through some research I came across Shoutcast, which, as Wikipedia states, is a “cross-platform proprietary software for streaming media over the Internet”. The cool thing is that they have server packages which you can just download and install on your computer, so since I have a Linode VPS box I quickly got it set up at http://stream.pedromtavares.com.

But having a radio stream is not enough, nor does it have anything to do with programming since it’s just server configuration, so where does Node come in? A Shoutcast stream sends buffered audio data, which is perfectly readable by audio players such as iTunes, Windows Media Player and Winamp because they have internal decoders, but something like an HTML5 audio tag can’t read that buffer directly, and that’s where an HTTP decoding proxy comes in. A quick search on Stack Overflow miraculously got me started with a very simple solution that worked on Chrome out of the box. From that example I found the creator of the radio-stream package, who is also the owner of icecast-stack (a rewrite of radio-stream). The icecast-stack repo has some examples in it, the most relevant to our case being simpleProxy, which was the solid foundation for all my work.

Walking through the code

Enough talking, let’s go over the things I did on top of that simpleProxy example and how everything works together. First things first, the code repository for my radio is hosted on GitHub, so make sure you open it to follow along. Just to lay out the architecture: we have a Shoutcast server (which I’ll call the radio server) which sends the stream to our Node HTTP server (which I’ll call the proxy), an input source (VirtualDJ in my case) which feeds data into the radio server, and the browser, which plays data coming from the proxy.

The most important feature is, obviously, to decode and stream audio, and the code for that is all in the decoder.js file. We start an instance of Decoder, and with it a stream of raw PCM data, to which we write all data that comes from the radio server and later encode to MP3/OGG depending on what is asked of the web server. This stream of raw data on STDIN is also useful for a very cool feature called “Burst-On-Connect”, which is basically a buffer of about 2MB of data that gets played immediately when requested, so we don’t have to make the user wait for the radio server to send data to the proxy before the browser actually plays something; we can just send the buffered data immediately.
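
Just to illustrate the idea, here’s a rough sketch of a Burst-On-Connect buffer (the names and the 2MB constant are mine, not the exact code from decoder.js, and ‘stream’ stands for whatever is emitting the audio data):

  var BURST_SIZE = 2 * 1024 * 1024; // hold on to roughly the last 2MB of audio data
  var burstChunks = [];             // most recent chunks coming from the stream
  var burstLength = 0;

  stream.on('data', function(chunk) {
    burstChunks.push(chunk);
    burstLength += chunk.length;
    while (burstLength > BURST_SIZE && burstChunks.length > 1) {
      burstLength -= burstChunks.shift().length; // drop the oldest chunks first
    }
  });

  // when a new client connects, flush the buffer so playback starts immediately
  function burstOnConnect(res) {
    burstChunks.forEach(function(chunk) {
      res.write(chunk);
    });
  }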

So we already have data ready to be streamed in our STDIN that’s getting endlessly fed from the radio server, what now? Now we can have our audio tag (or flash player) point to a URL such as /stream.mp3 and route that to our MP3 encoder, which will start an MP3 stream that can be read by the audio tag/flash player. The same goes for /stream.ogg, which is also necessary because some browsers can’t play MP3 data.
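
A bare-bones sketch of that routing could look something like this (the names mp3Stream and oggStream are hypothetical stand-ins for the encoded output streams; the real wiring lives in the repo):

  var http = require('http');

  http.createServer(function(req, res) {
    if (req.url == '/stream.mp3') {
      res.writeHead(200, {'Content-Type': 'audio/mpeg'});
      mp3Stream.pipe(res); // MP3-encoded output for browsers that can play it
    } else if (req.url == '/stream.ogg') {
      res.writeHead(200, {'Content-Type': 'application/ogg'});
      oggStream.pipe(res); // same audio encoded as OGG for the rest
    }
  }).listen(8000);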

Ok, so moving away from the decoder and into the web server, we have quite a few things to look at. The second most important feature after streaming the audio is showing the current track being played, and that’s handled with a really cool browser feature called WebSockets, which is basically a thin connection that the server keeps with each of its clients, sending them messages whenever needed. With that in our hands, it becomes really easy to implement real-time track updates: we just need to handle an event (called a ‘metadata’ event) on the proxy that receives information from the radio server that a new track is playing, and spread that out to all clients connected through WebSockets, which is all done through the Faye package.
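
The gist of it with Faye looks something like this (a sketch: ‘server’ is the http.Server instance, ‘stream’ is the icecast-stack stream, and the channel name and parseMetadata helper are made up for the example):

  var faye = require('faye');

  // attach a pub/sub endpoint to the same HTTP server that serves the audio
  var bayeux = new faye.NodeAdapter({mount: '/faye', timeout: 45});
  bayeux.attach(server);

  // when the radio server reports a new track, broadcast it to every connected browser
  stream.on('metadata', function(metadata) {
    bayeux.getClient().publish('/track', {title: parseMetadata(metadata)});
  });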

Another pretty cool thing that was implemented is the stream reconnection feature, where the proxy will go into standby mode when the input source disconnects from the radio server (thus closing the stream), until another source connects and a new stream is established. Node makes this extremely easy to handle since everything in it is event-driven.
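
In sketch form, assuming a connectToRadio function that (re)opens the connection to the radio server:

  // when the input source disconnects, the radio server closes the stream:
  // go into standby and retry every few seconds until a new source shows up
  stream.on('close', function() {
    setTimeout(connectToRadio, 5000);
  });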

But users don’t care about all this crap, so we need to deal with the front-end too. Unfortunately HTML5 audio players will only go so far, so I decided to use an awesome jQuery player called jPlayer, which wraps the audio API (instead of relying on the native HTML5 audio controls) in a very nice looking player. It also falls back to a Flash player for old browsers, and all of this is handled through a common API, so all the JS you write for the player will work no matter what.

The result

You can check out what became of all this at http://mixradio.fm. “BUT DOES IT SCALE?”, you ask. Well, I haven’t managed to get a lot of people connected to it at once since I finished writing it like 2 days ago, but I did manage to get 10~15 people on simultaneously and nobody complained, so I guess we’re off to a good start.

I plan on adding more features to the radio such as a chat (for song requests) and a track history (using a database), so stay tuned, more things to come. And, of course, since the project is open source, feel free to contribute; I know very little about audio encoding and already stated that I am a Node noob, so the code is far from perfect and could use an expert hand. Also, if you happen to be a DJ yourself, I’ll gladly stream your tracks, just hit me up at pedro@pedromtavares.com.

Call to Arms: Universitas!

As I’ve mentioned in previous blog posts, I’ve been spending a lot of time working on my side project lately, which I’m glad to announce has reached its first releasable version, which you can check out at universit.as.

The thing is, it’s ‘releasable’, but it’s too basic: it has its major features, which I consider to be appealing, but it lacks those small details that make a platform addicting and that make it clear to everyone what it’s all about. So after reaching this first usable version, I decided it was time to ask for help. I alone won’t be able to take this platform to the maturity it deserves, and since its code is all open source anyway, why not make a huge call to arms to all Rails developers and designers out there?

I did my best to ‘sell’ the application to users by describing it as much as possible on its homepage; now I’m gonna convince you on why you, as a Rails developer or designer with some spare time, should help me out building this thing and make it YOUR side project too.

The Tech

All of us geeks love hearing about tech, so let’s get that out of the way first. Universitas is a Ruby on Rails 3 application running on top of NGINX, Unicorn, Ruby 1.9.2 (on RVM) and MySQL, all on a Linode Ubuntu 10.04 box with its code hosted on GitHub. Some of the gems include Devise, OmniAuth, InheritedResources and CarrierWave. RSpec is being used for unit testing, Steak for acceptance testing (I started using it before the Capybara update) and Factory Girl for factories. Amazon S3 is also being used for storage, since one of the project’s main features is document uploading. Oh, and let’s not forget about HAML for presentation (haters gonna hate).

Use it as your playground

I use Universitas to test out new stuff. Take a look at the application Gemfile and see what gems are being used; I try my best to test out every new thing (that fits the context of the application, of course) that comes out on news channels such as Railscasts, Ruby5, RubyInside and so on. You probably don’t get to use all of these new things on your job, so here’s your chance to have a live production app where you can just go wild and see if that gem you heard about is really as cool as they say.

Use it as a practice ground

Sometimes you need to deliver fast because you have deadlines. We’ve all been there. And sometimes, when in a hurry, we write some code which we’re not really proud of, but that gets the job done nevertheless. We also don’t experiment with new coding techniques or break existing legacy patterns, because you shouldn’t be practicing at the risk of being wrong; you should do what you know will work.

Well, since we have no deadlines in Universitas, you don’t have to be in a hurry, so you can just write code, refactor it to look the best it can, and ultimately try out new things that will raise your skill level as a programmer, which is the whole point behind a side project.

Go wild with your ideas

As I said, I am only one person, and I can’t think of everything, so if you have tons of cool ideas about the project, I don’t see why we shouldn’t put them on the table and start working on them. The project is extremely green and in need of more minds to polish it.

I’ve created a public project on Pivotal Tracker for Universitas so you guys can check out some features I thought of after this first release. Anyone in the project will, obviously, be able to add stories and start working on any story they want.

Be a part of something useful

Universitas isn’t just for kicks, it has a solid idea behind it which I believe is of great use to lots of people, since we all know too well that Google Groups does a poor job doing what it does and that someone could do much better. Let that someone be us. I think a level of seriousness and vision is needed for everything, and if I wasn’t serious about this, I wouldn’t be willing to pay all the necessary bills alone (such as S3 storage, server and domain (which was NOT cheap)). Last but not least, the ‘big picture’ behind Universitas is to become a huge information reference: the place you go when you want to learn something and build a solid community around something you know about.


If you got interested (if you read all the way down to here then you probably did), drop me a line at pedromateustavares@gmail.com so we can get you set up. I’m not too picky, but I need people with at least some experience in Rails, who have built 2 or 3 Rails apps and know how the flow goes. The project is simple enough to pick up in a couple of hours, so the time between setting it up in your environment and starting to work on a feature will be minimal, and I am online almost all the time to answer any questions and help out any way I can.

So, what are you waiting for? To arms!

Getting Your Rails 3 App’s Routes at Runtime

Recently on my side project I came across an interesting problem: how could I get all the route paths in my Rails 3 app at runtime? I needed a way to get this information because the app uses a record’s attribute to directly set the path to it through a friendly URL, such as http://universit.as/ruby, so if a user created a record called ‘dashboard’, that would conflict with the ‘dashboard’ path and always show the dashboard page instead of the intended record page. As you can see, this is a terrible security flaw if we consider more serious scenarios.

Back to the initial problem, I figured I could work with a result similar to what the well-known “rake routes” prints out, so there I went to the Rails source code to see how it worked: https://github.com/rails/rails/blob/master/railties/lib/rails/tasks/routes.rake

After looking at it for 5 seconds or so, we can immediately notice how easy it is to get the app’s routes:

all_routes = Rails.application.routes.routes

Although it seems like loading the whole app’s route mapping would be slow, it actually isn’t, because routes are only loaded once when the Rails environment is loaded (when the server starts), so you basically get this for free.

So after mapping that array to the route’s path (you can get other attributes too, check the Rails source link) and doing some string manipulation, I managed to get a pretty satisfying result:

Rails.application.routes.routes.map{|r| r.path.spec.to_s.split('/').third}.compact.uniq
# currently returns ["profile", "users", "users(.:format)", "groups", "groups(.:format)", "documents", "documents(.:format)", "updates(.:format)", "updates", "authentications(.:format)", "authentications", "home(.:format)", "about(.:format)", "track.js(.:format)", "textile_guide(.:format)", ":id(.:format)", "auth", "rails"] 

It’s not perfect but it’s definitely something I can work with on a validation like so:

validates :name, :uniqueness => true, :presence => true, :exclusion => {:in => lambda { |u| return Rails.application.routes.routes.map{|r| r.path.spec.to_s.split('/').third.try(:gsub, /\(.*\)/, '')}.compact.uniq}}

Hope this small tip saves the next dev some time (looking at the Rails source wasn’t my first action either, although it should have been).

Endless Page Scrolling with Rails 3 and jQuery

We are all used to common content pagination, where you have tons of links pointing to other pages with more results. The main reason behind this is performance; after all, you’re not mad enough to render tons of records at once and slow down your application on each gigantic request. But then we come to a discussion about usability: wouldn’t it be a better user experience to load more records asynchronously as the user scrolls down, instead of requiring him to click a link and re-render the entire page? Especially when it comes to things such as feeds, where people are already scrolling down a ton of content and wouldn’t like to hit any sort of ‘STOP’ sign on their scrolling.

After spending some time dealing with this on my side project, I came up with a nice and simple endless page scrolling solution with jQuery and Rails 3. I created an example application for a more practical view on our problem and hosted all its code at this Github repository for anyone to check out.

First off, the whole endless scrolling Javascript front-end is a courtesy of this plugin, although it lacks a huge improvement which I’ll talk about later, so in case you want to use it, I recommend downloading my version instead.

On the back-end side of things, I decided it was better to ditch pagination plugins such as will_paginate altogether and load records based not on an ‘offset’ parameter, but on a ‘last timestamp’ parameter. This way, if the feed happens to update while you’re scrolling down, you won’t get duplicated results at the bottom due to the database offset pushing older (and already rendered) records down, thus loading them again. Also, it’s important to somehow stop the endless scrolling when there are no more results to show, so the user doesn’t spam our server with useless requests. So we basically need to manage 3 things: appending new records to our existing list through AJAX, updating the ‘last timestamp’ parameter somewhere on each re-render, and knowing when to stop asking for more.

Let’s get down to the code, starting with the JS:

$('ul').endlessScroll({
  fireOnce: true,
  fireDelay: 500,
  ceaseFire: function(){
    return $('#infinite-scroll').length ? false : true;
  },
  callback: function(){
    $.ajax({
      url: '/posts',
      data: {
        last: $(this).attr('last')
      },
      dataType: 'script'
    });
  }
});

Assuming we have a UL with a fixed height, an overflow:auto and a ‘last’ attribute, this code will make requests to the specified URL on each scroll event (you can configure the scrolling distance with the bottomPixels property) and stop making them once we no longer have an #infinite-scroll div. The original plugin would not check for this ceaseFire condition on EACH scrolling event, which is crucial here (or else, what’s the point in having it?), and that’s why using my patched version is key. This is how our page should look to be ready to receive the upcoming updates:

<% unless @posts.blank?%>
  <ul class='list' last="<%=@posts.to_a.last.created_at%>">
    <%=render :partial => "post", :collection => @posts%>
    <div id="infinite-scroll"></div>
  </ul>
<% end %>

Now we move to the back-end, where we need to respond with new records based on that ‘last’ timestamp attribute and remove the #infinite-scroll div in case the record collection comes back empty.

###### - Model (post.rb)
class Post < ActiveRecord::Base
	def self.feed(last)
		self.where("created_at < ? ", last).order('created_at desc').limit(5)
	end
end

###### - Controller (posts_controller.rb)
respond_to :html, :js
def index
	last = params[:last].blank? ? Time.now + 1.second : Time.parse(params[:last])
	@posts = Post.feed(last)
end

###### - View (index.js.erb)
<% unless @posts.blank? %>
	$('.endless_scroll_inner_wrap').append("<%=escape_javascript(render :partial => 'post', :collection => @posts)%>");
	$('ul').attr('last', '<%=@posts.to_a.last.created_at%>')
<% else %>
	$('#infinite-scroll').detach();
<% end %>

Notice how we append the post partial to an .endless_scroll_inner_wrap div instead of the original UL; that’s because the endless-scroll plugin creates this new div wrapping our original one whenever we scroll it, so we need to work around that as well. The model/controller logic is pretty dull, no need to explain it further; just make sure to always pass a future timestamp when there isn’t a parameter yet, so that the latest (up-to-the-second) updates still show.

Well, that basically covers it. The idea behind this post is to be a quick tutorial, so if you’re a Rails beginner I highly recommend you download the entire repository code and check it out (although I pasted most of it here already for explanation’s sake). Be sure to give some feedback if you have a more robust solution; this was my first try at this subject and I’m sure more experienced developers out there must have better stuff to show :)

My Development Environment in 7 Items

Machine/OS

I recently acquired a 13″ MacBook Pro which I currently work on, combined with a 19″ monitor that was used on my desktop machine. Working with two screens is really awesome, aside from the fact that I manage to lose the mouse pointer all the time and that I forget to switch apps before executing commands (causing some collateral damage), but things will get better once I get more used to it.
My desktop computer also has a full work environment on an Ubuntu install in case the MBP melts down, but since that’s unlikely to happen I use it only for gaming and mass file storage (can’t just leave aside 750GB of space).
Aside from the huge hassle of installing MySQL with Homebrew and having to manually create a root user, getting everything set up on the Mac was quite easy, that is, after having to download and install Xcode, of course.

Editor/IDE

In my desktop days I was a huge fan of a recent project called RedCar, which is a very light editor extremely similar to TextMate, which I now use on the Mac. I would have stuck with RedCar (since it works on all OSs), but it crashed sometimes for unknown reasons, so I prefer to stay with TextMate until RedCar is further developed. I think the feature I liked the most about it was the Ruby syntax correction plugin, which isn’t all that relevant but avoids problems due to mistypes.

Recently I started reading some stuff on MacRuby and using Xcode along with Interface Builder for it. Since everything related to Mac development integrates so well with those tools, I see no reason not to use them.

Terminal

My terminal is the same as it shipped; I haven’t yet come across anything that would make me feel the urge to change it.

Browser

Like most people, Chrome for common usage and Firefox for debugging/other development reasons (especially AJAX). I even gave Firebug for Chrome a try, but it was crap; still, like it or not, Chrome is way faster so I’m sticking with it.

Software

Four applications open at startup: Chrome, Twitter (the Twitter Mac app), Adium and Skype. I occasionally use Office for Mac and a free app from the App Store called Remind Me Later, which is really simple and useful.
I browse the Mac App Store almost every day looking for something that I truly need, and aside from the two already mentioned, I haven’t found others cool enough to be worth the cost.

Source Code

Not much to say, Git/GitHub for the win. I guess that’s expected from a Rails dev nowadays, huh.

Music

Despite not liking it much, I use iTunes since I can’t use Winamp on the Mac, but I guess (and hope) it’s a matter of getting used to it.

Being a huge fan of psychedelic trance, I’ve checked out PsyPlanet.org almost every day (for the last 3 years) to see if anything new comes out worthy enough to get into my music selection, which, by the way, is shared with almost 20 people through Dropbox, who in turn share their favorites too.


Since this blog post was suggested to me by @mauriciojr, I now pass it on to @diofeher, @jeffersongirao and @flaviogranero.

Quick Tips for the Casual jQuery Developer

So the latest technical book I’ve been reading is Learning jQuery by Jonathan Chaffer and Karl Swedberg. The book is very interesting and comes packed with a bunch of neat little tricks that come in handy for a casual jQuery developer. With that said, I decided to, again, take the most interesting parts and write them down for later reference.

Unlike the book, I am assuming all readers know the basics of jQuery (thus, Javascript), so don’t expect to learn the whole thing here; in case that’s what you were after, go search for some basic intro tutorials or buy the book. It’s worth mentioning that the book also covers some more advanced topics such as creating shufflers, rotators, plugins, etc., but the main goal here is to be practical and know only what’s necessary for our needs (thus, casual); for anything above that we can always count on a number of plugins, some of which I’ll show at the end of the post.

For anyone willing to see any of the code that is being presented here in action, I created a small app separated into the same sections as the post containing the exact same code. Please forgive my poor design :)

Since this will be more of a reference guide (to myself and hopefully others), each section below covers one part of the library.

The first and most basic thing we do in jQuery is supply a selector to the $() factory function with what we desire to manipulate, so let’s start simple with some of them:

$('#items > li:not(.sold)').addClass('highlight'); // will get all li elements that are direct descendants of the #items element and do NOT have the 'sold' class

$('a[href^="mailto:"]').addClass('mailLink'); // will add a specific class to all links that start (^=) with 'mailto:'. The [] syntax works with any attribute;

$('a[href$=".pdf"]').addClass('pdfLink'); // same as above, but gets all links ending ($=) with '.pdf'

$('tr:odd').addClass('odd'); // useful for alternating table rows, tr:even could also have been used.

$('td:contains(highlight)').addClass('highlight'); // will add the 'highlight' class to all td elements containing the 'highlight' text. This selector IS case sensitive.

$(':radio:checked').addClass('checked'); // easily handles checked radio buttons.

Apart from the selectors themselves, we have a method which works much like a selector, but has the added power of accepting a function and filtering the matched set based on what that function returns:

$('a.last').filter(function(){
  return this.hostname && this.hostname != location.hostname;
}).addClass('external');

The example above will return (and style) all links that have a domain name (excluding mailto links) and whose hostnames are different from our current one, something impossible to do with selectors alone.

Selectors Examples




Although immensely powerful, selectors sometimes aren’t enough, and we need some DOM traversal methods to give us the flexibility of going up and down the DOM tree:

$('td:contains(highlight)').next().addClass('highlight'); // will style the element right after the selected one

$('td:contains(highlight)').nextAll().addClass('highlight'); // will style ALL elements after the selected one

$('td:contains(highlight)').nextAll().andSelf().addClass('highlight'); // will style ALL elements after the selected one AND the selected one

The next() and nextAll() above have their counterparts prev() and prevAll(), behaving as you might expect.

We can also work with parents, children and siblings of an element:

$('#some-item').parent().addClass('highlight'); // will get the element right on top of the selected element in the DOM tree

$('#categories').children().addClass('highlight-blue'); // will get all elements directly below the selected element in the DOM tree

$('#products').children('.available').addClass('highlight-blue'); // you can optionally add a selector to be more specific on what you get back.

$('#posts').siblings().addClass('highlight-green'); // will get all elements at the same level as the selected element in the DOM tree. You can also add a selector here.

DOM Traversing Examples



Let’s move on to events, which allow us to respond to user interactions. In jQuery we handle events quite easily by doing what we call ‘binding’ a handler to an event on a certain element set (generated by a selector), so let’s take a look at a couple of compound methods which can take two or more event handlers:

// will highlight on the first click, then go back to normal on the second, then highlight again on the third, and so on.
$('#item').toggle(function(){
	$(this).addClass('highlight');
}, function(){
	$(this).removeClass('highlight');
});
// this will achieve the same as above
$('#item').click(function(){
	$(this).toggleClass('highlight');
}); 

// will execute the first function when the mouse is over the element, and the second one when it leaves
$('#item2').hover(function(){
	$('#item2').addClass('highlight');
}, function(){
	$('#item2').removeClass('highlight');
});

We could also create our custom events and attach handlers to those, binding them through the use of bind() and triggering through the use of trigger() (somewhat obvious):

// assuming we have an outer fieldset with a bunch of text fields in it and a submit button 
$('fieldset').bind('verify', function(){
	$(this).children('input[type=text]').each(function(index,child){
		if ($(child).val().length > 5){
			$(child).addClass('highlight');
		}else{
			$(child).removeClass('highlight');
		}
	});
});
//notice how we can trigger our custom events in various ways
$('#submit').click(function(){
	$('fieldset').trigger('verify');
});
$('fieldset input[type=text]').keyup(function(){
	$('fieldset').trigger('verify');
});

To avoid getting caught in traps caused by event bubbling, we can either control the event target by manipulating the event object itself or by simply halting event propagation:

// assuming we have a button inside of a div, we want to ensure that the event only happens when we click on the div itself.
// first, using the event target
$('#outer').click(function(event){
	if (event.target == this) {
		$('#outer .button').toggleClass('hidden');
	}
});
// second, using the stopPropagation technique
$('#outer2').click(function(){
	alert('You clicked the div!');
});
$('#outer2 .button').click(function(event){
	alert('Button clicked!');
	event.stopPropagation();
});

Event delegation is a powerful technique that should be used as often as possible and we can achieve it by assigning an outer element an event handler and using the target attribute on the event to determine what should be done:

// assuming we have an outer #items div with a lot of single items inside with ids like #item-34, #item-23, etc
$('#items').click(function(event){
	var foo = event.target.id.split('-')[0];
	if (foo == 'item'){
		alert('An item was clicked!');
	}
});
// of course we could use simpler ways to do this, but you get the idea: every item will report to the #items handler, which will then decide what to do.

By making use of Javascript’s anonymous functions we can achieve some peculiar behavior with event handlers, such as binding and unbinding them at runtime by passing these functions as callbacks:

var alertHandler = function(){
	alert('hi');
}
$('#some-item').click(alertHandler);
$('#some-item .sub').click(function(){
	$('#some-item').unbind('click', alertHandler);
	$(this).addClass('highlight');
});
$('#restore').click(function(){
	$('#some-item').click(alertHandler);
});

Events Examples



Effects are one of the coolest parts of jQuery; with very little effort we can get some really eye-candy results. Starting with some basic effect methods:

// the peculiar thing about hide() is that it remembers the display type (block, inline, etc) before turning it to 'none', whereas show() gets that remembered value and applies it back.
$('#button').toggle(function(){
	$('#div1').hide('slow');
	$('#div2').show('slow');
}, function(){
	$('#div2').hide('slow');
	$('#div1').show('slow');
});
// now with fading
$('#button2').toggle(function(){
	$('#div3').fadeOut('slow');
	$('#div4').fadeIn('slow');
}, function(){
	$('#div4').fadeOut('slow');
	$('#div3').fadeIn('slow');
});

Just like toggle() is a compound event handler, we have slideToggle() (and most recently fadeToggle()) as a compound effects method:

$('#link').click(function(){
	$('#extra').slideToggle('slow');
});

Moving on to the fairly complex animate() method (whose documentation I suggest you take a look at), we can chain a bunch of animations together and get one compound result, or we can queue some so they happen in the order we specify:

// the single compound result
$('#animate').click(function(){
	$('#block').animate({height: '+=20px'}, 'slow')
	.animate({width: '+=5px'}, 'fast')
	.css('background-color', 'grey');
});
// queuing
$('#animate2').click(function(){
	$('#block2').animate({height: '+=20px'}, 'slow')
	.animate({width: '+=5px'}, 'fast')
	.queue(function(){
		$(this).css('background-color', 'grey')
		.dequeue();
	});
});
//now the coloring happens only after the width grows.

Another way we could achieve this queuing is by using callbacks on animate():

$('#animate3').click(function(){
	$('#block3').animate({height: '+=20px'}, 'slow')
	.animate({width: '+=5px'}, 'fast', function(){
		$('#block3').css('background-color', 'grey');
	});
});

Effects Examples




The book recommends a few very useful plugins that can handle a lot of day-to-day tasks for us with ease:

  • Form Plugin: makes form submission with AJAX extremely easy.
  • jQuery UI: a whole library of extremely cool widgets and interaction components.
  • Autocomplete: provides a list of possible matches as the user types in a text input.
  • Validation: a solution for client-side validation on forms.
  • Masked Input: makes it easy for users to enter data in specific formats, such as dates, phone numbers, etc.
  • Table Sorting: for client-side sorting of table data.
  • Jcrop: for client-side image cropping.
  • FancyBox: a cool way to add overlayed information to your page (instead of using pop-ups).
  • Highcharts: my personal recommendation, this is an awesome plugin for chart generation, giving you tons of options and thorough documentation. In fact, I even recommend taking a look at its demo page just for kicks.

As for other parts of the jQuery API such as DOM manipulation and AJAX support, there wasn’t anything that stands out enough to be worth a mention; a quick read of the documentation should be enough, since all the DOM manipulation methods and the standard $.ajax() method are pretty straightforward. Also, if you happen to be a Rails developer, these 2 subjects are abstracted away by remote form tags and RJS templates, so in most cases you wouldn’t even need to look into the jQuery documentation for this anyway.

That’s about it folks, if you have any other tips worth pointing out for us casual jQuery devs please comment. Cheers!

Notes on Behaviour Driven Development

So lately I’ve been reading the beta version of the RSpec Book, which was supposed to come out last year, and after reading the philosophical chapter 2, which talks exclusively about Behaviour Driven Development, I decided to take some notes and post them. Like my other reviews, I’m writing this mostly as a quick reference guide that I can check from time to time to reinforce some concepts, and I suggest everyone who decides to read this do the same.

First there’s an introduction on agile methodologies and how they came to be, so if you’re just interested in the BDD part, feel free to skip ahead to it.

The chapter starts by mentioning how ‘traditional projects’, that is, those that do not apply Agile methodologies, fail. Usually, it’s because of one (or more) of these reasons:

  • Delivering late or over budget
  • Delivering the wrong thing
  • Unstable in production
  • Costly to maintain

Ok, so, why? As we know, most software projects go through the sequence of Planning, Analysis, Design, Code, Test and Deploy. The biggest reason for all this organization is so we can avoid big changes later in development, which may screw up the whole project or at least carry a huge repair cost. We don’t want that. So we make sure everything is ‘perfect’ by writing gigantic sets of documents to specify what each little functionality does, predicting every single detail in the system.

A lot of people manage to work like this, and others try to improve the process by having wonderful ideas such as creating review committees and establishing standards and whatnot. Even so, mistakes do manage to get by, and when something blows up in the testing phase (where everything is supposed to be unicorns ready to go into production), such as an entire feature that was overlooked, then everyone goes mad and the whole process needs to be redone and reviewed etc etc etc.

As you can see, what seems to make the process not work is the way the process is being executed in the first place! We can understand why we’re taught to do things this way: people simply thought, “hey! here’s something brilliant, how about we apply the same concepts of civil engineering to software and turn it into ‘software engineering’?!?!”. It makes sense to spend a huge amount of time thinking about how to build an entire building before actually starting to build it; realizing that you needed an extra pillar to hold the building after getting the third floor done is not cool. The difference is that software, and I quote the book on this, ‘is soft’: it’s supposed to be malleable to change and not stay the same way forever (like a building does), so you can see how we’ll be needing a redefinition of things.

That’s where Agile comes in. First, read the Agile Manifesto. Read it like 50 times. In short, it values ‘doing software’ over mostly documenting it. One of the central principles of agile development is the use of iterations, which work like mini-projects: instead of delivering a final grand piece of software, you deliver small pieces of it as development evolves. This helps us solve those four problems traditional projects encounter:

  • No longer delivering late or over budget: iterations help us predict how long we’re gonna take based on time spent in each iteration vs the number of iterations we defined.
  • No longer delivering the wrong thing: since we’re delivering working software from time to time we can get feedback from our stakeholders and change anything that was requested with ease.
  • No longer unstable in production: by delivering on each iteration we are making sure that our software is constantly working all the time.
  • No longer costly to maintain: after the first iteration everything becomes maintenance, so the team is always worried about keeping everything working continuously.

But not everything is marshmallows: agile development is, well, hard. Keeping a team organized enough to launch software every week or so is tense. The good news is that since agile is not a recent practice, most of its problems have answers:

  • Outcome-based planning: all we know is that everything is bound to change, so we need to find a way to estimate despite all this uncertainty.
  • Streaming requirements: creating large documents of requirements won’t be able to keep up with our new delivery process, so we need a way of describing features more rapidly.
  • Evolving design: with the project changing in each iteration we’ll need to always keep redesigning our software as it shapes up.
  • Change existing code: as the project changes so does the code, and being able to refactor the code and add new functionality to it without much difficulty is essential.
  • Frequent code integration: everything needs to keep working together.
  • Continual regression testing: as new features are added and code is refactored we need to make sure all the work already done keeps on working and the tests keep on passing.
  • Frequent production releases: all the previous aspects are behaviours we can adopt ourselves, but releasing working software frequently requires co-ordination with the downstream operations team, who have to keep a formally controlled environment in place. Still, we need this, and if we can’t get it right then everything else doesn’t really matter, because software only starts making money when it’s in production.
  • Co-located team: for everything to work you can’t afford to waste time on office bureaucracy; everybody even remotely connected to the project needs to be in touch for easy communication.




And what about BDD? Well, as the book describes:

Behaviour-driven development is about implementing an application by describing its behaviour from the perspective of its stakeholders.

To help us understand the perspective of a stakeholder we use a technique called Domain Driven Design, and keep in mind that a ‘stakeholder’ is anyone interested in the project. A huge premise behind BDD is ‘writing software that matters’, which is exactly what we accomplish by viewing things from a stakeholder’s perspective, because that way we know that what we’re doing has value. To help us focus on what’s really important, BDD follows three principles:

  • Enough is enough: simplicity, the art of maximizing the amount of work not done, is what we aim for.
  • Deliver stakeholder value: if something is not delivering value nor is it helping you deliver value, forget about it and aim for something important (to the stakeholder!).
  • It’s all behavior: whether you’re designing or coding, remember to always keep in mind that we’re dealing with behavior, it’s not about what something is, it’s what it does.

Cool, we know all the theory behind BDD, so how do we apply that theory? Since BDD has a major focus on stakeholders, we start from them. The first thing to do when creating a project is gathering the stakeholders and establishing a vision (or purpose), which will be the overall goal of the project. Of course this is something extremely high level, but it keeps us reminded of what we’re aiming for in the long run. For instance, the vision of the project I’m developing is to “integrate the CRM to our specific business needs”.

The book also uses the concept of incidental stakeholders, which are those who will help solve the core stakeholders’ problem. In short, core stakeholders define a vision, and incidental stakeholders help them understand what’s possible, at what cost and with what likelihood. After defining the vision, we continue working with the stakeholders to define goals (or outcomes), which are tangible achievements that we’ll need to address in order to know we’ve reached our final purpose. To keep these outcomes objective enough we can use a set of characteristics called SMART, which stands for Specific, Measurable, Achievable, Relevant and Timeboxed.

To get to these goals we’ll need solid software, and to describe what the software will do to achieve them we use feature sets (or themes), which are composed of, you guessed it, features, which go down to the level at which we work day-to-day. In a nutshell, a feature adds value to a feature set, which is included in one of the goals that achieve the overall purpose of the project. This way we are certain that what we’re doing is connected to the big picture we’re trying to solve.

On the more practical side of BDD, various roles work together to get an amount of work done, so a BDD delivery cycle works something like this: a stakeholder and a business analyst discuss requirements in terms of features that make sense to the stakeholder (DDD helps a lot here), probably breaking them down into even smaller chunks called stories (which take no more than a few days’ work). Next, the analyst talks to a tester to define each story’s scope, remembering to do just enough to get it done. Finally, the developers implement only enough to satisfy those scopes, and no more (this is where Red, Green, Refactor comes in).

Getting a bit more specific on the development phase (which is the one that matters most to all of us devs anyway =P), BDD emphasizes the importance of automating these scenarios and that they should still be understandable by stakeholders (Cucumber is largely used here). The developers should focus on coding by example, which is basically TDD with a laser-sharp focus on behavior, so you write a code example by describing the behavior you want and then implement just enough code to make it work (this is where RSpec, for Rubyists, comes in) and we iterate through this process until all scenarios are done. This way we have working scenarios that we can demonstrate to the stakeholder and the story is done.

Since we work with stories (chunks of features) on a day-to-day basis, it’s important to know them deeply. A story is made up of 3 things:

  • Title: so we know what story we’re talking about.
  • Narrative: tells us what the story is about and should at least include a type of stakeholder, a benefit and a description of the feature. There are 2 formats people usually follow to narrate their stories. The first one is the Connextra format: As a |stakeholder|, I want |feature| so that |benefit|. The second is a similar format that focuses more on the benefit: In order to |benefit|, a |stakeholder| wants to |feature|.
  • Acceptance criteria: so we know when we are done. In the case of BDD, the acceptance criteria take the form of a number of scenarios made up of individual steps. (take a look at Cucumber for a more detailed specification of this)

It’s usually good practice to define stories before actually starting an iteration, and to make sure all the stories use the language of the stakeholders so everyone shares a consistent vocabulary.

Summarizing the whole process: we start a project by understanding its purpose and nailing it down into smaller, doable features and then into stories and scenarios, which we automate to keep us focused on writing software that actually matters to the stakeholder. These automated scenarios become acceptance tests to ensure the application does everything we expect, and BDD thrives in supporting this model of work by making the process of automation quite easy while staying clearly understandable by stakeholders.

BatchBook CRM Integration With Rails

The company I currently work for recently asked me to develop an integration application (in Rails) with BatchBook, an excellent cloud-based CRM service. The purpose of this integration was to adapt the CRM to our specific business needs, which require a bit more functionality than the CRM alone can offer. Fortunately they provide a mature API and an easy to use Ruby library, so it’s quite simple to get started.

After getting used to their API and the Rails way of dealing with external objects via ActiveResource, we decided to share our achievements by open-sourcing the project, which can be found on GitHub.

The project is an entire Rails 2.3.5 application: just download the source, configure it to integrate with your BatchBook account and you have an entire application fully integrated to BatchBook. All of the installation and configuration are explained in the project’s wiki, along with other project technical details.

We split the integration process into three steps:

  1. Our first goal was to create some simple custom reporting tools, which led to some overlapping functionality with the ‘stock’ BatchBook service. The point was not to recreate features already available in the ‘stock’ version of BatchBook, but simply to learn the BatchBook API and extend it based on our needs.
  2. The second was to automate and combine a sequence of tasks that simplify the processes of our sales team, for example: converting a lead (prospect) to a customer, assigning ownership, updating tags/supertags and creating to-dos. We are currently running a basic version of this.
  3. The third and most involved task will be integrating BatchBook CRM with our custom quoting web application. Currently, BatchBook has no built-in functionality for creating traditional quotes using an inventory list of our products.
    We don’t want to add too many features and make it overly complex (SalesForce anyone?). Instead, we plan on building a suite of custom tools, integrated just the way we need them.

On a more technical note, our application features some great performance improvements on top of Rails’ ActiveResource to ease the burden of making external requests. Some of these enhancements include:

  • Object caching: the number of contacts can grow quickly, even for smaller companies, and a constant stream of requests for many objects leads to awful delays. We overcame this by writing a caching system that reduces page loading time from minutes to milliseconds (see the sketch after this list).
  • Request limitation: a request asking for thousands of objects is a recipe for trouble, as it will often time out. We limited the maximum number of objects per request and combined the results in the end, preventing error pages for the end user while still getting the job done.
  • Pagination: pagination is usually an easy feature to implement in a database-oriented application, but things get a bit more complicated with a service-oriented application like ours. We solved this by developing a simple pagination feature that works much like the widely-used Rails will_paginate plugin.
  • Integration testing: with the help of the Dupe gem, we wrote a test suite for our entire application to guarantee consistency and expected behavior using Cucumber.
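As a rough sketch of the caching idea (the actual implementation in the project is more involved, and the names here are illustrative), wrapping the remote call in Rails' cache store avoids hitting the API on every page load:

class Person < ActiveResource::Base
	self.site = "https://yourcompany.batchbook.com/service"

	# Serve contacts from the cache, hitting the remote API only
	# when the cached entry has expired.
	def self.all_cached
		Rails.cache.fetch('batchbook/people', :expires_in => 10.minutes) do
			find(:all)
		end
	end
end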

Although we've developed quite a bit already, the application is still evolving as we learn more, as BatchBlue extends the API and as our business requirements change. This is just the start – stay tuned for upcoming features and please give us your feedback.

Reach us via email at feedback@usedcisco.com or Twitter at @usedciscodotcom with any issues you run into or extensions you create.

[DPR] – Convention Over Configuration

This post is part of a series of reviews on the book Design Patterns in Ruby. Check out the Introduction post for a full table of contents along with some generic principles regarding Design Patterns.

Convention over configuration was a concept largely introduced by Ruby on Rails, and is certainly one of the keys to its success. The idea behind it is simple: instead of working around some central piece of heavy configuration defined by the user, why not introduce a simple, default way to do things and put most of the configuration aside? Instead of giving users the ability to define a rulebook for your system, you define the rulebook when designing it, except that this rulebook is usually a set of standards people would follow anyway.

Obviously you can't predict every single way people are gonna use your system, but you can predict the simplest, most common cases and make those easy to work with. Of course, in case a user wants something more complicated, he may have a harder time figuring it out or configuring it himself. This concept is largely used in GUIs to favor user accessibility: in browsers you can easily mark a page as a favorite, but something like exporting those favorites to a file requires a bit more work.

There are a few guidelines that GUI designers follow to create easy-to-use interfaces, which the Convention Over Configuration pattern focuses on applying:

  • Anticipate needs: figure out what users do the most and make it the default way; the first paragraphs made this very clear.
  • Let them say it once: design a convention that lessens the need for repetition. When people tend to always do something in a specific way, that means something, usually that it's the right way to go, so listen and don't ask again.
  • Provide a template: conventions might be a bit overwhelming to a user who just wants to get started, so supply him with something he can start from and pick up the conventions as he goes.

To see all this in action, let’s develop a small application following the convention over configuration principles. To keep it simple, we want to create reports and generate them in a certain format, but we also want to make the application extendable by easily allowing other formats in.

class Report

	def initialize(title)
		@title = title
	end

	# Intercept calls like report.to_html and route them to the
	# matching format class (HtmlFormat, PlainFormat, ...).
	def method_missing(name, *args)
		parts = name.to_s.split('_')
		# Only handle methods that follow the to_* convention.
		return super(name, *args) unless parts.shift == 'to'
		format_name = "#{parts.first.capitalize}Format"
		begin
			format_class = self.class.const_get(format_name)
			format_class.new.format(@title)
		rescue NameError
			puts "Please define #{format_name} class."
		end
	end

end

We're using the method_missing trick here to dynamically look up formatting classes, instantiate them and call the format method on them. So if I have an HtmlFormat class with a format method, to_html should make a call to it:

class HtmlFormat
	def format(title)
		puts "<html><body><h1>Header</h1>"
		puts "<p>#{title}</p>"
		puts "<div class='footer'>Footer</div></body></html>"
	end
end

report = Report.new('My Report')
report.to_html

As you can see, we're defining a convention: formatting classes should be named after the format followed by 'Format', and each of these classes should have a format method that prints the text in its specific format.
Notice how there's no need to configure anything: we don't have a file telling us which formats are supported and which class each one lives in; by following a convention, all of this is gracefully understood by our code.
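And if we ask for a format that hasn't been defined yet, the rescue clause kicks in with a friendly reminder:

report = Report.new('My Report')
report.to_pdf # prints "Please define PdfFormat class."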

Suppose we want to add yet another format; as we've seen, it's as easy as naming the class right and defining a single method:

class PlainFormat
	def format(title)
		puts "Header\n\n"
		puts "#{title}\n\n"
		puts "Footer"
	end
end

report = Report.new('My Report')
report.to_plain

Cool, we have all that functionality working, but it's obviously not gonna live in a single file, so we also need to organize our directory structure by defining a convention for it:
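Something like this (assuming the main file is named report.rb):

report.rb
formats/
	html_format.rb
	plain_format.rb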

Now each format class has its own file under the formats/ directory. Notice how this convention we're defining is something a good engineer would do anyway.
With the directory set up, all we need to do is load those files from our Report class so we don't get require errors when instantiating the format classes:

class Report

	def initialize(title)
		@title = title
		load_formats
	end

	def method_missing(name, *args)
		parts = name.to_s.split('_')
		return super(name, *args) unless parts.shift == 'to'
		format_name = "#{parts.first.capitalize}Format"
		begin
			format_class = self.class.const_get(format_name)
			format_class.new.format(@title)
		rescue NameError
			puts "Please define #{format_name} class."
		end
	end

	# Require every Ruby file under formats/, relative to this file,
	# so each format class is available by the time it's needed.
	def load_formats
		dir = File.dirname(__FILE__)
		pattern = File.join(dir, 'formats', '*.rb')
		Dir.glob(pattern).each { |file| require file }
	end

end

Cool, we have our whole convention set up without requiring a single piece of configuration, but we're still missing the guideline that says we must provide a template, so let's do that. Suppose the user doesn't know (or care) what the convention is and just wants to get started: he could use a scaffolding feature, supply a format as an argument and start coding format logic right away. Here's a really simple way to implement that:

# Usage: ruby format_scaffold.rb <format>
format_name = ARGV[0]
abort 'Please supply a format name.' unless format_name
class_name = format_name.capitalize + 'Format'

# Write a skeleton format class into the formats/ directory,
# following the naming convention.
File.open(File.join('formats', format_name + '_format.rb'), 'w') do |f|
	f.write %Q!
class #{class_name}

	def format(title)
		#Code to format title
	end
end
	!
end

With this code in a file saved as format_scaffold.rb, a call like:

ruby format_scaffold.rb xml

will generate an XmlFormat class with a format method ready to be edited.
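The contents of the generated formats/xml_format.rb come straight from the template above:

class XmlFormat

	def format(title)
		#Code to format title
	end
end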

As you might have noticed, this pattern, along with the DSL pattern, relies heavily on runtime evaluation of code and program introspection to work, both only possible due to Ruby’s dynamism.
