Tuesday, December 18, 2012

Some findings on using Jasmine for DOM feedback with Backbone and RequireJS

In my first attempts to develop frontend interactions with Backbone.js, I discovered some blind spots in my understanding of Backbone views. In particular, I find the different options for organizing views with parent and child views, as well as for attaching views to DOM nodes, difficult. A nice overview of Backbone pitfalls is given here.

As in many programming matters, thinking about outcomes and tests can help to write better code. This is where Jasmine can help in the development of Backbone applications.

First, there are several options to run Jasmine:
  • The most common option for running Jasmine specs is to load a test DOM in your default web browser (from e.g. spec/index.html or spec/SpecRunner.html). There you declare a number of JavaScript dependencies that contain the actual specs, and these are executed as soon as the Jasmine run is triggered with jasmine.getEnv().execute(); (see the sketch after this list).
  • Another option is to run Jasmine specs through rake. This option seems to be very popular in the Rails community, but as I am thinking more towards developing stand-alone frontend applications, I don't want a tight Rails coupling. I could not get rake jasmine to render my specs from the Backbone setup, so I will postpone this approach for a while. Maybe some of you have made more successful experiments?
  • Another interesting way to get feedback from the DOM is to run Jasmine through a headless browser. PhantomJS comes naturally to mind in this context, and an interesting article on it is here.
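
To make the first bullet concrete, here is a minimal sketch of the boot snippet at the bottom of a Jasmine 1.x SpecRunner.html (the reporter wiring may differ slightly between Jasmine versions):

  // bottom of SpecRunner.html: wire up a reporter and trigger the spec run
  (function() {
    var jasmineEnv = jasmine.getEnv();
    jasmineEnv.addReporter(new jasmine.HtmlReporter());

    window.onload = function() {
      jasmineEnv.execute();   // runs all specs that the script tags loaded
    };
  })();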


As a first step towards testing a Backbone application, I wanted to test some basic Backbone view properties. Now, one of the difficulties I found was how to actually load all dependencies into a browser with SpecRunner.html including RequireJS, and then execute some specs.

I found two approaches that work for me:

1. Global declaration of Backbone dependencies

References to jQuery, Underscore, and Backbone can be declared in a global way in SpecRunner.html with plain script tags.
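A minimal sketch (script paths are assumptions; adjust them to your layout):

  <script type="text/javascript" src="lib/jquery.js"></script>
  <script type="text/javascript" src="lib/underscore.js"></script>
  <script type="text/javascript" src="lib/backbone.js"></script>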



This should give you access to the Backbone library where you need it. It works well enough, but it may be useful to load your dependencies more dynamically as your application grows. For this, the next option might be more interesting.

2. Loading Backbone dependencies with RequireJS

When using several dependencies (including custom views, collections, routers, etc.), requiring every module in the global scope of SpecRunner.html results in a lot of editing effort. Here is an attempt to re-use some RequireJS module definitions that could, in principle, be shared between your tests and the real application.

First, to load dependencies with RequireJS, a few additional script tags are needed in SpecRunner.html.
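A sketch of what this can look like (file names and paths are assumptions):

  <script type="text/javascript" src="lib/jasmine-require.js"></script>
  <script type="text/javascript" src="lib/require.js"></script>
  <script type="text/javascript" src="spec/view_spec.js"></script>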





The first line refers to the jasmine-require project by Scott Burch, which helps to require dependencies from within specs. An example of how this can be used with a JavaScript framework is shown here, from the Pattern JS project.

With Backbone this might look like:


  requireDependencies(["underscore", "backbone"], function(_, Backbone) {
    // note: View and view are assigned without var, i.e. globally,
    // so that specs outside this callback can see them
    View = Backbone.View.extend({ tagName: "li" });
    view = new View();
  });

3. Other approaches


Another option to load dependencies with RequireJS was described in this discussion in the Jasmine user group, using testr.js. However, I do not yet understand how this is better than the previous option. (Maybe someone wants to comment?)

Last but not least, there is an interesting Jasmine setup with RequireJS in Peter Toomberg's Shortcut project. This one does not require any additional setup declarations, but I have not looked into it much yet.


Conclusion


Before actually running a successful Jasmine spec for a Backbone view, we need a library to match DOM nodes against expected values. The common library for doing this is the set of Jasmine-jQuery matchers. These allow you to express many things, among them whether the .el property of a Backbone view actually matches a DOM node, as follows:


  View = Backbone.View.extend({ tagName: "li" });
  view = new View();

  it("has el property", function() {
    expect(view.el).toBe("li");
  });


My current setup can be found here: https://github.com/mulderp/backbone-require-test/tree/view_specs



Well, so far my findings. Maybe they are helpful for others. I would be curious to hear what you think, and how you approach testing DOM nodes.

Friday, December 14, 2012

Some new tools for asset heavy Rails applications

Nowadays, mobile browsers and changing use cases for web applications require programmers to understand detailed DOM abstractions (usually HTML5 tags, CSS, JS) as well as APIs that talk to a number of different client setups. Although Ruby-on-Rails has brought us a long way towards easily meeting our business goals, I felt stuck when it came to using Backbone with Rails.

There is Sprockets, and as long as I worked with jQuery and used Twitter Bootstrap as default assets, Sprockets worked nicely: it gives some nice abstractions to bundle external asset dependencies. But if you want to develop your own client-side assets (e.g. Backbone code, or working with a Sass precompiler and Compass), Sprockets has some learning curve, and debugging asset problems is often painful. Also, dealing with client-side problems through a Rails stack is, in my opinion, not ideal.

Now, over the last months, I've found some options for a new toolchain that allows a better combination of client- and server-side programming. Here are my findings:

Rake-Pipeline


I discovered this tool shortly after the great Baruco conference 2012 in Barcelona. After talks by Josh Kalderimis and Konstantin Haase on software development at Travis, it was a nice discovery to see how Travis manages a modular asset repository. The tool that makes this work is rake-pipeline. Here is some background information on rake-pipeline:
  • The Assetfile defines how precompilers, concat, and copy commands can be combined to generate your assets as needed from a bunch of asset source files (see the Assetfile sketch after this list).
  • rakep build reads the Assetfile definition and performs the actions on the sources. It's the asset build step, so to speak.
  • rakep server: when developing your client-side assets, you don't actually need to run rakep build from the command line. rakep server gives you a Sinatra server that nicely serves assets as they change during development.
  • Also, a nice debugging companion for rake-pipeline is the minimalistic Python webserver python -m SimpleHTTPServer, which directly serves all files from the directory you are in (e.g. /public). Quite handy if you just need some server for quick-and-dirty browser debugging and experimentation.
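
As referenced above, here is a minimal Assetfile sketch, close to the examples in the rake-pipeline README (the directory names are assumptions):

  # Assetfile
  output "public"                # where the built assets end up

  input "source" do
    match "*.js" do
      concat "application.js"    # bundle all JS sources into one file
    end
    match "*.css" do
      concat "application.css"
    end
  end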
So much for building assets; next, about serving the data that assets want... we'll move on to:

Rails-Api


Some weeks ago at RuPy in Brno, there was a great melting pot of Ruby, Python and JavaScript programmers, and if you were looking for the lowest common denominator, it might have been JSON and REST. Now, there is some recent discussion, headed by Steve Klabnik, on how to interpret Roy Fielding's ideas for modern Rails applications, but in this context some nice tools are ready for use: the Rails-Api stack (and ActiveModel::Serializers).

The Rails-Api gem removes the Rails ERB templates and Sprockets from your application. This is nice, because your Rack stack becomes lighter, and you can focus on the thing that matters: serving data to clients. From first experiments, Rails-Api combines very nicely with rake-pipeline. As you can see from my demo project, the Rails app just serves JSON to client-side code that is built with rake-pipeline from the /source directory.
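
To give an idea, a controller in a Rails-Api application is a plain controller that just renders data; a minimal sketch (the Post model is made up):

  # app/controllers/posts_controller.rb
  class PostsController < ApplicationController
    # no ERB views involved: the action only serializes data
    def index
      render :json => Post.all
    end
  end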


Backbone.js, Underscore.js and Require.js


Last but not least, for my application design I want to use a JS framework that allows me to structure the interaction with the DOM and with the end-user. This framework is Backbone.js - but maybe first, a step back.

As a Ruby programmer, you think JavaScript has some problems: incompatible browsers with different language implementations, as well as language constructs that quickly leave you on your own. At least part of the language problems are solved by jQuery and Underscore.js (which is reminiscent of Ruby; see the collection and enumerator constructs at underscorejs.org).

For the rest, the Node.js community is an example of disruptive innovation at work; it is especially interesting to see that the JS community nowadays has a modular setup to manage dependencies: require.js. In my view, this will make fancy browser (and maybe one day server) programming fun again.

What you need to know as a Rails programmer is that Require.js injects dependencies where they are needed, and as such prevents problems in the global scope. Additionally, you can inject HTML templates into your JS modules, which is very nice too. I'll need to explore this further, but you can actually take your Rails ERB templates and reuse them 1:1 as Backbone templates where you need them. Some ideas behind this technique are discussed by Thomas Davis, here. A boilerplate for Backbone and Require.js is here. Other nice overviews of Backbone development are here and here (Backbone and Require.js).
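
A sketch of what template injection can look like with the RequireJS text plugin (module paths and the template name are made up):

  // a Backbone view module that gets its HTML template injected as a string
  define(["underscore", "backbone", "text!templates/user.html"],
    function(_, Backbone, userTemplate) {

    return Backbone.View.extend({
      template: _.template(userTemplate),
      render: function() {
        this.$el.html(this.template(this.model.toJSON()));
        return this;
      }
    });
  });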

That's all for now.

Here are some references to my Rake-Pipeline-Rails-Api-BackboneJS-RequireJS experiment. I hope to share some small screencasts soon, to show you why this toolchain is cool. At least for me, these tools make me #happy.

Monday, December 10, 2012

An experiment with Vagrant and Neo4J

The RuPy conference 2012 in Brno was very inspiring! In particular, there were some interesting talks on databases and scalable approaches to web development:

* First, the ArangoDB team from Cologne set the tone for why #nosql matters (see http://www.arangodb.org/ ). The triAGENS team has written a database which mixes elements from MongoDB and graph databases. The shell of ArangoDB looks very clean, and additionally the system is based on C++ (which integrates with the V8 JS engine and MRuby).

* Another interesting talk was by Mitchell Hashimoto. He showed how Vagrant came about, and why using isolated, virtual environments makes sense for web integration. Some slides (not from RuPy) about this are here: slideshare.net/mitchellh/sf-devops-introducing-vagrant

* Andreas Ronge gave a very nice talk on what graphs can do and SQL can't (well, it can, but not nicely...). I can't find his slides from RuPy right now, but there is a great blog on Neo4J here: http://maxdemarzi.com/ Also, these slides are interesting: slideshare.net/andreasronge/neo4jrb

So, ok, coming home to Munich with all these interesting thoughts in mind, it was clear that I had to start playing with graph databases in isolated environments for new kinds of web applications. Fortunately, I had great input for my learning from Jorge Bianquetti and Nathen Harvey.

First, about Vagrant and VirtualBox. It takes a bit of time to download virtual machines, but it's not too difficult to get going. The single most important command might be:

$ vagrant init

This creates an environment for setting up a virtual machine. It's very cool, because now you can set up an Ubuntu, Debian, CentOS, or whatever system, and Vagrant will go ahead, download or copy the VM, and prepare it just so that you can use it.

Ok, not quite, since next you must tell Vagrant where to download the box; you do this in the Vagrantfile, e.g.:


config.vm.box = "opscode-ubuntu-12.04"
config.vm.box_url = "https://opscode-vm.s3.amazonaws.com/vagrant/boxes/opscode-ubuntu-12.04.box" 

It's the standard Ubuntu box from Opscode right now.
Just do a:

$ vagrant up

and you should have an Ubuntu box running.

Well, we were charmed by graph databases, weren't we? Ok, so let's go ahead and add the setup for Neo4J. Some googling shows that there is a Neo4J cookbook here: https://github.com/michaelklishin/neo4j-server-chef-cookbook

Hmm... at this stage, we have actually already decided on chef-solo. There is chef-solo and chef-server, and if you want to understand the difference, I suggest you look here: http://www.nathenharvey.com/blog/2012/12/07/learning-chef-part-2/

Chef-server is the approach you want to use in production. Chef-solo is the approach for quick-and-dirty experiments, like we do here. So, let's assume chef-solo is ok, and we just need to get the cookbook dependencies right. Luckily, we have a tool for this: librarian-chef.

$ librarian-chef init

This gives you a Cheffile. It's similar to a Gemfile if you are used to Ruby.
Let's throw in the Neo4J dependency here:

cookbook 'apt'
cookbook 'neo4j-server', :git => 'http://github.com/michaelklishin/neo4j-server-chef-cookbook'

And now, similar to bundle install, we run:

$ librarian-chef install

Last, but not least, we need to tell our VM that interaction with chef-solo is needed. You do this by adding something like the following to the Vagrantfile:

config.vm.provision :chef_solo do |chef|
  chef.cookbooks_path = "cookbooks"        # where librarian-chef installed the cookbooks
  chef.add_recipe "apt"
  chef.add_recipe "neo4j-server::tarball"  # install Neo4J from the tarball
end


Cool, now we only need to build our server, since our ingredients are prepared and chef is ready for cooking. The magic command is:

$ vagrant up

You should see something like: 

[2012-12-10T20:33:58+00:00] INFO: *** Chef 10.14.4 ***
[2012-12-10T20:33:59+00:00] INFO: Setting the run_list to ["recipe[apt]", "recipe[neo4j-server::tarball]"] from JSON
[2012-12-10T20:33:59+00:00] INFO: Run List is [recipe[apt], recipe[neo4j-server::tarball]]
[2012-12-10T20:33:59+00:00] INFO: Run List expands to [apt, neo4j-server::tarball]
[2012-12-10T20:33:59+00:00] INFO: Starting Chef Run for vagrant.vm


This takes a while.... 


Eventually, the run finishes successfully, and you can do:

$ vagrant ssh

Now your VM has Neo4J running on it, and if you enable port forwarding in Vagrant, you can even go to localhost:7474 and enjoy your fresh Neo4J server.
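
With the Vagrant 1.0.x Vagrantfile syntax of the time, that port forwarding is a one-liner (a sketch; the host port is your choice):

config.vm.forward_port 7474, 7474   # guest port, host port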
