Tuesday, December 18, 2012

Some findings on using Jasmine for DOM feedback with Backbone and RequireJS

In my first attempts to develop frontend interactions with Backbone.js, I discovered some blind spots in my understanding of Backbone views. In particular, I find it difficult to choose among the different options for organizing views with parent and child views, and for attaching views to DOM nodes. A nice overview of Backbone pitfalls is given here.

As in many programming matters, thinking about outcomes and tests can help to write better code. This is where Jasmine can help in the development of Backbone applications.

First, there are several options to run Jasmine:
  • The most common option for running Jasmine specs is to have your default web browser load a test DOM (from e.g. spec/index.html or spec/SpecRunner.html). With this option you declare a number of JavaScript dependencies that contain the actual specs, and they are executed as soon as the Jasmine run is triggered (with jasmine.getEnv().execute(); ).
  • Another option is to run Jasmine specs through rake. This option seems to be very popular in the Rails community; but as I am thinking more towards the development of stand-alone frontend applications, I don't want a tight Rails coupling. I could not get rake jasmine to render my specs from the Backbone setup, so I will postpone this approach for a while. Maybe some of you have had more success?
  • Another interesting way to get feedback from the DOM is to run Jasmine through a headless browser. PhantomJS comes naturally to mind in this context, and an interesting article is here.


As a first step towards testing a Backbone application, I wanted to test some basic Backbone view properties. Now, one of the difficulties I found was how to actually load all dependencies into a browser with SpecRunner.html, including RequireJS, and then execute some specs.

I found two approaches that work for me:

1. Global declaration of backbone dependencies

References to jQuery, Underscore and Backbone.js can be declared in a global way in the SpecRunner.html as follows:
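A minimal sketch of such global declarations might look like this (the file names and paths are assumptions and depend on your setup):

```html
<!-- plain <script> tags in SpecRunner.html; paths are assumptions -->
<script src="lib/jquery.js"></script>
<script src="lib/underscore.js"></script>
<script src="lib/backbone.js"></script>

<!-- Jasmine itself, plus the specs to run -->
<script src="lib/jasmine.js"></script>
<script src="lib/jasmine-html.js"></script>
<script src="spec/view_spec.js"></script>
```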



This should give you access to the Backbone library where you need it. It works well enough, but it may be useful to load your dependencies in a more dynamic way as your application grows. For this, the next option might be more interesting.

2. Loading Backbone dependencies with RequireJS

When using several dependencies (including custom views, collections, routers, etc.), requiring every module in the global scope of SpecRunner.html can result in increased editing effort. Here is an attempt to re-use some module definitions with RequireJS that could, in principle, be shared between your tests and the real application.

First, to load dependencies with RequireJS, the following lines are needed in the SpecRunner.html:
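A sketch of what this might look like (the file names and the data-main entry point are assumptions for this illustration):

```html
<!-- jasmine-require provides requireDependencies() for use inside specs -->
<script src="lib/jasmine-require.js"></script>
<!-- require.js then loads the spec entry point (spec/main.js) asynchronously -->
<script data-main="spec/main" src="lib/require.js"></script>
```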





The first line refers to the jasmine-require project by Scott Burch, which helps to require dependencies from within specs. An example of how this can be used with a JavaScript framework is shown here, from the Pattern JS project.

With Backbone this might look like:


  requireDependencies(["underscore", "backbone"], function(_, Backbone) {
    View = Backbone.View.extend({tagName: "li" });
    view = new View();
  });

3. Other approaches


Another option to load dependencies with RequireJS was described in this discussion in the Jasmine user group, using testr.js. However, I do not yet understand how this is better than the previous option. (Maybe someone wants to comment?)

Last but not least, there is an interesting Jasmine setup with RequireJS here, in Peter Toomberg's Shortcut project. It does not require any additional setup declarations, but I have not yet looked into it much.


Conclusion


Before actually running a successful Jasmine spec for a Backbone view, we need a library to actually match DOM nodes against expected values. The common library for doing this is the Jasmine-jQuery matchers. These allow you to express many things, among them whether the .el property of a Backbone view actually matches a DOM node, as follows:


  View = Backbone.View.extend({tagName: "li" });
  view = new View();

  it("has el property", function() {
    expect(view.el).toBe("li");
  });


My current setup can be found here: https://github.com/mulderp/backbone-require-test/tree/view_specs



Well, so far my findings. Maybe they are helpful for others. I would be curious to hear what you think, and how you approach testing of DOM nodes.

Friday, December 14, 2012

Some new tools for asset heavy Rails applications

Nowadays, mobile browsers and changing use cases for web applications require programmers to understand detailed DOM abstractions (usually HTML5 tags, CSS, JS) as well as APIs that talk to a number of different client setups. Although Ruby on Rails has brought us a long way towards easily meeting our business goals, I felt stuck when it came to using Backbone with Rails.

There is Sprockets; and as long as I work with jQuery and use Twitter Bootstrap as default assets, Sprockets works nicely: it gives some nice abstractions to bundle external asset dependencies. But if you want to develop your own client-side assets (e.g. Backbone programming, or working with a Sass precompiler and Compass), Sprockets has some learning curve, and debugging asset problems is often painful. Also, for client-side development, dealing with problems through a Rails stack is, in my opinion, not ideal.

Now, over the last months, I've found some options for a new toolchain that allows a better combination of client- and server-side programming. Here are my findings:

Rake-Pipeline


I discovered this tool shortly after the great Baruco conference 2012 in Barcelona. After talks by Josh Kalderimis and Konstantin Haase on software development at Travis, it was a nice discovery to see how Travis manages a modular asset repository. The tool that makes this work is rake-pipeline. Here is some background information on rake-pipeline:
  • The Assetfile
    
    defines how precompilers, concat and copy commands can be combined to generate the assets you need from a bunch of asset source files.
  •  rakep build
    
    This command reads the Assetfile definition and performs the actions on the sources. It's the asset build step, so to speak.
  •  rakep server
    
    Now, when developing your client-side assets, you don't actually need to run rakep build from the command line. rakep server gives you a Sinatra server that nicely serves assets as they change during development.
  • Also, a nice debugging companion for rake-pipeline is the minimalistic Python web server python -m SimpleHTTPServer, which directly serves all the files from the directory you are in (e.g. /public). Quite handy if you just need some server for quick-and-dirty browser debugging and experimentation.
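As an illustration, an Assetfile might combine filters roughly like this. This is only a sketch: the directory names are assumptions, and the coffee_script filter comes from the rake-pipeline-web-filters gem:

```ruby
# Assetfile (sketch): read sources from /source, write bundled assets to /public
output "public"

input "source" do
  # compile CoffeeScript sources to JavaScript
  match "*.coffee" do
    coffee_script
  end

  # bundle all resulting JS into one file
  match "*.js" do
    concat "application.js"
  end
end
```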
So far about building assets; next, about serving the data that assets want. We'll move on to:

Rails-Api


Some weeks ago at RuPy in Brno, there was a great melting pot of Ruby, Python and JavaScript programmers, and if you were looking for the lowest common denominator, it might have been JSON and REST. There has been some discussion recently, led by Steve Klabnik, on how to interpret Roy Fielding's ideas for modern Rails applications; but in this context, some nice tools are ready for use: the Rails-API stack (and ActiveModel::Serializers).

Rails-API removes the Rails ERB templates and Sprockets from your application. This is nice, because your Rack stack becomes lighter, and you can focus on the thing that matters: serving data to clients. From first experiments, Rails-API combines very nicely with rake-pipeline. As you can see from my demo project, the Rails app just serves JSON to client-side code that is built with rake-pipeline from the /source directory.


Backbone.js, Underscore.js and Require.js


Last but not least, for my application design, I want to use a JS framework that helps to structure the interaction with the DOM and with the end user. This framework is Backbone.js. But maybe first, a step back.

As a Ruby programmer, you might think JavaScript has some problems: incompatible browsers with different language implementations, as well as language constructs that leave you on your own rather quickly. At least part of the language problems are solved by jQuery and Underscore.js (which is reminiscent of Ruby; see the collection and enumerator constructs at underscorejs.org).

For the rest, the Node.js community is an example of disruptive innovation at work; it is especially interesting to see that the JS community nowadays has a modular setup to manage dependencies: require.js. In my view, this will make fancy browser (and maybe one day server) programming fun again.

What you need to know as a Rails programmer: Require.js injects dependencies where they are needed, and as such prevents problems in the global scope. Additionally, you can inject HTML templates into your JS modules, which is very nice too. I still need to explore this, but you can actually take your Rails ERB templates and inject them 1:1 as Backbone templates, where you need them. Some ideas behind this technique are discussed by Thomas Davis, here. A boilerplate for Backbone and Require.js is here. Other nice overviews of Backbone development are here and here (Backbone and Require.js).
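To make the global-scope point concrete, here is a small stand-alone sketch of the module style require.js encourages. Note that the define() below is a minimal shim written just for this illustration (in the browser it comes from require.js itself), and the module names are made up:

```javascript
// Minimal stand-in for RequireJS's define(), for illustration only.
var modules = {};
function define(name, deps, factory) {
  var resolved = deps.map(function (d) { return modules[d]; });
  modules[name] = factory.apply(null, resolved);
}

// A (fake) "underscore" module providing one helper.
define("underscore", [], function () {
  return { isEmpty: function (xs) { return !xs || xs.length === 0; } };
});

// A module that receives its dependency as a function argument instead of
// reading a global variable -- nothing leaks into the global scope.
define("todoHelpers", ["underscore"], function (_) {
  return {
    label: function (todos) {
      return _.isEmpty(todos) ? "nothing to do" : todos.length + " todos";
    }
  };
});

console.log(modules["todoHelpers"].label([]));         // "nothing to do"
console.log(modules["todoHelpers"].label(["a", "b"])); // "2 todos"
```

The point is that each module's dependencies are explicit in its declaration, which is what makes them easy to swap out in tests.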

That's all for now.

Here are some references to my Rake-Pipeline-Rails-API-BackboneJS-RequireJS experiment. I hope to share some small screencasts soon, to show you why this toolchain is cool. At least for me, these tools make me #happy.

Monday, December 10, 2012

An experiment with Vagrant and Neo4J

The RuPy conference 2012 in Brno was very inspiring! In particular, there were some interesting talks on databases and scalable approaches to web development:

* First, the ArangoDB team from Cologne set the tone for why #nosql matters (see http://www.arangodb.org/ ). The triAGENS team has written a database which mixes elements of MongoDB and graph databases. The shell of ArangoDB looks very clean; additionally, the system is based on C++ (it integrates with the V8 JS engine and MRuby).

* Another interesting talk was by Mitchell Hashimoto. He showed how Vagrant came about, and why using isolated, virtual environments makes sense for web integration. Some slides (not from RuPy) about this are here: slideshare.net/mitchellh/sf-devops-introducing-vagrant

* Andreas Ronge gave a very nice talk on what graphs can do and SQL can't (well, it can, but not nicely...). I can't find his slides from RuPy right now, but he maintains a great blog on Neo4J here: http://maxdemarzi.com/ Also, these slides are interesting: slideshare.net/andreasronge/neo4jrb

So, coming home to Munich with all these interesting thoughts in my mind, it was clear that I had to start playing with graph databases in isolated environments for new kinds of web applications. Fortunately, I had great input for my learning from Jorge Bianquetti and Nathen Harvey.

First, about Vagrant and VirtualBox. It takes a bit of time to download virtual machines, but it's not too difficult to get going. The single most important command might be:

$ vagrant init

This creates an environment for setting up a virtual machine. It's very cool, because now you can set up an Ubuntu, Debian, CentOS, or whatever system, and Vagrant will go ahead, download or copy the VM, and prepare it just so that you can use it.

Ok, not quite, since next you must tell Vagrant where to download the box; you do this in the Vagrantfile, e.g.:


config.vm.box = "opscode-ubuntu-12.04"
config.vm.box_url = "https://opscode-vm.s3.amazonaws.com/vagrant/boxes/opscode-ubuntu-12.04.box" 

This is the standard Ubuntu box from Opscode right now.
Just do a:

$ vagrant up

and you would have an Ubuntu box running.

Well, we were charmed by graph databases, weren't we? Ok, so let's go ahead and add the setup for Neo4J. Some googling shows that there is a Neo4J cookbook here: https://github.com/michaelklishin/neo4j-server-chef-cookbook

Hmm... at this stage, we have actually already decided on chef-solo. There is chef-solo and chef-server, and if you want to understand the difference, I suggest you look here: http://www.nathenharvey.com/blog/2012/12/07/learning-chef-part-2/

Chef-server is the approach you want to use in production. Chef-solo is the approach for quick-and-dirty experiments, like we do here. So, let's assume chef-solo is ok, and we just need to get the cookbook dependencies right. Luckily, we have a tool for this: librarian-chef.

$ librarian-chef init

This gives you a Cheffile. It's similar to a Gemfile if you are used to Ruby.
Let's throw in the Neo4J dependency here:

cookbook 'apt'
cookbook 'neo4j-server', :git => 'http://github.com/michaelklishin/neo4j-server-chef-cookbook'

And now, similar to bundle install, we run:

$ librarian-chef install

Last but not least, we need to tell our VM that interaction with chef-solo is needed. You do this by adding something like the following to the Vagrantfile:

config.vm.provision :chef_solo do |chef|
  chef.cookbooks_path = "cookbooks"
  chef.add_recipe "apt"
  chef.add_recipe "neo4j-server::tarball"
end


Cool, now we only need to build our server, since our ingredients are prepared and Chef is ready for cooking. The magic command is:

$ vagrant up

You should see something like: 

[2012-12-10T20:33:58+00:00] INFO: *** Chef 10.14.4 ***
[2012-12-10T20:33:59+00:00] INFO: Setting the run_list to ["recipe[apt]", "recipe[neo4j-server::tarball]"] from JSON
[2012-12-10T20:33:59+00:00] INFO: Run List is [recipe[apt], recipe[neo4j-server::tarball]]
[2012-12-10T20:33:59+00:00] INFO: Run List expands to [apt, neo4j-server::tarball]
[2012-12-10T20:33:59+00:00] INFO: Starting Chef Run for vagrant.vm


This takes a while.... 


Eventually, it finishes successfully, and you can do:

$ vagrant ssh

Now, your VM has Neo4J running on it, and if you enable port forwarding in Vagrant, you can even go to localhost:7474 and enjoy your fresh Neo4J server.
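For the port forwarding mentioned above, a line like this in the Vagrantfile should do it (Vagrant 1.0 syntax; 7474 is Neo4J's default web interface port):

```ruby
# forward the guest's Neo4J port 7474 to the same port on the host
config.vm.forward_port 7474, 7474
```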

Some References:


Monday, April 16, 2012

Small tutorial on using Backbone.js with Rails and the Backbone-on-Rails gem

A demo Todo application built with the Backbone-on-Rails gem, which is the background of this discussion, is ready for download here: https://github.com/mulderp/Backbone-on-Rails-todoDemo

1. From server-side to client-side programming

The Rails framework is well known for its nice interaction of views, controllers and models. How these components work with HTTP and a database is typically explained using a blog application. Client-side programming poses slightly different programming problems: it is influenced by the intricacies of web browsers and the DOM, which include presentation details (HTML/CSS) and logic (JavaScript). Just as jQuery provides a better API to manipulate simple structures in the DOM, the goal of Backbone is to provide a language that facilitates so-called "data-driven programming", where a high volume of data changes and events in the DOM becomes easier to deal with on the client side.

2. Entering client-side programming
Typically, client-side programming is explained with the help of a Todo application. There is a great demo of a Todo application from Jérôme Gravel-Niquet, here:

http://documentcloud.github.com/backbone/examples/todos/index.html

The HTML of a Todo list is rather simple, and consists of a 'list' that contains a number of 'todos'. For those who are new to client-side programming, a different toolset is helpful in solving programming problems. These tools and debugging tricks are:


  • jsfiddle: An interactive sandbox to play with HTML, CSS and JavaScript code. Libraries, such as Backbone, can be included too
  • jslint: This tool helps in finding errors in JavaScript or JSON data
  • the browser console, like Firebug in Firefox or the web developer tools of Chrome: The console helps in evaluating small pieces of code and variables, and breakpoints can help you understand which context and scope is currently active
  • console.log: With the console.log() function in JavaScript, it's possible to monitor the correct flow of data in the application
  • http://js2coffee.org/ : When using CoffeeScript, as is advised from Rails 3.1 on, it's helpful to understand the conversion of CoffeeScript into JavaScript



3. Fetching data from the server
The main mechanism in Backbone to fetch data is by extending a Backbone.Collection. For a todo list, where 'todos' should be fetched from the server, or written to it, a Todos collection might look like this:


class BackboneOnRailsTodo.Collections.Todos extends Backbone.Collection                                                                                                  
  model: BackboneOnRailsTodo.Models.Todo
  url: '/todos'


The important piece here is the 'url'. Coming from a Rails environment, where a 'url' is only defined in the router, this might be a bit confusing; however, 'routes' in Backbone have a different purpose, namely to interact with a client-side URL that is marked by a hashtag (e.g. http://mydomain/todos#list ). As a first test, to see that your collection is working, you can use the browser console and fetch some simple todo JSON from the server.

This could look like this:


todos = new BackboneOnRailsTodo.Collections.Todos()
# => Todos
todos.fetch()
# => Object


Note, the 'new' and '()' in the statement above are important, because otherwise you get a wrongly initialized object. You can then fill your collection with todos.fetch()


4. Rendering data with help of views and templates
Once data is available, render it with the help of views:

a) Views are a kind of container, where you put data and recipes (templates) for how to render the data. With the Backbone-on-Rails gem, you can easily use the ECO template type, which is a kind of ERB in the CoffeeScript context. Note, you must address view variables with @ from the view, like so:
 
  <%= @todo.get('content') %>


b) Views must be initialized with a model or collection hash, typically looking like this:

 view = new BackboneOnRailsTodo.Views.TodoListIndex(collection: @todos)   

c) Views can be rendered; for this, the render() function is called together with the .el property, which actually gives the DOM element of the rendered view

d) In views, unlike in Ruby, there is not much syntactic sugar by default. A function like 'each' is given by the Underscore library, but it's even easier to use the for .. in construct from CoffeeScript
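Putting a) and d) together, a template iterating over a collection might look like this (a hypothetical ECO sketch; @collection is assumed to be passed into the template):

```erb
<ul>
  <% for todo in @collection.models: %>
    <li><%= todo.get('content') %></li>
  <% end %>
</ul>
```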


5. When to render views?

a) The rendering of a view can easily be tested for development purposes by using the browser console.

As the rendering of a view needs a model or collection as input, a collection must be initialized first:

todos = new BackboneOnRailsTodo.Collections.Todos()
todos.fetch()


Then,


view = new BackboneOnRailsTodo.Views.TodoListIndex({collection: todos})

The rendering of a view can be tested with

view.render()

b) In our Todo application we work with 2 views, similar to the demo Todo app by Jérôme Gravel-Niquet as above:

// Todo Item View --> The DOM element for a todo item...
var TodoView = Backbone.View.extend({

and

// The Application --> Our overall **AppView** is the top-level piece of UI.
var AppView = Backbone.View.extend({ .. })


c) For the first, startup rendering of a view, a Backbone router can be instructed to initialize the view:


class BackboneOnRailsTodo.Routers.TodoLists extends Backbone.Router
  routes:
    '': 'index' 


  initialize: ->
    @todos = new BackboneOnRailsTodo.Collections.Todos()
    @todos.fetch()
  
  index: ->
    view = new BackboneOnRailsTodo.Views.TodoListIndex(collection: @todos)                                                                                               
    $('#todo-list').html(view.render().el)

It's important to have at least something in a router; otherwise the Backbone router may not have a 'history' state

There are other ways to initialize views, such as synchronous or asynchronous loading of data and/or view templates. In the example above, the data is provided asynchronously from the server side.

6. Handling user interaction
So far, the explanation above can be used to fetch data from the server, and to render it. However, in a rich-client application, events in the DOM that are issued by user interactions (mouse click, key press, etc.) are important too.
To have user interaction in the application, event binding to DOM elements is used. Event binding uses either Backbone or jQuery event delegation (the 'bind' or 'on' functions). To look up the right DOM elements, the delegation must be bound in the correct context.

This can be a cause for confusion as discussed here:



  • http://stackoverflow.com/questions/9304625/in-backbone-js-how-do-i-bind-a-keyup-to-the-document
In the demo Todo application I use the following strategy for binding to events in the TodoListIndex:

  initialize: ->
    @collection.on('reset', @addAll, this)
    @collection.on('add', this.addOne, this)
    $('#new-todo').on "keypress", {collection: @collection}, @keyTodoInput


Note, the 'add' event works via a Backbone event binding. The 'keypress' event must use a jQuery binding, because the input form is outside the scope of the TodoListIndex view.

The event that a new todo is added is processed with:

  addOne: (todo) ->
    console.log(todo)
    view = new BackboneOnRailsTodo.Views.Todo({model: todo})
    $("#todo-list").append(view.render().el) 

The event that an input is made is processed with:

  keyTodoInput: (e) ->
    # console.log(event.type, event.keyCode)
    return if (e.keyCode != 13)
    return if (!this.value)
    console.log(e.data.collection)

Thursday, May 26, 2011

Beauty in programming

Ahh... a nice day today! Finally, I could experience and explore the beauty of programming again after some weeks of social science research. The logic of programming is often hidden behind many doors and dark rooms where the light switches must be turned on first.
Well, that happened just today: first, taking a class from a C++ project with more than 100 methods and 5 related classes; making simplified versions in Ruby; and finally, seeing some relationships between methods and classes... the hidden code behind the abstractions :)

Well, I tried to post something on Stack Exchange to ask fellow software developers about their experience with beauty in programming, but there has not been much response yet.

Monday, March 21, 2011

Some models for the design thinking process


Here is a short list of variations on the design thinking process.

The first process can be found on the webpages of the d.school at Stanford. We see several stages, with varying degrees of intensity: Empathize, Define, Ideate, Prototype, Test and Iterate.
The process starts with reflections on whom to work for, exploring and selecting perspectives, reflections on learning outcomes and prototyping, as well as evaluation of prototypes.

Next, there is a circular model proposed by Tim Brown of Ideo.


Here, we see that Inspiration influences ideation and implementation and vice-versa.

An iterative design thinking process that is taught at TU Munich Business School is shown below:


Here, we start with an analysis phase, a design phase follows, and a prototype is built. Then, there is play and review of the experiences.

An approach called "customer journey" to service design can be found here. It is also a circular model, starting with a "pre-service" period, a service period, and a post-service period.

Another circular model of design thinking is given by Prof. Ranjan from India. He calls his model the "hand-heart-head" model of design.

Design of the 1st order is about form and function. Design of the 2nd order is about function, feeling, impact and effect. Design of the 3rd order is about meaning and purpose.

Still farther East, I. Nonaka proposes a model for innovation and learning that is somewhat similar to design thinking: the SECI model of knowledge creation. Nonaka starts with the aspect of empathy and observation, which he calls "socialisation". Here, knowledge that is difficult to articulate is experienced. Then, the implicit knowledge is made explicit by the process of "externalisation". This is mainly about the codification of experiences with symbols or models. Third, externalised knowledge is combined in new ways to generate new concepts and ideas. This is called "combination". Last is the process of "internalisation", where explicit knowledge is converted back into implicit knowledge in the form of best practices.



A short overview on design-thinkers can be found here.