mattwhite

Express web application framework

Having created a simple application and installed our first NPM package, things now start to get serious. We want to begin building out our real application: there will be more than one page, and we’ll need client-side JavaScript and CSS files. Organising all of this is important, because you will have hundreds of files before you know it. This is where Express comes in.

If we install the basic Express npm package then we get one of its biggest advantages: routing. I can define the URLs and URL patterns that make up my application and then map functionality onto each one as I see fit. As you get more advanced, you can start defining middleware steps that, for example, check security before allowing the user to reach a page. It makes your application configuration orders of magnitude simpler to look after. From my point of view at the moment, if you wanted to create a node.js web application that didn’t use Express then I’d want a very good reason why.

The basic installation is very simple. Edit your package.json file and add express as a dependency:
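Something like this will do (the name and version here are just examples; "3.x" picks up the latest 3.x release):

    {
      "name": "hello-world",
      "version": "0.0.1",
      "dependencies": {
        "express": "3.x"
      }
    }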

Then, from the terminal, run “npm install”. You’ll see a bunch of new files installed under node_modules, and you can then start modifying the server.js file:
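A minimal Express 3 server.js looks something like this:

    var express = require('express');
    var app = express();

    // respond to GET requests for /hello.txt
    app.get('/hello.txt', function (req, res) {
      res.send('Hello World');
    });

    app.listen(3000);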

The key section is the line that starts with app.get. This tells the application to listen for GET requests at the URL “/hello.txt”, and defines what to do when someone requests that page: in this case, just returning some Hello World text. If the user requests a different URL then they’ll get an error. And of course you can also configure your routes to listen for PUT, POST and DELETE requests. Hopefully at this point you’ll see what’s happening: you now have a simple way to support CRUD in your application, whether that’s from web browsers or via a REST API.

In the real world you’ll want to move your application logic off to other files before server.js becomes too large and unwieldy, so you can equally have lines like this:
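For example (the controller path and function name here are illustrative):

    var hello = require('./app/controllers/hello');

    app.get('/hello.txt', hello.sayHello);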

In this example I’ve moved the business logic into a file called hello.js that lives in a subfolder of my application root: app/controllers/hello.js. You can put all of your hello world application functions into that file to keep things nicely organised.
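The controller then simply exports its functions, along these lines:

    // app/controllers/hello.js
    exports.sayHello = function (req, res) {
      res.send('Hello World');
    };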

In fact I actually move the routes themselves off to a separate file, so that server.js really is just the instantiation of objects rather than any real business logic.
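As a sketch, the routes file can export a single function that takes the app and wires everything up, so server.js only has to call it:

    // app/routes.js
    module.exports = function (app) {
      var hello = require('./controllers/hello');
      app.get('/hello.txt', hello.sayHello);
    };

    // in server.js
    require('./app/routes')(app);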

But this is not the extent of what Express can offer you. If you’re starting from scratch you’ll probably want to serve up static content: CSS, images, client-side JavaScript and so on. Express can manage all of that for you as well, but it’s easier to generate a new application:
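The steps are roughly these, assuming the express command line tool is installed globally and using “demoapp” as a throwaway name:

    express demoapp
    cd demoapp
    npm install
    npm start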


Structure of new demo app

The first line generates a new application with placeholder content, including CSS and Jade files and a place to insert your own client-side JavaScript. This is more complex than it sounds because, of course, all of the routes to the static files have to be configured. Once the app has been generated you can move into its directory and run an npm install to get all of the required modules defined in the auto-generated package.json file. Finally we can run the application with npm start. This kicks off the application listening on port 3000, and we can load it in the browser.


The point of a framework is to make life easier. There is no reason you couldn’t set all of this up manually yourself, but there really is no reason to. The rest of the world seems to use Express as a standard, and it does everything you could possibly want; at least, I’ve not yet come across anything that I couldn’t do.

NPM or Node Packaged Modules

After getting your development environment set up and your fancy schmancy Hello World application running, what next?

Let’s say we want to display a nicely formatted version of the current date and time. The current best of breed for date and time handling in JavaScript is called Moment, and we can choose to use it either server side or client side. For our purposes here we’ll use it server side, which means we need to install it into our application.

First we’ll want to create a file called package.json, which acts as the main configuration point for the application. Among other things, it stores which NPM packages we have installed and which version of each package we want to use.
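It can start out almost empty; something like this is enough (the name and version are up to you):

    {
      "name": "hello-world",
      "version": "0.0.1",
      "dependencies": {}
    }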

From the terminal we can now install moment: 

npm install --save moment

You’ll find the package.json file has been modified to include a dependency for moment, and you’ll also see a new folder in your project called node_modules that contains the source code for the package. The rule of thumb is that you never edit files in that folder.

What we can now do, though, is start to use the newly installed package in our own code.

So we’ll add some extra code to the line which returns Hello World:
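Something along these lines, using moment’s endOf and fromNow helpers:

    var moment = require('moment');

    app.get('/hello.txt', function (req, res) {
      // fromNow() renders a humanised duration, e.g. "in 8 hours"
      res.send('Hello World\nThe day will end ' + moment().endOf('day').fromNow());
    });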

And when that runs in the browser we see something like:

Hello World
The day will end in 8 hours

As I write this there are nearly 75,000 packages available from NPM, so this is where you have to get into the mindset that if you are writing something new, then you are probably not writing something new: someone will already have solved the problem. Leverage what has already been put out there.

Of course there are always issues. In your dependencies in package.json you’ll see that moment has been added with a version number, something like “^2.6.0”. You can also set this to “latest”, but this means that whenever you run an “npm install” against your application, NPM will look to see whether a more recent version has been released and will upgrade it for you. You need to be very aware of this. I recently ran a general install and found that Express had been upgraded from version 3 to 4. This broke my entire application and I had to roll back in Git to recover. So you have to become highly attuned to the impact of what is being upgraded. For things that are important to you, like Express, you should probably fix the version (by removing the ^ from the dependency), whereas for utility helpers like moment you can probably just use “latest”.
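So the dependencies block ends up looking something like this (the versions are just examples), with the critical packages pinned exactly and the utility helpers left to float:

    "dependencies": {
      "express": "3.4.8",
      "moment": "latest"
    }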

There is a huge amount more that you can do with NPM and it’s worth reading up on it. It’s probably your biggest friend when getting started with node. 

But first the development environment

I started a series of blog posts about my work with node.js last week. And of course the thing I forgot to mention is the actual process of development. 

A lot of people seem to really like WebStorm from JetBrains, and it’s certainly a full-featured tool. I tried it for ten days and just couldn’t get used to it. So I now spend my days living inside three different windows: Sublime Text for editing, Chrome for browser testing and the Terminal for running node.js.

If you’ve not yet used Sublime Text then it’s worth having a look. It’s a text editor on steroids: it has awareness of different text file formats (JS, HTML, CSS and so on) and will offer basic code completion, it will also allow you to edit multiple lines of text at once, and it has all sorts of plugins.

Chrome has become my de facto standard for browser testing (although I actually use Safari for day-to-day browsing). Its developer tools are unmatched, and more recently it has also offered basic emulation of mobile devices. Definitely not the same as running on the real thing, but good for basic testing.

And then finally there is the terminal window. As my colleagues will testify, I am not a command-line sort of geek; it just feels too much like a hair shirt to me. But with node.js development there really is no avoiding it, so it’s worth learning a little. To be honest, my usual workflow is to have two terminal tabs open: one in which I type “npm start” and the other in which I type “grunt watch”. The first of these launches my dev server on port 3000. While it’s running in dev mode it will automatically pick up changes I make to the source code and restart itself as required. I can also print out to the console if I need to do some debugging.


A screen capture of my working environment

The latter command launches GruntJS, which is a task runner. This allows me to have scripts run every time I make a change to a subset of files. In my dev environment I want my JS and CSS files to be automatically minified, and I can also have it automatically run unit tests against my code, or JSLint checks, to make sure I’m not introducing too many bugs.
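A stripped-down Gruntfile for that kind of setup might look like this (the file paths are illustrative, and the uglify and watch tasks come from the grunt-contrib-uglify and grunt-contrib-watch packages):

    module.exports = function (grunt) {
      grunt.initConfig({
        // minify the client side JS into a single file
        uglify: {
          build: {
            files: { 'public/js/app.min.js': ['public/js/app.js'] }
          }
        },
        // re-run the minification whenever a JS file changes
        watch: {
          scripts: {
            files: ['public/js/*.js'],
            tasks: ['uglify']
          }
        }
      });

      grunt.loadNpmTasks('grunt-contrib-uglify');
      grunt.loadNpmTasks('grunt-contrib-watch');

      grunt.registerTask('default', ['uglify']);
    };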

The other thing that you need to get working with very early on (if it’s not already baked into your workflow) is some sort of source control system. You will be editing a lot of files and it’s easy to lose track, so why not offload that task onto a tool? In my case (as with 90% of the development world, it seems) I use Git and GitHub. This has the nice advantage that Heroku (my application hosting platform) also allows me to push up changes from my Git client as and when I need to. Talking of Git clients, I am switching between SourceTree and Tower these days (see what I mean about avoiding the command line when I can).

Of course everyone has their own way of working, and with the dev workflow for node in particular it seems you could spend your whole life perfecting the workflow rather than actually doing any work. So don’t get too hung up on it; this is simply what works for me.

Getting started with node.js

As you may know I’ve been writing Domino web applications in one form or another for 18 years now (good God that’s a long time). But over the last few months I’ve been working on a node.js application. I’ve learned a huge amount, but am still really a newb when it comes to node, so bear that in mind when you read my blog posts over the next few weeks and months.

The first question, I suppose, is “What is node.js?”. Well, the simple answer is that it’s a JavaScript engine that will do whatever you tell it. It will run on whatever platform you care to mention: I am using my Mac, but it will also run on Windows or Linux. The most popular use seems to be writing web applications, but you can equally write utilities for data migration or anything else you can think of, really. Everything you write is in JavaScript, and much as with XPages, you get to write JavaScript that runs on both the server side and the client side.

The node.js community is huge and there are vast amounts of resources out there to get you going, so your first thought whenever you want to achieve something should be “has someone already done this?”. In most cases they will have. Enter NPM, or Node Packaged Modules. You can simply install packages into your application and make use of them, much as you would with JAR files in Java.

What I have learned with node.js so far is that there is a package for almost everything you could want to do, and there are also frameworks to make managing your applications easier. This becomes important pretty quickly; otherwise your code will get out of hand. So over the next few posts I’m going to be talking about each of the major aspects of the application I’m working on, without actually talking about the application itself; that will come later, I hope.

What this won’t be is a tutorial; there are plenty of great tutorials out there already. I’d recommend one of the following:

Be aware that node.js is evolving very fast, so things can get out of date pretty quickly, but the themes are still generally valid.

Your best bet for an over-arching framework is called Express. This is far and away the most popular framework for creating a web application. It recently upgraded to version 4; I am using version 3 for the moment, so am already out of date!

You’ll probably want a database for your application as well. There is plenty of choice (there’s no reason, for example, why you couldn’t continue to use your NSF with a REST API in front of it), but the reality is that there are better options out there. Again, the darling of the moment is MongoDB. This is a whole subject in itself, but in my case the application requirements are pretty simple, and we’re used to thinking in terms of non-relational databases, so MongoDB makes a lot of sense right away. To integrate it into node.js and provide some simple structure to my application, I am using a framework called Mongoose.
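To give a flavour of what Mongoose buys you: a model is just a named schema, and you then work with plain JavaScript objects (the connection string and fields here are made up):

    var mongoose = require('mongoose');
    mongoose.connect('mongodb://localhost/demo');

    // define a model from a schema
    var Idea = mongoose.model('Idea', new mongoose.Schema({
      title:   String,
      created: { type: Date, default: Date.now }
    }));

    // create and save a document
    Idea.create({ title: 'My first idea' }, function (err, doc) {
      if (err) return console.error(err);
      console.log('Saved ' + doc.title);
    });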

That’s three elements then: server, application framework and database. What about the front end? The norm, it seems, is to use AngularJS. Together these four elements create the MEAN stack (Mongo, Express, Angular, Node). But this is where I diverge slightly. I still can’t quite get my poor old head around Angular; it feels like too much work to me, so I am using plain old HTML (in the form of Jade) and jQuery.

Once you get into creating the application itself, we can again turn to NPM and start making use of the great work that other people have done to get going quickly. For example, for authentication we can use Passport, for email integration we have Mailgun, for file attachments we have Amazon S3, for full-text searching there is ElasticSearch, for real-time browser interaction we can use socket.io, and for credit card processing there is Stripe.

The other thing to get involved in is hosting the application. I’ve mentioned a bunch of technology already, and so far I have it all running in the cloud for zero cost. It’s obviously all dev rather than production, but there are hundreds of choices, so again I’ll go through what I have chosen. The headlines are: Heroku for node.js hosting, MongoLab for MongoDB hosting, Searchly for ElasticSearch hosting, Stripe to process credit cards and Amazon AWS for file attachment storage.

There we go, nice and simple; just learn twenty new technologies in 3 months! Honestly it’s not as scary or difficult as it sounds. I mean if I can do it then so can you.

My plan over the next few weeks is to dip into each of the areas I’ve mentioned above and describe how I’m using them. If you’ve got questions or things to add then I’d love to hear them; this is as much a learning experience for me as it is for you, I can promise you.

 

jQuery UK Conference

I realised while sat in the main room at jQuery UK that I hadn’t been to a conference as an attendee without speaker or organiser duties for years. It made a nice change.

jQuery always feels like one of those utility things: I use it every day on pretty much every project I do, but I don’t feel like I really know everything about it. This was the main reason for attending, and boy was I right. There was so much to learn here that it was slightly scary. During the day I attended eight sessions, and by the end my poor head was hurting rather a lot.

The format for the day was very similar to the LUGs that continue to run around the world, but with the numbers dialled up (around 700 people) and the organisation level just that tad more… well, organised. This is the difference between having the money and being able to devote people to the job full time. White October Events do a very good job indeed. That’s not to say the venue was perfect: the main room was excellent, but the second and third tracks were too small, so they constantly overflowed. Hopefully we’ll see recordings of those sessions in due course.

The speakers were almost all excellent. The sessions I attended were:

And the result of all of that is that I have a *lot* of reading and learning to do. Some of the headlines (though definitely not everything I picked up) include:

Overall the day was excellent and I have no doubt that I’ll be back in 2015.

Git cloning of very large repos in Windows

I’m working on a node.js project at the moment, and for various reasons I’m spending part of my time working in Windows rather than the usual Mac environment. It shouldn’t be a problem, and generally it’s not. Once the node environment is set up on Windows it works just the same. But one problem I have encountered is with the depth of my repo.

Some of the folders have very many levels of nesting, due to the joys of npm. When I was trying to clone the repo to my Windows machine I was getting errors saying that the folder names were too long. It turns out that Git on Windows runs up against the Windows path length limit of around 260 characters. There are registry hacks that you can apply to increase this limit, but I never like touching the registry unless I absolutely have to.

Enter the following command, which allows me to map a virtual drive letter to a folder on my C:\ drive:

subst G: C:\Users\Matt\Documents\github\NewProject

A new drive, G: in this case, will become available. This means I can clone my repo to G:\ instead of C:\Users\Matt\Documents\github\NewProject, which saves me 42 characters and thus allows me, in this project at least, to clone my repo without problems. The other benefit is that “NewProject” can continue to sit in the github folder on the C: drive, and I can work in it there when doing development; it’s just the Git world that operates against G:.

My thoughts on the IBM BlueMix workshop

The idea of BlueMix is that a developer can create an application very quickly and get it deployed without having to go through the usual rigmarole of enterprise app deployment, which can take weeks or months. From that point of view BlueMix is very interesting: the developer can choose their preferred language (basically node.js, Java or Ruby), their preferred database (DB2, MongoDB, MySQL, PostgreSQL) and various other add-ons, such as integration with the Internet of Things, big data analysis, even fitness trackers.

The day itself covered the basics of the infrastructure, which runs on Cloud Foundry on SoftLayer, and the various different ways to deploy apps. We looked at using the command line, the BlueMix website, Jazz.net (the IBM DevOps tools for continuous integration) and also, briefly, integration with Git. So from the point of view of actually pushing code up to the service it’s very simple, or at least as simple as any of the other cloud services out there.

Unfortunately the day was rather plagued with technical issues, thanks to the IBM Impact conference running in Las Vegas. The BlueMix team were busy pushing up new code for Impact during the night over there, but it was the middle of the day here, and so the BlueMix services were up and down like the proverbial nun’s drawers.

And the downtime gave plenty of time to think about the offering. We’re doing some node.js / MongoDB development at the moment, and deploying it to the cloud has been incredibly simple: we use Heroku for the application, MongoLab for the database and Search.ly for the full-text searching. We could equally run all three elements on a single box and run that in the cloud ourselves. Or we could use BlueMix to host all three elements.

Using multiple services is probably dangerous (there are multiple points of failure, and so on), but for dev at least I like it: I can understand exactly what is running where, I can see application logs and so on. If I run everything locally, likewise I can see it all running. But BlueMix is very much a black box; I couldn’t see any simple way to watch my application console ticking over, for example. The MongoDB has no interface to it and no facility to control sharding, and there was no integration of ElasticSearch at all, so we’d need to re-think how we handle those elements of our projects. The point being: by creating one big black box, BlueMix makes deployment very simple, but it takes control away from me, and I like control over the applications that I write.

The mantra for the day seemed to be “as a developer you don’t need to know this stuff”. From my point of view, not knowing “this stuff” is very dangerous and it’s not a road I want to go too far down.

The problem for IBM is that the only differentiators they can offer for this type of project are price and SLA, and I find it very hard to believe that either will be better than the existing cloud service providers, as this sort of thing is basically a commodity at this point. So the target for IBM seems to be the add-ons: data and analytics, and other services written by third parties (i.e. Business Partners) that can then be added to applications as required. It’s a very similar proposition to the add-ons that Heroku offers, but targeted more towards the enterprise than SaaS. It seems that BPs will be able to provide services into BlueMix, but there were no details on how that would work in terms of certification or earning money from it.

Overall it was a good day, in the sense that I realised that BlueMix is not something I need to spend any further time with, at least until pricing has been announced. For the purposes of LDC and Elguji, we can achieve everything we need with existing services such as Heroku or EC2.

What does your desktop look like?

For the longest time I have done all of my development inside an IDE, be it Eclipse or Domino Designer. But recently I’ve been playing with node.js (on which I’m sure I shall bore you later), and the nature of that sort of development is that you work in a text editor, a terminal window and a browser.

I’m sure this says more about me than it should, but I can’t quite organise my desktop into a setup which feels right. In an IDE you are constrained by the containing window, and at least in the Eclipse world you can drag panes around within that; I have my perfect arrangement of windows and I know just where they all are. But with Sublime Text, Terminal, Chrome, Firefox and Safari all open for various tasks in the dev work I’m doing, I end up with windows all over the place.


I’ve had a tool on the Mac called Window Tidy for a long time, and that is helping. But I’m curious: what do other people do with their desktops, or am I just being completely OCD?

Unplugged Controls v3.1

Last week we released version 3.1 of the Unplugged Controls project on OpenNTF.

It’s not as big a change as 3.0, but there are several new features which you may find useful, and also several bug fixes that are worth noting. The release notes have the full listing:

  • UnpFormEditor controls now fully support display and saving of data to a different database
  • Radio buttons now display correctly
  • A new toggle switch control can be used (for an example see the Sampler application)
  • Fixed a bug where the navigator menu partially showed in portrait mode
  • Fixed a bug in the Sampler application where back button navigation worked incorrectly in Android
  • Fixed a bug where Font Awesome icons did not display correctly in mobile web browsers
  • Fixed a bug where pages did not display correctly after syncing
  • Removed the deprecated UnpNavigator control; UnpNavigatorComputed should now be used instead
  • Improved demo of text editing in Sampler application
  • Fixed html data in Sampler application
  • Fixed a bug where the dialog box was hidden under the navigator in landscape mode

As ever, if you come across bugs or have an idea for an enhancement then please do log them at our GitHub project.

XPages and IE11

During testing for a project that we’re working on at the moment we obviously have to go through the rigmarole of supporting the eleventy different versions of IE that the world continues to use.

With the addition of IE11 to the roster, life gets that little bit more complex. On the server side in XPages, what I’m seeing is that the “isIE” test doesn’t work exactly as expected.

If you’re testing the simple boolean to see whether the browser is IE at all (regardless of version) then you’ll be fine (although I’m not sure how that’s working), but if you’re detecting specific versions of IE then you’ll need to add some manual user agent checks. So in my beforeRenderResponse code I now have something like this:
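Roughly this; it’s a sketch rather than gospel, and the exact tests will depend on which versions you care about:

    // beforeRenderResponse
    // IE11 drops the "MSIE" token from its user agent string but keeps "Trident",
    // so we check for that by hand
    var ua = context.getUserAgent().getUserAgent();
    var isIE11 = (ua.indexOf("Trident") > -1) && (ua.indexOf("MSIE") == -1);

    if (context.getUserAgent().isIE() || isIE11) {
      // ask IE to use its latest document mode
      var response = facesContext.getExternalContext().getResponse();
      response.setHeader("X-UA-Compatible", "IE=edge");
    }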

To detect IE11 (and later) we have to use the new Trident user agent token. Ideally we set the document mode to “edge”, which is the default from IE’s point of view. You can still revert the document mode to earlier versions if you must, but it appears that this won’t be the case forever, so getting your site working with IE11 will probably be worth the effort in the long run.