So two posts ago I documented my first foray into nodeJS. The result was a simple HTTP server that served one line of static HTML. All I really accomplished was to prove to myself that nodeJS was installed correctly, that I could copy/paste six lines of code, and that the copied code would actually create a working HTTP server. Of course I was ecstatic when it worked… a bit giddy, even!
With the aroma (stench?) of success still lingering I plunged in again, this time following along with Ryan Dahl’s 2009 JSConf presentation. Of course, nodeJS has changed a lot in the last 14 months so getting his presentation examples to work took a bit more than copy/paste skills. It made for an excellent playground in which to learn nodeJS basics.
Having finished the video and finding success in getting each of Ryan’s demos to work in the latest nodeJS (v0.3.6-pre), I ended up with an excellent beginner’s showcase: seven example nodeJS scripts showing off non-blocking IO, long-polling, file IO, streaming IO, TCP servers and HTTP servers. Each of these scripts can be run with the simple command “node <script_name>”. The gist of my experience (sorry… I couldn’t resist) can be found below: well, at least two of the seven examples. You’ll have to visit this gist on github if you’re interested in seeing all seven examples, along with comments containing directions and expected response details for each script.
My next step will be to drive nodeJS services through tests, ideally using the excellent BDD framework… Jasmine!
As I was doing research and working to get my EC2 instance up and running, I ran across a thread (I can’t remember where) that outlined a potentially frustrating scenario. Fortunately I escaped learning this lesson in my normal mode and learned it from someone else’s mistake!
There are two ways to shut down a running EC2 instance: “Stop” and “Terminate”. I had only used “Stop”, which leaves the instance intact but not consuming any CPU cycles. That way you can easily start the instance again later while avoiding the hourly charge when you aren’t using it.
Here is the important safety tip: do not use “Terminate” until you’ve created a snapshot of your instance. “Terminate” means… terminate, as in “with extreme prejudice.” If you terminate your instance, it’s gone. If you didn’t create a snapshot of your instance, you will have to rebuild it from scratch.
So, prefer “Stop” over “Terminate” and, once you have a working instance, take a snapshot of it so you can get it back if it ever does get terminated inadvertently.
While my last post cast a wide net, this post will start the process of focusing in on just a couple of things: first nodeJS and, later, Angular.
This approach appeals to me. I have always felt that a loose confederation of small, simple services that can be aggregated to provide more complex offerings is a much better approach than creating a single, monolithic web server that handles every service request an enterprise can imagine (and creates a single point of failure). Using nodeJS, services can be kept independent, which makes them more robust, lets them scale easily by spooling up more processes on the same machine or on different machines, and makes the overall system much easier to extend and maintain. This also makes nodeJS an excellent tool for cloud-based services, where scaling is accomplished by spooling up new instances of virtual machines on an as-needed basis.
Having said all of that, I’m not proposing nodeJS is the perfect fit for every situation. However, I do think that just about any organization would find it a valuable tool, even if it’s used for just a couple of simple services. I can envision a migration path where some of the simpler enterprise services are rewritten as individual node processes and removed from their monolithic web server.
Here is the list of nodeJS resources I promised in my last post:
In my last post, I also promised the code for my first HTTP server written in nodeJS (it is very close to the example provided on the nodeJS web site):
If you’re running node on an EC2 instance (as I outlined in my last post) and want to use port 80, you’ll have to run “sudo node <filename>”: non-root users don’t have permission to bind to ports below 1024. If you’d rather run the server locally, use an unprivileged port like 8000 in the listen() call.
Of course you’ll also need to append the port number to the URL in your browser: http://127.0.0.1:8000.
In my next post I’ll start exploring some of the details of nodeJS, demonstrating non-blocking, TCP, HTTP, File IO, and more!
Over the last couple days I’ve set up an EC2 Linux Micro instance to start playing with a nodeJS server. Over the next several posts I’ll be chronicling my journey, starting with this list of things I did to get my EC2 instance set up:
I used the AWS Console to create my “t1.micro” instance:
Selected the default security group, which didn’t have SSH enabled
Added SSH through the Security Groups section of the AWS Console
Copied my public key to my .ssh directory
Selected “Connect” under Instance Actions in AWS Console to get an example of the command I needed.
Found out that the example shows logging in as root, which doesn’t work: use ec2-user instead.
Once I was able to SSH into my EC2 instance it was time for some installs. Fortunately the Micro instance is based on RedHat and has yum installed, as well as Ruby, Python and other things. I added the following:
sudo yum install git
sudo yum install gcc
sudo yum install gcc-c++
sudo yum install openssl-devel
git clone git://github.com/ry/node.git ~/src/node
cd ~/src/node
./configure
make
sudo make install
At this point I was able to type “node -v” in my ssh instance and see “v0.3.6-pre.” Success!
Coming up next time: a nice list of nodeJS resources to study, as well as a quick and dirty http server written in nodeJS (from their web site) which serves one page: “Hello World!”
Yes, I spent two days to put a web site up on the internet that only serves one static page. Ah, but it’s only the beginning!
It has been a while (almost a year) since I started my “Agile Blind Spots” series. If you’re like me, you may need a memory refresher: here is the first article, and here is the second article. If you’re also lazy like me, the basic gist of these articles was that, as of a year or so ago, I was finding Agile adoption:
strong in individual development teams
starting to push upstream, dealing with how work gets identified, sized and prioritized before being handed to a development team
weak in pushing downstream to improve the substantial amount of work that occurs (at least in larger organizations) to get a project into production after the development team is “done”
I dealt with the first two points in the first two articles, and left the third point unaddressed. I had made peace with the fact that this series would be left unfinished… until the last couple weeks. The impetus for this change of heart? A relatively new buzzword: DevOps.
I may follow up this article with some of my ideas around DevOps, but something else occurred to me while having this particular line of thought re-awakened. In my experience, Agile adoption in many organizations has taken a myopic view of success: make the development teams better and everything will be better.
One of the worst disconnects I’ve personally witnessed ended up creating a six month backlog of production deployments: the development teams were cranking out high quality releases only to have them sit and rot because Operations couldn’t keep up with the demand. The irony here is deep: while Agile proponents trumpeted the triumph of teamwork over individualism, the actual project was a failure: it wasn’t getting promoted to production for months after the victory was claimed! Then it hit me: this is a case of Premature Optimization.
Yes, I know this is normally applied at the coding level, but the premise works equally well for the process of turning a gleam in one’s eye into a tangible, usable product. The tie-in with Lean and the TPS is equally obvious: if you’re not working on solving your biggest bottleneck, it’s Muda. To my mind, the development teams became the biggest bottleneck for many organizations sometime in the early-to-mid 1990s. The Snowbird meeting was an acknowledgment that too many software projects were ending in failure. Even though there was probably no discussion of Lean or the TPS, I think those involved in the discussion believed they were seeking a solution to the biggest problem: seeking to remove the biggest bottleneck. (EDIT: Please see my note in the comments for clarification on this paragraph)
So your organization has turned your rag-tag group of software developers into high performing Agile teams. Congratulations! You’ve removed a bottleneck. Now where is your organization’s biggest bottleneck? I’m guessing it’s not the development team anymore. One of the prime candidates is Operations, but it could be anything. If you’re still focused on improving yesterday’s bottleneck, it’s Premature Optimization and it’s waste.
So what are your organization’s biggest bottlenecks? How do you know? Are they prioritized? Are you focusing your primary efforts on the worst bottleneck? If you’re not, maybe it’s time that you did.
Refactor the .js file with impunity
Let me share an example. Here is the function that needed test coverage, followed by the Jasmine test:
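The function and spec below are hypothetical stand-ins of the same shape (the names are mine, purely for illustration). Under Jasmine, describe/it/expect come from the framework; the tiny shims at the top exist only so the file runs on its own and should be removed in a real spec file.

```javascript
// Stand-ins so this file runs without Jasmine installed; under Jasmine
// these globals are provided by the framework (delete these three).
function describe(name, fn) { fn(); }
function it(name, fn) { fn(); }
function expect(actual) {
  return {
    toEqual: function (expected) {
      if (actual !== expected) {
        throw new Error('Expected ' + expected + ' but got ' + actual);
      }
    }
  };
}

// A hypothetical function needing test coverage.
function formatName(user) {
  if (!user || !user.first) { return 'Anonymous'; }
  return user.last ? user.first + ' ' + user.last : user.first;
}

// The Jasmine spec:
describe('formatName', function () {
  it('falls back to "Anonymous" when no user is given', function () {
    expect(formatName(null)).toEqual('Anonymous');
  });
  it('joins first and last names with a space', function () {
    expect(formatName({ first: 'Ada', last: 'Lovelace' })).toEqual('Ada Lovelace');
  });
  it('handles a missing last name', function () {
    expect(formatName({ first: 'Ada' })).toEqual('Ada');
  });
});
```

With a spec like this in place, the title of this section holds: you can refactor the .js file with impunity and the spec tells you immediately if you broke the behavior.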
I’m not confident I have the right answer to any of these questions. I’m hoping my readership will have some helpful hints now that I’ve made my ignorance a matter of public record.
So, what am I missing? What would make this a better test?
I have been following a thread in the Software Craftsmanship Google Group on the interview question “What personal projects do you hack on in your spare time?” As far as I’m concerned (and contrary to several opinions), this is an excellent interview question. The basis for my opinion is my own, personal experience. I’ve included my thread comment below:
I have very strong opinions on this subject but rather than rant, let me give you a concrete example: namely, me.
I went through a very dark time in my career as a software developer. I had bounced from job to job, working in some pretty nasty situations and the experiences had sapped my passion to the point that I was looking for a new career.
As a developer, I am very good at what I do. I’ve always been one of the best players on the team… except for those dark days. Even though my coding ability was top-notch, my apathy buried my ability.
During that time, had I been asked “what are you coding for fun on your own time?” I would have answered honestly: “absolutely nothing.” And you know what? Even though I had tons of ability, asking that question would have given the interviewer a hint that maybe they shouldn’t hire me. And they would be right. Nobody should have hired me. I was a liability.
Recently, I’ve found myself stating in several conversations around dealing with dead weight or hiring the right people “we’re not running a charity.” It may be harsh, but it is not my goal to rehabilitate someone who is wallowing in apathy.
The happy ending is that I was able to find the personal fortitude to rise above the negative impact of my environment. I realized that if I wanted to attract the attention of the right kind of employer I had to regain my passion. Since that time, I’ve never had to look for a job. They’ve come looking for me, including the company that I’m starting with on 3 January 2011: ThoughtWorks.
Recently I was honored to have the opportunity to work toward being approved to teach Ron Jeffries’ and Chet Hendrickson’s Agile Developer Skills course. I had my first opportunity to co-teach the class last week. It was a great class and I very much enjoyed working alongside Chet and Cheezy, two software craftsmen I respect highly.
If you follow the links above, you’ll notice a theme and, though it’s not explicit in the class’ title, Extreme Programming is a significant influencing factor in the course content. Like most software developers, I’ve felt XP’s major positive impact on my pursuit of the craft, and it’s great to see the practices hold up so well over time.
The ADS course utilizes some lecture, but it is primarily a hands-on workshop giving participants the opportunity to experience developing high-quality, working software using Agile principles and practices. This is the second time I have experienced the class (the first time was as a participant back in May) and it has remained an intense, thoroughly enjoyable way to either learn the practices for the first time or to delve deeper into Agile. I find it hard to believe that anyone could leave the course without learning a great deal. It is both challenging and insightful.
I think my favorite part of the class is the questions and how they help shape and guide the content. That feedback is crucial to the success of the course (maybe of any course) and, as an instructor, I find the questions leave me thinking about these principles long after the class is over.
Next time, I’ll share a couple of the questions and some of those thoughts. For now, though, I encourage you to take a look at Agile Skills Network and consider taking the course. Regardless of where you are in your software craftsman journey, the ADS course will encourage you to push further down the path!
I don’t think I’ll end up writing a third “Agile Blind Spots” post… there was something I was feeling very passionate about back in February but, for better or worse, this time I simply waited for the feeling to pass.
It has been a busy year. I knew that new ventures are all-consuming, but knowing it and living it are two very different things. I’m closing in on a full year as an independent consultant, the last six months of which have been with one client. Things have been going great and the client is looking for ways to extend the contract, possibly through the end of the year.
That’s a long time to spend in the same environment, and this comes with a significant danger. In Weinberg’s book, The Secrets of Consulting (you can find the book online, but you’ll probably have to buy it used), he refers to this as “Prescott’s Pickle Principle”:
Cucumbers get more pickled than brine gets cucumbered… A small system that tries to change a big system through long and continued contact is more likely to be changed itself.
I experienced this first-hand in a recent conversation with a friend and colleague who just started working with the same client a little over a month ago. I had just had a closed-door meeting with a member of upper management in which they laid out their plan to address several challenges in their organization. I didn’t completely agree with the plan, but I have worked with this group long enough to know when the decision is final so, except for a few clarifying questions, I accepted the news with very little feedback. As I explained to my friend how this might impact him, his response was “this is the wrong solution. It’s going to cause more problems than it solves!”
My initial thought was “technically he’s right, but he doesn’t have enough experience with this client to know when to accept the inevitable.” And that’s when it hit me: I’ve been pickled.
My job as a consultant is to be a change agent. “Long and continued contact” with a client diminishes my ability to fulfill that role. That is why Weinberg gives the following advice:
To avoid getting pickled, a consultant must not spend too much time with one client. If you can’t avoid this, at least break up the time by working with other clients, even for free… It’s hard to be effective, though, if you’re always switching jobs or clients. Change generally takes both time and continued contact, or at least one of the two. The challenge, then, is how to get the client in long, continued contact with some kind of brine, without the consultant even being present.
The challenge indeed! There is a natural tension: time and continued contact erode a consultant’s ability to effect change, yet effecting change takes time and continued contact. I think there are ways to counteract this and I hope my efforts to do so in my current situation will prove effective. But it is also important to recognize the warning signs and to act in the interest of both your client and yourself.
What are some ways you’ve found to keep your clients “in continued contact with some kind of brine without even being present?”
I’ll get back to Agile Blind Spots in my next post, but I have discovered a great app and piece of hardware that has turned my iPhone into an art easel.
Brushes is an iPhone app that was highlighted during the introduction of the iPad a few weeks ago. One of my hobbies is Graphic Arts and I have often bemoaned the fact that I can’t find more time to spend on it. I even carry around a Wacom tablet… that rarely gets used: partially because there is a (very little) bit of setup required, but mostly because I’ve never been able to get comfortable with drawing in one place (on the tablet) while watching someplace else (the monitor) to see how it looks.
Once the iPad rumors hit fever pitch, my first thought was that it would make a great graphic arts platform. When Apple highlighted Brushes during the iPad unveiling I decided to purchase it for my iPhone. I played with it a little, but found the lack of accuracy due to using a fingertip somewhat off-putting. I’ve put in many hundreds of hours sketching with a pencil or pen, so that approach is completely natural to me. I thought using my fingertip would be close enough to be satisfying, but… well, it isn’t. I really need a writing utensil in my hand.
While I’m really excited about the iPad, it’s looking like it will not ship with a stylus and this is a bit of a disappointment. Enter Ten One Design and the Pogo Sketch, a stylus that works with the iPhone, with newer Macbook trackpads and the iPad (well, once they’re released anyway… Hurry up Apple!!!).
But wait, there’s more! The pièce de résistance: thanks to a built-in web server that lets you transfer drawings from Brushes on your iPhone to your Mac, plus a companion Mac app, Brushes Viewer, you can get a high-resolution version of your small-screen artwork. Since the iPhone app actually records your strokes, it can replay them at a higher resolution. Not only that, you can actually watch yourself create your masterpiece and save it off as a Quicktime movie! The max resolution for a static image is 1920 x 2880 and it looks incredible. While I wasn’t happy with the results I was getting with my fingertip, I’m very happy with the results using a stylus. Here is a medium quality (960 x 640) version of my first attempt using a stylus (click on the image to see it full size):
Paul Nelson, 15 February, 2010
The combination of the Brushes app, the Pogo Sketch stylus and Brushes Viewer means I can carry an art studio in my pocket… well, close enough.
I can’t wait to play with Brushes and the Pogo Sketch on an iPad! Oh… and look for a nice tie-in with my Agile Blind Spots series of articles soon!