Slimmed Down Software: A Lean, Groovy Approach
presented by Hamlet D'Arcy
This talk turned out to be much more a review of Lean principles than a discussion of how Groovy supports them, but that was fine by me. Obviously the principles are far more important than the language you’re using.
The key principle of Lean Development is to eliminate waste, which in essence means to stop doing things that you don’t need to do.
Interestingly, Hamlet proposed getting more sleep and ingesting less caffeine as good development practices.
Meetings are more often than not a form of waste, especially when they seek to produce 100% consensus. Hamlet talked through the four ways of arriving at a decision, from dictatorship at one end to unanimity at the other, with democracy and something else in the middle. He didn't really draw a conclusion from this, but my guess is that he was warning us to stay away from the extreme ends. Certainly, constantly striving for unanimity would cause a lot of waste.
He highlighted unfinished work as an example of waste. For example, choosing to half-develop 6 features rather than finish 2 or 3 causes waste. He didn’t really go into why this is waste, but my immediate thoughts were:
1) that it requires energy – both from individual developers and from the team – to keep abreast of unclosed loops; and
2) that knowledge learned towards the end of one feature may accelerate the development or prevent a change in another feature if it’s developed later rather than simultaneously.
While talking about unfinished work as a form of waste, he criticised distributed version control (e.g. Git, Mercurial/Hg) based on the fact that local branches, which are essentially code that’s not committed to head, are unfinished work and hence waste.
He recommended Groovy for unit testing, even if the production code isn’t Groovy.
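A minimal sketch of what that can look like: a JUnit-3-style test written in Groovy (assuming GroovyTestCase is available), where the code being exercised could just as well be Java production code. Groovy's "power assert" prints the value of every sub-expression when an assertion fails, which is a big part of the appeal.

```groovy
// A JUnit-style test written in Groovy; the method under test here is
// standard library code, but it could equally be your Java production code.
// On failure, Groovy's power assert shows each sub-expression's value.
class ReverseTest extends GroovyTestCase {
    void testReversingAString() {
        assert 'groovy'.reverse() == 'yvoorg'
    }
}
```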
He briefly discussed Value Stream Mapping, which from what I could tell is basically graphing out the flow of a process, including dependencies, time delays and actions, some of which may only be required to be performed every Nth time through the process (e.g. maintenance). My take-away message from the example shown was that you shouldn’t just accept time that is wasted waiting for a process to complete, but should schedule other tasks, that you know will need to be done in the future anyway, to occupy this time. (This is all in reference to analysing one’s process, not the flow of a program.)
While discussing Value Stream Mapping, he mentioned that you really want to be measuring activities and waste in $$$, not something more abstract like time or gut feel. If you’re making business decisions, $$$ is the unit that makes sense.
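As a back-of-the-envelope illustration of pricing waste in dollars rather than minutes (all of these figures are invented, not from the talk), a quick Groovy calculation might look like:

```groovy
// Hypothetical numbers: what a 15-minute wait on every build costs per day.
def developers   = 5
def hourlyRate   = 80     // $ per developer-hour
def waitPerBuild = 0.25   // hours each developer is blocked per build
def buildsPerDay = 8

def dailyCost = developers * hourlyRate * waitPerBuild * buildsPerDay
println "Waiting on builds costs roughly \$${dailyCost} per day"
```

Put that way, "the build is slow" becomes a number a business decision can be made about.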
He referred us to an article on DZone called the “Seven Wastes of Software Development” (though the top hit for this phrase on Google is a 2002 paper by Mary Poppendieck [PDF]).
Hamlet postulated that languages with less syntax, e.g. Groovy (or Scala), allow one to write unit tests that look/read a lot closer to the original requirements specification.
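For instance, Groovy lets you quote a method name, so a test name can echo the requirement almost word for word (the Account class here is invented for illustration):

```groovy
// Hypothetical requirement: "A new account starts with a zero balance."
// The quoted method name still begins with "test", so JUnit 3 picks it up.
class AccountTest extends GroovyTestCase {
    void "test a new account starts with a zero balance"() {
        def account = new Account()   // Account is an invented class
        assert account.balance == 0
    }
}
```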
He talked a little bit about EasyB, a Groovy DSL for BDD, and explained how its hierarchical test cases allow you to write multiple tests based on a shared scenario.
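From memory, an easyb scenario reads something like the following (the Account class and the figures are mine, invented for illustration; given/when/then are the DSL's building blocks):

```groovy
// account.story - an easyb story file (names and values invented)
scenario "withdrawing from an account with sufficient funds", {
    given "an account with a balance of 100", {
        account = new Account(balance: 100)
    }
    when "50 is withdrawn", {
        account.withdraw(50)
    }
    then "the balance should be 50", {
        account.balance.shouldBe 50
    }
}
```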
He claimed that the Agile idea that “the tests are the documentation” has been over-sold by evangelists and under-delivered by developers.
He showed how the Spock testing framework has you list all your assertions as Boolean expressions in an “expect” block, eliminating the need to write an assertThat(...) call on every single line.
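A minimal Spock specification in that style (assuming Spock is on the classpath; the example values are mine, not from the talk) looks roughly like:

```groovy
import spock.lang.Specification

class MaxSpec extends Specification {
    def "Math.max returns the larger of its two arguments"() {
        expect:
        // each line is a plain Boolean expression, no assertThat(...) wrappers
        Math.max(1, 3) == 3
        Math.max(7, 4) == 7
        Math.max(0, 0) == 0
    }
}
```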
He raised the idea that every time we are called upon to make a decision there are three potential outcomes: Yes, No, or Defer.
He talked about something called “Real Options”, which posits that every possible decision outcome has both a cost and a deadline or expiry date. It almost never makes sense to commit to any decision before the expiry of the next option, because that’s when you’ll have the most information. The problem with achieving this is that human brains are wired to eliminate unknowns by locking down decisions as early as possible. The solution to that is to make an action point out of deferred decisions, the required action being to collect more information and reconvene when the decision needs to be made.
All of the above falls under the banner of “delayed commitment”. It occurred to me that a good way to get better at this is to constantly ask yourself and everyone around you: “Is this the best time to be making this decision?”
He mentioned Canoo a couple of times, which was the firm he was working with while experimenting with all this lean and Groovy stuff. (I think they were doing something with App Engine?)
He said that his team stopped using fixed-length iterations because “two weeks is not a unit of value but a unit of time”, i.e. you should be releasing when you have value to deliver, no sooner or later.
He suggested reducing the number of integration tests because having these tests fail due to valid changes to the system is a form of waste. I actually disagree on this one. Around my workplace, the idea that you should delete a test because it keeps failing is an ongoing joke. Obviously you shouldn’t have tests that duplicate each other or that fail for no reason – both of which result in waste. However, if you’re doing TDD, you probably want to change the test first anyway, so it shouldn’t break when you change the implementation, it should pass! It was very interesting to hear someone working with a dynamic language suggest having fewer integration tests. My assumption has always been that, if anything, you would need more tests at this level to prove the correctness of the wiring of your essentially un-typed components than you would with static typing. Now that I think about it a bit more deeply, I was probably wrong – you shouldn’t need any more tests; you should need exactly the same number. Full coverage is full coverage. Having more wouldn’t prove anything.
I really liked this: He emphasised that when you decide to make a change to your process it is an experiment – you should define a time limit and then assess, as professionally as possible (i.e. without bias), the results of the experiment before deciding whether or not to make a permanent change or to start a different experiment.
He criticised what he called “closed-door architecture”, where a small set of “architects” within an organisation decide what technologies will be used and dictate these to the rest of the developers. He didn’t mention his exact reasoning for talking this down, but the obvious ones I see are the demotivational effect on the non-architect employees and the potential to miss good ideas by not providing everyone with an opportunity to contribute their brainpower and expertise to the problem. I think, in order for this to work well, you need a pretty mature bunch of developers. If you're going to canvass everyone's opinion, then everyone needs to be really good at leaving their ego at the door, otherwise you're going to end up in a six-hour meeting about which developer has the best idea rather than which idea is best for the customer.
In the context of introducing Agile practices to an organisation, he discussed an equation from some book that says the value of a change to an organisation is proportional to Why over How, meaning that a big organisational change (a large How) has to tackle a big problem or create a big advantage (a large Why) in order to provide value. Changes that can deliver a large benefit with minimal impact on the work (a large Why over a small How) are obviously the sweet spot in terms of increasing value.
Lastly he showed an example of a Groovy script that used a @Grab annotation to download a Maven dependency and bring it into the classpath at runtime. Very cool.
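Roughly what such a script looks like (the artifact here is my choice, not necessarily the one he showed): @Grab resolves the dependency from a Maven repository at runtime and puts it on the classpath before the script body runs.

```groovy
// Grapes fetches commons-lang3 (and its transitive dependencies) on first
// run, caches it locally, and adds it to the script's classpath.
@Grab('org.apache.commons:commons-lang3:3.12.0')
import org.apache.commons.lang3.StringUtils

assert StringUtils.capitalize('groovy') == 'Groovy'
```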