Friday, October 29, 2010

SpringOne 2010: Groovy and Concurrency

I spent the second-last week of October at SpringOne 2GX 2010 in Chicago and I thought some of you might get something useful out of my notes. These aren’t my complete reinterpretations of every slide, but just things I jotted down that I thought were interesting enough to remember or look into further.

Groovy and Concurrency
presented by Paul King

Paul started by mentioning a library called Functional Java which, to me, looks like an attempt at porting a bunch of ideas present in Scala over to Java, and another one called Kilim, which is an Actors library for Java.

Paul said that his main argument for why you should use Groovy, rather than Scala or Clojure, is that Groovy is closer to the Java syntax and, hence, is more easily integrated with Java. (In my personal experience, I can’t say I’ve ever had any problems integrating Java with Scala. Going the other way (using Scala in Java) has some gotchas, but wouldn't be described as hard.)

Groovy supports a very nifty pipe (‘|’) operator extension to the java.lang.Process class, allowing you to easily pipe stdout to stdin between two or more processes, just like in a shell.
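
Scala fans can get a similar effect from the scala.sys.process DSL (in versions that ship it), where #| plays the same role. A rough sketch of mine, not the Groovy API:
import scala.sys.process._
// pipe stdout of one process into stdin of another, shell-style
val out = ("echo hello world" #| "grep hello").!!
println(out)  // "hello world"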

I've now heard Google Collections (now part of the Guava libraries) mentioned for about the 5th time this week. I really should check out what these are because they’re very popular!

Groovy supports adding functions to classes, and even individual objects, at runtime. That is, your code can contain statements that add members to an existing type. This is not creating an inline definition of a new type, but actually changing the type at runtime, as you might do in JavaScript. They call this Dynamic Groovy. I've never really got my head around why meta-programming - programs that write programs (and then run them) - is a good idea, but I've also read Paul Graham saying that this feature gave him a major competitive advantage, so there must be something mind-bending about it. Perhaps I just need to give it a try?

Groovy has an @Immutable annotation that, as well as making the contained fields final (without you declaring them so), is also shorthand for adding getters, toString(), hashCode() and equals() to a class based on just the field names and types. Case classes in Scala provide the same functionality along with the added bonus of making pattern matching simple.
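
For comparison, a minimal Scala case class gets you all of that in one line:
case class Point(x: Int, y: Int)
val p = Point(1, 2)
println(p)                  // Point(1,2) - toString for free
println(p == Point(1, 2))   // true - structural equals/hashCode for free
p match { case Point(x, _) => println(x) }  // and pattern matching for free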

He mentioned two Java concurrency libraries, one called Jetlang for message-based concurrency (the Jetlang site itself refers to Jetlang being similar to Scala's Actors) and JPPF, a grid-computation library. The JPPF intro uses domain language similar to that of Spring Batch, with regards to jobs and tasks.

He talked a bit about GPars (short for Groovy Parallel Systems), a concurrency library specifically for Groovy.

He also said that software transactional memory looked interesting but didn’t go into it much beyond mentioning the Multiverse library. I have seen this term bandied around a little, in particular due to a couple of people attempting to implement it in Scala, but I've never looked into it - its frequency hasn't yet punched through my signal/noise filter.

He gave a little preview of what the GPars guys are trying to achieve with a “Dataflows” library. You use a DSL to describe a dependency graph between steps in your (complex, but parallelisable) algorithm and Dataflows will automatically run in parallel those parts of the algorithm that are not dependent, synchronising at the points where multiple asynchronous outputs are required to enter the next step.
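
To get a feel for the dataflow idea, here's a tiny sketch of my own (using Scala's SyncVar, not the GPars API): each step blocks until its inputs have been written, so independent steps can run concurrently and synchronise themselves.
import scala.concurrent.SyncVar
val a, b, sum = new SyncVar[Int]
// this step runs on its own thread, blocking until both a and b are filled
new Thread(new Runnable {
  def run(): Unit = sum.put(a.get + b.get)
}).start()
a.put(40)  // the order in which the inputs arrive doesn't matter
b.put(2)
println(sum.get)  // 42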

SpringOne 2010: How to Build Business Applications using Google Web Toolkit and Spring Roo

I spent the second-last week of October at SpringOne 2GX 2010 in Chicago and I thought some of you might get something useful out of my notes. These aren’t my complete reinterpretations of every slide, but just things I jotted down that I thought were interesting enough to remember or look into further.

How to Build Business Applications using Google Web Toolkit and Spring Roo
presented by Amit Manjhi

Amit works at Google on the GWT (Google Web Toolkit) team. In this session, he showed the ease of creating a GWT app with Spring Roo and then talked through a whole swathe of GWT best practices.

Roo supports the addition of JSR-303 constraints [PDF] as part of adding a field to an entity.

Creating a GWT webapp from an existing Roo domain model is as simple as typing gwt setup (and hitting enter) in the Roo console.

The default webapp generated by Roo automatically handles validation on the server side. I have to say that the default rendering for errors was pretty poor. The error he showed as an example just appeared at the top of the form and said something like "this field is required", without any reference to which field had caused the error. Room for improvement there, but you certainly get a lot of webapp for doing nothing much at all.

Bookmarkable URLs work out of the box, although I'm not sure if this meant that all URLs are bookmarkable by default or whether it's just easy to make URLs bookmarkable when you need them.

Amit showed a version of the same UI that some Google guys had jazzed-up with some nicer CSS and a few changes to the layout components. It was using some kind of slide transition where, when an item was selected from a list, the list would slide off to the left and the item detail slide in from the right, then vice versa when you went back. Looked very neat.

The Spring Roo addon for GWT generates the site using a whole bunch of best practices as learned by the Google Engineers who’ve been using GWT to develop the AdWords site.

The Model-View-Presenter pattern was presented as a suitable pattern for client-side GWT. This decoupling pattern allows different view implementations to be attached to the same Presenter.
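
A bare-bones sketch of that decoupling in Scala (mine, not GWT code): the Presenter talks only to a view interface, so a test can plug in a recording stub where production plugs in real widgets.
trait CustomerView {
  def showName(name: String): Unit
}
class CustomerPresenter(view: CustomerView) {
  def load(): Unit = view.showName("Alice")  // a real Presenter would fetch this from a service
}
class RecordingView extends CustomerView {
  var lastShown = ""
  def showName(name: String): Unit = { lastShown = name }
}
new CustomerPresenter(new RecordingView).load()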

Using DTOs (Data Transfer Objects) (as opposed to sending entities to the webapp) was recommended, though he did note that, coded manually, DTOs typically violate the DRY (don’t repeat yourself) principle. This downside is overcome by creating an empty interface for the DTO, annotated with a @ProxyFor annotation. GWT then automagically creates a proxy object for the entity class named in the annotation and this proxy acts as a DTO. The Google guys call this an Entity Proxy. From what I could tell, the proxy automatically proxies all value fields of the entity. You can provide annotated methods on the DTO interface that allow lazy navigation of entity relationships.

The default Roo project doesn't use GWT-RPC, chiefly because of the bandwidth implications when mobile devices are involved. Instead, they use an object on the client called a RequestFactory that talks JSON to a generic RequestFactoryServlet on the server.

RequestFactory receives notifications of the side-effects of its server-side calls and posts these as events on an event bus in the client.

They’ve replaced the Presenter with something they call an Activity that abstracts away a lot of the boilerplate normally required in a Presenter. The common parts of Activities are themselves abstracted out into a generic ActivityManager.

He highlighted a proposed method for allowing search engine web crawlers to follow Ajax bookmarkable URLs by using #! to prefix the anchor rather than just #. Google have published an article called 'Making AJAX Applications Crawlable' where they discuss the details.

The talk became quite confusing past about half-way when it stopped being obvious (to me at least, having never written a GWT app) which code samples were part of the server and which were run on the client. My kind request to any GWT talk presenters: please introduce each code-sample slide by saying “This code runs on the {server|client}”.

GWT has a nice SafeHtmlUtils class for escaping HTML entered by users to avoid cross-site scripting (XSS) attacks.
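
The underlying idea is just entity-escaping the risky characters; a minimal sketch in Scala (not the GWT implementation):
def escapeHtml(s: String): String =
  s.replace("&", "&amp;")  // must be first, to avoid double-escaping
   .replace("<", "&lt;")
   .replace(">", "&gt;")
   .replace("\"", "&quot;")
   .replace("'", "&#39;")
println(escapeHtml("<script>alert('xss')</script>"))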

GWT 2.1 contains a client-side logging library in the vein of the JDK logging. Spring Roo-generated GWT apps come with a handler that allows client events to be logged on the server. You can also use a GWT <property-provider> element to enable and disable logging at runtime on a per-user scope. (There's some information about property-provider here under the heading 'Elements for Deferred Binding'.)


Wednesday, October 27, 2010

SpringOne 2010: Creating the Next Generation of Online Transaction Authorization

I spent the second-last week of October at SpringOne 2GX 2010 in Chicago and I thought some of you might get something useful out of my notes. These aren’t my complete reinterpretations of every slide, but just things I jotted down that I thought were interesting enough to remember or look into further.

Creating the Next Generation of Online Transaction Authorization
presented by Maudrit Martinez, Anatoly Polinsky and Vipul Savjani

These three guys from Accenture presented patterns of architecture with Spring Batch and Spring Integration that they have used in production systems for both online and batch processing of financial transactions.

Their diagram showed two technologies – Pacemaker and Corosync – that I hadn’t heard of before. Apparently Corosync is the clustering technology recommended by the Rabbit guys. They also used a product called Hazelcast for a distributed cache and GridGain for a compute grid.

They combined Spring Batch with GridGain in order to partition the processing of a batch of transactions across multiple nodes. The presenter was fairly impressed with GridGain’s over-the-wire classloading. (To be fair, this idea has been around at least since RMI was released in '97.)

Rather than passing the transaction data around their whole integration network, they instead placed the data in the distributed cache and passed around only the keys to the items in the cache.
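
This is the "claim check" pattern from Enterprise Integration Patterns. A rough Scala sketch of the idea, with invented names and a plain map standing in for the distributed cache:
import java.util.UUID
import java.util.concurrent.ConcurrentHashMap
case class Txn(account: String, amountCents: Long)
val cache = new ConcurrentHashMap[String, Txn]()
def checkIn(txn: Txn): String = {
  val key = UUID.randomUUID.toString
  cache.put(key, txn)
  key  // only this small key travels through the integration network
}
def claim(key: String): Txn = cache.get(key)
val key = checkIn(Txn("12345", 9900L))
println(claim(key))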

They made use of a library called ScalaR, which is a DSL for using GridGain in Scala. They used Scala to process the transactions chiefly because of the availability of the ScalaR DSL, because of its provision of Actors for simplified concurrent programming, and because, needing performance, they didn’t want to use an interpreted language like Groovy.

They mentioned that parts of GridGain (though perhaps only the DSL) have reportedly been re-written in Scala, and that the GridGain team chose Scala over Groovy because of its compiled, static typing providing better performance than interpreted languages.

They showed where their code was calling Hazelcast and I noted that there wasn’t any attempt at decoupling the cache implementation – a Hazelcast instance was retrieved by calling a static method. Perhaps it was just some demo code they'd thrown together.

I noticed a cool way of converting a Scala list to a Java list that I hadn’t seen before:
new ArrayList ++ myScalaList
From what I can tell, this ++ operator isn't standard (at least you can't use it in the Scala 2.8 REPL), but it was an interesting, succinct syntax that caught my eye.
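
For the record, the standard way to get there is through the JavaConverters:
import scala.collection.JavaConverters._
import java.util.ArrayList
val myScalaList = List(1, 2, 3)
val javaList = new ArrayList(myScalaList.asJava)  // copies into a mutable Java list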

They mentioned the STOMP protocol, which is a text-based protocol for message broker interoperability supported by Rabbit MQ, among others.

The Spring Integration config they used to send a message to Rabbit didn’t have any implementation, but just an interface they had defined which was then proxied by Spring to send a payload onto the channel.

They mentioned a couple of times that the advantage of Rabbit MQ over JMS is that Rabbit's AMQP is an on-the-wire protocol whereas JMS is a Java API. They didn’t elaborate on why this was an advantage, but I suppose the protocol easily allows other programming languages to integrate with the messaging, whereas a Java interface doesn’t offer any standard way to do that.

Their implementation for processing transactions used a chain of three actors: #1 for coordinating the authorisation of the transaction (I think – it may have been for coordinating the authorisation of multiple transactions?), #2 for performing the authorisation, which chiefly meant looking up a collection of rules and then passing these rules off to Actor #3, which was an executor for the Rules.
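
A skeletal version of that chain using (2010-era) scala.actors might look like this - the message names are my guesses, not theirs:
import scala.actors.Actor._
case class Authorize(txnId: String)
case class Execute(rules: List[String], txnId: String)
val ruleExecutor = actor {
  loop { react { case Execute(rules, id) => println("applying " + rules + " to " + id) } }
}
val authorizer = actor {
  loop { react { case Authorize(id) => ruleExecutor ! Execute(List("limit-check", "fraud-check"), id) } }
}
val coordinator = actor {
  loop { react { case id: String => authorizer ! Authorize(id) } }
}
coordinator ! "txn-42"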

While searching for an online profile for Anatoly Polinsky, I found this great presentation on Spring Batch that he apparently authored. It also looks like he has released some of the code from the presentation in a project called 'gridy-batch' on github.


Tuesday, October 26, 2010

SpringOne 2010: Introduction to Spring Roo

I spent the second-last week of October at SpringOne 2GX 2010 in Chicago and I thought some of you might get something useful out of my notes. These aren’t my complete reinterpretations of every slide, but just things I jotted down that I thought were interesting enough to remember or look into further.

Introduction to Spring Roo
presented by Rod Johnson and Stefan Schmidt

I didn’t take a whole lot of notes in this session because it was entirely demo, however the ease with which Spring Roo can quickly create working (though rudimentary) CRUD web applications was quite amazing.

Rod gave some good explanation around the magic that makes it happen. Basically, the commands given to Roo result in it generating two sets of files. One set is the Java files that the developer can work with, for example adding business logic, without worrying that Roo might overwrite the contents of these files later (it doesn't). The Java classes are annotated with Roo annotations, e.g. @RooEntity, @RooToString, @RooJavaBean, that Roo uses to determine what additional functionality it will add to the annotated class.

The functionality that Roo adds in is actually defined in aspects that are maintained in files juxtaposed with the Java files. From what I could tell, there is basically an aspect for each Roo annotation on each Roo class, e.g. Customer.java has CustomerRooEntity.aj, CustomerRooToString.aj, etc. These aspect files are updated or re-written by Roo automatically as the Roo model is changed, so any changes you make to the aspect source files would get wiped out by the next Roo operation. This is what allows Roo to provide round-tripping (although, in truth, I think it’s the illusion of round-tripping): it generates an empty, annotated class that you can edit to your heart’s content without fear of interference from Roo’s automated operations, while allowing all the Roo-controlled stuff to be “round-tripped” by keeping it in aspects that are kept up-to-date with Roo-controlled changes.

At first I thought this meant that every Roo class had several aspects over it at runtime, which sounded like a performance nightmare, but I found out later that the aspects are applied at compile-time, essentially acting as source mix-ins to the Java class file that will be deployed.

I spent a little time thinking about how one might try to achieve a similar thing with Scala. The first thing that occurred to me is that you wouldn’t need to use aspects. Because Scala supports a form of multiple inheritance through traits, the extraneous data and functions that Roo is storing in aspects could, in Scala, just be separate traits that the main class extends. This would also alleviate the need for the annotations, if you wanted to get rid of them. (I suppose the annotations have the advantage that they're not actually tied to the code in the aspects. I'm fairly sure you can remove and introduce the annotations in the Java files and the aspects will disappear and reappear accordingly (that part is true round-tripping), while the same would not be true of traits - they would have to exist before you could extend them, unlike the aspects.)
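
Concretely, I imagine the trait version looking something like this (my sketch, obviously not what Roo generates):
// the generated boilerplate would live in traits...
trait CustomerBean {
  var name: String = _
  var email: String = _
}
trait CustomerStringify { self: CustomerBean =>
  override def toString = "Customer(" + name + ", " + email + ")"
}
// ...and the hand-written class just mixes them in
class Customer extends CustomerBean with CustomerStringify {
  // business logic goes here, never touched by the generator
}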

The other little thing I thought is that, using Scala, the @RooJavaBean functionality would be relatively unnecessary, seeing as Scala already has the @BeanProperty annotation to do the same thing (albeit on a per-field basis). So while the Roo code generation saves a lot of boilerplate for Java developers, I think Scala devs can achieve pretty much the same thing with some sensible common traits and minimal extra effort. (I'm just thinking about the entity stuff we looked at here. It's likely there's Roo goodness in the web tier for which Scala cannot naturally provide a neat alternative.)
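
For reference, the Scala one-liner that replaces all the generated getters and setters:
import scala.reflect.BeanProperty  // moved to scala.beans in later Scala versions
class Customer(@BeanProperty var name: String)
val c = new Customer("Alice")
println(c.getName)  // getName()/setName() generated for the Java world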

At the end of the presentation, they showed the latest version of the Spring Insight project – I’m guessing because they used Spring Roo to develop it(?). It looks very cool, and the level of detail you can browse down to is amazing, e.g. you can see the JDBC calls issued during a web request. I've found in the past that the problem with tools that have this much data is always in figuring out how best to represent it all sensibly and aiding the user in selecting where to drill down. From the little I’ve seen, some of the charts they have seemed to do a good job of this. It's definitely worth a try back home to see what it can do. They are currently working on a version that will be able to be deployed against production systems.

Monday, October 25, 2010

SpringOne 2010: Slimmed Down Software: A Lean, Groovy Approach

I spent the second-last week of October at SpringOne 2GX 2010 in Chicago and I thought some of you might get something useful out of my notes. These aren’t my complete reinterpretations of every slide, but just things I jotted down that I thought were interesting enough to remember or look into further.

Slimmed Down Software: A Lean, Groovy Approach
presented by Hamlet D'Arcy

This talk turned out to be much more of a review of Lean principles than about how Groovy supports these principles, but that was fine by me. Obviously the principles are far more important than the language you’re using.

The key principle of Lean Development is to eliminate waste, which in essence means to stop doing things that you don’t need to do.

Interestingly, Hamlet proposed getting more sleep and ingesting less caffeine as good development practices.

Meetings are more often than not a form of waste, especially when they seek to produce 100% consensus. Hamlet talked through the four forms of decision arrival, from dictatorship at one end to unanimity at the other, with democracy and something else in the middle. He didn't really make a conclusion out of this, but my guess is that he was warning us to stay away from the extreme ends. Certainly, constantly trying to achieve unanimity would cause a lot of waste.

He highlighted unfinished work as an example of waste. For example, choosing to half-develop 6 features rather than finish 2 or 3 causes waste. He didn’t really go into why this is waste, but my immediate thoughts were:
1) that it requires energy – both from individual developers and from the team – to keep abreast of unclosed loops; and
2) that knowledge learned towards the end of one feature may accelerate the development or prevent a change in another feature if it’s developed later rather than simultaneously.

While talking about unfinished work as a form of waste, he criticised distributed version control (e.g. Git, Mercurial/Hg) based on the fact that local branches, which are essentially code that’s not committed to head, are unfinished work and hence waste.

He recommended Groovy for unit testing, even if the production code isn’t Groovy.

He briefly discussed Value Stream Mapping, which from what I could tell is basically graphing out the flow of a process, including dependencies, time delays and actions, some of which may only be required to be performed every Nth time through the process (e.g. maintenance). My take-away message from the example shown was that you shouldn’t just accept time that is wasted waiting for a process to complete, but should schedule other tasks, that you know will need to be done in the future anyway, to occupy this time. (This is all in reference to analysing one’s process, not the flow of a program.)

While discussing Value Stream Mapping, he mentioned that you really want to be measuring activities and waste in $$$, not something more abstract like time or gut feel. If you’re making business decisions, $$$ is the unit that makes sense.

He referred us to an article on DZone called the “Seven Wastes of Software Development” (Though the top hit for this phrase on Google is a 2002 paper by Mary Poppendieck [PDF])

Hamlet postulated that languages with less syntax, e.g. Groovy (or Scala), allow one to write unit tests that look/read a lot closer to the original requirements specification.

He talked a little bit about EasyB, a Groovy DSL for BDD, and explained how its hierarchical test cases allow you to write multiple tests based on a shared scenario.

He claimed that the Agile idea that “the tests are the documentation” has been over-sold by evangelists and under-delivered by developers.

He showed how the Spock testing framework has you list all your assertions as Boolean expressions in an “expect” block, eliminating the need to write an assertThat(... call on every single line.

He raised the idea that every time we are called upon to make a decision there are three potential outcomes: Yes, No, or Defer.

He talked about something called “Real Options”, which posits that every possible decision outcome has both a cost and a deadline or expiry date. It almost never makes sense to commit to any decision before the expiry of the next option, because that’s when you’ll have the most information. The problem with achieving this is that human brains are wired to eliminate unknowns by locking down decisions as early as possible. The solution to that is to make an action point out of deferred decisions, the required action being to collect more information and reconvene when the decision needs to be made.

All of the above being under the banner of “delayed commitment”, it occurred to me that a good method for getting good at this is to constantly be asking yourself and everyone around you the question: “Is this the best time to be making this decision?”

He mentioned Canoo a couple of times, which was the firm he was working with while experimenting with all this lean and Groovy stuff. (I think they were doing something with App Engine?)

He said that his team stopped using fixed-length iterations because “two weeks is not a unit of value but a unit of time”, i.e. you should be releasing when you have value to deliver, no sooner or later.

He suggested reducing the number of integration tests because having these tests fail due to valid changes to the system is a form of waste. I actually disagree on this one. Around my workplace, the idea that you should delete a test because it keeps failing is an ongoing joke. Obviously you shouldn’t have tests that duplicate each other or that fail for no reason – both of which result in waste. However, if you’re doing TDD, you probably want to change the test first anyway, so it shouldn’t break when you change the implementation, it should pass! It was very interesting to hear someone working with a dynamic language suggest having fewer integration tests. My assumption has always been that, if anything, you would need more tests at this level to prove the correctness of the wiring of your essentially un-typed components than you would with static typing. Now that I think about it a bit deeper, I was probably wrong - you shouldn't need any more tests - you should need exactly the same amount. Full coverage is full coverage. Having more wouldn't prove anything.

I really liked this: He emphasised that when you decide to make a change to your process it is an experiment – you should define a time limit and then assess, as professionally as possible (i.e. without bias), the results of the experiment before deciding whether or not to make a permanent change or to start a different experiment.

He criticised what he called “closed-door architecture”, where a small set of “architects” within an organisation decide what technologies will be used and dictate these to the rest of the developers. He didn’t mention his exact reasoning for talking this down, but the obvious ones I see are the demotivational effect on the non-architect employees and the potential to miss good ideas by not providing everyone with an opportunity to contribute their brainpower and expertise to the problem. I think, in order for this to work well, you need a pretty mature bunch of developers. If you're going to canvass everyone's opinion, then everyone needs to be really good at leaving their ego at the door, otherwise you're going to end up in a six-hour meeting about which developer has the best idea rather than which idea is best for the customer.

In the context of introducing Agile practices to an organisation, he discussed an equation from some book that says that the value of a change to an organisation is proportional to Why over How, meaning that a big organisational change (a large How) has to tackle a big problem or create a big advantage (a large Why) in order to provide value. Changes that can provide a large benefit with minimal impact on the work (that's a large Why over a small How) are obviously the sweet spot in terms of increasing value.

Lastly he showed an example of a Groovy script that used a @Grab annotation to download a Maven dependency and bring it into the classpath at runtime. Very cool.



Sunday, October 24, 2010

SpringOne 2010: What's New in Spring Framework 3.1

I spent the second-last week of October at SpringOne 2GX 2010 in Chicago and I thought some of you might get something useful out of my notes. These aren’t my complete reinterpretations of every slide, but just things I jotted down that I thought were interesting enough to remember or look into further.

What’s New in Spring Framework 3.1
presented by Juergen Hoeller

Juergen started off with a review of the new features that were added in 3.0
(I’ve only noted down things that I’m not already using but thought are probably worth trying out)
Next was what’s coming in 3.1...
  • Servlet 3.0 (I didn't know much about this, but there was a good overview at JavaOne [PDF])
  • 'Environment Profiles' will allow for placeholder resolution based on a specified environment name (or names). This allows different configuration in, for example, dev and production, without having two different deployment artifacts. I think he also mentioned an Environment abstraction, which I assume would be for accessing the same configuration programmatically.
  • They are putting some effort towards bringing the convenience of the specialised XML namespace elements, e.g. the task: namespace, to the annotation-based Java config.
  • An abstraction for Cache implementations (including implementations for EHCache and GemFire to begin with), along with a @Cacheable annotation for aspect-oriented caching.
Lastly, he covered the main points that are currently on the cards for 3.2 ...

Saturday, October 9, 2010

Turn FSC (Fast Scala Compiling) on in IDEA

I've tried turning the 'fsc' (Fast Scala Compilation) option on in IntelliJ IDEA before and it has always failed miserably (usually with a java.lang.reflect.InvocationTargetException from the scala.tools.nsc.CompileClient via sun.reflect.NativeMethodAccessorImpl.invoke0), and I've never been able to find a solution.

Well, my google-fu must have been turned up to 11 today because I found this post detailing how to start the compilation server from within IDEA.

To save you reading the whole thread, the solution is just:
1. Open the 'Edit (Run/Debug) Configurations' dialog
2. Click on the [+]
3. Select 'Scala Compilation Server' (should be near the end of the list)
4. Give the new launcher a name (e.g. 'fsc')
5. Hit OK
6. Press the Play icon

Assuming you're also sensible enough to turn on the 'Use fsc (fast scalac)' option (it's under 'Scala Compiler' in the Project Settings), re-compilation of .scala files should get a LOT faster with fsc. I'm seeing re-compilation after a single-file change go down from over 10 seconds to less than one.

Friday, October 8, 2010

"Importing" Static Functions for use in Scala Subclasses

I'm writing some WebDriver web tests at the moment, using the PageObjects pattern. This is a pattern where you have a class in your test source tree representing each page on your site and each of these classes exposes all the data and functions of the page it represents, using the domain language of your application and hiding the HTML stuff from the test.

As well as having the tests clear of HTML knowledge, I like to keep the PageObject classes as simple as possible when I can. WebDriver is very handy and succinct, but I like to go even further, so instead of my PageObject containing:
driver.findElement(By.id("submit")).click
I would much prefer to just write:
click(id("submit"))
I already have an abstract base class called Page, so the solution is simple enough: write a click() function in the base class and somehow import all the methods in org.openqa.selenium.By.
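
The click() half really is simple; a minimal sketch of the base class (the By half is the interesting bit, below):
import org.openqa.selenium.{By, WebDriver}
abstract class Page(driver: WebDriver) {
  // subclasses can write click(id("submit")) without touching WebDriver directly
  protected def click(by: By): Unit = driver.findElement(by).click()
}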

Being super-lazy, I don't really want to have to add import org.openqa.selenium.By._ to the top of every Page subclass, especially seeing as I know I'll need them in EVERY subclass. I want the Page to provide this importing for me, so that the methods in By are automatically available in every subclass.

Failed Attempts
I tried a couple of solutions before I found what I believe is the laziest working approach. I tried just importing By._ in the Page class. This seemed too simple to work, and I was right. I also tried to import the methods with identical aliases (import By.{id => id}) but this didn't do anything either.

I thought maybe I could extend the By class and bring all the methods into the Page namespace that way, but the By class is both a factory for By instances as well as the abstract interface for those classes themselves, so I can't extend it without implementing the abstract findElements() method, which doesn't really make sense.

I concluded that explicit delegation was the only solution, so I resorted to writing this:
def id(id: String) = By.id(id)
That didn't seem too bad, but by the time I was half way through writing the second one, I decided that this was not lazy enough. I didn't want to have to retype the signature of all these functions.

A Suitably Lazy Solution
And then I remembered that I was working in a functional language, and that the functions I was delegating to were actually first-class values in Scala. So all I really needed was a way to provide an alias for those function values that allowed my subclasses to invoke the alias. The solution I came up with is to partially-apply each of the functions with no arguments and to make my alias functions return these partially-applied (though in reality completely unapplied) functions.

The resulting code looks like this:
def id = By.id _
def linkText = By.linkText _
def name = By.name _
def xpath = By.xpath _
Now my subclasses can use the functions id(), linkText(), name() and xpath() without qualification, and I didn't have to re-type the signatures of each one. It obviously didn't save me amazing wads of time in this case (especially now that I've written a blog entry about it) but in some other scenario where there are many methods with non-trivial signatures that need to be delegated, this could be a real time-saver.

What Does it Cost?
Note that there is a slight cost to what I've done here. Scala doesn't see what I've done and have its compiler automatically substitute the straightforward delegating method that I originally wrote. Instead, scalac creates a new class, a subclass of FunctionN, for each of these methods, with the classes having names like Page$$anonfun$id$1. The implementation of the Page.id() function that Scala outputs will create and return an object of this class, such that the caller then executes apply() on the Function, which in turn invokes the original method on By (statically at that point - no reflection). So my desire for laziness at the source level has resulted in some indirection at runtime, but this kind of simple functor creation is unlikely to cause any serious performance pain.

(If you were really worried about it, you could make the aliases vals instead of defs so that the functor objects would only be instantiated once, but then IDEA will start highlighting the usages of the aliases as fields instead of methods, which just looks weird.)

If you want to know more about the gory details of partially-applied functions, I would suggest grabbing a copy of O'Reilly's "Programming Scala" from the Book Depository:
Programming Scala - Dean Wampler & Alex Payne (O'Reilly)