Database-Backed Refreshable Beans with Groovy and Spring 3

Posted on October 30, 2010 by Scott Leberknight

In 2009 I published a two-part series of articles on IBM developerWorks entitled Groovier Spring. The articles showed how Spring supports implementing beans in Groovy whose behavior can be changed at runtime via the "refreshable beans" feature. This feature essentially detects when a Spring bean backed by a Groovy script has changed, recompiles it, and replaces the old bean with the new one. This feature is pretty powerful in certain scenarios, for example in PDF generation; mail or any kind of template generation; and as a way to implement runtime modifiable business rules. One specific use case I showed was how to implement PDF generation where the Groovy scripts reside in a database, allowing you to change how PDFs are generated by simply updating Groovy scripts in your database.
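
For a bit more context, the Groovy scripts stored in the database are just ordinary Groovy classes. Here is a minimal sketch of what such a script might look like; the PdfGenerator interface and generatePdf method are assumed names for illustration and not the article's exact API, though the DemoGroovyPdfGenerator class and its companyName property match the configuration shown below.

// Hypothetical Java-side interface the rest of the application codes against
// (the interface name and method signature are assumptions for illustration)
interface PdfGenerator {
    byte[] generatePdf(List<String> bookTitles)
}

// Groovy script stored in the database; Spring compiles it into a refreshable bean,
// so updating the row changes PDF generation behavior at runtime
class DemoGroovyPdfGenerator implements PdfGenerator {

    // injected via <lang:property name="companyName" .../> in the Spring configuration
    String companyName

    byte[] generatePdf(List<String> bookTitles) {
        // a real implementation would use a PDF library such as iText; this just fakes output
        String content = companyName + "\n" + bookTitles.join('\n')
        content.bytes
    }
}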

In order to load Groovy scripts from a database, I showed how to implement custom ScriptFactoryPostProcessor and ScriptSource classes. The CustomScriptFactoryPostProcessor extends the default Spring ScriptFactoryPostProcessor and overrides the convertToScriptSource method to recognize a database-backed script, e.g. you could specify a script source of database:com/nearinfinity/demo/GroovyPdfGenerator.groovy. There is also a DatabaseScriptSource class that implements the ScriptSource interface and knows how to load Groovy scripts from a database.
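
To give a feel for that second piece, here is a minimal sketch of what a database-backed ScriptSource might look like. The table and column names (groovy_scripts, script_text, last_modified, script_path) are my own assumptions for illustration, not necessarily what the article's DatabaseScriptSource uses.

import javax.sql.DataSource

import org.springframework.jdbc.core.JdbcTemplate
import org.springframework.scripting.ScriptSource

// Minimal sketch of a ScriptSource that loads a Groovy script from a database table.
// Table/column names are assumptions; error handling is omitted for brevity.
class DatabaseScriptSource implements ScriptSource {

    private JdbcTemplate jdbcTemplate
    private String scriptPath
    private Date lastReadTimestamp

    DatabaseScriptSource(DataSource dataSource, String scriptPath) {
        this.jdbcTemplate = new JdbcTemplate(dataSource)
        this.scriptPath = scriptPath
    }

    String getScriptAsString() {
        lastReadTimestamp = queryLastModified()
        jdbcTemplate.queryForObject(
                "select script_text from groovy_scripts where script_path = ?",
                [scriptPath] as Object[], String)
    }

    boolean isModified() {
        lastReadTimestamp == null || queryLastModified().after(lastReadTimestamp)
    }

    String suggestedClassName() {
        // e.g. com/nearinfinity/demo/GroovyPdfGenerator.groovy -> GroovyPdfGenerator
        scriptPath.tokenize('/').last() - '.groovy'
    }

    private Date queryLastModified() {
        jdbcTemplate.queryForObject(
                "select last_modified from groovy_scripts where script_path = ?",
                [scriptPath] as Object[], Date)
    }
}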

In order to put these pieces together, you need to do a bit of configuration. In the articles I used Spring 2.5.x which was current at the time in early 2009. The configuration looked like this:

<bean id="dataSource"
  class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <!-- set data source props, e.g. driverClassName, url, username, password... -->
</bean>

<bean id="scriptFactoryPostProcessor"
  class="com.nearinfinity.spring.scripting.support.CustomScriptFactoryPostProcessor">
    <property name="dataSource" ref="dataSource"/>
</bean>

<lang:groovy id="pdfGenerator"
  script-source="database:com/nearinfinity/demo/DemoGroovyPdfGenerator.groovy">
    <lang:property name="companyName" value="Database Groovy Bookstore"/>
</lang:groovy>

In Spring 2.5.x this works because the <lang:groovy> tag looks for a Spring bean with id "scriptFactoryPostProcessor" and uses it if it exists; otherwise it creates one. In the above configuration we created our own "scriptFactoryPostProcessor" bean for <lang:groovy> tags to use. So all's well...until you move to Spring 3.x, at which point the above configuration no longer works. This was pointed out to me by João from Brazil, who tried the sample code from the articles with Spring 3.x and found that it did not work. After trying a bunch of things, we eventually determined that in Spring 3.x the <lang:groovy> tag looks for a ScriptFactoryPostProcessor bean whose id is "org.springframework.scripting.config.scriptFactoryPostProcessor", not just "scriptFactoryPostProcessor". Once you figure this out, it is easy to change the above configuration to:

<bean id="org.springframework.scripting.config.scriptFactoryPostProcessor"
  class="com.nearinfinity.spring.scripting.support.CustomScriptFactoryPostProcessor">
    <property name="dataSource" ref="dataSource"/>
</bean>

<lang:groovy id="pdfGenerator"
  script-source="database:com/nearinfinity/demo/DemoGroovyPdfGenerator.groovy">
    <lang:property name="companyName" value="Database Groovy Bookstore"/>
</lang:groovy>

Then, everything works as expected and the Groovy scripts can reside in your database and be automatically reloaded when you change them. So if you download the article sample code as-is, it will work since the bundled Spring version is 2.5.4, but if you update to Spring 3.x then you'll need to modify the configuration in applicationContext.xml for example #7 (EX #7) as shown above to change the "scriptFactoryPostProcessor" bean to be "org.springframework.scripting.config.scriptFactoryPostProcessor." Note there is a scheduled JIRA issue SPR-5106 that will make the ScriptFactoryPostProcessor mechanism pluggable, so that you won't need to extend the default ScriptFactoryPostProcessor and replace the default bean, etc. But until then, this hack continues to work pretty well.

Making Cobertura Reports Show Groovy Code with Maven

Posted on December 15, 2009 by Scott Leberknight

A recent project started out life as an all-Java project that used Maven as the build tool. Initially we used Atlassian Clover to measure unit test coverage. Clover is a great product, but unfortunately it only works with Java because it instruments at the Java source level. (This was the case as of spring 2009, and I haven't checked since then.) As we started migrating existing code from Java to Groovy and writing new code in Groovy, we began to lose unit test coverage data because Clover does not understand Groovy code. To remedy this problem we switched from Clover to Cobertura, which instruments at the bytecode level and thus works with Groovy. Theoretically it would also work with any JVM-based language, though I'm not sure whether it could handle something like Clojure.

In any case, we only cared about Groovy, so Cobertura was a good choice. With the Cobertura Maven plugin, however, we quickly found a problem: even though code coverage was running, the reports only showed coverage for Java code, not Groovy. This blog shows you how to display coverage on Groovy code when using Maven and the Cobertura plugin. In other words, I'll show how to get Cobertura reports to link to the real Groovy source code in Maven, so you can navigate Cobertura reports as you normally would.

The core problem is pretty simple, though it took me a while to figure out how to fix it. That seems to be pretty standard with Maven: I know what I want to do, but finding out how to do it is the hard part. The only thing you need to do is tell Maven about the Groovy source code and where it lives. The way I did this is to use the Codehaus build-helper-maven-plugin, which has an add-source goal. The add-source goal does just what you would expect: it adds a specified directory (or directories) as a source directory in your Maven build. Here's how you use it in your Maven pom.xml file:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>build-helper-maven-plugin</artifactId>
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals>
                <goal>add-source</goal>
            </goals>
            <configuration>
                <sources>
                    <source>src/main/groovy</source>
                </sources>
            </configuration>
        </execution>
    </executions>
</plugin>

In the above code snippet, we're using the "build-helper-maven-plugin" to add the src/main/groovy directory. That's pretty much it. Run Cobertura as normal, view the reports, and you should now see coverage on Groovy source code as well as Java.

Can Java Be Saved?

Posted on November 09, 2009 by Scott Leberknight

Java and Evolution

The Java language has been around for a pretty long time, and in my view is now a stagnant language. I don't consider it dead, because I believe it will be around for probably decades if not longer. But it appears to have reached its evolutionary peak, and it doesn't look like it's going to be evolved any further. This is not due to problems inherent in the language itself. Instead, the problem seems to lie with Java's stewards (Sun and the JCP) and their unwillingness to evolve the language to keep it current and modern, and more importantly their goal of keeping backward compatibility at all costs. It's not just Sun; the large corporations with correspondingly large investments in Java, like IBM and Oracle, don't seem to be chomping at the bit to improve Java either. I don't even know whether they think it needs improvement at all. So really, the ultra-conservative attitude toward change and evolution is the problem with Java, from my admittedly limited view of things.

That's why I don't hate Java. But I do hate the way it has been treated by the people charged with improving it. It is clear that many in the Java community want things like closures and a native property syntax, but instead we got Project Coin. This, to me, is really sad. It is a shame that things like closures and native properties were not addressed in Java/JDK/whatever-it-is-called 7.

Why Not?

I want to know why Java can't be improved. We have concrete examples that it is possible to change a major language in major ways, even in ways that break backward compatibility, in order to evolve and improve. Out with the old, in with the new. Microsoft showed with C# that you can successfully evolve a language over time in major ways. For example, C# has always had a property syntax, but it now also has many features found in dynamically typed and functional languages, such as type inference and, effectively, closures. With LINQ it introduced functional concepts. When C# added generics, Microsoft did it correctly and retained the type information in the compiled IL, whereas Java used type erasure and simply dropped the types from the compiled bytecode. There is a great irony here: though C# began life about five or six years after Java, it has not only caught up but surpassed Java in most if not all ways, and it has continued to evolve while Java has become stagnant.

C# is not the only example. Python 3 is a major overhaul of the Python language, and it introduced changes that are not backwards compatible. I believe they provide a migration tool to assist you should you want to move from the 2.x series to version 3 and beyond. Microsoft has done this kind of thing as well. I remember when they made Visual Basic conform to the .NET platform and introduced some rather gut-wrenching (for VB developers anyway) changes, and they also provided a tool to aid the transition. One more recent example is Objective-C, which has experienced a resurgence in importance mainly because of the iPhone. Objective-C has been around since the 1980s, longer than Java, C#, Ruby, Python, etc. Apple has made improvements to Objective-C: it now sports a way to define and synthesize properties, and it most recently added blocks (effectively closures). If a language that pre-dates Java can evolve (Python also pre-dates Java, by the way), I just don't get why Java can't.

While it is certainly possible to remain on older versions of software, forcing yourself to upgrade can be a Good Thing, because it ensures you don't get the "COBOL Syndrome," where you end up with nothing but binaries that have to run on a specific hardware platform forever, and you are trapped until you rewrite or go out of business. The other side of this, of course, is that organizations don't have infinite time, money, and resources to update every single application. Sometimes this too can be good, because it forces you to triage older systems, and possibly consolidate or outright eliminate them if they have outlived their usefulness. To facilitate large transitions, I believe it is very important to provide tools that help automate the upgrade process, e.g. tools that analyze code and fix it where possible (reporting all changes in a log) and that provide warnings and guidance when a simple fix isn't possible.

The JVM Platform

Before I get into the changes I'd make to Java so it doesn't feel like I'm developing with a straitjacket on while typing masses of unnecessary boilerplate code, I want to say that I think the JVM is a great place to be. Obviously the JVM itself facilitates developing all kinds of languages, as evidenced by the huge number of languages that run on it. The most popular and most interesting ones these days are probably JRuby, Scala, Groovy, and Clojure, though there are probably hundreds more. So I suppose you could make an argument that Java doesn't need to evolve any more because we can simply use a more modern language that runs on the JVM.

The main problem I have with that argument is simply that there is already a ton of Java code out there, and there are many organizations that are simply not going to allow other JVM-based languages; they're going to stick with Java for the long haul, right or wrong. This means there is a good chance that even if you can manage to convince someone to try writing that shiny new web app using Scala and its Lift framework, JRuby on Rails, Grails, or Clojure, at some point you'll also need to maintain or enhance existing large Java codebases. Wouldn't you like to be able to first upgrade to a version of Java that has closures, a native property syntax, method/property handles, etc.?

Next up are my top three choices for making Java much better immediately.

Top Three Java Improvements

If given the chance to change just three things about Java to make it better, I would choose these:

  • Remove checked exceptions
  • Add closures
  • Add formal property support

I think these three changes alone would make coding in Java much, much better. Let's see how.

Remove Checked Exceptions

By removing checked exceptions you eliminate a ton of boilerplate try/catch blocks that do nothing except log a message, wrap and re-throw as a RuntimeException, pollute the API with throws clauses all over the place, or, worst of all, sit there as empty catch blocks that can cause very subtle and evil bugs. With unchecked exceptions, developers still have the option to catch exceptions that they can actually handle. It would be interesting to see how many times in a typical Java codebase people actually handle exceptions and do something at the point of failure, or whether they simply punt the exception away for the caller to handle, who in turn also punts, and so forth all the way up the call stack until some global handler catches it or the program crashes. If I were a betting man, I'd bet a lot of money that for most applications, developers punt the vast majority of the time. So why force people to handle something they cannot possibly handle?

Add Closures

I specifically listed removing checked exceptions first because, to me, it is the first step to having a closure/block syntax that isn't totally horrendous. If you remove checked exceptions, then adding closures becomes much easier, since you don't need to worry about what exceptions could possibly be thrown and there is obviously no need to declare them. Closures/blocks would make working with collections much nicer, for example as in Groovy, except that in Java you would still have types (note I'm also using a native property syntax here):

// Find all people whose last name is "Smith"
List<Person> peeps = people.findAll { Person person -> person.lastName.equals("Smith"); }

// Create a list of names by projecting the name property of a bunch of Person objects
List<String> names = people.collect { Person person -> person.name; }

Not quite as clean as Groovy, but still much better than the for loops that would traditionally be required (or trying to shoehorn a functional style into Java using the Jakarta Commons Collections or Google Collections). Removing checked exceptions would, as mentioned earlier, let the block syntax avoid declaring exceptions all over the place. Having to declare checked exceptions in blocks makes the syntax worse instead of better, at least judging from the various closure proposals for Java/JDK/whatever 7 that did not make it in. Requiring types in the blocks is still annoying, especially once you get used to Ruby and Groovy, but it would be passable.

Native Property Syntax

The third change should do essentially what Groovy does for properties, but it should introduce a "property" keyword (i.e. don't rely on whether someone accidentally put an access modifier in there, as Groovy does). The syntax could be very clean:

property String firstName;
property String lastName;
property Date dateOfBirth;

The compiler could automatically generate the appropriate getter/setter for you like Groovy does, which obviates the need to code them manually. Like Groovy, you should be able to override either or both. It de-clutters code enormously and removes a ton of lines of silly getter/setter code (plus JavaDocs, if you are actually still writing them for every get/set method). Then you could reference properties as you would expect: person.name is the "getter" and person.name = "Fred" is the "setter." Much cleaner syntax, way less boilerplate code. By the way, if someone used the word "property" in their code, i.e. as a variable name, it is just not that difficult to do a rename refactoring, especially with all the advanced IDEs in the Java community that do this kind of thing in their sleep.
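
For reference, here is roughly what the equivalent already looks like in Groovy today, which is essentially the behavior I'm suggesting Java adopt (the Person class is just an illustration):

// In Groovy, a field declared with no access modifier is a property: the compiler
// generates getFirstName()/setFirstName(...) and so on automatically.
class Person {
    String firstName
    String lastName
    Date dateOfBirth
}

def person = new Person()
person.firstName = 'Fred'          // calls the generated setFirstName("Fred")
assert person.firstName == 'Fred'  // calls the generated getFirstName()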

Lots of other things could certainly be done, but if just these three were done I think Java would be much better off, and maybe it would even come into the 21st century like Objective-C. (See the very long but very good Ars Technica Snow Leopard review for information on Objective-C's new blocks feature.)

Dessert Improvements

If (as I suspect they certainly will :-) ) Sun/Oracle/whoever takes my suggestions and makes these changes and improves Java, then I'm sure they'll want to add in a few more for dessert. After the main course which removes checked exceptions, adds closures, and adds native property support, dessert includes the following:

  • Remove type-erasure and clean up generics
  • Add property/method handles
  • String interpolation
  • Type inference
  • Remove "new" keyword

Clean Up Generics

Generics should simply not remove type information when compiled. If you're going to have generics in the first place, do it correctly and stop worrying about backward compatibility. Keep type information in the bytecode, allow reflection on it, and allow me to instantiate a "new T()" where T is some type passed into a factory method, for example. I think an improved generics implementation could basically copy the way C# does it and be done.
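
To make the gripe concrete, here is the kind of workaround erasure forces on you today. Since T is erased you cannot write new T(), so factories end up passing the Class object around explicitly. This is a hypothetical sketch (written in Groovy, though the Java version looks nearly identical):

class Person {
    String name
}

// Because of erasure, "new T()" will not compile; the usual workaround is to pass
// the Class object, which still carries the type at runtime, and instantiate reflectively.
class EntityFactory {
    static <T> T create(Class<T> type) {
        type.newInstance()
    }
}

def person = EntityFactory.create(Person)
assert person instanceof Person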

Property/Method Handles

Property/method handles would allow you to reference a property or method directly. They would make code that currently must use plain strings strongly typed and refactoring-safe (IDEs like IntelliJ already know how to search for such references in text and strings, but they can never be perfect). For example, a particular pet peeve of mine, and I'm sure of a lot of other developers, is writing Criteria queries in Hibernate. You are forced to reference properties as simple strings. If the lastName property is changed to surname, then you had better make sure to catch all the places the string "lastName" is referenced. So you could replace code like this:

session.createCriteria(Person.class)
	.add(Restrictions.eq("lastName", "Smith"))
	.addOrder(Order.asc("firstName"))
	.list();

with this using method/property handles:

session.createCriteria(Person.class)
	.add(Restrictions.eq(Person.lastName, "Smith"))
	.addOrder(Order.asc(Person.firstName))
	.list();

Now the code is strongly-typed and refactoring-safe. JPA 2.0 tries mightily to overcome having strings in the new criteria query API with its metamodel. But I find it pretty much appalling to even look at, what with having to create or code-generate a separate "metamodel" class which you reference like "_Person.lastName" or some similar awful way. This metamodel class lives only to represent properties on your real model object for the sole purpose of making JPA 2.0 criteria queries strongly typed. It just isn't worth it and is total overkill. In fact, it reminds me of the bad-old days of rampant over-engineering in Java (which apparently is still alive and well in many circles but I try to avoid it as best I can). The right thing is to fix the language, not to invent something that adds yet more boilerplate and more complexity to an already overcomplicated ecosystem.

Method handles could also be used to make calling methods using reflection much cleaner than it currently is, among other things. Similarly it would make accessing properties via reflection easier and cleaner. And with only unchecked exceptions you would not need to catch the four or five kinds of exceptions reflective code can throw.

String Interpolation

String interpolation is like the sorbet that you get at fancy restaurants to cleanse your palate. This would seem to be a no-brainer to add. You could make code like:

log.error("The object of type  ["
    + foo.getClass().getName()
    + "] and identifier ["
    + foo.getId()
    + "] does not exist.", cause);

turn into this much more palatable version (using the native property syntax I mentioned earlier):

log.error("The object of type [${foo.class.name}] and identifier [${foo.id}] does not exist.", cause);

Type Inference

I'd also suggest adding type inference, if only for local variables like C# does. Why do we have to repeat ourselves? Instead of writing:

Person person = new Person();

why can't we just write:

var person = new Person();

I have to believe the compiler and all the tools are smart enough to infer the type from the "new Person()". Especially since other strongly-typed JVM languages like Scala do exactly this kind of thing.

Elminate "new"

Last but not least, and actually not the last thing I can think of but definitely the last I'm writing about here, let's get rid of the "new" keyword and either go with Ruby's new method or Python's constructor syntax, like so:

// Ruby-like new method
var person = Person.new()

// or Python-like construction
var person = Person()

This one came to me recently after hearing Bruce Eckel give an excellent talk on language evolution and archaeology. He had a ton of really interesting examples of why things are the way they are, and how Java and other languages like C++ evolved from C. One example was the reason for "new" in Java. In C++ you can allocate objects on the stack or the heap, so there is a stack-based constructor syntax that does not use "new", while the heap-based constructor syntax uses the "new" operator. Even though Java only has heap-based object allocation, it retained the "new" keyword, which is not only boilerplate but also makes the entire process of object construction pretty much fixed: you cannot change anything about it, nor can you easily add hooks into the object creation process.

I am not an expert at all in the low-level details, and Bruce obviously knows what he is talking about way more than I do, but I believe the Ruby and Python syntaxes are not only nicer but more internally consistent, especially in the Ruby case, because there is no special magic going on: in Ruby, new is just a method on a class, like everything else.

Conclusion to this Way Too Long Blog Entry

I did not actually set out to write a blog entry whose length is worthy of a Ted Neward blog. It just turned out that way. (And I do in fact like reading Ted's long blogs!) Plus, I found out that speculative fiction can be pretty fun to write, since I don't think many of these things are going to make it into Java anytime soon, if ever, and I'm sure there are lots of people in the Java world who hate things like Ruby and won't agree anyway.

Groovification

Posted on May 04, 2009 by Scott Leberknight

Last week I tweeted about groovification, which is defined thusly:

groovification. noun. the process of converting java source code into groovy source code (usually done to make development more fun)

On my main day-to-day project, we've been writing unit tests in Groovy for quite a while now, and recently we decided to start implementing new code in Groovy rather than Java. The reasons for doing this are to gain more flexibility in development, to make testing easier (e.g. the ability to mock dependencies in a trivial fashion), to eliminate a lot of Java boilerplate and thus write less code, and of course to make developing more fun. It's not that I hate Java so much as I feel Java simply isn't innovating anymore, and hasn't for a while: it isn't adding features I no longer want to live without, such as closures and the ability to do metaprogramming when I need it. In addition, it isn't removing features that I don't want, such as checked exceptions. If I know, for a fact, that I can handle an exception, I'll handle it appropriately. Otherwise, when there's nothing I can do anyway, I want to let the damn thing propagate up and just show a generic error message to the user, log the error, and send the admin team an email with the problem details.

This being, for better or worse, a Maven project, we've had some interesting issues with mixed compilation of Java and Groovy code. The GMaven plugin is easy to install and works well, but it currently has some outstanding issues related to Groovy stub generation; specifically, it cannot handle generics or enums properly right now. (Maybe someone will be less lazy than me and help fix it instead of complaining about it.) Since many of our classes use generics, e.g. in service classes that return domain objects, we are currently not generating stubs. We'll convert existing classes and any other necessary dependencies to Groovy as we make updates to Java classes, and we are implementing new code in Groovy. This is especially easy in the web controller code, since the controllers generally depend on other Java and/or Groovy code but no other classes depend on the controllers. So starting in the web tier seems to be a good choice. Groovy, combined with implementing controllers using the Spring @MVC annotation-based configuration style (i.e. no XML configuration), is making the controllers really thin, lightweight, and easy to read, implement, and test.

I estimate it will take a while to fully convert all the existing Java code to Groovy. The point here is that we are doing it piecemeal rather than trying to do it all at once. Also, whenever we convert a Java file to a Groovy one, there are a few basics to make the classes Groovier without going totally overboard and spending loads of time. To start, once you've used IntelliJ's move refactoring to move the .java file to the Groovy source tree (since we have src/main/java and src/main/groovy), you can use IntelliJ's handy-dandy "Rename to Groovy" refactoring. In IntelliJ 8.1 you need to use the "Search - Find Action" menu option or keystroke, type "Rename to...", and select "Rename to Groovy", since they goofed in version 8 and that option was left off a menu somehow. Once that's done you can do a few simple things to make the class a bit more groovy, as shown in the sketch below. First, get rid of all the semi-colons. Next, replace getter/setter code with direct property access. Third, replace for loops with "each"-style internal iterators when you don't need the loop index, and "eachWithIndex" where you do. You can also get rid of some redundant modifiers like "public class", since that is the Groovy default. That's not too much at once, doesn't take long, and makes your code Groovier. Over time you can do more groovification if you like.
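
Here is a small, hypothetical example of what a class might look like after that first pass (the Customer and Order classes are made up, not from our codebase); the comments call out the changes just described:

class Order {
    BigDecimal amount
}

class Customer {
    String firstName                // properties instead of private fields plus getters/setters
    String lastName
    List<Order> orders = []

    BigDecimal totalSpent() {       // no "public" modifier; that is the Groovy default
        def total = BigDecimal.ZERO
        orders.each { order ->      // "each" replaces the indexed for loop we did not need
            total += order.amount   // direct property access instead of order.getAmount()
        }
        total                       // no semi-colons; the last expression is the return value
    }
}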

The most common gotchas I've found have to do with code that uses anonymous or inner classes, since Groovy doesn't support those Java language features. In that case you can either make a non-public named class (unlike Java, it's OK to put it in the same Groovy file as long as it's not public) or refactor the code some other way (using your creativity and expertise, since we are not monkeys, right?). This can sometimes be a pain, especially if you are using a lot of them. So it goes. (And yes, that is a Slaughterhouse Five reference.)
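
For example, a Java anonymous Comparator could become either a second, non-public class in the same file or, often more simply, a closure. (The Person example here is hypothetical.)

class Person {
    String lastName
}

// A second, non-public class in the same .groovy file stands in for what would
// have been an anonymous inner Comparator in Java.
class LastNameComparator implements Comparator<Person> {
    int compare(Person a, Person b) {
        a.lastName <=> b.lastName
    }
}

def people = [new Person(lastName: 'Smith'), new Person(lastName: 'Adams')]
assert people.sort(new LastNameComparator())*.lastName == ['Adams', 'Smith']

// Often the nicer refactoring is to drop the Comparator entirely and pass a closure.
assert people.sort { it.lastName }*.lastName == ['Adams', 'Smith']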

Happy groovification!

Groovy + Spring = Groovier Spring

Posted on January 06, 2009 by Scott Leberknight

If you're into Groovy and Spring, check out my two-part series on IBM developerWorks on using Groovy together with Spring's dynamic language support for potentially more flexible (and interesting) applications. In Part 1 I show how to easily integrate Groovy scripts (i.e. .groovy files containing one or more classes) into Spring-based applications. In Part 2 I show how to use the "refreshable beans" feature in Spring to automatically and transparently reload Spring beans implemented in Groovy from pretty much anywhere, including a relational database, and why you might actually want to do something like that!

The "N matchers expected, M recorded" Problem in EasyMock

Posted on September 30, 2008 by Scott Leberknight

EasyMock is a Java dynamic mocking framework that allows you to record expected behavior of mock objects, play them back, and finally verify the results. As an example, say you have an interface FooService with a method List<Foo> findFoos(FooSearchCriteria criteria, Integer maxResults, String[] sortBy) and that you have a FooSearcher class which uses a FooService to perform the actual searching. With EasyMock you could test that the FooSearcher uses the FooService as it should without needing to also test the actual FooService implementation. It is important in unit tests to isolate dependent collaborators so they can be tested independently. One thing I pretty much always forget when using EasyMock is that if you use any IArgumentMatchers in your expectations, then all the arguments must use an IArgumentMatcher. Going back to the FooSearcher example, you might start out with the following test (written in Groovy for convenience):

void testSearch() {
  def service = createMock(FooService)
  def searcher = new FooSearcher(fooService: service, maxAllowedResults: 10)
  def criteria = new FooSearchCriteria()
  def sortCriteria = ["bar", "baz"] as String[]
  def expectedResult = [new Foo(), new Foo()]
  expect(service.findFoos(criteria, 10, sortCriteria)).andReturn(expectedResult)
  replay service
  def result = searcher.search(criteria, "bar", "baz")
  assertSame expectedResult, result
  verify service
}

The above test fails with the following error message:

java.lang.AssertionError: 
  Unexpected method call findFoos(com.acme.FooSearchCriteria@ea443f, 10, [Ljava.lang.String;@e41d4a):
    findFoos(com.acme.FooSearchCriteria@ea443f, 10, [Ljava.lang.String;@268cc6): expected: 1, actual: 0
	at org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:29)
	at org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:45)
	at $Proxy0.findFoos(Unknown Source)
	at com.acme.FooSearcher.search(FooSearcher.java:19)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    ...

We expected a call to findFoos, which takes a FooSearchCriteria, an integer, and a string array describing the sort conditions. But from the error message, EasyMock told us that the expected method call was never made, and so verification of the mock behavior failed. What happened? Well, basically the string array that was expected was not the string array actually passed as the argument. Look back at the error message, and specifically at the array arguments: the actual argument was [Ljava.lang.String;@e41d4a while the expected argument was [Ljava.lang.String;@268cc6. The FooSearcher.search method's signature is List<Foo> search(FooSearchCriteria criteria, String... sortBy) - the varargs passed to FooSearcher.search get packed into a new array when the method is called, and that new array is subsequently passed to the FooService, which is what causes the difference between the expected and actual array arguments!
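
To make that concrete, here is a hypothetical Groovy sketch of the collaborators, with names matching the shapes implied by the test above (the real classes in a project like this would likely be Java):

class Foo { }
class FooSearchCriteria { }

interface FooService {
    List<Foo> findFoos(FooSearchCriteria criteria, Integer maxResults, String[] sortBy)
}

class FooSearcher {
    FooService fooService
    Integer maxAllowedResults

    List<Foo> search(FooSearchCriteria criteria, String... sortBy) {
        // the varargs are packed into a brand-new String[] on every call, so the array
        // the mock sees is never the same instance as (nor equal, via Object.equals, to)
        // the array created in the test
        fooService.findFoos(criteria, maxAllowedResults, sortBy)
    }
}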

To make sure that arguments such as arrays and other complex objects are matched properly by the mock object, EasyMock provides IArgumentMatcher to compare the expected and actual arguments to method calls. Essentially, it is like performing a logical "assertEquals" on the arguments. One of the matchers EasyMock provides is obtained via the static aryEq method in the EasyMock class. So for example if you had a method that took a single array argument, you could make an expectation of mock behavior like this:

def myArray = ["foo", "bar", "baz"] as String[]
expect(someObject.someMethod(EasyMock.aryEq(myArray))).andReturn(anotherObject)

Here you tell EasyMock to expect a call to someMethod on someObject with myArray as the sole argument, and to return anotherObject. Cool, so let's try to fix the failing test above using EasyMock.aryEq (which was imported statically using import static):

void testSearch() {
  def service = createMock(FooService)
  def searcher = new FooSearcher(fooService: service, maxAllowedResults: 10)
  def criteria = new FooSearchCriteria()
  def sortCriteria = ["bar", "baz"] as String[]
  def expectedResult = [new Foo(), new Foo()]
  // Try to use EasyMock's aryEq() to ensure the expected array argument equals the actual argument...
  expect(service.findFoos(criteria, 10, aryEq(sortCriteria))).andReturn(expectedResult)
  replay service
  def result = searcher.search(criteria, "bar", "baz")
  assertSame expectedResult, result
  verify service
}

This test also fails with the following error message:

java.lang.IllegalStateException: 3 matchers expected, 1 recorded.
	at org.easymock.internal.ExpectedInvocation.createMissingMatchers(ExpectedInvocation.java:41)
	at org.easymock.internal.ExpectedInvocation.<init>(ExpectedInvocation.java:33)
	at org.easymock.internal.ExpectedInvocation.<init>(ExpectedInvocation.java:26)
	at org.easymock.internal.RecordState.invoke(RecordState.java:64)
	at org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:24)
	at org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:45)
	at $Proxy0.findFoos(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    ...

Wait, shouldn't that have made EasyMock ensure that the supplied argument was verified using an IArgumentMatcher, specifically an ArrayEquals matcher? Well, sort of. And this is where I always forget what the "N matchers expected, M recorded" error message means and fumble around for a few minutes while I remember. In short, the rule is this:

If you use an argument matcher for one argument, you must use an argument matcher for all the arguments.

So in the above example, we recorded one matcher via the call to aryEq, but EasyMock then expects an argument matcher for every argument in the expectation, and there are three arguments - hence "3 matchers expected, 1 recorded." Now this makes sense. We need to add argument matchers for the other arguments as well. So let's fix the test:

void testSearch() {
  def service = createMock(FooService)
  def maxResults = 10
  def searcher = new FooSearcher(fooService: service, maxAllowedResults: maxResults)
  def criteria = new FooSearchCriteria()
  def sortCriteria = ["bar", "baz"] as String[]
  def expectedResult = [new Foo(), new Foo()]
  // If you define one matcher for an expected argument, you need to define them for all the arguments!
  expect(service.findFoos(isA(FooSearchCriteria), eq(maxResults), aryEq(sortCriteria))).andReturn(expectedResult)
  replay service
  def result = searcher.search(criteria, "bar", "baz")
  assertSame expectedResult, result
  verify service
}

Now the test passes as we expect. We used two other common argument matchers here, via the static isA and eq methods. The isA matcher ensures the argument is an instance of the specified class, while the eq matcher checks that the actual argument equals the expected argument via the normal Java equality check, i.e. expected.equals(actual). So in summary, if you ever receive the dreaded "N matchers expected, M recorded" error message from EasyMock, you know you need to ensure that all arguments to an expectation use a matcher. And if you got this far and were dying to mention that, if you're using Groovy to test Java code, there are easier ways to test than using a framework like EasyMock, you're right for the most part. There are still some things you cannot do when testing Java code using Groovy. I plan to go into that more in a future blog post.