Testing HTTP Clients Using Spark, Revisited

Posted on March 14, 2017 by Scott Leberknight

In a previous post I described the very small sparkjava-testing library I created to make it really simple to test HTTP client code using the Spark micro-framework. It is basically one simple JUnit 4 rule (SparkServerRule) that spins up a Spark HTTP server before tests run, and shuts it down once tests have executed. It can be used either as a @ClassRule or as a @Rule. Using @ClassRule is normally what you want to do; it starts an HTTP server before any tests have run, and shuts it down after all tests have finished.

In that post I mentioned that I needed to do an "incredibly awful hack" to reset the Spark HTTP server to non-secure mode so that, if some tests run securely using a test keystore, other tests can still run either non-secure or secure, possibly with a different keystore. I also said the reason I did that was because "there is no way I found to easily reset security". The reason for all that nonsense was that I was using the static methods on the Spark class such as port, secure, get, post, and so on. Using the static methods also implies a single server instance across all tests, which is not so great either.

Well, it turns out I didn't really dig deep enough into Spark's features, because there is a really simple way to spin up separate and independent Spark server instances. You simply call the static Service#ignite method, which returns a new Service instance. You then configure the Service however you want, e.g. change the port, add routes and filters, set the server to run securely, and so on. Here's an example:

Service http = Service.ignite();
http.port(56789);
http.get("/hello", (req, resp) -> "Hello, Spark service!");

So now you can create as many servers as you want. This is exactly what is needed for the SparkServerRule, which has been refactored to use Service#ignite to get a separate server for each test. It now has only one constructor, which takes a ServiceInitializer that can be used to do whatever configuration you need: add routes, filters, etc. Since ServiceInitializer is a @FunctionalInterface you can simply supply a lambda expression, which makes things cleaner. Here is a simple example:

@ClassRule
public static final SparkServerRule SPARK_SERVER = new SparkServerRule(http -> {
    http.get("/ping", (request, response) -> "pong");
    http.get("/health", (request, response) -> "healthy");
});

This is a rule that, before any test is run, spins up a Spark server on the default port 4567 with two GET routes, and shuts the server down after all tests have completed. To do things like change the port and IP address in addition to adding routes, you just call the appropriate methods on the Service instance (in the example above, the http object passed to the lambda). Here's an example:

@ClassRule
public static final SparkServerRule SPARK_SERVER = new SparkServerRule(https -> {
    https.ipAddress("127.0.0.1");
    https.port(56789);
    URL resource = Resources.getResource("sample-keystore.jks");
    https.secure(resource.getFile(), "password", null, null);
    https.get("/ping", (request, response) -> "pong");
    https.get("/health", (request, response) -> "healthy");
});

In this example, tests will be able to access a server with two secure (https) endpoints at IP 127.0.0.1 on port 56789. So that's it. On the off chance someone other than me was actually using this rule, the migration path is really simple. You just need to configure the Service instance passed to the SparkServerRule constructor as shown above. Now each server is totally independent, which allows tests to run in parallel (assuming they're on different ports). Better still, I was able to remove the hack where I used reflection to go under the covers of Spark and manipulate fields, etc. So, test away on that HTTP client code!

This blog was originally published on the Fortitude Technologies blog here.

Testing HTTP Clients Using the Spark Micro Framework

Posted on December 07, 2016 by Scott Leberknight

Testing HTTP client code can be a hassle. Your tests either need to run against a live HTTP server, or you somehow need to figure out how to send mock requests, which is generally not easy in most libraries I have used. The tests should also be fast, meaning you need a lightweight server that starts and stops quickly. Spinning up heavyweight web or application servers, or relying on a specialized test server, is generally error-prone, adds complexity, and slows tests down. In projects I'm working on lately we are using Dropwizard, which provides first-class support for testing JAX-RS resources and clients via JUnit rules. For example, it provides DropwizardClientRule, a JUnit rule that lets you implement JAX-RS resources as test doubles and starts and stops a simple Dropwizard application containing those resources. This works great if you are already using Dropwizard, but if not then a great alternative is Spark. Even if you are using Dropwizard, Spark can still work well as a test HTTP server.

Spark is self-described as a "micro framework for creating web applications in Java 8 with minimal effort". You can create the stereotypical "Hello World" in Spark like this (shamelessly copied from Spark's web site):

import static spark.Spark.get;

public class HelloWorld {
    public static void main(String[] args) {
        get("/hello", (req, res) -> "Hello World");
    }
}

You can run this code and visit http://localhost:4567/hello in a browser or using a client tool like curl or httpie. Spark is a perfect fit for creating HTTP servers in tests (whether you call them unit tests, integration tests or something else is up to you, I will just call them tests here). I have created a very simple library sparkjava-testing that contains a JUnit rule for spinning up a Spark server for functional testing of HTTP clients. This library consists of one JUnit rule, the SparkServerRule. You can annotate this rule with @ClassRule or just @Rule. Using @ClassRule will start a Spark server one time before any test is run. Then your tests run, making requests to the HTTP server, and finally once all tests have finished the server is shut down. If you need true isolation between every single test, annotate the rule with @Rule and a test Spark server will be started before each test and shut down after each test, meaning each test runs against a fresh server. (The SparkServerRule is a JUnit 4 rule mainly because JUnit 5 is still in milestone releases, and because I have not actually used JUnit 5.)
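For the per-test form, a minimal sketch might look like this (each test then gets its own freshly started server):

// Declared as an instance-level @Rule rather than a static @ClassRule,
// so a new Spark server starts before each test and stops after it
@Rule
public final SparkServerRule sparkServer = new SparkServerRule(() -> {
    get("/ping", (request, response) -> "pong");
});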

To declare a class rule with a test Spark server with two endpoints, you can do this:

@ClassRule
public static final SparkServerRule SPARK_SERVER = new SparkServerRule(() -> {
    get("/ping", (request, response) -> "pong");
    get("/healthcheck", (request, response) -> "healthy");
});

The SparkServerRule constructor takes a Runnable which defines the routes the server should respond to. In this example there are two HTTP GET routes, /ping and /healthcheck. You can of course implement the other HTTP verbs such as POST and PUT. You can then write tests using whatever client library you want. Here is an example test using the JAX-RS client API:

@Test
public void testSparkServerRule_HealthcheckRequest() {
    client = ClientBuilder.newBuilder().build();
    Response response = client.target(URI.create("http://localhost:4567/healthcheck"))
            .request()
            .get();
    assertThat(response.getStatus()).isEqualTo(200);
    assertThat(response.readEntity(String.class)).isEqualTo("healthy");
}

In the above test, client is a JAX-RS Client instance (it is an instance variable which is closed after each test). I'm using AssertJ assertions in this test. The main thing to note is that your client code must be parameterizable, so that the local Spark server URI can be injected instead of the actual production URI. When using the JAX-RS client as in this example, this means you need to be able to supply the test server URI to the Client#target method. Spark runs on port 4567 by default, so the client in the test uses that port.
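To illustrate what "parameterizable" means here, a hypothetical client class (the class name and methods are mine, purely for illustration) might accept the base URI instead of hard-coding it:

import java.net.URI;
import javax.ws.rs.client.Client;

// Hypothetical client under test; the base URI is injected so tests can point it at
// the local Spark server (e.g. http://localhost:4567) instead of the real service
public class HealthcheckClient {

    private final Client client;
    private final URI baseUri;

    public HealthcheckClient(Client client, URI baseUri) {
        this.client = client;
        this.baseUri = baseUri;
    }

    public String checkHealth() {
        return client.target(baseUri)
                .path("healthcheck")
                .request()
                .get(String.class);
    }
}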

The SparkServerRule has two other constructors: one that accepts a port in addition to the routes, and another that takes a SparkInitializer. To start the test server on a different port, you can do this:

@ClassRule
public static final SparkServerRule SPARK_SERVER = new SparkServerRule(6543, () -> {
    get("/ping", (request, response) -> "pong");
    get("/healthcheck", (request, response) -> "healthy");
});

You can use the constructor that takes a SparkInitializer to customize the Spark server; for example, in addition to changing the port you can also set the IP address and make the server secure. The SparkInitializer is a @FunctionalInterface with one method, init(), so you can use a lambda expression. For example:

@ClassRule
public static final SparkServerRule SPARK_SERVER = new SparkServerRule(
        () -> {
            Spark.ipAddress("127.0.0.1");
            Spark.port(9876);
            URL resource = Resources.getResource("sample-keystore.jks");
            String file = resource.getFile();
            Spark.secure(file, "password", null, null);
        },
        () -> {
            get("/ping", (request, response) -> "pong");
            get("/healthcheck", (request, response) -> "healthy");
        });

The first argument is the initializer. It sets the IP address and port, then loads a sample keystore and calls the Spark#secure method to make the test server accept HTTPS connections. You might want to customize settings if running tests in parallel, specifically the port, to ensure parallel tests do not encounter port conflicts.

The last thing to note is that SparkServerRule resets the port, IP address, and secure settings to the default values (4567, 0.0.0.0, and non-secure, respectively) when it shuts down the Spark server. If you use the SparkInitializer to customize other settings (for example the server thread pool, static file location, before/after filters, etc.) those will not be reset, as they are not currently supported by SparkServerRule. Finally, resetting to non-secure mode required an incredibly awful hack because there is no way I found to easily reset security - you cannot just pass a bunch of null values to the Spark#secure method as it will throw an exception, and there is no unsecure method, probably because the server was not intended to set and reset things a bunch of times like we want to do in test scenarios. If you're interested, go look at the code for the SparkServerRule in the sparkjava-testing repository, but prepare thyself and get some cleaning supplies ready to wash away the dirty feeling you're sure to have after seeing it.

The ability to use SparkServerRule to quickly and easily set up test HTTP servers, along with the ability to customize the port and IP address and to run securely in tests, has worked very well for my testing needs thus far. Note that unlike the above toy examples, you can implement more complicated logic in the routes, for example to return a 200 or a 404 for a GET request depending on a path parameter or request parameter value. But at the same time, don't implement extremely complex logic either. Most times I simply create separate routes when I need the test server to behave differently, for example to test various error conditions. Or, I might even choose to implement separate JUnit test classes for different server endpoints, so that each test focuses on only one endpoint and its various success and failure conditions. As is many times the case, the context will determine the best way to implement your tests. Happy testing!

This blog was originally published on the Fortitude Technologies blog here.

Towards More Functional Java - Digging into Nested Data Structures

Posted on November 14, 2016 by Scott Leberknight

In the last post we saw an example that used a generator combined with a filter to find the first available port in a specific range. It returned an Optional to model the case when no open ports are found, as opposed to returning null. In this example, we'll look at how to use Java 8 streams to dig into a nested data structure and find objects of a specific type. We'll use map and filter operations on the stream, and also introduce a new concept, the flat-map.

In the original, pre-Java 8 code that I was working on in a project, the data structure was a three-level nested structure that was marshaled into Java objects from an XML file based on a schema from an external web service. The method needed to find objects of a specific type at the bottom level. For this article, to keep things simple we will work with a simple class structure in which class A contains a collection of class B, and B contains a collection of class C. The C class is a base class, and there are several subclasses C1, C2, and C3. In pseudo-code the classes look like:

class A {
  List<B> bs = []
}

class B {
  List<C> cs = []
}

class C {}
class C1 extends C {}
class C2 extends C {}
class C3 extends C {}

The goal here is to find the first C2 instance, given an instance of A. The pre-Java 8 code looks like the following:

public C2 findFirstC2(A a) {
    for (B b : a.getBs()) {
        for (C c : b.getCs()) {
            if (c instanceof C2) {
                return (C2) c;
            }
        }
    }
    return null;
}

In this code, I made the assumption that the collections are always non-null. The original code I was working on did not make that assumption, and was more complicated as a result. We will revisit the more complicated case later. This code is pretty straightforward: two loops and a conditional, plus an early exit if we find an instance of C2, and return null if we exit the loops without having found anything.

Refactoring to streams using Java 8's stream API is not too bad, though we need to introduce the flat-map concept. Martin Fowler's simple explanation is better than any I would come up with so I will repeat it here: "Map a function over a collection and flatten the result by one-level". In our example, each B has a collection of C. The flat-map operation over a collection of B will basically return a stream of all C for all B. For example, if there are two B instances in the collection, the first having 3 C and the second having 5 C, then the flat-map operation returns a stream of 8 C instances (effectively combining the 3 from the first B and the 5 from the second B, and flattening the result by one level). With the new flat-map tool in our tool belts, here is the Java 8 code using the stream API:

public Optional<C2> findFirstC2(A a) {
    return a.getBs().stream()
            .flatMap(b -> b.getCs().stream())
            .filter(C2.class::isInstance)
            .map(C2.class::cast)
            .findFirst();
}

In the above code, we first stream over the collection of B. Next is where we apply the flatMap method to get a stream of all C. The one somewhat tricky thing about the Java 8 flatMap method is that the mapper function must return a stream. In our example, we use b.getCs().stream() as the mapper function, thus returning a stream of C. From then on we can apply the filter and map operations, and close out with the findFirst short-circuiting (because it stops at the first C2 it finds) terminal operation which returns an Optional that either contains a value, or is empty.

If you have read any of my previous posts, you won't be surprised that I prefer the functional-style of the Java 8 stream API, for the same reasons I've listed previously (e.g. declarative code, no explicit loops or conditionals, etc.). And as we've seen before in previous posts, we can make the above example generic very easily:

public <T extends C> Optional<T> findFirst(A a, Class<T> clazz) {
    return a.getBs().stream()
            .flatMap(b -> b.getCs().stream())
            .filter(clazz::isInstance)
            .map(clazz::cast)
            .findFirst();
}

Of course, it is also not difficult to make the imperative version with loops generic, using the isAssignableFrom and cast methods in the Class class. And you can even make it just as short by removing the braces, as in the following:

public <T> T findFirst(A a, Class<T> clazz) {
    for (B b : a.getBs())
        for (C c : b.getCs())
            if (clazz.isAssignableFrom(c.getClass()))
                return clazz.cast(c);
    return null;
}

I never omit the braces even for one liners, because I believe it is a great way to introduce bugs (remember goto fail a few years ago?). Braces or no braces, why prefer the more functional style to the imperative style? Some is obviously personal preference, and what you are used to. Clearly if you are used to and comfortable with reading imperative code, it won't be an issue to read the above code. But the same goes for functional style, i.e. once you learn the basic concepts (map, filter, reduce, flat-map, etc.) it becomes very easy to quickly see what code is doing (and what is intended).

One other thing is that instead of using stream(), you can easily switch to parallelStream(), which automatically parallelizes the code. Counter-intuitively, though, simply using parallelStream() will not always make code faster; for small collections it will probably make it slower due to the overhead of splitting the work across threads. But if things like transformation or filtering take a significant amount of time, then parallelizing the code can produce a significant performance improvement. Unfortunately there are no hard rules, and whether parallelizing speeds the code up depends on various and sundry factors.
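For example, a parallel version of the generic method above is a one-word change (whether it is actually faster is something you would need to measure):

public <T extends C> Optional<T> findFirstParallel(A a, Class<T> clazz) {
    // Same pipeline as before, but sourced from a parallel stream
    return a.getBs().parallelStream()
            .flatMap(b -> b.getCs().stream())
            .filter(clazz::isInstance)
            .map(clazz::cast)
            .findFirst();
}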

The examples above were very simple. The original code was more complex because it did not make any assumptions about nullability of the original argument or the nested collections. Here is the code:

public C2 findFirstC2(A a) {
    if (a == null || a.getBs() == null) {
        return null;
    }

    for (B b : a.getBs()) {
        List<C> cs = b.getCs();
        if (cs == null) {
            continue;
        }

        for (C c : cs) {
            if (c instanceof C2) {
                return (C2) c;
            }
        }
    }
    return null;
}

This code is more difficult to read than the original code due to the additional null-checking conditionals. There are two loops, three conditionals, a loop continuation, and a short-circuit return from within a nested loop. So what does this look like using the Java 8 stream API? Here is one solution:

public Optional<C2> findFirstC2(A a) {
    return Optional.ofNullable(a)
            .map(A::getBs)
            .orElseGet(Lists::newArrayList)
            .stream()
            .flatMap(this::toStreamOfC)
            .filter(C2.class::isInstance)
            .map(C2.class::cast)
            .findFirst();
}

private Stream<? extends C> toStreamOfC(B b) {
    return Optional.ofNullable(b.getCs())
            .orElseGet(Lists::newArrayList)
            .stream();
}

That looks like a lot, so let's see what is going on. The main difference is that we need to account for possible null values. For that the code uses the Optional#ofNullable method, which unsurprisingly returns an Optional. We are also using map on the Optional, which returns an empty Optional if the original was empty and otherwise applies the given function. We are also using the Optional#orElseGet method to ensure we are always operating on non-null collections; for example, if a.getBs() returns null then the first orElseGet provides a new ArrayList. In this manner, the code always works the same way whether the intermediate collections are null or not. Instead of embedding a somewhat complicated map operation in the flatMap, I extracted the toStreamOfC method and then used a method reference. When writing code in functional style, it often helps to extract helper methods, even if that ends up creating more code, because in the end the code is more easily understood.

The code in this more complex example illustrates the declarative nature of the functional style. Once you are familiar with the functional primitives (like map, flat-map, filter, and so on) reading this code is quite easy and fast, because it reads like a specification of the problem. Like reading code, writing code in the functional style takes some practice and getting used to, but once you get the hang of it, I think you will find you can often write the code faster. The main difference when writing code in functional style is that I do more thinking about what exactly I am trying to do before just slinging code. Until next time, auf Wiedersehen.

This blog was originally published on the Fortitude Technologies blog here.

Towards More Functional Java using Generators and Filters

Posted on October 12, 2016 by Scott Leberknight

Last time we saw how to use lambdas as predicates, and specifically how to use them with the Java 8 Collection#removeIf method in order to remove elements from a map based on the predicate. In this article we will use a predicate to filter elements from a stream, and combine it with a generator to find the first open port in a specific range. The use case is a (micro)service-based environment where each new service binds to the first open port it finds in a specific port range. For example, suppose we need to limit the port range of each service to the dynamic and/or private ports (49152 to 65535, as defined by IANA). Basically we want to choose a port at random in the dynamic port range and bind to that port if it is open, otherwise repeat the process until we find an open port or we have tried more than a pre-defined number of times.
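For context, the constants referenced in both versions of the code below might be defined something like this (the maximum-attempts value is just an assumption for illustration):

private static final int MIN_PORT = 49152;   // start of the IANA dynamic/private range
private static final int MAX_PORT = 65535;   // end of the IANA dynamic/private range
private static final int PORTS_IN_RANGE = MAX_PORT - MIN_PORT + 1;
private static final int MAX_PORT_CHECK_ATTEMPTS = 50;   // hypothetical limit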

The original pre-Java 8 code to accomplish this looked like the following:

public Integer findFreePort() {
    int assignedPort = -1;
    int count = 1;
    while (count <= MAX_PORT_CHECK_ATTEMPTS) {
        int checkPort = MIN_PORT + random.nextInt(PORTS_IN_RANGE);
        if (portChecker.isAvailable(checkPort)) {
            assignedPort = checkPort;
            break;
        }
        count++;
    }
    return assignedPort == -1 ? null : assignedPort;
}

There are a few things to note here. First, the method returns an Integer (rather than an int) so that it can return null when it cannot find an open port (as opposed to throwing an exception, which might or might not be better). Second, there are two mutable variables, assignedPort and count, which are used to store the open port (if found) and to track the number of attempts made, respectively. Third, the while loop executes so long as the maximum number of attempts has not been exceeded. Fourth, a conditional inside the loop uses a port checker object to determine port availability, breaking out of the loop if an open port is found. Finally, a ternary expression is used to check the assignedPort variable and return either null or the open port.

Taking a step back, all this code really does is loop until an open port is found, or until the maximum attempts has been exceeded. It then returns null (if no open port was found) or the open port as an Integer. There are two mutable variables, a loop, a conditional inside the loop with an early break, and another conditional (via the ternary) to determine the return value. I'm sure there are a few ways this code could be improved without using Java 8 streams. For example, we could simply return the open port from the conditional inside the loop and return null if we exit the loop without finding an open port, thereby eliminating the assignedPort variable. Even so it still contains a loop with a conditional and an early exit condition. And some people really dislike early returns and only want to see one return statement at the end of a method (I don't generally have a problem with early exits from methods, so long as the method is relatively short). Not to mention returning null when a port is not found forces a null check on callers; if a developer isn't paying attention or this isn't documented, perhaps they omit the null check causing a NullPointerException somewhere downstream.

Refactoring this to use the Java 8 stream API can be done relatively simply. In order to accomplish this we want to do the following, starting with generating a sequence of random ports. For each randomly generated port, filter on open ports and return the first open port we find. If no open ports are found after limiting our attempts to a pre-determined maximum, we want to return something that clearly indicates no open port was found, i.e. that the open port is empty. I chose the terminology here very specifically, to correspond to both general functional programming concepts as well as to the Java 8 API methods we can use.

Here is the code using the Java 8 APIs:

public OptionalInt findFreePort() {
    IntSupplier randomPorts = () -> MIN_PORT + random.nextInt(PORTS_IN_RANGE);
    return IntStream.generate(randomPorts)
            .limit(MAX_PORT_CHECK_ATTEMPTS)
            .filter(portChecker::isAvailable)
            .findFirst();
}

Without any explanation you can probably read the above code and tell generally what it does, because we are declaring what should happen, as opposed to listing the explicit instructions for how to do it. But let's dive in and look at the specifics anyway. The refactored method returns an OptionalInt to indicate the presence or absence of a value; OptionalInt is just the version of the Optional class specialized for primitive integers. This better matches the semantics we'd like, which is to clearly indicate to a caller that there may or may not be a value present. Next, we are using the generate method to create an infinite sequence of random values, using the specified IntSupplier (which is a specialization of Supplier for primitive integers). Suppliers do exactly what they say they do - supply a value, and in this case a random integer in a specific range. Note the supplier is defined using a lambda expression.

The infinite sequence is truncated (limited) using the limit method, which turns it into a finite sequence. The final two pieces are the filter and findFirst methods. The filter method uses a method reference to the isAvailable method on the portChecker object, which is just a shortcut for a lambda expression when the method accepts a single value that is the lambda argument. Finally, we use findFirst which is described by the Javadocs as a "short-circuiting terminal operation" which simply means it terminates a stream, and that as soon as its condition is met, it "short circuits" and terminates. The short-circuiting behavior is basically the same as the break statement in the original pre-Java 8 code.

So now we have a more functional version that finds free ports with no mutable variables and a more semantically correct return type. As we've seen in several of the previous articles in this ad-hoc series, we are seeing common patterns (i.e. map, filter, collect/reduce) recurring in a slightly different form. Instead of a map operation to transform an existing stream, we are generating a stream from scratch, limiting to a finite number of attempts, filtering the items we want to accept, and then using a short-circuiting terminal operation to return the value found, or an empty value as an OptionalInt.
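As a quick sketch of how a caller might consume that OptionalInt (the exception type here is just an example):

// Either get the open port, or fail loudly if none was found within the attempt limit
int port = findFreePort()
        .orElseThrow(() -> new IllegalStateException("no open port found in the dynamic range"));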

As you can probably tell, I am biased toward the functional version for various reasons such as the declarative nature of the code, no explicit looping or variable mutation, and so on. In this case I think the more functional version is much more readable (though I am 100% sure there will be people who vehemently disagree, and that's OK). In addition, because we are using what are effectively building blocks (generators, map, filter, reduce/collect, etc.) we can much more easily make something generic to find the first thing that satisfies a filtering condition given a supplier and limit. For example:

public <T> Optional<T> findFirst(long maxAttempts,
                                 Supplier<T> generator,
                                 Predicate<T> condition) {
    return Stream.generate(generator)
            .limit(maxAttempts)
            .filter(condition)
            .findFirst();
}

Now we have a re-usable method that can accept any generator and any predicate. For example, suppose you want to find the first random number over two billion if it occurs within 10 attempts, or else default to 42 (naturally). Assuming you have a random number generator object rand, then you could call the findFirst method like this, making use of the orElse method on Optional to provide a default value:

Integer value = findFirst(10, rand::nextInt, n -> n > 2_000_000_000).orElse(42);

So as I mentioned in the last article on predicates, there is a separation of concerns achieved by using the functional style that simply is not possible using traditional control structures such as the while loop and explicit if conditional as in the first example of this article. (*) Essentially, the functional style is composable using basic building blocks, which is another huge win. Because of this composability, in general you tend to write less code, and the code that you do write tends to be more focused on the business logic you are actually trying to perform. And when you do see the same pattern repeated several times, it is much easier to extract the commonality using the functional style building blocks as we did to create the generic findFirst method in the last example. To paraphrase Yoda, once you start down the path to the functional side, forever will it dominate your destiny. Unlike the dark side of the Force, however, the functional side is much better and nicer. Until next time, arrivederci.

You can find all the sample code used in this blog and the others in this series on my GitHub in the java8-blog-code repository.

(*) Yes, you can simulate functional programming using anonymous inner classes prior to Java 8, or you can use a library like Guava and use its functional programming idioms. In general this tends to be verbose and you end up with more complicated and awkward-looking code. As the Guava team explains:

Excessive use of Guava's functional programming idioms can lead to verbose, confusing, unreadable, and inefficient code. These are by far the most easily (and most commonly) abused parts of Guava, and when you go to preposterous lengths to make your code "a one-liner," the Guava team weeps.

This blog was originally published on the Fortitude Technologies blog here.

Towards More Functional Java using Lambdas as Predicates

Posted on September 13, 2016 by Scott Leberknight

Previously I showed an example that transformed a map of query parameters into a SOLR search string. The pre-Java 8 code used a traditional for loop with a conditional and used a StringBuilder to incrementally build a string. The Java 8 code streamed over the map entries, mapping (transforming) each entry to a string of the form "key:value" and finally used a Collector to join those query fragments together. This is a common pattern in functional-style code, in which a for loop transforms one collection of objects into a collection of different objects, optionally filters some of them out, and optionally reduces the collection to a single element. These are common patterns in the functional style - map, filter, reduce, etc. You can almost always replace a for loop that does conditional filtering and reduction with a Java 8 stream using map, filter, and reduce (collect) operations.

But in addition to the stream API, Java 8 also introduced some nice new API methods that make certain things much simpler. For example, suppose we have the following method to remove all map entries for a given set of keys. In the example code, dataCache is a ConcurrentMap and deleteKeys is the set of keys we want to remove from that cache. Here is the original code I came across:

public void deleteFromCache(Set<String> deleteKeys) {
    Iterator<Map.Entry<String, Object>> iterator = dataCache.entrySet().iterator();
    while (iterator.hasNext()) {
        Map.Entry<String, Object> entry = iterator.next();
        if (deleteKeys.contains(entry.getKey())) {
            iterator.remove();
        }
    }
}

Now, you could argue there are better ways to do this, e.g. iterate the delete keys and remove each mapping using the Map#remove(Object key) method. For example:

public void deleteFromCache(Set<String> deleteKeys) {
    for (String deleteKey : deleteKeys) {
        dataCache.remove(deleteKey);
    }
}

The code using the for loop certainly seems cleaner than using the Iterator in this case, though both are functionally equivalent. Can we do better? Java 8 introduced the removeIf method as a default method, not in Map but instead in the Collection interface. This new method "removes all of the elements of this collection that satisfy the given predicate", to quote from the Javadocs. This method accepts one argument, a Predicate, which is a functional interface introduced in Java 8, and which can therefore be used in lambda expressions. Let's first implement this as a regular old anonymous inner class, which you can always do even in Java 8. It looks like:

public void deleteFromCache(Set<String> deleteKeys) {
    dataCache.entrySet().removeIf(new Predicate<Map.Entry<String, Object>>() {
        @Override
        public boolean test(Map.Entry<String, Object> entry) {
            return deleteKeys.contains(entry.getKey());
        }
    });
}

As you can see, we first get the map's entry set via the entrySet method and call removeIf on it, supplying a Predicate that tests whether the set of deleteKeys contains the entry key. If this test returns true, the entry is removed. Since Predicate is annotated with @FunctionalInterface, it can be implemented with a lambda expression, a method reference, or a constructor reference, according to the Javadoc. So let's take the first step and convert the anonymous inner class into a lambda expression:

public void deleteFromCache(Set<String> deleteKeys) {
    dataCache.entrySet().removeIf((Map.Entry<String, Object> entry) ->
        deleteKeys.contains(entry.getKey()));
}

In the above, we've replaced the anonymous class with a lambda expression that takes a single Map.Entry argument. But, Java 8 can infer the argument types of lambda expressions, so we can remove the explicit (and a bit noisy) type declarations, leaving us with the following cleaner code:

public void deleteFromCache(Set<String> deleteKeys) {
    dataCache.entrySet().removeIf(entry -> deleteKeys.contains(entry.getKey()));
}

This code is quite a bit nicer than the original code using an explicit Iterator. But what about compared to the second code example that looped through the keys using a simple for loop and called remove on each key? The lines of code really aren't that different, so assuming they are functionally equivalent then perhaps it is just a style preference. The explicit for loop is a traditional imperative style, whereas the removeIf has a more functional flavor to it. If you look at the actual implementation of removeIf in the Collection interface, it actually uses an Iterator under the covers, just as with the first example in this post.
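For reference, the default implementation in java.util.Collection looks roughly like this (paraphrased from the JDK source):

default boolean removeIf(Predicate<? super E> filter) {
    Objects.requireNonNull(filter);
    boolean removed = false;
    Iterator<E> each = iterator();
    while (each.hasNext()) {
        if (filter.test(each.next())) {
            each.remove();   // same explicit Iterator removal as in the first example
            removed = true;
        }
    }
    return removed;
}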

So practically there is no difference in functionality. But, removeIf could theoretically be implemented for certain types of collections to perform the operation in parallel, and perhaps only for collections over a certain size where it can be shown that parallelizing the operation has benefits. But this simple example is really more about separation of concerns, i.e. separating the logic of traversing the collection from the logic that determines whether or not an element is removed.

For example, if a code base needs to remove elements from collections in many different places, chances are good that it will end up having similar loop traversal logic intertwined with remove logic in many different places. In contrast, using the removeIf function leads to only having the remove logic in the different locations - and the removal logic is really your business logic. And, if at some later point in time the traversal logic in the Java collections framework were to be improved somehow, e.g. parallelized for large collections, then all the locations using that function automatically receive the same benefit, whereas code that combines the traversal and remove logic using explicit Iterators or loops would not.

In this case, and many others, I'd argue the separation of concerns is a much better reason to prefer functional style to imperative style. Separation of concerns leads to better, cleaner code and easier code re-use precisely since those concerns can be implemented separately, and also tested separately, which results in not only cleaner production code but also cleaner test code. All of which leads to more maintainable code, which means new features and enhancements to existing code can be accomplished faster and with less chance of breaking existing code. Until the next post in this ad-hoc series on Java 8 features and a functional style, happy coding!

This blog was originally published on the Fortitude Technologies blog here.

Towards more functional Java using Streams and Lambdas

Posted on August 23, 2016 by Scott Leberknight

In the last post I showed how the Java 7 try-with-resources feature reduces boilerplate code, but probably more importantly how it removes errors related to unclosed resources, thereby eliminating an entire class of errors. In this post, the first in an ad-hoc series on Java 8 features, I'll show how the stream API can reduce the lines of code, but also how it can make the code more readable, maintainable, and less error-prone.

The following code is from a simple back-end service that lets us query metadata about messages flowing through various systems. It takes a map of key-value pairs and creates a Lucene query that can be submitted to SOLR to obtain results. It is primarily used by developers to verify behavior in a distributed system, and it does not support very sophisticated queries, since it only ANDs the key-value pairs together to form the query. For example, given a parameter map containing the (key, value) pairs (lastName, Smith) and (firstName, Bob), the method would generate the query "lastName:Smith AND firstName:Bob". As I said, not very sophisticated.
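The AND, COLON, and DEFAULT_QUERY constants used by this method might be defined something like the following (the exact values, particularly DEFAULT_QUERY, are assumptions for illustration):

private static final String AND = " AND ";
private static final String COLON = ":";
private static final String DEFAULT_QUERY = "*:*";   // hypothetical SOLR match-all default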

Here is the original code (where AND, COLON, and DEFAULT_QUERY are constants):

public String buildQueryString(Map<String, String> parameters) {
    int count = 0;
    StringBuilder query = new StringBuilder();

    for (Map.Entry<String, String> entry : parameters.entrySet()) {
        if (count > 0) {
            query.append(AND);
        }
        query.append(entry.getKey());
        query.append(COLON);
        query.append(entry.getValue());
        count++;
    }

    if (parameters.size() == 0) {
        query.append(DEFAULT_QUERY);
    }

    return query.toString();
}

The core business logic should be very simple, since we only need to iterate the parameter map, join the keys and values with a colon, and finally join them together. But the code above, while not terribly hard to understand, has a lot of noise. First off, it uses two mutable variables (count and query) that are modified within the for loop. The first thing in the loop is a conditional that is needed to determine whether we need to append the AND constant, as we only want to do that after the first key-value pair is added to the query. Next, joining the keys and values is done by concatenating them, one by one, to the StringBuilder holding the query. Finally the count must be incremented so that in subsequent loop iterations, we properly include the AND delimiter. After the loop there is another conditional which appends DEFAULT_QUERY if there are no parameters, and then we finally convert the StringBuilder to a String and return it.

Here is the buildQueryString method after refactoring it to use the Java 8 stream API:

public String buildQueryString(Map<String, String> parameters) {
    if (parameters.isEmpty()) {
        return DEFAULT_QUERY;
    }

    return parameters.entrySet().stream()
            .map(entry -> String.join(COLON, entry.getKey(), entry.getValue()))
            .collect(Collectors.joining(AND));
}

This code does the exact same thing, but in only 6 lines of code (counting the map and collect lines as separate even though technically they are part of the stream call chain) instead of 15. But just measuring lines of code isn't everything. The main difference here is the lack of mutable variables, no external iteration via explicit looping constructs, and no conditional statements other than the empty check which short circuits and returns DEFAULT_QUERY when there are no parameters. The code reads like a functional declaration of what we want to accomplish: stream over the parameters, convert each (key, value) to "key:value" and join them all together using the delimiter AND.
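As a quick sanity check, calling the refactored method might look like this (using a LinkedHashMap so the iteration order, and therefore the generated query, is predictable):

Map<String, String> parameters = new LinkedHashMap<>();
parameters.put("lastName", "Smith");
parameters.put("firstName", "Bob");

String query = buildQueryString(parameters);
// query is "lastName:Smith AND firstName:Bob" (assuming AND is " AND " and COLON is ":")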

The specific Java 8 features we've used here start with the stream() method to convert the map entry set to a Java 8 java.util.stream.Stream. We then use the map operation on the stream, which applies a function (String.join) to each element (Map.Entry) in the stream. Finally, we use the collect method to reduce the elements using the joining collector into the resulting string that is the actual query we wanted to build. In the map method we've also made use of a lambda expression to specify exactly what transformation to perform on each map entry.

By removing explicit iteration and mutable variables, the code is more readable, in that a developer seeing this code for the first time will have an easier and quicker time understanding what it does. Note that much of the how it does things has been removed, for example the iteration is now implicit via the Stream, and the joining collector now does the work of inserting a delimiter between the elements. You're now declaring what you want to happen, instead of having to explicitly perform all the tedium yourself. This is more of a functional style than most Java developers are used to, and at first it can be a bit jarring, but as you practice and get used to it, the more you'll probably like it and you'll find yourself able to read and write this style of code much more quickly than traditional code with lots of loops and conditionals. Generally there is also less code than when using traditional looping and control structures, which is another benefit for maintenance. I won't go so far as to say Java 8 is a functional language like Clojure or Haskell - since it isn't - but code like this has a more functional flavor to it.

There is now a metric ton of content on the internet related to Java 8 streams, but in case this is all new to you, or you're just looking for a decent place to begin learning more in-depth, the API documentation for the java.util.stream package is a good place to start. Venkat Subramaniam's Functional Programming in Java is another good resource, and at less than 200 pages can be digested pretty quickly. And for more on lambda expressions, the Lambda Expressions tutorial in the official Java Tutorials is a decent place to begin. In the next post, we'll see another example where a simple Java 8 API addition combined with a lambda expression simplifies code, making it more readable and maintainable.

This blog was originally published on the Fortitude Technologies blog here.

Reduce Java boilerplate using try-with-resources

Posted on August 11, 2016 by Scott Leberknight

Java 8 has been out for a while, and Java 7 has been out even longer. But even so, many people still unfortunately are not taking advantage of some of the new features, many of which make reading and writing Java code much more pleasant. For example, Java 7 introduced some relatively simple things like strings in switch statements, underscores in numeric literals (e.g. 1_000_000 is easier to read and see the magnitude than just 1000000), and the try-with-resources statement. Java 8 went a lot further and introduced lambda expressions, the streams API, a new date/time API based on the Joda Time library, Optional, and more.

In this blog and in a few subsequent posts, I will take a simple snippet of code from a real project, and show what the code looked like originally and what it looked like after refactoring it to be more readable and maintainable. To start, this blog will actually tackle the try-with-resources statement introduced in Java 7. Many people even in 2016 still seem not to be aware of this statement, which not only makes the code less verbose, but also eliminates an entire class of errors resulting from failure to close I/O or other resources.

Without further ado (whatever ado actually means), here is a method that was used to check port availability when starting up services.

public boolean isPortAvailable(final int port) {
    ServerSocket serverSocket = null;
    DatagramSocket dataSocket = null;

    try {
        serverSocket = new ServerSocket(port);
        serverSocket.setReuseAddress(true);
        dataSocket = new DatagramSocket(port);
        dataSocket.setReuseAddress(true);
        return true;
    } catch (IOException e) {
        return false;
    } finally {
        if (dataSocket != null) {
            dataSocket.close();
        }

        if (serverSocket != null) {
            try {
                serverSocket.close();
            } catch (IOException e) {
                // ignored
            }
        }
    }
}

The core logic for the above code is pretty simple: open a ServerSocket and a DatagramSocket and if both opened without throwing an exception, then the port is open. It's all the extra boilerplate code and exception handling that makes the code so lengthy and error-prone, because we need to make sure to close the sockets in the finally block, being careful to first check they are not null. For good measure, the ServerSocket#close method throws yet another IOException, which we simply ignore but are required to catch nonetheless. A lot of extra code which obscures the actual simple core of the code.

Here's the refactored version which makes use of the try-with-resources statement from Java 7.

public boolean isPortAvailable(final int port) {
    try (ServerSocket serverSocket = new ServerSocket(port); 
         DatagramSocket dataSocket = new DatagramSocket(port)) {
        serverSocket.setReuseAddress(true);
        dataSocket.setReuseAddress(true);
        return true;
    } catch (IOException e) {
        return false;
    }
}

As you can hopefully see, this code has the same core logic, but much less of the boilerplate. There is not only less code (7 lines instead of 22), but the code is much more readable since only the core logic remains. We are still catching the IOException that can be thrown by the ServerSocket and DatagramSocket constructors, but we no longer need to deal with the routine closing of those socket resources. The try-with-resources statement does that task for us, automatically closing any resources opened in the declaration statement that immediately follows the try keyword.

The one catch is that the declared resources must implement the AutoCloseable interface (Closeable, which most existing I/O classes implement, extends AutoCloseable). Since the Java APIs make extensive use of Closeable and AutoCloseable, this means that most things you'll want to use can be handled via try-with-resources. Classes that don't implement AutoCloseable cannot be used directly in try-with-resources statements. For example, if you are unfortunate enough to still need to deal with XML, for example if you need to use the old-school XMLStreamReader, then you are out of luck since it doesn't implement Closeable or AutoCloseable. I generally fix those types of things by creating a small wrapper/decorator class, e.g. CloseableXMLStreamReader, but sometimes it simply isn't worth the trouble unless you are using it in many different places.
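A minimal sketch of such a wrapper (the class is hypothetical, shown here only to illustrate the approach) might look like:

import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

// Hypothetical decorator that lets an XMLStreamReader participate in try-with-resources
public class CloseableXMLStreamReader implements AutoCloseable {

    private final XMLStreamReader reader;

    public CloseableXMLStreamReader(XMLStreamReader reader) {
        this.reader = reader;
    }

    public XMLStreamReader getReader() {
        return reader;
    }

    @Override
    public void close() throws XMLStreamException {
        reader.close();
    }
}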

For more information on try-with-resources, the Java tutorials on Oracle's website has a more in-depth article here. In subsequent posts, I'll show some before/after code that makes use of Java 8 features such as the stream API and lambda expressions.

This blog was originally published on the Fortitude Technologies blog here.

Slides for RESTful Web Services with Jersey presentation

Posted on June 10, 2014 by Scott Leberknight

While teaching a course on web development which included Ruby on Rails and Java segments, we used Jersey to expose a simple web service which the Rails application consumed. I put together a presentation on Jersey that I recently gave. Here are the slides:

RESTful Web Services with Jersey from Scott Leberknight

Building a Distributed Lock Revisited: Using Curator's InterProcessMutex

Posted on December 29, 2013 by Scott Leberknight

Last summer I wrote a series of blogs introducing Apache ZooKeeper, which is a distributed coordination service used in many open source projects like Hadoop, HBase, and Storm to manage clusters of machines. The fifth blog described how to use ZooKeeper to implement a distributed lock. In that blog I explained that the goals of a distributed lock are "to build a mutually exclusive lock between processes that could be running on different machines, possibly even on different networks or different data centers". I also mentioned that one significant benefit is that "clients know nothing about each other; they only know they need to use the lock to access some shared resource, and that they should not access it unless they own the lock." That blog described how to use the ZooKeeper WriteLock "recipe" that comes with ZooKeeper in the contrib modules to build a synchronous BlockingWriteLock with easier semantics in which you simply call a lock() method to acquire the lock, and call unlock() to release the lock. Earlier in the series, we learned how to connect to ZooKeeper in the Group Membership Example blog using a Watcher and a CountDownLatch to block until the SyncConnected event was received. All that code wasn't terribly complex but it also was fairly low-level, especially if you include the need to block until a connection event is received and the non-trivial implementation of the WriteLock recipe.

In the wrap-up blog I mentioned the Curator project, originally open sourced by Netflix and later donated by them to Apache. The Curator wiki describes Curator as "a set of Java libraries that make using Apache ZooKeeper much easier". In this blog we'll see how to use Curator to implement a distributed lock, without needing to write any of our own wrapper code for obtaining a connection or to implement the lock itself. In the distributed lock blog we saw how sequential ephemeral child nodes (e.g. child-lock-node-0000000000, child-lock-node-0000000001, child-lock-node-0000000002, etc.) are created under a persistent parent lock node. The client holding the lock on the child with the lowest sequence number owns the lock. We saw several potential gotchas: first, how does a client know whether it successfully created a child node in the case of a partial failure, i.e. a (temporary) connection loss, and how does it know which child node it created, i.e. the child with which sequence number? I noted that a solution was to embed the ZooKeeper session ID in the child node such that the client can easily identify the child node it created. Jordan Zimmerman (the creator of Curator) was kind enough to post a comment to that blog noting that using the session ID is "not ideal" because it "prevents the same ZK connection from being used in multiple threads for the same lock". He said "It's much better to use a GUID. This is what Curator uses."

Second, we noted that distributed lock clients should watch only the immediately preceding child node rather than the parent node in order to prevent a "herd effect" in which every client is notified for every single child node event, when in reality each client only needs to care about the child immediately preceding it. Curator handles both these cases, plus adds other goodies such as a retry policy for connecting to ZooKeeper. So without further comment, let's see how to use a distributed lock in Curator.

First, we'll need to get an instance of CuratorFramework - this is an interface that represents a higher level abstraction API for working with ZooKeeper. It provides automatic connection management including retry operations, a fluent-style API, as well as a bunch of recipes you can use out-of-the-box for distributed data structures like locks, queues, leader election, etc. We can use the CuratorFrameworkFactory and a RetryPolicy of our choosing to get one.

String hosts = "host-1:2181,host-2:2181,host-3:2181";
int baseSleepTimeMillis = 1000;
int maxRetries = 3;

RetryPolicy retryPolicy = new ExponentialBackoffRetry(baseSleepTimeMillis, maxRetries);
CuratorFramework client = CuratorFrameworkFactory.newClient(hosts, retryPolicy);
client.start();

In the above code we first create a retry policy - in this case an ExponentialBackoffRetry using a base sleep time of 1000 milliseconds and up to 3 retries. Then we can use the CuratorFrameworkFactory.newClient() to obtain an instance of CuratorFramework. Finally we need to call start() (note we'll need to call close() when we're done with the client). Now that we have a client instance, we can use an implementation of InterProcessLock to create our distributed lock. The simplest one is the InterProcessMutex which is a re-entrant mutual exclusion lock that works across JVMs, by using ZooKeeper to hold the lock.

InterProcessLock lock = new InterProcessMutex(client, lockPath);
lock.acquire();
try {
  // do work while we hold the lock
} catch (Exception ex) {
  // handle exceptions as appropriate
} finally {
  lock.release();
}

The above code simply creates an InterProcessMutex for a specific lock path (lockPath), acquires the lock, does some work, and releases the lock. In this case acquire() will block until the lock becomes available. In many cases blocking indefinitely won't be a Good Thing, and Curator provides an overloaded version of acquire() which requires a maximum time to wait for the lock and returns true if the lock is obtained within the time limit and false otherwise.

InterProcessLock lock = new InterProcessMutex(client, lockPath);
if (lock.acquire(waitTimeSeconds, TimeUnit.SECONDS)) {
  try {
    // do work while we hold the lock
  } catch (Exception ex) {
    // handle exceptions as appropriate
  } finally {
    lock.release();
  }
} else {
  // we timed out waiting for lock, handle appropriately
}

The above code demonstrates using the timeout version of acquire. The code is slightly more complex since you need to check whether the lock is acquired or whether we timed out waiting for the lock. Regardless of which version of acquire() you use, you'll need to release() the lock in a finally block. The final piece is to remember to close the client when you're done with it:

client.close();

And that's pretty much it for using Curator's InterProcessMutex to implement a distributed lock. All the complexity in handling connection management, partial failures, the "herd effect", automatic retries, and so on are handled by the higher level Curator APIs. To paraphrase Stu Halloway, you should always understand at least one layer beneath the one you're working at - in this case you should have a decent understanding of how ZooKeeper works under the covers and some of the potential issues of distributed computing. But having said that, go ahead and use Curator to work at a higher level of abstraction and gain the benefits of all the distributed computing experience at Netflix as well as Yahoo (which created ZooKeeper). And last, Happy New Year 2014!

Handling Big Data with HBase Part 6: Wrap-up

Posted on December 19, 2013 by Scott Leberknight

This is the sixth and final blog in an introduction to Apache HBase. In the fifth part, we learned the basics of schema design in HBase and several techniques you can use to make scanning and filtering data more efficient. We also saw two different basic design schemes ("wide" and "tall") for storing information about the same entity, and briefly touched upon more advanced topics like adding full-text search and secondary indexes. In this part, we'll wrap up by summarizing the main points and then listing the (many) things we didn't cover in this introduction to HBase series.

HBase is a distributed database providing inherent scalability, performance, and fault-tolerance across potentially massive clusters of commodity servers. It provides the means to store and efficiently scan large swaths of data. We've looked at the HBase shell for basic interaction, covered the high-level HBase architecture and looked at using the Java API to create, get, scan, and delete data. We also considered how to design tables and row keys for efficient data access.

One thing you certainly noticed when working with the HBase Java API is that it is much lower level than other data APIs you might be used to working with, for example JDBC or JPA. You get the basics of CRUD plus scanning data, and that's about it. In addition, you work directly with byte arrays which is about as low-level as it gets when you're trying to retrieve information from a datastore.

If you are considering whether to use HBase, you should really think hard about how large the data is, i.e. does your app need to be able to accommodate ever-growing volumes of data? If it does, then you need to think hard about what that data looks like and what the most likely data access patterns will be, as this will drive your schema design and data access code. For example, if you are designing a schema for a weather collection project, you will want to consider using a "tall" schema design such that the sensor readings for each sensor are split across rows as opposed to a "wide" design in which you keep adding columns to a column family in a single row. Unlike relational models in which you work hard to normalize data and then use SQL as a flexible way to join the data in various ways, with HBase you need to think much more up-front about the data access patterns, because retrieval by row key and table scans are the only two ways to access data. In other words, there is no joining across multiple HBase tables and projecting out the columns you need. When you retrieve data, you want to only ask HBase for the exact data you need.
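As a rough sketch of the "tall" approach (the sensor id, column family, qualifier, and value below are assumptions for illustration), a per-reading row key might combine the sensor id with a reversed timestamp so a sensor's newest readings sort first:

String sensorId = "sensor-42";                                  // hypothetical sensor id
long reverseTimestamp = Long.MAX_VALUE - System.currentTimeMillis();
byte[] rowKey = Bytes.toBytes(sensorId + "-" + reverseTimestamp);

// One row per reading ("tall"), rather than ever-growing columns in a single row ("wide")
Put put = new Put(rowKey);
put.add(Bytes.toBytes("d"), Bytes.toBytes("temperature"), Bytes.toBytes(72.5));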

Things We Didn't Cover

Now let's discuss a few things we didn't cover. First, coprocessors were a major addition to HBase in version 0.92, and were inspired by Google adding coprocessors to its Bigtable data store. You can, at a high level, think of coprocessors like triggers or stored procedures in relational databases. Basically you can have either trigger-like functionality via observers, or stored-procedure functionality via RPC endpoints. This allows many new things to be accomplished in an elegant fashion, for example maintaining secondary indexes via observing changes to data.

We showed basic API usage, but there is more advanced usage possible with the API. For example, you can batch data and provide much more advanced filtering behavior than a simple paging filter like we showed. There is also the concept of counters, which allows you to do atomic increments of numbers without requiring the client to perform explicit row locking. And if you're not really into Java, there are external APIs available via Thrift and REST gateways. There's also even a C/C++ client available and there are DSLs for Groovy, Jython, and Scala. These are all discussed on the HBase wiki.

Cluster setup and configuration was not covered at all, nor was performance tuning. Obviously these are hugely important topics and the references below are good starting places. With HBase you not only need to worry about tuning HBase configuration, but also tuning Hadoop (or more specifically, the HDFS file system). For these topics definitely start with the HBase References Guide and also check out HBase: The Definitive Guide by Lars George.

We also didn't cover how to Map/Reduce with HBase. Essentially you can use Hadoop's Map/Reduce framework to access HBase tables and perform tasks like aggregation in a Map/Reduce-style.

Last there is security (which I suppose should be expected to come last for a developer, right?) in HBase. There are two types of security I'm referring to here: first is access to HBase itself in order to create, read, update, and delete data, e.g. via requiring Kerberos authentication to connect to HBase. The second type of security is ACL-based access restrictions. In HBase as of this writing, you can restrict access via ACLs at the table and column family level. However, HBase Cell Security describes how cell-level security features similar to those in Apache Accumulo are being added to HBase and are scheduled to be released in version 0.98 (the current version as of this writing is 0.96).

Goodbye!

With this background, you can now consider whether HBase makes sense on future projects with Big Data and high scalability requirements. I hope you found this series of posts useful as an introduction to HBase.

References