The new elasticsearch Java REST client


With the latest release of elasticsearch, 5.0.0-alpha4, a new client for Java is introduced. The idea behind this new client is to have fewer dependencies on elasticsearch.

At the moment you have to include the complete elasticsearch distributable, which drags in a lot of Lucene libraries as well. There were also some requirements when using the Transport client: the application has to run on the same JVM version as the elasticsearch instance, and the elasticsearch version in the application needs to match the version of the running cluster exactly. Therefore they have started creating a new HTTP-based client. It is going to be built in multiple layers. The low-level layer only contains the HTTP communication, a sniffer to find other nodes, and maybe some classes for basic operations. The other layers will contain a query DSL and whatever else becomes important. At the moment only the low-level layer is available. It is also the first available version, so be warned: changes may come.

In this blog post we introduce the new Java HTTP-based client. We create a basic application that interacts with an elasticsearch cluster. We start with the connection and sniffing part, then we send some data and create a search request to obtain the data again.

Setting up your java project

The sample project is a spring-boot project. I chose Maven for the dependencies, just because it is so easy. To work with the new elasticsearch client you need one dependency; if you also want to sniff for other hosts, you need a second one. The following code block shows the required dependencies.
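A minimal pom.xml fragment could look like this (the artifact names and the alpha 4 version number are assumptions based on the release at the time of writing):

```xml
<!-- the low-level REST client -->
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>rest</artifactId>
    <version>5.0.0-alpha4</version>
</dependency>
<!-- optional: the sniffer for discovering other hosts -->
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>sniffer</artifactId>
    <version>5.0.0-alpha4</version>
</dependency>
```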




Creating the connection

To create a connection you can use just one line. The goal is to create an instance of RestClient.

RestClient.builder(new HttpHost("localhost", 9200)).setFailureListener(loggingFailureListener).build();

Of course you can provide more than one host, but our goal is to use the sniffer to find the other hosts.
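Providing multiple hosts is just a matter of passing more HttpHost arguments to the builder; a sketch (the node addresses here are made up):

```java
// Hypothetical node addresses; the builder accepts any number of hosts
RestClient client = RestClient.builder(
        new HttpHost("node1", 9200),
        new HttpHost("node2", 9200),
        new HttpHost("node3", 9200))
        .build();
```

The client will round-robin requests over the configured hosts, which is exactly what the sniffer automates for us in the next section.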

Sniffing for nodes

The RestClient has an option to find other hosts in the cluster using the sniffer. This way you can provide one host and the sniffer will find all other nodes in your cluster. The following lines initialise the sniffer.

public void afterCreation() {
    this.client = RestClient
            .builder(new HttpHost("localhost", 9200)).build();
    // in alpha 4 the scheme has to be provided explicitly
    this.sniffer = Sniffer.builder(this.client,
            HostsSniffer.builder(this.client).setScheme(HostsSniffer.Scheme.HTTP).build()).build();
}

The Sniffer uses the HostsSniffer to find the other nodes in the cluster. It also maintains a blacklist of nodes that are no longer available. In alpha 4 you have to specify the Scheme; in the next version this will become optional. The next block shows the log lines indicating that more nodes were found using the sniffer.

RestClient      : request [GET] returned [HTTP/1.1 200 OK]
HostsSniffer    : adding node [EX2OSXW2QrOR0qnhgJyTIQ]
HostsSniffer    : adding node [ZhmTiqJ-QF-R6RsMYvKKwg]
HostsSniffer    : adding node [gz9vOjMaTVmyz0maxEn96w]
Sniffer         : sniffed hosts: [,,]
Sniffer         : scheduling next sniff in 300000 ms

Obtain the cluster health

The first thing we are going to try is a very basic request: obtaining the health of the cluster. We use Jackson to parse the JSON result into a Java bean. In our case the result we need is the ClusterHealth.

public class ClusterHealth {
    @JsonProperty(value = "cluster_name")
    private String clusterName;
    @JsonProperty(value = "status")
    private String status;
    @JsonProperty(value = "number_of_nodes")
    private int numberOfNodes;

    // getters and setters omitted
}

At the moment the RestClient has one method to perform a request. The following code block shows how we do a GET request with the url _cluster/health. You can provide request parameters; in this case just an empty map. You can also provide a request body; here we do not need one, therefore we set it to null. Finally you can also add headers, but in our case we do not need them.

Response response = client.performRequest(
        "GET",
        "/_cluster/health",
        new Hashtable<>(),
        null);
HttpEntity entity = response.getEntity();

Then, using the Jackson mapper, we can transform the HttpEntity into the ClusterHealth object.

ClusterHealth clusterHealth = jacksonObjectMapper.readValue(entity.getContent(), ClusterHealth.class);

Now let us move on to creating a document. We have an index called luminis. The type of the documents is ams and the id is generated by elasticsearch. At the moment the document contains just one field: employee. I take it you know what the Java bean would look like. Now let us create the request to actually create the document in elasticsearch.
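For completeness, a minimal sketch of such a bean could look like this (the class and field names simply follow the index structure described above; the getters and setters are the usual boilerplate Jackson needs):

```java
public class Employee {
    // maps to the single "employee" field in the document
    private String employee;

    public Employee() {
    }

    public Employee(String employee) {
        this.employee = employee;
    }

    public String getEmployee() {
        return employee;
    }

    public void setEmployee(String employee) {
        this.employee = employee;
    }
}
```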

HttpEntity requestBody = new StringEntity(jacksonObjectMapper.writeValueAsString(employee));
Response response = client.performRequest(
        "POST",
        "/luminis/ams",
        new Hashtable<>(),
        requestBody);

Finally query for employees

Now we are going to execute a match query. The response is the exact response from a normal search request. Therefore we create an object hierarchy that resembles the JSON structure. The classes we use look like this.

public class ResponseHits {
    private Hits hits;
}

public class Hits {
    private List<Hit> hits;
}

public class Hit {
    @JsonProperty(value = "_index")
    private String index;
    @JsonProperty(value = "_type")
    private String type;
    @JsonProperty(value = "_id")
    private String id;
    @JsonProperty(value = "_score")
    private Double score;
    @JsonProperty(value = "_source")
    private Employee source;
}

Then we can obtain all employees with the provided name using this code.

String query = "{\"query\":{\"match\":{\"employee\":\"" + employee + "\"}}}";
Response response = client.performRequest(
        "GET",
        "/luminis/ams/_search",
        new Hashtable<>(),
        new StringEntity(query));
HttpEntity entity = response.getEntity();
ResponseHits responseHits = jacksonObjectMapper.readValue(entity.getContent(), ResponseHits.class);
return responseHits.getHits().getHits().stream()
        .map(Hit::getSource)
        .collect(Collectors.toList());

Ok, writing queries like this is of course not what we would like to do, so we need the next layer with a query DSL. But for now you can see how to create a query and how to parse the result.


The start of the new client is there, and you are now able to start querying elasticsearch using it. Since this is all still alpha there can be changes, and I cannot wait to get my hands on the first higher-level layer with the query DSL.


Original issue with a lot of additional information

The sample project

Part two of this series