Hands-On with Observability: Installing Elasticsearch and Kibana

Following our previous discussion on the fundamentals of the Elastic Stack and observability, we’re ready to take our exploration to the next level. In this chapter, we shift our focus from theory to practice, diving into the setup and use of Elasticsearch and Kibana via Docker.

Before we begin, ensure you have Docker installed and running on your machine. If not, you can download it from the official Docker website.
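If you want a quick sanity check, the following commands confirm that both the Docker CLI and the Docker daemon are available (the exact version numbers will of course differ on your machine):

# verify the Docker CLI is installed
docker --version

# verify the Docker daemon is running and reachable
docker info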

Diving into Observability: The Docker-Facilitated Elasticsearch and Kibana Setup

As we embark on this journey to enhance application observability with Elasticsearch, our first task is to create an Elasticsearch cluster tailored to store and process our data. This generally involves a series of steps: installing Elasticsearch, configuring indices and mappings, and establishing data ingestion pipelines to collect and transform data from a variety of sources. Today, however, our focus is on the initial setup: installing Elasticsearch and Kibana with Docker.

Deploying Elasticsearch Using Docker CLI

Your journey into application observability with Elasticsearch starts with setting up an Elasticsearch cluster. To do this, pull the Docker image with the following command:

docker pull docker.elastic.co/elasticsearch/elasticsearch:8.7.0

Now that you have the image, you can run the following commands to start a single-node Elasticsearch cluster for development:

# create a new Docker network for Elasticsearch and Kibana
docker network create elastic

# start Elasticsearch in Docker. This generates credentials --> save them somewhere!
docker run --name es01 --net elastic -p 9200:9200 -it docker.elastic.co/elasticsearch/elasticsearch:8.7.0

# copy the security certificate from Docker to local
docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt .

# open a new terminal and verify that you can connect to your cluster
curl --cacert http_ca.crt -u elastic https://localhost:9200

Make sure you copy the generated password and enrollment token and save them in a secure location. These values are shown only when you start Elasticsearch for the first time. You’ll use these to enroll Kibana with your Elasticsearch cluster and log in.
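If you do lose them, you don’t have to start over: both values can be regenerated from inside the running container. The commands below are a minimal sketch, assuming the container is named es01 as above and uses the default image layout:

# reset the password of the built-in 'elastic' user
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic

# generate a fresh enrollment token for Kibana
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana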

Once the Elasticsearch cluster is set up and configured, developers can use tools like Kibana to build dashboards and charts that visualize the data and surface trends and issues. Kibana is a source-available data visualization and dashboarding tool for Elasticsearch.

Deploying Kibana Using Docker CLI

With the Elasticsearch cluster now set up, we’ll turn our attention to Kibana, a powerful tool for visualizing Elasticsearch data:

# start Kibana in Docker and connect it to existing Elasticsearch cluster
docker run --name kib-01 --net elastic -p 5601:5601 docker.elastic.co/kibana/kibana:8.7.0

This will start Kibana in Docker and, once it is ready, print output like the following to the terminal:

Kibana has not been configured.

Go to http://0.0.0.0:5601/?code=579012 to get started.

When you open the link, you will be prompted to configure Elastic. Enter the enrollment token and the generated password from earlier, and you will get access to Kibana.

Rolling Up Our Sleeves: Diving Into Practical Application

As we pivot from theory to practice, our narrative unfolds around two main applications: an E-commerce Website (Application A) and an Inventory Management Service (Application B).

  1. E-commerce Website (Application A): This could include features like user registration, product browsing, adding items to a cart, and purchase transactions.
  2. Inventory Management Service (Application B): This could be a back-end microservice that manages the inventory for the e-commerce website. It could have features like adding new stock, updating existing stock, marking stock as expired, etc.

Introducing Dev Tools: Our Gateway to Observability in Elasticsearch

Eager to start communicating directly with Elasticsearch? You’re in luck! By navigating to the Management → Dev Tools in Kibana, you’ll gain direct access. This powerful tool enables us to execute HTTP requests against the Elasticsearch REST API, facilitating operations like creating indices, adding documents, and running queries. We’ll now create two indices to accommodate our different logs: ‘ecommerce_app_logs’ for the E-commerce application and ‘inventory_management_logs’ for the Inventory Management service.
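Before creating anything, it’s worth running a couple of harmless requests in the console to confirm the connection works; the first returns basic cluster and version information, the second the overall cluster health:

GET /

GET _cluster/health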

Deploying Indices: Laying the Groundwork

Initiate your journey with index creation using the following commands in Dev Tools:

PUT /ecommerce_app_logs
{
  "mappings": {
    "properties": {
      "application-name": {
        "type": "text"
      },
      "timestamp": {
        "type": "date"
      },
      "log_level": {
        "type": "keyword"
      },
      "message": {
        "type": "text"
      }
    }
  }
}

PUT /inventory_management_logs
{
  "mappings": {
    "properties": {
      "application-name": {
        "type": "text"
      },
      "timestamp": {
        "type": "date"
      },
      "log_level": {
        "type": "keyword"
      },
      "message": {
        "type": "text"
      }
    }
  }
}

These commands generate the necessary environment for data ingestion, defining the properties of each document that will be inserted into these indices.
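If you want to double-check the result, you can ask Elasticsearch to return the mappings it actually stored for each index:

GET /ecommerce_app_logs/_mapping

GET /inventory_management_logs/_mapping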

However, in production environments, it’s often more scalable and maintainable to utilize index templates. Index templates provide a way to automatically set up mappings, settings, and aliases as new indices are created. By adopting index templates, you can ensure that every new index conforms to a pre-defined structure, making your data ingestion process more streamlined and consistent. This not only reduces the risk of manual configuration errors but also simplifies operations when dealing with a multitude of similar indices. If you’re eager to learn about index templates, you can explore this subject further in this blog from Luminis colleague Jettro Coenradie. Don’t worry if you prefer to stay tuned here, as index templates will also be covered later in this series.
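To give you a feel for what that looks like, here is a minimal sketch of a composable index template; the template name and the *_logs index pattern are illustrative choices for this example, not something created elsewhere in this setup:

PUT _index_template/app_logs_template
{
  "index_patterns": ["*_logs"],
  "template": {
    "mappings": {
      "properties": {
        "application-name": { "type": "text" },
        "timestamp": { "type": "date" },
        "log_level": { "type": "keyword" },
        "message": { "type": "text" }
      }
    }
  }
}

With this template in place, any newly created index whose name matches *_logs would automatically receive the same mapping we defined by hand above.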

Feeding Data: Populating Our Indices

Now that we’ve created these indices, we can start adding documents (logs) to them.

POST /ecommerce_app_logs/_doc
{
  "application-name": "ecommerce_app",
  "timestamp": "2023-07-24T14:00:23",
  "log_level": "INFO",
  "message": "User 'john_doe' successfully logged in"
}

POST /inventory_management_logs/_doc
{
  "application-name": "inventory_management",
  "timestamp": "2023-07-24T10:15:00",
  "log_level": "INFO",
  "message": "New shipment of 'Samsung Galaxy S20' arrived. Quantity: 100"
}
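If you want to index several log entries in one request, the _bulk API is handy; the sketch below sends one extra document to each index, with sample log messages invented purely for illustration:

POST _bulk
{ "index": { "_index": "ecommerce_app_logs" } }
{ "application-name": "ecommerce_app", "timestamp": "2023-07-24T14:05:10", "log_level": "ERROR", "message": "Payment failed for order 'ORD-1042'" }
{ "index": { "_index": "inventory_management_logs" } }
{ "application-name": "inventory_management", "timestamp": "2023-07-24T10:20:00", "log_level": "WARN", "message": "Stock level for 'Samsung Galaxy S20' dropped below 10" }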

As we progress, we’ll explore automated ways of populating these indices, particularly through tools such as Filebeat and Logstash.

Peeking at Our Data: Exploring the Discover Tab

Congratulations on taking your first steps toward indexing and data ingestion. To view your logs, head over to the Analytics → Discover tab in Kibana. All your logs appear here, but should you need to focus on a specific application, simply add a filter. This not only organizes your data but also sets the stage for more advanced data manipulation techniques we’ll delve into in the future.
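The same filtering can also be done directly from Dev Tools if you prefer queries over clicks; as a small example, this request returns only the INFO-level entries from the e-commerce index:

GET /ecommerce_app_logs/_search
{
  "query": {
    "match": {
      "log_level": "INFO"
    }
  }
}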

Assessing Our Initial Journey Into Observability

How did your journey into observability start? Was your path clear and straightforward, or did you grapple with unexpected obstacles? Either way, remember that every hurdle overcome is a stepping stone toward mastery. We’d love to hear your stories, your triumphs, and even your challenges. So, feel free to share your experiences in the comments below!

And don’t forget: our exploration into the world of Elasticsearch-driven observability is just beginning. In our next post, we’ll expand our knowledge, shifting from the ingestion of data to its management. We’ll delve deeper into managing the index life-cycle and ensuring our data remains organized and accessible, even as it grows. So, stay tuned for more hands-on guidance in our observability series. Together, let’s unlock the power of data and transform how we see our applications.