Logging is an essential part of any service operation. Unlike monitoring metrics, which keep track of regular performance data, logs register events that are occasional but important: access logs for a web service, for example, or error conditions that interfere with normal system operation.


The information contained in logs is critical for maintaining and improving an existing system, as it helps find out what caused a particular event.


However, in a modern multi-server environment, tracking down a clue can be a difficult task. First, you need to figure out which server to search on, and then plow through the immense volumes of raw material that log files contain. Moreover, these files often have dissimilar formats and are stored in different places. Thankfully, there is now a range of efficient, ready-to-use tools for log aggregation. They turn raw logs into properly organized, easily searchable data and provide:


  • a single storage location for logs from different services;
  • a single human-readable format for disparate log files;
  • advanced features such as parsing, search, and indexing.


At SHALB, we aggregate logs by means of the ELK stack, which stands for Elasticsearch, Logstash, and Kibana. Each of the three components is a separate open-source product. Elasticsearch is a NoSQL search engine designed for high-load projects. Logstash collects data from servers, filters it, and uploads it to Elasticsearch, which serves as searchable storage. Kibana is the user interface that visualizes the resulting analytics. United in the ELK stack, these tools provide an efficient solution for a variety of tasks related to gathering, storing, and analyzing log data.
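
To give a sense of how these pieces fit together, here is a minimal Logstash pipeline sketch: it accepts log events shipped by Beats agents, parses web-server access lines with a grok filter, and writes the result into a dated Elasticsearch index. The host address, port, and index name are placeholders chosen for illustration, not our actual configuration.

```
# Minimal Logstash pipeline sketch: receive, parse, and index access logs.
# Hostnames, ports, and the index name are illustrative placeholders.

input {
  beats {
    port => 5044                # Filebeat agents on the servers ship logs here
  }
}

filter {
  grok {
    # Parse standard combined access-log lines (Apache/Nginx) into fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    # Use the timestamp from the log line as the event time
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "access-logs-%{+YYYY.MM.dd}"   # one index per day
  }
}
```

With data flowing into such indices, Kibana can be pointed at the matching index pattern (here, access-logs-*) to search and visualize the parsed fields.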