Working with Log Pipelines

Overview

Log Pipeline is a feature that parses and enriches unstructured, raw log data into structured, meaningful log data (e.g., JSON format). Log data is run through a series of Processor Groups and Processors: when a log matches the filter conditions of a Processor Group, that group's Processors are executed on the log sequentially until finished.
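To make the parsing step concrete, here is a minimal sketch (not the actual vMonitor implementation) of what turning a raw log line into structured data looks like, using a hypothetical nginx access-log line as sample input:

```python
import json
import re

# A raw, unstructured nginx access-log line (hypothetical sample data).
raw_log = '192.0.2.10 - - [10/Oct/2024:13:55:36 +0700] "GET /index.html HTTP/1.1" 200 2326'

# A regex that extracts the meaningful fields from the raw line.
pattern = re.compile(
    r'(?P<client_ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d+) (?P<bytes>\d+)'
)

match = pattern.match(raw_log)
structured = match.groupdict()                    # unstructured text -> dict of named fields
structured["status"] = int(structured["status"])  # enrichment: convert types
structured["bytes"] = int(structured["bytes"])

print(json.dumps(structured, indent=2))           # structured, meaningful log data
```

The output is a JSON object with named fields (client_ip, timestamp, method, path, status, bytes) instead of an opaque text line, which is exactly the kind of transformation a Log Pipeline performs.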

Components

  • Log Pipeline: contains Processor Groups that parse or enrich unstructured, raw log data into structured, meaningful log data (e.g., JSON format).

  • Processor Group: specifies where to read log data from (the source log project), where to store parsed data (the destination log project), and which logs will be parsed when its filter is satisfied.

  • Processor: a library that parses and enriches data; Processors live inside a Processor Group.
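The relationship between these three components can be sketched as simple Python classes. The names, fields, and routing logic below are illustrative assumptions for explanation only, not the platform's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

# A Processor is modeled here as a function that parses/enriches one log record.
Processor = Callable[[Dict], Dict]


@dataclass
class ProcessorGroup:
    """Filters logs and runs its Processors sequentially on matching logs."""
    filter_field: str
    filter_value: str
    destination: str                          # destination log project
    processors: List[Processor] = field(default_factory=list)

    def matches(self, log: Dict) -> bool:
        return log.get(self.filter_field) == self.filter_value

    def run(self, log: Dict) -> Dict:
        for processor in self.processors:     # processors execute in order
            log = processor(log)
        return log


@dataclass
class LogPipeline:
    """Routes each log from the source project through matching groups."""
    source: str                               # source log project
    groups: List[ProcessorGroup] = field(default_factory=list)

    def process(self, log: Dict) -> List[Tuple[str, Dict]]:
        results = []
        for group in self.groups:
            if group.matches(log):
                results.append((group.destination, group.run(dict(log))))
        return results
```

A log that matches no group's filter simply produces no output, mirroring the behavior described above: only logs satisfying a group's filter are parsed by that group's Processors.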


Log Pipeline models

Log Pipeline 1: the most basic model has 1 source log project and 1 destination log project, as described below. It suits the case where your raw logs contain only one type of log data and you want to parse it into a single destination log project.

  • Source Log Project: stores the raw, unstructured logs that you need to parse.

  • Destination Log Project: stores the structured logs, i.e., data that has been parsed into structured form after running through the Log Pipeline system.

  • The Log Pipeline contains only 1 Processor Group, with the filter source:nginx, meaning that only logs with a source:nginx field will run through the Processor Group for parsing.

  • The Processor Group contains 3 Processors that parse the log into structured data and save it to the destination log project.
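To illustrate how 3 Processors run sequentially inside a Processor Group, here is a hedged sketch. The three steps (parse, type conversion, enrichment) are illustrative assumptions; the actual Processors depend on your configuration:

```python
# Three illustrative processors applied in order, as a Processor Group would
# run them on a matching log.

def parse_message(log):
    # Step 1: split the raw message into named fields.
    method, path, status = log["message"].split()
    return {**log, "method": method, "path": path, "status": status}

def convert_types(log):
    # Step 2: convert the status code from text to a number.
    return {**log, "status": int(log["status"])}

def enrich(log):
    # Step 3: enrich the log with a derived field.
    return {**log, "is_error": log["status"] >= 400}

log = {"source": "nginx", "message": "GET /index.html 404"}
for processor in (parse_message, convert_types, enrich):
    log = processor(log)

print(log)  # structured log, ready for the destination log project
```

Note that order matters: each Processor receives the output of the previous one, so enrichment can rely on fields created by earlier parsing steps.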

Log Pipeline 2: a more complex model has 1 source log project whose data is parsed into 2 destination log projects, as described below. It suits the case where your raw logs contain many different types of log data and you want to parse and store them in different destination log projects.

  • Source Log Project: stores the raw, unstructured logs that you need to parse.

  • Destination Log Project: stores the structured logs after they run through the Log Pipeline system. There are 2 destination log projects, each receiving different logs: for example, destination log project 1 receives parsed nginx log data, and destination log project 2 receives parsed apache log data.

  • The Log Pipeline contains 2 Processor Groups, with the filters source:nginx and source:apache. Log lines that match a group's filter are parsed by that group.

  • Processor Group: there are 2 corresponding Processor Groups, so each log type is parsed appropriately and stored in its own destination log project.
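The routing behavior of model 2 can be sketched as follows. The filter representation and destination project names are simplified assumptions for illustration:

```python
from typing import Dict, Optional

# Each Processor Group is reduced here to (filter value, destination project).
groups = [
    ("nginx", "destination-log-project-1"),
    ("apache", "destination-log-project-2"),
]

def route(log: Dict) -> Optional[str]:
    """Return the destination project of the first group whose filter matches."""
    for source_value, destination in groups:
        if log.get("source") == source_value:
            return destination
    return None  # no group matched; the log is not parsed

print(route({"source": "nginx"}))   # destination-log-project-1
print(route({"source": "apache"}))  # destination-log-project-2
```

Logs whose source field matches neither filter fall through without being parsed, which is why the filters of the two groups should together cover every log type you care about.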


Log pipeline limitations

Log pipeline naming rules

The following rules apply to Log pipeline naming in vMonitor Platform:

  • The Log pipeline name must be between 1 (minimum) and 63 (maximum) characters long.

  • Log pipeline names can only include lowercase letters (a-z), numbers (0-9), and hyphens (-).

  • Log pipeline names must begin with a letter and end with a letter or a number.

  • The Log pipeline name should not contain sensitive information (e.g., IP addresses, account names, login passwords, etc.).

  • The Log pipeline name must be unique within a VNG Cloud account until that Log pipeline is deleted.
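The length and character rules above can be checked locally with a single regular expression. This is only a convenience sketch; the platform performs its own validation (and the uniqueness rule can only be checked server-side):

```python
import re

# 1-63 characters, lowercase letters/digits/hyphens only,
# must begin with a letter and end with a letter or a digit.
NAME_RE = re.compile(r"^[a-z](?:[a-z0-9-]{0,61}[a-z0-9])?$")

def is_valid_pipeline_name(name: str) -> bool:
    return NAME_RE.fullmatch(name) is not None

print(is_valid_pipeline_name("nginx-access-logs"))    # True
print(is_valid_pipeline_name("1-starts-with-digit"))  # False
print(is_valid_pipeline_name("ends-with-hyphen-"))    # False
```

The same expression also applies to Processor group and Processor names, since the rules below are identical.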

Processor group naming rules

The following rules apply to Processor group naming in vMonitor Platform:

  • The Processor group name must be between 1 (minimum) and 63 (maximum) characters long.

  • Processor group names can only include lowercase letters (a-z), numbers (0-9), and hyphens (-).

  • Processor group names must begin with a letter and end with a letter or a number.

  • The Processor group name should not contain sensitive information (e.g., IP addresses, account names, login passwords, etc.).

  • The Processor group name must be unique within a VNG Cloud account until that Processor group is deleted.

Processor naming rules

The following rules apply to Processor naming in vMonitor Platform:

  • The Processor name must be between 1 (minimum) and 63 (maximum) characters long.

  • Processor names can only include lowercase letters (a-z), numbers (0-9), and hyphens (-).

  • Processor names must begin with a letter and end with a letter or a number.

  • The Processor name should not contain sensitive information (e.g., IP addresses, account names, login passwords, etc.).

  • The Processor name must be unique within a VNG Cloud account until that Processor is deleted.


Create a Log pipeline

To create a log pipeline, follow the instructions below:

  1. Select the Log folder, then select the Log pipeline menu.

  2. Select Create a Log pipeline.

  3. Enter the Pipeline name. The pipeline name must comply with the rules described above.

  4. Enter a Pipeline description, if desired.

  5. Select Create.


Edit Log pipeline

To edit a log pipeline, follow the instructions below:

  1. Log in to https://hcm-3.console.vngcloud.vn/vmonitor. If you don't have an account, you can register for free.

  2. Select the Log folder.

  3. Select Log pipeline.

  4. In the list of existing log pipelines, at the Log pipeline you want to edit, select .

  5. Select Edit pipeline.

  6. Edit the Log pipeline parameters as desired. You can edit all information fields in a Log pipeline configuration; editing works the same way as creating a new Log pipeline, described above.

  7. Select Save.


Delete Log pipeline

When you no longer need a custom Log pipeline, you can delete it from the system by following the instructions below:

  1. Log in to https://hcm-3.console.vngcloud.vn/vmonitor. If you don't have an account, you can register for free.

  2. Select the Log folder.

  3. Select Log pipeline.

  4. At the Log pipeline you want to delete, select Delete.

  5. At the Log pipeline deletion confirmation screen, select Delete.

After a successful deletion, your Log pipeline is completely removed from our system. You cannot restore a deleted Log pipeline, so be careful when using this feature.
