Monday, April 16, 2018

Logstash Indexing Error - Aggregate plugin: For task_id pattern '%{id}', there are more than one filter


I am using Elasticsearch 5.5.0 and Logstash 5.5.0 on Linux (an AWS EC2 instance).

I have a logstash_etl.conf file which resides in /etc/logstash/conf.d:

input {
    jdbc {
        jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
        jdbc_user => "root"
        jdbc_password => ""
        jdbc_driver_library => "/etc/logstash/mysql-connector/mysql-connector-java-5.1.21.jar"
        jdbc_driver_class => "com.mysql.jdbc.driver"
        schedule => "*/5 * * * *"
        statement => "select * from customers"
        use_column_value => false
        clean_run => true
    }
}

filter {
    if ([api_key]) {
        aggregate {
            task_id => "%{id}"
            push_map_as_event_on_timeout => false
            #timeout_task_id_field => "[@metadata][index_id]"
            #timeout => 60
            #inactivity_timeout => 30
            code => "sample code"
            timeout_code => "sample code"
        }
    }
}

# sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-exec
output {
    if ([purge_task] == "yes") {
        exec {
            command => "curl -XPOST '127.0.0.1:9200/_all/_delete_by_query?conflicts=proceed' -H 'Content-Type: application/json' -d'
                {
                  \"query\": {
                    \"range\" : {
                      \"@timestamp\" : {
                        \"lte\" : \"now-3h\"
                      }
                    }
                  }
                }
            '"
        }
    } else {
        stdout { codec => json_lines }
        elasticsearch {
            "hosts" => "127.0.0.1:9200"
            "index" => "myindex_%{api_key}"
            "document_type" => "%{[@metadata][index_type]}"
            "document_id" => "%{[@metadata][index_id]}"
            "doc_as_upsert" => true
            "action" => "update"
            "retry_on_conflict" => 7
        }
    }
}

When I restart logstash like this:

sudo initctl restart logstash 
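
As a side note, a config file like this can also be checked for syntax errors without restarting the service, using Logstash's built-in config test (assuming the standard package layout, where settings live in /etc/logstash):

    sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit

This prints "Configuration OK" on success, or the same kind of configuration error that would otherwise only show up in logstash-plain.log after a restart.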

Inside /var/log/logstash/logstash-plain.log, everything works and actual indexing into Elasticsearch is occurring!

However, if I add another JDBC input to this config file:

input {
    jdbc {
        jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
        jdbc_user => "root"
        jdbc_password => ""
        jdbc_driver_library => "/etc/logstash/mysql-connector/mysql-connector-java-5.1.21.jar"
        jdbc_driver_class => "com.mysql.jdbc.driver"
        schedule => "*/5 * * * *"
        statement => "select * from orders"
        use_column_value => false
        clean_run => true
    }
}

The indexing stops because of an error in the config file!

Inside /var/log/logstash/logstash-plain.log:

[2018-04-06T21:33:54,123][ERROR][logstash.agent ] Pipeline aborted due to error {:exception=>#<LogStash::ConfigurationError: Aggregate plugin: For task_id pattern '%{id}', there are more than one filter which defines timeout options. All timeout options have to be defined in only one aggregate filter per task_id pattern. Timeout options are : timeout, inactivity_timeout, timeout_code, push_map_as_event_on_timeout, push_previous_map_as_event, timeout_task_id_field, timeout_tags>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-aggregate-2.6.1/lib/logstash/filters/aggregate.rb:486:in `register'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-aggregate-2.6.1/lib/logstash/filters/aggregate.rb:480:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:281:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:292:in `register_plugins'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:292:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:302:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:226:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:398:in `start_pipeline'"]}
[2018-04-06T21:33:54,146][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-04-06T21:33:57,131][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}

I am really new to Logstash and Elasticsearch...

What does this mean?

I would appreciate it if someone could tell me why just adding one new input causes this tool to crash?!

1 Answer

Answer 1

I would appreciate it if someone could tell me why just adding one new input causes this tool to crash?!

You can't add two separate input statements inside the same configuration. As the documentation says, if you want more than one input in a config file, you should declare them all inside a single input block, like this:

input {
    file {
        path => "/var/log/messages"
        type => "syslog"
    }

    file {
        path => "/var/log/apache/access.log"
        type => "apache"
    }
}
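
Applied to the JDBC inputs in the question, a sketch along the same lines (reusing the connection settings from the question, with a `type` added to each input purely as an illustration of how to tell the two event streams apart) might look like:

    input {
        jdbc {
            jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
            jdbc_user => "root"
            jdbc_password => ""
            jdbc_driver_library => "/etc/logstash/mysql-connector/mysql-connector-java-5.1.21.jar"
            jdbc_driver_class => "com.mysql.jdbc.driver"
            schedule => "*/5 * * * *"
            statement => "select * from customers"
            type => "customers"
        }

        jdbc {
            # same connection, driver, and schedule settings as above
            jdbc_connection_string => "jdbc:mysql://localhost:3306/mydatabase"
            jdbc_user => "root"
            jdbc_password => ""
            jdbc_driver_library => "/etc/logstash/mysql-connector/mysql-connector-java-5.1.21.jar"
            jdbc_driver_class => "com.mysql.jdbc.driver"
            schedule => "*/5 * * * *"
            statement => "select * from orders"
            type => "orders"
        }
    }

With both inputs in one block, the filter and output sections can then branch on `type` (e.g. `if [type] == "orders" { ... }`) so each stream gets its own handling.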