Using Logstash 2.3.4-1 on CentOS 7 with the kafka input plugin, I sometimes get this warning:
{:timestamp=>"2016-09-07T13:41:46.437000+0000", :message=>#0, :events_consumed=>822, :worker_count=>1, :inflight_count=>0, :worker_states=>[{:status=>"dead", :alive=>false, :index=>0, :inflight_count=>0}], :output_info=>[{:type=>"http", :config=>{"http_method"=>"post", "url"=>"${APP_URL}/", "headers"=>["AUTHORIZATION", "Basic ${CREDS}"], "ALLOW_ENV"=>true}, :is_multi_worker=>false, :events_received=>0, :workers=>"", headers=>{..}, codec=>"UTF-8">, workers=>1, request_timeout=>60, socket_timeout=>10, connect_timeout=>10, follow_redirects=>true, pool_max=>50, pool_max_per_route=>25, keepalive=>true, automatic_retries=>1, retry_non_idempotent=>false, validate_after_inactivity=>200, ssl_certificate_validation=>true, keystore_type=>"JKS", truststore_type=>"JKS", cookies=>true, verify_ssl=>true, format=>"json">]>, :busy_workers=>1}, {:type=>"stdout", :config=>{"ALLOW_ENV"=>true}, :is_multi_worker=>false, :events_received=>0, :workers=>"\n">, workers=>1>]>, :busy_workers=>0}], :thread_info=>[], :stalling_threads_info=>[]}>, :level=>:warn}
This is the config:
input {
  kafka {
    bootstrap_servers => "${KAFKA_ADDRESS}"
    topics => ["${LOGSTASH_KAFKA_TOPIC}"]
  }
}
filter {
  ruby {
    code => "
      require 'json'
      require 'base64'

      def good_event?(event_metadata)
        event_metadata['key1']['key2'].start_with?('good') rescue true
      end

      def has_url?(event_data)
        event_data['line'] && event_data['line'].any? { |i| i['url'] && !i['url'].blank? } rescue false
      end

      event_payload = JSON.parse(event.to_hash['message'])['payload']
      event.cancel unless good_event?(event_payload['event_metadata'])
      event.cancel unless has_url?(event_payload['event_data'])
    "
  }
}
output {
  http {
    http_method => 'post'
    url => '${APP_URL}/'
    headers => ["AUTHORIZATION", "Basic ${CREDS}"]
  }
  stdout { }
}
Oddly, the message is written to logstash.log and not logstash.err.
What does this error mean, and how can I avoid it? (Only restarting Logstash fixes it, until the next time it happens.)
1 Answer
According to this GitHub issue, your Ruby filter code could be causing the problem: any uncaught Ruby exception in a ruby filter kills the filter worker, and the pipeline stalls until Logstash is restarted. You could try wrapping your Ruby code in an exception handler that cancels the event and logs the exception somewhere (at least until Logstash is updated to log it for you).
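A minimal sketch of that rescue wrapper, using a `StubEvent` stand-in for the event object Logstash normally supplies (the stub, the `filter` helper name, and the `errors` collector are all hypothetical; inside a real ruby filter you would put only the `begin`/`rescue` body into `code =>` and write the error to a file or to the Logstash log):

```ruby
require 'json'

# Hypothetical stand-in for the Logstash event object, just enough to
# demonstrate the pattern outside of Logstash.
class StubEvent
  attr_reader :cancelled
  def initialize(message)
    @message = message
    @cancelled = false
  end

  def to_hash
    { 'message' => @message }
  end

  def cancel
    @cancelled = true
  end
end

# Same shape as the filter logic in the question, wrapped in begin/rescue
# so a bad message cancels the event instead of killing the filter worker.
def filter(event, errors = [])
  begin
    payload = JSON.parse(event.to_hash['message'])['payload']
    meta = payload['event_metadata']
    good = (meta['key1']['key2'].start_with?('good') rescue true)
    event.cancel unless good
  rescue => e
    # In a real pipeline, append this to a log file instead of an array.
    errors << "#{e.class}: #{e.message}"
    event.cancel
  end
  event
end
```

With this in place, a message that is not valid JSON is dropped and recorded rather than raising out of the filter block and taking the worker down with it.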