I am using Elasticsearch 5.2 and Spring Boot 1.5.1, and I am connecting to Elasticsearch via the Java client in a Spring app. Whether I connect on port 9300 or 9200, I get `NoNodeAvailableException: None of the configured nodes are available`. In my Java client I have set the `client.transport.sniff` property to `true`. Sending a request via cURL on port 9200 works correctly. I have 4 nodes in a single cluster and I cannot connect to any of them. My configuration file has all of the default values in the `network` section except for `network.host`, which is set to the `eth0` `inet addr`.
I am using Gradle. My dependencies are:
```
compile('org.springframework.boot:spring-boot-starter-web')
compile('org.elasticsearch:elasticsearch:5.2.0')
compile('org.elasticsearch.client:transport:5.2.0')
compile('org.apache.logging.log4j:log4j-api:2.7')
compile('org.apache.logging.log4j:log4j-core:2.7')
```
My code for connecting to the Elasticsearch cluster:
```java
@Bean
public TransportClient elasticClient() {
    org.elasticsearch.common.settings.Settings settings = Settings.builder()
            .put("client.transport.sniff", true)
            .put("cluster.name", "TestCluster")
            .build();
    TransportClient client = null;
    try {
        client = new org.elasticsearch.transport.client.PreBuiltTransportClient(settings)
                .addTransportAddress(new org.elasticsearch.common.transport.InetSocketTransportAddress(
                        InetAddress.getByName("54.175.155.56"), 9200));
    } catch (UnknownHostException e) {
        e.printStackTrace();
    }
    return client;
}
```
My ES logs when ES starts are:
```
[2017-02-15T10:37:40,664][INFO ][o.e.t.TransportService   ] [ip-10-0-29-2] publish_address {10.0.29.2:9300}, bound_addresses {10.0.29.2:9300}
[2017-02-15T10:37:40,669][INFO ][o.e.b.BootstrapChecks    ] [ip-10-0-29-2] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-02-15T10:37:43,856][INFO ][o.e.c.s.ClusterService   ] [ip-10-0-29-2] detected_master {kafka-stage}{sTIeF8gGTNam0oNW8dkbbA}{TTX6FIRtRp-gDemYY-22Sg}{10.0.20.71}{10.0.20.71:9300}, added {{kafka-stage-2}{jl3oLGgMQ1yxhdMMy65k_g}{ibV8BApjRByUOpDDncddyQ}{10.0.51.31}{10.0.51.31:9300},{ip-10-0-40-144}{t-_THs3wQbC_k9eivDo5eQ}{v-UYoYgXQ265QkdYhtiPYA}{10.0.40.144}{10.0.40.144:9300},{kafka-stage}{sTIeF8gGTNam0oNW8dkbbA}{TTX6FIRtRp-gDemYY-22Sg}{10.0.20.71}{10.0.20.71:9300},}, reason: zen-disco-receive(from master [master {kafka-stage}{sTIeF8gGTNam0oNW8dkbbA}{TTX6FIRtRp-gDemYY-22Sg}{10.0.20.71}{10.0.20.71:9300} committed version [98]])
[2017-02-15T10:37:44,009][INFO ][o.e.h.HttpServer         ] [ip-10-0-29-2] publish_address {10.0.29.2:9200}, bound_addresses {10.0.29.2:9200}
[2017-02-15T10:37:44,009][INFO ][o.e.n.Node               ] [ip-10-0-29-2] started
```
The answers to these questions don't solve my problem:
2 Answers
Answer 1
In your `elasticsearch.yml` configuration file, you need to make sure to bind to the correct host with the following setting:

```
network.host: 54.175.155.56
```
Also, in your Java code, since you're using the transport client, you need to use port 9300 (for TCP transport communication) and not 9200, which is meant for HTTP communication (e.g. via cURL):
```java
client = new org.elasticsearch.transport.client.PreBuiltTransportClient(settings)
        .addTransportAddress(new org.elasticsearch.common.transport.InetSocketTransportAddress(
                InetAddress.getByName("54.175.155.56"), 9300));  // <-- change this from 9200 to 9300
```
Answer 2
The node to which I was connecting was only a master node: it had `node.ingest` and `node.data` set to `false`. After connecting to a node that had `node.ingest` set to `true` and removing the `client.transport.sniff` setting from the Java client, I was able to connect to the ES cluster.
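For context, node roles in Elasticsearch 5.x are controlled by three settings in each node's `elasticsearch.yml`. A dedicated master-only node like the one described above would be configured roughly as follows (a sketch for illustration, not copied from the actual cluster's config):

```
# Dedicated master-eligible node: can be elected master,
# but holds no shard data and runs no ingest pipelines.
node.master: true
node.data: false
node.ingest: false
```

With sniffing enabled, the transport client discovers and routes requests to the data nodes itself; with sniffing disabled, it only talks to the nodes listed via `addTransportAddress`, so those nodes must be able to serve the requests.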