Sunday, August 21, 2016

What's the purpose of binding vip addr in every container of a service in docker 1.12?


Docker uses the NAT mode of IPVS for service load balancing, and in NAT mode the real server knows nothing about the VIP.

From my understanding, VIP is only used for communication between containers from different services, so it should only appear in the mangle table of iptables.
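One way to check where the VIP machinery actually lives is to look inside the hidden network namespaces Docker creates on a swarm node. A sketch, assuming a 1.12 swarm node with `nsenter` and `ipvsadm` installed; the namespace path and the `ingress_sbox` sandbox name are implementation details that may differ:

```shell
# List the network namespaces Docker manages on this node
# (the /var/run/docker/netns paths are an implementation detail).
ls /var/run/docker/netns

# Enter the ingress load-balancer sandbox and dump the IPVS table;
# this is where the NAT-mode virtual services are configured.
nsenter --net=/var/run/docker/netns/ingress_sbox ipvsadm -ln

# The mangle table in that same namespace marks packets destined
# for the VIP so IPVS can pick them up.
nsenter --net=/var/run/docker/netns/ingress_sbox iptables -t mangle -L -n
```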

1 Answer

I believe this is discussed right now (last week Aug. 2016) in PR 25414, where container networking in service create is initially reported as:

The containers provisioned in docker swarm mode can be accessed in service discovery either via a Virtual IP (VIP), routed through the docker swarm ingress overlay network, or via DNS round robin (DNSRR).

But Charles Smith (sfsmithcha) adds:

VIP is not on the ingress overlay network. You need to create a user-defined overlay network in order to use VIP or DNSRR. (See PR 25420)

We should not conflate ingress, which is (--publish ports) with swarm-internal overlay networking.
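Charles' distinction can be shown with a short session; a sketch, where `my-net` and `my-svc` are hypothetical names:

```shell
# Publishing a port goes through the ingress network ...
docker service create --name web --publish 8080:80 nginx

# ... while VIP-based service discovery needs a user-defined
# overlay network that the services are attached to.
docker network create --driver overlay my-net
docker service create --name my-svc --network my-net nginx

# The service's VIP on that network shows up here:
docker service inspect --format '{{json .Endpoint.VirtualIPs}}' my-svc
```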

Charles' illustration of the role of the VIP is in docs/swarm/networking.md:

Docker Engine swarm mode natively supports overlay networks, so you can enable container-to-container networks.
When you use swarm mode, you don't need an external key-value store.

Features of swarm mode overlay networks include the following:

  • You can attach multiple services to the same network.
  • By default, service discovery assigns a virtual IP address (VIP) and DNS entry to each service in the swarm, making it available by its service name to containers on the same network.
  • You can configure the service to use DNS round-robin instead of a VIP.
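The choice between the two modes is made at service creation time; a sketch, with hypothetical service and network names:

```shell
# Default: the service gets a VIP and the swarm's internal load
# balancer spreads connections across tasks (endpoint mode "vip").
docker service create --name svc-vip --network my-net nginx

# Alternative: DNS round-robin, where the service name resolves
# directly to the individual task IPs instead of a single VIP.
docker service create --name svc-dnsrr --network my-net \
  --endpoint-mode dnsrr nginx
```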

Use swarm mode service discovery

By default, when you create a service attached to a network, the swarm assigns the service a VIP. The VIP maps to a DNS alias based upon the service name. Containers on the network share DNS mappings for the service via gossip so any container on the network can access the service via its service name.

You don't need to expose service-specific ports to make the service available to other services on the same overlay network.
The swarm's internal load balancer automatically distributes requests to the service VIP among the active tasks.
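This behavior can be observed from inside any container attached to the same overlay network; a sketch, assuming a service named `my-svc` and a container with DNS tools available:

```shell
# The plain service name resolves to the single VIP ...
nslookup my-svc

# ... while the special tasks.<service-name> entry resolves to
# the individual task (container) IPs behind that VIP.
nslookup tasks.my-svc
```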

The OP insists:

Still can't get the reason why VIP is attached on the container's eth0...

Well:

  • The eth0 interface is the container interface connected to the overlay network. So if you create an overlay network, the VIPs are associated with it.
  • The eth1 interface is the container interface connected to the docker_gwbridge network, which provides external connectivity outside the container cluster.
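The two interfaces can be told apart by inspecting the addresses inside a task container; a sketch, where `<container-id>` is a placeholder:

```shell
# eth0 carries the overlay-network address (and, per the OP's
# observation, the service VIP appears bound here as well).
docker exec <container-id> ip addr show eth0

# eth1 sits on docker_gwbridge and only provides the default
# route out of the cluster.
docker exec <container-id> ip addr show eth1
docker exec <container-id> ip route
```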

Now issue 25325 is about Docker 1.12 swarm mode load balancing not working consistently, where the IPVS table is not being populated correctly.

That illustrates the role of those IPVS entries, and the bug should be fixed in 1.12.1-rc1.
