Showing posts with label google-compute-engine. Show all posts

Friday, January 19, 2018

“Heavy” simultaneous users Nginx - Laravel - Google compute engine

Leave a Comment

I'm running a medium, mostly-static website with Laravel on nginx, and I'm load testing it with, for example, 500 simultaneous users under constant load for 1 minute (not users distributed across that minute).

And getting this error:

unix:/var/run/php/php7.1-fpm.sock failed - Resource temporarily unavailable

nginx.conf

worker_processes auto;

events {
    use epoll;
    worker_connections 1524; # in my case it should be 1024, but well..
    multi_accept on;
}

http {
    # with this I reduce disk usage a lot
    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    large_client_header_buffers 2 1k;
    reset_timedout_connection on;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

www.conf

pm.max_children = 500
pm.start_servers = 20
pm.min_spare_servers = 20
pm.max_spare_servers = 64

Results with Google compute engine:

  • f1-micro (1 vCPU, 0.6 GB): sustains 40-60 requests per second
  • g1-small (1 vCPU, 1.7 GB): sustains 80 requests per second
  • n1-standard-1 (1 vCPU, 3.75 GB): sustains 130 requests per second
  • n1-standard-2 (2 vCPU, 7.5 GB): sustains 250 requests per second
  • ...
  • n1-standard-16 (16 vCPU, 60 GB): sustains 840 requests per second

The last one is the first that passes the test; the rest return Bad Gateway errors somewhere between 200 and 400 users.

If I test with, for example, 2,000 users distributed over 30 seconds on the micro instance, then it's fine; it only fails when the requests are sent simultaneously.

Starting with 2 cores, CPU levels look perfectly fine, as do disk operations, etc.

So after a lot of tests I have some questions:

1) Is this normal? It doesn't seem normal to me to need 16 cores to run a simple web app. Or is the stress test too heavy, and this is expected?

2) Am I missing something? Is Google limiting requests per second somehow?

3) What would be normal parameters for the given config files?

Any other help is more than welcome.

2 Answers

Answers 1

TBH, it is not entirely clear what you're trying to achieve with this test, especially with bringing GCE into the equation.

If your "medium static web" site is doing a dozen SQL queries for each page, possibly with a few JOINs each, as well as various other resource intensive operations, then it is hardly a surprise that you're very far away from achieving C10K.

Your test results across the various GCE instances look reasonably consistent, which points to your code being to blame. If you want to rule out GCE as the cause of your performance issues, the next logical step would be to test the performance outside of it.

It seems that you're most concerned with receiving the Bad Gateway errors on the cheaper instances, so, let's figure out why that happens.

  • Your backend is only capable of processing a certain number of requests in a given amount of time: on the order of a few dozen per second on the cheapest plan.

  • It is configured without a clear spec of what should happen once resources are exhausted. With the configuration at hand, you can only push about 40 requests per second on the cheapest instance, yet Laravel is set up to process 500 requests simultaneously on 1 vCPU with 0.6 GB of total RAM. That leaves each request about 1 MB of RAM, which is at the very low end for a "medium static web" powered by a dynamic framework: an impedance mismatch.

  • It is then hardly a surprise that you're getting errors. As backpressure builds up from this impedance mismatch, the backend likely runs out of RAM trying to process the never-ending requests.

So, what is the solution?

The solution is to have a clear understanding of how many resources are required to generate each page on the backend, and then limit the number of simultaneous connections from the reverse proxy to the backend so that it never exceeds that number, with http://nginx.org/r/limit_req and/or http://nginx.org/r/limit_conn, as appropriate. This way you can catch and monitor overload conditions, provide an appropriate error message to the user, and/or script automatic dynamic resizing of your infrastructure.
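A minimal sketch of such limits (the zone names, sizes, and thresholds below are illustrative, not tuned values from the question):

```nginx
http {
    # Allow each client IP about 30 requests/second, tracked in a 10 MB zone.
    limit_req_zone $binary_remote_addr zone=perip_req:10m rate=30r/s;
    # Track concurrent connections per client IP.
    limit_conn_zone $binary_remote_addr zone=perip_conn:10m;

    server {
        location / {
            # Queue short bursts; reject the rest instead of piling onto PHP-FPM.
            limit_req zone=perip_req burst=20 nodelay;
            limit_conn perip_conn 10;
            # Return 429 so overload is distinguishable from a real 5xx.
            limit_req_status 429;
            limit_conn_status 429;
        }
    }
}
```

Rejected requests then fail fast at nginx rather than queueing on the php-fpm socket until it reports "Resource temporarily unavailable".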

In addition to the above, another good idea is to cache the results of your backend, provided that what's generated is actually "static" content without per-user customisation. Caching accounts for the realistic situation where a link to your site is posted on Slashdot/Reddit/Twitter, causing a huge spike of traffic to a single "static" page, which can then be cached for the duration of the whole event. Otherwise, if the content is not actually "static", it's up to you to decide which way to go and which compromise to take: I'd suggest checking whether the per-request customisations are actually warranted, and whether an uncustomised version might be appropriate, especially for the Slashdot-like scenarios.
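If the pages really are cacheable, a fastcgi_cache sketch along those lines might look like this (the path, zone name, and validity times are illustrative):

```nginx
http {
    fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=pages:10m
                       max_size=100m inactive=10m;

    server {
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;

            fastcgi_cache pages;
            fastcgi_cache_key "$scheme$request_method$host$request_uri";
            fastcgi_cache_valid 200 1m;  # each page is rendered at most once a minute
            # During a spike, let one request refresh the entry while
            # everyone else is served the cached copy.
            fastcgi_cache_lock on;
            fastcgi_cache_use_stale updating error timeout;
        }
    }
}
```

With a 1-minute validity, even a Slashdot-sized spike hits PHP roughly once per page per minute; everything else is served from the cache.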

Answers 2

On a machine with 2 vCPU and 7 GB RAM I can handle more than 1,000 requests/second. You didn't mention how much RAM each request needs. I also suggest changing the PHP socket to a TCP connection; that allowed me to process 10x the requests.
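The socket-to-TCP change suggested above would look roughly like this (the pool file path and port are illustrative):

```ini
; /etc/php/7.1/fpm/pool.d/www.conf
; Listen on a local TCP port instead of the unix socket:
listen = 127.0.0.1:9000
```

and in the nginx server block, `fastcgi_pass 127.0.0.1:9000;` replaces `fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;`. Whether TCP actually outperforms the unix socket is workload-dependent; it's worth benchmarking both under the same load.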

Read More

Wednesday, November 8, 2017

Allow WebSockets in Google Compute Engine (GCE)

Leave a Comment

I'm using Compute Engine (GCE) to run my socket server with Socket.IO (Node.js)

It's only working with polling. When I try to use a web client I receive this error code:

WebSocket connection to 'ws://myapp-socket.appspot.com/socket.io/?EIO=3&transport=websocket&sid=Tt4uNFR2fU82zsCIAADo' failed: Unexpected response code: 400  

What am I doing wrong? Is it GCE configuration problem?

1 Answers

Answers 1

You cannot use the myapp-socket.appspot.com domain in your script when using WebSockets. Instead, you need to use the external IP of the GCE instance and connect directly to it, opening any firewall ports you may be using.

I believe traffic going to the appspot.com domain also passes through frontend web servers, and Socket.IO needs a direct connection to the server.
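The firewall side of that can be a single rule (the rule name and port are examples, not values from the question):

```shell
# Allow inbound traffic to the port the Socket.IO server listens on.
gcloud compute firewall-rules create allow-socketio \
    --allow tcp:3000 \
    --source-ranges 0.0.0.0/0
```

The client then connects to `ws://EXTERNAL_IP:3000/socket.io/...` rather than the appspot.com domain.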

Read More

Sunday, January 22, 2017

GCE Third Party for automation (snapshots/images etc…)

Leave a Comment

I am new to Google Compute Engine and I want to do automatic image/snapshot backups every X hours.

Previously I used Amazon EC2 instances and did automatic backups with a third-party tool called Skeddly (a UI that, given a few fields, automates these backups).

Now I would like to find a third-party tool that does something similar for GCE instances.

I know it is possible with gcloud commands or PowerShell, but I would like to do it through a UI (third-party tool), if one exists.
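For reference, the gcloud route mentioned here can be as small as one scheduled command (the disk name, zone, and naming scheme are illustrative):

```shell
# Snapshot a persistent disk; run this from cron every X hours.
gcloud compute disks snapshot my-disk \
    --zone us-central1-b \
    --snapshot-names "my-disk-$(date +%Y%m%d-%H%M)"
```

A third-party UI would essentially be wrapping this call with a scheduler and retention rules.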

What would you recommend?

Thanks in advance.

0 Answers

Read More

Sunday, September 18, 2016

'EntryPoint' object has no attribute 'resolve' when using Google Compute Engine

Leave a Comment

I have an issue related to the cryptography package in Python. Can you please help resolve it, if possible? (I tried a lot, but couldn't figure out the exact solution.)

The python code which initiates this error:

print("Salt: %s" % salt)
server_key = pyelliptic.ECC(curve="prime256v1")                        # ----->> Line2
print("Server_key: %s" % server_key)                                   # ----->> Line3
server_key_id = base64.urlsafe_b64encode(server_key.get_pubkey()[1:])

http_ece.keys[server_key_id] = server_key
http_ece.labels[server_key_id] = "P-256"
encrypted = http_ece.encrypt(data, salt=salt, keyid=server_key_id,
            dh=self.receiver_key, authSecret=self.auth_key)            # ----->> Line8

The value of "Salt" is displayed in 100% of the cases.

If Line3 executes successfully, I see the following EntryPoint error from the http_ece.encrypt() call (Line8):

AttributeError("'EntryPoint' object has no attribute 'resolve'",) 

(Ref. file link: https://github.com/martinthomson/encrypted-content-encoding/blob/master/python/http_ece/__init__.py#L128 )

requirements.txt (partial):

cryptography==1.5
pyelliptic==1.5.7
pyOpenSSL==16.1.0

Running the command sudo pip freeze --all | grep setuptools gives: setuptools==27.1.2

Please let me know if any more detail is required.

This problem seems to be due to old/incompatible packages (pyelliptic, cryptography, pyOpenSSL and/or setuptools) installed on the VM. For reference: https://github.com/pyca/cryptography/issues/3149

Can someone please suggest a good solution to resolve this issue completely?

Thanks,

2 Answers

Answers 1

The issue referenced in c66303382 has this traceback (you never gave your traceback so I have to assume yours ends the same way):

File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/__init__.py", line 35, in default_backend
    _default_backend = MultiBackend(_available_backends())
File "/usr/local/lib/python2.7/dist-packages/cryptography/hazmat/backends/__init__.py", line 22, in _available_backends
    "cryptography.backends"

The full line that triggers the error looks like this:

_available_backends_list = [
    ep.resolve()
    for ep in pkg_resources.iter_entry_points(
        "cryptography.backends"
    )
]

Searching the repository for the EntryPoint definition, then running git blame on pkg_resources/__init__.py where it lives, reveals that pkg_resources.EntryPoint.resolve() was added in commit 92a553d3adeb431cdf92b136ac9ccc3f2ef98bf1 (2015-01-05), which went into setuptools v11.3.

Thus you'll see this error if you use an older version.
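A quick way to check which side of that cutoff a given environment is on (a small diagnostic sketch, not part of the original answer):

```python
import pkg_resources

# setuptools >= 11.3 provides EntryPoint.resolve(); on older versions only
# the deprecated EntryPoint.load() exists, and cryptography's ep.resolve()
# call fails with the AttributeError quoted above.
has_resolve = hasattr(pkg_resources.EntryPoint, "resolve")
print("setuptools:", pkg_resources.get_distribution("setuptools").version)
print("EntryPoint.resolve available:", has_resolve)
```

If this prints `False`, upgrading setuptools (e.g. `pip install --upgrade setuptools`) should clear the error.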

Answers 2

Running the following commands from the project path /opt/projects/myproject-google/myproject resolved the EntryPoint AttributeError:

(Assuming project virtual env path as: /opt/projects/myproject-google/venv)

Command: (from path: /opt/projects/myproject-google/myproject)

export PYTHONPATH=      # [blank]
sudo pip install --upgrade virtualenv setuptools
sudo rm -rf ../venv
sudo virtualenv ../venv
source ../venv/bin/activate
sudo pip install --upgrade -r requirements.txt
deactivate

Running the above commands upgraded the virtual environment and the setuptools version inside it, located at /opt/projects/myproject-google/venv/lib/python2.7/site-packages. To test whether setuptools has upgraded successfully, try some of these commands:

  1. Command: sudo virtualenv --version
     Output: 15.0.3
  2. Command: echo $PYTHONPATH
     Output: [blank]
  3. Command: python -c 'import pkg_resources; print(pkg_resources.__file__)'
     Output: ~/.local/lib/python2.7/site-packages/pkg_resources/__init__.pyc
  4. Command: python -c 'import sys; print(sys.path)'
     Output: ['', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '~/.local/lib/python2.7/site-packages', '/usr/local/lib/python2.7/dist-packages', '/opt/projects/myproject-google/myproject', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PILcompat']
  5. Command: ls /opt/projects/myproject-google/venv/lib/python2.7/site-packages
     Output: easy_install.py easy_install.pyc pip pip-8.1.2.dist-info pkg_resources setuptools setuptools-27.2.0.dist-info wheel wheel-0.30.0a0.dist-info
  6. Command: python -c 'from cryptography.hazmat.backends import default_backend; print(default_backend())'
     Output: <cryptography.hazmat.backends.multibackend.MultiBackend object at 0x7ff83a838d50>
  7. Command: /opt/projects/myproject-google/venv/bin/python -c 'from cryptography.hazmat.backends import default_backend; print(default_backend())'
     Output: Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: No module named cryptography.hazmat.backends
  8. Command: /opt/projects/myproject-google/venv/bin/python -c "import pkg_resources; print(pkg_resources.__file__)"
     Output: /opt/projects/myproject-google/venv/local/lib/python2.7/site-packages/pkg_resources/__init__.pyc

Ref Link: https://github.com/pyca/cryptography/issues/3149

These steps resolved the EntryPoint attribute issue completely, with updated versions of the cryptography package and setuptools.

Update: As of 15 September 2016, the Cryptography team has added the workaround back to support old packages too. (Ref. link: https://github.com/pyca/cryptography/issues/3150 )

Read More

Tuesday, March 29, 2016

Access external client IP from behind Google Compute Engine network load balancer

Leave a Comment

I am running a Ruby on Rails app (using Passenger in Nginx mode) on Google Container Engine. These pods are sitting behind a GCE network load balancer. My question is how to access the external client IP from inside the Rails app.

The GitHub issue here seems to present a solution, so I ran the suggested commands:

for node in $(kubectl get nodes -o name | cut -f2 -d/); do
  kubectl annotate node $node \
    net.beta.kubernetes.io/proxy-mode=iptables;
  gcloud compute ssh --zone=us-central1-b $node \
    --command="sudo /etc/init.d/kube-proxy restart";
done

but I am still getting a REMOTE_ADDR header of 10.140.0.1.

Any ideas on how I could get access to the real client IP (for geolocation purposes)?

0 Answers

Read More