I have an Angular build and a Laravel backend providing APIs, both running on one server. I've configured them in nginx, with the frontend proxying API requests to the backend server.
The backend is running at http://api.example.com and the frontend at http://example.com (the domain is a placeholder).
Frontend config:
server {
    listen 80;
    server_name example.com;

    location /api {
        proxy_pass http://api.example.com;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }

    location / {
        root /var/www/angular/em-frontend/dist;
        index index.html index.htm;
        try_files $uri $uri/ /index.html$is_args$args;
    }
}
Backend config:
server {
    listen 80;
    server_name api.example.com;

    root /var/www/angular/em-backend/public;
    index index.php index.html index.htm;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ /index.php?$query_string;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
Now when I make any API call from the frontend, I get a 502 Bad Gateway error from nginx.
From the nginx error log:
2017/12/09 23:30:40 [alert] 5932#5932: 768 worker_connections are not enough
2017/12/09 23:30:40 [error] 5932#5932: *770 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: IP_MASKED, server: example.com, request: "GET /api/endpoint HTTP/1.1", upstream: "http://IP_ADDRESS:80/api/endpoint", host: "example.com", referrer: "http://example.com/dashboard"
Any idea how I can fix this?
2 Answers
Answer 1
You should use proxy_pass in the location block together with an upstream, like in this example:
upstream myproject {
    server ip1;
    server ip2;
    server ip3;
}

location / {
    proxy_pass http://myproject;
}
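Applied to the setup in the question, a minimal sketch of that approach might look like the following. The upstream name laravel_backend, the loopback address, and port 8081 are assumptions for illustration; the backend server block would need to listen on that internal port so the proxied request cannot land back on the frontend block:

# Sketch only: the upstream name, loopback address, and port are placeholders.
upstream laravel_backend {
    server 127.0.0.1:8081;        # Laravel backend moved to an internal port
}

server {
    listen 80;
    server_name example.com;

    location /api {
        proxy_pass http://laravel_backend;      # proxy to the named upstream, not the public hostname
        proxy_http_version 1.1;
        proxy_set_header Host api.example.com;  # so the backend server block still matches
    }

    location / {
        root /var/www/angular/em-frontend/dist;
        index index.html index.htm;
        try_files $uri $uri/ /index.html$is_args$args;
    }
}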
Answer 2
I believe your issue is that the hostname configuration creates a recursive loop: a single request is proxied back to the frontend, quickly exhausting all workers. You'll recognize this by a single request to the frontend generating many entries in the access log.
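To make that concrete, here is one plausible reading of the loop, annotated on the frontend config from the question (the step comments are my interpretation, not something nginx reports):

# Annotated sketch of how the loop can form with the posted frontend config.
server {
    listen 80;
    server_name example.com;               # 1. Browser request for example.com/api/... lands here.

    location /api {
        proxy_pass http://api.example.com; # 2. api.example.com resolves to this same server, port 80.
        proxy_set_header Host $host;       # 3. The upstream request carries Host: example.com ...
                                           # 4. ... so nginx matches this same server block again,
                                           #    re-enters location /api, and the request keeps looping
                                           #    until the worker connections are exhausted.
    }
}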
I was able to quickly recreate that error using the config you provided. Below is a modified version that strips the config down to serving two different static files from the backend server, to illustrate the minimum config required. If this works, you can add the fastcgi_pass config back in.
# Set the api domain to use an alternate port; could also just tack the port onto proxy_pass.
upstream api.example.com {
    server localhost:8081;
}

# Frontend listening on port 8080
server {
    listen 8080;
    server_name example.com;

    location /api {
        proxy_pass http://api.example.com;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }

    location / {
        root /usr/local/var/www;
        index index.html index.htm;
        try_files $uri $uri/ /index.html$is_args$args;
    }
}

# Backend listening on 8081
server {
    listen 8081;
    server_name api.example.com;

    index index.php index.html index.htm;

    location / {
        # will match any url not ending in .php
        root /usr/local/var/www;
        try_files $uri $uri/ /index.html;
    }

    location ~ \.php {
        # successfully responds to http://example.com:8080/api/*.php
        root /usr/local/var/www;
        try_files $uri $uri/ /service.html;
    }
}
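Once that minimal version works, adding the PHP handling back might look roughly like this. It is a sketch that reuses the Laravel document root and PHP-FPM socket from the original backend config in the question, with the backend now listening on 8081 as in the example above:

# Backend on 8081 with the fastcgi handling restored (sketch based on the question's config).
server {
    listen 8081;
    server_name api.example.com;

    root /var/www/angular/em-backend/public;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}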