
Thursday, September 13, 2018

DNS and nginx server setup problem, slow server and 502 response


I'm setting up a new server with Ubuntu 18.04.1, Nginx 1.14.0 and PHP 7.2.7. Everything works fine except a test page where I set up a lot of broken links to missing images.

It seems to take forever for the server to realise they are missing and respond to the HTTP request. Some missing files give an HTTP status of 404 and some give 502. What causes these delays and 502 errors? Did I do something wrong in the nginx or PHP configuration?

On my old server I have the exact same page (https://vuyk.eu/portfolio-2), which loads very quickly. There must be a difference in server setup that I would like to track down.

Edit: After a suggestion from Dayo I did some tests. It seems to be a DNS problem. When I remove the line "listen [::]:443 ssl http2;" from the nginx server conf file, the problem is gone. Still, why would this be a problem?

Edit: When accessing the IP 2a03:b0c0:0:1010::190:6001 through a browser, there is a certificate mismatch notification. This is strange, because the nginx server setup (see contents listed below) points both IPv4 and IPv6 to the same certificate.

Edit: So the server doesn't recognize the IPv6 address as test.vuyk.eu, yet accessing the IPv4 address https://37.139.19.66 immediately shows https://test.vuyk.eu.

The zone file records:

AAAA    test.vuyk.eu    directs to 2a03:b0c0:0:1010::190:6001    3600
A       test.vuyk.eu    directs to 37.139.19.66                  3600

Dayo suggested the hosts file might be a problem; here are its contents:

127.0.0.1 localhost
::1 localhost
2a03:b0c0:0:1010::190:6001 localhost
127.0.1.1 vuykhost2.vuyk.eu

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

The nginx server configuration; when I remove the line "listen [::]:443 ssl http2;", everything works fine:

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    ssl_certificate /etc/letsencrypt/live/test.vuyk.eu/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/test.vuyk.eu/privkey.pem;
    include snippets/ssl-params.conf;

    server_name test.vuyk.eu;
    root /var/www/vuyk.eu/webroot;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi.conf;
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
    }
}

nginx.conf

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 2048;
    multi_accept on;
}

http {
    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    # keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip             on;
    gzip_comp_level  2;
    gzip_min_length  1000;
    gzip_proxied     expired no-cache no-store private auth;
    gzip_types       text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    client_max_body_size 100m;
    large_client_header_buffers 4 8k;
    fastcgi_buffers 16 16k;
    fastcgi_buffer_size 32k;
    fastcgi_read_timeout 500; # gateway problem
    client_body_timeout 12;
    client_header_timeout 12;
    keepalive_timeout 25;
    send_timeout 10;
}

The php app I use is Joomla 3.8.11 with a custom script to show a custom error page:

header("HTTP/1.0 404 Not Found");

echo file_get_contents('https://test.vuyk.eu/404-page-not-found');

exit;

After removing file_get_contents there are no errors anymore. However, I'm wondering why, as it used to work fine on my old server (see the edit above about DNS). Also, I need this script to properly return an HTTP status 404 and show a custom error page without changing the address bar.

Part of the nginx error.log:

2018/08/30 16:25:27 [error] 29228#29228: *76 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 2a02:a440:91e3:1:4481:654b:a3e8:9617, server: test.vuyk.eu, request: "GET /images/klanten1/JHoogeveen.gif HTTP/2.0", upstream: "fastcgi://unix:/run/php/php7.2-fpm.sock:", host: "test.vuyk.eu", referrer: "https://test.vuyk.eu/portfolio-2" 

Messages from the php7.2-fpm.log (there are a lot of similar lines):

[30-Aug-2018 16:16:08] WARNING: [pool www] server reached pm.max_children setting (15), consider raising it
[30-Aug-2018 16:16:27] WARNING: [pool www] child 29026, script '/var/www/vuyk.eu/webroot/index.php' (request: "GET /index.php") execution timed out (22.937711 sec), terminating
[30-Aug-2018 16:16:27] WARNING: [pool www] child 29245 exited on signal 15 (SIGTERM) after 20.490546 seconds from start
[30-Aug-2018 16:16:27] NOTICE: [pool www] child 29263 started

The timeline of HTTP requests and replies below shows the requests for non-existent files and the server's responses; some give a 404, which is good, and some give a 502 Bad Gateway (on my old server they are all 404s):

GET https://test.vuyk.eu/portfolio-2 [HTTP/2.0 200 OK 132ms]
GET https://test.vuyk.eu/templates/purity_iii/css/bootstrap.css [HTTP/2.0 200 OK 40ms]
GET https://test.vuyk.eu/templates/system/css/system.css [HTTP/2.0 200 OK 50ms]
GET https://test.vuyk.eu/templates/purity_iii/css/template.css [HTTP/2.0 200 OK 50ms]
GET https://test.vuyk.eu/templates/purity_iii/fonts/font-awesome/css/font-awesome.min.css [HTTP/2.0 200 OK 50ms]
GET https://test.vuyk.eu/templates/purity_iii/css/layouts/corporate.css [HTTP/2.0 200 OK 50ms]
GET https://test.vuyk.eu/media/jui/js/jquery.min.js?48b6d1b3850bca834b403c58682b4579 [HTTP/2.0 200 OK 60ms]
GET https://test.vuyk.eu/media/jui/js/jquery-noconflict.js?48b6d1b3850bca834b403c58682b4579 [HTTP/2.0 200 OK 60ms]
GET https://test.vuyk.eu/media/jui/js/jquery-migrate.min.js?48b6d1b3850bca834b403c58682b4579 [HTTP/2.0 200 OK 60ms]
GET https://test.vuyk.eu/media/system/js/caption.js?48b6d1b3850bca834b403c58682b4579 [HTTP/2.0 200 OK 70ms]
GET https://test.vuyk.eu/plugins/system/t3/base-bs3/bootstrap/js/bootstrap.js?48b6d1b3850bca834b403c58682b4579 [HTTP/2.0 200 OK 80ms]
GET https://test.vuyk.eu/plugins/system/t3/base-bs3/js/jquery.tap.min.js [HTTP/2.0 200 OK 80ms]
GET https://test.vuyk.eu/plugins/system/t3/base-bs3/js/script.js [HTTP/2.0 200 OK 70ms]
GET https://test.vuyk.eu/plugins/system/t3/base-bs3/js/menu.js [HTTP/2.0 200 OK 70ms]
GET https://test.vuyk.eu/templates/purity_iii/js/script.js [HTTP/2.0 200 OK 70ms]
GET https://test.vuyk.eu/plugins/system/t3/base-bs3/js/nav-collapse.js [HTTP/2.0 200 OK 70ms]
GET https://test.vuyk.eu/templates/purity_iii/css/custom-vuyk.css [HTTP/2.0 200 OK 70ms]
GET https://test.vuyk.eu/images/klanten1/schipper2.gif [HTTP/2.0 502 Bad Gateway 23988ms]
GET https://test.vuyk.eu/images/klanten1/Kuiper.gif [HTTP/2.0 502 Bad Gateway 24038ms]
GET https://test.vuyk.eu/images/klanten1/WindMatch.gif [HTTP/2.0 502 Bad Gateway 24008ms]
GET https://test.vuyk.eu/images/klanten1/Tuinland.gif [HTTP/2.0 502 Bad Gateway 24018ms]
GET https://test.vuyk.eu/images/klanten1/Wezenberg.gif [HTTP/2.0 502 Bad Gateway 24038ms]
GET https://test.vuyk.eu/images/klanten1/Morgenster.gif [HTTP/2.0 502 Bad Gateway 23998ms]
GET https://test.vuyk.eu/images/klanten1/Harrie-boerhof.gif [HTTP/2.0 502 Bad Gateway 24028ms]
GET https://test.vuyk.eu/images/klanten1/Lococensus.gif [HTTP/2.0 502 Bad Gateway 23998ms]
GET https://test.vuyk.eu/images/klanten1/JHoogeveen.gif [HTTP/2.0 502 Bad Gateway 23978ms]
GET https://test.vuyk.eu/images/klanten1/DeDeur.gif [HTTP/2.0 502 Bad Gateway 23988ms]
GET https://test.vuyk.eu/images/klanten1/Runhaar.gif [HTTP/2.0 502 Bad Gateway 23958ms]
GET https://test.vuyk.eu/images/klanten1/Schunselaar-schildersbedrijf.gif [HTTP/2.0 502 Bad Gateway 23948ms]
GET https://test.vuyk.eu/images/klanten1/Capelle.gif [HTTP/2.0 502 Bad Gateway 23958ms]
GET https://test.vuyk.eu/images/klanten1/Distantlake.gif [HTTP/2.0 502 Bad Gateway 24038ms]
GET https://test.vuyk.eu/images/klanten1/Eikenaar.gif [HTTP/2.0 502 Bad Gateway 24018ms]
GET https://test.vuyk.eu/images/klanten1/FFWD.gif [HTTP/2.0 404 Not Found 26274ms]
GET https://test.vuyk.eu/images/klanten1/Veltec.gif [HTTP/2.0 404 Not Found 26791ms]
GET https://test.vuyk.eu/images/klanten1/Heutink.gif [HTTP/2.0 404 Not Found 26811ms]
GET https://test.vuyk.eu/images/klanten1/Lindeboom.gif [HTTP/2.0 404 Not Found 26777ms]
GET https://test.vuyk.eu/images/klanten1/aataxi.gif [HTTP/2.0 404 Not Found 26828ms]
GET https://test.vuyk.eu/images/klanten1/Aewind.gif [HTTP/2.0 404 Not Found 26811ms]
GET https://test.vuyk.eu/images/klanten1/Praatmaatgroep.gif [HTTP/2.0 404 Not Found 26800ms]
GET https://test.vuyk.eu/media/system/css/system.css [HTTP/2.0 200 OK 20ms]
JQMIGRATE: Migrate is installed, version 1.4.1 jquery-migrate.min.js:2:542
GET https://test.vuyk.eu/images/logo.gif [HTTP/2.0 200 OK 20ms]
GET https://test.vuyk.eu/images/reclame-en-communicatie.gif [HTTP/2.0 200 OK 20ms]
GET https://test.vuyk.eu/fonts/opensans-regular-webfont.woff [HTTP/2.0 200 OK 40ms]
GET https://test.vuyk.eu/templates/purity_iii/fonts/font-awesome/fonts/fontawesome-webfont.woff2?v=4.7.0 [HTTP/2.0 200 OK 70ms]

fastcgi.conf

fastcgi_param  PATH_TRANSLATED    $document_root$fastcgi_path_info;
fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
fastcgi_param  PATH_INFO          $fastcgi_path_info;
fastcgi_param  QUERY_STRING       $query_string;
fastcgi_param  REQUEST_METHOD     $request_method;
fastcgi_param  CONTENT_TYPE       $content_type;
fastcgi_param  CONTENT_LENGTH     $content_length;

fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
fastcgi_param  REQUEST_URI        $request_uri;
fastcgi_param  DOCUMENT_URI       $document_uri;
fastcgi_param  DOCUMENT_ROOT      $document_root;
fastcgi_param  SERVER_PROTOCOL    $server_protocol;
fastcgi_param  REQUEST_SCHEME     $scheme;
fastcgi_param  HTTPS              $https if_not_empty;

fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
fastcgi_param  SERVER_SOFTWARE    nginx/$nginx_version;

fastcgi_param  REMOTE_ADDR        $remote_addr;
fastcgi_param  REMOTE_PORT        $remote_port;
fastcgi_param  SERVER_ADDR        $server_addr;
fastcgi_param  SERVER_PORT        $server_port;
fastcgi_param  SERVER_NAME        $server_name;

# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param  REDIRECT_STATUS    200;

php.ini

[PHP]

engine = On
short_open_tag = Off
precision = 14
output_buffering = 4096
zlib.output_compression = Off
implicit_flush = Off
unserialize_callback_func =
serialize_precision = -1
disable_functions = pcntl_alarm,pcntl_fork,pcntl_waitpid,pcntl_wait,pcntl_wifexited,pcntl_wifstopped,pcntl_wifsignaled,pcntl_wifcontinued,pcntl_wexitstatus,pcntl_wtermsig,pcntl_wstopsig,pcntl_signal,pcntl_signal_get_handler,pcntl_signal_dispatch,pcntl_get_last_error,pcntl_strerror,pcntl_sigprocmask,pcntl_sigwaitinfo,pcntl_sigtimedwait,pcntl_exec,pcntl_getpriority,pcntl_setpriority,pcntl_async_signals,
disable_classes =
zend.enable_gc = On
expose_php = Off
max_execution_time = 30
max_input_time = 60
memory_limit = 128M
error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT
display_errors = Off
display_startup_errors = Off
log_errors = On
log_errors_max_len = 1024
ignore_repeated_errors = Off
ignore_repeated_source = Off
report_memleaks = On
html_errors = On
variables_order = "GPCS"
request_order = "GP"
register_argc_argv = Off
auto_globals_jit = On
post_max_size = 28M
auto_prepend_file =
auto_append_file =
default_mimetype = "text/html"
default_charset = "UTF-8"
doc_root =
user_dir =
enable_dl = Off
cgi.fix_pathinfo=1
file_uploads = On
upload_max_filesize = 20M
max_file_uploads = 20
allow_url_fopen = On
allow_url_include = Off
default_socket_timeout = 60
cli_server.color = On
date.timezone = "Europe/Amsterdam"

[Pdo_mysql]
pdo_mysql.cache_size = 2000
pdo_mysql.default_socket=

[mail function]
SMTP = localhost
smtp_port = 25
mail.add_x_header = Off

[ODBC]
odbc.allow_persistent = On
odbc.check_persistent = On
odbc.max_persistent = -1
odbc.max_links = -1
odbc.defaultlrl = 4096
odbc.defaultbinmode = 1

[Interbase]
ibase.allow_persistent = 1
ibase.max_persistent = -1
ibase.max_links = -1
ibase.timestampformat = "%Y-%m-%d %H:%M:%S"
ibase.dateformat = "%Y-%m-%d"
ibase.timeformat = "%H:%M:%S"

[MySQLi]
mysqli.max_persistent = -1
mysqli.allow_persistent = On
mysqli.max_links = -1
mysqli.cache_size = 2000
mysqli.default_port = 3306
mysqli.default_socket =
mysqli.default_host =
mysqli.default_user =
mysqli.default_pw =
mysqli.reconnect = Off

[mysqlnd]
mysqlnd.collect_statistics = On
mysqlnd.collect_memory_statistics = Off

[PostgreSQL]
pgsql.allow_persistent = On
pgsql.auto_reset_persistent = Off
pgsql.max_persistent = -1
pgsql.max_links = -1
pgsql.ignore_notice = 0
pgsql.log_notice = 0

[bcmath]
bcmath.scale = 0

[Session]
session.save_handler = files
session.use_strict_mode = 0
session.use_cookies = 1
session.use_only_cookies = 1
session.name = PHPSESSID
session.auto_start = 0
session.cookie_lifetime = 0
session.cookie_path = /
session.cookie_domain =
session.cookie_httponly =
session.serialize_handler = php
session.gc_probability = 0
session.gc_divisor = 1000
session.gc_maxlifetime = 1440
session.referer_check =
session.cache_limiter = nocache
session.cache_expire = 180
session.use_trans_sid = 0
session.sid_length = 26
session.trans_sid_tags = "a=href,area=href,frame=src,form="
session.sid_bits_per_character = 5

[Assertion]
zend.assertions = -1

[mbstring]
mbstring.func_overload = 0

[Tidy]
tidy.clean_output = Off

[soap]
soap.wsdl_cache_enabled=1
soap.wsdl_cache_dir="/tmp"
soap.wsdl_cache_ttl=86400
soap.wsdl_cache_limit = 5

[ldap]
ldap.max_links = -1

3 Answers

Answers 1

Here's what's happening.

Your error log says: server reached pm.max_children setting (15), consider raising it.

So the pm.max_children limit of 15 means PHP-FPM will stop launching processes once the pool has 15 processes running, and any further requests that come in will be queued until one of the previous processes ends.
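For reference, that limit lives in the PHP-FPM pool configuration. A minimal sketch, assuming the stock Ubuntu 18.04 layout (the path and the numbers are illustrative, not a recommendation):

; /etc/php/7.2/fpm/pool.d/www.conf
pm = dynamic
pm.max_children = 40      ; hard cap on concurrent PHP worker processes
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 15

Raising the cap only buys headroom, though; the real fix is below.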

You are using a PHP script to generate a 404 page; you then load a page with a load of broken links, and your Nginx try_files directive ends with a PHP script:

try_files $uri $uri/ /index.php?$args;

From the Nginx docs that means:

If none of the files were found, an internal redirect to the uri specified in the last parameter is made.

So every broken link adds an extra PHP process to the queue. If you count the 502 errors in your log you'll see there are 15: for each missing image Nginx falls back to /index.php?$args, which tries to display a 404 page that, guess what, is also generated in PHP, and now everything is broken.

Fifteen processes that can't return a 404 because the process limit has been reached, and each of them needs another process to generate the 404 page; so until they time out, no more processes for you.

The whole idea of serving a 404 page this way is crazy anyway. It's a static page, so you should be serving it from Nginx, because web servers are really, really good at delivering static content fast. Passing the request to PHP, which in turn requests the page from your own server again, makes absolutely no sense.

Download your custom page to a file:

curl -o /var/www/vuyk.eu/webroot/404.html https://test.vuyk.eu/404-page-not-found 

Now add an error page directive in your Nginx conf:

error_page 404 /404.html; 

and now you have Nginx serving a custom error page without changing the client URL and with practically no load on your server.
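For completeness, a minimal sketch of how those pieces sit together in the existing server block; marking the location as internal is a common companion so the error page cannot be requested directly:

server {
    ...
    error_page 404 /404.html;

    location = /404.html {
        internal;  # only reachable via the error_page internal redirect
    }
}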

Answers 2

It seems that the images you are trying to load are unavailable, so the requests are passed to PHP, where the 404 page is generated. Your custom 404 page fetches its content via HTTP:

echo file_get_contents('https://test.vuyk.eu/404-page-not-found');  

If this fetch is slow, your script might execute for a very long time, which can lead to timeouts. It can also result in requests piling up and draining server resources.

Try replacing this fetch with something faster; for example, read or include the 404 page data directly from the filesystem.
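For instance, a minimal sketch of the custom script, assuming you have saved a static copy of the error page as 404.html in the web root (the file name and location are hypothetical):

header("HTTP/1.0 404 Not Found");
// read the pre-rendered page from disk instead of making an HTTP round-trip to ourselves
echo file_get_contents($_SERVER['DOCUMENT_ROOT'] . '/404.html');
exit;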

Answers 3

There are two broad possibilities as to why this is slow on the new server.

  1. Problems with Webserver / PHP
  2. Problems with DNS

To troubleshoot, open a command line on your server and try to fetch a missing file using wget or cURL. If you get a response as fast as you expect, then you most likely have an issue with your web server / PHP. If it is also slow, then the issue is with the DNS setup on your new server.
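For example, run something like this on the server itself and compare the timing against the same request made from outside (the image URL is just one of the missing files from the page above):

# print only the status code and the total request time
curl -sS -o /dev/null -w '%{http_code} %{time_total}s\n' https://test.vuyk.eu/images/klanten1/Kuiper.gif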

In any case, it appears that using file_get_contents for external URLs can lead to funky results. (Yes, the files are on your server, but since you pass a full URL, the request is treated like an external one.)

So instead of ...

echo file_get_contents('https://test.vuyk.eu/404-page-not-found');  

use

echo file_get_contents('/server/path/to/404-page-not-found'); 

If you can't do this because 404-page-not-found is not a physical file and has to be run through Joomla to be generated, then why not use cURL instead? It is designed specifically for 'external' files.

function curlFile($url)
{
    $ch = curl_init();

    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_URL, $url);

    $ret = curl_exec($ch);
    curl_close($ch);

    return $ret;
}

echo curlFile('https://test.vuyk.eu/404-page-not-found');

Note that if you found a DNS issue, you will still need to resolve it regardless.


Friday, July 27, 2018

Gradle Spring Boot Custom Configuration


I have an application which uses a MySQL database to persist information. I would like to create a version of this application which uses an embedded database (mariaDB4j) and add it as a service to our CI environment, so that when we launch this embedded version with our end-to-end tests, the QA team gets a clean database.

I read a lot about this online, and it looks like Gradle configurations are the way to go. The closest I found on this was:

sourceSets {
    qaci {
        java {
            srcDir 'src/qa/java'
        }
        compileClasspath += sourceSets.main.runtimeClasspath
        compileClasspath += sourceSets.main.resources
    }
}

configurations {
    qaciCompile.extendsFrom compile
}

bootRepackage {
    customConfiguration = myCustomConfig
}

Unfortunately, bootRepackage was replaced by bootJar. I'm using the Gradle Spring Boot plugin 2.0.1.RELEASE, and when I try to use bootJar.customConfiguration I get an error saying that this is an unknown property.
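For reference, the closest equivalent I can see is appending the extra source set's runtime classpath to the bootJar task (a sketch, not verified against 2.0.1; note that classpath(...) appends to, rather than replaces, the default classpath):

bootJar {
    // append the qaci source set's output and dependencies to the archive's classpath
    classpath sourceSets.qaci.runtimeClasspath
}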

Also, mariaDB4j requires a configuration class to work properly, so I've included it under src/qa/java and created a new source set so that it gets added.

Does anyone know how to tell the Gradle Spring Boot plugin to use a custom configuration?

2 Answers

Answers 1

Have you thought about using Spring profiles?

https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-profiles.html

It will allow you to define a different set of values, beans or configuration sets for a handful of profiles (e.g. development, QA, production), and you can use them to run your app with different configurations or different libraries.

@Configuration
@Profile("production")
public class ProductionConfiguration {
}

This class will load all the values you set for that profile in your .properties or .yml file.

And then, to run the app, you can pass the profile you want to use as a command-line parameter:

--spring.profiles.active=dev 
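For example, assuming an executable jar named app.jar (the name is illustrative):

java -jar app.jar --spring.profiles.active=dev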

The configuration for your MariaDB will be under a particular profile.

For example (if you are using YAML):

spring:
  profiles: QA
  datasource:
    url: jdbc:mariadb://qaserver:1234;databaseName=qaDatabase
    username: USER
    password: PASSWORD
    driverClassName: org.mariadb.jdbc.Driver
    # additional mariadb properties, configs

---
spring:
  profiles: production
  datasource:
    url: jdbc:sqlserver://proddatabase:1433;databaseName=production
    username: USER
    password: PASSWORD
    driverClassName: com.microsoft.sqlserver.jdbc.SQLServerDriver

Classes and properties belonging to other profiles will not be processed.

Answers 2

The database files will be created automatically in the local path you provide, so you can restore the data when restarting or handing the environment over to QA.

# Location of db files; delete this directory if you need to recreate from scratch
mariaDB4j.dataDir=./data/local

# Default port is 3306, so use 3307 in case MariaDB is already running on this machine
mariaDB4j.port=3307
app.mariaDB4j.databaseName=app_alpha
spring.datasource.url=jdbc:mariadb://localhost:3307/
spring.datasource.username=root
spring.datasource.password=
spring.datasource.driver-class-name=org.mariadb.jdbc.Driver
spring.jpa.database-platform=org.hibernate.dialect.MySQL5InnoDBDialect

You can refer to this for more information: https://objectpartners.com/2017/06/19/using-mariadb4j-for-a-spring-boot-embedded-database/


Friday, March 23, 2018

Why does Visual Studio Code show User Settings instead of running a PHP file?


I noticed that in Visual Studio Code there is a menu item called "Start Without Debugging" under the "Debug" menu. When I have a PHP file open, I expected this to run the PHP file through the PHP executable and give me the output. Instead, when I click on "Start Without Debugging", the User Settings page shows up. Why does the User Settings page show up? It's not clear why this page is presented to me. Does it want me to configure something? How do I get it to just run the PHP file that I have open through the PHP executable? Is this even possible?

I noticed in the Default Settings there is a property called "php.validate.executablePath" that is set to null. I tried overriding this setting in my User Settings by pointing it to the path of my PHP executable like this:

{
    "php.validate.executablePath": "/usr/bin/php"
}

But that didn't solve anything. The User Settings page still shows up when I click "Start Without Debugging".

1 Answers

Answers 1

After doing more research, I found the solution to my problem. Based on this section in the vscode docs and this comment that mentions creating a global launch configuration, all you have to do is add a launch object to your User Settings JSON.

In my case, I added this to my User Settings:

"launch": {     "version": "0.2.0",     "configurations": [         {         "type": "php",         "request": "launch",         "name": "Launch Program",         "program": "${file}",         "runtimeExecutable": "/usr/bin/php"         }     ] } 

Your value for runtimeExecutable may be different depending on the path to your PHP executable.

Read More

Tuesday, February 20, 2018

View Complete WCF response on error using Visual Studio 2010

Leave a Comment

In Visual Studio 2010 I get an error which shows me the first 1024 bytes of a response from a consumed WCF service, but no more.

I would really like to see the entire response so I can work out what is going wrong, where can I get this info from? Is there a way of logging the full text of an error or are they all limited by the 1024 byte rule?

How can I view more than 1024 bytes of a WCF response when an error occurs in Visual Studio 2010?

2 Answers

Answers 1

If you are doing this in debugging mode, where you have the exact steps pre-identified, you could try setting maxReceivedMessageSize to a large value to see if it helps.

As the description says on the docs:

maxReceivedMessageSize

A positive integer that specifies the maximum message size, in bytes, including headers, that can be received on a channel configured with this binding. The sender of a message exceeding this limit will receive a SOAP fault. The receiver drops the message and creates an entry of the event in the trace log. The default is 65536.

In your case, it might have been set to a lower value.

You could also check whether maxBufferPoolSize has been set correctly; it seems that only one buffer's worth of 1024 bytes is being transmitted back, which is possible if someone set the pool size to 1 instead of the default 512.
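As an untested sketch, both settings live on the binding element in app.config/web.config (the binding name is hypothetical, and the 10 MB values are arbitrary):

<bindings>
    <basicHttpBinding>
        <binding name="largeMessages"
                 maxReceivedMessageSize="10485760"
                 maxBufferSize="10485760"
                 maxBufferPoolSize="10485760" />
    </basicHttpBinding>
</bindings>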

Answers 2

Updated:

Use the SvcConfigEditor.exe tool to enable tracing and message logging in the WCF configuration (app.config or web.config), and use the SvcTraceViewer.exe tool to view the resulting large XML log files.

For instance, you can use the web.config settings below; note the initializeData attribute of the trace listener.

<system.serviceModel>
    <diagnostics>
        <messageLogging logEntireMessage="true" logMalformedMessages="true"
                        logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" />
    </diagnostics>
</system.serviceModel>
<system.diagnostics>
    <sources>
        <source name="System.ServiceModel" switchValue="Information, ActivityTracing" propagateActivity="true">
            <listeners>
                <add name="traceListener" type="System.Diagnostics.XmlWriterTraceListener" initializeData="C:\Temp\SvcLog\Traces.svclog" />
            </listeners>
        </source>
    </sources>
</system.diagnostics>

Saturday, January 6, 2018

Encode and decode pathname in nginx


Normally files can be accessed at:

http://example.com/cats/cat1.zip

I want to encode/encrypt the pathname (/cats/cat1.zip) so that the link is not normally accessible but accessible after the pathname is encrypted/encoded:

http://example.com/Y2F0cy9jYXQxLnppcAo=

I'm using base64 encoding above for simplicity but would prefer encryption. How do I go about doing this? Do I have to write a custom module?

4 Answers

Answers 1

If your only concern is limiting access to certain URLs you may take a look at this post on Securing URLs with the Secure Link Module in Nginx.

It provides a fairly simple method for securing your files; the most basic and simple way to encrypt your URLs is by using the secure_link_secret directive:

server {
    listen 80;
    server_name example.com;

    location /cats {
        secure_link_secret yoursecretkey;
        if ($secure_link = "") { return 403; }

        rewrite ^ /secure/$secure_link;
    }

    location /secure {
        internal;
        root /path/to/secret/files;
    }
}

The URL to access the cat1.zip file will be http://example.com/cats/80e2dfecb5f54513ad4e2e6217d36fd4/cat1.zip, where 80e2dfecb5f54513ad4e2e6217d36fd4 is the MD5 hash computed on a text string that concatenates two elements:

  1. The part of the URL that follows the hash, in our case cat1.zip
  2. The parameter to the secure_link_secret directive, in this case yoursecretkey

The above example also assumes the files accessible via the encrypted URLs are stored in the /path/to/secret/files/secure directory.
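To illustrate, the hash from the example URL can be reproduced on the command line by hashing exactly that concatenation (the values are taken from the example above):

echo -n 'cat1.zipyoursecretkey' | md5sum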

Additionally, there is a more flexible, but also more complex, method for securing URLs with the ngx_http_secure_link_module module: using the secure_link and secure_link_md5 directives you can limit URL access by IP address, define an expiration time for the URLs, etc.

If you need to completely obscure your URLs (including the cat1.zip part), you'll need to make a decision between:

  1. Handling the decryption of the encrypted URL on the Nginx side — writing your own, or reusing a module written by someone else
  2. Handling the decryption of the encrypted URL somewhere in your application — basically using Nginx to proxy your encrypted URLs to your application, where you decrypt them and act accordingly, as @cnst describes in another answer.

Both approaches have pros and cons, but IMO the latter is simpler and more flexible — once you set up your proxy you don't need to worry much about Nginx or compile it with special prerequisites, and there is no need to write or compile code in a language other than what you are already writing in your application (unless your application includes code in C, Lua or Perl).

Here's an example of a simple Nginx/Express application where you'd handle the decryption within your application. The Nginx configuration might look like:

server {
    listen 80;
    server_name example.com;

    location /cats {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://127.0.0.1:8000;
    }

    location /path/to/secured/files {
        internal;
    }
}

and on the application (Node.js/Express) side you may have something like:

const express = require('express');
const app = express();

app.get('/cats/:encrypted', function(req, res) {
    const encrypted = req.params.encrypted;

    //
    // Your decryption logic here
    //
    const decryptedFileName = decryptionFunction(encrypted);

    if (decryptedFileName) {
        res.set('X-Accel-Redirect', `/path/to/secured/files/${decryptedFileName}`);
    } else {
        // return error
    }
});

app.listen(8000);

The above example assumes that the secured files are located at /path/to/secured/files directory. Also it assumes that if the URL is accessible (properly encrypted) you are sending the files for download, but the same logic would apply if you need to do something else.

Answers 2

The easiest way would be to write a simple backend (interfaced through proxy_pass, for example) that decrypts the filename from $uri and provides the result in the X-Accel-Redirect response header (which is subject to proxy_ignore_headers in nginx). That header triggers an internal redirect within nginx to a location that cannot be accessed without going through the backend first, and the file is then served with all the optimisations that are already part of nginx.

location /sec/ {
    proxy_pass http://decryptor/;
}
location /x-accel-redirect-here/ {
    internal;
    alias …;
}

The above approach follows the ‘microservices’ architecture, in that your decryptor service's only job is to perform decryption and access control, leaving it up to nginx to ensure files are served correctly and in the most efficient way possible through the use of the internal specially-treated X-Accel-Redirect HTTP response header.

Answers 3

Consider using something like OpenResty with Lua.

Lua can do almost everything you want in nginx.

https://openresty.org/

https://github.com/openresty/

Answers 4

You can use an Nginx rewrite rule to rewrite the URL (from encoded to unencoded), and to apply your decoding logic you can use a custom function (I did it with the Perl module).

Could be something like this:

http {
    ...
    perl_modules perl/lib;
    ...
    perl_set $uri_decode 'sub {
        my $r = shift;
        my $uri = $r->uri;
        $uri = perl_magic_to_decode_the_url;
        return $uri;
    }';
    ...
    server {
        ...
        location /your-protected-urls-regex {
            rewrite ^(.*)$ $scheme://$host$uri_decode;
        }
    }
}

Monday, October 16, 2017

Tomcat webservice configuration


I have made a web service (XML-RPC) built on Tomcat 8.5.16; it digitally signs the data sent to it and saves it in MySQL (or MariaDB). It runs fine on Windows (without security). Now I want to deploy it on CentOS (with SSL security), but it always produces the errors addressed in: Failed to initialize end point associated with ProtocolHandler and: Tomcat mariadb connection configuration.

To understand the problem:

  1. I have made a simple XML-RPC web service (sum of 2 numbers) and it runs correctly.
  2. I have made a Java application that accesses the database and it runs correctly.

I couldn't pin down the problem in the main app. Could you please help me?

1 Answers

Answers 1

It looks like your Tomcat cannot open network sockets due to permission problems. It could be the firewall and/or SELinux.

You are using CentOS. Did you open your firewall?

firewall-cmd --permanent --zone=public --add-service=http
firewall-cmd --permanent --zone=public --add-service=https
firewall-cmd --reload

Also try turning off SELinux enforcement for a moment with

setenforce 0

When these changes are made, restart Tomcat.

I also suggest creating a virtual CentOS image and trying it there first.


Tuesday, September 19, 2017

How to configure a physical naming strategy in hibernate.cfg.xml?


I’m learning Java and Hibernate. Right now, I’m having trouble understanding how to use a custom physical naming strategy: While the PhysicalNamingStrategy object is indeed instantiated, the toPhysicalTableName or toPhysicalColumnName methods are never called – not that I can see with a debugger, at least.

Versions: Java 1.8, Hibernate 5.2.10.Final, on macOS 10.12.

Here’s a minimal project:

@Entity
public class Cake {
    @Id
    private long id;
    private String name;
    private String FLAVOUR;
    private int sErViNg;

    public Cake(String name, String flavour, int serving) {
        this.name = name;
        this.FLAVOUR = flavour;
        this.sErViNg = serving;
    }

    // getters and setters
}

public class Main {

    public static void main(String[] args) {
        Transaction tx = null;

        try (
                SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory();
                Session session = sessionFactory.openSession();
        ) {
            tx = session.beginTransaction();

            Cake cake = new Cake("Molten Chocolate Cake", "chocolate", 1);
            session.save(cake);

            tx.commit();
        }
        catch (Exception e) {
            e.printStackTrace();
            if (tx != null) {
                tx.rollback();
            }
        }
    }
}

public class AllCapsPhysicalNamingStrategy
    extends PhysicalNamingStrategyStandardImpl implements Serializable {

    public static final AllCapsPhysicalNamingStrategy INSTANCE
        = new AllCapsPhysicalNamingStrategy();

    @Override
    public Identifier toPhysicalTableName(Identifier name, JdbcEnvironment context) {
        return new Identifier(name.getText().toUpperCase(), name.isQuoted());
    }

    @Override
    public Identifier toPhysicalColumnName(Identifier name, JdbcEnvironment context) {
        return new Identifier(name.getText().toUpperCase(), name.isQuoted());
    }
}

<hibernate-configuration>
    <session-factory>
        <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
        <property name="hibernate.connection.url">jdbc:mysql://localhost/cake</property>
        <property name="hibernate.connection.username">root</property>
        <property name="hibernate.connection.password"></property>
        <property name="hibernate.dialect">org.hibernate.dialect.MySQL5Dialect</property>
        <property name="hibernate.hbm2ddl.auto">create</property>
        <property name="hibernate.physical_naming_strategy">com.example.AllCapsPhysicalNamingStrategy</property>
        <mapping class="com.example.Cake"/>
    </session-factory>
</hibernate-configuration>

Here’s the table I get:

[cake]> SELECT * FROM cake;
+----+-----------+-----------------------+---------+
| id | FLAVOUR   | name                  | sErViNg |
+----+-----------+-----------------------+---------+
|  0 | chocolate | Molten Chocolate Cake |       1 |
+----+-----------+-----------------------+---------+

I would expect:

+----+-----------+-----------------------+---------+
| ID | FLAVOUR   | NAME                  | SERVING |
+----+-----------+-----------------------+---------+
|  0 | chocolate | Molten Chocolate Cake |       1 |
+----+-----------+-----------------------+---------+

What am I doing wrong here?

3 Answers

Answers 1

This isn't very well documented but unfortunately it seems Hibernate doesn't support that particular property being set in hibernate.cfg.xml. To quote from a very old Hibernate forum post:

You can set the properties given in the Environment.java class only in hibernate.properties or hibernate.cfg.xml. The rest of the properties, like NamingStrategy, have to be configured with the Configuration class.

So I would recommend removing the property and instead setting this in code on the Configuration instance, as proposed by Shiv Raghuwanshi.

Answers 2

You can set it on the Configuration object as well:

public class Main {

    public static void main(String[] args) {
        Transaction tx = null;

        Configuration configuration = new Configuration();
        configuration.setPhysicalNamingStrategy(new AllCapsPhysicalNamingStrategy());

        try (
                SessionFactory sessionFactory = configuration.configure().buildSessionFactory();
                Session session = sessionFactory.openSession();
        ) {
            tx = session.beginTransaction();

            Cake cake = new Cake("Molten Chocolate Cake", "chocolate", 1);
            session.save(cake);

            tx.commit();
        }
        catch (Exception e) {
            e.printStackTrace();
            if (tx != null) {
                tx.rollback();
            }
        }
    }
}

Answers 3

There is nothing wrong with your configuration. It is just that bootstrapping Hibernate using the Configuration object requires you to set some of the config properties on the Configuration object itself; settings like this one specified via properties get ignored.

Also, bootstrapping Hibernate using the Configuration object is considered the "legacy" way (as per the official Hibernate docs); the newer, recommended way of bootstrapping Hibernate is shown below.

public static void main(String[] args) {
    Transaction tx = null;

    StandardServiceRegistry standardRegistry = new StandardServiceRegistryBuilder()
            .configure() // using "hibernate.cfg.xml"
            .build();
    Metadata metadata = new MetadataSources(standardRegistry).buildMetadata();
    try (
            SessionFactory sessionFactory = metadata.getSessionFactoryBuilder().build();
            Session session = sessionFactory.openSession();
    ) {
        tx = session.beginTransaction();

        Cake cake = new Cake("Molten Chocolate Cake", "chocolate", 1);
        session.save(cake);

        tx.commit();
    } catch (Exception e) {
        e.printStackTrace();
        if (tx != null) {
            tx.rollback();
        }
    }
}

This will pick up the physical naming strategy specified as a Hibernate property in the hibernate.cfg.xml file.


Monday, August 8, 2016

Gmail API configuration issue (in Java)


Here is my Gmail service configuration/factory class:

import java.io.File;
import java.io.IOException;
import java.security.GeneralSecurityException;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.env.Environment;

import com.google.api.client.auth.oauth2.Credential;
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.http.HttpRequestInitializer;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.gmail.Gmail;
import com.google.api.services.gmail.GmailScopes;

public class GmailServiceFactoryBean {

    private @Autowired Environment env;

    private final NetHttpTransport transport;
    private final JacksonFactory jacksonFactory;

    public GmailServiceFactoryBean() throws GeneralSecurityException, IOException {
        this.transport = GoogleNetHttpTransport.newTrustedTransport();
        this.jacksonFactory = JacksonFactory.getDefaultInstance();
    }

    public Gmail getGmailService() throws IOException, GeneralSecurityException {
        return new Gmail.Builder(transport, jacksonFactory, getCredential())
                .setApplicationName(env.getProperty("gmail.api.application.name")).build();
    }

    private HttpRequestInitializer getCredential() throws IOException, GeneralSecurityException {
        File p12File = new File(this.getClass().getClassLoader().getResource("google-key.p12").getFile());

        Credential credential = new GoogleCredential.Builder()
            .setServiceAccountId(env.getProperty("gmail.api.service.account.email"))
            .setServiceAccountPrivateKeyId(env.getProperty("gmail.api.private.key.id"))
            .setServiceAccountPrivateKeyFromP12File(p12File)
            .setTransport(transport)
            .setJsonFactory(jacksonFactory)
            .setServiceAccountScopes(GmailScopes.all())
            //.setServiceAccountUser(env.getProperty("gmail.api.user.email"))
            .build();

        credential.refreshToken();

        return credential;
    }

}

Here is my inner mailing service that uses previous bean under the hood:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.List;
import java.util.Properties;

import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMessage.RecipientType;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.env.Environment;
import org.springframework.stereotype.Service;

import com.google.api.client.repackaged.org.apache.commons.codec.binary.Base64;
import com.google.api.services.gmail.Gmail;
import com.google.api.services.gmail.model.Message;
import com.example.factory.GmailServiceFactoryBean;
import com.example.service.MailService;
import com.example.service.exception.MailServiceException;

@Service
public class MailServiceImpl implements MailService {

    private @Autowired GmailServiceFactoryBean gmailServiceFactoryBean;
    private @Autowired Environment env;

    @Override
    public void send(com.example.model.Message message, String recipient) throws MailServiceException {
        try {
            Gmail gmailService = gmailServiceFactoryBean.getGmailService();
            MimeMessage mimeMessage = createMimeMessage(message, recipient);
            Message gMessage = createMessageWithEmail(mimeMessage);
            gmailService.users().messages().send("me", gMessage).execute();
        } catch (MessagingException | IOException | GeneralSecurityException e) {
            throw new MailServiceException(e.getMessage(), e.getCause());
        }
    }

    @Override
    public void send(com.example.model.Message message, List<String> recipients) throws MailServiceException {
        for (String recipient : recipients) {
            send(message, recipient);
        }
    }

    private MimeMessage createMimeMessage(com.example.model.Message message, String recipient) throws MessagingException {
        Session session = Session.getDefaultInstance(new Properties());

        MimeMessage email = new MimeMessage(session);
        InternetAddress toAddress = new InternetAddress(recipient);
        InternetAddress fromAddress = new InternetAddress(env.getProperty("gmail.api.service.account.email"));

        email.setFrom(fromAddress);
        email.addRecipient(RecipientType.TO, toAddress);
        email.setSubject(message.getTitle());
        email.setText(message.getContent(), env.getProperty("application.encoding"));

        return email;
    }

    private Message createMessageWithEmail(MimeMessage email) throws MessagingException, IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        email.writeTo(baos);
        return new Message().setRaw(Base64.encodeBase64URLSafeString(baos.toByteArray()));
    }
}

When I execute the method send(Message message, String recipient) of class MailServiceImpl, I get the following response:

400 Bad Request
{
  "code" : 400,
  "errors" : [ {
    "domain" : "global",
    "message" : "Bad Request",
    "reason" : "failedPrecondition"
  } ],
  "message" : "Bad Request"
}

Does anyone know what's wrong?

2 Answers

Answers 1

For GMail API to work, you have to "Delegate domain-wide authority to the service account" within your Google Apps account.

A service account doesn't represent a human Google account. You also can't delegate authority to the whole Google domain (***@gmail.com).
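Once domain-wide delegation is configured, the factory class from the question would impersonate a real mailbox by uncommenting its setServiceAccountUser call; roughly (the address is a placeholder for a user in your domain):

Credential credential = new GoogleCredential.Builder()
        // ... same builder calls as in the question ...
        .setServiceAccountUser("user@yourdomain.com") // the mailbox to impersonate
        .build();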

The other way out could be OAuth 2.0 for Web Server Applications or the JavaMail API.

For more do check: GMail REST API: Using Google Credentials Without Impersonate

Answers 2

Check if you have enabled Gmail to send mails using third-party applications.

Go to My Account -> Sign-in & Security -> Connected Apps, then scroll to the bottom of the page to find Less secure apps and change it to on. Hope this will work!


Sunday, April 17, 2016

Jira: Display Story Points in pending Sprints also


In JIRA v7.0.10

When I'm on the backlog screen, I can see the estimated story points of the started sprint, even if I toggle it (green circles). But we also have some planned sprints, where the story points are not visible when the given sprint is toggled (red circles).

How can Jira be configured to show the story points on sprints that have not started as well?

[Screenshot: backlog view with story points shown on the active sprint (green circles) but missing on planned sprints (red circles)]

1 Answers

Answers 1

One solution that came to my mind: you can enable the parallel sprints feature, which allows starting more than one sprint at a time. With this feature you will be able to see the story points in all sprints. To enable parallel sprints, log in as admin, go to JIRA Agile Labs and enable the feature. You can find more information at this link.


Thursday, March 31, 2016

Hadoop: …be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation


I'm getting the following error when attempting to write to HDFS as part of my multi-threaded application:

could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation. 

I've tried the top-rated answer here around reformatting but this doesn't work for me: HDFS error: could only be replicated to 0 nodes, instead of 1

What is happening is this:

  1. My application consists of 2 threads, each configured with its own Spring Data PartitionTextFileWriter
  2. Thread 1 is the first to process data and this can successfully write to HDFS
  3. However, once Thread 2 starts to process data I get this error when it attempts to flush to a file

Threads 1 and 2 will not be writing to the same file, although they do share a parent directory at the root of my directory tree.

There are no problems with disk space on my server.

I also see this in my NameNode logs, but I'm not sure what it means:

2016-03-15 11:23:12,149 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2016-03-15 11:23:12,150 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=1, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2016-03-15 11:23:12,150 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2016-03-15 11:23:12,151 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.104.247.78:52004 Call#61 Retry#0
java.io.IOException: File /metrics/abc/myfile could only be replicated to 0 nodes instead of
[2016-03-15 13:34:16,663] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.GroupMetadataManager)

What could be the cause of this error?

Thanks

1 Answers

Answers 1

This error comes from the block replication system of HDFS, since it could not manage to make any copies of a specific block of the file in question. Common reasons for that:

  1. Only a NameNode instance is running, and it's not in safe mode
  2. There are no DataNode instances up and running, or some are dead. (Check the servers)
  3. NameNode and DataNode instances are both running, but they cannot communicate with each other, meaning there is a connectivity issue between the DataNode and NameNode instances.
  4. Running DataNode instances are not able to talk to the server because of some Hadoop-related networking issue (check logs that include the DataNode info)
  5. There is no hard disk space in the configured data directories for the DataNode instances, or the DataNode instances have run out of space. (Check dfs.data.dir; delete old files if any)
  6. The reserved space for DataNode instances in dfs.datanode.du.reserved is more than the free space, which makes the DataNode instances report that there is not enough free space.
  7. There are not enough threads for the DataNode instances (check the DataNode logs and the dfs.datanode.handler.count value)
  8. Make sure dfs.data.transfer.protection is not equal to "authentication" and dfs.encrypt.data.transfer is equal to true.

Also please:

  • Verify the status of the NameNode and DataNode services and check the related logs
  • Verify whether core-site.xml has the correct fs.defaultFS value and hdfs-site.xml has a valid value
  • Verify that hdfs-site.xml has dfs.namenode.http-address.. set for all NameNode instances in case of a PHD HA configuration
  • Verify that the permissions on the directories are correct
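As a quick health check for the first few points, these standard HDFS commands summarize DataNode liveness and safe-mode state:

hdfs dfsadmin -report        # lists live/dead DataNodes with capacity and usage
hdfs dfsadmin -safemode get  # shows whether the NameNode is in safe mode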

Ref: https://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo

Ref: https://support.pivotal.io/hc/en-us/articles/201846688-HDFS-reports-Configured-Capacity-0-0-B-for-datanode

Also, please check: Writing to HDFS from Java, getting "could only be replicated to 0 nodes instead of minReplication"
