Showing posts with label logging. Show all posts

Tuesday, July 17, 2018

Fluentd error Cannot open connection to elasticsearch cluster

Leave a Comment

I have set up Elasticsearch and Kibana on my Windows laptop and both are working fine. I can access Elasticsearch at 127.0.0.1:9200 and Kibana at http://localhost:5601. I have installed fluentd on my Ubuntu machine, and below is the configuration file I am using:

<source>
  @type tail
  path /home/user/log.json
  pos_file /home/user/log.json.pos
  format json
  time_format %Y-%m-%d %H:%M%:%S
  tag logger
</source>

<match *logger*>
  @type elasticsearch
  hosts 127.0.0.1:9200
  index_name device
  type_name randnum
  id_key Count
</match>

When I start fluentd, which is monitoring a log file, I get the error below:

2018-06-16 15:36:56 +0000 [warn]: #0 got unrecoverable error in primary and no secondary error_class=Fluent::Plugin::ElasticsearchOutput::ConnectionFailure error="Can not reach Elasticsearch cluster ({:host=>\"127.0.0.1\", :port=>9200, :scheme=>\"http\"})!"
2018-06-16 15:36:56 +0000 [warn]: #0 bad chunk is moved to /tmp/fluent/backup/worker0/object_1152bdc/56ec41a04c21dcb239184388c1f0c1ec.log
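A quick way to rule out basic reachability is to test the TCP connection from the machine where fluentd runs (host and port below are taken from the config above; this is just a debugging sketch, not part of the fluentd setup):

```python
import socket

def can_connect(host, port, timeout=3):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this on the machine where fluentd runs; if it prints False,
# fluentd cannot reach Elasticsearch at that address either.
print(can_connect("127.0.0.1", 9200))
```

If this prints False on the fluentd host, the problem is network-level (wrong address, VM networking, firewall), not fluentd configuration.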

Can anyone please help me with this?

Thanks

0 Answers

Read More

Tuesday, February 13, 2018

How to debug Azure swapping process (sometimes bringing site down)

Leave a Comment

We have a pretty large project that is running on Azure. For some reason swap times became really slow recently, like at least 10 minutes.

Sometimes during the swap the site becomes super slow, to the point that it doesn't respond for minutes. Other times the swap just doesn't work for one reason or another.

We are using initializationPage to warm up the most specific pages, but it doesn't seem to help.

Question: Is it possible to see what's going on during the swap? I'm trying to debug why it's so slow. Is there any log where I can see why it's stuck, and on what?

We can't deploy emergency fixes without bringing the whole site down, and sometimes the whole site goes down anyway.

Any help debugging swapping problems would be greatly appreciated.

Update

I found the following in 'Activity log' on the Azure Portal, but I still can't find any details or any hint what is going on exactly.


So: The resource operation completed with terminal provisioning state 'Failed'.

Where can I find details? It really annoys me that I have to buy Azure Developer support while I'm already spending hundreds of euros per month on something that seems broken, or at least very uninformative about what is going wrong.

1 Answer

Answers 1

So: The resource operation completed with terminal provisioning state 'Failed'.

Where can I find details?

Microsoft has a few things that may help you.

You can view the operations for a deployment through the Azure portal. You may be most interested in viewing the operations when you have received an error during deployment so this article focuses on viewing operations that have failed. The portal provides an interface that enables you to easily find the errors and determine potential fixes.

"View deployment operations with Azure Resource Manager" is an article directly from Microsoft with several steps to follow.

I hope this helps.

Read More

Monday, January 8, 2018

How to log file contents in request body of a multipart/form-data request

Leave a Comment

I am trying to log all requests in my ASP.NET Web API project to a text file. I am using the DelegatingHandler feature to implement the logging mechanism in my application; below is the code snippet for that:

public class MyAPILogHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Captures all properties from the request.
        var apiLogEntry = CreateApiLogEntryWithRequestData(request);
        if (request.Content != null)
        {
            await request.Content.ReadAsStringAsync()
                .ContinueWith(task =>
                {
                    apiLogEntry.RequestContentBody = task.Result;
                }, cancellationToken);
        }

        return await base.SendAsync(request, cancellationToken)
            .ContinueWith(task =>
            {
                var response = task.Result;

                // Update the API log entry with response info
                apiLogEntry.ResponseStatusCode = (int)response.StatusCode;
                apiLogEntry.ResponseTimestamp = DateTime.Now;

                if (response.Content != null)
                {
                    apiLogEntry.ResponseContentBody = response.Content.ReadAsStringAsync().Result;
                    apiLogEntry.ResponseContentType = response.Content.Headers.ContentType.MediaType;
                    apiLogEntry.ResponseHeaders = SerializeHeaders(response.Content.Headers);
                }

                var logger = new LogManager();
                logger.Log(new LogMessage()
                {
                    Message = PrepareLogMessage(apiLogEntry),
                    LogTo = LogSource.File
                });

                return response;
            }, cancellationToken);
    }
}

The above implementation is working as expected and logs all required request/response information to the file.

But when we make a multipart/form-data POST API call with images attached, the log file becomes huge after logging the request, because all the image/binary content is converted to a string and written to the text file. Please find the log file content below:

Body:

----------------------------079603462429865781513947
Content-Disposition: form-data; name="batchid"

22649EEE-3994-4225-AF73-D9A6B659CAE3
----------------------------079603462429865781513947
Content-Disposition: form-data; name="files"; filename="d.png"
Content-Type: image/png

PNG IHDR í %v ¸ sRGB ®Îé gAMA ±üa pHYs à ÃÇo¨d ÿ¥IDATx^ìýX]K¶( ·îsß»ß÷þï{O÷iÛ Á2âîîîÁe¹âîî,<@ Á$÷w_ÈZó5$Dwvv×}
----------------------------4334344396037865656556781513947
Content-Disposition: form-data; name="files"; filename="m.png"
Content-Type: image/png

PNG IHDR í %v ¸ sRGB ®Îé gAMA ±üa pHYs à ÃÇo¨d ÿ¥IDATx^ìýX]K¶( ·îsß»ß÷þï{O÷iÛ Á2âîîîÁe¹âîî,<@ Á$÷w_ÈZó5$Dwvv×}

I don't want to log the binary content of the request body; it would be sufficient to log only the multipart headers of the request body, like:

    ----------------------------079603462429865781513947
    Content-Disposition: form-data; name="batchid"

    22649EEE-3994-4225-AF73-D9A6B659CAE3
    ----------------------------079603462429865781513947
    Content-Disposition: form-data; name="files"; filename="d.png"
    Content-Type: image/png

    ----------------------------4334344396037865656556781513947
    Content-Disposition: form-data; name="files"; filename="m.png"
    Content-Type: image/png

Can you please suggest how to prevent logging the binary content of the request body while still logging the rest of it?

2 Answers

Answers 1

From what I gather, you are implementing something similar to this approach. When uploading a file (i.e., a request of type multipart/form-data), the actual file content always begins after the "Content-Type: {ContentTypeValue}\r\n\r\n" sequence, and the next header begins with a "\r\n--" sequence (as illustrated in your logs). You can find more about raw request data parsing at ReferenceSource. So you can strip everything about the file (if it exists), for example via a RegEx:

Content-Type: {ContentTypeOrOctetStream}\r\n\r\n{FileContentBytesToRemove}\r\n

using System.Text.RegularExpressions;
...
string StripRawFileContentIfExists(string input)
{
    if (input.IndexOf("Content-Type") == -1)
        return input;
    string regExPattern = "(?<ContentTypeGroup>Content-Type: .*?\\r\\n\\r\\n)(?<FileRawContentGroup>.*?)(?<NextHeaderBeginGroup>\\r\\n--)";
    return Regex.Replace(input, regExPattern, me => me.Groups["ContentTypeGroup"].Value + me.Groups["NextHeaderBeginGroup"].Value);
}
...
//apiLogEntry.RequestContentBody = task.Result;
apiLogEntry.RequestContentBody = StripRawFileContentIfExists(task.Result);
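The same stripping idea can be sketched in Python for quick experimentation outside the C# pipeline (the groups mirror the pattern above; the sample body is made up):

```python
import re

# Keep the Content-Type header block and the boundary that begins the next
# part; drop the raw file bytes in between. DOTALL lets the lazy group
# cross newlines inside binary content.
PATTERN = re.compile(r"(Content-Type: .*?\r\n\r\n)(.*?)(\r\n--)", re.DOTALL)

def strip_raw_file_content(body):
    if "Content-Type" not in body:
        return body
    return PATTERN.sub(lambda m: m.group(1) + m.group(3), body)

sample = (
    'Content-Disposition: form-data; name="files"; filename="d.png"\r\n'
    "Content-Type: image/png\r\n\r\n"
    "<raw PNG bytes>\r\n"
    "--boundary--"
)
stripped = strip_raw_file_content(sample)
print(stripped)
```

The part headers survive; only the bytes between the blank line and the next boundary are dropped.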

Answers 2

I'd suggest keeping huge content separate from the log. As you encountered, it screws everything up in the log and, to some extent, disables the log's usefulness.

I'd suggest organizing that huge content in the file system, like this:

---request-a/
  |--request-a-body-multi-part1.txt
  |--request-a-body-multi-part2.txt

and just maintain a link in your log that references this file system path.
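A minimal sketch of that idea (names and paths are made up for illustration):

```python
import hashlib
import tempfile
from pathlib import Path

def log_body_reference(body, store_dir):
    """Write the raw body to its own file; return a short line for the log."""
    digest = hashlib.sha256(body).hexdigest()[:16]
    path = Path(store_dir) / ("request-%s.bin" % digest)
    path.write_bytes(body)
    # Only this pointer goes into the log file, never the payload itself.
    return "[request body stored at %s]" % path

store = tempfile.mkdtemp()
ref = log_body_reference(b"<megabytes of image bytes>", store)
print(ref)
```

The log stays small and greppable, and the full payload is still retrievable by following the path.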

Hope it helps.

Read More

Monday, August 7, 2017

Java Console Pushes Input Text When Logging Occurs

Leave a Comment

Through MobaXterm's SSH feature, I'm running a Java application on a remote Linux server. A problem arises when I type into the terminal (to provide user input read via Scanner) while logging occurs: the text I'm typing is automatically pushed into the logging output whenever a print statement happens.

Clarifying example:

  1. I manually type "MY_INPUT_TO_SET_SOME_VARIABLE 50" into the console (and never press ENTER).


  2. Some logging on the server occurs and automatically "sends" the manually typed "MY_INPUT_TO_SET_SOME_VARIABLE 50" into the display area.


    (50 ended up appended to the 09:08 log line even though I never pressed Enter.)

The desired behavior is to let the user simply type text in the terminal's input area (or somewhere reasonable) until the Enter key is pressed; the typed text should not be pushed along with logged or printed statements. I looked in the terminal settings and wasn't able to find anything to modify this behavior.

1 Answer

Answers 1

As others already mentioned in the comment section, there is not much you can do about that behaviour.

However, you usually don't want logging on the tty you're working with.

If you have root rights on the system you connect to, try to suppress the log messages on the console and redirect them to a logfile, unless there is a good reason not to. The method differs depending on what is sending the messages.

Another possibility is to start a screen session in your terminal to open a new tty. For ease of use I would connect directly into a screen session:

ssh -t user@server /usr/bin/screen 

If you create a .screenrc file in the home directory of the user you connect as, put

startup_message off 

in it if you don't like the screen startup message. You can even start your console app with it, so that the screen session ends when you stop your app.

ssh -t user@server /usr/bin/screen your_start_command_here 

Screen has more features, like naming a session, reattaching to a session, etc. See the manual for further details.

(The screen solution apparently only works if the log messages on the screen are not produced by your application. In that case, configure your logger so that it does not log to stdout.)
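That last point, sketched in Python (the app in the question is Java, but the principle is identical: attach only a file handler and nothing that writes to the terminal; names and paths are made up):

```python
import logging
import os
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), "app.log")

logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)
logger.propagate = False  # keep records away from any root/console handlers
# Attach only a FileHandler -- nothing goes to the terminal, so
# typed-but-unsubmitted input is never disturbed by log output.
logger.addHandler(logging.FileHandler(log_path))

logger.info("this line goes to the file, not the tty")
```

In Java the equivalent would be a logger configuration with only a file appender and no console appender.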

Read More

Friday, August 4, 2017

If passing a file writable stream to the Console constructor in Node.js, is writing to it asynchronous?

Leave a Comment

I've read the docs about the Console object and "A note on process I/O", but can't figure out whether the following results in synchronous or asynchronous operations:

const out = fs.createWriteStream('./out.log')
const logger = new Console(out)

logger.log('foo')

I'm curious how this behaves, especially on a *nix system, though I wouldn't expect it to act differently on Windows. I'm asking because I built a logger that leverages the Console object, and I don't want the logger to block when writing logs to files in production.

2 Answers

Answers 1

tl;dr: According to Node's official documentation, what you are doing here is synchronous, because you are writing to a file.


Writes may be synchronous depending on the what the stream is connected to and whether the system is Windows or Unix:

  • Files: synchronous on Windows and Linux
  • TTYs (Terminals): asynchronous on Windows, synchronous on Unix
  • Pipes (and sockets): synchronous on Windows, asynchronous on Unix

Warning: I strongly recommend not using these synchronous actions on production services, because synchronous writes block the event loop until the write has completed. This can be a serious drawback when doing production logging.
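The question is about Node, but the blocking concern is general. As a point of comparison (not part of the Node docs), the usual workaround in Python's standard library is to hand records to a queue and let a background thread do the file writes:

```python
import logging
import logging.handlers
import os
import queue
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), "prod.log")

q = queue.Queue(-1)
# The application-facing handler only enqueues records (cheap, non-blocking);
# a background listener thread performs the actual file I/O.
listener = logging.handlers.QueueListener(q, logging.FileHandler(log_path))

logger = logging.getLogger("prod")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(logging.handlers.QueueHandler(q))

listener.start()
logger.info("logged without blocking the caller")
listener.stop()  # drains the queue and joins the background thread
```

In Node the analogous move is simply to let the stream's own asynchronous write path do the work rather than forcing synchronous writes.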

Reference: Node.js Official Documentations / A note on process I/O

Answers 2

It will be asynchronous.

Internally, the Console class maintains a callback, _stdoutErrorHandler, that is triggered after a write operation completes to check for errors. We can use it to test for asynchronicity.

const fs = require('fs');
const { Console } = require('console');

const str = new Array(100).fill('').map(() => 'o'.repeat(1000 * 1000)).join('');
const out = fs.createWriteStream('./o.txt');
const logger = new Console(out);

logger._stdoutErrorHandler = () => { console.log('written'); };

logger.log(str);
console.log('hey');

You'll see that 'hey' gets printed before 'written'.

The note on process I/O applies to process.stdin and process.stdout, which are special streams. When they point to files, as in the following:

$ node someCode.js > file.txt 

... in Unix the write operation will be synchronous. This is handled in the Node.js source. In that case, the process.stdout stream is connected to a file and not the usual Unix file descriptor fd 1.

Read More

Monday, May 29, 2017

Syslog not forwarding remote messages

Leave a Comment

I configured /etc/syslog.conf with the configuration below:

*.* @10.10.10.2:514
*.* @@10.10.10.2:514

and logged through the code below:

openlog("Test-Msg", LOG_PID, LOG_LOCAL0);
for (int i = 0; i < 10; i++)
{
    syslog(LOG_ALERT, "My msg %d", i);
    std::cout << "-------------Writing Syslog " << i << "\n";
}

closelog();

but it's not forwarding to the remote server. Instead, it creates files named "@10.10.10.2:514" and "@@10.10.10.2:514" and logs all the messages there.

I tested with Wireshark; no messages are forwarded to the remote system.

I am using the Yocto platform and the BusyBox 1.22 syslog implementation.

Update

In Yocto I saw one more configuration file, /etc/syslog-startup.conf, where I configured:

DESTINATION=remote      # log destinations (buffer file remote)
REMOTE=10.10.10.2:514   # where to log (syslog remote)

Now it has started forwarding all the messages, but as per the Linux manuals, a syslog conf must support filters like *.=alert @<host:port>. If I have to use the above configuration, how can I apply the filters?

2 Answers

Answers 1

By default, Yocto-based systems use BusyBox to provide minimal versions of many basic tools; syslog is one of those tools. This is a quote from the BusyBox documentation:

Note that this version of syslogd ignores /etc/syslog.conf.

To get full syslog functionality you'd have to include a more complete implementation in your image. There are several options in meta-openembedded; rsyslog in meta-oe is probably a good default choice.
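Once a full implementation such as rsyslog is on the image, the selector syntax from the question should work; a sketch of /etc/rsyslog.conf (host address taken from the question, untested):

```
# forward only alert-priority messages, first via UDP, then via TCP
*.=alert  @10.10.10.2:514
*.=alert  @@10.10.10.2:514
```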

Answers 2

I would first use logger (a tool included in BusyBox) to ensure that your syslog configuration is correct. If the messages are sent correctly by this method, then we can investigate the code.

logger [OPTIONS] [MESSAGE]

Write MESSAGE to the system log. If MESSAGE is omitted, log stdin.

Options:
        -s      Log to stderr as well as the system log
        -t TAG  Log using the specified tag (defaults to user name)
        -p PRIO Priority (numeric or facility.level pair)
Read More

Wednesday, May 24, 2017

best way to save nginx request as a file?

Leave a Comment

I am looking for a solution to save data sent via HTTP (e.g., as a POST) as quickly as possible (with the lowest overhead) via nginx (v1.2.9). I tried the following nginx configuration, but am not seeing any files written in the directory:

server {
  listen 9199;
  location /saveme {
    client_body_in_file_only on;
    client_body_temp_path /tmp/bodies;
  }
}

What am I doing wrong? And/or is there a better way to accomplish this? (The data that is written should ideally be one file per request, and it does not matter if it is fairly "raw" in nature. Post-processing of the files will be done by a separate process via a queue.)

2 Answers

Answers 1

This question has already been answered here:

Basically, you need to combine log_format and fastcgi_pass. You can then use the access_log directive, for example, to specify where the saved variable should be dumped.

location = /saveme {
  log_format postdata $request_body;
  access_log /var/log/nginx/postdata.log postdata;
  fastcgi_pass php_cgi;
}

It could also work with your method, but I think you're missing client_body_buffer_size and client_max_body_size.
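One caveat worth checking: in stock nginx, log_format is only allowed in the http context, not inside a location block, so the snippet above may need to be split roughly like this (a sketch, untested):

```
http {
    log_format postdata $request_body;

    server {
        location = /saveme {
            access_log /var/log/nginx/postdata.log postdata;
            fastcgi_pass php_cgi;
        }
    }
}
```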

Answers 2

Do you mean saving the HTTP POST data to disk rather than memory when someone accesses and requests a file? I suggest using proxy_cache_path and proxy_cache. The proxy_cache_path directive sets the path and configuration of the cache, and the proxy_cache directive activates it.

proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
                 inactive=60m use_temp_path=off;

server {
    ...
    location / {
        proxy_cache my_cache;
        proxy_pass http://my_upstream;
    }
}
  • The local disk directory for the cache is called /path/to/cache
  • levels sets up a two‑level directory hierarchy under /path/to/cache/
  • keys_zone sets up a shared memory zone for storing the cache keys and metadata such as usage timers
  • max_size sets the upper limit of the size of the cache
  • inactive specifies how long an item can remain in the cache without being accessed

    the proxy_cache directive activates caching of all content that matches the URL of the parent location block (in the example, /). You can also include the proxy_cache directive in a server block; it applies to all location blocks for the server that don’t have their own proxy_cache directive.

Read More

Tuesday, January 10, 2017

How to get event details in middleware for socket.io

Leave a Comment

I am trying to log the event name and parameters for each event on my Node server. For this purpose I used:

io.use(function(socket, next){
  // how to get event name out of socket.
});

Now I'm stuck trying to get the event name and arguments. This looks like a common demand from API developers, so I'm pretty sure there must be some way in the library to get it. I have tried reading the docs and source, but I'm not able to find it.

1 Answer

Answers 1

Socket events need to be handled properly; if an event is not handled, there will be no response.

var io = require('socket.io')(server);
var sessionMiddleWare = session({
    secret: 'secret key',
    resave: true,
    saveUninitialized: true,
    cookie: { path: '/', httpOnly: true, maxAge: 300000 },
    rolling: true
});

app.use(sessionMiddleWare);

io.use(function(socket, next) {
    sessionMiddleWare(socket.request, socket.request.res, next);
});

io.on('connection', function(socket) {  // On socket connection.
    // Inside this you can handle different events.
    // Event name and parameters can be found in the socket variable.
    console.log(socket.id);   // prints the id sent from the client
    console.log(socket.data); // prints the data sent from the client

    // example event
    socket.on('subscribe', function(room) {  // Event sample.
        console.log('joining room', room);
        socket.room = room;
        socket.join(room);
    });
});

Hope this helps.

Read More

Tuesday, August 23, 2016

Customize logging for external/third-party libs

Leave a Comment

I followed the advice of the Django docs and use logging like this:

import logging

logger = logging.getLogger(__name__)

def today(...):
    logger.info('Sun is shining, the weather is sweet')

With my current configuration, the output looks like this:

2016-08-11 14:54:06 mylib.foo.today: INFO Sun is shining, the weather is sweet 

Unfortunately, some libraries which I can't modify use logging like this:

import logging

def third_party(...):
    logging.info('Make you want to move your dancing feet')

The output unfortunately looks like this:

2016-08-09 08:28:04 root.third_party: INFO Make you want to move your dancing feet 

I want to see this:

2016-08-09 08:28:04 other_lib.some_file.third_party: INFO Make you want to move your dancing feet 

Difference:

root.third_party ==> other_lib.some_file.third_party

I want to see the long version (not root) when code uses logging.info() instead of logger.info().

Update

This is not a duplicate of "Elegant setup of Python logging in Django", since its solution is:

Start of quote

In each module, I define a logger using

logger = logging.getLogger(__name__) 

End of quote.

No, I won't modify third-party code which uses logging.info() instead of logger.info().

Follow Up Question

Avoid logger = logging.getLogger(__name__) without losing the ability to filter logs

2 Answers

Answers 1

As Wayne Werner suggested, I would use the LogRecord format options. Here's an example.

File 1: external_module

import logging

def third_party():
    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger()

    logger.info("Hello from %s!" % __name__)

File 2: main

import external_module
import logging

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(module)s.%(funcName)s: %(levelname)s %(message)s')
logger = logging.getLogger(__name__)

def cmd():
    logger.info("Hello from %s!" % __name__)
    external_module.third_party()

cmd()

Output:

2016-08-11 09:18:17,993 main.cmd: INFO Hello from __main__!
2016-08-11 09:18:17,993 external_module.third_party(): INFO Hello from external_module!

Answers 2

That's because they're using the root logger, which is what you get by default when you just do:

import logging

logging.info("Hi! I'm the root logger!")

If you want to do something different, you have two (or three) options. The best would be to use the LogRecord format options. Alternatively, you could monkey-patch the libraries that you're using, e.g.:

import logging
import mod_with_lazy_logging

mod_with_lazy_logging.logger = logging.getLogger(mod_with_lazy_logging.__name__)

Or you could do something gnarly by parsing the AST and rewriting their bits of logging code. But don't do that.

Read More

Saturday, June 11, 2016

Piping timechart into streamstats

Leave a Comment

We have a Splunk index for certain events. Events are categorized by event type.

I need to find fixed-size (say, 5 minute) windows where the frequency (events per second) of any event drops or rises by more than a preset percentage (say, 50%) compared to a preceding window.

I unsuccessfully tried something like this:

index=index_of_events
| eval cnt=1
| timechart span=20s limit=40 per_second(cnt) as ev by ev_type useother=f usenull=f
| streamstats window=40 global=false first(ev) as start last(ev) as end by ev_type
| eval diff=abs(start-end)
| eval max_val=max(start, end)
| where diff > 0 AND max_val > 0
| eval prc=100*diff/max_val
| where prc > 50

Is this approach doable? Can I pipe timechart directly into streamstats, or do I need something like untable between them?

Is there a better way to accomplish such task?

If possible, I would also like to exclude low-frequency events (I do not care if 2/sec becomes 1/sec).

0 Answers

Read More

Wednesday, April 20, 2016

How to generate a JSON log from nginx?

Leave a Comment

I'm trying to generate a JSON log from nginx.

I'm aware of solutions like this one, but some of the fields I want to log include user-generated input (like HTTP headers) which needs to be escaped properly.

I'm aware of the nginx changelog entries from October 2011 and May 2008 that say:

*) Change: now the 0x7F-0x1F characters are escaped as \xXX in an
   access_log.

*) Change: now the 0x00-0x1F, '"' and '\' characters are escaped as \xXX
   in an access_log.

but this still doesn't help, since \xXX is invalid in a JSON string.

I've also looked at the HttpSetMiscModule module, which has a set_quote_json_str directive, but this just seems to add \x22 around the strings, which doesn't help.

Any ideas for other solutions to log in JSON format from nginx?

2 Answers

Answers 1

You can try https://github.com/jiaz/nginx-http-json-log - an additional module for nginx.

Answers 2

You can try to use:

PS: The if parameter (1.7.0) enables conditional logging. A request will not be logged if the condition evaluates to “0” or an empty string:

map $status $http_referer {
    ~\xXX   0;
    default 1;
}

access_log /path/to/access.log combined if=$http_referer;

It's a good idea to use a tool such as https://github.com/zaach/jsonlint to check your JSON data. You can test the output of your new logging format and make sure it's real, proper JSON.
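For what it's worth, newer nginx versions (1.11.8+) added an escape=json parameter to log_format, which handles exactly this escaping problem; a sketch (the field selection here is arbitrary):

```
http {
    log_format json_log escape=json
        '{"remote_addr":"$remote_addr",'
        '"request":"$request",'
        '"http_user_agent":"$http_user_agent"}';

    access_log /var/log/nginx/access.json json_log;
}
```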

Read More