The latest changes break logging in AWS Lambda functions. In Lambda, all async processing must be finished before you exit the handler, otherwise that processing basically never happens (the process gets frozen). You'll lose log calls if you can't flush the stream before exiting. There's a bunch of detail on this in this Winston issue: winstonjs/winston#1250

Previously, we could just call end and wait for the 'finish' event to flush the stream. See here for an example (from that issue): winstonjs/winston#1250 (comment). This is what winston specifically recommends for flushing the stream: https://github.com/winstonjs/winston#awaiting-logs-to-be-written-in-winston.
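For context, this is roughly the pattern we rely on. It's a minimal sketch assuming winston 3 and the Loggly export from winston-loggly-bulk; the token/subdomain values and the handler itself are just placeholders:

```js
const winston = require('winston');
const { Loggly } = require('winston-loggly-bulk');

// Placeholder transport config; real values come from our environment.
const logger = winston.createLogger({
  transports: [
    new Loggly({ token: 'TOKEN', subdomain: 'SUBDOMAIN', json: true }),
  ],
});

exports.handler = async (event) => {
  logger.info('handling event');
  // ... actual work ...

  // Flush before returning: Lambda freezes the process once the handler
  // resolves, so any log writes still in flight would otherwise be lost.
  await new Promise((resolve) => {
    logger.on('finish', resolve);
    logger.end();
  });

  return { statusCode: 200 };
};
```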
This no longer works with the most recent changes. Specifically, moving the call to callback() into the callback from node-loggly-bulk breaks this (7a6d534#diff-6aa8c2a6db3f74b4c5896963f56925feR138). I have some details below, from a fair amount of debugging, on exactly what fails, if you're interested.
That move of the callback does make sense to me. It seems like you should wait to call it until node-loggly-bulk has actually finished sending the logs. But there does seem to be a back pressure issue. I'm not sure whether it lies here or in winston, though, so I'm starting here.
BUG DETAILS:
There appears to be a back pressure issue: if a lot of log events get queued up in a short period of time, things go wrong. In my case, we emit ~40 lines of logging in about 20ms and then call logger.end() to flush the stream since the lambda function is ready to exit (rough sketch below). The number of log events appears to exceed the stream's high water mark, so not all of the log messages are queued into the stream right away (I'm not sure where the remaining logs are buffered when there's back pressure; my guess is either here or in winston). The stream's buffer gets flushed and then end is processed, but the remaining log events get pushed into the stream after end. That causes a write-after-end exception to be thrown, which is caught in node-loggly-bulk here: https://github.com/loggly/node-loggly-bulk/blob/master/lib/loggly/client.js#L202.
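A rough sketch of that sequence, assuming the logger from the sketch above (the count and timing are just what our function happens to produce):

```js
// ~40 log calls land on the Loggly transport faster than it can ship them,
// so the writable stream exceeds its high water mark.
for (let i = 0; i < 40; i++) {
  logger.info(`processing step ${i}`);
}

// end() is processed once the currently buffered chunks flush; the writes
// that were held back by back pressure get pushed after 'end', which is
// what produces the "write after end" error.
logger.end();
```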
The exception was actually thrown FROM the call to callback just above that line, on line 196. The stream was flushing the buffer that had accumulated more events, which is what triggers the write after end. The callback then gets called again in the catch block, and that causes a second exception (which is what ultimately kills execution), because the callback inside the stream is cleared after the error state. So you get a very unclear "cb is not a function" error from deep inside the streams code. The actual stack trace looks like this:
/app/node_modules/readable-stream/lib/_stream_writable.js:447
cb(er);
^
TypeError: cb is not a function
at onwriteError (/app/node_modules/readable-stream/lib/_stream_writable.js:447:5)
at onwrite (/app/node_modules/readable-stream/lib/_stream_writable.js:470:11)
at WritableState.onwrite (/app/node_modules/readable-stream/lib/_stream_writable.js:180:5)
at logged (/app/node_modules/winston-loggly-bulk/lib/winston-loggly.js:148:17)
at /app/node_modules/node-loggly-bulk/lib/loggly/client.js:204:9
at Request._callback (/app/node_modules/node-loggly-bulk/lib/loggly/common.js:209:15)
at Request.self.callback (/app/node_modules/request/request.js:185:22)
at Request.emit (events.js:198:13)
at Request.<anonymous> (/app/node_modules/request/request.js:1154:10)
at Request.emit (events.js:198:13)
We can mask the error by adding an error handler on the stream (i.e., logger.on('error', ...), sketched below). That prevents the error from killing the function's result, but we still lose a bunch of logging, which stinks.
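For reference, the masking workaround is just this (a sketch; it keeps the invocation from failing but doesn't recover the dropped logs):

```js
// Swallow the stream error so it doesn't take down the Lambda invocation.
// Log events buffered behind the high water mark are still lost.
logger.on('error', (err) => {
  console.error('logger error (some log messages were likely dropped):', err);
});
```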