
cloud-pine's Introduction

  • 👋 Hi, I'm @metcoder95
  • 📫 How to reach me - Feel free to reach out about any topic, question, support request, and so on.

cloud-pine's People

Contributors

dependabot[bot], kruczyna, metcoder95

Stargazers

10 stargazers

Watchers

3 watchers

Forkers

kruczyna

cloud-pine's Issues

Unhandled 'error' event for the initial example

Describe the bug
The (slightly modified) example from the docs results in an Unhandled 'error' event,
although the logs were successfully sent (!).

To Reproduce

import fs from 'node:fs';
import pino from 'pino';

const CRED_FILE_PATH = './path-to-credentials.json';
const creds = JSON.parse(fs.readFileSync(CRED_FILE_PATH, 'utf-8'));

const logger = pino({
  transport: {
    target: 'cloud-pine',
    options: {
      logName: 'my_cloud-pine',
      cloudLoggingOptions: {
        googleCloudOptions: {
          projectId: creds.project_id,
          credentials: creds,
        },
      },
    },
  },
});

logger.info('Hello from pino');
logger.error({ andSomedata: 1 }, 'Pino Error');

Expected behavior
No errors, graceful exit

Screenshots

$ node pino.js
node:events:491
      throw er; // Unhandled 'error' event
      ^

Error: end() took too long (10s)
    at end (C:\Users\...\test-google-logs\node_modules\thread-stream\index.js:406:15)
    at ThreadStream.end (C:\Users\...\test-google-logs\node_modules\thread-stream\index.js:258:5)
    at autoEnd (C:\Users\...\test-google-logs\node_modules\pino\lib\transport.js:75:10)
    at process.wrap (C:\Users\...\test-google-logs\node_modules\on-exit-leak-free\index.js:10:7)
    at Object.onceWrapper (node:events:628:26)
    at process.emit (node:events:513:28)
Emitted 'error' event on ThreadStream instance at:
    at destroy (C:\Users\...\test-google-logs\node_modules\thread-stream\index.js:349:12)
    at end (C:\Users\...\test-google-logs\node_modules\thread-stream\index.js:415:5)
    at ThreadStream.end (C:\Users\...\test-google-logs\node_modules\thread-stream\index.js:258:5)
    [... lines matching original stack trace ...]
    at process.emit (node:events:513:28)

Desktop (please complete the following information):

  • OS: Windows 10
  • Node.js: v18.16.0
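
A possible mitigation (my assumption, not a confirmed fix): create the transport explicitly with pino.transport() and attach an 'error' handler. ThreadStream is an EventEmitter, so a registered listener prevents Node.js from throwing when end() times out during shutdown.

import pino from 'pino';

// Sketch of a possible mitigation, assuming the same credentials setup
// as in the repro above. pino.transport() returns a ThreadStream; with
// an 'error' listener attached, the end() timeout during exit is
// reported instead of crashing the process.
const transport = pino.transport({
  target: 'cloud-pine',
  options: { logName: 'my_cloud-pine' },
});

transport.on('error', (err) => {
  console.error('cloud-pine transport error:', err);
});

const logger = pino(transport);
logger.info('Hello from pino');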

Add Documentation

The documentation for how to properly use the library is missing.
Adding documentation will help users understand the different functionalities exposed by the library, while also showing examples of how to use it properly in its different modes.
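
Until official docs land, here is a hedged sketch pieced together from the option names that appear in the issues on this page; their exact semantics are assumptions, not documented behaviour.

import pino from 'pino';

// Default mode: entries are shipped to Cloud Logging through its API.
const apiLogger = pino({
  transport: {
    target: 'cloud-pine',
    options: {
      logName: 'my-app',
      cloudLoggingOptions: {
        googleCloudOptions: { projectId: 'my-project' },
      },
    },
  },
});

// Sync mode, as used in the truncation repro below; skipInit and sync
// presumably switch the transport to synchronous structured output.
const syncLogger = pino({
  transport: {
    target: 'cloud-pine',
    options: {
      cloudLoggingOptions: { skipInit: true, sync: true },
    },
  },
});

apiLogger.info('hello via the Cloud Logging API');
syncLogger.info('hello via sync mode');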

Enabling error message truncation

Is your feature request related to a problem? Please describe.

The entire app crashes with a Log entry with size 300.0K exceeds maximum size of 256.0K error. This happens, for example, with Prisma when errors are caught after huge failed SQL queries.

Describe the solution you'd like

Add default error-message truncation and a maxEntrySize param, e.g. as winston's Google Cloud transport supports.
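
For illustration, a hypothetical shape such an option could take (maxEntrySize is not implemented in cloud-pine today; the name and placement are illustrative only):

import pino from 'pino';

// Hypothetical: if cloud-pine accepted a maxEntrySize option, oversized
// entries could be truncated instead of crashing the app.
const logger = pino({
  transport: {
    target: 'cloud-pine',
    options: {
      logName: 'my-app',
      maxEntrySize: 256 * 1024, // truncate anything beyond GCP's 256 KiB quota
    },
  },
});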

Describe alternatives you've considered

Truncating the error message in a NestJS logger class extension:

import { Injectable } from '@nestjs/common';
import { Logger } from 'nestjs-pino';

@Injectable()
export class LoggerService extends Logger {
  error(message: any, ...context: any[]): void {
    super.error(this.limitErrorMessageTo256KB(message), ...context);
  }

  // log, warn, etc..

  /**
   * Temporary fix for the following issue:
   * @see https://github.com/metcoder95/cloud-pine/issues/34
   *
   * Google Cloud Platform doesn't accept more than 256KB per log entry,
   * see: https://cloud.google.com/logging/quotas
   */
  private limitErrorMessageTo256KB(error: unknown) {
    if (!(error instanceof Error)) {
      return error;
    }

    const MAX_BYTES = 50_000; // Conservative 50 KB limit, well under GCP's 256 KB quota

    const encoder = new TextEncoder();
    const messageBytes = encoder.encode(error.message);
    const stackBytes = encoder.encode(error.stack);

    if (messageBytes.length + stackBytes.length <= MAX_BYTES) {
      return error;
    } else {
      const decoder = new TextDecoder();
      const truncatedErrorMessage =
        decoder.decode(messageBytes.slice(0, MAX_BYTES)) + '...';

      error.message = truncatedErrorMessage;
      error.stack = undefined;
      return error;
    }
  }
}
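
For completeness, a hypothetical wiring of that service (module layout and import paths are illustrative, not from the original post):

import { Module } from '@nestjs/common';
import { LoggerModule } from 'nestjs-pino';
import { LoggerService } from './logger.service';

// Registers nestjs-pino and exposes the truncating LoggerService
// application-wide; file paths here are assumptions.
@Module({
  imports: [LoggerModule.forRoot()],
  providers: [LoggerService],
  exports: [LoggerService],
})
export class AppModule {}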

Way to reproduce

import Pino from 'pino';

const logger = Pino({
  transport: {
    target: 'cloud-pine',
    options: {
      cloudLoggingOptions: {
        skipInit: true,
        sync: true,
      },
    },
  },
});

function generateLargeJSON(sizeInBytes) {
  const characters =
    'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
  const dataSize = sizeInBytes - 11; // account for the {"data":""} wrapper characters

  // Generate random string data for the given size
  let data = '';
  while (data.length < dataSize) {
    const randomIndex = Math.floor(Math.random() * characters.length);
    data += characters[randomIndex];
  }

  // Convert the data string into a JSON object
  const jsonObject = { data };

  return JSON.stringify(jsonObject);
}

// Generate a JSON payload just over the 256 KiB Cloud Logging quota
const jsonSizeInBytes = 260 * 1024; // 260 KiB = 266,240 bytes
const largeJSON = generateLargeJSON(jsonSizeInBytes);

logger.error(largeJSON); // largeJSON is already a JSON string; no need to stringify again

(Just for documentary reasons): Using cloud-pine in a Google Cloud Run service while logging extensively is not a good idea

Describe the bug

Hi. I used cloud-pine in two Google Cloud Run services (because I didn't know better back then). I did some "heavy logging" for debugging in my QA environment, and every time a certain (logging-heavy) function ran, my service crashed. Even 8 CPUs and 32 GB RAM weren't enough. Then I turned cloud-pine off (Google Cloud Run works fine without cloud-pine, because it streams stdout and stderr to Google Cloud Logging anyway) and everything worked like a charm (1 CPU, no more than 150 MB RAM).

I don't know much about the nitty-gritty of cloud-pine. Perhaps it's not your code, but rather the resource-intensive API calls made to send the logs to Google Cloud Logging?

This issue is just for documentary reasons. I don't need this solved. Perhaps it helps you. I just wanted to share this story with you. I think you should know about it.

To Reproduce

Steps to reproduce the behavior:

  1. Start a new Node.js project
  2. Configure pino logger with cloud-pine as target
  3. Do some really heavy logging
  4. Deploy to Cloud Run
  5. Monitor the load
  6. Turn cloud-pine off
  7. Deploy again
  8. Monitor the load
  9. Compare

Here is my logger config WITH cloud-pine activated. Perhaps I did something wrong?

import pino from 'pino';
import dotenv from 'dotenv';

dotenv.config(); // I have to init dotenv here as it isn't initialized yet

function getTransport() {
  if ('LOCAL_LOGGER' in process.env) {
    return {};
  }
  return {
    transport: {
      target: 'cloud-pine',
    },
  };
}

const logger = pino({
  ...getTransport(),
  timestamp: pino.stdTimeFunctions.isoTime,
  redact: {
    paths: ['email', 'password', 'token'],
  },
});

export default logger;

Here is my logger config without cloud-pine:

import pino from 'pino';
import dotenv from 'dotenv';

dotenv.config(); // I have to init dotenv here as it isn't initialized yet

const logger = pino({
  timestamp: pino.stdTimeFunctions.isoTime,
  redact: {
    paths: ['email', 'password', 'token'],
  },
});

export default logger;
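
One caveat when relying on plain stdout: Cloud Logging won't map pino's numeric level field to a severity on its own. Below is a hedged sketch of a level-to-severity mapping based on GCP's structured-logging conventions (the severity and message fields); worth verifying against the current Cloud Run docs.

import pino from 'pino';

// Map pino level labels to Cloud Logging severities so entries are
// filterable in the Cloud Console. No transport involved: Cloud Run
// ingests the JSON lines from stdout.
const severities = {
  trace: 'DEBUG',
  debug: 'DEBUG',
  info: 'INFO',
  warn: 'WARNING',
  error: 'ERROR',
  fatal: 'CRITICAL',
};

const logger = pino({
  messageKey: 'message', // Cloud Logging reads "message" as the entry text
  timestamp: pino.stdTimeFunctions.isoTime,
  formatters: {
    level(label) {
      return { severity: severities[label] ?? 'DEFAULT' };
    },
  },
});

logger.info('ingested from stdout by Cloud Logging');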

Environment

  • Google Cloud Run, 1 CPU, 512 MB (but also tried with 8 CPU and 32 GB RAM)
  • Docker Image: node:18-alpine (but also tried node:18)

[Performance] Implement back-pressure

Hey 👋

Thanks for the feedback.

I think that should be doable. So far we have two things here: we can start by extending the options to allow configuring split2 with the maxLength and skipOverflow args, so the behaviour is customizable.

The second is that the current implementation is quite basic and does not implement backpressure, which can be added as a further enhancement.

Originally posted by @metcoder95 in #34 (comment)
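
For reference, a minimal standalone sketch of the split2 options under discussion (this is plain split2 usage, not cloud-pine's actual internals):

import split from 'split2';

// maxLength caps the buffered line size; skipOverflow drops oversized
// lines instead of emitting an error. Both are documented split2 options.
const lines = split(/\r?\n/, JSON.parse, {
  maxLength: 256 * 1024,
  skipOverflow: true,
});

lines.on('data', (entry) => {
  // each parsed log object arrives here; oversized lines were skipped
});

process.stdin.pipe(lines);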
