nanolog's People

Contributors

derekxgl, hellium666, iyengar111, stephanusvictor

nanolog's Issues

gcc 4.8 build

I'm trying to build NanoLog with gcc 4.8 on CentOS 7.3 (64-bit) and it fails. I get two errors:

NanoLog.cpp:357:86: error: size of array ‘padding’ is too large
char padding[256 - sizeof(std::atomic_flag) - sizeof(char) - sizeof(NanoLogLine)];
^
NanoLog.cpp: In constructor ‘nanolog::RingBuffer::RingBuffer(size_t)’:
NanoLog.cpp:371:6: error: static assertion failed: Unexpected size != 256
static_assert(sizeof(Item) == 256, "Unexpected size != 256");

sizeof(NanoLogLine) is 256 on this toolchain, so that seems to be the issue. Do I need a newer gcc to build this?
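The likely culprit: since sizeof(NanoLogLine) already hits 256 (presumably some member type is sized differently under the older toolchain), the padding expression underflows in unsigned size_t arithmetic instead of going negative. A tiny stand-alone illustration of the effect (this is not NanoLog code):

#include <cstdio>
#include <cstddef>

int main()
{
    struct Fake256 { char bytes[256]; };   // stand-in for a 256-byte NanoLogLine
    // 256 - 1 - 1 - 256 wraps around in size_t arithmetic, so the "padding"
    // array would need an enormous number of bytes, not -2 of them.
    std::size_t padding = 256 - sizeof(bool) - sizeof(char) - sizeof(Fake256);
    std::printf("requested padding: %zu bytes\n", padding);
    return 0;
}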

Remove full path from __FILE__ and only keep filename

I feel it would be preferable and cleaner to remove the full path of the file from the log.
Depending on how deeply nested the source tree is, the path can take a lot of space and clutter the log.

Assuming you have seen g3log, it does the same thing there:

   std::string splitFileName(const std::string& str) {
      size_t found;
      found = str.find_last_of("(/\\");
      return str.substr(found + 1);
   }
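A lighter-weight alternative (a sketch only, not nanolog's API; the file_basename name is made up here) avoids the std::string allocation by working directly on the __FILE__ literal:

#include <cstring>

// Return a pointer into the original string, just past the last path separator.
inline const char * file_basename(const char * path)
{
    const char * slash = std::strrchr(path, '/');
    const char * backslash = std::strrchr(path, '\\');
    if (backslash && (!slash || backslash > slash))
        slash = backslash;
    return slash ? slash + 1 : path;
}

// e.g. the logging macro could pass file_basename(__FILE__) instead of __FILE__.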

nanolog probably has a problem with char arrays

Hi,

I have to use an old library that returns information as char arrays, for example a char[9] to represent the time. All char arrays have a trailing '\0', and the array may not be fully used; sometimes it returns a char[9] holding nine '\0' characters. However, when I log the char array with LOG_INFO << time;, I get garbage in the log, especially in debug builds. I temporarily worked around it with LOG_INFO << std::string(time);, which gives the correct result in both debug and release builds, but it slows the logger down.
Could you suggest what could be wrong?
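A guess at the cause, based on how NanoLog overloads operator<<: a char[N] argument may bind to the string-literal overload, which stores only the pointer and defers reading it to the background thread, so a short-lived stack buffer has already changed (or gone out of scope) by the time it is formatted. If that is what is happening, forcing the array to decay to a plain const char* should select the overload that copies the characters into the log line and is cheaper than constructing a std::string:

// time_buf stands in for the char[9] returned by the legacy library (hypothetical name).
char time_buf[9] = "12:34:56";
LOG_INFO << static_cast<const char *>(time_buf);   // decays to const char*; contents are copied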

Thanks,

Unnecessary/unintentional flushing of the log

It seems like you're using std::endl instead of '\n' here, which includes an implicit flush, and then doing an explicit flush a couple of lines below for critical log entries. I'm guessing the use of std::endl was unintentional; otherwise the flushing of the critical entries is entirely pointless, because every single line is already being flushed.
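For illustration only (this is not a patch against the actual writer code), the intended pattern would look something like:

#include <fstream>
#include <string>

// '\n' appends a newline without flushing; the explicit flush is then reserved
// for the log lines that really must reach disk immediately.
void write_line(std::ofstream & os, const std::string & text, bool critical)
{
    os << text << '\n';
    if (critical)
        os.flush();
}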

Try out a linked-list-based data structure

struct LinkedList : public BufferBase
{
    LinkedList() : m_head(nullptr)
    {
    }

    struct Item
    {
        Item(NanoLogLine && logline_) : next(nullptr), logline(std::move(logline_))
        {}
        std::atomic < Item * > next;
        NanoLogLine logline;
    };

    // Producers push onto a lock-free stack (Treiber stack): the newest item becomes the head.
    void push(NanoLogLine && logline) override
    {
        Item * item = new Item(std::move(logline));
        Item * head = m_head.load(std::memory_order_relaxed);
        do
        {
            item->next = head;
        } while (!m_head.compare_exchange_weak(head, item, std::memory_order_release, std::memory_order_relaxed));
    }

    // The consumer detaches the whole stack at once and replays it through
    // m_read_buffer so lines come back out in FIFO order.
    bool try_pop(NanoLogLine & logline) override
    {
        if (!m_read_buffer.empty())
        {
            logline = std::move(m_read_buffer.front());
            m_read_buffer.pop_front();
            return true;
        }
        Item * head = get_current_head();
        while (head != nullptr)
        {
            Item * next = head->next.load(std::memory_order_acquire);
            m_read_buffer.push_front(std::move(head->logline));  // reverses LIFO back into FIFO
            delete head;
            head = next;
        }
        if (m_read_buffer.empty())
            return false;
        else
            return try_pop(logline);
    }

    // Atomically take ownership of the current stack, leaving an empty one behind.
    Item * get_current_head()
    {
        Item * current_head = m_head.load(std::memory_order_acquire);
        while (!m_head.compare_exchange_weak(current_head, nullptr, std::memory_order_release, std::memory_order_relaxed));
        return current_head;
    }

private:
    std::atomic < Item * > m_head;
    std::deque < NanoLogLine > m_read_buffer;
};

Please show also worst case latencies

Sweet library! Nice work.

I found the benchmark lacking in a couple of respects:

  1. It doesn't show how the logger performs under congestion, i.e. when the buffer gets full.
  2. Could you please also add worst-case latencies and not only the average?
    For time-critical operations one cares less about the average than about how the slowest log call behaves (especially when the ring buffer gets full).

Thanks
Kjell

Support for DLL builds on Windows

Is this something that would interest you?
If so, I can make a pull request, or I can paste the code here, as it is only a minor change in the header file.

Console sink?

(I have swapped from Boost.Log to spdlog to g3log, and now I'm using nanolog.)

I like the simple interface of nanolog, but I have lots of code that expects INFO-and-above logs to end up on the console. How do I do this in nanolog?
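nanolog appears to write only to files, so there is no console sink to configure. As a stop-gap one can either `tail -f` the log file, or wrap the macro so INFO-and-above lines are also echoed to stderr. A rough sketch (CONSOLE_LOG_INFO is made up here; LOG_INFO is nanolog's macro, and only types streamable by both nanolog and std::ostream will compile):

#include <iostream>

// Logs through nanolog as usual and also mirrors the message to stderr.
#define CONSOLE_LOG_INFO(expr)          \
    do {                                \
        LOG_INFO << expr;               \
        std::cerr << expr << '\n';      \
    } while (0)

// usage: CONSOLE_LOG_INFO("connected to " << host << ':' << port);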

Please publish benchmark results with more threads (e.g. 30) - spdlog is much faster than nanolog in those cases

For example, I tried with 30 threads:

spdlog:

Thread count: 30
spdlog percentile latency numbers in microseconds (one row per thread)
50th| 75th| 90th| 99th| 99.9th| Worst| Average|
1| 1| 2| 2| 2008| 5363| 7.327570|
1| 1| 2| 3| 2034| 4904| 7.465930|
1| 1| 2| 3| 2121| 4601| 7.421450|
1| 1| 2| 3| 2104| 6542| 7.751890|
1| 1| 2| 3| 2182| 6650| 7.929030|
1| 1| 2| 3| 2069| 5047| 7.685910|
1| 1| 2| 3| 2162| 6604| 7.818460|
1| 1| 2| 2| 2136| 6884| 8.048350|
1| 1| 2| 3| 2122| 5321| 8.257710|
1| 1| 2| 3| 2049| 5449| 7.846350|
1| 1| 2| 3| 2197| 6425| 8.230680|
1| 1| 2| 3| 2153| 6359| 7.982550|
1| 1| 2| 3| 2157| 5757| 8.220220|
1| 1| 2| 3| 2095| 6169| 7.888390|
1| 1| 2| 3| 2186| 7179| 8.334880|
1| 1| 2| 3| 2069| 4141| 8.039100|
1| 1| 2| 3| 2147| 5234| 8.304780|
1| 1| 2| 3| 2095| 5316| 8.281490|
1| 1| 2| 3| 2171| 5917| 8.673500|
1| 1| 2| 3| 2148| 5867| 8.482180|
1| 1| 2| 3| 2166| 5445| 8.655320|
1| 1| 2| 4| 2134| 5746| 8.466010|
1| 1| 2| 3| 2094| 5471| 8.202370|
1| 1| 2| 3| 2061| 5310| 7.812070|
1| 1| 2| 3| 2146| 5571| 8.565020|
1| 1| 2| 10| 2032| 4975| 8.168500|
1| 1| 2| 3| 2162| 5578| 8.285580|
1| 1| 2| 3| 2069| 5720| 8.202470|
1| 1| 2| 3| 2022| 5511| 8.259040|
1| 1| 2| 3| 2011| 4762| 8.241690|

Nanolog:

Thread count: 30
nanolog_guaranteed percentile latency numbers in microseconds (one row per thread)
50th| 75th| 90th| 99th| 99.9th| Worst| Average|
1| 1| 2| 3| 4310| 178761|50.629680|
1| 1| 2| 3| 43| 264004|52.347730|
1| 1| 2| 3| 8004| 311883|52.771020|
1| 1| 2| 3| 12005| 147729|54.932310|
1| 1| 2| 3| 14102| 164018|57.345880|
1| 1| 2| 3| 7267| 192764|57.332810|
1| 1| 2| 3| 12005| 163999|57.516070|
1| 1| 2| 3| 3960| 198338|59.072780|
1| 1| 2| 3| 12006| 180000|58.336400|
1| 1| 2| 3| 17070| 208002|58.790100|
1| 1| 2| 3| 12005| 267919|58.528500|
1| 1| 2| 3| 15592| 196010|58.751410|
1| 1| 2| 3| 16004| 192001|57.880670|
1| 1| 2| 3| 23810| 172006|59.647280|
1| 1| 2| 3| 16015| 200010|59.560980|
1| 1| 2| 3| 16003| 235997|59.704740|
1| 1| 2| 3| 18501| 179637|60.219780|
1| 1| 2| 3| 20006| 168489|60.772730|
1| 1| 2| 3| 12004| 224008|60.729360|
1| 1| 2| 3| 15729| 134244|60.417340|
1| 1| 2| 3| 16016| 219633|60.976690|
1| 1| 2| 3| 19583| 212665|59.847950|
1| 1| 2| 3| 15125| 240006|61.047630|
1| 1| 2| 3| 17420| 167854|61.279070|
1| 1| 2| 3| 14772| 188001|60.885090|
1| 1| 2| 3| 18502| 171997|61.185230|
1| 1| 2| 3| 19992| 204001|60.691190|
1| 1| 2| 3| 11461| 231877|60.039350|
1| 1| 2| 3| 12007| 160004|60.264890|
1| 1| 2| 3| 15697| 212002|60.925000|

VS 2015 - ATOMIC_FLAG_INIT, ...

Hi,
I am not sure if this is the right place or method, but I would like to report some issues.

I am trying out this lib in Visual Studio 2015 (v140) and had to make three modifications to get it to compile and run.

  1. Initialisation of the atomic_flag (2 locations) - see the sketch after this list.
    I changed "flag(ATOMIC_FLAG_INIT)" to "flag{ ATOMIC_FLAG_INIT }".
    ATOMIC_FLAG_INIT is defined as {0}, and flag(ATOMIC_FLAG_INIT) tries to invoke the copy constructor for the flag.
  2. Initialise m_current_read_buffer to nullptr in class QueueBuffer.
  3. Add a missing #include in the cpp for format_timestamp() to work.
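Sketch of the change in point 1 (a minimal reproduction, not NanoLog's actual struct):

#include <atomic>

struct Bucket
{
    std::atomic_flag flag;
    // Bucket() : flag(ATOMIC_FLAG_INIT) {}   // rejected by MSVC v140: ATOMIC_FLAG_INIT is a braced list
    Bucket() : flag{ATOMIC_FLAG_INIT} {}      // brace-initialising the flag compiles on MSVC, gcc and clang
};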

regards
Stephanus

Add a setting to append to the existing log file

Hi, it would be nice to have an "append" setting on nanolog::initialize.
Currently the file is simply truncated on initialization. The problem is that when one uses supervisor and the application crashes, supervisor restarts it and all the previous logs are lost.
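Internally this boils down to the open mode used for the output file. A sketch of the idea only; the append flag on nanolog::initialize is hypothetical:

#include <fstream>
#include <string>

// Open the log file for appending instead of truncating, so a restart after a
// crash keeps the previously written lines.
std::ofstream open_log(const std::string & path, bool append)
{
    auto mode = append ? (std::ios::out | std::ios::app)
                       : (std::ios::out | std::ios::trunc);
    return std::ofstream(path, mode);
}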

a better ring buffer

First of all, I think the ring buffer should have a compile-time fixed size so that the modulus operation is trivial.

I'm thinking about a better ring buffer that can have multiple producers and multiple consumers without using a spinlock. It's similar to https://en.wikipedia.org/wiki/Seqlock. Say a ring buffer of size 1024 * 1024. It has a sequence-number member variable (defaulting to 1), and each item (bucket) in the ring buffer also has an atomic sequence-number variable (instead of your atomic_flag). When the ring buffer is created, all buckets have the default sequence number 0. When a producer wants to add an item, ringbuffer::seqNo.fetch_add(1, ...) returns the sequence number of the item to be written by that producer, and the index of the bucket is seq no % SIZE. Even if all producers want to write at the same time, each gets a unique bucket to work on, with no race. So far this is similar to what you are doing now. I think it's fair to assume copying the data into the bucket is very fast, so a producer won't log so quickly that it competes for the same bucket with another producer; basically it should be safe to assume that a producer (or a few producers) cannot wrap the whole ring buffer while one producer is still working on a particular bucket. Before the producer writes data to the bucket, it sets the bucket sequence number to seq no - 1, indicating the data might be dirty; once it finishes writing, it updates the bucket sequence number to seq no.

Now back to the consumer (or consumers). The consumer knows that the first item has sequence number 1, so it reads the atomic sequence number of that bucket (index 1 of the ring buffer). If the atomic variable holds a value less than the sequence number the consumer expects, the data is not ready, so the consumer keeps re-reading it. If the variable is greater than the expected sequence number, the consumer is too slow. Assume the consumer is fast and the variable is 1: the consumer copies the data, then reads the bucket sequence number again to make sure the data was not corrupted (it's possible the producer was updating the same bucket while the consumer was reading, which also means the consumer is too slow). Suppose all is good; the consumer then moves on to sequence numbers 2, 3, and so on. The consumer never updates anything in the ring buffer, so multiple consumers are supported (assuming they all read the same data).

What do you think? It's similar to what I have done in the past (single producer though), but I think this should work, and it's not very complicated. I'm happy to write the code, but I think you probably have more free time than me.
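A rough sketch of the scheme described above (my reading of it, not tested code; it inherits the proposal's assumption that producers never lap a writer still working on a bucket, and the unguarded copy of the payload is the usual seqlock-style race that the re-check is meant to catch):

#include <atomic>
#include <cstdint>
#include <cstddef>

// SIZE must be a power of two so "% SIZE" reduces to a mask.
template <typename T, std::size_t SIZE>
struct SeqRingBuffer
{
    struct Bucket
    {
        std::atomic<std::uint64_t> seq{0};   // 0 = never written
        T data;
    };

    // Producer: claim a sequence number, mark the bucket dirty, copy, publish.
    void push(const T & item)
    {
        std::uint64_t seq = m_next_seq.fetch_add(1, std::memory_order_relaxed);
        Bucket & b = m_buckets[seq & (SIZE - 1)];
        b.seq.store(seq - 1, std::memory_order_relaxed);   // "might be dirty"
        b.data = item;
        b.seq.store(seq, std::memory_order_release);        // publish
    }

    // Consumer: wait for the expected sequence number, copy, then re-check the
    // sequence number to detect being overwritten mid-copy.
    bool try_pop(T & out, std::uint64_t expected_seq)
    {
        Bucket & b = m_buckets[expected_seq & (SIZE - 1)];
        if (b.seq.load(std::memory_order_acquire) != expected_seq)
            return false;                                    // not written yet, or already lapped
        out = b.data;
        std::atomic_thread_fence(std::memory_order_acquire);
        return b.seq.load(std::memory_order_relaxed) == expected_seq;
    }

private:
    std::atomic<std::uint64_t> m_next_seq{1};   // sequence numbers start at 1
    Bucket m_buckets[SIZE];
};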

Fix warning messages during compilation

Hi, I have a few warning messages during compilation (clang++). It would be nice to fix them.


NanoLog/NanoLog.cpp:57:34: warning: format specifies type 'unsigned long long' but the argument has type 'unsigned long' [-Wformat]
sprintf(microseconds, "%06llu", timestamp % 1000000);
                       ~~~~~~   ^~~~~~~~~~~~~~~~~~~
                       %06lu
NanoLog/NanoLog.cpp:349:11: warning: braces around scalar initializer [-Wbraced-scalar-init]
: flag{ATOMIC_FLAG_INIT}
       ^~~~~~~~~~~~~~~~
.../include/c++/6.3.0/bits/atomic_base.h:157:26: note: expanded from macro 'ATOMIC_FLAG_INIT'
#define ATOMIC_FLAG_INIT { 0 }
                         ^~~~~
NanoLog/NanoLog.cpp:484:15: warning: braces around scalar initializer [-Wbraced-scalar-init]
, m_flag{ATOMIC_FLAG_INIT}
         ^~~~~~~~~~~~~~~~
.../include/c++/6.3.0/bits/atomic_base.h:157:26: note: expanded from macro 'ATOMIC_FLAG_INIT'
#define ATOMIC_FLAG_INIT { 0 }
                         ^~~~~
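A possible fix for the first warning (a sketch; the helper name here is made up): either cast the argument to the type named in the format string, or use PRIu64 so the format always matches uint64_t:

#include <cinttypes>
#include <cstdio>

// Format the microsecond part of a uint64_t timestamp as exactly six digits.
void format_microseconds(char (&out)[7], std::uint64_t timestamp)
{
    std::snprintf(out, sizeof(out), "%06" PRIu64, timestamp % 1000000);
}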

Integration with a crash handling library

NanoLog does not contain crash-handling code. I think this is the right decision, as that is the job of a crash-handling library. But it would be nice if NanoLog had a function that could be called when a crash happens, to make sure all log messages have been written. Something like:

CrashHandling::setCrashCallback([] {
    nanolog::flush();
});

some type mismatches

Hi,

I see the following type-mismatch warnings when compiling nanolog:

NanoLog.cpp(232): warning C4267: '=': conversion from 'size_t' to 'uint32_t', possible loss of data
NanoLog.cpp(239): warning C4267: '=': conversion from 'size_t' to 'uint32_t', possible loss of data
NanoLog.cpp(268): warning C4267: '+=': conversion from 'size_t' to 'uint32_t', possible loss of data
NanoLog.cpp(571): warning C4244: '+=': conversion from 'std::streamoff' to 'uint32_t', possible loss of data

According to the code, these may cause unexpected behavior if a uint32_t variable overflows, so to be safe I cannot simply ignore them.
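One way to address them (a sketch of the general idea, not a patch against the file): funnel the narrowing through a helper that makes the conversion explicit and checks that the value actually fits:

#include <cassert>
#include <cstddef>
#include <cstdint>

// Explicit, checked narrowing from size_t to uint32_t.
inline std::uint32_t to_u32(std::size_t n)
{
    assert(n <= UINT32_MAX);
    return static_cast<std::uint32_t>(n);
}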

Thanks,

benchmark

Hi,

Something I thought of: the higher the percentile, the worse the performance, right? (Unless my skim-reading was bad.)

I think it would be even clearer if you explained that in your benchmark results table. Also, maybe aggregating the numbers from the 0-50th percentiles into one percentile "bucket" would highlight the kick-ass NanoLog even more :)

Scalability

How well does this library scale? One example is multi-process logging (possibly multiple processes writing to the same file).

Thanks.
