
sexilog's People

Contributors

rschitz, shartge, vmdude


sexilog's Issues

/sexilog on whole disk /dev/sdb (not /dev/sdb1 partition)

Please make ext4 filesystem on whole /dev/sdb (not /dev/sdb1)

When ext4 is on whole disk online re-sizing /sexilog filesystem is as easy as:

echo 1 > /sys/devices/pci0000:00/0000:00:10.0/host0/target0:0:1/0:0:1:0/rescan
resize2fs /dev/sdb
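With ext4 on the whole device there is no partition table to edit, so growing the filesystem really is just those two steps. As a sketch, the same sequence can be reproduced on a file-backed image (no root and no real /dev/sdb needed; sizes are illustrative):

```shell
# Demonstrate growing an ext4 filesystem that spans a whole "disk",
# using a file-backed image in place of /dev/sdb so it runs unprivileged.
PATH="$PATH:/sbin:/usr/sbin"
img=$(mktemp)
truncate -s 64M "$img"             # the "disk" as originally provisioned
mkfs.ext4 -q -F "$img"             # filesystem directly on the device, no partition
truncate -s 128M "$img"            # the hypervisor grows the virtual disk
e2fsck -fp "$img" >/dev/null       # offline resize wants a forced fsck first
resize2fs "$img" >/dev/null 2>&1   # grow the filesystem to fill the new size
blocks=$(dumpe2fs -h "$img" 2>/dev/null | awk '/^Block count:/ {print $3}')
echo "ext4 block count after resize: $blocks"
rm -f "$img"
```

On the live appliance the rescan-then-resize2fs pair above does the same thing online, with no partition to grow first.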

zoom out feature causes SexiLog to Crash

Can anyone lend some insight on recovering from a SexiLog crash? I clicked the zoom-out feature twice and the SexiLog web server stopped responding. The appliance still responds to ping; I tried a reboot with no luck.

Wrong IP set with seximenu in /etc/hosts

When using SexiMenu to update network settings, there is an issue with the /etc/hosts file: the IP address is wrongly declared.

For instance:

root@log02:/usr/share/nginx/www# cat /etc/hosts
127.0.0.1   localhost
   log02.skynet.local

Instead of:

root@log02:/usr/share/nginx/www# cat /etc/hosts
127.0.0.1   localhost
192.168.0.218   log02.skynet.local

monitored vm hostname not displaying

I receive the logs of a monitored VM via rsyslog and I can see the logs in Kibana; the host value is there, but the hostname field is empty. How can I see my VM's hostname in the hostname column?

vcsa 6.5 logs

I have successfully configured our new VCSA 6.5 to forward syslog messages to SexiLog v0_99g.

I can see the appliance's log messages in Kibana, but they are not parsed correctly:
no entries for message_program and hostname, and unparsed text in 'message_body':
<13>1 2017-03-23T13:16:45.654835+01:00 vcsa vmon 2237 - - Executing op API_HEALTH on service rbd...

Any Idea?

Christian.
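The sample line is RFC 5424-framed syslog, which the RFC 3164-oriented filters don't expect. As a sketch (the file name and field mapping are assumptions, not the shipped config), a filter using logstash's bundled SYSLOG5424 patterns could recover the program and host:

```
# /etc/logstash/conf.d/filter-vcsa65.conf (hypothetical file name)
filter {
  grok {
    # %{SYSLOG5424LINE} ships with logstash's default patterns and yields
    # syslog5424_host, syslog5424_app, syslog5424_msg, ...
    match => [ "message", "%{SYSLOG5424LINE}" ]
  }
  mutate {
    # map onto the field names the SexiLog dashboards expect (assumed)
    rename => [ "syslog5424_app",  "message_program",
                "syslog5424_host", "hostname" ]
  }
}
```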

Can't deploy OVF: compressed disks inside are not supported

Hello,

I tried to deploy the OVF file but I got an error.
The OVF file contains compressed disks, which is not supported.

I'm using vSphere 6.5 and have no problem with the OVF file for SexiGraf, for example.

Thanks in advance for any clue.

JP

Adding in-place upgrade

In order to offer appliance continuity, we must add some in-place upgrade process.

We have to consider that the appliance is not directly connected to the Internet, so the solution should be a web page hosted on the appliance that provides a self-service upload-and-update operation.

curator does not delete indices

The curator command in /etc/crontab is wrong; it does not work:

# curator delete --disk-space 40
Usage: curator delete [OPTIONS] COMMAND [ARGS]...

Error: Missing command.

Correct usage is:

curator delete --disk-space 40 indices --all-indices

Switch to hourly logstash indices?

After fixing the "curator does not delete" issue I had another look at the amount of data that is pushed into elasticsearch. On my system this averages at about 30GB per 24 hours, so I increased /sexilog to 120GB and set the curator cutoff at 100GB, giving me about 3 days worth of information.

But when curator kicks in, this will delete a big chunk of data, a complete day.

I propose to switch to hourly indices which will result in much smoother data deletion.
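Hourly indices would be a one-line change in the elasticsearch output, assuming the output is consolidated in a single place; this is a sketch with logstash 1.x syntax, not the shipped config:

```
output {
  elasticsearch {
    # default is daily: "logstash-%{+YYYY.MM.dd}"; adding the hour gives
    # 24 smaller indices per day, so curator deletes in smaller chunks
    index => "logstash-%{+YYYY.MM.dd.HH}"
  }
}
```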

small optimization for vApp

I noticed some small optimizations are possible for the Debian part of the Sexilog vApp:

Debian Wheezy includes rpcbind and nfs-common in its default install, but you don't need them unless you use NFS (or something else using RPC calls).

The atd and dbus daemons are also running; they are not needed for the SexiLog vApp either.

And while we are at it, /var/cache/apt/archives can be cleaned and the elasticsearch, logstash and riemann DEBs removed from /root/, this saves some bytes to download.

Please do

apt-get purge at rpcbind nfs-common dbus
apt-get autoremove
aptitude purge ~c
apt-get clean
rm /root/*.deb

to get rid of all unneeded services and bytes.

Any plans for Kibana 5 support?

Now that Kibana 5 has been out for a while: any plans for it?

Especially since Kibana 3 doesn't come with authentication and authorisation, but Kibana 5 does.

disk full

After root (/) filled up, I moved /sexilog to another disk, and now I am not getting proper results in http://192.168.111.101/index.html#/dashboard/elasticsearch/SexiBoard:msg

It displays very little info, which is almost 24 hours old, and anything from before that is not there.

I am getting up-to-date email alerts from Riemann, but it seems Kibana is broken.

root@test-sexilog:/etc/elasticsearch# cat elasticsearch.yml|grep -v '#'|grep -v "^$"
threadpool.search.type: fixed
threadpool.search.size: 20
threadpool.search.queue_size: 100
threadpool.index.type: fixed
threadpool.index.size: 60
threadpool.index.queue_size: 200
index.translog.flush_threshold_ops: 50000
cluster.name: sexilog
index.number_of_shards: 1
index.number_of_replicas: 0
path.data: /sexilog
bootstrap.mlockall: true
discovery.zen.ping.multicast.enabled: false
indices.memory.index_buffer_size: 50%

root@test-sexilog:~# curl http://192.168.111.101:9200/_cat/shards      
kibana-int          0 p STARTED       24 177.2kb 192.168.111.101 Alexander Bont 
logstash-2017.11.14 0 p STARTED 34803750  23.7gb 192.168.111.101 Alexander Bont 
root@test-sexilog:~# 

What do I need to do to start seeing the latest messages in http://192.168.111.101/index.html#/dashboard/elasticsearch/SexiBoard:msg?

100% CPU utilization

There are about 10-30 of these processes: /usr/lib/jvm/java-7-openjdk-amd64, and even now with 3 vCores it has been at 100% CPU usage for over 48 hours. I've restarted SexiLog and the issue remains.
(screenshot: sexilog_github)

rsyslog service not restarted from SexiMenu

I updated my appliance with the latest config files for the SNMP/Veeam fixes, but it seems that SexiMenu doesn't restart rsyslog right now.

Current workarounds:

  1. reboot appliance ;-)
  2. Run the following command

/etc/init.d/rsyslog restart

First disk 100% used

Hi,

Last week, I saw the first disk, mounted on /, reach 100% space utilisation (it was around 7 or 8GB), so I extended it to 15GB.

Today I can see that it is the same...
(screenshot: disk usage)
Any idea why?

Issue with vCenter stats dashboard

Sometimes, the "ScoreboardStats" field is created with a string type instead of an integer one ("37" instead of 37).

As a result, some dashboards are not generated correctly.
We must force the float type in the GROK pattern.
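Grok can cast a capture at parse time with a type suffix, which avoids the string-typed mapping; a minimal sketch (the bare pattern shown here is illustrative, not the shipped one):

```
# without a suffix the capture is stored as a string ("37")
%{NUMBER:ScoreboardStats}
# with the :float suffix the field is indexed as a number (37.0)
%{NUMBER:ScoreboardStats:float}
```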

curator cronjob fills mailqueue with junk

The current curator cronjob is very verbose and generates a useless mail on every run.

Those mails cannot be delivered and create frozen mails in the mail queue because the exim4 daemon on the appliance is only set to do local deliveries and the target user "sexiuser" from /etc/aliases does not exist.

But even if the user existed, the mails would still be mostly useless.

I suggest silencing it by adding --loglevel CRITICAL:

*/15 * * * * root curator --loglevel CRITICAL delete --disk-space 100 indices --all-indices

syslog-snmp.conf doesn't handle hostnames with dashes in them

Looks like the grok match in /etc/logstash/conf.d/syslog-snmp.conf should use:

(?<hostname>[a-zA-Z0-9\-_]+[.][a-zA-Z0-9\-_\.]+|[0-9]+[\.][0-9]+[\.][0-9]+[\.][0-9]+)

instead of:

(?<hostname>[a-zA-Z0-9-_]+[.][a-zA-Z0-9-_\.]+|[0-9]+[\.][0-9]+[\.][0-9]+[\.][0-9]+)

How to have more than 6077 page result ?

Hello,

I use SexiLog with my Stormshield and it works great with syslog. Unfortunately I cannot get more than 6077 pages of results, so it's a bit complicated if I want to check logs from yesterday or 7 days ago.

I'd like to have something like 365 days of results. Before SexiLog I used VMware Log Insight, which was similar to SexiLog and had 365 days of syslog results.

I'm totally new to this one, so if someone can help, it would be much appreciated :)!

Thank you!
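Retention in SexiLog is bounded by the disk-space cutoff in the curator cron job, so longer history mostly means a bigger /sexilog disk. As a hedged sketch (assuming the appliance's curator v3 syntax and the default daily logstash-YYYY.MM.dd indices), a time-based variant of that cron entry would look like:

```
# /etc/crontab sketch: delete indices older than 365 days instead of by disk space
0 2 * * * root curator --loglevel CRITICAL delete indices --older-than 365 --time-unit days --timestring '%Y.%m.%d'
```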

seximenu not displaying correct elasticsearch/logstash status

The elasticsearch and logstash statuses are swapped in seximenu.sh.
The current code is:

stateelasticsearch=/etc/init.d/logstash status
statelogstash=/etc/init.d/elasticsearch status

but should be:

stateelasticsearch=/etc/init.d/elasticsearch status
statelogstash=/etc/init.d/logstash status

diff output if needed:

diff seximenu.sh seximenu.sh.bad
496,497c496,497
< stateelasticsearch=/etc/init.d/elasticsearch status
< statelogstash=/etc/init.d/logstash status
---
> stateelasticsearch=/etc/init.d/logstash status
> statelogstash=/etc/init.d/elasticsearch status

disable bloom filter to save memory?

The curator documentation http://www.elastic.co/guide/en/elasticsearch/client/curator/current/bloom.html states:

It can sometime make sense to disable bloom filters. For instance, if you are logging into an index per day, and you have thousands of indices, the bloom filters can take up a sizable amount of memory. For most queries you are only interested in recent indices, so you don’t mind CRUD operations on older indices taking slightly longer.

This setting is especially relevant to Logstash and Marvel use-cases. The bloom command allows you to disable the bloom filter cache for indices.

Since SexiLog uses ES 1.3.9 (ES 1.4 no longer enables bloom filters for search) one should use a command like this ...

curator --loglevel CRITICAL bloom indices --all-indices

... to save memory, if I interpret the documentation and several websites on using ES with time series data correctly.

CPU stuck at 99%

Hi,

Since a few days ago, the CPU has been stuck at 99%, see screenshot:
(screenshot: top output)

I tried rebooting the VM and restarting the services, but it's still the same.

Problem with curator ?

Hi,

I saw this morning that the disk mounted on /sexilog was full (100%), with 530GB used (the size is 542GB).
The curator disk-space threshold in crontab is set to 500GB; is this normal?

Thanks

mail alert exclusion

We need to add a way to exclude known issues or false-positive alerts from being sent by mail.

Update console resolution

We plan to set the console resolution to 1024x768 (instead of the default one).

This is done by editing file /etc/default/grub and adding these lines:

GRUB_GFXMODE=1024x768x32
GRUB_GFXPAYLOAD_LINUX=keep

Then run update-grub and reboot.

multiple output{} result in duplicate events

I noticed I see every event 4 times in the dashboards and wondered why that is.
And then I noticed that every config file in /etc/logstash/conf.d includes an output to elasticsearch.

4 outputs -> 4 events

After removing the duplicate elasticsearch outputs (leaving the riemann one alone), putting a single elasticsearch output into a separate es-output.conf file, and restarting logstash, the duplicate events are (of course) gone, and the event counters for specific events like snapshot creation look much more sensible now.
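A minimal sketch of such a consolidated output file, assuming logstash 1.x option names (host, protocol) and a local elasticsearch; the riemann output stays in its own file:

```
# /etc/logstash/conf.d/es-output.conf (sketch)
output {
  elasticsearch {
    host     => "localhost"
    protocol => "http"
  }
}
```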

Sexilog No Data in Kibana

I set up SexiLog as an NFS server and mounted /sexilog in a VCSA vSphere 5.5 appliance to ship logs. I restarted the VCSA to pick up the mount and confirmed it was mounted and that logs were being written to the SexiLog /sexilog directory as root:root, but no data was showing up in Kibana. How do I fix this? I ran:
chmod -R 777 . /sexilogs and
chown elasticsearch:elasticsearch in /sexilogs/*
then restarted services, but nothing changed.

vpxd-profiler-* _grokparsefailure tag and some lines commented out

I use filebeat on Windows to ship vpxd-*.log files to logstash.
Logstash parses the log files with the filter file "filter-json-vcenter.conf".

I have many log lines with the tag "_grokparsefailure", which means that logstash failed to parse them.

Examples:
This line has no "_grokparsefailure" tag:
"--> /VcServiceStats/StartupTime/ServerApp::Start/CustomFieldsManagerMo::Start/numSamples 1"

This line has the "_grokparsefailure" tag:
"--> /VcServiceStats/StartupTime/ServerApp::Start/CustomFieldsManagerMo::Start/min 28"

I think this is because you set "break_on_match" to false at line 41. Am I right?

Another thing: I'm wondering why there are 2 blocks testing the "OpMoLockStats" pattern; the first block begins at line 45 and the second at line 91.

And finally, why are lines 62, 64 and 66 commented out?

Thank you in advance for your help !
Regards.

/dev/null all local mails per default

In addition to issue #21, I would suggest silencing all local mails generated by daily/weekly/monthly cronjobs, etc., which normally nobody will ever read on such an appliance.

Add

sexiuser: :blackhole:

to /etc/aliases to throw all mails away immediately.

If an admin really wants to forward those mails to an external system, they can edit the alias file themselves and reconfigure exim4 to use an external smarthost.
