
elsa's People

Contributors

dougburks, kb1, mcholste, mpursley, petiepooo, srunnels, terencemo


elsa's Issues

Display of number of logs indexed and archived

In a multi-node setup, the display of the number of logs indexed and archived in the top-right corner of the web interface is inconsistent. It seems that sometimes it is pulling the number of local logs indexed and archived on that search head, but refreshing will sometimes display a number that seems more representative of the entire cluster.

Large offset in ELSA query

When querying ELSA with a large offset, we have to use a large limit too (e.g. offset:10000, limit:10100 to get results 10,000 to 10,100), which results in a very slow query.

Is it possible to make a faster query with a large offset?
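
One possible workaround, if your ELSA version supports the start:/end: query directives, is to paginate by time window instead of by offset, keeping the limit small. Illustrative only; the search term and timestamps are made up:

    foo class=bro_conn start:"2017-06-27 00:00:00" end:"2017-06-27 01:00:00" limit:100
    foo class=bro_conn start:"2017-06-27 01:00:00" end:"2017-06-27 02:00:00" limit:100

Each slice stays cheap because the engine never has to skip over the first 10,000 rows.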

install.sh doesn't work on Debian stretch

Stretch switched from MySQL to MariaDB for all mysql-related packages. I was able to get past the first two blocks of errors with the following (consolidated into a sketch after this list):

  • apt install libmariadbclient-dev: this put mysql_config in place
  • apt install mariadb-server: this created the mysql user
  • I also needed to run apt install syslog-ng, even though the installer tries to compile it from source...
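
Consolidated, the package fixes above amount to something like this on a stretch node (untested as a whole):

    # Debian stretch: the MariaDB packages stand in for the MySQL ones
    apt-get update
    apt-get install -y libmariadbclient-dev   # provides mysql_config for DBD::mysql
    apt-get install -y mariadb-server         # creates the mysql user and the server itself
    apt-get install -y syslog-ng              # distro package, since the source build fails (see below)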

The first error encountered when running contrib/install.sh node is:

--> Working on DBD::mysql
Fetching http://www.cpan.org/authors/id/M/MI/MICHIELB/DBD-mysql-4.042.tar.gz ... OK
Configuring DBD-mysql-4.042 ... N/A
! Configure failed for DBD-mysql-4.042. See /root/.cpanm/work/1498578041.30211/build.log for details.

The build log shows:

Running Makefile.PL
Can't exec "mysql_config": No such file or directory at Makefile.PL line 87.

Cannot find the file 'mysql_config'! Your execution PATH doesn't seem
not contain the path to mysql_config. Resorting to guessed values!


PLEASE NOTE:

For 'make test' to run properly, you must ensure that the
database user 'root' can connect to your MySQL server
and has the proper privileges that these tests require such
as 'drop table', 'create table', 'drop procedure', 'create procedure'
as well as others.

mysql> grant all privileges on test.* to 'root'@'localhost' identified by 's3kr1t';

You can also optionally set the user to run 'make test' with:

perl Makefile.PL --testuser=username

Can't exec "mysql_config": No such file or directory at Makefile.PL line 574.
Failed to determine directory of mysql.h. Use

  perl Makefile.PL --cflags=-I<dir>

to set this directory. For details see DBD::mysql::INSTALL,
section "C Compiler flags" or type

  perl Makefile.PL --help
Can't find mysql_config. Use --mysql_config option to specify where mysql_config is located
-> N/A
-> FAIL Configure failed for DBD-mysql-4.042. See /root/.cpanm/work/1498578041.30211/build.log for details.
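
If mysql_config still isn't on the PATH after installing libmariadbclient-dev, cpanm can be pointed at it explicitly. A hedged sketch, relying on cpanm's standard --configure-args passthrough and the --mysql_config option shown in the log above:

    # tell DBD::mysql's Makefile.PL where mysql_config lives
    cpanm --configure-args="--mysql_config=$(command -v mysql_config)" DBD::mysql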

However, after fixing the above as mentioned earlier, syslog-ng fails to compile with the following error:

In file included from /usr/include/openssl/bn.h:31:0,
                 from /usr/include/openssl/asn1.h:24,
                 from /usr/include/openssl/objects.h:916,
                 from /usr/include/openssl/evp.h:27,
                 from /usr/include/openssl/x509.h:23,
                 from /usr/include/openssl/ssl.h:50,
                 from tlscontext.h:31,
                 from tlscontext.c:24:
/usr/include/openssl/asn1.h:553:1: note: declared here
 DEPRECATEDIN_1_1_0(unsigned char *ASN1_STRING_data(ASN1_STRING *x))
 ^
Makefile:910: recipe for target 'libsyslog_ng_crypto_la-tlscontext.lo' failed
make[4]: *** [libsyslog_ng_crypto_la-tlscontext.lo] Error 1
make[4]: Leaving directory '/tmp/syslog-ng-3.4.7/lib'
Makefile:1517: recipe for target 'all-recursive' failed
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory '/tmp/syslog-ng-3.4.7/lib'
Makefile:679: recipe for target 'all' failed
make[2]: *** [all] Error 2
make[2]: Leaving directory '/tmp/syslog-ng-3.4.7/lib'
Makefile:514: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory '/tmp/syslog-ng-3.4.7'
Makefile:418: recipe for target 'all' failed
make: *** [all] Error 2
build_syslogng success
Executing set_syslogng_conf
Updating syslog-ng.conf...
contrib/install.sh: 825: contrib/install.sh: /usr/local/syslog-ng/bin/pdbtool: not found
contrib/install.sh: 827: contrib/install.sh: /usr/local/syslog-ng/bin/pdbtool: not found
contrib/install.sh: 848: contrib/install.sh: cannot create /usr/local/syslog-ng/etc/syslog-ng.conf: Directory nonexistent
set_syslogng_conf FAIL

Any suggestions on how I might be able to get this to proceed beyond this point?
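
The deprecation note in the compiler output points at OpenSSL 1.1: syslog-ng 3.4.7 predates the 1.1 API (ASN1_STRING_data was deprecated in 1.1.0). One plausible, untested workaround is to build against the OpenSSL 1.0 compatibility headers that stretch still ships (note that libssl1.0-dev conflicts with libssl-dev, so this swaps them for the duration of the build):

    apt-get install -y libssl1.0-dev
    cd /tmp/syslog-ng-3.4.7 && ./configure --prefix=/usr/local/syslog-ng && make && make install

The --prefix is a guess based on the paths install.sh expects (/usr/local/syslog-ng/bin/pdbtool and friends).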

groupby Value Empty

Hey,
I created a custom class and patterndb rule set for our Sophos firewall. Everything works correctly, but when I try to groupby e.g. srcip, ELSA shows only one graph, with no value; the count looks right, but it adds all srcips together. I only created one new field (fwrule, type int).

Also, if I don't add "archive:0/1" to the query, it shows no results at all.

Output of web.log during Query:

 * TRACE [2016/05/04 14:44:49] /usr/local/elsa/web/lib/SyncMysql.pm (29) SyncMysql::query 7489 [undef]
 query: SELECT id, program FROM programs WHERE id IN (?)
 values:
* ERROR [2016/05/04 14:44:49] /usr/local/elsa/web/lib/Query/SQL.pm (652) Query::SQL::__ANON__ 7489 [undef]
 Did not get extra field value rows though we had values: $VAR1 = {
           '' => undef
         };
 * WARN [2016/05/04 14:44:49] /usr/local/elsa/web/lib/Fields.pm (646) Fields::resolve_value 7489 [undef]
 No field_order found for col
 * TRACE [2016/05/04 14:44:49] /usr/local/elsa/web/lib/Query/SQL.pm (572) Query::SQL::_format_records_groupby 7489 [undef]
 field_order:  key
 (the WARN/TRACE pair above repeats several more times within the same second)

MySQL Query:

USE syslog;

INSERT INTO classes (id, class, parent_id) VALUES(10001, "SOPHOS_FIREWALL", 0);

INSERT INTO fields (field, field_type, pattern_type) VALUES ("fwrule","int", "NUMBER");

INSERT INTO fields_classes_map (class_id, field_id, field_order) VALUES ((SELECT id FROM classes WHERE class="SOPHOS_FIREWALL"), (SELECT id FROM fields WHERE field="fwrule"), 5);
INSERT INTO fields_classes_map (class_id, field_id, field_order) VALUES ((SELECT id FROM classes WHERE class="SOPHOS_FIREWALL"), (SELECT id FROM fields WHERE field="srcip"), 6);
INSERT INTO fields_classes_map (class_id, field_id, field_order) VALUES ((SELECT id FROM classes WHERE class="SOPHOS_FIREWALL"), (SELECT id FROM fields WHERE field="dstip"), 7);
INSERT INTO fields_classes_map (class_id, field_id, field_order) VALUES ((SELECT id FROM classes WHERE class="SOPHOS_FIREWALL"), (SELECT id FROM fields WHERE field="srcport"), 8);
INSERT INTO fields_classes_map (class_id, field_id, field_order) VALUES ((SELECT id FROM classes WHERE class="SOPHOS_FIREWALL"), (SELECT id FROM fields WHERE field="dstport"), 9);
INSERT INTO fields_classes_map (class_id, field_id, field_order) VALUES ((SELECT id FROM classes WHERE class="SOPHOS_FIREWALL"), (SELECT id FROM fields WHERE field="action"), 11);

The fields look good and all have the right values:

I just noticed that if I groupby host, I do get values.
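
The repeated "No field_order found for col" warnings suggest the groupby field isn't resolving through fields_classes_map. A quick sanity check of the mapping, using only the tables and columns from the INSERTs above:

    mysql -D syslog -e '
      SELECT c.class, f.field, f.field_type, m.field_order
      FROM fields_classes_map m
      JOIN classes c ON c.id = m.class_id
      JOIN fields  f ON f.id = m.field_id
      WHERE c.class = "SOPHOS_FIREWALL"
      ORDER BY m.field_order;'

If the convention suggested by the Fortinet parser issue further down this page holds (integer captures i0-i5 at field_order 5-10, string captures s0-s5 at 11-16), each field's order also needs to line up with its capture slot in the patterndb rule.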

Database missing tables

Hello,

I just installed ELSA on Ubuntu server 16.04.1 and it appears to be up and running (I can get to the web interface just fine). When I point my devices at this new server to send their syslogs over, I'm getting the following error in /data/elsa/log/node.log:
Error: DBD::mysql::st execute failed: Table 'syslog.buffers' doesn't exist at /usr/local/elsa/web/../node/Indexer.pm line 945. TRACE [2017/02/07 12:14:03] /usr/local/elsa/web/cron.pl (99) main:: 64445 [undef] cron.pl finished.

Sure enough, when I log into MySQL, it appears that the table referenced was not created. This is all I have in that database:
+--------------------+
| Tables_in_syslog   |
+--------------------+
| class_program_map  |
| classes            |
| fields             |
| fields_classes_map |
| programs           |
| table_types        |
+--------------------+

Is there an extra step to be taken to initialize this database to store syslogs correctly? Thank you!
-Dennis
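
The buffers table is normally created during node setup, so one low-risk first step, assuming the contrib/install.sh layout used elsewhere in these issues, is to re-run the node install and confirm the table appears:

    sh contrib/install.sh node
    mysql -D syslog -e 'SHOW TABLES LIKE "buffers";'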

datasource:_node_stats Shows Data in Future

When performing the following query: datasource:_node_stats groupby:day, the resulting bar chart shows data for tomorrow. But if that timeframe is selected without doing the groupby, then no data is returned. The time/date on the system is correct. I'm using ODE if that makes a difference.

(screenshots: nodestats_groupby, nodestats_nogroupby)

Dashboard still requires credentials

Should public dashboards (auth == none) be accessible to anybody no matter the auth/method?

I'm running SecurityOnion and attempting to build some public dashboards for users who don't have an account. The auth/method is security_onion, which uses the sguildb for authentication. Essentially, a "Public" (auth == none) dashboard ends up behaving as if it were set to "Any authenticated user" (auth == authenticated).

Is this normal behavior?

To help test, is there an easier way to log out (so I can test as an unauthenticated user) other than clearing the browser cache?

Errors: Internal error

Occasionally, when performing a search, the webUI immediately responds with "Errors: Internal error" in red. If you click Submit a couple more times it will usually work.

Here's a partial log. I'll send you the full log privately since it contains internal data.

  • ERROR [2015/05/04 10:47:17] /usr/local/elsa/web/lib/Utils.pm (725) Utils::__ANON__ 17781 [undef]
    Peer 10.0.1.133 got error: Internal error at /usr/local/elsa/web/lib/Utils.pm line 724.
    Utils::__ANON__(undef, HASH(0x7f765bee7860)) called at /usr/local/share/perl5/AnyEvent/HTTP.pm line 695
    AnyEvent::HTTP::_error(HASH(0x7f765bc27bd8), CODE(0x7f765beef320), HASH(0x7f765bee7860)) called at /usr/local/share/perl5/AnyEvent/HTTP.pm line 1160
    AnyEvent::HTTP::__ANON__() called at /usr/local/lib64/perl5/AnyEvent/Socket.pm line 1001
    AnyEvent::Socket::__ANON__ called at /usr/local/lib64/perl5/AnyEvent.pm line 1305
    AnyEvent::_postpone_exec(EV::Timer=SCALAR(0x7f7657ca9b40), 256) called at /usr/local/lib64/perl5/AnyEvent/Impl/EV.pm line 88
    eval {...} called at /usr/local/lib64/perl5/AnyEvent/Impl/EV.pm line 88
    AnyEvent::CondVar::Base::_wait(AnyEvent::CondVar=HASH(0x7f765600c9e8)) called at /usr/local/lib64/perl5/AnyEvent.pm line 1994
    AnyEvent::CondVar::Base::recv(AnyEvent::CondVar=HASH(0x7f765600c9e8)) called at /usr/local/elsa/web/lib/View.pm line 166
    View::__ANON__(CODE(0x7f7659a11e78)) called at /usr/local/share/perl5/Plack/Util.pm line 301
    Plack::Util::__ANON__(CODE(0x7f765ba41908)) called at /usr/local/share/perl5/Plack/Util.pm line 301
    Plack::Util::__ANON__(CODE(0x7f765b637900)) called at /usr/local/share/perl5/Plack/Handler/Apache2.pm line 89
    Plack::Handler::Apache2::call_app("Plack::Handler::Apache2", Apache2::RequestRec=SCALAR(0x7f765b547050), CODE(0x7f765b58f648)) called at /usr/local/share/perl5/Plack/Handler/Apache2.pm line 126
    Plack::Handler::Apache2::handler(Apache2::RequestRec=SCALAR(0x7f765b547050)) called at -e line 0
    eval {...} called at -e line 0
  • WARN [2015/05/04 10:47:17] /usr/local/elsa/web/lib/Warnings.pm (18) Warnings::add_warning 17781 [undef]
    500: Internal error, $VAR1 = undef;
  • INFO [2015/05/04 10:47:17] /usr/local/elsa/web/lib/Controller.pm (1614) Controller::__ANON__ 17781 [undef]
    Query 770 returned 0 rows
  • TRACE [2015/05/04 10:47:17] /usr/local/elsa/web/lib/Query.pm (251) Query::_set_time_taken 17781 [undef]
    Set time taken for query 770 to 78

apache2-mpm-prefork

Hi.
When I try to install the web component on Ubuntu Wily, it shows the following error:
Package apache2-mpm-prefork is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
apache2-bin:i386 apache2-bin

E: Package 'apache2-mpm-prefork' has no installation candidate
ubuntu_get_web_packages FAIL

Default peer 127.0.0.1 and "groupby:host" limitations

Hi Martin,

Is peer "127.0.0.1" mandatory now? Changing it to IP of eth0 results in crashed apache with notice 'from_peer' => '_external' in web.log.
As expected with default peer left all results by "groupby:host" are from host "127.0.0.1".
Is this behavior feature or a bug?

Karolis

Remove old "get_elsa" googlecode portion of install.sh

Would it be safe to remove this from install.sh now, given that the Google Code repository no longer exists:

get_elsa(){
    # Find our current md5
    BEFORE_MD5=$($MD5SUM $SELF | cut -f1 -d\ )
    echo "Current MD5: $BEFORE_MD5"
    # Get the latest code from Google Code
    cd $BASE_DIR
    # Check to see if svn accepts --trust-server-cert
    SVN_TRUST_SERVER_CERT=" --trust-server-cert"
    svn help export | grep trust
    if [ $? -ne 0 ]; then 
        SVN_TRUST_SERVER_CERT=""
    fi
    svn -r $VERSION --non-interactive $SVN_TRUST_SERVER_CERT --force export "https://enterprise-log-search-and-archive.googlecode.com/svn/branches/elsa/1.5" elsa &&
    mkdir -p "$BASE_DIR/elsa/node/tmp/locks" && 
    touch "$BASE_DIR/elsa/node/tmp/locks/directory"
    touch "$BASE_DIR/elsa/node/tmp/locks/query"
    UPDATE_OK=$?

    DOWNLOADED="$BASE_DIR/elsa/contrib/$THIS_FILE"
    AFTER_MD5=$($MD5SUM $DOWNLOADED | cut -f1 -d\ )
    echo "Latest MD5: $AFTER_MD5"

    if [ "$BEFORE_MD5" != "$AFTER_MD5" ] && [ "$USE_LOCAL_INSTALL" != "1" ]; then
        echo "Restarting with updated install.sh..."
        echo "$SHELL $DOWNLOADED $INSTALL $OP"
        $SHELL $DOWNLOADED $INSTALL $OP;
        exit;
    else
        return $UPDATE_OK
    fi
}
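
A rough GitHub-era replacement might look like the following. The repository URL is an assumption, and the md5/self-update dance is dropped since git tracks changes itself:

get_elsa(){
    # hypothetical git-based fetch; adjust the URL to wherever elsa now lives
    cd $BASE_DIR
    if [ -d elsa/.git ]; then
        ( cd elsa && git fetch && git checkout "$VERSION" )
    else
        git clone https://github.com/mcholste/elsa.git elsa &&
        ( cd elsa && git checkout "$VERSION" )
    fi &&
    mkdir -p "$BASE_DIR/elsa/node/tmp/locks" &&
    touch "$BASE_DIR/elsa/node/tmp/locks/directory" \
          "$BASE_DIR/elsa/node/tmp/locks/query"
}

Note that $VERSION would need to hold a git ref (branch, tag, or HEAD) rather than an svn revision number.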

Cannot start web interface

Fresh install of ELSA on CentOS 6 using Apache resulted in a non-functioning installation for me. Apache would start and then immediately crash when trying to bring up the web app with the following error.

[Tue Feb 16 11:41:44 2016] [error] Error while loading /usr/local/elsa/web/lib/Web.psgi: Type of arg 1 to shift must be array (not anonymous list ([])) at /usr/local/elsa/web/lib/Query/Sphinx.pm line 1291, near "];"
Compilation failed in require at /usr/local/elsa/web/lib/QueryParser.pm line 28.
BEGIN failed--compilation aborted at /usr/local/elsa/web/lib/QueryParser.pm line 28.
Compilation failed in require at /usr/local/elsa/web/lib/Controller.pm line 34.
BEGIN failed--compilation aborted at /usr/local/elsa/web/lib/Controller.pm line 34.
Compilation failed in require at /usr/local/elsa/web/lib/Web.psgi line 10.
BEGIN failed--compilation aborted at /usr/local/elsa/web/lib/Web.psgi line 10.
BEGIN failed--compilation aborted at /etc/httpd/conf/elsa_startup.pl line 19.
Compilation failed in require at (eval 2) line 1.

I was able to get things up and running by changing line 1291 in /usr/local/elsa/web/lib/Query/Sphinx.pm from this:

my $key = shift [ keys %{ $ret->{results} } ];

To this:

my $key = shift @{[ keys %{ $ret->{results} } ]};

Seems to be working as expected once this change was made.
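
For anyone curious why the change works: shift requires a real array, and [ keys ... ] builds an array reference; @{ ... } dereferences it back into an array. A quick demonstration (the first form is a compile-time error on Perls without the since-removed autoderef experiment, e.g. the 5.10 that ships with CentOS 6):

    perl -e 'my $k = shift [1, 2, 3];'        # Type of arg 1 to shift must be array
    perl -e 'print shift @{[1, 2, 3]}, "\n";' # prints 1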

Dashboard chart title not saved to db

When a new chart is created in a dashboard, the chart title is not saved to database. Steps to reproduce:

  1. Create a dashboard
  2. Add Chart
  3. Enter Chart Title and other fields
  4. Save

The ELSA web log shows args { 'title' => "Foo Bar" ... }, but in the database, SELECT * FROM charts ORDER BY id DESC LIMIT 1; shows the newly created chart record with "title": null in the options column.

Saved Searches Convert to Lower Case

When a search is saved, it is converted to all lower case. This breaks any query using the OR operator. But if the saved search is then edited, the case is preserved and the query works.

CentOS 7 Does Not Include MySQL

CentOS 7 uses MariaDB, so the parts of install.sh that depend on MySQL will fail. Some notes in case anyone wants to run with this before I get a chance (see the sketch after this list):

  • Add detection for CentOS 7, maybe by looking for /etc/os-release (don't solely trust the existence of redhat-release anymore; it's now a link to os-release).
  • Replace the "yum -yq install mysql-server mysql-libs mysql-devel" lines with "yum -yq install mariadb-server mariadb-libs mariadb-devel".
  • systemctl is now used in place of service, so something like /bin/systemctl start mariadb.service is needed.
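
A minimal, untested sketch of those three notes together:

    # prefer os-release; redhat-release is now just a symlink to it
    if grep -qs '^ID="centos"' /etc/os-release && grep -qs '^VERSION_ID="7"' /etc/os-release; then
        yum -yq install mariadb-server mariadb-libs mariadb-devel
        /bin/systemctl start mariadb.service
    else
        yum -yq install mysql-server mysql-libs mysql-devel
        service mysqld start
    fi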

Multiple class search result export to excel does not work.

Hi,

ELSA has a nice feature for exporting search results to Excel and other formats. It does this perfectly when you export only a single class of logs. However, if I export a multi-class search result to one Excel sheet, it exports only the results of the first class in the search and misses all the results of the other classes.
To overcome this limitation I use a dirty hack: viewing the multi-class results in "Grid display" mode, then copying and pasting into Excel. It would be nice to have the "Export results" feature work for multi-class results.

Karolis

Remove/rework set_version portion of install.sh related to google code?

Would it also be safe to remove this portion from install.sh, or rework it to be compatible with GitHub?

        # set ELSA version
        if [ "$VERSION" = "HEAD" ]; then
                svn info http://enterprise-log-search-and-archive.googlecode.com/svn/ | grep "Last Changed" | sed -e "s/Last Changed //g" | perl -e 'use Config::JSON; my $c = new Config::JSON("/etc/elsa_web.conf") or die($!); while(<>){ chomp; my ($k,$v) = split(/:/, $_, 2); next unless $k and $v; $c->set("version/$k", $v); } $c->write;'
        else
                echo "revision:$VERSION" | perl -e 'use Config::JSON; my $c = new Config::JSON("/etc/elsa_web.conf") or die($!); while(<>){ chomp; my ($k,$v) = split(/:/, $_, 2); next unless $k and $v; $c->set("version/$k", $v); } $c->write;'
        fi
        $BASE_DIR/sphinx/bin/searchd --help | head -1 | perl -e 'use Config::JSON; my $c = new Config::JSON("/etc/elsa_web.conf") or die($!); while(<>){ chomp; exit unless $_; $c->set("version/Sphinx", $_); } $c->write;'
}
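
A GitHub-compatible rework could stamp the revision from git instead of svn, reusing the same Config::JSON one-liner. A sketch, assuming the install now runs from a git checkout:

        # set ELSA version from the local git checkout instead of Google Code svn
        cd $BASE_DIR/elsa
        echo "revision:$(git rev-parse --short HEAD)" | perl -e 'use Config::JSON; my $c = new Config::JSON("/etc/elsa_web.conf") or die($!); while(<>){ chomp; my ($k,$v) = split(/:/, $_, 2); next unless $k and $v; $c->set("version/$k", $v); } $c->write;'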

Problem correlating results in a subsearch

Hello,

I've been trying to get a subsearch to work, but it doesn't seem to do what I expect. I've boiled down what I'm doing to the following scenario...

query: class=bro_http srcip:"192.168.1.1" groupby:dstip
expected result: [192.168.101.10, 192.168.13.7]
actual result: [192.168.101.10, 192.168.13.7]

query: class=bro_conn srcip:"192.168.1.1" groupby:dstip
expected result: [192.168.101.10, 192.168.13.7]
actual result: [192.168.101.10, 192.168.13.7]

query: class=bro_http srcip:"192.168.1.1" groupby:dstip | subsearch(class=bro_conn groupby:dstip,dstip)
expected result: [192.168.101.10, 192.168.13.7]
actual result: []

I thought that the subsearch must be creating a query where the input IPs were ANDed together; however, after looking at the code in Query.pm, it seems they are OR'd together. If I force the initial query to return only one dstip, the result still comes back empty, so there must be more to it.

If I leave out the second parameter of subsearch(), I get a lot of results, but it does not constrain the IPs to the dstip.

I'd be happy to look into this further, but I cannot seem to see the subsearch query being written to a log, despite this trace call:
line 795: $self->log->trace('Subsearch query: ' . $subsearch_query_string);

All the best

Jim
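
If the trace line never appears, it may simply be gated by the web component's log level; worth ruling out before digging into Query.pm (a guess, based on the TRACE lines visible in other issues on this page):

    # check whether trace/debug logging is enabled for the web component
    grep -i debug /etc/elsa_web.conf
    # then watch for the subsearch trace while re-running the query
    tail -f /data/elsa/log/web.log | grep -i 'Subsearch query'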

Invalid JSON after XSS changes

I'm experiencing the following after applying the changes to remedy XSS found in commit 57f9ff7:

If viewing results in non-grid view, clicking on Info causes a "JSON error parsing response: [object Object]" error in the browser, and the web log has the following:

"
ERROR ... elsa/web/lib/View.pm (161) View::catch ...
... Invalid JSON args ...
'q' => '...b.a.s.e.6.4..b.a.s.e.6.4... ...b.a.s.e.6.4..b.a.s.e.6.4.....=='
... invalid character encountered while parsing JSON string, at character offset ...
... at /usr/local/elsa/web/lib/Controller.pm line 1914.
... at /usr/local/elsa/web/lib/Controller.pm line 1916
"

It's breaking at line 1910 of Controller.pm:
$decode = $self->json->decode(decode_base64($args->{q}));

The base64-encoded sData coming from the browser has a space in it; running the version of elsa.js from before this fix produces a plus sign instead.

I was able to solve the problem by adding "$args->{q} =~ s/ /+/g;" near the top of the get_log_info subroutine:

sub get_log_info {
    my ($self, $args, $cb) = @_;
    my $user = $args->{user};
    $args->{q} =~ s/ /+/g; ### <----- NEW LINE OF CODE

    my $decode;
    eval {
        $decode = $self->json->decode(decode_base64($args->{q}));
    };

I created a pull request, but I'm not sure if it's the best solution, or place for the solution.
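
The underlying corruption is easy to reproduce: '+' is a meaningful base64 character, and naive URL decoding turns it into a space. A self-contained demonstration of both the breakage and the s/ /+/g repair:

    perl -MMIME::Base64 -e '
      my $b64 = encode_base64("\xfb\xef", "");   # encodes to "++8=", which contains plus signs
      (my $wire = $b64) =~ s/\+/ /g;             # what lossy URL decoding does to it
      print "broken:   ", decode_base64($wire) eq "\xfb\xef" ? "no" : "yes", "\n";
      $wire =~ s/ /+/g;                          # the patch from this issue
      print "repaired: ", decode_base64($wire) eq "\xfb\xef" ? "yes" : "no", "\n";'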

MySQL problem I guess

Hi,

Is the project still alive? (No commits in the last 6 months.)

I installed a new server running 64-bit Debian 8.7.1, then tried to install ELSA, but it didn't work. The web interface works properly, but I can't see any logs in it.

After some research, I understood that syslog-ng receives the logs from the network and redirects them to /usr/local/elsa/node/elsa.pl, which stores them in /data/elsa/tmp/buffers/. But then the Perl script /usr/local/elsa/web/cron.pl, launched every minute by cron, shows errors like this:
DBD::mysql::st execute failed: Duplicate entry 'syslog_data.syslogs_index_1' for key 'table_name' at /usr/local/elsa/web/../node/Indexer.pm line 1636.
DBD::mysql::st execute failed: Table 'syslog_data.syslogs_index_1' doesn't exist at /usr/local/elsa/web/../node/Indexer.pm line 1342.
And the logs just accumulate in the buffer.

Any help would be greatly appreciated :-)
Thanks in advance !
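
The two errors together (a duplicate-key INSERT into the metadata, then "doesn't exist" on use) suggest the syslog.tables metadata and the actual syslog_data tables are out of sync. A first diagnostic, assuming the metadata lives in syslog.tables with a table_name column, as the duplicate-key message implies:

    # is there a stale metadata row pointing at a table that is gone?
    mysql -D syslog -e 'SELECT * FROM tables WHERE table_name LIKE "%syslogs_index_1%";'
    mysql -e 'SHOW TABLES IN syslog_data LIKE "syslogs_index_1";'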

Strange results from elsa queries

Not sure if this is a bug or misuse, but with the following queries over the same time period I get strange results.

class=bro_conn                             74691 records
(0 or 1 or "-") class=bro_conn             82754 records
(0 or 1 or "-" or "dns") class=bro_conn   147472 records

I thought that class=bro_conn alone would yield the highest result count. If I groupby class, the record counts match the results above.

Question about UNIQUE KEY for "fields" table

Martin,
I'm not sure the Google Code page is still active, so I'm opening issue 229 again here.

As pointed out on the Security Onion mailing list, I was asking myself whether this documentation is correct.

Given that the "fields" table has UNIQUE KEY (field, field_type), it can happen that the query below fails because the inner subquery returns more than one row:

INSERT INTO fields_classes_map (class_id, field_id, field_order) VALUES ((SELECT id FROM classes WHERE class="NEWCLASS"), (SELECT id FROM fields WHERE field="dstip"), 7);

Don't you think it would be better to change the unique key to "field" only, or to change the documentation to use both "field" and "field_type" in the WHERE clause (like below)?

INSERT INTO fields_classes_map (class_id, field_id, field_order) VALUES ((SELECT id FROM classes WHERE class="NEWCLASS"), (SELECT id FROM fields WHERE field="dstip" AND field_type="int"), 7);

Thanks,
-- Andrea
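
The failure mode is easy to confirm: if "dstip" exists once per field_type, the unqualified subquery returns two rows and cannot be used as a scalar value:

    mysql -D syslog -e 'SELECT id, field, field_type FROM fields WHERE field = "dstip";'
    # two rows here means the unqualified subquery in the documented INSERT will fail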

No new data in database

This is a strange issue; I have been working on it for over a week and cannot figure it out. This is a new build on Ubuntu 14.04. The install.sh run went through without an issue, but for some reason I am not getting any new data in the database. I received 100 logs and that is all. If I reboot, I receive 100 additional logs. If I manually execute syslog-ng -Fevd, it shows a multitude of data on screen. I see no issues in any of the log files. If I log into MySQL and run SELECT * FROM tables;, I see that the start and end times of the syslog_data.syslogs_index_1 table are 10 seconds apart.

There is one exception I have found: if I execute livetail.pl, I see everything that ELSA is doing, and all of that data is put into the database, but it is only searchable from the archive. The moment I end the livetail, the logs stop showing up in the database. I cannot figure out where the disconnect is. Please assist in troubleshooting. Thank you.

Elsa Syslog-NG default query returning duplicate results

The ELSA Syslog-NG queries for both host and program are returning 2 entries for each cron entry on one of the Linux servers OSSEC is monitoring.

If I query through the "All OSSEC Log" tab, I get the correct information.
I also verified the information is correct in the mysql database.

The problem is limited to the cron entries in auth.log on only one host.

CentOS 7 Apache Config

The version of Apache bundled with CentOS 7 requires "Require all granted" instead of "Allow from all" in /etc/httpd/conf.d/ZZelsa.conf.
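
For concreteness, a hypothetical fragment of /etc/httpd/conf.d/ZZelsa.conf updated for Apache 2.4's authorization syntax; the surrounding Directory block is an assumption, the Require line is the point:

    <Directory "/usr/local/elsa/web">
        # Apache 2.4 replaces the 2.2-style "Order allow,deny" / "Allow from all"
        Require all granted
    </Directory>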

Parsing issues when logs contain pipes

I have some BRO_HTTP logs containing pipes in some of the tab-separated fields, e.g. a Havij-made SQL injection attempt with double-pipe string concatenation like the following:

1430161421.432863    Z1b8x42wevxlEVgEbz      10.0.0.2   32314   10.0.0.1   80      1       GET     www.example.com /index.asp?id=convert(int,chr(114)||chr(51)||chr(100)||chr(109)||chr(48)||chr(118)||chr(51)||chr(95)||chr(104)||chr(118)||chr(106)||chr(95)||chr(105)||chr(110)||chr(106)||chr(101)||chr(99)||chr(116)||chr(105)||chr(111)||chr(110))--      -       Mozilla/4.0       0       1239  200      OK      -       -       -       (empty) -       -       -       -       -       KKBOqc6QgqXwbOaIkz      text/html

When the BRO_HTTP file is processed by syslog-ng, every tab gets replaced by a pipe:

source s_bro_http { file("/nsm/bro/logs/current/http_eth1.log" flags(no-parse) program_override("bro_http")); };
rewrite r_pipes { subst("\t", "|", value("MESSAGE") flags(global)); };
parser p_db { db-parser(file("/opt/elsa/node/conf/patterndb.xml")); };
template t_db_parsed { template("$R_UNIXTIME\t$HOST\t$PROGRAM\t${.classifier.class}\t$MSGONLY\t${i0}\t${i1}\t${i2}\t${i3}\t${i4}\t${i5}\t${s0}\t${s1}\t${s2}\t${s3}\t${s4}\t${s5}\n"); };
destination d_elsa { program("perl /opt/elsa/node/elsa.pl -c /etc/elsa_node.conf" template(t_db_parsed)); };

log {
    source(s_bro_http);
    rewrite(r_pipes);
    parser(p_db);
    destination(d_elsa);
};

As the Security Onion PatternDB configuration /opt/elsa/node/conf/patterndb.xml splits BRO_HTTP fields based on pipes, ELSA cannot show the above sample log correctly.

To my knowledge, this bug could be fixed by one of the following (see the untested sketch at the end of this issue):

  • escaping pre-existing pipes before the r_pipes rewrite directive; or
  • changing the Security Onion parsers to use tabs instead of pipes.

@mcholste @dougburks what do you think?
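
For the first option, an untested sketch in the same configuration style as above; the placeholder token is an arbitrary choice, and anything that keeps literal pipes out of the message before r_pipes runs would do:

rewrite r_escape_pipes { subst("\\|", "<pipe>", value("MESSAGE") flags(global)); };

log {
    source(s_bro_http);
    rewrite(r_escape_pipes);
    rewrite(r_pipes);
    parser(p_db);
    destination(d_elsa);
};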

Results options after search not working

Hi,
After running a search, I am unable to save the search or create an alert for it.

I am running the below version.

Rev 1205
Date 2014-07-18 01:12:58 +0300 (Fri, 18 Jul 2014)
Author [email protected]
Sphinx Sphinx 2.1.3-id64-dev (r4319)

On Ubuntu 14.04.2 LTS

This is the error I get when I try to save a search: it brings up a prompt for me to enter a name, but once I click Submit I get the following error.

DBD::mysql::db prepare failed: handle 2 is owned by thread 7f149c5d55c0 not current thread 7f14a1d5cd50 (handles can't be shared between threads and your driver may need a CLONE method added) at /usr/local/elsa/web/lib/Controller.pm line 2494.

This is the error I get when I try to create an alert or schedule: I can select the frequency and time, but once I submit I get the following.

DBD::mysql::db prepare failed: handle 2 is owned by thread 7f149c5d55c0 not current thread 7f14aac3d750 (handles can't be shared between threads and your driver may need a CLONE method added) at /usr/local/elsa/web/lib/Controller.pm line 1142.

Query Log not working

Hi,
Clicking on Query Log, I get a Data error. I should be able to see a log of all the queries that have been run; instead, I get the following Data error under Query History.
I am running
Rev 1205
Date 2014-07-18 01:12:58 +0300 (Fri, 18 Jul 2014)
Author [email protected]
Sphinx Sphinx 2.1.3-id64-dev (r4319)

On Ubuntu 14.04.2 LTS

adding new parser - so close - no cigar

I can't figure out where I am going wrong.

I have built the following pattern:

<ruleset name="FORTINET_FSSO" id='21000'>
    <pattern>fortinet</pattern>
    <rules>
        <rule provider="ADMIN" class='21000' id='21000'>
            <patterns>
                <pattern>date=@ESTRING:: @time=@ESTRING:: @devname=@ESTRING:: @devid=@ESTRING:: @logid=@ESTRING:: @type=event subtype=user level=notice vd=@ESTRING:: @logdesc="FSSO logon authentication status" srcip=@IPv4:i0:@ user=@QSTRING:s0:"@ server=@QSTRING:s1:"@@ANYSTRING::@</pattern>
            </patterns>
            <examples>
                <example>
                    <test_message program="fortinet">date=2015-12-15 time=13:41:16 devname=FG300C391xxxxxxx devid=FG300C391xxxxxxxx logid=0102043014 type=event subtype=user level=notice vd="DMZ1" logdesc="FSSO logon authentication status" srcip=x.x.x.x user="USERNAME" server="SERVERNAME" action=FSSO-logon msg="FSSO-logon event from SERVERNAME: user USERNAME logged on x.x.x.x"</test_message>
                    <test_values>
                        <test_value name="i0">x.x.x.x</test_value>
                        <test_value name="s0">USERNAME</test_value>
                        <test_value name="s1">SERVERNAME</test_value>
                    </test_values>
                </example>
            </examples>
        </rule>
    </rules>
</ruleset>
Which I have merged using pdbtool after successfully testing it.

I have added the following class and fields_classes_map entries to the db (the fields were pre-existing):
+-------+---------------+-----------+
| id | class | parent_id |
+-------+---------------+-----------+
| 21000 | FORTINET_FSSO | 0 |
+-------+---------------+-----------+

+----------+----------+-------------+
| field_id | class_id | field_order |
+----------+----------+-------------+
| 15 | 21000 | 5 |
| 26 | 21000 | 6 |
| 45 | 21000 | 7 |
+----------+----------+-------------+
and restarted syslog-ng.

The logs are being classified properly, showing up as class=FORTINET_FSSO, but srcip is consistently being parsed as 0.0.67.64, user as 0, and device as 0.

I've got to be missing/misunderstanding something simple?
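
Two things may be worth checking. First, comparing with the Sophos issue earlier on this page, integer captures (i0-i5) appear to map to field_order 5-10 and string captures (s0-s5) to 11-16; if that convention holds, user (s0) and server (s1) would need field_order 11 and 12 rather than 6 and 7, which would explain string fields coming through as 0. Second, pdbtool can replay the example message and show exactly what each capture extracts; the paths below follow the Security Onion layout seen elsewhere in this thread, so adjust to your install:

    /usr/local/syslog-ng/bin/pdbtool match \
        -p /opt/elsa/node/conf/patterndb.xml \
        -P fortinet \
        -M 'date=2015-12-15 time=13:41:16 devname=FG300C391xxxxxxx ... user="USERNAME" server="SERVERNAME" ...'

(The -M message is abbreviated here; use the full test message from the ruleset.)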

installation fails on CentOS 6.4 with perl 5.8.8

run:
sudo sh -c "sh contrib/install.sh node && sh contrib/install.sh web"

error:
build_web_perl FAIL

Appending installation info to /usr/lib/perl5/5.8.8/x86_64-linux-thread-multi/perllocal.pod

Upgrade Sandbox Connector from 1.0 to 1.5

When using Result Options -> Send to connector -> Send to malware analysis sandbox, I get no pop-ups, but I see the following in '/nsm/elsa/data/elsa/log/web.log' (I'm hand-typing the messages because the server is currently air-gapped):

* DEBUG [2016/07/28 18:29:12] /opt/elsa/web/lib/Controller.pm (2201) Controller::_send_to 55264 [undef]
loading plugin Connector::Sandbox
* ERROR [2016/07/28 18:29:12] /opt/elsa/web/lib/Controller.pm (2263) Controller::_send_to 55264 [undef]
Error creating plugin Connector::Sandbox with data $VAR1 = bless( {
<snip>
: Can't locate object method "api" via package "Connector::Sandbox" at /opt/elsa/web/lib/Connector/Sandbox.pm line 19.

When using Info -> Plugin -> Send to Sandbox, I receive a pop-up with a title of 'Error' and message of 'Send failed' and get the following in '/nsm/elsa/data/elsa/log/web.log':

* DEBUG [2016/07/28 18:47:35] /opt/elsa/web/lib/View.pm (380) View::_send_to 58803 [undef]
Decoded HASH(0x56413aeb1148) as : $VAR1 = {
<SNIP>
* DEBUG [2016/07/28 18:47:35] /opt/elsa/web/lib/QueryParser.pm (447) QueryParser::_parse_query 58803 [undef]
<SNIP>
* DEBUG [2016/07/28 18:47:35] /opt/elsa/web/lib/QueryParser.pm (233) QueryParser::parse 58803 [undef]
<SNIP>
* DEBUG [2016/07/28 18:47:35] /opt/elsa/web/lib/Query.pm (174) BUILD 58803 [undef]
Received query with qid 40 at 1469731655
* ERROR [2016/07/28 18:47:35] /opt/elsa/web/lib/View.pm (161) View::catch {...} 58803 [undef]
Not an ARRAY reference at /opt/elsa/web/lib/Results.pm line 89.

I made the following configuration update to /etc/elsa_web.conf and restarted apache2:

{
<snip>
  "connectors": {
    "sandbox": {
      "site": "192.168.10.24",
      "url": "http://192.168.10.24:8090/tasks/create/file"
    }
  },
<snip>
}

FYI: I'm currently running off the Security Onion ISO 14.04.4.1, ELSA Rev 1205.

Field missing in the web interface the first time it is loaded

I am not sure if it is a bug or a parameter to adjust, but the first time the ELSA web interface loads in any of my browsers, some fields are empty, like the Sphinx version ("not set") or the add-term menu (only "unclassified"). I have to reload the page and then it works! Any idea what the problem is?
(I am working on CentOS 7 with Apache.)
