
Comments (72)

davidmckenzie avatar davidmckenzie commented on May 27, 2024 1

Merged your change in, and updated DB creation code :)

from pagermon.

jklnz avatar jklnz commented on May 27, 2024

Interesting, I think I am getting the same error here too. It was working fine until this morning when I got up and now it is constantly waiting on this network request:
http://192.168.1.143:3000/api/messages?limit=&page=1


jklnz avatar jklnz commented on May 27, 2024

This is the error message:

2017-06-22 08:29 +12:00: { Error: SQLITE_MISUSE: Database is closed
    at Database.<anonymous> (C:\Temp\pagermon-0.1.1-beta\pagermon-0.1.1-beta\server\node_modules\sqlite3\lib\sqlite3.js:20:25)
    at Database.object.(anonymous function) [as all] (C:\Temp\pagermon-0.1.1-beta\pagermon-0.1.1-beta\server\node_modules\sqlite3\lib\trace.js:31:20)
    at C:\Temp\pagermon-0.1.1-beta\pagermon-0.1.1-beta\server\routes\api.js:130:8
    at Layer.handle [as handle_request] (C:\Temp\pagermon-0.1.1-beta\pagermon-0.1.1-beta\server\node_modules\express\lib\router\layer.js:95:5)
    at next (C:\Temp\pagermon-0.1.1-beta\pagermon-0.1.1-beta\server\node_modules\express\lib\router\route.js:137:13)
    at Route.dispatch (C:\Temp\pagermon-0.1.1-beta\pagermon-0.1.1-beta\server\node_modules\express\lib\router\route.js:112:3)
    at Layer.handle [as handle_request] (C:\Temp\pagermon-0.1.1-beta\pagermon-0.1.1-beta\server\node_modules\express\lib\router\layer.js:95:5)
    at C:\Temp\pagermon-0.1.1-beta\pagermon-0.1.1-beta\server\node_modules\express\lib\router\index.js:281:22
    at Function.process_params (C:\Temp\pagermon-0.1.1-beta\pagermon-0.1.1-beta\server\node_modules\express\lib\router\index.js:335:12)
    at next (C:\Temp\pagermon-0.1.1-beta\pagermon-0.1.1-beta\server\node_modules\express\lib\router\index.js:275:10) errno: 21, code: 'SQLITE_MISUSE' }


jklnz avatar jklnz commented on May 27, 2024

Must have been an OS issue; restarted Windows and it works now.


davidmckenzie avatar davidmckenzie commented on May 27, 2024

The DB closed error might have been due to a close statement I left in for error handling. Removed most of them but looks like a couple stayed in there. Have updated now, hopefully that prevents it happening again. :)

@marshyonline do the messages show up fine after reloading? Any errors in the stdout or stderr logs on the server?

Edit: Also, if you can replicate it, can you watch the network tab of the browser console when it happens, and grab the contents of the response? http://i.imgur.com/iFqXYfr.jpeg

A correct response would look something like:

{"id":1103,"address":480112,"message":"STU TWN - FIRECALL - #107024 - MOOKERAWA RD STUART TOWN","timestamp":1498101660,"source":"OAG","alias":"ORA - Dubbo Group","agency":"RFS","icon":"fire","color":"darkred","ignore":0,"MAX(capcodes.address)":480}


marshyonline avatar marshyonline commented on May 27, 2024


jklnz avatar jklnz commented on May 27, 2024

Sweet, I have updated and had no issues!


jklnz avatar jklnz commented on May 27, 2024

I think I know what caused this: I just created an ignore filter on mine, and as soon as I did, some messages started coming through as you describe in the top comment.
I just watched another message come in and do exactly the same thing when it matched the ignore filter.


jklnz avatar jklnz commented on May 27, 2024

Deleted the ignore filter and it works.


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Ah, good catch! Sounds like I need to add some better error handling on the ignored messages. Am offline tonight but will see if I can push a patch tomorrow :)


marshyonline avatar marshyonline commented on May 27, 2024

Can confirm this has not solved the issue
http://imgur.com/a/piLis

I have a few ignore aliases; these messages hit the server roughly every minute, around the same time these errors are thrown.
My best guess is that this is somehow related.


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Latest commit should resolve this - I ended up resolving #8 at the same time, so now the socket emits the full message. The client side still does a quick check if any filters are present, but if a message is set to be ignored (either through an alias or through enabling PDW mode), it will never be sent to the client.

Could you test and let me know how that goes? :)
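A rough sketch of the server-side check described above (function and event names here are illustrative, not the project's actual code):

```javascript
// Sketch of the server-side ignore check: messages whose alias is flagged as
// ignore (or unaliased messages in PDW mode) are never emitted to clients.
// Names are illustrative assumptions, not pagermon's actual identifiers.
function maybeEmit(io, message, alias, pdwMode) {
  if (alias && alias.ignore) return false;   // ignored alias: never sent
  if (pdwMode && !alias) return false;       // PDW mode: drop unaliased messages
  io.emit('messagePost', message);           // otherwise emit the full message
  return true;
}
```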


marshyonline avatar marshyonline commented on May 27, 2024

This has solved the issue, but in the process created another one.
Loading messages or changing pages now pins the CPU at max.
Initial load time has increased dramatically, and the same goes for changing pages.
The DB currently stores 14,877 messages with 737 capcodes.

http://imgur.com/a/gi1GK


marshyonline avatar marshyonline commented on May 27, 2024

This could also be due to #42


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Damn :( CPU just on server, or any browser performance issues too?


marshyonline avatar marshyonline commented on May 27, 2024


davidmckenzie avatar davidmckenzie commented on May 27, 2024

No probs mate. I think I know what would be causing it anyway. Any chance I could get a copy of your messages.db file for testing? Can shoot you an email :)


marshyonline avatar marshyonline commented on May 27, 2024


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Just a quick update for you guys - I dropped some timers in the api code to see what operations are taking the longest:

init: 1ms
sql: 4051ms
array: 4ms
send: 4ms
xx.xx.xx.xx - - [24/Jun/2017:22:06:06 +0000] "GET /api/messages?limit=100&page=1 HTTP/1.1" 200 -  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"

That SQL code is this bit:

    console.time('sql');
    var sql = "SELECT messages.*, capcodes.alias, capcodes.agency, capcodes.icon, capcodes.color, capcodes.ignore, MAX(capcodes.address) ";
        sql += " FROM messages";
        sql += " LEFT JOIN capcodes ON messages.address LIKE (capcodes.address || '%')";
        sql += " GROUP BY messages.id ORDER BY messages.id DESC";
    db.all(sql,function(err,rows){
        if (err) {
            console.log(err);
        } else if (rows) {
            console.timeEnd('sql');

Will try now with indexes :)


jklnz avatar jklnz commented on May 27, 2024

Just been profiling that statement in SQLiteStudio:

11.69 seconds:

SELECT messages.*, capcodes.alias, capcodes.agency, capcodes.icon, capcodes.color, capcodes.ignore, MAX(capcodes.address)
FROM messages
LEFT JOIN capcodes ON messages.address LIKE (capcodes.address || '%')
GROUP BY messages.id ORDER BY messages.id DESC

0.051 seconds:

SELECT messages.*, capcodes.alias, capcodes.agency, capcodes.icon, capcodes.color, capcodes.ignore, MAX(capcodes.address)
FROM messages
LEFT JOIN capcodes ON messages.address = capcodes.address
GROUP BY messages.id ORDER BY messages.id DESC

Obviously that would wreck the wildcard matching (I don't use that, presume you do?)


jklnz avatar jklnz commented on May 27, 2024

I changed the relevant SQL statements on mine and it improves things massively. This is just a stopgap, though, until you find another option.


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Yeah it's definitely the wildcard matching that's killing it. Indexes don't do a thing when globbing like that. Have been able to scrape some marginal performance gains, but it's like two steps forward one step back at the moment.

Will mull it over for a while - I'm sure there's a smarter way to do this 😄

The array manipulation is super fast, so might look at just grabbing all messages and all capcodes into an array, and doing the joins on the javascript side. Not sure how that will scale when we start talking about millions of rows though.
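That JS-side join could look something like this minimal sketch, which assumes exact-match addresses only (wildcard aliases and the field names beyond `address` are illustrative):

```javascript
// Minimal sketch of joining messages to capcodes in JS instead of SQL.
// Build a lookup of capcodes by address, then decorate each message.
// Exact-match only; wildcard matching is deliberately omitted here.
function joinMessages(messages, capcodes) {
  var byAddress = {};
  capcodes.forEach(function (c) { byAddress[c.address] = c; });
  return messages.map(function (m) {
    var cap = byAddress[m.address] || {};
    return Object.assign({}, m, { alias: cap.alias, agency: cap.agency });
  });
}
```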

I reckon the "right" way would be to go straight to nosql, but I was hoping to keep the choice between sqlite and mongo/dynamodb optional


jklnz avatar jklnz commented on May 27, 2024

Yeah I have been running a few options through SQLIte but no joy yet.

One way to keep the current DB system could be to specify the wildcard in the filter instead of presuming all filters are wildcards. That way, the only wildcards in the query would be the ones you explicitly specified... Not sure if that would help you, though, depending on how many wildcards you have in your filters...


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Omg I'm an idiot 🤣

If the capcodes table address field is stored as text, using underscore as a single character wildcard, it looks like it works perfectly. That's what I had in mind originally when I was looking at doing the import from PDW's filters.ini. I don't know why, but I always assumed it would escape the special chars.

Will test out performance with that on @marshyonline's dataset, and I think I can rig up some kind of upgrade script for users that use the wildcard functionality, with an option to clear out the capcodes table and start fresh if you haven't modified since the filter import.

Will also have a think about what to do with leading 0s on this.


jklnz avatar jklnz commented on May 27, 2024

Nice, that sounds like it will work out well


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Ok, so running some tests, I think I have something workable. I'll be converting the client-side scripts to send the full 7-char capcode for POCSAG (including leading zeroes), and a minimum of 7 chars for FLEX, since from what I've seen there's near-zero usage of >7 char FLEX capcodes.
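That client-side padding might look something like this (a sketch; the function name is an illustrative assumption, not the project's actual code):

```javascript
// Sketch of the capcode padding described above: pad POCSAG addresses to a
// fixed 7-character string with leading zeroes before sending to the server,
// so text comparison and LIKE matching line up with the stored values.
function padCapcode(address) {
  return String(address).padStart(7, '0');
}

console.log(padCapcode(480112)); // '0480112'
```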

Both tables will have the address column changed to text, with some indexes added for good measure. Then will run the following to convert existing data:

UPDATE messages SET address = '0' || address WHERE LENGTH(address)=6;
UPDATE messages SET address = '00' || address WHERE LENGTH(address)=5;
UPDATE messages SET address = '000' || address WHERE LENGTH(address)=4;
UPDATE messages SET address = '0000' || address WHERE LENGTH(address)=3;
UPDATE messages SET address = '00000' || address WHERE LENGTH(address)=2;
UPDATE messages SET address = '000000' || address WHERE LENGTH(address)=1;

This will pad zeroes on the existing messages, so everything is neat and tidy.

UPDATE capcodes SET address = '0' || address WHERE LENGTH(address)=6;
UPDATE capcodes SET address = '%' || address || '%' WHERE LENGTH(address)<6;

For 6 char entries in the capcodes table, I'm assuming they're not wildcard based and popping a zero in front. For anything less than 6 chars, I'll assume they are wildcard based and will wrap in % so that they match properly.

This would probably need some manual clean-up after, depending on your data. I know I've got a few 2 or 3 char aliases that I'll need to update.

Did some testing with this setup, and it goes from >4000ms per query to ~700ms per query. Got a few more ideas to milk a bit more performance out of it though.

Will likely be a few days until I can push something out for this, will keep you guys updated.


jklnz avatar jklnz commented on May 27, 2024

Sweet, that's sounding great.

Looks like that checks out with my capcode database too. None of mine are under 6 chars long; 848 are 6 chars and 150 are 7 chars.
I had a few shorter capcode aliases, but they were causing too many false positives, so I'll just re-add them afterwards.
My message DB currently has 25k messages in it.


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Do you use the "PDW mode" where messages without an alias are ignored?


jklnz avatar jklnz commented on May 27, 2024

Nah I show them all


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Getting closer now :)

Follow up question for you guys - would you prefer if aliases that are set to ignore/filter out were not saved to the DB?

Would mean that you couldn't untick the ignore button and have them magically come back, but would also mean a good boost to performance not having a whole bunch of stale messages in the DB.

I'm leaning towards dropping them, but depends on how you guys are using the system.


marshyonline avatar marshyonline commented on May 27, 2024

The way I see it, if you have set an ignore filter it's garbage anyway; no point storing anything that has an alias set to ignore.
As long as that does not affect messages that have no alias, I don't see an issue with it.


jklnz avatar jklnz commented on May 27, 2024

Yeah I would have to agree with @marshyonline - I only have two ignore filters set and they are both for messages which I probably don't want to see again so should be good to just bin them.


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Perfect, thanks :D


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Response times getting lower now - getting sub 200ms query time for @marshyonline's data.

Helps when like 90% of it matched the ignore aliases though.

Should be able to get some further optimisations - since the ignored data no longer needs to be filtered out, we can do some pagination in the DB side, which means no more parsing gigantic arrays full of misery and tears.
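That DB-side pagination boils down to a LIMIT/OFFSET clause like this (a sketch; the helper and its parameters are illustrative, not the project's actual query code):

```javascript
// Sketch of DB-side pagination: once ignored messages are no longer stored,
// the query can page directly with LIMIT/OFFSET instead of slicing a huge
// array in JS. Helper name and parameters are illustrative assumptions.
function pageClause(limit, page) {
  var offset = (page - 1) * limit;
  return ' LIMIT ' + limit + ' OFFSET ' + offset;
}

console.log(pageClause(20, 3)); // ' LIMIT 20 OFFSET 40'
```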

@jklnz - any chance I could get a copy of your messages.db file too for testing?


jklnz avatar jklnz commented on May 27, 2024

Sweet, that'll be good!

Yeah what's your email?


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Updated times - with 'PDW Mode' enabled:

init: 178ms
sql: 8ms
send: 3ms
GET /api/messages?limit=20&page=1

Without PDW mode (so all messages):

init: 1ms
sql: 1ms
send: 2ms
GET /api/messages?limit=20&page=1

The increased time on PDW mode is due to a join required to get the count of messages, should be able to reduce that further though.

    if (pdwMode) {
        initSql =  "SELECT COUNT(*) AS msgcount FROM messages";
        initSql += " INNER JOIN capcodes ON capcodes.id = (SELECT id FROM capcodes WHERE address LIKE messages.address LIMIT 1);";
    } else {
        initSql = "SELECT COUNT(*) AS msgcount FROM messages;";
    }


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Searching is still a little expensive, will need to see how this goes with a bigger dataset:

PDW mode (311 messages)

init: 0ms
sql: 201ms
search: 60ms
GET /api/messageSearch?limit=20&page=1&q=FIRE
init: 0ms
sql: 184ms
search: 42ms
GET /api/messageSearch?limit=20&page=7&q=FIRE

All messages (2619 messages)

init: 0ms
sql: 333ms
search: 281ms
GET /api/messageSearch?limit=20&page=1&q=FIRE
init: 0ms
sql: 208ms
search: 197ms
GET /api/messageSearch?limit=20&page=7&q=FIRE

Can't paginate the results in SQL since we need to do full text searching across multiple columns - ends up quicker to just grab all the data and parse the big-ass array.
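The grab-everything-and-filter approach amounts to something like this sketch (the column list is an assumption based on the response example earlier in the thread):

```javascript
// Sketch of the multi-column full-text search described above: fetch all rows,
// then filter across several columns in JS. Column names are assumptions
// drawn from the example API response, not the project's actual search code.
function searchRows(rows, q) {
  var needle = String(q).toLowerCase();
  return rows.filter(function (r) {
    return ['message', 'alias', 'agency', 'source'].some(function (col) {
      return r[col] != null &&
        String(r[col]).toLowerCase().indexOf(needle) !== -1;
    });
  });
}
```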

I've been thinking about ways to do smarter filtering anyway, so now might be a good time to implement it. What I'd really like to be able to do is right click a capcode/agency/source/alias and say "Filter out" or "Show matching" - you could do this multiple times to construct a nice bookmarkable link that shows only the data you want. (This feeds into #47 too, as we could add a right click option to copy URL or open in new tab... or both.)

Would still keep the full text search for message content, but have all other columns filtered smarter.


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Testing @jklnz's dataset now, PDW mode slows to a crawl - looks like that option will always be impacted by the number of messages:

PDW mode (14288 messages)

init: 1469ms
sql: 4ms
send: 2ms

All messages (27295 messages)

init: 1ms
sql: 2ms
send: 1ms

So I guess some good news and bad news :)

Full text search struggles even more:

init: 0ms
sql: 1785ms
search: 2530ms
GET /api/messageSearch?limit=20&page=1&q=acute
init: 0ms
sql: 1773ms
search: 4719ms
GET /api/messageSearch?limit=20&page=10&q=acute


jklnz avatar jklnz commented on May 27, 2024

Glad I don't use PDW mode hahaha
I do use the search a bit though.


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Quick update - haven't been able to find a neat way around the speed of PDW mode with large record sets, so I'm going to ignore that for now and concentrate on optimising the search functionality. Once that's looking better, I'll push this code out and then prioritise #10 - as that will always be the better choice for the PDW mode use cases :)


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Idea time!

Add a foreign key column to the messages table, which links to the corresponding alias ID. This field is updated either by pressing a button in settings after making changes to capcode aliases, or via a trigger on the capcodes table (the former would be better, especially for bulk updates). This would make both the filtering and the counts super fast, as you could get the count simply by whether or not that foreign key is populated.

Will have a look at that tonight, see whether it's plausible.
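A hedged sketch of what that migration and backfill might look like (column names and the ordering choice here are assumptions based on this discussion, not the project's actual schema):

```sql
-- Hypothetical sketch only; the real schema and ordering may differ.
ALTER TABLE messages ADD COLUMN alias_id INTEGER REFERENCES capcodes(id);

-- Backfill: point each message at one matching alias, preferring the
-- lexically latest (most specific) address pattern.
UPDATE messages SET alias_id =
  (SELECT id FROM capcodes
    WHERE messages.address LIKE capcodes.address
    ORDER BY capcodes.address DESC
    LIMIT 1);
```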


jklnz avatar jklnz commented on May 27, 2024

I think that's probably the best idea. It should improve performance a lot, as you would only need to do that wildcard search once per message instead of 10,000 times per query.
Couldn't you just do it in the insertion code? Check if an alias matches and add the FK if it does; no need for a trigger.


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Yeah will see what performance is like - it might be a bit much to have a 5-10 second delay whenever you update an alias, since it would have to go through all of them every time (i.e. you wouldn't want to create an alias for '%' and have it overwrite the alias for everything else).


jklnz avatar jklnz commented on May 27, 2024

Maybe have an option saying do you want to update existing messages? (when you update an alias that is)

Actually, I assume most people will update them anyway so not sure if that would help haha


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Looking a lot better now - refactored searching to use proper queries. The filters on agency/capcode/source take between 100ms and 500ms depending on which page you're on and what you're filtering (the last page of a source filter was the slowest; filtering on capcode was the quickest).

The full text search still sucks, but is bearable. Takes about 2s on average for a really broad search term. Further improvements there will have to wait until either #10 or when I construct a more advanced filtering page.

I'm thinking with the alias updates we'll have two buttons - "Save" and "Save & Update" - with help text to indicate the latter will update existing messages and may take a while on large databases. Will also add a button on the alias list page to trigger an update. It'll eventually become a problem again when message databases keep growing, but that's a problem for future Dave (probably solved by #12 and #10 anyway).

In other news - on the front page view (so no filters/searches), both PDW mode and non-PDW mode are logging <10ms processing times from first page to last. Yay!


marshyonline avatar marshyonline commented on May 27, 2024

This makes me very happy!
Great work, and thank you!


jklnz avatar jklnz commented on May 27, 2024

Nice work. Regarding #12 - can you make rotation optional? In my old system I keep all messages, so ideally I want to do the same here... it currently has 1.4 million messages.


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Definitely will be optional :) We'll have to see how performance looks with a dataset that large after #10 - I imagine it wouldn't be much of an issue with a proper relational DB.


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Hrm, a potential issue with the wildcarding...

sqlite> SELECT * FROM capcodes WHERE '0401200' LIKE address ORDER BY address DESC;
id|address|alias|agency|icon|color|ignore
1118|0401___|test|RFS|question|black|0
121|0401200|Ruatoria|Ambo|medkit|#003300|0
1119|0401%|test wildcard|RFS|question|#000000|0
sqlite> SELECT * FROM capcodes WHERE '0401200' GLOB address ORDER BY address DESC;
id|address|alias|agency|icon|color|ignore
1118|0401???|test|RFS|question|black|0
121|0401200|Ruatoria|Ambo|medkit|#003300|0
1119|0401*|test wildcard|RFS|question|#000000|0

Both GLOB and LIKE are ordering wrong, may have to look at some fancy case ordering.

Edit: Disregard, easy fix :)

sqlite> SELECT * FROM capcodes WHERE '0401200' GLOB address ORDER BY REPLACE(address, '?', '*') DESC;                                                                                                                                                                                               
id|address|alias|agency|icon|color|ignore
121|0401200|Ruatoria|Ambo|medkit|#003300|0
1120|040120?|test|Test|question|black|0
1121|04012??|test|test|question|black|0
1122|04012*|test|test|question|black|0
1118|0401???|test|RFS|question|black|0
1119|0401*|test wildcard|RFS|question|#000000|0
1123|040????|test|test|question|black|0
1124|?40*|test|test|question|black|0


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Ok, I think we're ready to rock. I've done as much testing as possible, but there's still a high probability that I've missed something. Are you guys keen to pull down the 0.1.3 branch that I have pending?

https://github.com/davidmckenzie/pagermon/tree/0.1.3

I've put up some upgrade instructions here: https://github.com/davidmckenzie/pagermon/wiki/0.1.3-Upgrade-Instructions

Hoping to just get a bit of a sanity check before I push this to master ;)


marshyonline avatar marshyonline commented on May 27, 2024

Ill pull it down tonight and give it a crack :)


jklnz avatar jklnz commented on May 27, 2024

That was nice and easy then (lucky!).
I'm about to break it out and test it right now


jklnz avatar jklnz commented on May 27, 2024

Ignore that


davidmckenzie avatar davidmckenzie commented on May 27, 2024

I tested in DB Browser for SQLite on OSX, and in the Linux sqlite3 command line. :)

Just noticed some code from previous commits to fix the websockets issue is missing... not sure how that happened, will pop it back in now though.


jklnz avatar jklnz commented on May 27, 2024

All good, I started reading your wiki and found the answer

Bugger, just downloaded it, haha. Let me know when you've done it and I'll re-download.


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Go for it, just pushed the fix. :)


jklnz avatar jklnz commented on May 27, 2024

Sweet, downloading now.

FYI, just ran all the DB commands with no errors, so looking good


jklnz avatar jklnz commented on May 27, 2024

Looks like it worked. I must've done something wrong, as it reset my API keys to the defaults, but otherwise looking good


jklnz avatar jklnz commented on May 27, 2024

Feels a lot faster, nice work


jklnz avatar jklnz commented on May 27, 2024

And I see you fixed the glitch where it didn't pre-load the capcode when clicking to add aliases; that's great


jklnz avatar jklnz commented on May 27, 2024

I know what I did: I removed the old config folder instead of keeping it


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Excellent :D

Shouldn't be any compatibility issues with the old config files, so feel free to restore it if you need your settings back :)


marshyonline avatar marshyonline commented on May 27, 2024

Working beautifully!

Loading the home page

[8:27:49 pm] 2017-06-29 20:27 +10:00: init: 18.280ms
[8:27:49 pm] 2017-06-29 20:27 +10:00: sql: 1.712ms
[8:27:49 pm] 2017-06-29 20:27 +10:00: send: 2.891ms
[8:27:49 pm] 2017-06-29 20:27 +10:00: ************* - - [29/Jun/2017:10:27:48 +0000] "GET /api/messages?limit=&page=1 HTTP/1.1" 304 - "https://**********.net/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"

Text Search

[8:29:51 pm] 2017-06-29 20:29 +10:00: ********** - - [29/Jun/2017:10:29:50 +0000] "GET /stylesheets/style.css HTTP/1.1" 304 - https://************.net/?q=RFS" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36
[8:29:51 pm] 2017-06-29 20:29 +10:00: init: 0.380ms
[8:29:52 pm] 2017-06-29 20:29 +10:00: sql: 589.791ms
[8:29:52 pm] 2017-06-29 20:29 +10:00: search: 2.614ms
[8:29:52 pm] 2017-06-29 20:29 +10:00: searchFullText: 203.242ms
[8:29:52 pm] 2017-06-29 20:29 +10:00: sort: 0.569ms
[8:29:52 pm] 2017-06-29 20:29 +10:00: initEnd: 0.042ms

Doing a capcode refresh throws an unauthorized error, but still seems to do the refresh.

[8:32:15 pm] 2017-06-29 20:32 +10:00: ******** - - [29/Jun/2017:10:32:14 +0000] "POST /api/capcodes/56 HTTP/1.1" 200 23 "https://********.net/admin/aliases/56" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"
[8:32:20 pm] 2017-06-29 20:32 +10:00: { AuthenticationError: Unauthorized
    at allFailed (/opt/PagerMonServer/node_modules/passport/lib/middleware/authenticate.js:159:21)
    at attempt (/opt/PagerMonServer/node_modules/passport/lib/middleware/authenticate.js:167:28)
    at Strategy.strategy.fail (/opt/PagerMonServer/node_modules/passport/lib/middleware/authenticate.js:284:9)
    at Strategy.authenticate (/opt/PagerMonServer/node_modules/passport-localapikey-update/lib/passport-localapikey/strategy.js:76:17)
    at attempt (/opt/PagerMonServer/node_modules/passport/lib/middleware/authenticate.js:348:16)
    at authenticate (/opt/PagerMonServer/node_modules/passport/lib/middleware/authenticate.js:349:7)
    at Layer.handle [as handle_request] (/opt/PagerMonServer/node_modules/express/lib/router/layer.js:95:5)
    at next (/opt/PagerMonServer/node_modules/express/lib/router/route.js:137:13)
    at Route.dispatch (/opt/PagerMonServer/node_modules/express/lib/router/route.js:112:3)
    at Layer.handle [as handle_request] (/opt/PagerMonServer/node_modules/express/lib/router/layer.js:95:5)
  name: 'AuthenticationError',
  message: 'Unauthorized',
  status: 401 }
[8:32:23 pm] 2017-06-29 20:32 +10:00: updateMap: 2749.205ms
[8:32:23 pm] 2017-06-29 20:32 +10:00: ********* - - [29/Jun/2017:10:32:22 +0000] "POST /api/capcodeRefresh HTTP/1.1" 200 15 "https:/***********.net/admin/aliases/56" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"


marshyonline avatar marshyonline commented on May 27, 2024

It also seems that the DB creation for a first install is not up to date with the new schema.

"C:\Program Files\JetBrains\WebStorm 2017.1.4\bin\runnerw.exe" "C:\Program Files\nodejs\node.exe" F:\Users\Marshy\Documents\pagermon\server\app.js
::ffff:127.0.0.1 - - [29/Jun/2017:11:31:06 +0000] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"
::ffff:127.0.0.1 - - [29/Jun/2017:11:31:06 +0000] "GET /stylesheets/style.css HTTP/1.1" 304 - "http://localhost:3000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"
{ Error: SQLITE_ERROR: no such column: alias_id
    at Error (native) errno: 1, code: 'SQLITE_ERROR' }

I would suggest merging #49 before fixing, though.


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Yeah, the unauth error is normal - the API routes have two stages of authentication. It first checks whether you're using an API key; if not, it throws an error but continues, then checks whether you have an active auth session from the password auth. If you don't, it redirects to /login.

Bit of a dodgy workaround, since with Passport there's only two options - soft fail with an error, or hard fail and die.

Awesome to see it's working well :D If no issues I'll merge the PR tomorrow morning.

Edit: Ooh good catch, thanks for the PR!


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Merged into master, updated my main prod version running on a t2 EC2 instance. FWIW, with my tiny DB I'm getting:

Front page load:

2017-06-29 21:05 +00:00: init: 0.788ms
2017-06-29 21:05 +00:00: sql: 0.693ms
2017-06-29 21:05 +00:00: send: 0.850ms

Alias refresh:

2017-06-29 21:06 +00:00: updateMap: 46.523ms

Full text search:

2017-06-29 21:06 +00:00: init: 0.047ms
2017-06-29 21:06 +00:00: sql: 19.870ms
2017-06-29 21:06 +00:00: search: 1.145ms
2017-06-29 21:06 +00:00: searchFullText: 146.091ms
2017-06-29 21:06 +00:00: sort: 0.096ms
2017-06-29 21:06 +00:00: initEnd: 0.005ms


davidmckenzie avatar davidmckenzie commented on May 27, 2024

May be an issue with duplicate filtering, saw a message come through twice that shouldn't have. Will do some more testing.


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Fixed and pushed - the duplicate checking SQL wasn't wrapping the address in quotes, so it was treating it as a number and trimming the leading 0.
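The failure mode is easy to reproduce in isolation (values here are illustrative):

```javascript
// Illustration of the quoting bug described above: unquoted in SQL, the
// address is treated as a number and the leading zero disappears, so the
// duplicate check never matches the stored text value. Quoting keeps it text.
var address = '0480112';
console.log(Number(address));                      // 480112 - leading zero lost
var fixed = "WHERE address = '" + address + "'";   // quoted: compared as text
console.log(fixed);                                // WHERE address = '0480112'
```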


marshyonline avatar marshyonline commented on May 27, 2024

Can confirm dupe checking was/is broken.
Not home tonight, will update soon and test

http://imgur.com/a/bhp1D


marshyonline avatar marshyonline commented on May 27, 2024

Yeah, I think it's still broken


davidmckenzie avatar davidmckenzie commented on May 27, 2024

Hmm, I haven't seen any duplicates since the last update. If you're in PDW mode, could be that junk messages are coming in between them? What does it look like with PDW mode disabled?


marshyonline avatar marshyonline commented on May 27, 2024

Have watched the system for a few days and can confirm all is working fine 👍

