
Comments (228)

johnolafenwa avatar johnolafenwa commented on September 26, 2024 13

Hello @AndrewHoover @aesterling @VorlonCD

Thanks for bringing this up and it is exciting to see how DeepStack is being used.
We truly haven't done as much development as we would have wanted to recently; it took us a while to settle into our new jobs. Apologies for the pause.

We are planning to stabilise development and will have new releases in the coming weeks with significant improvements. The project is not abandoned, and a lot more is coming soon.

Please bear with us.
Thanks

from bi-aidetection.

OlafenwaMoses avatar OlafenwaMoses commented on September 26, 2024 8

@VorlonCD @AndrewHoover @aesterling To add to what @johnolafenwa said, the maker and open-source community has really inspired us to keep developing DeepStack to serve the community in new ways, considering the wealth of tools and the ecosystem built on top of its capabilities.

This is why we will open-source the project: to accelerate further development and open the door for more applications and impact.


johnolafenwa avatar johnolafenwa commented on September 26, 2024 5

Hello @VorlonCD @githubDiversity @classObject @aesterling @NicholasBoccio @Tinbum1 @johnjoemorgan @balucanb @AndrewHoover

Thank you all for the feedback over the past days. We are excited to share the latest builds, with massive improvements in speed for the face APIs.
The prior updates applied only to object detection; we have now extended them to face detection, face recognition, and face match.

Run

For CPU

deepquestai/deepstack:cpu-x6-beta

or

deepquestai/deepstack:latest

For GPU

deepquestai/deepstack:gpu-x5-beta

or

deepquestai/deepstack:gpu

We would love to know your thoughts and feedback on this.
Note that the new face APIs are not only faster but also more accurate than the previous ones.
Thank you all
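For anyone trying these tags, here is a minimal sketch of running one with the face APIs enabled. The `VISION-FACE` flag and the `/v1/vision/face` endpoint follow the DeepStack docs; the host port and image filename are placeholders:

```shell
# Run the new CPU beta with the face APIs switched on.
# -v localstorage:/datastore persists registered faces between restarts.
docker run -e VISION-FACE=True -v localstorage:/datastore \
    -p 80:5000 deepquestai/deepstack:cpu-x6-beta

# Send a test image to the face detection endpoint:
curl -X POST -F image=@test.jpg http://localhost:80/v1/vision/face
```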


johnolafenwa avatar johnolafenwa commented on September 26, 2024 5

Hello everyone. As @petermai6655 suggested, training on your own images can greatly improve accuracy.
To this end, I am excited to share today that we have added support for training and deploying object detection on your own images with DeepStack.

End-to-end instructions for doing this are documented here: https://docs.deepstack.cc/custom-models/index.html

Note that this feature requires running the latest DeepStack.

Supported DeepStack versions are

deepquestai/deepstack:cpu-2020.12

deepquestai/deepstack:gpu-2020.12

And yes, NVIDIA Jetson is supported too; just use

deepquestai/deepstack:jetpack

Do give this a try; we would love to help with any issues and see how this improves your ability to customize DeepStack to your needs.

Also, DeepStack is now open source on GitHub: https://github.com/johnolafenwa/DeepStack
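Based on the linked docs, deploying a trained custom model looks roughly like this; the model directory, model name, and host port are placeholders:

```shell
# Mount a folder containing your trained model file(s) into the
# container's detection model store, then start DeepStack.
docker run -v /path/to/my-models:/modelstore/detection \
    -p 80:5000 deepquestai/deepstack:cpu-2020.12

# A model saved as my-model.pt is then served at a matching endpoint:
curl -X POST -F image=@test.jpg http://localhost:80/v1/vision/custom/my-model
```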


AndrewHoover avatar AndrewHoover commented on September 26, 2024 4

Excellent!!!
Thank you so much for the response @johnolafenwa ! I'm not sure how much you anticipated that DeepStack would be used in the hobbyist/maker community, but projects like this one that @gentlepumpkin and @VorlonCD have developed have enabled a huge step forward in automation and functionality by bridging DeepStack to our projects.


johnolafenwa avatar johnolafenwa commented on September 26, 2024 4

Thanks guys for these details. We will share details from our end towards resolving this before the week ends. @petermai6655 , support for custom training is scheduled for release this November. I believe this will open up a lot more possibilities. Thanks for your patience.


johnolafenwa avatar johnolafenwa commented on September 26, 2024 4

Hello everyone, if you are trying out the custom model feature and running into any issues, please watch this video we made to demonstrate the whole process:
https://www.youtube.com/watch?v=wQKUQ6Y2n3Q

You can also check the updated docs https://docs.deepstack.cc/custom-models/ for guidance


johnolafenwa avatar johnolafenwa commented on September 26, 2024 3

Hello @VorlonCD @githubDiversity @aesterling @AndrewHoover @Tinbum1 @classObject

We have just released an update to the CPU version and will follow up soon with the GPU version.
Run

deepquestai/deepstack:latest

or

deepquestai/deepstack:cpu-x4-beta

The new update is much faster and a lot more accurate.

We would love to hear your feedback on this new release, here or on the forum: https://forum.deepstack.cc

I can't say enough how much this conversation here has contributed to our energy. Thank you all


johnolafenwa avatar johnolafenwa commented on September 26, 2024 3

Hello everyone, the issue with no detections has been fixed. Run

deepquestai/deepstack:latest

or

deepquestai/deepstack:cpu-x5-beta

If any further bugs are encountered, we would love to know so we can address them as soon as possible.


zeliant avatar zeliant commented on September 26, 2024 3

I know this isn't a DeepStack support thread, but... does anybody know how to get back to the above view? I added the --restart always flag to the Docker container the last time I ran it, and now my VM just sits at a command prompt with no DeepStack verbosity showing. :-)

Run the command below to find the DeepStack container name:

sudo docker ps

Then run the command below with the container name (here assumed to be deepstack):

sudo docker logs -f deepstack


doudar avatar doudar commented on September 26, 2024 3

@johnolafenwa that's some impressive documentation! I can't wait to try it out.

Thank you for your hard work!


aesterling avatar aesterling commented on September 26, 2024 2

@johnolafenwa That's great news and can't wait to hear more. Hope you're doing well and thanks again for the incredible tools. Appreciate the fast response!


johnolafenwa avatar johnolafenwa commented on September 26, 2024 2

Hello @Tinbum1 , the current Pi version has a number of issues. The beta will be Docker based, as we are switching to supporting DeepStack only on Docker. I can't give a definite time for when it will be released, but it will be out before the end of the year, likely in a November release.


Tinbum1 avatar Tinbum1 commented on September 26, 2024 2

@balucanb all the instructions are on the first page of the AITool forum thread.


VorlonCD avatar VorlonCD commented on September 26, 2024 2

Ok, this version of AITOOL will ignore BadRequest error 400 and ASSUME it means 'false alert' for NOW, but this really should be fixed on the DeepStack side, since we DO want to know when an actual error happens and not ignore it.

@OlafenwaMoses the JSON response should be sent rather than error, but with no 'prediction' objects in the case of no predictions being found. Like it did before. Error 400 is great for bad image or other unexpected error.
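For reference, a sketch of the behaviour described above: a successful request that finds nothing should still return HTTP 200 with an empty predictions array, so a client can tell "no detections" apart from a real failure. The JSON shape follows the documented detection response; this snippet just parses a canned response rather than calling a live server:

```shell
# Canned example of the expected no-detection response body (HTTP 200):
resp='{"success": true, "predictions": []}'

# Distinguish "nothing found" from a server error without relying on 400:
if echo "$resp" | grep -q '"success": true'; then
    if echo "$resp" | grep -q '"predictions": \[\]'; then
        echo "no detections"    # a false alert, not an error
    else
        echo "objects found"
    fi
else
    echo "server error"         # only real failures should land here
fi
```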

THANK YOU for your hard work on this project; it's amazing to use to prevent false security camera alerts!

AITOOL-VORLONCD.zip


johnolafenwa avatar johnolafenwa commented on September 26, 2024 2

@aesterling
OCR is well within the scope of DeepStack. DeepStack is currently focused on computer vision, and OCR is within that target. The long-term goal for DeepStack is to support vision, language, and speech. We don't have a timeline for the OCR feature, but it is on our minds.


ipeterski avatar ipeterski commented on September 26, 2024 2

The Jetson version is AWESOME!!! I have gone from 700ms with CPU (x3) to 450ms with GPU (x4), down to 178-200ms with the Jetson Nano. At one point I even got a few microsecond process times for very clear images. (I use 4K cameras and send 4K resolution to the AI because I want to give it as much resolution as possible; with the Jetson Nano I did lower the res to 2K, which still yielded a 200-300ms process time.) Since AI Tool supports multiple "Deepstack Url(s)", I am going to play with running multiple Jetson Nanos for processing, just out of curiosity about how it will process.

There are times I get 600 images sent to DeepStack within a 20-minute period (heavy traffic in the backyard plus 22+ cameras).

I have been quietly testing and researching the best method of running DeepStack; I even purchased a physical server with 32 cores and will be installing a few Quadro GPUs to see how it processes. If I can get processing down to nanoseconds, that would be ideal. I have not yet tried Intel; I'm sticking with NVIDIA GPU support via Docker/Ubuntu installations.

Sorry for the long reply, but like I said, I have literally spent the past 2 months working on different aspects of BI, AI, DS, and HA integrations and processing to create a solution as a whole. Hearing that DS is open source is amazing. After the new year I definitely plan on contributing as much as I can in R&D, time, and even some investment if necessary.

Thanks. I don't know everyone to @ yet, but I will soon; I plan to be around more often and more active to contribute as I can. Thanks to all for everything!!


b0ddu avatar b0ddu commented on September 26, 2024 2

@johnolafenwa thank you for making this open source. I have been using it for a long time and like the support it's getting. Also waiting on the release for NCS2.

@ipeterski I'm using an Intel NCS2. Did you see better results on the Jetson Nano 2GB? Hoping to get one of those.


ipeterski avatar ipeterski commented on September 26, 2024 2

@SankeerthB I have the newer 4GB version. Don't quote me, but I think I read they are going to move more towards the Jetson Nano. I want to see if I can potentially cluster 2 Nanos together to speed up the process even further; my goal is realistically in the microsecond processing-time frame.

In my experience with research and testing, running DeepStack off a machine with just CPU support versus the same machine with a GPU and CUDA, the GPU was quite a bit faster, so the Jetson Nano makes a lot of sense to me.


VorlonCD avatar VorlonCD commented on September 26, 2024 1

Most likely it wouldn't be that hard to use a different engine, and I do want to provide that ability at some point.

If anyone wants to research to find the best ones that would be great! Or even implement!

I took a quick look a while back.... Don't remember if any of these can run fully locally or not, but I think there is a free or low cost option for most.

Sighthound
ImageAI
AWS Rekognition
Google AutoML


johnolafenwa avatar johnolafenwa commented on September 26, 2024 1

@githubDiversity That's a good question. Sure, we could just open source the current codebase; however, we have been developing a new version of DeepStack that is significantly faster and more stable than the existing version, and it is the source for this new version that we are releasing soon. FYI, before this weekend ends we shall release the CPU version, as it is completed, and the GPU version next week. The codebase will be made public before November ends. In the meantime, these upcoming releases will require no activation purchase, with all features free perpetually.


Tinbum1 avatar Tinbum1 commented on September 26, 2024 1

@johnolafenwa Many thanks for the prompt reply. That's great; I can't wait, so I can get my power consumption down! Love the work you've done, thank you.


classObject avatar classObject commented on September 26, 2024 1

@johnolafenwa Thanks for the update! I'm seeing a dramatic speed increase. My response times have gone from an average of 630ms down to an average of 230ms.


Tinbum1 avatar Tinbum1 commented on September 26, 2024 1

Mine's gone from about 1000ms to 210ms, but it's coming back as a bad request in AITool, so I will have to investigate that.
Got http status code '400' in 241ms: Bad Request|81414|1||24
Empty string returned from HTTP post.|81415|1||24


classObject avatar classObject commented on September 26, 2024 1

@Tinbum1 @johnolafenwa DeepStack seems to be returning a 400 Bad Request if it does not detect any objects in the image. I changed the exposure on an image that worked until it was so dark there were no detections. It returned a Bad Request. Your image returns a bad request for me as well.


johnolafenwa avatar johnolafenwa commented on September 26, 2024 1

Thanks @classObject @Tinbum1 @VorlonCD

Returning an error on no detection is not by design; this is a bug. I have confirmed it on our side too. It will be fixed and a new update released as soon as possible.


Tinbum1 avatar Tinbum1 commented on September 26, 2024 1

@johnolafenwa

Thank you, that's great.


Tinbum1 avatar Tinbum1 commented on September 26, 2024 1

It also seems to be using a lot less CPU.


AndrewHoover avatar AndrewHoover commented on September 26, 2024 1

The latest version of DS appears to have corrected my errors!


johnjoemorgan avatar johnjoemorgan commented on September 26, 2024 1


NicholasBoccio avatar NicholasBoccio commented on September 26, 2024 1

I am still running the GentlePumpkin AITool but WOW look at these improvements!

My previous MODE=HIGH times were 300-500ms; MODE=LOW times were sometimes under 200ms but mostly around 250ms...

Blue Iris is running on RAID0 NVME Win10Pro I7-8500 32GB RAM and I have a separate Ubuntu box running Docker/deepstack:latest with the same processor/nvme/ram:

High
https://imgur.com/9TS6Yrm
Low
https://imgur.com/EjFD6q0

There seems to be no difference between HIGH/LOW, so I am just going to remove the ENV entry completely until we learn whether it is still supported. Either way, these are INSANE times and, as mentioned, will allow for practically instant triggers. I have 7 cameras that face a busy-ish street (13 in total) and now could theoretically get about 10fps of vision detection on CPU! I cannot wait for the GPU version to be finished. Each machine has a Quadro 620, which isn't super powerful, but I am happy to keep the CPU as free as possible for Blue Iris.

Great work guys - THANK YOU for giving this more of your precious time! https://tenor.com/baakT.gif


johnolafenwa avatar johnolafenwa commented on September 26, 2024 1

This is great to know @NicholasBoccio. Thanks a lot . We are excited about the great things you will build with DeepStack.
The Low mode is meant to be faster; we shall investigate why the speeds are the same.

In the meantime, earlier than promised, we are happy to share that the new GPU version is available on Docker Hub.

Run

deepquestai/deepstack:gpu

or

deepquestai/deepstack:gpu-x4-beta

The accuracy is the same as the cpu version but the speeds are much higher.

Would love to know your experience using this.

We have a lot planned in the next few weeks both in the short term and the long term.


johnolafenwa avatar johnolafenwa commented on September 26, 2024 1

@Tinbum1 Cool. The issue is that GPU support, even with WSL2, is a little complicated. Basically you have to do the following.

This is quite a long process and it is a preview feature.


NicholasBoccio avatar NicholasBoccio commented on September 26, 2024 1

Wow - the GPU version finally WORKS!

The MODE=High/Low also works fine:
High:
https://imgur.com/6fKKQaw

Low:
https://imgur.com/jGva8VP

I am still running this on a separate box, but I am confused by the times. These are similar to the CPU times.

Here is what nvtop shows with this running for about 30 minutes at High:
https://imgur.com/iqTMeT5

BTW, I am happy with either the GPU or CPU being at or under 100ms; that exceeds my needs. But I was expecting an order-of-magnitude improvement based on what the forum said about the GPU being 5-20x faster.

Thank you for all of this work! I am now going to get this working on the same machine that AITool is running on Windows.


johnolafenwa avatar johnolafenwa commented on September 26, 2024 1

We shall investigate the speed difference on gpu. Thanks for the details.

It is good to know you got the gpu version working.
On the error in WSL, follow this guide to install the NVIDIA Container Toolkit:
https://docs.nvidia.com/cuda/wsl-user-guide/index.html#installing-nvidia-docker


NicholasBoccio avatar NicholasBoccio commented on September 26, 2024 1

@johnolafenwa Thank you for the instructions, I will give it a go this afternoon.

I think I found our problem (assuming you were also trying to use the Docker Windows version):
https://imgur.com/a/hNScb31
from: https://docs.nvidia.com/cuda/wsl-user-guide/index.html#installing-nvidia-docker (about halfway down).

It's almost 4 am; I will finish this with their recommendations when I get up. Feel free to jump ahead of me @Tinbum1


Tinbum1 avatar Tinbum1 commented on September 26, 2024 1

@johnolafenwa
I've just downloaded the latest version and I think it has increased my processing times significantly, probably nearly doubling them. I don't think AITool uses face detection, face recognition, or face match, so I'm surprised it would have an effect. If that's the case, perhaps separate releases would be an idea. I shall check my findings on another computer and pay more attention.


NicholasBoccio avatar NicholasBoccio commented on September 26, 2024 1

The photon vm has 4 logical cpu's and 2048Mb ram; is there a way to speed up the processing?

Add
-e MODE=low
And see if that helps
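For context, the flag is a container environment variable, so the container has to be re-created for it to take effect. A sketch, using the port mapping and image tag seen elsewhere in this thread and a hypothetical container name:

```shell
# MODE=Low trades accuracy for speed (High does the opposite);
# the flag is read at container start, so remove the old container first.
docker rm -f deepstack 2>/dev/null
docker run -d --name deepstack -e VISION-DETECTION=True -e MODE=Low \
    -v localstorage:/datastore -p 80:5000 deepquestai/deepstack:latest
```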


wilddoktor avatar wilddoktor commented on September 26, 2024 1

Thanks @NicholasBoccio; that seems to have helped significantly!


VorlonCD avatar VorlonCD commented on September 26, 2024 1

@ncrispi - the detection would be a little better if you turned off motion highlighting in BlueIris (the last three you posted).


VorlonCD avatar VorlonCD commented on September 26, 2024 1

@petermai6655 - I think we are looking for cases where it is very obvious that an object should be detected, but it's not. I haven't had time to go through mine for the last few nights yet, but there is always a cat or fox it misses for sure. False detections may not be possible to fully prevent with this tech, but when it misses a human walking around your house at night, that's an issue. For @johnolafenwa to correctly analyze the image there should not be any annotation, rectangles, text, etc.


AndrewHoover avatar AndrewHoover commented on September 26, 2024 1

@johnolafenwa That is great news and one thing that I would love to take advantage of.


Yonny24 avatar Yonny24 commented on September 26, 2024 1

Awesome, definitely going to go through this custom model option, possibly to train it to recognize my neighbors' cats. Changing the mode to high and retesting did help, though.


Yonny24 avatar Yonny24 commented on September 26, 2024 1

@Yonny24 you appear to have the same question as me in reference to running 2 different models at the same time. I am running this with Docker Desktop; can you explain how to do what @johnolafenwa is telling us to do? Sorry if it is a simple task, but I am brand new to using Docker. TIA

Yep, I have DeepStack (vision detection) running on Docker for Windows right now, and I do not want to break it. I think I understand that vision detection and custom model detection can work on the same DeepStack port in tandem (same container, etc., just with an extra element to recognize more objects that are not standard)?
Just need to add an additional volume to the current container? I can probably achieve this with Portainer under Volumes and redeploy, but am cautious.
Also a bit confused by the instructions still using Google Colab; I thought we were using our own local CPU. Maybe Colab (new to me) was just an example.

Confused is the key word for me! Basically I am in the same position: I have a working copy and do not want to mess that up. It took me too long to get it working because of my lack of knowledge/skill with this. Like you, I am stuck on getting the custom model I have trained to work. Not understanding how to make this new volume, I am trying to read the Docker docs now, but it is all Greek to me. You mentioned Portainer; I have heard of it but don't know what it really is. Is it part of Docker, or some add-on where you can put/make new containers (I think that is the correct wording)? I assume they are run locally?

Portainer is just another container, but with a nice user-friendly interface to manage all your other containers without having to use the command line.


balucanb avatar balucanb commented on September 26, 2024 1

@Yonny24 are you deploying Portainer as a standalone or as a swarm? Also, the last screenshot is exactly what it should be doing when it trains. Just FYI, I had around 300 images in my train folder and I let it do all 300 epochs; it took me about 6 1/2 hours to get that done. The next step is where I am stuck, hence my standalone/swarm question.


aesterling avatar aesterling commented on September 26, 2024

Andrew, I agree it doesn't look very promising, but I'm still hopeful they'll release it open-sourced as promised.

From the information on their website, the two DeepStack developers, Moses and John, are brothers from Nigeria and since early 2020 have appeared as "Software Engineers at Microsoft." It seems like their priorities have shifted, but they're both still active online and provide their contact info on the website. Not sure if there's any point in reaching out to ask for an update (I've tagged them below), but they both seem very nice. :)

Developed and Maintained by Moses Olafenwa and John Olafenwa, brothers, creators of TorchFusion, Authors of Introduction to Deep Computer Vision and creators of DeepStack AI Server.

Moses Olafenwa
Email: [email protected]
Website: http://olafenwamoses.me
Twitter: @OlafenwaMoses
Medium: @guymodscientist
Facebook: moses.olafenwa
Github: @OlafenwaMoses

John Olafenwa
Email: [email protected]
Website: https://john.aicommons.science
Twitter: @johnolafenwa
Medium: @johnolafenwa
Facebook: olafenwajohn
Github: @johnolafenwa


githubDiversity avatar githubDiversity commented on September 26, 2024

Out of curiosity, can you guys please tell us why the delay?

Sure, we all have day jobs and that is a really valid argument, but I am not quite sure why uploading the source code to GitHub should be a challenge.

Please enlighten me, as I am here to learn, not b*tch at you.


johnolafenwa avatar johnolafenwa commented on September 26, 2024

Also, we are planning a lot more than just releasing the source code. We are making significant efforts to set up a stable dev community, proper documentation, and a strong ecosystem, similar to what exists for projects like Kubernetes.


Tinbum1 avatar Tinbum1 commented on September 26, 2024

Hi, any idea when you will be releasing the beta for the Pi? I just can't get the alpha to work; I must have installed it over 20 times on different machines with new SD cards. It reports the positions of objects back in the wrong place: if the object is at the top it moves it down, and if it's near the bottom it moves it up. Left and right are OK.

1DriveHouse 20200928_004514662 DriveHouse


balucanb avatar balucanb commented on September 26, 2024

@OlafenwaMoses Would deepquestai/deepstack:latest be for running this on Docker? I am using the Windows version, which is why I am asking. Thanks.


johnolafenwa avatar johnolafenwa commented on September 26, 2024

@balucanb Yes, this is for running on Docker. Note that we are switching to Docker only for all DeepStack editions; the Windows version has been discontinued. The Docker version runs on Windows as well.
Should you have any challenges running the docker version, please let us know.


balucanb avatar balucanb commented on September 26, 2024

Thanks! I assumed that was the answer. I have no clue how to use Docker; it looks very confusing to me! I am sure I will have questions. Will the current Windows version stop working completely, or is it just not being updated anymore? Thanks again.


johnolafenwa avatar johnolafenwa commented on September 26, 2024

@Tinbum1 In the previous versions of DeepStack, all requests returned 200 | Success. In this new version, we have improved error reporting: images that cannot be processed, possibly because a corrupt file or a non-image file was sent, will return 400 | Bad Request. I recognize this is a breaking change and existing integrations will need to take it into account.

Can you share the input you sent that returned this?

I am excited to hear about the speed increases being experienced. This has been a top priority for us.


Tinbum1 avatar Tinbum1 commented on September 26, 2024

@johnolafenwa Thanks for the reply. I'm afraid I only use AITool and only know a bit about computing, so I will have to find out how to do that.


Tinbum1 avatar Tinbum1 commented on September 26, 2024

This was one of the images.

GranaryAI 20201021_195624951 Granary

GranaryAI.20201021_195624951.Granary.zip


Tinbum1 avatar Tinbum1 commented on September 26, 2024

@classObject Well figured out.
I've just put some images in my input folder from this morning and they were processed without any problem.


aesterling avatar aesterling commented on September 26, 2024


The speed increase is excellent, so thank you @johnolafenwa!

But, I too am getting 400 Bad Request errors from Deepstack. Is that something that @VorlonCD can adjust AI Tool to handle? Or is there something else that needs to change?


johnolafenwa avatar johnolafenwa commented on September 26, 2024

Thanks @johnjoemorgan , this is great to know.
Based on feedback, and due to issues with using the GPU in Docker on Windows, we will release a native Windows version sometime in November.
When possible, though, I advise using Docker, as it makes it simple to run both on the edge and in the cloud.


NicholasBoccio avatar NicholasBoccio commented on September 26, 2024

I should add that I actually have the GPU version (currently stopped) on that Ubuntu box; it would run for a few seconds and then stop. Since I am new to Linux/Portainer/DeepStack, I don't know how to proceed with providing useful information to you, but I really am excited to let the GPU get some work in and hopefully improve results with better accuracy (assuming it will be more accurate).


Tinbum1 avatar Tinbum1 commented on September 26, 2024

@johnolafenwa Thank you, I'll be giving it a try as well, as I had similar problems to @NicholasBoccio with it running for a while and then stopping.


Tinbum1 avatar Tinbum1 commented on September 26, 2024

@johnolafenwa Can I check, should this GPU version run on Windows Docker Desktop?


NicholasBoccio avatar NicholasBoccio commented on September 26, 2024

Update on the CPU version:
The High/Low seem to be working now:
High:
https://imgur.com/ZL9ctb6
Low:
https://imgur.com/QWE1RLP

These are with me sending as many triggers as I can in BlueIris, the times are obviously better when the triggers are more natural.

Regarding the new GPU... I cannot get it to start the server after the install or upgrade. So now I no longer have a semi-working GPU version.

Ubuntu 20.04 LTS
Portainer 19.03.13
nVidia Quadro 620 / NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0

I will now try to go around Portainer, see if I can make it work, and report back.


Tinbum1 avatar Tinbum1 commented on September 26, 2024

I've tried installing the GPU version on 3 different computers in Windows Docker Desktop. I can see that DeepStack is activated in a web browser, but using AITool there is this error:

2020-10-23 20:43:13.623174|Debug|AITOOLS.EXE|IsValidImage|127.0.0.1:82|Gate Day|None| Image file is valid: 1Car1.20201023_204307855.Gate.jpg|239|2||28
2020-10-23 20:43:13.624171|Debug|AITOOLS.EXE|DetectObjects|127.0.0.1:82|Gate Day|1Car1.20201023_204307855.Gate.jpg| (1/6) Uploading a 1359798 byte image to DeepQuestAI Server at http://127.0.0.1:82/v1/vision/detection|240|1||28
2020-10-23 20:43:23.631247|Error|AITOOLS.EXE|DetectObjects|127.0.0.1:82|Gate Day|1Car1.20201023_204307855.Gate.jpg| A task was canceled. [TaskCanceledException] Mod: d__30 Line:990:48|241|1||29


johnolafenwa avatar johnolafenwa commented on September 26, 2024

@Tinbum1 , NVIDIA GPU access is not supported on Windows Docker Desktop, except via WSL.

We have plans on bringing the GPU version to docker via a native windows version or a DirectML based docker approach.

@NicholasBoccio How did you run the gpu version?
Typically you would start the gpu version with the command

docker run --gpus all -e VISION-DETECTION=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack:gpu

Note also that you need to have installed nvidia container toolkit. see https://python.deepstack.cc/using-deepstack-with-nvidia-gpus
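A sketch of what that toolkit install looked like on Ubuntu at the time, with repository URLs per the nvidia-docker project docs (package names and URLs may have changed since):

```shell
# Add the NVIDIA container toolkit repository for this Ubuntu release
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L "https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list" \
    | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

# Install the toolkit and restart Docker so the --gpus flag becomes available
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```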


Tinbum1 avatar Tinbum1 commented on September 26, 2024

I have WSL 2 enabled in Docker Desktop



Tinbum1 avatar Tinbum1 commented on September 26, 2024

@johnolafenwa Thank you for the instructions, I will give it a go this afternoon.


NicholasBoccio avatar NicholasBoccio commented on September 26, 2024

I have tried installing with both the Ubuntu 18 & 20 Windows apps and the WSL 2 setup. I keep getting this error:

nicholasboccio@System:~$ sudo docker run --gpus all -e VISION-DETECTION=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack:gpu
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
ERRO[0001] error waiting for container: context canceled


balucanb avatar balucanb commented on September 26, 2024

Thanks! Can't wait to try them out!


aesterling avatar aesterling commented on September 26, 2024

@VorlonCD Do the face API speed improvements that @johnolafenwa mentioned in this latest version of deepstack benefit us AI Tool users at all? I'm not aware of any features specifically for "faces." Thanks!


petermai6655 avatar petermai6655 commented on September 26, 2024

Would be interesting to see face detection (and maybe even license plate detection?) on AITool.


aesterling avatar aesterling commented on September 26, 2024

Would be interesting to see face detection (and maybe even license plate detection?) on AITool.

License plate detection and reading (OCR) of the plate would be an incredible addition. There are other tools that offer it, but having it here as an all-in-one solution would be great.

@johnolafenwa is that feature outside the scope of Deepstack, or is it something you would consider?


wilddoktor avatar wilddoktor commented on September 26, 2024

This morning I spun up a brand-new Photon VM on my ESXi host.
docker run -d -p 80:80 vmwarecna/nginx - the web server was installed and started.
docker run deepquestai/deepstack:latest - got DeepStack installed.
docker run -e VISION-DETECTION=True -v localstorage:/datastore -p 80:5000 deepquestai/deepstack:cpu-x6-beta - I found that if the first line of output after this isn't "/v1/vision/detection", then I need to stop and restart the above command 3 to 5 times until that line shows up.
Once it finally shows up that way and AI Tool can talk to it, I'm getting pretty poor times:
1
The Photon VM has 4 logical CPUs and 2048 MB of RAM; is there a way to speed up the processing?
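On raw speed: DeepStack's documentation describes a MODE environment variable (High / Medium / Low) that trades accuracy for latency, which may help on a small CPU-only VM. A sketch reusing the command above:

```shell
# Sketch: MODE=Low uses a lighter model and should cut per-image time
# noticeably on a CPU-only host, at some cost in accuracy. Port and
# volume names match the command quoted in the comment above.
docker run -e VISION-DETECTION=True -e MODE=Low \
    -v localstorage:/datastore -p 80:5000 \
    deepquestai/deepstack:cpu-x6-beta
```

Giving the VM more vCPUs also tends to help, since CPU inference parallelises across cores.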


wilddoktor avatar wilddoktor commented on September 26, 2024

I know this isn't a DeepStack support thread, but... does anybody know how to get back to the above view? I added the --restart always flag to the Docker container last time I ran it, and now my VM just sits at a command prompt with no DeepStack output showing. :-)
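Not DeepStack-specific, but a container started detached (or with --restart always) still writes its console output to Docker's log stream, so the old verbose view can be reattached without restarting anything:

```shell
# Re-attach to a running container's output; the container name is an example.
docker ps                     # list running containers, note the NAMES column
docker logs -f deepstack      # -f follows the log live, like the original console
# Ctrl+C detaches from the log stream without stopping the container.
```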


NicholasBoccio avatar NicholasBoccio commented on September 26, 2024

I am new to Ubuntu/Linux, so I would also like to know this!


ncrispi avatar ncrispi commented on September 26, 2024

Hi @johnolafenwa - I am one of the AI Tool users as well, and I'm noticing that in ideal lighting conditions the photos identify people 90% of the time; at night or in darker conditions, however, DeepStack is hardly able to identify people. (I can identify the people myself by looking at the photos, but DeepStack cannot.) Is there some way we can help improve the program's detection in these cases? I have also heard about night-time issues from some other folks.


VorlonCD avatar VorlonCD commented on September 26, 2024

@ncrispi @johnolafenwa - agreed, night image detection has been bad for all my cameras. I pretty much have to stand in front of the camera at night with a sign saying "I am a person" :) One camera is 4K with infrared and the other is 1080p with reasonable ambient light, like from street lights. I'm assuming the dataset used was trained mostly on daylight images.

@classObject - I think you were experimenting with tweaking brightness/contrast on the images? Does that help much? Like, could we apply a set value to camera images from dusk till dawn? (great movie)


johnolafenwa avatar johnolafenwa commented on September 26, 2024

Hello @ncrispi @VorlonCD Thanks for reporting this. Can you share any sample images so we can reproduce this on our end and work on a solution?


ncrispi avatar ncrispi commented on September 26, 2024

@johnolafenwa - I deleted most of the older ones, but here are a few:

https://imgur.com/YLsz2Dp
https://imgur.com/PqE39kq
https://imgur.com/GvwWAg2
https://imgur.com/rMWl6un

I'll keep reviewing the logs and find any additional night time photos that do not pick up people.


rhatguy avatar rhatguy commented on September 26, 2024

Is there any way we could use images from our cameras and somehow tag them in our local deepstack instances to help deepstack perform better on our specific cameras?


petermai6655 avatar petermai6655 commented on September 26, 2024

Here are some of my images of false detections at night. A lot of the good examples I had were deleted, but one of my cameras developed some spiderwebs yesterday, which also caused false detections. I sometimes see DeepStack detecting a tree or pole in the distance as a person.

https://user-images.githubusercontent.com/28712950/98568653-f82b3a80-2276-11eb-89a3-3df0a91f4b9e.jpg
https://user-images.githubusercontent.com/28712950/98568657-f8c3d100-2276-11eb-9bb3-1faaf8e8c01d.jpg
https://user-images.githubusercontent.com/28712950/98568659-f8c3d100-2276-11eb-88a0-91e72f7a9ca7.jpg
https://user-images.githubusercontent.com/28712950/98568662-f8c3d100-2276-11eb-9d75-55692a0b6e59.jpg
https://user-images.githubusercontent.com/28712950/98571657-6291aa00-227a-11eb-8f24-dc4244d34d3f.jpg


ncrispi avatar ncrispi commented on September 26, 2024

@johnolafenwa

Here are a few more images from tonight where people were not detected.
https://imgur.com/WLyO4Bw
https://imgur.com/ivbpVDs
https://imgur.com/YfbdJ5R


petermai6655 avatar petermai6655 commented on September 26, 2024

I'm wondering if it's currently possible to train DeepStack on our own, as that might allow for better detection in darker conditions? How about training faces?


Tinbum1 avatar Tinbum1 commented on September 26, 2024

File attached where DeepStack failed to spot a person at night.

1Gate.20201128_211540398.Gate.zip


doudar avatar doudar commented on September 26, 2024

@johnolafenwa - I have a couple of false detections here if it helps. Mostly it's working great!

IMG_6864
IMG_6865
IMG_6866

This is with the deepstack:latest - any updates on your progress? Thanks for your awesome contribution!


OlafenwaMoses avatar OlafenwaMoses commented on September 26, 2024

You will notice that most of the time, the confidence of the false positives is much lower (< 70%) than that of accurate detections.

Setting a threshold/minimum confidence for the detection is a way of dealing with false positives.
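As a client-side sketch of that advice: the /v1/vision/detection response is JSON with a predictions list, each entry carrying a label and a confidence, so applying your own minimum confidence is only a few lines. The sample response below is made up for illustration, not from a real camera:

```python
# Sketch: filter DeepStack detection predictions by a minimum confidence.
# The response shape (success flag, predictions with label/confidence/box
# coordinates) follows the documented /v1/vision/detection JSON; the
# values in "sample" are invented for illustration.

def filter_predictions(response, min_confidence=0.7, labels=("person",)):
    """Keep only predictions at/above min_confidence for the wanted labels."""
    return [
        p for p in response.get("predictions", [])
        if p["confidence"] >= min_confidence and p["label"] in labels
    ]

sample = {
    "success": True,
    "predictions": [
        {"label": "person", "confidence": 0.91, "x_min": 10, "y_min": 20, "x_max": 110, "y_max": 220},
        {"label": "person", "confidence": 0.42, "x_min": 300, "y_min": 15, "x_max": 340, "y_max": 90},
        {"label": "dog", "confidence": 0.85, "x_min": 50, "y_min": 200, "x_max": 150, "y_max": 260},
    ],
}

kept = filter_predictions(sample, min_confidence=0.7)
print([p["confidence"] for p in kept])  # only the high-confidence person survives
```

The same idea is what AI Tool's per-camera confidence-limit settings implement; a low-confidence "person" at night is exactly the case where this cutoff needs tuning.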


doudar avatar doudar commented on September 26, 2024

You will notice that most of the time, the confidence of the false positives is much lower (< 70%) than that of accurate detections.

Setting a threshold/minimum confidence for the detection is a way of dealing with false positives.

I agree. However, with the latest DeepStack most of my correct detections fall in the 50%-75% range, so I've had to lower the threshold to 50%.


Yonny24 avatar Yonny24 commented on September 26, 2024

I've been creating static masks when this happens. Are you using the fork of the AI tool?


doudar avatar doudar commented on September 26, 2024

I've been creating static masks when this happens. Are you using the fork of the AI tool?

Yes, I'm able to workaround the issue, just providing information to further the AI development.


Yonny24 avatar Yonny24 commented on September 26, 2024

Yes, just for 3 days now, and I'm still playing around with the settings as I'm trying to capture the dog and cat as well.
I haven't had much success at night yet, where the neighbour's cat jumps over the wall in the same place to do its business. The camera is quite far away, so I've got the BI motion settings on maximum sensitivity and keep tweaking them. BI triggers for the cat, but the AI tool hasn't picked it up in the dark yet. We'll see again tonight.
(Node-RED triggers a sprinkler on it! lol. Trying to stop the cat using our nice lawn as a dumping ground.)

I've reduced the pixel movement to 25 now. It's constant tweaking for your specific environment.
I've still got the trial BI version; I'll highly likely pay for it. I've exhausted all other free options.


balucanb avatar balucanb commented on September 26, 2024

@johnolafenwa or anyone! I'm having issues trying to get the custom model working - I'm following the video but it's not working for me; see the attached screenshot. My Docker experience is a whopping one week, so that is most likely the issue, LOL. I had/have a Docker image running on Docker Desktop; I assume I didn't need to stop/delete that before I started, if that helps at all. TIA for any help
custom model problem


Yonny24 avatar Yonny24 commented on September 26, 2024

Trying to use Colab and go through the steps. Getting stuck at this:
It looks like the zip is missing classes.txt, but I just checked and it is there.

!python3 train.py --dataset-path "/content/Dataset"

Traceback (most recent call last):
  File "train.py", line 466, in <module>
    with open(classes_file,"r") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/content/Dataset/classes.txt'


Yonny24 avatar Yonny24 commented on September 26, 2024

Got it

!python3 train.py --dataset-path "/content/Dataset/Dataset"

Now how do I update my container, but with Portainer? Do I add a new volume or an environment variable?
The documentation and video are not clear.

sudo docker run -v /path-to/my-models:/modelstore/detection -p 80:5000 deepquestai/deepstack

Does this container replace the vision-detection DeepStack? Sorry, it's not too clear.


johnolafenwa avatar johnolafenwa commented on September 26, 2024

Hello @balucanb , I see you ran deepquestai/deepstack:cpu 2020.12; note that there should be a dash between cpu and 2020, so it should be cpu-2020.12.

I believe the space in between is causing the error.

@Yonny24 , you need to add a new volume mapping that maps your model directory to the /modelstore/detection directory in Docker; you can enable both your custom model and the built-in vision detection in DeepStack.
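To make the volume mapping concrete, here is a sketch of a single container serving both the built-in detector and a custom model, assuming the documented layout where each .pt file placed in /modelstore/detection is exposed as its own endpoint (the host path and port below are examples; substitute your own):

```shell
# Sketch: one container, built-in detection plus custom models.
# The host path /d/my-models is an example; point it at the folder
# holding the .pt file produced by training.
docker run -d \
    -e VISION-DETECTION=True \
    -v /d/my-models:/modelstore/detection \
    -p 8383:5000 deepquestai/deepstack

# Built-in objects:  POST http://localhost:8383/v1/vision/detection
# Custom model:      POST http://localhost:8383/v1/vision/custom/<model-name>
#   where <model-name> is the .pt filename without the extension.
```

In Portainer this corresponds to adding one extra bind-mount volume to the existing container definition and redeploying; no second container or port is required.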


Yonny24 avatar Yonny24 commented on September 26, 2024

Perfect, thanks. So I can use my original DeepStack container and just redeploy it once I've added the new volume in settings?
I'm also rather confused watching the video, as it uses Google Colab's CPU. I'll be using my local CPU and not the Colab cloud service, as I understand it. Is that feasible, or have I got the wrong end of the stick? I used labelling to tag a specific animal that the AI often misses at night-time, so the idea was to train it to catch that object's movement more reliably.


balucanb avatar balucanb commented on September 26, 2024

@johnolafenwa WOW. This is why I never learned to code! LOL. Thanks so much, John, I will try that. Question: given that my first (and only) intro to Docker, coding, etc. has been this project - I have a vision-detection model running in Docker Desktop right now on port 8383 (working fine). Can I run that one and the custom detection model at the same time on the same port, do I need a different port for the new one, or can I only run one model at a time? I just read your reply to @Yonny24; I think they are describing the same thing I am asking... TIA!


balucanb avatar balucanb commented on September 26, 2024

@Yonny24 you appear to have the same question as me in reference to running two different models at the same time. I am running this with Docker Desktop - can you explain how to do what @johnolafenwa is telling us to do? Sorry if it is a simple task, but I am brand new to using Docker. TIA


Yonny24 avatar Yonny24 commented on September 26, 2024

@Yonny24 you appear to have the same question as me in ref. to running 2 different models at the same time, I am running this with Docker desktop can you explain how to do what @johnolafenwa is telling us to do. Sorry if it is a simple task but I am brand new to using Docker. TIA

Yep, I have DeepStack (vision detection) running on Docker for Windows right now, and I do not want to break it. I think I understand that vision detection and custom-model detection can work on the same DeepStack port in tandem (same container, etc., just with an extra element to detect more objects that are not standard)?

I just need to add an additional volume to the current container? I can probably achieve this with Portainer under Volumes and redeploy, but I'm cautious.
I'm also a bit confused by the instructions still using Google Colab; I thought we were using our own local CPU. Maybe Colab (new to me) was just an example.


balucanb avatar balucanb commented on September 26, 2024

@Yonny24 you appear to have the same question as me in ref. to running 2 different models at the same time, I am running this with Docker desktop can you explain how to do what @johnolafenwa is telling us to do. Sorry if it is a simple task but I am brand new to using Docker. TIA

Yep I have deepstack (vision detection) running windows for docker right now. I do not want to break this. I think I understand that vision detection and custom model detection can work on the same deepstack port in tandem (same container etc just with an extra element to customize more objects that are not standard)?

Just need to add an additional volume to the current container? I can probably achieve this with portainer under Volumes and redeploy but am cautious.
Also bit confused by the instructions using colab google still. Thought we were using our own local cpu. Maybe the colab (new to me) was just an example.

Confused is the key word for me! Basically, I am in the same position - I have a working copy and do not want to mess it up; it took me too long to get it working because of my lack of knowledge/skill with this. Like you, I am stuck on getting the custom model I have trained to work, and I don't understand how to make this new volume. I am trying to read the Docker docs now, but it is all Greek to me. You mentioned Portainer; I have heard of it but don't really know what it is. Is it part of Docker, or some add-on where you can put/make new containers (I think that is the correct wording)? I assume they are run locally?


Yonny24 avatar Yonny24 commented on September 26, 2024

Would the volume be created like this in Portainer on the DeepStack container?
Bound to the D drive where the training images were created using labelling?

image


Yonny24 avatar Yonny24 commented on September 26, 2024

What is this step also?

image

I appear to be making some progress running the training after labelling various snapshots. Not entirely sure what it's doing. :)

image

