
stashplugins's Introduction

StashPlugins

A collection of Python plugins for stash

Minimum stash version: v0.4.0-71

Currently available plugins:

| Plugin config | Description | Notes |
| --- | --- | --- |
| set_ph_urls.yml | Add URLs to Pornhub scenes downloaded by youtube-dl | |
| gallerytags.yml | Copy information from the attached scene to galleries | |
| bulk_url_scraper.yml | Bulk scene and gallery URL scraping | The config (/py_plugins/config.py) has to be edited manually until plugin parameters are implemented |
| update_image_titles.yml | Update all image titles (fixes natural sort) | |
| yt-dl_downloader.yml | Download videos automatically with youtube-dl and add the scrape tag for bulk_url_scraper | Config files are in the yt-dl_downloader/ folder. Add all URLs line by line to urls.txt and change the download dir in config.ini |

Download instructions:

Drop the py_plugins folder, as well as all desired plugin configurations, into stash's plugin folder, then press the Reload plugins button in the Plugin settings.

All plugins require Python 3 as well as the requests module, which can be installed with the command pip install requests. If your Python installation requires calling Python via python3, you have to change python to python3 in the exec block of each plugin config.
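For reference, here is what such an exec block change looks like; the keys, plugin name, and script path below are illustrative, not copied from a specific config in this repo:

```yaml
# Illustrative plugin config snippet; the actual .yml files
# in this repo may use different names and paths.
name: Bulk URL Scraper
exec:
  - python3   # change from "python" if your system invokes Python 3 this way
  - "{pluginDir}/py_plugins/bulk_url_scraper.py"
interface: raw
```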

Docker instructions:

To use the plugins with a stash instance running in a (remote) Docker container, Python must be installed inside it:

  • Open a shell in the docker container: docker exec -it <container-id> sh (get the container id from docker ps -a)
  • In the container execute the following commands:
    apt update
    apt install python3
    apt install python3-pip
    pip3 install requests
    
  • If you want to use the yt-dl_downloader plugin, you also have to run the following commands:
    pip3 install youtube_dl
    pip3 install configparser
    pip3 install pathlib
  • Leave the container via Ctrl+P,Ctrl+Q
  • Drop the py_plugins folder as well as all desired plugin configurations into stash's plugin folder, located at config/plugins. Create the plugins folder if it does not already exist
  • Change python to python3 in the plugin configuration (.yml) files
  • Press the Reload plugins button in stash's plugin settings
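The in-container steps above can be condensed into a single command run from the host. This assumes a Debian-based image with apt available (see the "apt not installed" issue below for Alpine-based images); replace <container-id> with the id from docker ps -a:

```shell
# One-shot setup from the host; assumes apt exists in the image.
docker exec -it <container-id> sh -c \
  "apt update && apt install -y python3 python3-pip && pip3 install requests"
```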

stashplugins's People

Contributors

darkfighterluke, niemands, optinux, y0ngg4n


stashplugins's Issues

No handler for incorrect URLs during Bulk Media Scraping

In the bulk scraping function, when a scene's URL scrape returns null, the script assumes the cause is a missing scraper, when in fact it could be due to several reasons, such as an incorrect link. Because the subroutine adds the URL's netloc to a blacklist, the script assumes that if one URL for a site doesn't work, no URL for that site will work. For example, if you have 20 scenes from site abc.abc tagged for scraping and the 5th scene has an incorrect link, the first 4 scenes are scraped successfully but scenes 6-20 are simply skipped because they share the netloc of the bad link. This could be fixed by removing the portion of the code that adds the netloc and instead adding the whole URL to missing_scrapers. It could also be helpful to output this list to a file for informational reasons. However, that change would remove the protection against missing scrapers; perhaps the stash interface could be queried for loaded scrapers to build a whitelist at the beginning of the script?
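A minimal sketch of the change this issue proposes (function and variable names are hypothetical, not the plugin's actual code): record only the individual failing URL instead of blacklisting the whole netloc, so other scenes from the same site are still attempted.

```python
def scrape_all(scenes, scrape_url):
    """Try every scene URL; collect individual failures instead of
    blacklisting an entire site after the first failed scrape."""
    failed_urls = []  # replaces the netloc blacklist
    for scene in scenes:
        url = scene["url"]
        result = scrape_url(url)  # returns None on failure
        if result is None:
            # Record just this URL; later scenes with the same
            # netloc still get their own attempt.
            failed_urls.append(url)
            continue
        scene.update(result)
    return failed_urls

# Example: the 2nd URL fails, but the 3rd (same netloc) is still tried.
scenes = [{"url": "https://abc.abc/a"}, {"url": "https://abc.abc/bad"},
          {"url": "https://abc.abc/c"}]
fake = lambda u: None if "bad" in u else {"title": u.rsplit("/", 1)[-1]}
failed = scrape_all(scenes, fake)
print(failed)                  # ['https://abc.abc/bad']
print(scenes[2].get("title"))  # 'c'
```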

[Plugin / Youtube-dl Downloader] TypeError: expected str, bytes or os.PathLike object, not bool

Hello. Not sure what's going on here.

Python 3.10.5
Stash v0.16.0

The plugin crashes when clicking [Download Video]. This is in the logs:

Plugin returned error: exit status 1
[Plugin / Youtube-dl Downloader] TypeError: expected str, bytes or os.PathLike object, not bool
[Plugin / Youtube-dl Downloader]     path = os.fspath(path)
[Plugin / Youtube-dl Downloader]   File "C:\Python\Python310\lib\ntpath.py", line 293, in expanduser
[Plugin / Youtube-dl Downloader]     return os.path.expandvars(compat_expanduser(s))
[Plugin / Youtube-dl Downloader]   File "C:\Python\Python310\lib\site-packages\youtube_dl\utils.py", line 2163, in expand_path
[Plugin / Youtube-dl Downloader]     opts_cookiefile = expand_path(opts_cookiefile)
[Plugin / Youtube-dl Downloader]   File "C:\Python\Python310\lib\site-packages\youtube_dl\YoutubeDL.py", line 2376, in _setup_opener
[Plugin / Youtube-dl Downloader]     self._setup_opener()
[Plugin / Youtube-dl Downloader]   File "C:\Python\Python310\lib\site-packages\youtube_dl\YoutubeDL.py", line 422, in __init__
[Plugin / Youtube-dl Downloader]     ydl = youtube_dl.YoutubeDL({
[Plugin / Youtube-dl Downloader]   File "C:\stash\plugins\py_plugins\yt-dl_downloader.py", line 166, in download
[Plugin / Youtube-dl Downloader]     download(url.strip(), downloaded)
[Plugin / Youtube-dl Downloader]   File "C:\stash\plugins\py_plugins\yt-dl_downloader.py", line 134, in read_urls_and_download
[Plugin / Youtube-dl Downloader]     read_urls_and_download(client)
[Plugin / Youtube-dl Downloader]   File "C:\stash\plugins\py_plugins\yt-dl_downloader.py", line 39, in run
[Plugin / Youtube-dl Downloader]     run(json_input, output)
[Plugin / Youtube-dl Downloader]   File "C:\stash\plugins\py_plugins\yt-dl_downloader.py", line 22, in main
[Plugin / Youtube-dl Downloader]     main()
[Plugin / Youtube-dl Downloader]   File "C:\stash\plugins\py_plugins\yt-dl_downloader.py", line 241, in <module>
[Plugin / Youtube-dl Downloader] Traceback (most recent call last):

Some new ph URLs don't contain the prefix "ph"

I have a few videos downloaded using youtube-dl in which the id doesn't contain a "ph" prefix, and upon checking, neither does the original link. Adding the "ph" brings me to a 404, so even manually adding "ph" to the file would most likely make the matching fail. Any chance the plugin could be updated to reflect this change?
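One hedged way to handle both id formats (the pattern and URL construction here are assumptions for illustration; the plugin's actual regex may differ) is to treat the "ph" prefix as optional and keep the id exactly as downloaded:

```python
import re

# Old-style downloads look like "ph" + 13 hex characters; some newer
# ones drop the prefix. Making it optional matches both. The character
# class and length are assumptions, not the plugin's real pattern.
VIEWKEY_RE = re.compile(r"^(ph)?([0-9a-f]{13})\.mp4$", re.IGNORECASE)

def extract_url(filename):
    """Return a viewkey URL for a recognized filename, else None."""
    m = VIEWKEY_RE.match(filename)
    if not m:
        return None
    prefix, key = m.group(1) or "", m.group(2)
    # Never prepend "ph" to a new-style id: as noted above, that 404s.
    return "https://www.pornhub.com/view_video.php?viewkey=" + prefix + key

print(extract_url("ph5f0d1c2b3a498.mp4"))  # prefixed id
print(extract_url("5f0d1c2b3a498.mp4"))    # unprefixed id
```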

Bulk scrape by fragment

It would be great if the bulk scraper could scrape by fragment and not just URL. I've got a lot of scenes that have only a title and no URL, and scraping them manually is a tedious task.

Crash: "No connection could be made because the target machine actively refused it"

Getting this error when attempting to tag.

Details:

ERRO[2021-06-11 20:27:52] [Plugin] Traceback (most recent call last):
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connection.py", line 169, in _new_conn
ERRO[2021-06-11 20:27:52] [Plugin] conn = connection.create_connection(
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\util\connection.py", line 96, in create_connection
ERRO[2021-06-11 20:27:52] [Plugin] raise err
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\util\connection.py", line 86, in create_connection
ERRO[2021-06-11 20:27:52] [Plugin] sock.connect(sa)
ERRO[2021-06-11 20:27:52] [Plugin] ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it

ERRO[2021-06-11 20:27:52] [Plugin] During handling of the above exception, another exception occurred:
ERRO[2021-06-11 20:27:52] [Plugin] Traceback (most recent call last):
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connectionpool.py", line 699, in urlopen
ERRO[2021-06-11 20:27:52] [Plugin] httplib_response = self._make_request(
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connectionpool.py", line 394, in _make_request
ERRO[2021-06-11 20:27:52] [Plugin] conn.request(method, url, **httplib_request_kw)
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connection.py", line 234, in request
ERRO[2021-06-11 20:27:52] [Plugin] super(HTTPConnection, self).request(method, url, body=body, headers=headers)
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\http\client.py", line 1253, in request
ERRO[2021-06-11 20:27:52] [Plugin] self._send_request(method, url, body, headers, encode_chunked)
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\http\client.py", line 1299, in _send_request
ERRO[2021-06-11 20:27:52] [Plugin] self.endheaders(body, encode_chunked=encode_chunked)
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\http\client.py", line 1248, in endheaders
ERRO[2021-06-11 20:27:52] [Plugin] self._send_output(message_body, encode_chunked=encode_chunked)
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\http\client.py", line 1008, in _send_output
ERRO[2021-06-11 20:27:52] [Plugin] self.send(msg)
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\http\client.py", line 948, in send
ERRO[2021-06-11 20:27:52] [Plugin] self.connect()
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connection.py", line 200, in connect
ERRO[2021-06-11 20:27:52] [Plugin] conn = self._new_conn()
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connection.py", line 181, in _new_conn
ERRO[2021-06-11 20:27:52] [Plugin] raise NewConnectionError(
ERRO[2021-06-11 20:27:52] [Plugin] urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x000001CE900CFAF0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it

ERRO[2021-06-11 20:27:52] [Plugin] During handling of the above exception, another exception occurred:
ERRO[2021-06-11 20:27:52] [Plugin] Traceback (most recent call last):
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\adapters.py", line 439, in send
ERRO[2021-06-11 20:27:52] [Plugin] resp = conn.urlopen(
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\connectionpool.py", line 755, in urlopen
ERRO[2021-06-11 20:27:52] [Plugin] retries = retries.increment(
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\site-packages\urllib3\util\retry.py", line 574, in increment
ERRO[2021-06-11 20:27:52] [Plugin] raise MaxRetryError(_pool, url, error or ResponseError(cause))
ERRO[2021-06-11 20:27:52] [Plugin] urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=6969): Max retries exceeded with url: /graphql (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001CE900CFAF0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
ERRO[2021-06-11 20:27:52] [Plugin] During handling of the above exception, another exception occurred:
ERRO[2021-06-11 20:27:52] [Plugin] Traceback (most recent call last):
ERRO[2021-06-11 20:27:52] [Plugin] File "m:\stash\plugins\py_plugins\bulk_url_scraper.py", line 212, in
ERRO[2021-06-11 20:27:52] [Plugin] main()
ERRO[2021-06-11 20:27:52] [Plugin] File "m:\stash\plugins\py_plugins\bulk_url_scraper.py", line 18, in main
ERRO[2021-06-11 20:27:52] [Plugin] run(json_input, output)
ERRO[2021-06-11 20:27:52] [Plugin] File "m:\stash\plugins\py_plugins\bulk_url_scraper.py", line 47, in run
ERRO[2021-06-11 20:27:52] [Plugin] add_tag(client)
ERRO[2021-06-11 20:27:52] [Plugin] File "m:\stash\plugins\py_plugins\bulk_url_scraper.py", line 192, in add_tag
ERRO[2021-06-11 20:27:52] [Plugin] tag_id = client.findTagIdWithName(control_tag)
ERRO[2021-06-11 20:27:52] [Plugin] File "m:\stash\plugins\py_plugins\stash_interface.py", line 95, in findTagIdWithName
ERRO[2021-06-11 20:27:52] [Plugin] result = self.__callGraphQL(query)
ERRO[2021-06-11 20:27:52] [Plugin] File "m:\stash\plugins\py_plugins\stash_interface.py", line 39, in __callGraphQL
ERRO[2021-06-11 20:27:52] [Plugin] response = requests.post(self.url, json=json, headers=self.headers, cookies=self.cookies)
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\api.py", line 119, in post
ERRO[2021-06-11 20:27:52] [Plugin] return request('post', url, data=data, json=json, **kwargs)
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\api.py", line 61, in request
ERRO[2021-06-11 20:27:52] [Plugin] return session.request(method=method, url=url, **kwargs)
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\sessions.py", line 542, in request
ERRO[2021-06-11 20:27:52] [Plugin] resp = self.send(prep, **send_kwargs)
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\sessions.py", line 655, in send
ERRO[2021-06-11 20:27:52] [Plugin] r = adapter.send(request, **kwargs)
ERRO[2021-06-11 20:27:52] [Plugin] File "C:\Users\softl\AppData\Local\Programs\Python\Python39\lib\site-packages\requests\adapters.py", line 516, in send
ERRO[2021-06-11 20:27:52] [Plugin] raise ConnectionError(e, request=request)
ERRO[2021-06-11 20:27:52] [Plugin] requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=6969): Max retries exceeded with url: /graphql (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001CE900CFAF0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
ERRO[2021-06-11 20:27:52] Plugin returned error: exit status 1

apt not installed in official Image

Using the docker instructions provided (and the official stashapp/stash:development docker hub image), attempting to run the apt commands to install python as required fails:

root@Jupiter:~# docker exec -it fa3efc0e3779 sh
/ # apt update
sh: apt: not found
/ #
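The "apt: not found" output suggests the official image is Alpine-based, which ships apk instead of apt. A sketch of the equivalent setup (package names assumed from standard Alpine naming):

```shell
# Inside the container shell of an Alpine-based image:
apk update
apk add python3 py3-pip
pip3 install requests
```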

yt-dl config

It's probably just something simple I'm doing wrong, but I can't figure out the config for youtube-dl.

I'm trying to set a proxy and a download archive for instance.

I'm not sure if the proxy is right with simply proxy IP-ADDRESS:PORT

It fails completely when I tried to add a download archive.

Thanks.

[Update Image Titles] StashInterface not found?

Probably a dumb issue, but I'm attempting to use the Update Image Titles plugin and I keep getting this error. Did I miss something in the installation?

2023-12-10 21:19:05 Error Plugin returned error: exit status 1
2023-12-10 21:19:05 Error [Plugin / Update Image Titles] from stash_interface import StashInterface
2023-12-10 21:19:05 Error [Plugin / Update Image Titles] ModuleNotFoundError: No module named 'stash_interface'
2023-12-10 21:19:05 Error [Plugin / Update Image Titles] File "/root/.stash/plugins/update_image_titles.py", line 6, in
2023-12-10 21:19:05 Error [Plugin / Update Image Titles] Traceback (most recent call last):
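The traceback suggests the plugin script can't find stash_interface.py, which this repo ships inside the py_plugins folder; the scripts expect to live next to it. A hypothetical sketch of making the module's location explicit (the path below is an assumption based on the log above, not something the plugin does):

```python
import os
import sys

# Hypothetical path derived from the log above; adjust to wherever the
# repo's py_plugins folder actually lives. stash_interface.py must be
# in the same folder as the plugin script (or on sys.path) for
# "from stash_interface import StashInterface" to resolve.
PLUGIN_DIR = os.path.join(os.path.expanduser("~"), ".stash", "plugins", "py_plugins")
if PLUGIN_DIR not in sys.path:
    sys.path.insert(0, PLUGIN_DIR)

print(PLUGIN_DIR in sys.path)  # True
```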

set_ph_urls.py always fails to find files

Fresh Stash install.

To replicate:
Added one single video by URL and one channel download to the urls.txt
Ran the Download portion of the plug-in
Ran the Tag Downloads - populated the tags on the single URL video.
Ran the set_ph_urls - No updates to the files, logs show zero files found.

Log entries:

Debug
Plugin returned: ok

Info
[Plugin / Set PH Urls] Set urls for 0 scene(s)

Debug
[Plugin / Set PH Urls] Regex found a total of 0 scene(s)

Debug
[Plugin / Set PH Urls] Regex found 0 scene(s) on page 1

Debug
[Plugin / Set PH Urls] Using stash GraphQl endpoint at http://localhost:9999/graphql

Confirmed the filenames pulled should pass the regex (phXXXXXXXXXXXXX.mp4 titles). Nothing was modified in the scripts apart from setting the download location and the python3 change noted in the readme. Tested a few options in the yt-dl config.ini with no luck either (populating the performer info, URLs for the channel dump, etc.).
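A quick way to sanity-check locally whether the titles would pass a ph-style pattern (the regex below is an assumption about the plugin's check, not its actual code):

```python
import re

# Assumed shape of the plugin's title check: "ph" followed by
# 13 hex characters. Run your actual scene titles through it.
PH_RE = re.compile(r"ph[0-9a-f]{13}", re.IGNORECASE)

titles = ["ph5f0d1c2b3a498.mp4", "My Renamed Clip.mp4"]
matches = [t for t in titles if PH_RE.search(t)]
print(matches)  # only the first title matches
```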

[Request] Merge tags on batch scrape url

I like the idea of batch scraping URLs for metadata; however, in my opinion, there needs to be a merge option (at least for tags) rather than strictly a replace. I've found that for some studios the source tags are really good, while for others stashdb has better tags that the original didn't. So merge really is the only way to preserve that type of info.
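The merge this request describes could look like the following sketch (pure illustration, not plugin code): union the scraped tags with the scene's existing tags instead of overwriting them.

```python
def merge_tags(existing, scraped):
    """Union of two tag lists: keeps existing tags and their order,
    then appends scraped tags not already present. Replaces a
    destructive overwrite of the scene's tag list."""
    seen = set(existing)
    merged = list(existing)
    for tag in scraped:
        if tag not in seen:
            seen.add(tag)
            merged.append(tag)
    return merged

print(merge_tags(["outdoor", "hd"], ["hd", "pov"]))  # ['outdoor', 'hd', 'pov']
```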

Plugins don't show up

Hi,

I downloaded the plugins and copied them to the given location, added URLs and download path as per instructions.
However, after reloading the plugins in Stash, nothing shows up in the task section.
I was especially looking for the yt-dl plugin.

Did I miss something here?
Thanks!

Movie Titles

Hi, some scrapers (like Jules Jordan) include movie titles.
It would be nice if they could also be saved.

batch scrape movies

#18 is working great, but now I have hundreds of movies without a cover and meta info.

If possible, add a batch movie scraper, maybe something like this:

if a movie has a URL, but no poster => scrape
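The rule proposed above ("URL but no poster => scrape") is a simple filter; a sketch of the selection logic with hypothetical field names:

```python
def movies_to_scrape(movies):
    """Select movies that have a URL but no poster, per the rule
    proposed above. Field names here are hypothetical, not the
    actual stash schema."""
    return [m for m in movies
            if m.get("url") and not m.get("front_image_path")]

movies = [
    {"name": "A", "url": "https://example.com/a", "front_image_path": None},
    {"name": "B", "url": None, "front_image_path": None},
    {"name": "C", "url": "https://example.com/c", "front_image_path": "/img/c.jpg"},
]
print([m["name"] for m in movies_to_scrape(movies)])  # ['A']
```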
