initstring / cloud_enum
Multi-cloud OSINT tool. Enumerate public resources in AWS, Azure, and Google Cloud.
License: MIT License
First of all, very nice tool.
I used the commonspeak subdomain wordlist to test it, but it breaks when run:
Mutations: /home/sanath/tools/juicy/cloud_enum/enum_tools/fuzz.txt
Brute-list: /home/sanath/tools/juicy/cloud_enum/enum_tools/fuzz.txt
[+] Mutations list imported: 484943 items
[+] Mutated results: 2909659 items
++++++++++++++++++++++++++
amazon checks
++++++++++++++++++++++++++
[+] Checking for S3 buckets
Traceback (most recent call last):
File "/usr/lib/python3.8/encodings/idna.py", line 165, in encode
raise UnicodeError("label empty or too long")
UnicodeError: label empty or too long
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "cloud_enum.py", line 243, in <module>
main()
File "cloud_enum.py", line 228, in main
aws_checks.run_all(names, args)
File "/home/sanath/tools/juicy/cloud_enum/enum_tools/aws_checks.py", line 130, in run_all
check_s3_buckets(names, args.threads)
File "/home/sanath/tools/juicy/cloud_enum/enum_tools/aws_checks.py", line 84, in check_s3_buckets
utils.get_url_batch(candidates, use_ssl=False,
File "/home/sanath/tools/juicy/cloud_enum/enum_tools/utils.py", line 81, in get_url_batch
batch_results[url] = batch_pending[url].result(timeout=30)
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 432, in result
return self.__get_result()
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
raise self._exception
File "/usr/lib/python3.8/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/sanath/.local/lib/python3.8/site-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/home/sanath/.local/lib/python3.8/site-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/home/sanath/.local/lib/python3.8/site-packages/requests/adapters.py", line 439, in send
resp = conn.urlopen(
File "/home/sanath/.local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 665, in urlopen
httplib_response = self._make_request(
File "/home/sanath/.local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 387, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python3.8/http/client.py", line 1255, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.8/http/client.py", line 1301, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.8/http/client.py", line 1250, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.8/http/client.py", line 1010, in _send_output
self.send(msg)
File "/usr/lib/python3.8/http/client.py", line 950, in send
self.connect()
File "/home/sanath/.local/lib/python3.8/site-packages/urllib3/connection.py", line 184, in connect
conn = self._new_conn()
File "/home/sanath/.local/lib/python3.8/site-packages/urllib3/connection.py", line 156, in _new_conn
conn = connection.create_connection(
File "/home/sanath/.local/lib/python3.8/site-packages/urllib3/util/connection.py", line 61, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/usr/lib/python3.8/socket.py", line 918, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
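The chain bottoms out in idna encoding: some mutated names produce a DNS label longer than 63 characters, and the resulting UnicodeError propagates out of the worker thread and kills the whole run. One defensive sketch is to catch it per request (the function name here is mine, not cloud_enum's):

```python
import requests

def fetch(url, timeout=10):
    """Shield the batch from a single malformed hostname: over-long DNS
    labels raise UnicodeError (or urllib3's LocationParseError, a
    ValueError subclass) depending on the library versions in play."""
    try:
        return requests.get(url, timeout=timeout)
    except (requests.RequestException, UnicodeError, ValueError):
        return None  # treat the candidate as a miss and keep scanning
```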
Hi,
I would like to use the tool, but I have some questions which I could not resolve from the git description:
Hello!
Description
I am trying to access the output of this tool programmatically, and it is slightly more complicated than it could be due to all the logging and colouring in the output. I've dropped here a possible solution I've managed to come up with; let me know your thoughts!
Possible solution
- A --colourless flag that makes the tool output everything without colours.
- A --silent flag that makes the tool output nothing to stdout other than the results. This would imply colourless output too.
- Allowing --logfile - (or -l -) to mean "output to stdout". This is common behaviour defined in the POSIX utility syntax guidelines (for utilities that use operands to represent files to be opened for either reading or writing, the '-' operand should be used only to mean standard input, or standard output when it is clear from context that an output file is being specified). If such a value is specified, the output should be silent too.
- A --format flag which allows selecting the desired output format (e.g. --format=jsonlines, --format=csv, etc.). With jsonlines, the tool could emit {"platform": "aws", "status": "protected", "url": "xxxx.xxx.xxx"} for every result. Formatted results would then be written to the specified logfile. Note that I've used those formats as examples on purpose, as they allow outputting results "on the go".

Thank you!
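As a sketch of the proposed format handling (the function and flag names are the requester's suggestion, not an existing cloud_enum API):

```python
import json

def format_result(platform, status, url, fmt="jsonlines"):
    """Render one finding in a machine-readable, colour-free form."""
    record = {"platform": platform, "status": status, "url": url}
    if fmt == "jsonlines":
        return json.dumps(record)
    if fmt == "csv":
        return "{},{},{}".format(platform, status, url)
    raise ValueError("unsupported format: " + fmt)
```

Because each result is rendered independently, both formats can be streamed "on the go" as findings arrive, rather than buffered until the scan ends.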
I don't think a GPL license makes sense for this project, MIT seems a better fit.
I would like to change it, but want to make sure everyone who has contributed so far is ok with that change, as it was GPL when you committed code.
I'll leave this issue open for 30 days, if no one disagrees I will make the change in the next release.
If I tagged you and you are ok with this, it would be great if you could 👍 this.
Thanks!
@sg3-141-592 @codingo @gpxlnx @OrDuan @nalauder @octaviovg @N7WEra @ShiftLefter
Look into viability of allowing the user to interactively skip running tests by pressing 'S' or something.
Received a request in a security-focused Slack channel to add bucket write checks for S3.
I'll take a look at this for both S3 and GCS (and eventually Azure too). I don't plan to ever add functionality that requires keys or credentials, so it will be dependent on whether or not those actions are possible in a totally pre-auth manner.
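For S3 at least, a write check is possible in a totally pre-auth manner: an unauthenticated PUT either succeeds (bucket writable by Everyone) or is rejected. A sketch — note this actually creates an object when it succeeds, so the key name should be obviously benign; all names here are mine, not cloud_enum's:

```python
import requests

def bucket_is_writable(bucket, timeout=5):
    """Try an unauthenticated zero-byte PUT; HTTP 200 indicates anonymous
    write access. Any rejection (403/404) or network error counts as
    not writable."""
    url = "http://{}.s3.amazonaws.com/cloud-enum-write-test.txt".format(bucket)
    try:
        resp = requests.put(url, data=b"", timeout=timeout)
    except requests.RequestException:
        return False
    return resp.status_code == 200
```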
Hi,
Currently in the setup part of readme.md, the method to install from requirements.txt is given as follows:
pip3 install -r ./requirements.txt
This should be changed to
pip3 install -r requirements.txt
Thanks for the awesome work,
Kiran
Can I benefit from this result of the tool, or should I just ignore it?
When running the Azure enumerations, I get a FileNotFoundError: [WinError 2] The system cannot find the file specified error. However, I have verified that the AWS and GCP enumerations run just fine.
I attempted to fix the default file paths within cloud_enum.py/parse_arguements() to fit Windows conventions, but that did not change anything.
# Use included mutations file by default, or let the user provide one
parser.add_argument('-m', '--mutations', type=str, action='store',
default=script_path + '\\enum_tools\\fuzz.txt',
help='Mutations. Default: enum_tools/fuzz.txt')
# Use include container brute-force or let the user provide one
parser.add_argument('-b', '--brute', type=str, action='store',
default=script_path + '\\enum_tools\\fuzz.txt',
help='List to brute-force Azure container names.'
' Default: enum_tools/fuzz.txt')
Running on Windows 10
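Rather than hard-coding either separator, the defaults can be built with os.path.join, which works on both Windows and POSIX (the helper name is mine):

```python
import os

def default_wordlist(script_path):
    """Portable default for -m/-b: <script dir>/enum_tools/fuzz.txt,
    using the separator native to the current platform."""
    return os.path.join(script_path, "enum_tools", "fuzz.txt")
```

The argparse defaults above would then become default=default_wordlist(script_path), with no platform-specific branch.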
I have just started to use this tool. It seems to be a nice one, but I wonder why it reports a Disabled Storage Account. I don't know if we can do anything with this. Please let me know the attack vector so that I can learn more about exploiting this.
Disabled Storage Account: http://abc.blob.core.windows.net/
Add common exception handling. Priority on recovering from failed HTTP connections.
Initial release is living life on the wild side, which will become problematic when using with larger wordlists and sketchy Internet connections.
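A minimal shape for that recovery, assuming per-URL retries with backoff are acceptable (names are illustrative, not the tool's API):

```python
import time
import requests

def get_with_retry(url, retries=3, backoff=2, timeout=10):
    """Retry transient connection failures so one flaky host or a brief
    network drop doesn't abort the entire scan."""
    for attempt in range(retries):
        try:
            return requests.get(url, timeout=timeout)
        except requests.ConnectionError:
            if attempt == retries - 1:
                return None  # give up on this single URL, keep scanning
            time.sleep(backoff * (attempt + 1))
```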
When I execute cloud_enum from inside another script it sometimes won't exit out of the script. I just see the "All done, happy hacking!" message and I have to ctrl-c out of the script.
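One likely cause is a non-daemon worker thread that never finishes: the interpreter waits on it even after the final message prints. A blunt workaround in the calling script (the function name is mine) is to check for leftover threads and hard-exit:

```python
import os
import sys
import threading

def finish():
    """Hard-exit if non-daemon worker threads survive the scan; a normal
    return would leave the interpreter waiting on them indefinitely."""
    leftovers = [t for t in threading.enumerate()
                 if t is not threading.main_thread() and not t.daemon]
    if leftovers:
        sys.stdout.flush()  # don't lose buffered output like the final banner
        os._exit(0)
```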
Getting error while running,
python cloud_enum.py -k xxx
[!] Cannot access mutations file: /enum_tools/fuzz.txt
The wordlists for brute-forcing DNS and container names are quite short, and were created by me manually to get a working tool going.
Longer wordlists mean more results, of course. But I want to be selective and not just import something random from another tool. Long wordlists also mean longer runtime, higher chance of detection/ban/etc.
Need to investigate this.
The following is what it output before and after the crash...
root@csi-analyst:/home/csi/cloud_enum# python cloud_enum.py -k binance
##########################
cloud_enum
github.com/initstring
##########################
Keywords: binance
Mutations: /home/csi/cloud_enum/enum_tools/fuzz.txt
Brute-list: /home/csi/cloud_enum/enum_tools/fuzz.txt
Traceback (most recent call last):
File "cloud_enum.py", line 234, in <module>
main()
File "cloud_enum.py", line 213, in main
mutations = read_mutations(args.mutations)
File "cloud_enum.py", line 152, in read_mutations
with open(mutations_file, encoding="utf8", errors="ignore") as infile:
TypeError: 'errors' is an invalid keyword argument for this function
root@csi-analyst:/home/csi/cloud_enum# python cloud_enum.py -k binance -ns binance.com
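The TypeError above is a symptom of running the script with Python 2 (note the bare python in the command), whose built-in open() does not accept the errors keyword. Besides invoking the tool with python3, a compatibility sketch is io.open, which takes encoding/errors on both major versions:

```python
import io

def read_mutations(mutations_file):
    """io.open accepts encoding= and errors= on Python 2 and 3 alike,
    unlike Python 2's built-in open()."""
    with io.open(mutations_file, encoding="utf8", errors="ignore") as infile:
        return [line.strip() for line in infile if line.strip()]
```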
##########################
[!] DNS Timeout on content.REMOVED.awsapps.com. Investigate if there are many of these.
[!] DNS Timeout on REMOVED.awsapps.com. Investigate if there are many of these.
Need to improve DNS lookups, and also allow the user to specify a DNS server.
Possibly, could use subprocess to queue OS commands to handle this in batches. Need to test.
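An alternative to subprocess batches that avoids shelling out entirely is a thread pool over the standard socket resolver (a sketch with my own function names; honouring a user-specified DNS server would additionally need a resolver library such as dnspython, since the socket module always uses the system resolver):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def resolves(name):
    """True if the system resolver returns any address for 'name'."""
    try:
        socket.getaddrinfo(name, None)
        return True
    except (socket.gaierror, UnicodeError):
        return False

def batch_lookup(names, threads=10):
    """Resolve many candidate names concurrently, keeping only live ones."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return [n for n, ok in zip(names, pool.map(resolves, names)) if ok]
```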
[!] Cannot access mutations file: /enum_tools/fuzz.txt
Getting DNS Timeouts during execution, below is the command used along with a sample output. There were no errors with the Google Checks, just Amazon and Azure.
command:
./cloud_enum.py -kf ./enum_tools/<redacted>_keyfile.txt -m ./enum_tools/<redacted>_fuzz.txt -t 5 -l ./output.txt
output:
++++++++++++++++++++++++++
amazon checks
++++++++++++++++++++++++++
[+] Checking for S3 buckets
Protected S3 Bucket: http://<redacted>amazon.s3.amazonaws.com/
Protected S3 Bucket: http://<redacted>-backup.s3.amazonaws.com/
Protected S3 Bucket: http://client-<redacted>.s3.amazonaws.com/
Protected S3 Bucket: http://<redacted>-demo.s3.amazonaws.com/
Protected S3 Bucket: http://<redacted>-images.s3.amazonaws.com/
Protected S3 Bucket: http://<redacted>-prod.s3.amazonaws.com/
Protected S3 Bucket: http://<redacted>-production.s3.amazonaws.com/
Protected S3 Bucket: http://production-<redacted>.s3.amazonaws.com/
Protected S3 Bucket: http://<redacted>.store.s3.amazonaws.com/
Protected S3 Bucket: http://<redacted>.s3.amazonaws.com/
Protected S3 Bucket: http://<redacted>.s3.amazonaws.com/
Elapsed time: 00:02:35
[+] Checking for AWS Apps
[*] Brute-forcing a list of 11346 possible DNS names
[!] DNS Timeout on test.<redacted>.awsapps.com. Investigate if there are many of these.
[!] DNS Timeout on <redacted>.backup.awsapps.com. Investigate if there are many of these.
++++++++++++++++++++++++++
azure checks
++++++++++++++++++++++++++
[+] Checking for Azure Storage Accounts
[*] Brute-forcing a list of 3486 possible DNS names
HTTPS-Only Account: http://<redacted>.blob.core.windows.net/
HTTPS-Only Account: http://<redacted>1.blob.core.windows.net/
HTTPS-Only Account: http://storage<redacted>.blob.core.windows.net/
HTTPS-Only Account: http://<redacted>test.blob.core.windows.net/
Elapsed time: 00:00:51
[*] Checking 4 accounts for status before brute-forcing
[*] Brute-forcing container names in 4 storage accounts
[*] Brute-forcing 274 container names in <redacted>1.blob.core.windows.net
[*] Brute-forcing 274 container names in storage<redacted>.blob.core.windows.net
[*] Brute-forcing 274 container names in <redacted>.blob.core.windows.net
[!] Breaking out early, auth required.
[*] Brute-forcing 274 container names in <redacted>test.blob.core.windows.net
[!] Breaking out early, auth required.
Elapsed time: 00:00:15
[+] Checking for Azure File Accounts
[*] Brute-forcing a list of 3486 possible DNS names
[!] DNS Timeout on <redacted>pro.file.core.windows.net. Investigate if there are many of these.
[!] DNS Timeout on <redacted>syslog.file.core.windows.net. Investigate if there are many of these.
[!] DNS Timeout on builds<redacted>.file.core.windows.net. Investigate if there are many of these.
[!] DNS Timeout on <redacted>graphite.file.core.windows.net. Investigate if there are many of these.
[!] DNS Timeout on <redacted>client.file.core.windows.net. Investigate if there are many of these.
HTTPS-Only Account: http://<redacted>.file.core.windows.net/
HTTPS-Only Account: http://<redacted>1.file.core.windows.net/
HTTPS-Only Account: http://storage<redacted>.file.core.windows.net/
HTTPS-Only Account: http://<redacted>test.file.core.windows.net/
I assess the AWS S3 bucket checks are not working. Anyone able to validate?
I have a domain I'm trying to cloud_enum. Let's say this is "preprod-second-hand-elastic-standalone-abcdefghi-abcdefgh.REDCTcloud.com"
This is an acceptable length for a subdomain, and it does resolve. But, adding the fuzz to it makes it too long, and thus fails.
Perhaps a length check on subdomain + fuzz strings before attempting the check? If any component is too long, then skip as there's no way it'd be a positive result?
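Exactly that kind of pre-flight check is straightforward: RFC 1035 caps each dot-separated label at 63 octets and a full hostname at around 253. A sketch (the function name is mine):

```python
MAX_LABEL = 63   # RFC 1035 limit per dot-separated label
MAX_NAME = 253   # practical limit for a full hostname

def valid_hostname(name):
    """Skip mutated candidates that can never resolve, instead of letting
    them crash the scan with 'label empty or too long'."""
    if not name or len(name) > MAX_NAME:
        return False
    return all(0 < len(label) <= MAX_LABEL for label in name.split("."))
```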
[+] Checking for project/zones with Google Cloud Functions.
[*] Testing across 1 regions defined in the config file
Traceback (most recent call last):
File "/home/dnx/3rdparty/cloud_enum/cloud_enum.py", line 255, in <module>
main()
File "/home/dnx/3rdparty/cloud_enum/cloud_enum.py", line 244, in main
gcp_checks.run_all(names, args)
File "/home/dnx/3rdparty/cloud_enum/enum_tools/gcp_checks.py", line 390, in run_all
check_functions(names, args.brute, args.quickscan, args.threads)
File "/home/dnx/3rdparty/cloud_enum/enum_tools/gcp_checks.py", line 338, in check_functions
utils.get_url_batch(candidates, use_ssl=False,
File "/home/dnx/3rdparty/cloud_enum/enum_tools/utils.py", line 88, in get_url_batch
batch_results[url] = batch_pending[url].result(timeout=30)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/lib64/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/usr/lib64/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dnx/venv/lib64/python3.11/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dnx/venv/lib64/python3.11/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dnx/venv/lib64/python3.11/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/home/dnx/venv/lib64/python3.11/site-packages/urllib3/connectionpool.py", line 790, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/home/dnx/venv/lib64/python3.11/site-packages/urllib3/connectionpool.py", line 496, in _make_request
conn.request(
File "/home/dnx/venv/lib64/python3.11/site-packages/urllib3/connection.py", line 395, in request
self.endheaders()
File "/usr/lib64/python3.11/http/client.py", line 1281, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib64/python3.11/http/client.py", line 1041, in _send_output
self.send(msg)
File "/usr/lib64/python3.11/http/client.py", line 979, in send
self.connect()
File "/home/dnx/venv/lib64/python3.11/site-packages/urllib3/connection.py", line 243, in connect
self.sock = self._new_conn()
^^^^^^^^^^^^^^^^
File "/home/dnx/venv/lib64/python3.11/site-packages/urllib3/connection.py", line 203, in _new_conn
sock = connection.create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dnx/venv/lib64/python3.11/site-packages/urllib3/util/connection.py", line 58, in create_connection
raise LocationParseError(f"'{host}', label empty or too long") from None
urllib3.exceptions.LocationParseError: Failed to parse: 'us-central1-preprod-second-hand-elastic-standalone-abcdefghi-abcdefgh.REDCTcloud.com.cloudfunctions.net', label empty or too long
Need to implement logging. Currently, using something like tee to manually create logs is workable, but all the sys.stdout.write / flush / unicode color escapes make re-reading those logs a bit messy.
Probably will log only found items.
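A sketch of that found-items-only log with the logging module (the file name and format are placeholders); the console path can keep its colours while the file stays clean:

```python
import logging

# Plain, colour-free log restricted to findings
logger = logging.getLogger("cloud_enum")
logger.setLevel(logging.INFO)
_handler = logging.FileHandler("cloud_enum.log")
_handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
logger.addHandler(_handler)

def log_finding(message):
    """Write one finding to the logfile, free of colour escape codes."""
    logger.info(message)
```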
Your tool has been awesome! Thanks for your hard work.
Just a thought about adding an extra functionality to download the file content for an open storage/bucket would be pretty cool as well.
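For buckets that allow anonymous listing, S3 returns the object inventory as XML on a plain GET of the bucket root, and downloading is then one GET per key. A sketch under those assumptions (the helper names are mine):

```python
import os
import xml.etree.ElementTree as ET
import requests

S3_NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

def parse_listing(xml_bytes):
    """Pull object keys out of S3's ListBucketResult XML."""
    root = ET.fromstring(xml_bytes)
    return [key.text for key in root.iter(S3_NS + "Key")]

def download_open_bucket(bucket, dest_dir="."):
    """List and fetch every object in a publicly listable bucket."""
    base = "http://{}.s3.amazonaws.com/".format(bucket)
    resp = requests.get(base, timeout=10)
    if resp.status_code != 200:
        return []
    keys = parse_listing(resp.content)
    for key in keys:
        data = requests.get(base + key, timeout=10).content
        safe_name = key.replace("/", "_")  # flatten key paths for local files
        with open(os.path.join(dest_dir, safe_name), "wb") as out:
            out.write(data)
    return keys
```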
For some reason, GCP does not like bucket requests with capital letters.
I should either:
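One possible approach is lowercasing each candidate and dropping any name that still violates GCS naming rules, rather than sending requests GCP will reject (a sketch; the regex and names are mine, not the tool's):

```python
import re

# GCS bucket names: lowercase letters, digits, '-', '_', '.',
# starting and ending with a letter or digit, 3-63 chars
GCS_NAME = re.compile(r"^[a-z0-9][a-z0-9._-]{1,61}[a-z0-9]$")

def normalize_gcp(name):
    """Lowercase a candidate; return None if it still can't be a bucket."""
    name = name.lower()
    return name if GCS_NAME.match(name) else None
```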
Is it possible to develop/implement an output feature?
Thanks
In the enum_tools dir, check the Python file azure_checks.py. In this file:
utils.fast_dns_lookup(candidates, nameserver, callback=print_website_response, threads=threads)
This line does nothing with the valid_names variable that fast_dns_lookup returns.
Is it needed for some processing, or is the return value discarded accidentally?
Hi,
Whatever I'm trying to search, the amazon checks work, but once it starts the azure checks, I get the following message:
[+] Checking for S3 buckets
Elapsed time: 00:02:15
++++++++++++++++++++++++++
azure checks
++++++++++++++++++++++++++
[+] Checking for Azure Storage Accounts
[*] Brute-forcing a list of 445 possible DNS names
Traceback (most recent call last):
File "./cloud_enum.py", line 218, in <module>
main()
File "./cloud_enum.py", line 205, in main
azure_checks.run_all(names, args)
File "/cloud_enum/enum_tools/azure_checks.py", line 286, in run_all
valid_accounts = check_storage_accounts(names, args.threads,
File "/cloud_enum/enum_tools/azure_checks.py", line 77, in check_storage_accounts
valid_names = utils.fast_dns_lookup(candidates, nameserver)
File "/cloud_enum/enum_tools/utils.py", line 129, in fast_dns_lookup
batch_pending[name] = subprocess.Popen(cmd,
File "/usr/lib/python3.8/subprocess.py", line 854, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.8/subprocess.py", line 1702, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'host'
Please advise.
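That traceback shows cloud_enum shelling out to the host binary, which isn't installed here (common on Windows or on minimal Linux images without dnsutils/bind-utils). Installing it fixes the crash; a fallback sketch that degrades to the standard library (function name is mine):

```python
import shutil
import socket
import subprocess

def name_resolves(name):
    """Prefer the 'host' binary when present; otherwise fall back to the
    socket module so a missing dnsutils package doesn't crash the scan."""
    if shutil.which("host"):
        result = subprocess.run(["host", name], capture_output=True)
        return result.returncode == 0
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False
```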