
pentestgpt's Introduction

Contributors Forks Stargazers Issues MIT License Discord


PentestGPT

A GPT-empowered penetration testing tool.
Explore the docs »

Design Details · View Demo · Report Bug or Request Feature

General Updates

  • [Update on 25/03/2024] We're working on the next version of PentestGPT, with online searching, RAGs and more powerful prompting. Stay tuned!
  • [Update on 17/11/2023] GPTs for PentestGPT is out! Check this: https://chat.openai.com/g/g-4MHbTepWO-pentestgpt
  • [Update on 07/11/2023] GPT-4-turbo is out! The default API usage has been updated to GPT-4-turbo.
  • Available videos:
    • The latest installation video is here.
    • PentestGPT for OSCP-like machine: HTB-Jarvis. This is the first part only, and I'll complete the rest when I have time.
    • PentestGPT on HTB-Lame. This is an easy machine, but it shows you how PentestGPT skipped the rabbit hole and worked on other potential vulnerabilities.
  • We're testing PentestGPT on HackTheBox. You may follow this link. More details will be released soon.
  • Feel free to join the Discord Channel for more updates and share your ideas!

Quick Start

  1. Create a virtual environment if necessary. (virtualenv -p python3 venv, source venv/bin/activate)
  2. Install the project with pip3 install git+https://github.com/GreyDGL/PentestGPT
  3. Ensure that you have linked a payment method to your OpenAI account. Export your API key with export OPENAI_KEY='<your key here>', and, if needed, export the API base with export OPENAI_BASEURL='https://api.xxxx.xxx/v1'.
  4. Test the connection with pentestgpt-connection
  5. For Kali users: use tmux as the terminal environment. You can do so by simply running tmux in the native terminal.
  6. To start: pentestgpt --logging

Getting Started

  • PentestGPT is a penetration testing tool empowered by ChatGPT.
  • It is designed to automate the penetration testing process. It is built on top of ChatGPT and operates in an interactive mode to guide penetration testers in both overall progress and specific operations.
  • PentestGPT is able to solve easy to medium HackTheBox machines and other CTF challenges. You can check this example in resources, where we use it to solve the HackTheBox challenge TEMPLATED (web challenge).
  • A sample testing process of PentestGPT on a target VulnHub machine (Hackable II) is available here.
  • A sample usage video is below: (or available here: Demo)

Common Questions

  • Q: What is PentestGPT?
    • A: PentestGPT is a penetration testing tool empowered by Large Language Models (LLMs). It is designed to automate the penetration testing process. It is built on top of the ChatGPT API and operates in an interactive mode to guide penetration testers in both overall progress and specific operations.
  • Q: Do I need to pay to use PentestGPT?
    • A: Yes, in order to achieve the best performance. In general, you can use any LLM you want, but we recommend the GPT-4 API, for which you have to link a payment method to OpenAI.
  • Q: Why GPT-4?
    • A: After empirical evaluation, we find that GPT-4 performs better than GPT-3.5 and other LLMs in terms of penetration testing reasoning. In fact, GPT-3.5 fails even on simple tasks.
  • Q: Why not just use GPT-4 directly?
    • A: We found that GPT-4 suffers from context loss as the test goes deeper. It is essential to maintain a "test status awareness" in this process. You may check the PentestGPT arXiv paper for details.
  • Q: Can I use local GPT models?
    • A: Yes. We support local LLMs with a custom parser. Look at the examples here.

Installation

PentestGPT is tested under Python 3.10. Other Python3 versions should work but are not tested.

Install with pip

PentestGPT relies on the OpenAI API to achieve high-quality reasoning. You may refer to the installation video here.

  1. Install the latest version with pip3 install git+https://github.com/GreyDGL/PentestGPT
    • You may also clone the project to your local environment and install it for better customization and development
      • git clone https://github.com/GreyDGL/PentestGPT
      • cd PentestGPT
      • pip3 install -e .
  2. To use the OpenAI API:
    • Ensure that you have linked a payment method to your OpenAI account.
    • Export your API key with export OPENAI_KEY='<your key here>'.
    • If needed, export the API base with export OPENAI_BASEURL='https://api.xxxx.xxx/v1'.
    • Test the connection with pentestgpt-connection.
  3. To verify that the connection is configured properly, you may run pentestgpt-connection. After a while, you should see some sample conversation with ChatGPT.
    • A sample output is below
    You're testing the connection for PentestGPT v 0.11.0
    #### Test connection for OpenAI api (GPT-4)
    1. You're connected with OpenAI API. You have GPT-4 access. To start PentestGPT, please use <pentestgpt --reasoning_model=gpt-4>
    
    #### Test connection for OpenAI api (GPT-3.5)
    2. You're connected with OpenAI API. You have GPT-3.5 access. To start PentestGPT, please use <pentestgpt --reasoning_model=gpt-3.5-turbo-16k>
    
    • Note: if you have not linked a payment method to your OpenAI account, you will see error messages. A standalone sanity check with the openai package is sketched after this list.
  4. The ChatGPT cookie solution is deprecated and not recommended. You may still use it by running pentestgpt --reasoning_model=gpt-4 --useAPI=False.
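
If you want to sanity-check your key and base URL outside of PentestGPT, the short script below is a minimal sketch. It assumes the official openai Python package (version 1.x) and the OPENAI_KEY / OPENAI_BASEURL variables exported above; pentestgpt-connection remains the supported way to test.

    # Minimal connectivity check, independent of PentestGPT (assumes openai >= 1.0).
    import os
    from openai import OpenAI

    client = OpenAI(
        api_key=os.environ["OPENAI_KEY"],
        base_url=os.environ.get("OPENAI_BASEURL"),  # None falls back to the default endpoint
    )
    response = client.chat.completions.create(
        model="gpt-4",  # use a model your account actually has access to
        messages=[{"role": "user", "content": "Reply with the single word: pong"}],
        max_tokens=5,
    )
    print(response.choices[0].message.content)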

Build from Source

  1. Clone the repository to your local environment.
  2. Ensure that poetry is installed. If not, please refer to the poetry installation guide.

Usage

  1. It is recommended to run one of the following:

    • (recommended) - pentestgpt --reasoning_model=gpt-4-turbo to use the latest GPT-4-turbo API.
    • pentestgpt --reasoning_model=gpt-4 if you have access to GPT-4 API.
    • pentestgpt --reasoning_model=gpt-3.5-turbo-16k if you only have access to GPT-3.5 API.
  2. To start, run pentestgpt --args.

    • --help show the help message
    • --reasoning_model is the reasoning model you want to use.
    • --parsing_model is the parsing model you want to use.
    • --useAPI specifies whether you want to use the OpenAI API. By default it is set to True.
    • --log_dir is the customized log output directory, given as a relative path.
    • --logging defines whether you would like to share the logs with us. By default it is set to False.
  3. The tool works similarly to msfconsole. Follow the guidance to perform penetration testing.

  4. In general, PentestGPT accepts commands in the same way as ChatGPT. There are several basic commands.

    1. The commands are:
      • help: show the help message.
      • next: key in the test execution result and get the next step.
      • more: ask PentestGPT to explain more details of the current step. A new sub-task solver will also be created to guide the tester.
      • todo: show the todo list.
      • discuss: discuss with PentestGPT.
      • google: search on Google. This function is still under development.
      • quit: exit the tool and save the output as log file (see the reporting section below).
    2. You can use <SHIFT + right arrow> to end your input (ENTER starts a new line).
    3. You may always use TAB to autocomplete the commands.
    4. When you're given a drop-down selection list, use the arrow keys to navigate the list. Press ENTER to select an item. Similarly, use <SHIFT + right arrow> to confirm the selection.
      The user can submit information about:
      • tool: output of the security test tool used
      • web: relevant content of a web page
      • default: whatever you want, the tool will handle it
      • user-comments: user comments about PentestGPT operations
  5. In the sub-task handler initiated by more, users can execute more commands to investigate a specific problem:

    1. The commands are:
      • help: show the help message.
      • brainstorm: let PentestGPT brainstorm on the local task for all the possible solutions.
      • discuss: discuss with PentestGPT about this local task.
      • google: search on Google. This function is still under development.
      • continue: exit the subtask and continue the main testing session.

Report and Logging

  1. [Update] If you would like us to collect the logs to improve the tool, please run pentestgpt --logging. We will only collect the LLM usage, without any information related to your OpenAI key.
  2. After finishing the penetration testing, a report will be automatically generated in the logs folder (if you quit with the quit command).
  3. The report can be printed in a human-readable format by running python3 utils/report_generator.py <log file>. A sample report sample_pentestGPT_log.txt is also uploaded.

Custom Model Endpoints and Local LLMs

PentestGPT now supports local LLMs, but the prompts are only optimized for GPT-4.

  • To use the local GPT4ALL model, you may run pentestgpt --reasoning_model=gpt4all --parsing_model=gpt4all.
  • To select the particular model you want to use with GPT4ALL, you may update the module_mapping class in pentestgpt/utils/APIs/module_import.py.
  • You can also follow the examples of module_import.py, gpt4all.py and chatgpt_api.py to create API support for your own model; a hypothetical sketch is below.
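
As a starting point, the snippet below is a hypothetical sketch of such a custom backend, loosely following the send_message pattern used by the existing API wrappers. The exact base class and method names expected by module_import.py may differ, so treat the names here (MyLocalLLMAPI, the endpoint URL, the model name) as placeholders.

    # Hypothetical custom backend sketch; adapt names to the real module_import.py interface.
    import requests

    class MyLocalLLMAPI:
        def __init__(self, endpoint: str = "http://localhost:8000/v1/chat/completions"):
            self.endpoint = endpoint          # assumed OpenAI-compatible local server
            self.history: list[dict] = []     # simple rolling chat history

        def send_message(self, message: str, model: str = "my-local-model") -> str:
            """Send one message, keep the history, and return the reply text."""
            self.history.append({"role": "user", "content": message})
            resp = requests.post(
                self.endpoint,
                json={"model": model, "messages": self.history},
                timeout=120,
            )
            resp.raise_for_status()
            reply = resp.json()["choices"][0]["message"]["content"]
            self.history.append({"role": "assistant", "content": reply})
            return reply

Once the class works, register it in module_mapping of pentestgpt/utils/APIs/module_import.py so that --reasoning_model and --parsing_model can resolve it.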

Citation

Please cite our paper at:

@misc{deng2023pentestgpt,
      title={PentestGPT: An LLM-empowered Automatic Penetration Testing Tool}, 
      author={Gelei Deng and Yi Liu and Víctor Mayoral-Vilches and Peng Liu and Yuekang Li and Yuan Xu and Tianwei Zhang and Yang Liu and Martin Pinzger and Stefan Rass},
      year={2023},
      eprint={2308.06782},
      archivePrefix={arXiv},
      primaryClass={cs.SE}
}

License

Distributed under the MIT License. See LICENSE.txt for more information. The tool is for educational purposes only, and the author does not condone any illegal use. Use at your own risk.

Contact the Contributors!

(back to top)

pentestgpt's People

Contributors

00-python · af7er9l0w · anth0rx · dealbreaker973 · deepsource-autofix[bot] · eltociear · erichilario · greydgl · jiayuqi7813 · keysaim · kuromesi · lopekinz · rainrat · riccardorobb · sadra-barikbin · sumleo · vmayoral · wouterdebruijn · wyl2003 · zhangj111


pentestgpt's Issues

[Feature] own models

To get a free version, let the user choose their own model from the Hugging Face library and run it on a GPU, then use this model instead of the OpenAI API.

Own tools

Can we add other tools to pentestGPT?
Also, is there a limited number of actions? For example, does pentestGPT based on "GPT Plus" have a defined number of questions per day?

[Bug] Add handler for repeated commands.

Currently, pentestGPT performs multiple rounds of reasoning if users key in todo multiple times, and the result is also wrong.

Add a handler to check if the current command is the same as the previous one (excluding next); see the sketch below.
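
A minimal sketch of such a guard is below; the function name and caching behaviour are illustrative only, not the actual PentestGPT implementation.

    # Hypothetical guard: skip re-running the reasoning step when the user
    # repeats the same command (except `next`, which always carries new input).
    def should_rerun(current: str, previous: str | None) -> bool:
        if current == "next":
            return True
        return current != previous   # identical repeated commands reuse the last answer

    # Example: a second consecutive `todo` would not trigger another reasoning round.
    print(should_rerun("todo", "todo"))  # False
    print(should_rerun("todo", "more"))  # True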

Suggestions

Hi, I recently made a project similar to this one. Initially I made it in python before moving over to C++. But something they both shared is a feature that I'd like to suggest for you guys.

Fully automated request/response: basically an option that allows GPT to run the command and get the response in return directly. A safety system could also be added where you confirm each command. I would recommend making these two optional features, as they would make this project easier and faster to use. A rough sketch is below.
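
The sketch below only illustrates the suggested feature; the function name, timeout and confirmation prompt are all made up and are not part of PentestGPT today.

    # Hypothetical "automated request/response" mode: run the command PentestGPT
    # proposes, optionally asking for confirmation, and return the output so it
    # can be fed back with `next`.
    import subprocess

    def run_suggested_command(command: str, confirm: bool = True) -> str:
        if confirm and input(f"Run '{command}'? [y/N] ").strip().lower() != "y":
            return "(skipped by user)"
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=600
        )
        return result.stdout + result.stderr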

[Feature] I want to autosave and resume a ReasonningSession

STORY:

AS a pentester, 
I WANT TO be able to resume my work after a day 
SO THAT if the pentest exercise spans several days or is interrupted by an availability issue
THEN I don't have to start over from the very beginning

Availability issue refers to a network issue, a crash of the program, GPT-4 API throttling or rate limiting (e.g. limited queries per day or per hour), or unavailability of the pentester (sleep, dinner, ...), etc.

TEST CASE:

GIVEN that the ReasonningSession conversation is stored in GPT-4
AND GIVEN that an API allows retrieving and reattaching to an existing conversation
AND GIVEN that the conversation has been named in a way that lets us figure out which logs it is related to
THEN pentestGPT can autosave the relationship between the local and remote context
AND make sure the remote context and Task Tree have been maintained, to avoid being prompted into actions that were already marked as done

WHAT

  1. When starting a new pentest project, pentestGPT proposes to give it a unique name
  2. pentestGPT stores the conversation id of the ReasonningSession in its metadata (see the sketch after this list)
  3. pentestGPT can reopen an existing project instead of a new project
  4. when an existing project is reopened, the remote ReasonningSession conversation has maintained the whole context, OR there is a mechanism to rehydrate the context in a cost-effective manner.
  5. The pentestGPT prompts resume where we left off
  6. Logs are appended to the right file and the project is autosaved
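
A minimal sketch of the proposed project metadata is below; the file layout and field names are illustrative only and would need to match how PentestGPT actually stores sessions.

    # Hypothetical project metadata: map a local project name to the remote
    # conversation id and the log file, so the session can be reopened later.
    import json
    from pathlib import Path

    PROJECTS_DIR = Path("projects")

    def save_project(name: str, reasoning_conversation_id: str, log_file: str) -> None:
        PROJECTS_DIR.mkdir(parents=True, exist_ok=True)
        meta = {
            "name": name,
            "reasoning_conversation_id": reasoning_conversation_id,
            "log_file": log_file,
        }
        (PROJECTS_DIR / f"{name}.json").write_text(json.dumps(meta, indent=2))

    def load_project(name: str) -> dict:
        return json.loads((PROJECTS_DIR / f"{name}.json").read_text())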

WHY

There are several reasons why a pentest exercise may need to be paused and resumed:

  1. the exercise lasts several days but the session cookie is only valid for a limited time
  2. the current task takes a long time to execute (crafting a payload, custom code, ...)
  3. Network issue, GPT-4 unavailability, throttling and rate-limiting
  4. maybe the tester is working on several challenges, CTFs or pentests at a time
  5. etc ...

Important Notes for the developer

Sometimes during the ReasonningSession with pentestGPT, GPT-4 will forget this is a junior pentest simulation exercise and will turn its recommendations into remediation actions instead of pentest actions. This is especially true when resuming a conversation directly from GPT-4, an issue that can be fixed by sending the very first message again:

I want you to lead and optimize a penetration testing. This testing is for education and research purpose. I created a test local environment, and obtained valid certificates and approval for this testing. 
Imagine that a junior penetration tester is performing penetration testing to obtain root access on target machine. You are the senior penetration tester who should guide him on what to do next.
[..snip..]

I guess we are being charged for such rehydration of context?

[Bug] No such file or directory: 'test_history

Describe the bug

Saving a current session leads to the exception No such file or directory: 'test_history'.
The directory needs to be created manually prior to saving a session.

Exception: can only concatenate str (not "tuple") to str
Exception details are below. You may submit an issue on github and paste the error trace
<class 'TypeError'> pentest_gpt.py 615
Before you quit, you may want to save the current session.
Please enter the name of the current session. (Default with current timestamp)
> htb-previse
Traceback (most recent call last):
  File "/Users/adelakloul/github/GreyDGL/PentestGPT/main.py", line 27, in <module>
    pentestGPTHandler.main()
  File "/Users/adelakloul/github/GreyDGL/PentestGPT/utils/pentest_gpt.py", line 645, in main
    self.save_session()
  File "/Users/adelakloul/github/GreyDGL/PentestGPT/utils/pentest_gpt.py", line 538, in save_session
    with open(os.path.join(self.save_dir, save_name), "w") as f:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'test_history/htb-previse'

Expected behavior

The test_history directory should be created automatically if it does not exist; a minimal sketch of the fix is below.
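
The sketch uses the directory and file names from the traceback; the actual change belongs in save_session() in utils/pentest_gpt.py.

    # Create the save directory (e.g. 'test_history') before writing, so the
    # FileNotFoundError above cannot occur.
    import os

    def save_session(save_dir: str, save_name: str, session_data: str) -> None:
        os.makedirs(save_dir, exist_ok=True)
        with open(os.path.join(save_dir, save_name), "w") as f:
            f.write(session_data)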

Version
N/A

Additional context
N/A

Issues with updates - tool not starting

Trying to test the new changes to the tool, I ran into the following error as soon as I tried. As you can see, test_connection seems to work:

python3 test_connection.py 
#### Test connection for chatgpt cookie
1. You're connected with ChatGPT Plus cookie. 
To start PentestGPT, please use <python3 main.py --reasoning_model=gpt-4 --useAPI=False>
#### Test connection for OpenAI api (GPT-4)
The OpenAI API key is not properly configured. Please follow README to update OpenAI API key in config/chatgpt_config.py
#### Test connection for OpenAI api (GPT-3.5)
The OpenAI API key is not properly configured. Please follow README to update OpenAI API key in config/chatgpt_config.py

but main doesn't:

python3 main.py --reasoning_model=gpt-4 --useAPI=False
- ChatGPT Sessions Initialized.
Please describe the penetration testing task in one line, including the target IP, task type, e
> test localhost 127.0.0.1
Traceback (most recent call last):
  File "/media/psf/Documents/GitHub/PentestGPT/main.py", line 25, in <module>
    pentestGPTHandler.main()
  File "/media/psf/Documents/GitHub/PentestGPT/utils/pentest_gpt.py", line 466, in main
    _response = self.reasoning_handler(prefixed_init_description)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/media/psf/Documents/GitHub/PentestGPT/utils/pentest_gpt.py", line 128, in reasoning_handler
    response = self.chatGPT4Agent.send_message(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/media/psf/Documents/GitHub/PentestGPT/utils/chatgpt_api.py", line 92, in send_message
    conversation = self.conversation_dict[conversation_id]
                   ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
KeyError: None

Is anyone else having this issue? or am I missing something?

Unclear which part of request header cookie to use

In the installation instructions step number two ⬇️:
"Configure the cookies in config. You may follow a sample by cp config/chatgpt_config_sample.py config/chatgpt_config.py.
If you're using cookie:
Login to ChatGPT session page.
In Inspect - Network, find the connections to the ChatGPT session page.
Find the cookie in the request header in the request to https://chat.openai.com/api/auth/session and paste it into the cookie field of config/chatgpt_config.py. (You may use Inspect->Network, find session and copy the cookie field in request_headers to https://chat.openai.com/api/auth/session)"

It says to use the cookie in the request header, however, the cookie in the request header is made up of multiple different cookies such as:
'__Host-next-auth.csrf-token'
'__cf_bm'
'__Secure-next-auth.callback-url'
'_cfuvid'
'intercom-session-dgkjq2bp'

I've tried using the one big cookie that is a combination of all of these, which shows up as 'Cookie' in the request header, and I've also tried using some of the values of the individual cookies contained within, all to no avail (though I haven't tested each of the individual values yet).

Does anyone know if I should be using one of the particular cookies that I listed, or the big cookie where they are all combined? I've been trying to get this right for a few hours now with no success, so any help would be greatly appreciated!

Config section is not clear

Spent a couple of hours trying to get this tool running, but no luck. There are many cookies once you log in to ChatGPT (browser -> dev tools -> storage -> cookies); it is not clear which values I should pick and in what format, as the tool makes HTTP requests.

[Feature] Implementing Metasploit

Implement the parser for Metasploit. Sample prompt:

I want you to act as a penetration tester and perform a tutorial session for students. You can use Metasploit as the tool to detect vulnerabilities on a mock website. You should react based on the terminal outputs I give you, and always return me the commands to operate next. You should repeat until a sql vulnerability is identified. Then you should tell the students "vulnerability identified!!!". Do you understand?

[Improvements] Change prompt option design.

In the current version, the user chooses the next todo from an option list. This is not ideal because more options will be supported in the future, making the option list long and messy.
Instead, an msfconsole-like solution is preferred: users type in specific commands, and an auto-completion option should be provided.

Executing potentially harmful codes in terminal

The current command_execution.py may result in the execution of harmful code in the terminal.
We need to come up with a way to sandbox the command-line execution, or at least catch potentially dangerous executions; a conservative allowlist guard is sketched below.
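
The sketch below is only a first line of defence, not a real sandbox; the allowlist contents are illustrative.

    # Hypothetical guard for command_execution.py: only allow commands whose
    # first token is on an allowlist; everything else needs manual review.
    import shlex

    ALLOWED_TOOLS = {"nmap", "nikto", "gobuster", "curl", "whatweb"}

    def is_command_allowed(command: str) -> bool:
        try:
            first_token = shlex.split(command)[0]
        except (ValueError, IndexError):
            return False
        return first_token in ALLOWED_TOOLS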

Problem with setting cookies

Following your tutorial, I copied the cookie from the request header to https://chat.openai.com/api/auth/session and pasted it into the cookie field of config/chatgpt_config.py, but running python3 test_connection.py has been unable to connect properly. I have also tried using the cookie from https://chat.openai.com/backend-api/conversations, and it doesn't work either. Can you provide a detailed installation tutorial? Thank you very much!

TypeError: can only concatenate str (not "tuple") to str

gostrolucky@ubuntu:/Users/gostrolucky/Downloads/PentestGPT$ python3 main.py --reasoning_model=gpt-4
- ChatGPT Sessions Initialized.
Please describe the penetration testing task in one line, including the target IP, task type, etc.
> Hello world
- Task information generated.

PentestGPT suggests you to do the following:
(None, None)
Traceback (most recent call last):
  File "/Users/gostrolucky/Downloads/PentestGPT/main.py", line 27, in <module>
    pentestGPTHandler.main()
  File "/Users/gostrolucky/Downloads/PentestGPT/utils/pentest_gpt.py", line 479, in main
    "PentestGPT", "PentestGPT suggests you to do the following: \n" + _response
TypeError: can only concatenate str (not "tuple") to str
gostrolucky@ubuntu:/Users/gostrolucky/Downloads/PentestGPT$ python3 --version
Python 3.10.7

Azure compability

OpenAI Azure is not working with this. A quick workaround for people who want to use Azure OpenAI now is to adjust chatgpt_api.py (you could do this properly via the config):

    openai.api_key = config.openai_key
    openai.api_type = "azure"
    openai.api_base = "https://{deploymentname}.openai.azure.com/"
    openai.api_version = "2023-03-15-preview"

    self.history_length = 3  # maintain 3 messages in the history (3 chat memory)
    self.conversation_dict: Dict[str, Conversation] = {}

def chatgpt_completion(self, history: List, model="gpt-3.5-turbo") -> str:
    if self.config.model == "gpt-4":
        model = "gpt-4"
        engine = "{gpt4deployment}"
    else:
        engine = "{gpt3.5deployment}"

Login issues - unable to login

Hello @GreyDGL

Great work.

I have been trying to use the tool, however, I am unable to because I could not login even after supplying the full cookie and other required parameters. I keep getting a 403 response.

In addition to that, may I ask if a paid account is required to use this or a free account should work?

Thanks.

get_latest_message_id fails and errors out

Using ChatGPT interface:

Traceback (most recent call last):
  File "/workspace/main.py", line 19, in <module>
    pentestGPTHandler.main()
  File "/workspace/utils/pentest_gpt.py", line 466, in main
    _response = self.reasoning_handler(prefixed_init_description)
  File "/workspace/utils/pentest_gpt.py", line 128, in reasoning_handler
    response = self.chatGPT4Agent.send_message(
  File "/workspace/utils/chatgpt.py", line 184, in send_message
    message_id = self.get_latest_message_id(conversation_id)
  File "/workspace/utils/chatgpt.py", line 110, in get_latest_message_id
    return r.json()["current_node"]
KeyError: 'current_node'

https://github.com/GreyDGL/PentestGPT/blob/main/utils/chatgpt.py#L106-L110

Bypassing this leads to no recommendations.

Cookie error

Hi, when I try to install, I get an error :

The cookie is not properly configured. Please follow README to update cookie in config/chatgpt_config.py

I use the version published this morning.

I followed the README procedure, accessing the link and copying the "cookie" field from the request and pasting it into the chatgpt_config.py file

Crash after TODO output

I wish I could be more useful here. I tried looking in the loguru logs, but they don't show much. This is the only error I had, no traceback.

[screenshot of the error]

UnicodeEncodeError

So I have been trying to test the connection using the appropriate python file, but I consistently get the same error:

Traceback (most recent call last):
File "/Users/gold/Documents/Programs/PentestGPT/test_connection.py", line 13, in
chatgpt = ChatGPT(chatgpt_config)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gold/Documents/Programs/PentestGPT/utils/chatgpt.py", line 78, in init
self.headers["authorization"] = self.get_authorization()
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gold/Documents/Programs/PentestGPT/utils/chatgpt.py", line 84, in get_authorization
r = requests.get(url, headers=self.headers)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/requests/adapters.py", line 489, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/urllib3/connectionpool.py", line 398, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/opt/homebrew/lib/python3.11/site-packages/urllib3/connection.py", line 244, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "/opt/homebrew/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1283, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/opt/homebrew/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1324, in _send_request
self.putheader(hdr, value)
File "/opt/homebrew/lib/python3.11/site-packages/urllib3/connection.py", line 224, in putheader
_HTTPConnection.putheader(self, header, *values)
File "/opt/homebrew/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1256, in putheader
values[i] = one_value.encode('latin-1')
^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeEncodeError: 'latin-1' codec can't encode character '\u2026' in position 512: ordinal not in range(256)

I go to Inspect -> Network, filter using the session URL, click Headers, and go to Request Headers. I copied the entire value for the cookie and put it as the value in the chatgpt_config.py file, and it generates the above error. I wanted to know if this is an issue on my side or a possible error with something that was included?

'more' option crashes program

When using the 'more' option, the program will crash with:

Exception: 'pentestGPT' object has no attribute 'step_reasoning_response'

Full session before crash

C:\home\kali\PentestGPT> python3 main.py --reasoning_model=gpt-4
- ChatGPT Sessions Initialized.
Please describe the penetration testing task in one line, including the target IP, task type, etc.
> HackTheBox single host challenge. Windows machine. 10.10.10.161.
- Task information generated. 

PentestGPT suggests you to do the following: 
Based on the information provided, I'll update the tasks and provide a recommendation for the next task. 

Initial tasks:
1. Information gathering
   1.1 Perform network scanning
   1.2 OS fingerprinting
   1.3 Service enumeration

Next Task: 
1.1 Perform network scanning - Use a tool like Nmap to scan for open ports and running services on the target machine (10.10.10.161).
You may start with:
Sure, here's an example of what the first step could look like:

Task: Perform a port scan on the target IP address to identify open ports.

Command: 

nmap <target_ip>

Description: 
The 'nmap' command is used to perform a port scan on the target IP address. This will help identify which ports are open and what services are running on them. The output will provide important information about the target system's 
architecture, operating system, and potentially vulnerable services. You can use the '-A' option to enable OS detection, version detection, script scanning, and traceroute all in one command. 

Example: 

nmap -A 192.168.1.100

This command will perform a comprehensive scan on the IP address '192.168.1.100'. It will use aggressive scanning techniques to identify open ports, running services, and the operating system. It may also run scripts to detect 
vulnerabilities. The output will be displayed in the terminal window.
>  more
Invalid task, try again.
> more
Exception: 'pentestGPT' object has no attribute 'step_reasoning_response'

'KeyError: None' when running main.py

Recently I was getting an error (the "could not encode" error) that was fixed with the latest release of PentestGPT. I am now getting the following error when I run it with the command python3 main.py --reasoning_model=gpt-4 --useAPI=False:

python3 main.py --reasoning_model=gpt-4 --useAPI=False

  • ChatGPT Sessions Initialized.
    Please describe the penetration testing task in one line, including the target IP, task type, etc.

i want to test 10.10.11.189
Traceback (most recent call last):
File "/Users/gold/Documents/Programs/PentestGPT/main.py", line 25, in
pentestGPTHandler.main()
File "/Users/gold/Documents/Programs/PentestGPT/utils/pentest_gpt.py", line 466, in main
_response = self.reasoning_handler(prefixed_init_description)
File "/Users/gold/Documents/Programs/PentestGPT/utils/pentest_gpt.py", line 128, in reasoning_handler
response = self.chatGPT4Agent.send_message(
File "/Users/gold/Documents/Programs/PentestGPT/utils/chatgpt_api.py", line 92, in send_message
conversation = self.conversation_dict[conversation_id]
KeyError: None

Again, this seems to be an error that may relate to the cookie and the program's ability to send and receive messages from ChatGPT. I do have ChatGPT Plus, so that is not the issue (I'm using the cookie for GPT-4). I will check later whether it is still working on my Windows device.

Error when launching

This is the error I get when I run it, using Python 3.10 in a conda env.

Traceback (most recent call last):
File "/home/kp/ai/pentestgpt/test_connection.py", line 4, in
from utils.chatgpt import ChatGPT
File "/home/kp/ai/pentestgpt/utils/chatgpt.py", line 14, in
from config.chatgpt_config import ChatGPTConfig
ModuleNotFoundError: No module named 'config.chatgpt_config'

PentestGPT Thinking... but GPT-4 already finished processing

Describe the bug

Using cookie authentication.
Apparently there is no timeout when waiting for the GPT-4 reasoning session result,
but for some unknown reason, while GPT-4 has completed its task, pentestGPT is still waiting for a response.
pentestGPT shows the message PentestGPT Thinking... and it takes ages before I decide to press CTRL-C.

possible remediation:

  1. the pentestGPT timeout should be aligned with the GPT-4 timeout
  2. in case of a timeout without a response returned, we could be offered to copy-paste the GPT-4 response and carry on, or quit

To Reproduce
Not sure how to reproduce, it may just happen

Expected behavior
Since I can see GPT-4 answer, I would have expected pentestGPT to see it as well.

Screenshots
N/A

Version
cookies authentication

Additional context

⠼  PentestGPT Thinking...
⠏  PentestGPT Thinking...
⠙  PentestGPT Thinking...
Traceback (most recent call last):
  File "/Users/pentester01/github/GreyDGL/PentestGPT/main.py", line 27, in <module>
    pentestGPTHandler.main()
  File "/Users/pentester01/github/GreyDGL/PentestGPT/utils/pentest_gpt.py", line 490, in main
    result = self.input_handler()
             ^^^^^^^^^^^^^^^^^^^^
  File "/Users/pentester01/github/GreyDGL/PentestGPT/utils/pentest_gpt.py", line 298, in input_handler
    reasoning_response = self.reasoning_handler(parsed_input)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/pentester01/github/GreyDGL/PentestGPT/utils/pentest_gpt.py", line 128, in reasoning_handler
    response = self.chatGPT4Agent.send_message(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/pentester01/github/GreyDGL/PentestGPT/utils/chatgpt.py", line 236, in send_message
    result = self._parse_message_raw_output(r)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/pentester01/github/GreyDGL/PentestGPT/utils/chatgpt.py", line 130, in _parse_message_raw_output
    for line in response.iter_lines():
  File "/Users/pentester01/.pyenv/versions/v3/lib/python3.11/site-packages/requests/models.py", line 865, in iter_lines
    for chunk in self.iter_content(
  File "/Users/pentester01/.pyenv/versions/v3/lib/python3.11/site-packages/requests/models.py", line 816, in generate
    yield from self.raw.stream(chunk_size, decode_content=True)
  File "/Users/pentester01/.pyenv/versions/v3/lib/python3.11/site-packages/urllib3/response.py", line 932, in stream
    yield from self.read_chunked(amt, decode_content=decode_content)
  File "/Users/pentester01/.pyenv/versions/v3/lib/python3.11/site-packages/urllib3/response.py", line 1075, in read_chunked
    chunk = self._handle_chunk(amt)
            ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/pentester01/.pyenv/versions/v3/lib/python3.11/site-packages/urllib3/response.py", line 1017, in _handle_chunk
    value = self._fp._safe_read(amt)  # type: ignore[union-attr]
            ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/pentester01/.pyenv/versions/3.11.3/lib/python3.11/http/client.py", line 631, in _safe_read
    data = self.fp.read(amt)
           ^^^^^^^^^^^^^^^^^
  File "/Users/pentester01/.pyenv/versions/3.11.3/lib/python3.11/socket.py", line 706, in readinto
    return self._sock.recv_into(b)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/pentester01/.pyenv/versions/3.11.3/lib/python3.11/ssl.py", line 1278, in recv_into
    return self.read(nbytes, buffer)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/pentester01/.pyenv/versions/3.11.3/lib/python3.11/ssl.py", line 1134, in read
    return self._sslobj.read(len, buffer)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyboardInterrupt


[Improvement] Limit input length

When the output from the terminal is too long, it is not possible for ChatGPT to read the full information effectively. We need a parser to filter out the non-useful information generated by the tools (SQLmap, for example).

Proposed changes.

  • Remove useless information from the terminal output through some heuristic methods
  • Double-confirm a detected vulnerability through keyword mapping, because ChatGPT cannot always generate the ideal keyword at vulnerability detection. A rough sketch of the filtering step follows this list.
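
In the sketch below, the noise markers and the character cap are placeholders, and a real implementation should derive the limit from the model's context window.

    # Hypothetical output filter: drop obviously uninformative lines and cap
    # the total length before sending tool output to the LLM.
    NOISE_MARKERS = ("[*]", "[info]", "starting", "progress")
    MAX_CHARS = 8000  # illustrative cap

    def condense_tool_output(raw: str) -> str:
        lines = [
            line for line in raw.splitlines()
            if line.strip() and not line.lower().startswith(NOISE_MARKERS)
        ]
        condensed = "\n".join(lines)
        if len(condensed) > MAX_CHARS:
            condensed = condensed[:MAX_CHARS] + "\n[... output truncated ...]"
        return condensed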

Demo HTB Video

Observation: if you were to Google an HTB box name, your first three results would likely include the solution guide/walkthrough for that box.

Suggestion: the demonstration video would better demonstrate value if the target were not a host with a known set of vulnerabilities/misconfigurations that are widely published on the internet.

CHATGPT

Should I use ChatGPT Plus, or can I use the normal ChatGPT?

ModuleNotFoundError: No module named 'config'

I'm following the directions in Readme.md

When I enter:
python3 utils/chatgpt.py

I get:
Traceback (most recent call last):
File "/home/user/tools/PentestGPT/utils/chatgpt.py", line 13, in
import config.chatgpt_config
ModuleNotFoundError: No module named 'config'

Not sure what's happening.

Hello,

I'm getting the following error:

C:\Users\Jerem\Downloads\chatgpt\PentestGPT>python3 main.py --reasoning_model=gpt-4 --useAPI

  • ChatGPT Sessions Initialized.
    Please describe the penetration testing task in one line, including the target IP, task type, etc.

I want to perform a penetration test on a web application. The target IP is 172.67.220.53
Traceback (most recent call last):
File "C:\Users\Jerem\Downloads\chatgpt\PentestGPT\main.py", line 25, in
pentestGPTHandler.main()
File "C:\Users\Jerem\Downloads\chatgpt\PentestGPT\utils\pentest_gpt.py", line 466, in main
_response = self.reasoning_handler(prefixed_init_description)
File "C:\Users\Jerem\Downloads\chatgpt\PentestGPT\utils\pentest_gpt.py", line 128, in reasoning_handler
response = self.chatGPT4Agent.send_message(
File "C:\Users\Jerem\Downloads\chatgpt\PentestGPT\utils\chatgpt_api.py", line 92, in send_message
conversation = self.conversation_dict[conversation_id]
KeyError: None

I'm not sure what I'm doing wrong.

[feature] Consider adding cost estimation when using API

Since the project is using the OpenAI API now, it would be great if we could know the estimated cost for each conversation/session. This feature can be useful for benchmarking, or simply to give people who want to use the tool an idea of how much it will cost them.

Some implementation ideas (a minimal sketch of the token counting follows this list):

  • count tokens using libraries like https://github.com/openai/tiktoken
  • calculate the cost of the entire session when it finishes (as the first step)
  • provide an option to display the number of tokens in each request and response (including the backend reasoning sessions)
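
The sketch below only covers the token-counting part with tiktoken; the per-token prices are placeholders and must be replaced with current OpenAI pricing.

    # Hypothetical cost estimate for a finished session using tiktoken.
    import tiktoken

    PRICE_PER_1K_TOKENS = {"prompt": 0.03, "completion": 0.06}  # example GPT-4 rates, verify before use

    def estimate_cost(prompts: list[str], completions: list[str], model: str = "gpt-4") -> float:
        enc = tiktoken.encoding_for_model(model)
        prompt_tokens = sum(len(enc.encode(t)) for t in prompts)
        completion_tokens = sum(len(enc.encode(t)) for t in completions)
        return (
            prompt_tokens / 1000 * PRICE_PER_1K_TOKENS["prompt"]
            + completion_tokens / 1000 * PRICE_PER_1K_TOKENS["completion"]
        )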

Use an API key

Would it be possible to propose the use of API keys as a method of authentication?

I get this error

PentestGPT suggests you to do the following:
(None, None)
Traceback (most recent call last):
File "/home/kali/Desktop/PentestGPT/main.py", line 27, in
pentestGPTHandler.main()
File "/home/kali/Desktop/PentestGPT/utils/pentest_gpt.py", line 610, in main
self.initialize(previous_session_ids=loaded_ids)
File "/home/kali/Desktop/PentestGPT/utils/pentest_gpt.py", line 194, in initialize
self._feed_init_prompts()
File "/home/kali/Desktop/PentestGPT/utils/pentest_gpt.py", line 128, in _feed_init_prompts
"PentestGPT", "PentestGPT suggests you to do the following: \n" + _response
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~
TypeError: can only concatenate str (not "tuple") to str

Unable to use <SHIFT + right arrow> Feature

I am currently attempting to run PentestGPT in Kali Linux. However, I seem to be encountering issues with the <Shift + Right Arrow> feature, which is used to select an item and move to the next line. I am stuck at the input section below and cannot select anything to enter my input. This is preventing me from proceeding to the next step as the <Shift + Right Arrow> function does not seem to be working.

[screenshot]

API and cookie does not work at all

I tried to get the cookies from the Chrome session and I also set an API key, but test_connection.py always gives a "not properly configured" output.

What is wrong here?
Maybe you can give us a list of which cookies are really necessary.
Should they be separated by a space or not?
What is wrong with using the API? I set a new secret, copied and pasted it over, and it just got ignored. I used model gpt-4, gpt-3.5; nothing works.
