License: MIT License
```python
guidance_dir = (ROOT / "." / "data" / "guidance").as_posix()
docs_tool = DirectoryReadTool(directory=guidance_dir)
file_tool = FileReadTool()
```
By default these tools don't work together: the file read fails with Error 2 (file not found). I fixed this by creating a custom variant of FileReadTool that calls
file_path = file_path.strip().strip('"').strip("'")
before reading the file.
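The quote-stripping workaround described above can be sketched as a small helper; a FileReadTool subclass would apply it to `file_path` before opening the file (the helper name is mine, not part of crewai_tools):

```python
def clean_llm_path(file_path: str) -> str:
    """Strip whitespace and surrounding quotes that LLMs often add to paths."""
    return file_path.strip().strip('"').strip("'")

# A custom FileReadTool variant would run this before reading, e.g.:
#   cleaned = clean_llm_path(raw_path)
#   with open(cleaned) as f: ...
```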
The function is called once without iterating before it is called again and iterated upon. I am not sure whether there is a reason for such a redundant pattern. It doesn't break anything, but it is odd.
Description
Continuation of #924, which was closed without providing a long-term solution.
With langchain-core==0.2.16, a new required argument was added to StructuredTool._run() as shown here. This breaks existing implementations using crewai_tools.BaseTool.
Steps to Reproduce
Create a custom tool using BaseTool.
Expected behavior
Elaborated on in #924, but this works with previous versions of langchain-core.
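The breakage can be reproduced with a minimal stand-in for the new signature (simplified here; the real StructuredTool does much more than this sketch):

```python
class FakeStructuredTool:
    """Simplified stand-in for StructuredTool in langchain-core >= 0.2.16."""

    def _run(self, *args, config, **kwargs):
        # `config` is now keyword-only and required, which breaks callers
        # written against the older signature.
        return "ok"


tool = FakeStructuredTool()

# Old calling convention, as used by wrappers built before the change:
try:
    tool._run("some query")
except TypeError as exc:
    error = str(exc)  # "... missing 1 required keyword-only argument: 'config'"

# Passing config explicitly (even None) satisfies the new signature:
result = tool._run("some query", config=None)
```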
Screenshots/Code snippets
If applicable, add screenshots or code snippets to help explain your problem.
Environment Details:
Python: 3.11.9
crewai: 0.35.5
crewai-tools: 0.4.26
Logs
Thought: <LLM output>
Action: custom_crew
Action Input: {"query": "<LLM-derived query>"}
I encountered an error while trying to use the tool. This was the error: StructuredTool._run() missing 1 required keyword-only argument: 'config'.
Possible Solution
None
Additional Context
langchain==0.2.12
langchain-core==0.2.28
Search is NOT limited to the given txt file.
```python
from crewai_tools import TXTSearchTool

txt_search_tool = TXTSearchTool(
    txt="kunst.txt",
    config=dict(
        llm=dict(
            provider="ollama",
            config=dict(
                model="llama3.1",
            ),
        ),
        embedder=dict(
            provider="ollama",
            config=dict(
                model="mxbai-embed-large",
            ),
        ),
    )
)
```
There is no error message, and at the end the best result is shown, but in between (verbose output) it also shows snippets from other sources, e.g. from a PDF that was searched earlier by XMLSearchTool in a separate script.
Normally I define an LLM configuration to pass to an agent like this:
```python
llm = ChatOpenAI(name=self.model_name,
                 openai_api_base=self.openai_api_base,
                 openai_api_key=token,
                 default_headers=headers,
                 http_client=httpx.Client(verify=path_cert))
return Agent(
    role='Senior Software Engineer',
    goal='Create software as needed',
    backstory=dedent("""\
        """),
    allow_delegation=False,
    llm=llm,
    verbose=True
)
```
For the directory search tool it is now done in a different manner:
```python
llm=dict(
    provider="openai",  # or google, anthropic, llama2, ...
    config=dict(
        model="llama2-70b",
        api_key=token,
        base_url="<url>",
        # temperature=0.5,
        # top_p=1,
        # stream=true,
    ),
),
embedder=dict(
    provider="openai",
    config=dict(
        api_key=token,
        model="...",
        api_base="<url>",
    ),
),
```
The above configuration runs, but the tool still cannot be accessed:
I encountered an error while trying to use the tool. This was the error: Connection error..
Tool Search a directory's content accepts these inputs: Search a directory's content(search_query: 'string') - A tool that can be used to semantic search a query the <dir> directory's content.
How should I configure the llm and embedder to use the same settings as above? I found the keys model, base_url, and api_key, but how do I deal with headers and http_client?
Thanks
While the configuration options provided are great, I feel that an option to select the indexing algorithm in the configuration would also be nice.
I am using SerperDevTool for a web search task. I have the account and all credentials to use the Serper API. But when I kick off my crew object, I get the following error:
I encountered an error while trying to use the tool. This was the error: HTTPSConnectionPool(host='google.serper.dev', port=443): Max retries exceeded with url: /search (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)'))).
Tool Search the internet accepts these inputs: Search the internet(search_query: 'string') - A tool that can be used to semantic search a query from a txt's content.
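CERTIFICATE_VERIFY_FAILED generally means Python's HTTP stack cannot find a CA bundle that validates the server's certificate chain (common behind corporate proxies that re-sign TLS traffic). A hedged sketch of the usual diagnosis and workaround; the bundle path is a placeholder you must supply yourself:

```python
import os
import ssl

# Show where this Python interpreter looks for CA certificates by default.
print(ssl.get_default_verify_paths())

# Point the standard environment variables at a bundle that includes the
# missing issuer; the ssl module and httpx honor SSL_CERT_FILE, while
# requests honors REQUESTS_CA_BUNDLE. The path below is a placeholder.
ca_bundle = "/path/to/ca-bundle.pem"
os.environ["SSL_CERT_FILE"] = ca_bundle
os.environ["REQUESTS_CA_BUNDLE"] = ca_bundle
```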
I am attempting to perform RAG (Retrieval-Augmented Generation) using a CSV file, with AWS Bedrock as the provider.
Here is the code I used:
```python
tool = CSVSearchTool(
    csv=r'path/to/file.csv',
    config=dict(
        llm=dict(
            provider="aws_bedrock",
            config=dict(
                model="amazon.titan-text-express-v1",
            ),
        ),
        embedder=dict(
            provider="aws_bedrock",
            config=dict(
                model="amazon.titan-embed-text-v2:0",
            ),
        ),
    )
)
```
However, it seems the embedding model on Bedrock is not supported.
Logs:
\path\to\venv\lib\site-packages\langchain\_api\module_import.py:87: LangChainDeprecationWarning: Importing GuardrailsOutputParser from langchain.output_parsers is deprecated. Please replace the import with the following:
from langchain_community.output_parsers.rail_parser import GuardrailsOutputParser
warnings.warn(
Traceback (most recent call last):
File "\path\to\venv\lib\site-packages\schema.py", line 542, in validate
return s.validate(data, **kwargs)
File "\path\to\venv\lib\site-packages\schema.py", line 219, in validate
raise SchemaError(
schema.SchemaError: Or('openai', 'gpt4all', 'huggingface', 'vertexai', 'azure_openai', 'google', 'mistralai', 'nvidia') did not validate 'aws_bedrock'
'openai' does not match 'aws_bedrock'
'gpt4all' does not match 'aws_bedrock'
'huggingface' does not match 'aws_bedrock'
'vertexai' does not match 'aws_bedrock'
'azure_openai' does not match 'aws_bedrock'
'google' does not match 'aws_bedrock'
'mistralai' does not match 'aws_bedrock'
'nvidia' does not match 'aws_bedrock'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "\path\to\venv\lib\site-packages\schema.py", line 483, in validate
nvalue = Schema(
File "\path\to\venv\lib\site-packages\schema.py", line 544, in validate
raise SchemaError(
schema.SchemaError: Or('openai', 'gpt4all', 'huggingface', 'vertexai', 'azure_openai', 'google', 'mistralai', 'nvidia') did not validate 'aws_bedrock'
'openai' does not match 'aws_bedrock'
'gpt4all' does not match 'aws_bedrock'
'huggingface' does not match 'aws_bedrock'
'vertexai' does not match 'aws_bedrock'
'azure_openai' does not match 'aws_bedrock'
'google' does not match 'aws_bedrock'
'mistralai' does not match 'aws_bedrock'
'nvidia' does not match 'aws_bedrock'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "\path\to\venv\lib\site-packages\schema.py", line 483, in validate
nvalue = Schema(
File "\path\to\venv\lib\site-packages\schema.py", line 489, in validate
raise SchemaError(
schema.SchemaError: Key 'provider' error:
Or('openai', 'gpt4all', 'huggingface', 'vertexai', 'azure_openai', 'google', 'mistralai', 'nvidia') did not validate 'aws_bedrock'
'openai' does not match 'aws_bedrock'
'gpt4all' does not match 'aws_bedrock'
'huggingface' does not match 'aws_bedrock'
'vertexai' does not match 'aws_bedrock'
'azure_openai' does not match 'aws_bedrock'
'google' does not match 'aws_bedrock'
'mistralai' does not match 'aws_bedrock'
'nvidia' does not match 'aws_bedrock'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "\path\to\src\simple_bot.py", line 19, in <module>
csv_cost_tool = CSVSearchTool(csv=r'path/to/file.csv',
File "\path\to\venv\lib\site-packages\crewai_tools\tools\csv_search_tool\csv_search_tool.py", line 32, in __init__
super().__init__(**kwargs)
File "\path\to\venv\lib\site-packages\pydantic\main.py", line 176, in __init__
self.__pydantic_validator__.validate_python(data, self_instance=self)
File "\path\to\venv\lib\site-packages\crewai_tools\tools\rag\rag_tool.py", line 47, in _set_default_adapter
app = App.from_config(config=self.config) if self.config else App()
File "\path\to\venv\lib\site-packages\embedchain\app.py", line 363, in from_config
validate_config(config_data)
File "\path\to\venv\lib\site-packages\embedchain\utils\misc.py", line 503, in validate_config
return schema.validate(config_data)
File "\path\to\venv\lib\site-packages\schema.py", line 489, in validate
raise SchemaError(
schema.SchemaError: Key 'embedder' error:
Key 'provider' error:
Or('openai', 'gpt4all', 'huggingface', 'vertexai', 'azure_openai', 'google', 'mistralai', 'nvidia') did not validate 'aws_bedrock'
'openai' does not match 'aws_bedrock'
'gpt4all' does not match 'aws_bedrock'
'huggingface' does not match 'aws_bedrock'
'vertexai' does not match 'aws_bedrock'
'azure_openai' does not match 'aws_bedrock'
'google' does not match 'aws_bedrock'
'mistralai' does not match 'aws_bedrock'
'nvidia' does not match 'aws_bedrock'
Due to this issue, AWS Bedrock seems to be unusable in this context. Is there any workaround for this problem?
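The traceback shows embedchain validating the provider name against a fixed whitelist, and 'aws_bedrock' is simply not in it on this version. A small pre-flight check (whitelist copied from the schema error above; it may differ in other embedchain versions) fails fast with a clearer message before tool construction:

```python
# Provider whitelist as reported by the schema error above.
SUPPORTED_PROVIDERS = {
    "openai", "gpt4all", "huggingface", "vertexai",
    "azure_openai", "google", "mistralai", "nvidia",
}


def check_provider(provider: str) -> str:
    """Raise early, with a clear message, if embedchain would reject the provider."""
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(
            f"{provider!r} is not a supported embedchain provider: "
            f"{sorted(SUPPORTED_PROVIDERS)}"
        )
    return provider
```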
"It seems we encountered an unexpected error while trying to use the tool. This was the error: 'OPENAI_API_KEY'"
I got this message while trying to use tools. Can you please let me know how to switch to local LLM models?
Has anyone got an example of how to use this tool in an agent/task setup? E.g., how do I tell the LLM to search a specific repo using a specific GitHub token?
I see an example in the tool's README.md, but the LLM I am using mostly ignores the repo and token.
I'm new to CrewAI and would really appreciate it if someone has a working example of how this could be used.
Thanks
It would be great to see crewAI tool examples in the docs.
Currently, if a PDFSearchTool is instantiated like so:
pdf_search = PDFSearchTool(pdf='pdf_2.pdf')
And then a query is made to the vector database (typically a chromadb vector database via embedchain), the results returned will not be limited to the provided PDF if other documents have already been embedded into the vector database.
Something like this ought to work:
An updated version of EmbedchainAdapter which saves the pdf "source" when it is added to the vector database, and uses the "source" when querying the vector database (to filter for only that pdf).
```python
from typing import Any, Optional, Type

from embedchain import App
from embedchain.models.data_type import DataType
from pydantic import BaseModel, model_validator

# Imports below were added for completeness; the exact crewai_tools-internal
# paths may vary, and PDFSearchToolSchema / FixedPDFSearchToolSchema come from
# the original pdf_search_tool module.
from crewai_tools.tools.rag.rag_tool import Adapter, RagTool


class PDFEmbedchainAdapter(Adapter):
    embedchain_app: App
    summarize: bool = False
    src: Optional[str] = None

    def query(self, question: str) -> str:
        print("Querying pdf from embedchain")
        print("pdf source: ", self.src)
        where = (
            {"app_id": self.embedchain_app.config.id, "source": self.src}
            if self.src
            else None
        )
        result, sources = self.embedchain_app.query(
            # todo: this where clause is not working for some reason
            question, citations=True, dry_run=(not self.summarize), where=where
        )
        if self.summarize:
            return result
        return "\n\n".join([source[0] for source in sources])

    def add(self, *args: Any, **kwargs: Any) -> None:
        print("Adding pdf to embedchain")
        print("pdf source: ", args[0])
        self.src = args[0]
        self.embedchain_app.add(*args, **kwargs)


# Vanilla PDFSearchTool which uses the updated PDFEmbedchainAdapter
class PDFSearchTool(RagTool):
    name: str = "Search a PDF's content"
    description: str = (
        "A tool that can be used to semantic search a query from a PDF's content."
    )
    args_schema: Type[BaseModel] = PDFSearchToolSchema

    def __init__(self, pdf: Optional[str] = None, **kwargs):
        super().__init__(**kwargs)
        if pdf is not None:
            self.add(pdf)
            self.description = f"A tool that can be used to semantic search a query the {pdf} PDF's content."
            self.args_schema = FixedPDFSearchToolSchema
            self._generate_description()

    @model_validator(mode="after")
    def _set_default_adapter(self):
        if isinstance(self.adapter, RagTool._AdapterPlaceholder):
            app = App.from_config(config=self.config) if self.config else App()
            self.adapter = PDFEmbedchainAdapter(
                embedchain_app=app, summarize=self.summarize
            )
        return self

    def add(self, *args: Any, **kwargs: Any) -> None:
        kwargs["data_type"] = DataType.PDF_FILE
        super().add(*args, **kwargs)

    def _before_run(self, query: str, **kwargs: Any) -> Any:
        if "pdf" in kwargs:
            self.add(kwargs["pdf"])
```
I am trying to use CrewAI for a travel booking use case. I have an agent with a task assigned to it. The task's description is to gather information from the human, and I want it to be as interactive as possible. I set human_input=True for that task, but the agent assumes some random information and only asks for human input at the end of the task.
My question is:
How to get human input in the middle of the task?
Do I refine my prompt? Should I give the task a better description?
Should I give the task a better expected_output?
Can I use any other task attribute to make the human_input more interactive?
Currently there is a dependency on langchain<0.2.0. However, langchain is now at 0.2.6 and is needed to install langchain-community. CrewAI cannot use langchain-community at the moment.
I see you guys added FireCrawl support, but the PR didn't add it as a tool to crewai_tools/__init__.py, so it can't be used.
To reproduce:
```python
from crewai import Crew, Agent, Task, Process
from dotenv import load_dotenv
from crewai_tools import FirecrawlScrapeWebsiteTool
```
Error:
Traceback (most recent call last):
  File "test.py", line 3, in <module>
    from crewai_tools import FirecrawlScrapeWebsiteTool
ImportError: cannot import name 'FirecrawlScrapeWebsiteTool' from 'crewai_tools' (site-packages/crewai_tools/__init__.py)
crewai-tools is not installing with Python 3.10 on Ubuntu.
I tried the command below:
pip3.10 install crewai-tools
and I get the error below:
Defaulting to user installation because normal site-packages is not writeable
Collecting crewai-tools
Using cached crewai_tools-0.4.26-py3-none-any.whl.metadata (4.6 kB)
Requirement already satisfied: beautifulsoup4<5.0.0,>=4.12.3 in ./.local/lib/python3.10/site-packages (from crewai-tools) (4.12.3)
Requirement already satisfied: chromadb<0.5.0,>=0.4.22 in ./.local/lib/python3.10/site-packages (from crewai-tools) (0.4.24)
Collecting docker<8.0.0,>=7.1.0 (from crewai-tools)
Using cached docker-7.1.0-py3-none-any.whl.metadata (3.8 kB)
Collecting docx2txt<0.9,>=0.8 (from crewai-tools)
Using cached docx2txt-0.8.tar.gz (2.8 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [1 lines of output]
ERROR: Can not execute `setup.py` since setuptools is not available in the build environment.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
I tried multiple versions of the package (crewai-tools==0.4.25, 0.4.26, 0.4.8, 0.4.7, etc.) and also tried updating pip, but I get the same error.
I've encountered an issue with some tools (specifically TXTSearchTool and WebsiteSearchTool, but there may be more) where they do not function correctly unless the source file or URL is passed directly to the constructor during their creation.
Steps to Reproduce:
Create an instance of the tool without passing the source file/URL directly to the constructor.
Attempt to use the tool by passing the source file/URL through the agent (e.g., the agent finds and passes the URL or file path).
Observe that the tool fails to find anything, even though the call appears correct and the documentation suggests that the tool should work without parameters in the constructor.
Expected Behavior:
The tools should function correctly when the source file/URL is passed by the agent after the tool's creation, as indicated by the documentation.
Actual Behavior:
The tools do not find any results when the source file/URL is passed by the agent, but they work correctly if the parameters are provided directly to the constructor.
Examples:
TXTSearchTool and WebsiteSearchTool fail to find results when used by the agent with parameters passed after creation.
Directly passing the source file/URL to the constructor allows the tools to function as expected.
Environment:
python: 3.10.12 (tried also 3.11 and 3.12)
crewai: 0.30.11 (tried also older version and newest git version)
crewai-tools: 0.2.6 (tried also newest git version)
model: gpt3.5-turbo (but i tried gpt4, gpt4o, gpt4-turbo too with same result)
OS: linux mint, windows 11
I used a coding agent to execute a Python program and save the results to a CSV file at a local path:
```python
from crewai import Agent, Task, Crew

coding_agent = Agent(
    role="Python programmer",
    goal="Analyze question and generate a python code and execute the same",
    backstory="You are an experienced Python programmer with strong Python skills.",
    allow_code_execution=True,
    llm=llm,
    output_file="output/results.csv",
)

# Create a task that requires code execution
data_analysis_task = Task(
    description="generate and execute a python program to read a CSV file in my local path <path name> and do these operations <operations> and save it in output path <output path>",
    agent=coding_agent,
    expected_output="Successful generation and execution of a python program",
)

# Create a crew and add the task
analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task],
)

# Execute the crew
result = analysis_crew.kickoff()
print(result)
```
While running the crew, I could see the correct code in the terminal, but once it completed there was no output file in the specified folder. It seems the generated code wasn't actually executed on my machine.
We have a file read tool which can read the contents of a local file, but there's no tool for writing a file, which I needed for my use case.
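Until a built-in write tool exists, one is easy to sketch; the function below is my own, and the commented wrapping at the end assumes crewai_tools' `@tool` decorator pattern:

```python
from pathlib import Path


def write_file(file_path: str, content: str) -> str:
    """Write content to file_path, creating parent directories as needed."""
    path = Path(file_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(content, encoding="utf-8")
    return f"Wrote {len(content)} characters to {file_path}"

# To expose this to an agent, wrap it with crewai_tools' decorator, e.g.:
#   from crewai_tools import tool
#   file_writer = tool("File Writer")(write_file)
```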
What are the differences between them?
Which one can I use to read the entire website by including its internal links/pages?
I am not sure whether they will either read the entire website or just the specified page.
I mean, all pages from the main provided page…
Can anyone help me understand what they do?
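For context on "reading the entire website": a crawler has to extract same-host links from each page and recurse over them; whether a given crewai tool does that is tool-dependent. A minimal stdlib sketch of the link-collection step (class and function names are mine):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse


class _LinkParser(HTMLParser):
    """Collect href values from anchor tags."""

    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)


def internal_links(base_url, html):
    """Return absolute same-host URLs linked from the given HTML page."""
    parser = _LinkParser()
    parser.feed(html)
    host = urlparse(base_url).netloc
    links = []
    for href in parser.hrefs:
        url = urljoin(base_url, href)       # resolve relative links
        if urlparse(url).netloc == host and url not in links:
            links.append(url)               # keep only same-host links, deduped
    return links
```

A whole-site crawler would repeat this on each discovered page until no new same-host URLs appear.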
I am getting an orjson.orjson module-not-found error while using crewai_tools. I tried pip install orjson, but the error wasn't resolved. How can I deal with this?
Does not work when using ollama locally:
```python
from crewai_tools import XMLSearchTool

xml_search_tool = XMLSearchTool(
    xml="test.xml",
    config=dict(
        llm=dict(
            provider="ollama",
            config=dict(
                model="llama3.1",
            ),
        ),
        embedder=dict(
            provider="ollama",
            config=dict(
                model="mxbai-embed-large",
            ),
        ),
    )
)
```
I get:
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)>
Hey guys, it would be awesome if you could also introduce a Qdrant adapter to allow CrewAI to interface with the Qdrant vector DB.
The new version of SerperDevTool is broken. I tried to fix it but gave up after the third error:
File "/home/jakub/CrewAI-Studio/venv/lib/python3.11/site-packages/crewai_tools/tools/serper_dev_tool/serper_dev_tool.py", line 30
save_file: bool = Field(default=False, description="Flag to determine whether to save the results to a file")
TabError: inconsistent use of tabs and spaces in indentation
File "/home/jakub/CrewAI-Studio/venv/lib/python3.11/site-packages/crewai_tools/tools/serper_dev_tool/serper_dev_tool.py", line 42
payload["gl"] = self.country if self.country
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: expected 'else' after 'if' expression
I encountered an error while trying to use the tool. This was the error: Object of type FieldInfo is not JSON serializable.
Tool Search the internet accepts these inputs: Search the internet(search_query: 'string') - A tool that can be used to search the internet with a search_query. search_query: 'Mandatory search query you want to use to search the internet'
```python
# Attempt to initialize query_tool
query_tool = LlamaIndexTool.from_query_engine(
    query_engine,
    name="Knowledge Graph Tool",
    description="Use this tool to answer the user query.",
)

chatAgent = Agent(
    # ... other config ...
    tools=[query_tool],
)
```
2024-08-29T13:32:02.389-04:00
Action: Knowledge Graph Tool
2024-08-29T13:32:03.132-04:00
Action Input: {"query": "bhp343"}
2024-08-29T13:32:03.132-04:00
I encountered an error while trying to use the tool. This was the error: string indices must be integers, not 'str'.
The agent works correctly without the tool. When I add the tool list with the LlamaIndex tool to the agent config, it gives this error and does not use the tool.
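"string indices must be integers" almost always means a value that was expected to be a parsed dict (e.g. a tool or LLM response) is still a raw JSON string, so indexing it with a string key fails. Minimal reproduction and fix (variable names are illustrative):

```python
import json

response = '{"answer": "42"}'   # raw JSON string, not a parsed object

try:
    response["answer"]          # indexing a str with a str key -> TypeError
except TypeError as exc:
    error = str(exc)            # e.g. "string indices must be integers, not 'str'"

parsed = json.loads(response)   # parse first, then index
answer = parsed["answer"]
```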
Currently the RAG base class composes with the official OpenAI Python package. It would be desirable to depend only on the OpenAI calling convention instead. This would allow using local LLMs (such as Llama 3) to produce embeddings while keeping OpenAI's calling conventions.
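What "using only the calling convention" means in practice: the OpenAI API is just JSON over HTTP, so any OpenAI-compatible local server can serve embeddings from a local model. A sketch that only builds the request (no network); the base URL and model name below are placeholder assumptions:

```python
import json

BASE_URL = "http://localhost:11434/v1"   # hypothetical OpenAI-compatible local server


def build_embeddings_request(texts, model="nomic-embed-text"):
    """Build an OpenAI-convention POST /embeddings request for a local model."""
    return {
        "url": f"{BASE_URL}/embeddings",
        "headers": {
            "Content-Type": "application/json",
            "Authorization": "Bearer local-key",  # most local servers ignore the key
        },
        "body": json.dumps({"model": model, "input": list(texts)}),
    }


req = build_embeddings_request(["hello world"])
```

Because the shape is identical to what api.openai.com expects, swapping providers becomes a matter of changing the base URL and model name.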
Trying to run pytest on crewAI-tools, I encounter this error. I suspect it has to do with chromadb versioning; the traceback calls out the auto-generated files from protobuf.
(crewai-tools-py3.12) (base) bisansuleiman@Bisans-MacBook-Pro crewAI-tools % poetry run pytest
======================================= test session starts ========================================
platform darwin -- Python 3.12.4, pytest-8.3.2, pluggy-1.5.0
rootdir: /Users/bisansuleiman/crewAI-tools
configfile: pyproject.toml
plugins: anyio-4.4.0
collected 2 items / 3 errors
============================================== ERRORS ==============================================
_____________________________ ERROR collecting tests/base_tool_test.py _____________________________
tests/base_tool_test.py:2: in <module>
from crewai_tools import BaseTool, tool
crewai_tools/__init__.py:1: in <module>
from .tools import (
crewai_tools/tools/__init__.py:2: in <module>
from .code_docs_search_tool.code_docs_search_tool import CodeDocsSearchTool
crewai_tools/tools/code_docs_search_tool/code_docs_search_tool.py:3: in <module>
from embedchain.models.data_type import DataType
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/embedchain/__init__.py:5: in <module>
from embedchain.app import App # noqa: F401
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/embedchain/app.py:22: in <module>
from embedchain.config import AppConfig, CacheConfig, ChunkerConfig, Mem0Config
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/embedchain/config/__init__.py:4: in <module>
from .app_config import AppConfig
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/embedchain/config/app_config.py:5: in <module>
from .base_app_config import BaseAppConfig
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/embedchain/config/base_app_config.py:6: in <module>
from embedchain.vectordb.base import BaseVectorDB
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/embedchain/vectordb/base.py:2: in <module>
from embedchain.embedder.base import BaseEmbedder
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/embedchain/embedder/base.py:7: in <module>
from chromadb.api.types import Embeddable, EmbeddingFunction, Embeddings
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/chromadb/__init__.py:5: in <module>
from chromadb.auth.token import TokenTransportHeader
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/chromadb/auth/token/__init__.py:26: in <module>
from chromadb.telemetry.opentelemetry import (
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/chromadb/telemetry/opentelemetry/__init__.py:11: in <module>
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/opentelemetry/exporter/otlp/proto/grpc/trace_exporter/__init__.py:22: in <module>
from opentelemetry.exporter.otlp.proto.grpc.exporter import (
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/opentelemetry/exporter/otlp/proto/grpc/exporter.py:39: in <module>
from opentelemetry.proto.common.v1.common_pb2 import (
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/opentelemetry/proto/common/v1/common_pb2.py:36: in <module>
_descriptor.FieldDescriptor(
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/google/protobuf/descriptor.py:621: in __new__
_message.Message._CheckCalledFromGeneratedFile()
E TypeError: Descriptors cannot be created directly.
E If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
E If you cannot immediately regenerate your protos, some other possible workarounds are:
E 1. Downgrade the protobuf package to 3.20.x or lower.
E 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
E
E More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
____________________________ ERROR collecting tests/spider_tool_test.py ____________________________
tests/spider_tool_test.py:1: in <module>
from crewai_tools.tools.spider_tool.spider_tool import SpiderTool
crewai_tools/__init__.py:1: in <module>
from .tools import (
crewai_tools/tools/__init__.py:2: in <module>
from .code_docs_search_tool.code_docs_search_tool import CodeDocsSearchTool
crewai_tools/tools/code_docs_search_tool/code_docs_search_tool.py:3: in <module>
from embedchain.models.data_type import DataType
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/embedchain/__init__.py:5: in <module>
from embedchain.app import App # noqa: F401
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/embedchain/app.py:38: in <module>
from embedchain.vectordb.chroma import ChromaDB
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/embedchain/vectordb/chroma.py:4: in <module>
from chromadb import Collection, QueryResult
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/chromadb/__init__.py:5: in <module>
from chromadb.auth.token import TokenTransportHeader
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/chromadb/auth/token/__init__.py:26: in <module>
from chromadb.telemetry.opentelemetry import (
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/chromadb/telemetry/opentelemetry/__init__.py:11: in <module>
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/opentelemetry/exporter/otlp/proto/grpc/trace_exporter/__init__.py:22: in <module>
from opentelemetry.exporter.otlp.proto.grpc.exporter import (
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/opentelemetry/exporter/otlp/proto/grpc/exporter.py:39: in <module>
from opentelemetry.proto.common.v1.common_pb2 import (
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/opentelemetry/proto/common/v1/common_pb2.py:36: in <module>
_descriptor.FieldDescriptor(
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/google/protobuf/descriptor.py:621: in __new__
_message.Message._CheckCalledFromGeneratedFile()
E TypeError: Descriptors cannot be created directly.
E If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
E If you cannot immediately regenerate your protos, some other possible workarounds are:
E 1. Downgrade the protobuf package to 3.20.x or lower.
E 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
E
E More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
________________________ ERROR collecting tests/tools/rag/rag_tool_test.py _________________________
tests/tools/rag/rag_tool_test.py:8: in <module>
from crewai_tools.adapters.embedchain_adapter import EmbedchainAdapter
crewai_tools/adapters/embedchain_adapter.py:3: in <module>
from embedchain import App
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/embedchain/__init__.py:5: in <module>
from embedchain.app import App # noqa: F401
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/embedchain/app.py:38: in <module>
from embedchain.vectordb.chroma import ChromaDB
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/embedchain/vectordb/chroma.py:4: in <module>
from chromadb import Collection, QueryResult
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/chromadb/__init__.py:5: in <module>
from chromadb.auth.token import TokenTransportHeader
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/chromadb/auth/token/__init__.py:26: in <module>
from chromadb.telemetry.opentelemetry import (
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/chromadb/telemetry/opentelemetry/__init__.py:11: in <module>
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/opentelemetry/exporter/otlp/proto/grpc/trace_exporter/__init__.py:22: in <module>
from opentelemetry.exporter.otlp.proto.grpc.exporter import (
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/opentelemetry/exporter/otlp/proto/grpc/exporter.py:39: in <module>
from opentelemetry.proto.common.v1.common_pb2 import (
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/opentelemetry/proto/common/v1/common_pb2.py:36: in <module>
_descriptor.FieldDescriptor(
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/google/protobuf/descriptor.py:621: in __new__
_message.Message._CheckCalledFromGeneratedFile()
E TypeError: Descriptors cannot be created directly.
E If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
E If you cannot immediately regenerate your protos, some other possible workarounds are:
E 1. Downgrade the protobuf package to 3.20.x or lower.
E 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
E
E More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
========================================= warnings summary =========================================
crewai_tools/tools/base_tool.py:28
/Users/bisansuleiman/crewAI-tools/crewai_tools/tools/base_tool.py:28: PydanticDeprecatedSince20: Pydantic V1 style `@validator` validators are deprecated. You should migrate to Pydantic V2 style `@field_validator` validators, see the migration guide for more details. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.8/migration/
@validator("args_schema", always=True, pre=True)
../Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/pydantic/_internal/_config.py:291
/Users/bisansuleiman/Library/Caches/pypoetry/virtualenvs/crewai-tools-yNAtfOkF-py3.12/lib/python3.12/site-packages/pydantic/_internal/_config.py:291: PydanticDeprecatedSince20: Support for class-based `config` is deprecated, use ConfigDict instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.8/migration/
warnings.warn(DEPRECATION_MESSAGE, DeprecationWarning)
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
===================================== short test summary info ======================================
ERROR tests/base_tool_test.py - TypeError: Descriptors cannot be created directly.
ERROR tests/spider_tool_test.py - TypeError: Descriptors cannot be created directly.
ERROR tests/tools/rag/rag_tool_test.py - TypeError: Descriptors cannot be created directly.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 3 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
================================== 2 warnings, 3 errors in 4.05s ===================================
(crewai-tools-py3.12) (base) bisansuleiman@Bisans-MacBook-Pro crewAI-tools %
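The pytest failure above is the protobuf 4.x generated-code mismatch; the error message itself lists the workarounds. A minimal sketch of applying them (the exact pin `3.20.3` is an assumption within the "3.20.x or lower" range the message names):

```shell
# Workaround 1 from the error message: downgrade protobuf to 3.20.x or lower.
# pip install "protobuf==3.20.3"
# Workaround 2: force the pure-Python parser (slower, but needs no reinstall).
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
echo "$PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"
```

The env-var route only changes parsing behavior for the current shell session, so it is the safer first試 when you cannot immediately repin dependencies.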
28.38 INFO: pip is looking at multiple versions of chromadb to determine which version is compatible with other requirements. This could take a while.
28.38 Collecting chromadb<0.5.0,>=0.4.22 (from crewai-tools>=0.2.7->-r requirements.txt (line 2))
28.41 Downloading chromadb-0.4.23-py3-none-any.whl.metadata (7.3 kB)
28.47 Downloading chromadb-0.4.22-py3-none-any.whl.metadata (7.3 kB)
28.83 ERROR: Cannot install crewai-tools because these package versions have conflicting dependencies.
28.83
28.83 The conflict is caused by:
28.83 chromadb 0.4.24 depends on onnxruntime>=1.14.1
28.83 chromadb 0.4.23 depends on onnxruntime>=1.14.1
28.83 chromadb 0.4.22 depends on onnxruntime>=1.14.1
28.83
I am using YoutubeChannelSearchTool for a YouTube video transcript task, but when I kick off my crew object, I get the following error:
ERROR: [youtube:tab] <youtube channel name>: Unable to download API page: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007) (caused by CertificateVerifyError('[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)')); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
I am using the latest version of yt-dlp (yt-dlp==2024.7.9).
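`CERTIFICATE_VERIFY_FAILED` from yt-dlp usually means this Python interpreter cannot locate a CA bundle (a common issue with python.org installs on macOS, where running the bundled "Install Certificates.command" is the usual fix). A stdlib-only check of where this interpreter looks for certificates:

```python
import ssl

# Where does this interpreter's OpenSSL look for CA certificates?
# If neither cafile nor capath points at an existing bundle,
# verification fails exactly as in the yt-dlp error above.
paths = ssl.get_default_verify_paths()
print("cafile:", paths.cafile)
print("capath:", paths.capath)
```

If both come back empty or point at missing files, installing/refreshing the CA bundle for this interpreter is the likely fix rather than anything in crewai-tools or yt-dlp.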
It looks like the crewai-tools package published on PyPI is broken.
pip install crewai-tools
does not include the `add` method in embedchain_adapter.py.
Installing directly from github solves the problem:
pip install git+https://github.com/joaomdmoura/crewAI-tools.git
from crewai_tools import tool
from crewai import Agent, Task , Crew
from crewai_tools import PDFSearchTool
import os
os.environ["HUGGINGFACEHUB_API_TOKEN "] = "hf_............................................................."
tool = PDFSearchTool(
config=dict(
llm=dict(
provider="huggingface", # or google, openai, anthropic, llama2, ...
config=dict(
model="mistralai/Mixtral-8x7B-Instruct-v0.1",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="huggingface", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
I got this error:
ValidationError: 1 validation error for PDFSearchTool
Value error, Please set the HUGGINGFACE_ACCESS_TOKEN environment variable. [type=value_error, input_value={'config': {'llm': {'prov...'retrieval_document'}}}}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.7/v/value_error
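The ValidationError names the variable it actually checks: `HUGGINGFACE_ACCESS_TOKEN`. The snippet above instead sets `"HUGGINGFACEHUB_API_TOKEN "` (note the trailing space inside the key), which the tool never reads. A minimal sketch of setting the expected variable before constructing the tool (the token value is a placeholder):

```python
import os

# Set the variable the ValidationError asks for, with no trailing
# whitespace in the key. "hf_..." is a placeholder, not a real token.
os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "hf_..."

print("HUGGINGFACE_ACCESS_TOKEN" in os.environ)  # True
```

With that variable present, the `Please set the HUGGINGFACE_ACCESS_TOKEN environment variable` check should pass; whether the chosen models then work is a separate question.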
Python 3.11
pip list:
Package Version
---------------------------------------- ---------------
aiohttp 3.9.5
aiosignal 1.3.1
alembic 1.13.2
annotated-types 0.7.0
anyio 4.4.0
appdirs 1.4.4
appnope 0.1.4
argon2-cffi 23.1.0
argon2-cffi-bindings 21.2.0
arrow 1.3.0
asgiref 3.8.1
asttokens 2.4.1
async-lru 2.0.4
attrs 23.2.0
Babel 2.15.0
backoff 2.2.1
bcrypt 4.1.3
beautifulsoup4 4.12.3
bleach 6.1.0
boto3 1.34.136
botocore 1.34.136
Brotli 1.1.0
build 1.2.1
cachetools 5.3.3
certifi 2024.6.2
cffi 1.16.0
charset-normalizer 3.3.2
chroma-hnswlib 0.7.3
chromadb 0.4.24
clarifai 10.5.3
clarifai-grpc 10.5.4
click 8.1.7
cohere 5.5.8
coloredlogs 15.0.1
comm 0.2.2
contextlib2 21.6.0
crewai 0.35.3
crewai-tools 0.4.1
cryptography 42.0.8
dataclasses-json 0.6.7
debugpy 1.8.2
decorator 5.1.1
defusedxml 0.7.1
Deprecated 1.2.14
deprecation 2.1.0
distro 1.9.0
dnspython 2.6.1
docker 7.1.0
docstring_parser 0.16
docx2txt 0.8
email_validator 2.2.0
embedchain 0.1.113
embedchain-crewai 0.1.114
executing 2.0.1
fastapi 0.111.0
fastapi-cli 0.0.4
fastavro 1.9.4
fastjsonschema 2.20.0
filelock 3.15.4
flatbuffers 24.3.25
fqdn 1.5.1
frozenlist 1.4.1
fsspec 2024.6.1
gitdb 4.0.11
GitPython 3.1.43
google-api-core 2.19.1
google-auth 2.30.0
google-cloud-aiplatform 1.57.0
google-cloud-bigquery 3.25.0
google-cloud-core 2.4.1
google-cloud-resource-manager 1.12.3
google-cloud-storage 2.17.0
google-crc32c 1.5.0
google-resumable-media 2.7.1
googleapis-common-protos 1.63.2
gptcache 0.1.43
grpc-google-iam-v1 0.13.1
grpcio 1.64.1
grpcio-status 1.62.2
h11 0.14.0
httpcore 1.0.5
httptools 0.6.1
httpx 0.27.0
httpx-sse 0.4.0
huggingface-hub 0.23.4
humanfriendly 10.0
idna 3.7
importlib_metadata 7.1.0
importlib_resources 6.4.0
iniconfig 2.0.0
inquirerpy 0.3.4
instructor 1.3.3
ipykernel 6.29.5
ipython 8.26.0
ipywidgets 8.1.3
isoduration 20.11.0
jedi 0.19.1
Jinja2 3.1.4
jiter 0.4.2
jmespath 1.0.1
json5 0.9.25
jsonpatch 1.33
jsonpointer 3.0.0
jsonref 1.1.0
jsonschema 4.22.0
jsonschema-specifications 2023.12.1
jupyter 1.0.0
jupyter_client 8.6.2
jupyter-console 6.6.3
jupyter_core 5.7.2
jupyter-events 0.10.0
jupyter-lsp 2.2.5
jupyter_server 2.14.1
jupyter_server_terminals 0.5.3
jupyterlab 4.2.3
jupyterlab_pygments 0.3.0
jupyterlab_server 2.27.2
jupyterlab_widgets 3.0.11
kubernetes 30.1.0
lancedb 0.5.7
langchain 0.1.20
langchain-cohere 0.1.5
langchain-community 0.0.38
langchain-core 0.1.52
langchain-openai 0.1.7
langchain-text-splitters 0.0.2
langsmith 0.1.82
Mako 1.3.5
markdown-it-py 3.0.0
MarkupSafe 2.1.5
marshmallow 3.21.3
matplotlib-inline 0.1.7
mdurl 0.1.2
mistune 3.0.2
mmh3 4.1.0
monotonic 1.6
mpmath 1.3.0
multidict 6.0.5
mutagen 1.47.0
mypy-extensions 1.0.0
nbclient 0.10.0
nbconvert 7.16.4
nbformat 5.10.4
nest-asyncio 1.6.0
nodeenv 1.9.1
notebook 7.2.1
notebook_shim 0.2.4
numpy 1.26.4
oauthlib 3.2.2
onnxruntime 1.18.1
openai 1.35.7
opentelemetry-api 1.25.0
opentelemetry-exporter-otlp-proto-common 1.25.0
opentelemetry-exporter-otlp-proto-grpc 1.25.0
opentelemetry-exporter-otlp-proto-http 1.25.0
opentelemetry-instrumentation 0.46b0
opentelemetry-instrumentation-asgi 0.46b0
opentelemetry-instrumentation-fastapi 0.46b0
opentelemetry-proto 1.25.0
opentelemetry-sdk 1.25.0
opentelemetry-semantic-conventions 0.46b0
opentelemetry-util-http 0.46b0
orjson 3.10.5
outcome 1.3.0.post0
overrides 7.7.0
packaging 23.2
pandocfilters 1.5.1
parameterized 0.9.0
parso 0.8.4
pexpect 4.9.0
pfzy 0.3.4
pillow 10.4.0
pip 24.1.1
platformdirs 4.2.2
pluggy 1.5.0
posthog 3.5.0
prometheus_client 0.20.0
prompt_toolkit 3.0.47
proto-plus 1.24.0
protobuf 4.25.3
psutil 6.0.0
psycopg 3.2.1
psycopg-binary 3.1.19
psycopg-pool 3.2.2
ptyprocess 0.7.0
pulsar-client 3.5.0
pure-eval 0.2.2
py 1.11.0
pyarrow 16.1.0
pyasn1 0.6.0
pyasn1_modules 0.4.0
pycparser 2.22
pycryptodomex 3.20.0
pydantic 2.7.4
pydantic_core 2.18.4
PyGithub 1.59.1
Pygments 2.18.0
PyJWT 2.8.0
pylance 0.9.18
PyNaCl 1.5.0
pypdf 4.2.0
PyPika 0.48.9
pyproject_hooks 1.1.0
pyright 1.1.369
pysbd 0.3.4
PySocks 1.7.1
pytest 8.2.2
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
python-json-logger 2.0.7
python-multipart 0.0.9
python-rapidjson 1.18
pytube 15.0.0
PyYAML 6.0.1
pyzmq 26.0.3
qtconsole 5.5.2
QtPy 2.4.1
ratelimiter 1.2.0.post0
referencing 0.35.1
regex 2023.12.25
requests 2.32.3
requests-oauthlib 2.0.0
retry 0.9.2
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
rich 13.7.1
rpds-py 0.18.1
rsa 4.9
s3transfer 0.10.2
schema 0.7.5
selenium 4.22.0
semver 3.0.2
Send2Trash 1.8.3
setuptools 68.2.0
shapely 2.0.4
shellingham 1.5.4
six 1.16.0
smmap 5.0.1
sniffio 1.3.1
sortedcontainers 2.4.0
soupsieve 2.5
SQLAlchemy 2.0.31
stack-data 0.6.3
starlette 0.37.2
sympy 1.12.1
tabulate 0.9.0
tenacity 8.4.2
terminado 0.18.1
tiktoken 0.7.0
tinycss2 1.3.0
tokenizers 0.19.1
tornado 6.4.1
tqdm 4.66.4
traitlets 5.14.3
trio 0.25.1
trio-websocket 0.11.1
tritonclient 2.47.0
typer 0.12.3
types-python-dateutil 2.9.0.20240316
types-requests 2.32.0.20240622
typing_extensions 4.12.2
typing-inspect 0.9.0
ujson 5.10.0
uri-template 1.3.0
urllib3 2.2.2
uvicorn 0.30.1
uvloop 0.19.0
watchfiles 0.22.0
wcwidth 0.2.13
webcolors 24.6.0
webencodings 0.5.1
websocket-client 1.8.0
websockets 12.0
wheel 0.41.2
widgetsnbextension 4.0.11
wrapt 1.16.0
wsproto 1.2.0
yarl 1.9.4
youtube-transcript-api 0.6.2
yt-dlp 2023.12.30
zipp 3.19.2
My code:
from crewai_tools import PGSearchTool
postgresql_tool = PGSearchTool(
db_uri="postgresql://localhost:password@localhost:5432/database",
table_name='users'
)
And it appeared an error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[2], line 22
15 convert_task = Task(
     16 description='Convert the following natural language input into a rule-based command: Only select candidates under 30 if female, or over 40 and married if male. Wealthy, good-looking, living in Da Nang',
17 agent=rules_creator_agent, # Assigning the task to the researcher
18 expected_output='A refined finalized version of the structured format rules which can be used to query the SQL table with JSON data'
19 )
21 # Initialize the tool with the database URI and the target table name
---> 22 postgresql_tool = PGSearchTool(
23 db_uri=os.getenv("POSTGRESQL_URL"),
24 table_name='candidates'
25 )
27 query_agent = Agent(
28 role='Database Query Specialist',
29 goal='Execute SQL queries on the database and return results',
(...)
35 tools=[postgresql_tool]
36 )
37 query_task = Task(
38 description="Query the table 'candidates' and retrieve data where 'raw_data' column matches specific criteria",
39 expected_output="List resume_id of candidates matched criteria",
40 agent=query_agent
41 )
File ~/Desktop/cc-ai-poc-openai_fine_tune/venv/lib/python3.11/site-packages/crewai_tools/tools/pg_seach_tool/pg_search_tool.py:26, in PGSearchTool.__init__(self, table_name, **kwargs)
24 def __init__(self, table_name: str, **kwargs):
25 super().__init__(**kwargs)
---> 26 self.add(table_name)
27 self.description = f"A tool that can be used to semantic search a query the {table_name} database table's content."
28 self._generate_description()
File ~/Desktop/cc-ai-poc-openai_fine_tune/venv/lib/python3.11/site-packages/crewai_tools/tools/pg_seach_tool/pg_search_tool.py:36, in PGSearchTool.add(self, table_name, **kwargs)
30 def add(
31 self,
32 table_name: str,
33 **kwargs: Any,
34 ) -> None:
35 kwargs["data_type"] = "postgres"
---> 36 kwargs["loader"] = PostgresLoader(config=dict(url=self.db_uri))
37 super().add(f"SELECT * FROM {table_name};", **kwargs)
File ~/Desktop/cc-ai-poc-openai_fine_tune/venv/lib/python3.11/site-packages/embedchain/loaders/postgres.py:18, in PostgresLoader.__init__(self, config)
16 self.connection = None
17 self.cursor = None
---> 18 self._setup_loader(config=config)
File ~/Desktop/cc-ai-poc-openai_fine_tune/venv/lib/python3.11/site-packages/embedchain/loaders/postgres.py:22, in PostgresLoader._setup_loader(self, config)
20 def _setup_loader(self, config: dict[str, Any]):
21 try:
---> 22 import psycopg
23 except ImportError as e:
24 raise ImportError(
25 "Unable to import required packages. \
26 Run `pip install --upgrade 'embedchain[postgres]'`"
27 ) from e
File ~/Desktop/cc-ai-poc-openai_fine_tune/venv/lib/python3.11/site-packages/psycopg/__init__.py:9
5 # Copyright (C) 2020 The Psycopg Team
7 import logging
----> 9 from . import pq # noqa: F401 import early to stabilize side effects
10 from . import types
11 from . import postgres
File ~/Desktop/cc-ai-poc-openai_fine_tune/venv/lib/python3.11/site-packages/psycopg/pq/__init__.py:118
109 sattempts = "\n".join(f"- {attempt}" for attempt in attempts)
110 raise ImportError(
111 f"""\
112 no pq wrapper available.
113 Attempts made:
114 {sattempts}"""
115 )
--> 118 import_from_libpq()
120 __all__ = (
121 "ConnStatus",
122 "PipelineStatus",
(...)
137 "version_pretty",
138 )
File ~/Desktop/cc-ai-poc-openai_fine_tune/venv/lib/python3.11/site-packages/psycopg/pq/__init__.py:104, in import_from_libpq()
102 Escaping = module.Escaping
103 PGcancel = module.PGcancel
--> 104 PGcancelConn = module.PGcancelConn
105 __build_version__ = module.__build_version__
106 elif impl:
AttributeError: module 'psycopg_binary.pq' has no attribute 'PGcancelConn'
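One likely cause, given the pip list above, is the version split between psycopg 3.2.1 and psycopg-binary 3.1.19: psycopg imports its C implementation from `psycopg_binary`, and `PGcancelConn` only exists in the 3.2 series, so the older binary package lacks the attribute. A small helper illustrating the check (the major.minor matching rule is an assumption about how the two packages track each other):

```python
def versions_compatible(a: str, b: str) -> bool:
    """psycopg and psycopg-binary are released in lockstep; they should
    share at least the same major.minor series."""
    return a.split(".")[:2] == b.split(".")[:2]

# Versions taken from the pip list in this report:
print(versions_compatible("3.2.1", "3.1.19"))  # False -> the mismatch behind the AttributeError
print(versions_compatible("3.2.1", "3.2.1"))   # True
```

Aligning the two packages (e.g. reinstalling with `pip install "psycopg[binary]==3.2.1"`) should make `PGcancelConn` importable again.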
The only tool I can get to work is SerperDevTool. Other tools like ScrapeWebsiteTool and WebsiteSearchTool do not work.
I've tried running the tool_instance directly and also from within the context of an Agent. Every approach throws the
TypeError: ScrapeWebsiteTool._run() takes 1 positional argument but 2 were given
exception.
from langchain.tools import Tool
from crewai_tools import ScrapeWebsiteTool
tool_instance=ScrapeWebsiteTool()
tool = Tool(
name=tool_instance.name,
func=tool_instance.run,
description=tool_instance.description
)
tool.run('https://en.wikipedia.org/wiki/Clearwater%2C_Florida')
That is the bare minimum to simulate running a tool. They all throw this error:
{
"name": "TypeError",
"message": "ScrapeWebsiteTool._run() takes 1 positional argument but 2 were given",
"stack": "---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[19], line 14
7 tool_instance=ScrapeWebsiteTool()
9 tool = Tool(
10 name=tool_instance.name,
11 func=tool_instance.run,
12 description=tool_instance.description
13 )
---> 14 tool.run('https://en.wikipedia.org/wiki/Clearwater%2C_Florida')
File /workspace/development/frappe-bench/env/lib/python3.11/site-packages/langchain_core/tools.py:419, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
417 except (Exception, KeyboardInterrupt) as e:
418 run_manager.on_tool_error(e)
--> 419 raise e
420 else:
421 run_manager.on_tool_end(
422 str(observation), color=color, name=self.name, **kwargs
423 )
File /workspace/development/frappe-bench/env/lib/python3.11/site-packages/langchain_core/tools.py:376, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
373 parsed_input = self._parse_input(tool_input)
374 tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
375 observation = (
--> 376 self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
377 if new_arg_supported
378 else self._run(*tool_args, **tool_kwargs)
379 )
380 except ValidationError as e:
381 if not self.handle_validation_error:
File /workspace/development/frappe-bench/env/lib/python3.11/site-packages/langchain_core/tools.py:584, in Tool._run(self, run_manager, *args, **kwargs)
575 if self.func:
576 new_argument_supported = signature(self.func).parameters.get(\"callbacks\")
577 return (
578 self.func(
579 *args,
580 callbacks=run_manager.get_child() if run_manager else None,
581 **kwargs,
582 )
583 if new_argument_supported
--> 584 else self.func(*args, **kwargs)
585 )
586 raise NotImplementedError(\"Tool does not support sync\")
File /workspace/development/frappe-bench/env/lib/python3.11/site-packages/crewai_tools/tools/base_tool.py:30, in BaseTool.run(self, *args, **kwargs)
24 def run(
25 self,
26 *args: Any,
27 **kwargs: Any,
28 ) -> Any:
29 print(f\"Using Tool: {self.name}\")
---> 30 return self._run(*args, **kwargs)
TypeError: ScrapeWebsiteTool._run() takes 1 positional argument but 2 were given"
}
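The traceback shows langchain's `Tool` forwarding the input positionally (`self.func(*args, **kwargs)`), while `ScrapeWebsiteTool._run` accepts its input only by keyword. One workaround is an adapter that forwards the single positional input as a keyword argument. The keyword name `website_url` is an assumption based on the tool's schema, and the `_run` stub below stands in for the real tool:

```python
def make_kwarg_adapter(func, kwarg_name: str):
    """Wrap a keyword-only callable so langchain's Tool can call it
    with a single positional argument."""
    def adapter(value):
        return func(**{kwarg_name: value})
    return adapter

# Stub standing in for ScrapeWebsiteTool._run (keyword-only signature),
# which rejects positional arguments just like the real tool:
def _run(*, website_url: str) -> str:
    return f"scraped:{website_url}"

adapted = make_kwarg_adapter(_run, "website_url")
print(adapted("https://en.wikipedia.org/wiki/Clearwater%2C_Florida"))
```

With the real tool, passing `func=make_kwarg_adapter(tool_instance.run, "website_url")` into `Tool(...)` would sidestep the positional-argument mismatch, at the cost of hard-coding the parameter name.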
Hi!
I'm trying to use the WebsiteSearchTool, but it always gives an empty result. Could you please explain how to use it correctly? I couldn't find any usage examples in the documentation, only examples of initialization.
object_researcher:
role: >
Attribute Researcher
goal: >
Gather all of the attributes of an object for the purpose of generating a profile:
- components
- states
- direct actions it can perform
    - indirect actions that can be performed on it, with it, to it, etc
- immutable attributes
- mutable attributes
backstory: >
An erudite and meticulous researcher, who spends all day and night thinking about the nature of objects.
allow_delegation: false
verbose: true
tools:
- search_tool