
langchain-nextjs-template's People

Contributors

alexbsoft, alissonsleal, bracesproul, dqbd, jacoblee93


langchain-nextjs-template's Issues

variable name typo in ChatWindow.tsx

Line 31:
const intemediateStepsToggle = showIntermediateStepsToggle && (
Line 133:
{intemediateStepsToggle}
These should be intermediateStepsToggle. We're missing the r.

ECONNRESET

I cloned the repo, and after adding my env vars, installing all dependencies, and starting the dev server, everything works fine. When I go to chat, however, I get this:

    at connResetException (node:internal/errors:720:14)
    at abortIncoming (node:_http_server:766:17)
    at socketOnClose (node:_http_server:760:3)
    at Socket.emit (node:events:526:35)
    at TCP.<anonymous> (node:net:323:12)
    at TCP.callbackTrampoline (node:internal/async_hooks:130:17) {
  code: 'ECONNRESET'

I see this issue in the @vercel/ai repo: vercel/ai#483

However, I am unsure where to start, or whether this simply isn't fixable given the limitations of the template. Thanks for any direction!

Supabase Chat History Issue

Hello all! I am having an issue when using the LangChainStream functionality and saving the chat history to my Supabase DB. The main issue is that the stream seems to get interrupted and does not complete; as a result, the chat is not saved to my DB. The LangChain chat history does not seem to be working either. Any help would go a long way @jacoblee93.

import { getSession } from '@/app/supabase-server';
import { Database } from '@/lib/db_types';
import { templates } from '@/lib/template';
import { nanoid } from '@/lib/utils';
import { PineconeClient } from '@pinecone-database/pinecone';
import { createServerActionClient } from '@supabase/auth-helpers-nextjs';
import { LangChainStream, Message, StreamingTextResponse } from 'ai';
import { ConversationalRetrievalQAChain } from 'langchain/chains';
import { ChatOpenAI } from 'langchain/chat_models/openai';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { BufferMemory } from 'langchain/memory';
import { PineconeStore } from 'langchain/vectorstores/pinecone';
import { cookies } from 'next/headers';
import { redirect } from 'next/navigation';
import { NextResponse } from 'next/server';
import { Configuration, OpenAIApi } from 'openai-edge';

export const runtime = 'nodejs';

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY
});

const openai = new OpenAIApi(configuration);

const formatMessage = (message: Message) => {
  return `${message.role === 'user' ? 'Human' : 'Assistant'}: ${
    message.content
  }`;
};

export async function POST(req: Request) {
  const cookieStore = cookies();
  const supabase = createServerActionClient<Database>({
    cookies: () => cookieStore
  });
  const session = await getSession();
  const userId = session?.user.id;

  if (!userId) {
    return new Response('Unauthorized', {
      status: 401
    });
  }

  const json = await req.json();
  // const messages: Message[] = json.messages ?? [];
  const { messages } = json;
  const formattedPreviousMessages = messages.slice(0, -1).map(formatMessage);
  const question = messages[messages.length - 1].content;

  try {
    const sanitizedQuestion = question.trim().replaceAll('\n', ' ');
    const pinecone = new PineconeClient();
    await pinecone.init({
      environment: process.env.PINECONE_ENVIRONMENT ?? '',
      apiKey: process.env.PINECONE_API_KEY ?? ''
    });

    const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX_NAME!);

    const vectorStore = await PineconeStore.fromExistingIndex(
      new OpenAIEmbeddings(),
      { pineconeIndex }
    );

    const { stream, handlers } = LangChainStream({
      async onCompletion(completion) {
        const title = json.messages[0].content.substring(0, 100);
        const id = json.id ?? nanoid();
        const createdAt = Date.now();
        const path = `/chat/${id}`;
        const payload = {
          id,
          title,
          userId,
          createdAt,
          path,
          messages: [
            ...messages,
            {
              content: completion,
              role: 'assistant'
            }
          ]
        };

        await supabase.from('chats').upsert({ id, payload }).throwOnError();
      }
    });

    const streamingModel = new ChatOpenAI({
      modelName: 'gpt-4',
      streaming: true,
      verbose: true,
      temperature: 0
    });

    const nonStreamingModel = new ChatOpenAI({
      modelName: 'gpt-4',
      verbose: true,
      temperature: 0
    });

    const chain = ConversationalRetrievalQAChain.fromLLM(
      streamingModel,
      vectorStore.asRetriever(),
      {
        qaTemplate: templates.qaPrompt,
        questionGeneratorTemplate: templates.condensePrompt,
        memory: new BufferMemory({
          memoryKey: 'chat_history',
          inputKey: 'question', // The key for the input to the chain
          outputKey: 'text', // The key for the final conversational output of the chain
          returnMessages: true // If using with a chat model (e.g. gpt-3.5 or gpt-4)
        }),
        questionGeneratorChainOptions: {
          llm: nonStreamingModel
        }
      }
    );

    chain.call(
      {
        question: sanitizedQuestion,
        chat_history: formattedPreviousMessages.join('\n')
      },
      [handlers]
    );

    // Return the readable stream
    return new StreamingTextResponse(stream);
  } catch (error) {
    console.error('Internal server error ', error);
    return NextResponse.json('Error: Something went wrong. Try again!', {
      status: 500
    });
  }
}
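One hedged observation about the snippet above: chain.call() is deliberately not awaited so the response can start streaming immediately, but that also means any error thrown mid-run (which would interrupt the stream and skip onCompletion, so nothing gets upserted to Supabase) is silently swallowed. Attaching a catch handler at least surfaces the failure in the logs:

    chain
      .call(
        {
          question: sanitizedQuestion,
          chat_history: formattedPreviousMessages.join('\n')
        },
        [handlers]
      )
      .catch((e) => console.error('Chain failed mid-stream:', e));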

Bedrock responses truncated

I'm using the starter template out of the box, but have swapped the LLM out to use Claude 2 through Bedrock. However, all of my responses are being truncated. I see an issue related to this was fixed in langchain recently, but even when pulling in the latest version my responses are still truncated. Are there any other changes I need to make to this template to get this to function as expected?

Here's the change to enable streaming: langchain-ai/langchainjs#3009

And here's how I'm invoking ChatBedrock in route.ts (in place of ChatOpenAI)

const model = new ChatBedrock({
  model: "anthropic.claude-v2",
  region: "us-west-2",
  streaming: true,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  },
});
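One hedged thing to check beyond the streaming fix: Bedrock's Claude models apply a fairly small default max_tokens_to_sample, which truncates long answers regardless of streaming. Explicitly raising maxTokens (supported on ChatBedrock's options; 2048 below is an assumed value, not from this thread) may help:

const model = new ChatBedrock({
  model: "anthropic.claude-v2",
  region: "us-west-2",
  streaming: true,
  // Assumption: without this, Bedrock applies a low default token cap,
  // which looks like truncation on longer responses.
  maxTokens: 2048,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});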

Anthropic Agents - Only shows intermediate steps

I moved over to Anthropic as my LLM for the agent example and found that it only returns messages if 'intermediate steps' are turned on. Below is my updated code for api/agents/route.ts. I think an event name needs to change in the textEncoder stream handling, but I can't figure out which.

More Anthropic examples would be appreciated.

import { NextRequest, NextResponse } from "next/server";
import { Message as VercelChatMessage, StreamingTextResponse } from "ai";

import { createReactAgent } from "@langchain/langgraph/prebuilt";
// import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropicMessages } from "@langchain/anthropic";
import { SerpAPI } from "@langchain/community/tools/serpapi";
import { Calculator } from "@langchain/community/tools/calculator";
import {
  AIMessage,
  BaseMessage,
  ChatMessage,
  HumanMessage,
  SystemMessage,
} from "@langchain/core/messages";

export const runtime = "edge";

const convertVercelMessageToLangChainMessage = (message: VercelChatMessage) => {
  if (message.role === "user") {
    return new HumanMessage(message.content);
  } else if (message.role === "assistant") {
    return new AIMessage(message.content);
  } else {
    return new ChatMessage(message.content, message.role);
  }
};

const convertLangChainMessageToVercelMessage = (message: BaseMessage) => {
  if (message._getType() === "human") {
    return { content: message.content, role: "user" };
  } else if (message._getType() === "ai") {
    return {
      content: message.content,
      role: "assistant",
      tool_calls: (message as AIMessage).tool_calls,
    };
  } else {
    return { content: message.content, role: message._getType() };
  }
};

const AGENT_SYSTEM_TEMPLATE = `You have a degree in Business Analysis (MBA). Your responses should reflect your education and research into business success.`;

/**
 * This handler initializes and calls a tool-calling ReAct agent.
 * See the docs for more information:
 *
 * https://langchain-ai.github.io/langgraphjs/tutorials/quickstart/
 */
export async function POST(req: NextRequest) {
  try {
    const body = await req.json();
    const returnIntermediateSteps = body.show_intermediate_steps;
    /**
     * We represent intermediate steps as system messages for display purposes,
     * but don't want them in the chat history.
     */
    const messages = (body.messages ?? [])
      .filter(
        (message: VercelChatMessage) =>
          message.role === "user" || message.role === "assistant",
      )
      .map(convertVercelMessageToLangChainMessage);

    // Requires process.env.SERPAPI_API_KEY to be set: https://serpapi.com/
    // You can remove this or use a different tool instead.
    const tools = [new Calculator(), new SerpAPI()];

    const chat = new ChatAnthropicMessages({
      model: "claude-3-5-sonnet-20240620",
    });

    /**
     * Use a prebuilt LangGraph agent, with the stock prompt modified
     * via the system template above.
     */
    const agent = createReactAgent({
      llm: chat,
      tools,
      messageModifier: new SystemMessage(AGENT_SYSTEM_TEMPLATE),
    });

    if (!returnIntermediateSteps) {
      /**
       * Stream back all generated tokens and steps from their runs.
       *
       * We do some filtering of the generated events and only stream back
       * the final response as a string.
       *
       * For this specific type of tool calling ReAct agents with OpenAI, we can tell when
       * the agent is ready to stream back final output when it no longer calls
       * a tool and instead streams back content.
       *
       * See: https://langchain-ai.github.io/langgraphjs/how-tos/stream-tokens/
       */
      const eventStream = await agent.streamEvents(
        { messages },
        { version: "v2" },
      );

      const textEncoder = new TextEncoder();
      const transformStream = new ReadableStream({
        async start(controller) {
          for await (const { event, data } of eventStream) {
            if (event === "on_chat_model_stream") {
              // Intermediate chat model generations will contain tool calls and no content
              if (!!data.chunk.content) {
                controller.enqueue(textEncoder.encode(data.chunk.content));
              }
            }
          }
          controller.close();
        },
      });

      return new StreamingTextResponse(transformStream);
    } else {
      /**
       * We could also pick intermediate steps out from streamEvents chunks, but
       * they are generated as JSON objects, so streaming and displaying them with
       * the AI SDK is more complicated.
       */
      const result = await agent.invoke({ messages });

      return NextResponse.json(
        {
          messages: result.messages.map(convertLangChainMessageToVercelMessage),
        },
        { status: 200 },
      );
    }
  } catch (e: any) {
    return NextResponse.json({ error: e.message }, { status: e.status ?? 500 });
  }
}
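A hedged guess at the cause (not confirmed in this thread): Anthropic models can emit streamed chunk content as an array of content blocks rather than a plain string, so data.chunk.content may be truthy but not a string, and textEncoder.encode() then streams nothing useful. Normalizing the chunk content before encoding is one thing to try:

// Sketch: flatten Anthropic content blocks (e.g. [{ type: "text", text: "..." }])
// into a plain string before streaming. The block shape handled here is an assumption.
const normalizeChunkContent = (content: unknown): string => {
  if (typeof content === "string") return content;
  if (Array.isArray(content)) {
    return content
      .map((block: any) => (block?.type === "text" ? block.text : ""))
      .join("");
  }
  return "";
};

// ...then inside the for-await loop:
// const text = normalizeChunkContent(data.chunk.content);
// if (text) controller.enqueue(textEncoder.encode(text));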

Vercel's Chatbot with Langchain, Pinecone and Vercel Kv BOUNTY

Hi everyone!

After playing with this repo for a little bit, I am stuck on implementing some key features I need, such as LangChain, Pinecone, and Vercel KV. I want the app to function in exactly the same manner as it does now, just using a custom knowledge base with my Pinecone vector store and utilizing LangChain.

If anyone has resources on this, I would greatly appreciate a pointer!

Or, if anyone is willing to build this for me, let me know here.

Why is custom Chain used instead of Langchain's chain

I have been playing with the code and found that instead of using LangChain's built-in chain, it uses a custom chain:

type ConversationalRetrievalQAChainInput = {
  question: string;
  chat_history: VercelChatMessage[];
};

Why is that?

Chat history

I am trying to add chat history, but I keep getting errors. Is there a way to add chat history in this app? I am using Upstash Redis for the memory store.
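For what it's worth, a minimal sketch of wiring Upstash Redis in as LangChain memory, assuming @langchain/community's Upstash integration (the sessionId is a placeholder you would derive per conversation):

import { UpstashRedisChatMessageHistory } from "@langchain/community/stores/message/upstash_redis";
import { BufferMemory } from "langchain/memory";

const memory = new BufferMemory({
  chatHistory: new UpstashRedisChatMessageHistory({
    sessionId: "conversation-123", // hypothetical per-conversation key
    config: {
      url: process.env.UPSTASH_REDIS_REST_URL!,
      token: process.env.UPSTASH_REDIS_REST_TOKEN!,
    },
  }),
});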

Module not found: Can't resolve 'fs'

Hello, following the tutorial I ran yarn dev and got the following error:
⨯ ./node_modules/@kwsites/file-exists/dist/src/index.js:6:13
Module not found: Can't resolve 'fs'

https://nextjs.org/docs/messages/module-not-found

Import trace for requested module:
./node_modules/@kwsites/file-exists/dist/index.js
./node_modules/simple-git/dist/esm/index.js
./node_modules/node-llama-cpp/dist/utils/cloneLlamaCppRepo.js
./node_modules/node-llama-cpp/dist/utils/getReleaseInfo.js
./node_modules/node-llama-cpp/dist/index.js
./node_modules/@langchain/community/dist/utils/llama_cpp.js
./node_modules/@langchain/community/dist/llms/llama_cpp.js
./node_modules/@langchain/community/llms/llama_cpp.js
./app/api/chat/route.ts
./node_modules/next/dist/build/webpack/loaders/next-edge-app-route-loader/index.js?absolutePagePath=D%3A%5Cjs%5Clangchain-nextjs-template-main%5Capp%5Capi%5Cchat%5Croute.ts&page=%2Fapi%2Fchat%2Froute&appDirLoader=bmV4dC1hcHAtbG9hZGVyP25hbWU9YXBwJTJGYXBpJTJGY2hhdCUyRnJvdXRlJnBhZ2U9JTJGYXBpJTJGY2hhdCUyRnJvdXRlJmFwcFBhdGhzPSZwYWdlUGF0aD1wcml2YXRlLW5leHQtYXBwLWRpciUyRmFwaSUyRmNoYXQlMkZyb3V0ZS50cyZhcHBEaXI9RCUzQSU1Q2pzJTVDbGFuZ2NoYWluLW5leHRqcy10ZW1wbGF0ZS1tYWluJTVDYXBwJnBhZ2VFeHRlbnNpb25zPXRzeCZwYWdlRXh0ZW5zaW9ucz10cyZwYWdlRXh0ZW5zaW9ucz1qc3gmcGFnZUV4dGVuc2lvbnM9anMmcm9vdERpcj1EJTNBJTVDanMlNUNsYW5nY2hhaW4tbmV4dGpzLXRlbXBsYXRlLW1haW4maXNEZXY9dHJ1ZSZ0c2NvbmZpZ1BhdGg9dHNjb25maWcuanNvbiZiYXNlUGF0aD0mYXNzZXRQcmVmaXg9Jm5leHRDb25maWdPdXRwdXQ9JnByZWZlcnJlZFJlZ2lvbj0mbWlkZGxld2FyZUNvbmZpZz1lMzAlM0Qh&nextConfigOutput=&preferredRegion=&middlewareConfig=e30%3D!

How can I solve this?
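A hedged reading of the trace: it ends in next-edge-app-route-loader, meaning app/api/chat/route.ts is compiled for the Edge runtime, where Node built-ins like fs don't exist, and node-llama-cpp (pulled in via @langchain/community/llms/llama_cpp) requires them. If you are using the Llama.cpp integration, switching that route off the Edge runtime is one option:

// app/api/chat/route.ts — sketch, assuming the llama_cpp import stays:
export const runtime = "nodejs"; // instead of "edge", so "fs" is available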

langchain/vectorstores/chroma

I'm trying to use langchain/vectorstores/chroma as a vector store in the POST handler in app/api/chat/retrieval/route.ts.

I keep getting this error:

error ./node_modules/@visheratin/tokenizers-node/tokenizers_wasm.js:367:14
Module not found: Can't resolve 'fs'
https://nextjs.org/docs/messages/module-not-found

Import trace for requested module:
./node_modules/@visheratin/web-ai-node/node/tokenizer.js
./node_modules/@visheratin/web-ai-node/node/tokenizerLoader.js
./node_modules/@visheratin/web-ai-node/node/index.js
./node_modules/@visheratin/web-ai-node/index.js
./node_modules/chromadb/dist/module/embeddings/WebAIEmbeddingFunction.js
./node_modules/chromadb/dist/module/index.js
./node_modules/langchain/dist/vectorstores/chroma.cjs
./node_modules/langchain/vectorstores/chroma.cjs

I tried to load it like this, but no luck... it keeps throwing the same error. Thoughts?

if (typeof window === "undefined") {
  const { Chroma } = require("langchain/vectorstores/chroma");
}
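A hedged note: the typeof window guard runs at runtime, but webpack resolves the require at build time, so chromadb's Node-only dependencies still get bundled. Two things that may help together (the config option name is per Next.js 13/14 and is an assumption about your setup): put the route on the Node.js runtime, and mark chromadb as external so it is required at runtime instead of bundled.

// next.config.js (sketch)
module.exports = {
  experimental: {
    // Assumption: keeps the Node-only chromadb package out of the server bundle.
    serverComponentsExternalPackages: ["chromadb"],
  },
};

and in the route file:

export const runtime = "nodejs";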

LangChain/Next.js chatbot displaying incorrect sources

I'm building a chatbot using LangChain, Next.js (the Vercel framework), and CosmosDB (as the vector store). My implementation is based on this template. I'm trying to display the source documents used by the LLM in my UI, but I'm facing 2 issues:

  1. Source documents not displaying: Despite using a StreamingTextResponse to send the source information in the headers as JSON chunks (see code snippet below), they don't show up in my UI. There are no console errors.
  2. Incorrect sources: When some source documents do appear, they are not the ones actually used by the LLM or contain unrelated information.

So my questions:

  • How can I ensure I'm associating the correct source documents with each LLM response?
  • What debugging techniques can I use to pinpoint where the source information is getting lost or mismatched?

Here's my code:

import { NextRequest, NextResponse } from "next/server";
import { Message as VercelChatMessage, StreamingTextResponse } from "ai";
import { AzureCosmosDBVectorStore } from "@langchain/community/vectorstores/azure_cosmosdb";
import {
  AzureOpenAIEmbeddings,
  AzureChatOpenAI,
} from "@langchain/azure-openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { Document } from "@langchain/core/documents";
import { RunnableSequence } from "@langchain/core/runnables";
import {
  BytesOutputParser,
  StringOutputParser,
} from "@langchain/core/output_parsers";

const combineDocumentsFn = (docs: Document[]) => {
  const serializedDocs = docs.map((doc) => doc.pageContent);
  return serializedDocs.join("\n\n");
};

const formatVercelMessages = (chatHistory: VercelChatMessage[]) => {
  const formattedDialogueTurns = chatHistory.map((message) => {
    if (message.role === "user") {
      return `Human: ${message.content}`;
    } else if (message.role === "assistant") {
      return `Assistant: ${message.content}`;
    } else {
      return `${message.role}: ${message.content}`;
    }
  });
  return formattedDialogueTurns.join("\n");
};

const CONDENSE_QUESTION_TEMPLATE = `Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.

<chat_history>
  {chat_history}
</chat_history>

Follow Up Input: {question}
Standalone question:`;
const condenseQuestionPrompt = PromptTemplate.fromTemplate(
  CONDENSE_QUESTION_TEMPLATE
);

const ANSWER_TEMPLATE = `Answer the question based only on the following context and chat history:
<context>
  {context}
</context>

<chat_history>
  {chat_history}
</chat_history>

Question: {question}
`;
const answerPrompt = PromptTemplate.fromTemplate(ANSWER_TEMPLATE);

export async function POST(req: NextRequest) {
  try {
    const body = await req.json();
    const messages = body.messages ?? [];
    const previousMessages = messages.slice(0, -1);
    const currentMessageContent = messages[messages.length - 1].content;

    const vectorstore = new AzureCosmosDBVectorStore(
      new AzureOpenAIEmbeddings(),
      {
        databaseName: process.env.DB_NAME,
        collectionName: process.env.DB_COLLECTION_NAME,
      }
    );

    const model = new AzureChatOpenAI({
      azureOpenAIEndpoint: process.env.AZURE_OPENAI_API_ENDPOINT,
      azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY,
      azureOpenAIApiDeploymentName:
        process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
      modelName: process.env.AZURE_OPENAI_MODEL_NAME,
    });
    const standaloneQuestionChain = RunnableSequence.from([
      condenseQuestionPrompt,
      model,
      new StringOutputParser(),
    ]);

    let resolveWithDocuments: (value: Document[]) => void;
    const documentPromise = new Promise<Document[]>((resolve) => {
      resolveWithDocuments = resolve;
    });

    const retriever = vectorstore.asRetriever({
      callbacks: [
        {
          handleRetrieverEnd(documents) {
            resolveWithDocuments(documents);
          },
        },
      ],
    });

    const retrievalChain = retriever.pipe(combineDocumentsFn);

    const answerChain = RunnableSequence.from([
      {
        context: RunnableSequence.from([
          (input) => input.question,
          retrievalChain,
        ]),
        chat_history: (input) => input.chat_history,
        question: (input) => input.question,
      },
      answerPrompt,
      model,
    ]);

    const conversationalRetrievalQAChain = RunnableSequence.from([
      {
        question: standaloneQuestionChain,
        chat_history: (input) => input.chat_history,
      },
      answerChain,
      new BytesOutputParser(),
    ]);

    const stream = await conversationalRetrievalQAChain.stream({
      question: currentMessageContent,
      chat_history: formatVercelMessages(previousMessages),
    });

    const documents = await documentPromise;
    console.log("documents ", documents.length);
    const serializedSources = Buffer.from(
      JSON.stringify(
        documents.map((doc) => {
          return {
            pageContent: doc.pageContent.slice(0, 50) + "...",
            metadata: doc.metadata,
          };
        })
      )
    ).toString("base64");
    const sourceMetadata = documents.map((doc) => ({
      title: doc.metadata.title, // Or whatever metadata you want
      url: doc.metadata.url,
    }));

    return new StreamingTextResponse(stream, {
      headers: {
        "x-message-index": (previousMessages.length + 1).toString(),
        "x-message-sources": serializedSources,
      },
    });
  } catch (e: any) {
    return NextResponse.json({ error: e.message }, { status: e.status ?? 500 });
  }
}
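One hedged thing to check for issue 1: the template's ChatWindow reads sources from an x-sources response header rather than x-message-sources, so if the frontend is unchanged from the template (an assumption), a renamed header would silently display nothing. A sketch of the return using the template's header name:

return new StreamingTextResponse(stream, {
  headers: {
    "x-message-index": (previousMessages.length + 1).toString(),
    // Header name the template's ChatWindow looks for:
    "x-sources": serializedSources,
  },
});

For issue 2, note that handleRetrieverEnd resolves with whatever the retriever returned for the condensed standalone question, not the raw user question, so logging that rephrased question alongside the documents is a quick way to spot mismatches.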

Retrieval Agent not providing quality queries.

Hey guys, I picked up this project this morning and got everything set up (I had to learn a bunch about Supabase in order to get started), but I have a problem. My use case is to create an agent that is given a task and uses tools with access to different tables in Supabase to gather the information needed to answer the question.

First test: I uploaded a write-up on "Brand Tracking" to the table:

(screenshot of the table)

As you can see, the data is filled with the term "Brand Tracking".

Then I asked it about Brand Tracking:

(screenshot of the resulting query)

It seemed to generate a pretty lackluster query. Is this a normal query for this tool?

Here is the related code for the tool and the agent.

const chatModel = new ChatOpenAI({
  modelName: "gpt-3.5-turbo-1106",
  temperature: 0.9,
  // IMPORTANT: Must set "streaming: true" on OpenAI to enable final output streaming below.
  streaming: true,
});

const productKnowledgeStore = new SupabaseVectorStore(new OpenAIEmbeddings(), {
  client,
  tableName: "productknowledge",
  queryName: "match_documents",
});

const productKnowledgeRetriever = productKnowledgeStore.asRetriever();

The SQL:

CREATE FUNCTION match_documents (
  query_embedding VECTOR(1536),
  match_threshold FLOAT
) RETURNS SETOF documents AS $$
BEGIN
  RETURN QUERY
  SELECT *
  FROM documents
  WHERE documents.embedding <#> query_embedding < -match_threshold
  ORDER BY documents.embedding <#> query_embedding;
END;
$$ LANGUAGE plpgsql;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", AGENT_SYSTEM_TEMPLATE],
  // Add previous context to the system message
  ...previousMessages.map((msg: { content: string }) => [
    "system",
    msg.content,
  ]),
  [
    "human",
    `Please formulate a detailed query aimed to provide the human with information on the following input: {input}`,
  ],
  new MessagesPlaceholder("agent_scratchpad"),
]);

const agent = await createToolCallingAgent({
  llm: chatModel,
  tools: [productKnowledge, companyKnowledge],
  prompt,
});

const agentExecutor = new AgentExecutor({
  agent,
  tools: [productKnowledge, companyKnowledge],
  // Set this if you want to receive all intermediate steps in the output of .invoke().
  returnIntermediateSteps,
});

const result = await agentExecutor.invoke({
  input: currentMessageContent,
  chat_history: previousMessages,
  recursionLimit: 5,
});

I am positive that something is wrong here; I feel like it may be the SQL query or the query text created by the agent. When I build talk-to-docs apps without Supabase I get pretty great results, so I feel like I need to tweak something.
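One hedged idea, assuming the productKnowledge tool is built with LangChain's createRetrieverTool helper (the tool definition isn't shown above): the agent writes its search query based on the tool's name and description, so a vague description tends to produce vague queries. Tightening it is cheap to try:

import { createRetrieverTool } from "langchain/tools/retriever";

// Hypothetical tool definition; the name and description are illustrative.
const productKnowledge = createRetrieverTool(productKnowledgeRetriever, {
  name: "search_product_knowledge",
  description:
    "Searches the product knowledge base. Pass a short keyword-style query " +
    "with the key terms from the user's question, e.g. 'brand tracking'.",
});

It may also be worth comparing your match_documents signature against what SupabaseVectorStore's RPC call expects (the stock function takes a match_count parameter rather than match_threshold), since a mismatched RPC can degrade or empty the results.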

Retrieval Works But Doesn't Remember Previous Messages

I've seen something curious in my code. While the retrieval process appears fine, the model doesn't recall past chat messages, which leads to unexpected results in the conversation.

Here's what's happening: the model's replies don't consider the earlier messages, treating each one separately. This behavior shows up not just in my code but also on the demo website.

(screenshot: Screen Shot 2023-08-27 at 8:45:08 PM)

build error with starter template

When running this template via the Vercel Starter UI, I get the following build error. It should work out of the box, right? That's the whole idea of these starters...

[20:44:58.702] 
[20:44:58.765]   ▲ Next.js 14.2.3
[20:44:58.765] 
[20:44:58.836]    Creating an optimized production build ...
[20:45:29.608]  ✓ Compiled successfully
[20:45:29.610]    Linting and checking validity of types ...
[20:45:38.082] Failed to compile.
[20:45:38.083] 
[20:45:38.083] ./app/ai_sdk/agent/action.ts:18:31
[20:45:38.083] Type error: Type 'ChatPromptTemplate<any, any>' does not satisfy the constraint 'Runnable<any, any, RunnableConfig>'.
[20:45:38.083]   Property 'lc_runnable' is protected but type 'Runnable<RunInput, RunOutput, CallOptions>' is not a class derived from 'Runnable<RunInput, RunOutput, CallOptions>'.
[20:45:38.083] 
[20:45:38.204] error Command failed with exit code 1.
[20:45:38.204] info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
[20:45:38.226] Error: Command "yarn run build" exited with 1
[20:45:38.590] 

Why should wait for 300ms?

Thank you for providing a good example; reviewing the code has been very helpful. However, while examining it, I came across a section that waits for 300 ms if there are no messages yet. I'm curious why this behavior is included and whether there are any specific considerations to be aware of.

async function sendMessage(e: FormEvent<HTMLFormElement>) {
  e.preventDefault();
  if (messageContainerRef.current) {
    messageContainerRef.current.classList.add("grow");
  }
  if (!messages.length) {
    await new Promise((resolve) => setTimeout(resolve, 300));
  }
  if (chatEndpointIsLoading ?? intermediateStepsLoading) {
    return;
  }

Source does not return in UTF-8

The text displayed as the source is not rendered as UTF-8; all of the rest of the text is correctly formatted.

(screenshot of the garbled source text)

I have added

"Content-Type": "application/json; charset=utf-8",

to the StreamingTextResponse in retrieval/route, but that didn't help.

Any ideas?
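A hedged guess at the cause: the sources travel base64-encoded in the x-sources header, and decoding base64 on the client with plain atob() yields Latin-1 characters, which mangles multi-byte UTF-8 text. Decoding the bytes explicitly as UTF-8 avoids that (sourcesHeader below is a stand-in for the header value your UI reads):

// Sketch: decode a base64 header as UTF-8 instead of Latin-1.
const bytes = Uint8Array.from(atob(sourcesHeader), (c) => c.charCodeAt(0));
const sources = JSON.parse(new TextDecoder("utf-8").decode(bytes));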

Property 'lc_runnable' is protected but type 'Runnable<RunInput, RunOutput, CallOptions>' is not a class derived

Hello everyone,

I'm getting the error below while deploying the repo from the template. I did not change anything in the source code.

./app/ai_sdk/agent/action.ts:18:31
Type error: Type 'ChatPromptTemplate<any, any>' does not satisfy the constraint 'Runnable<any, any, RunnableConfig>'.
  Property 'lc_runnable' is protected but type 'Runnable<RunInput, RunOutput, CallOptions>' is not a class derived from 'Runnable<RunInput, RunOutput, CallOptions>'.

  16 |     const tools = [new TavilySearchResults({ maxResults: 1 })];
  17 |
> 18 |     const prompt = await pull<ChatPromptTemplate>(
     |                               ^
  19 |       "hwchase17/openai-tools-agent",
  20 |     );
  21 |
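This error typically means two different copies of @langchain/core ended up in node_modules, so one package's Runnable fails the protected-member compatibility check against the other's. A hedged sketch of the usual fix: pin a single @langchain/core version in package.json (the version below is a placeholder; use one compatible with your installed langchain packages), then reinstall:

{
  "resolutions": {
    "@langchain/core": "0.1.63"
  },
  "overrides": {
    "@langchain/core": "0.1.63"
  }
}

resolutions applies to Yarn and overrides to npm; keeping both in sync covers either package manager.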

[Feature] Upload documents

It would be great to have a document upload feature (txt, pdf, mdx) into the pinecone vector store so that we can benefit from having a long-term memory storage, what do you think?

Problems Deploying in Vercel (Module not found)

Hello,

I've tried to deploy this project in Vercel using their template's interface and I got the following error in the logs:

[19:16:22.433] Running build in Washington, D.C., USA (East) – iad1
[19:16:23.054] Cloning github.com/guilherme-argentino/puulpo-mvp-web-summit (Branch: main, Commit: 188b61d)
[19:16:23.418] Previous build cache not available
[19:16:24.073] Cloning completed: 1.018s
[19:16:24.420] Running "vercel build"
[19:16:24.896] Vercel CLI 33.5.3
[19:16:25.332] Warning: Detected "engines": { "node": ">=18" } in your `package.json` that will automatically upgrade when a new major Node.js Version is released. Learn More: http://vercel.link/node-version
[19:16:25.340] Installing dependencies...
[19:16:25.621] yarn install v1.22.17
[19:16:25.690] [1/5] Validating package.json...
[19:16:25.693] [2/5] Resolving packages...
[19:16:27.718] warning Resolution field "@langchain/[email protected]" is incompatible with requested version "@langchain/core@~0.1.41"
[19:16:31.917] warning Resolution field "@langchain/[email protected]" is incompatible with requested version "@langchain/core@~0.1.36"
[19:16:31.932] warning Resolution field "@langchain/[email protected]" is incompatible with requested version "@langchain/core@~0.1.41"
[19:16:34.096] [3/5] Fetching packages...
[19:16:54.951] [4/5] Linking dependencies...
[19:16:54.955] warning "ai > [email protected]" has unmet peer dependency "solid-js@^1.2".
[19:16:54.955] warning "ai > [email protected]" has unmet peer dependency "vue@>=3.2.26 < 4".
[19:16:54.955] warning "ai > [email protected]" has unmet peer dependency "svelte@^4.0.0".
[19:17:01.992] [5/5] Building fresh packages...
[19:17:02.160] success Saved lockfile.
[19:17:02.173] Done in 36.55s.
[19:17:02.321] Detected Next.js version: 14.0.1
[19:17:02.321] Running "yarn run build"
[19:17:02.534] yarn run v1.22.17
[19:17:02.562] $ next build
[19:17:04.496] Attention: Next.js now collects completely anonymous telemetry regarding usage.
[19:17:04.497] This information is used to shape Next.js' roadmap and prioritize features.
[19:17:04.497] You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
[19:17:04.497] https://nextjs.org/telemetry
[19:17:04.499] 
[19:17:04.598]    ▲ Next.js 14.0.1
[19:17:04.601] 
[19:17:04.601]    Creating an optimized production build ...
[19:17:04.670]  ⚠ Custom webpack configuration is detected. When using a custom webpack configuration, the Webpack build worker is disabled by default. To force enable it, set the "experimental.webpackBuildWorker" option to "true". Read more: https://nextjs.org/docs/messages/webpack-build-worker-opt-out
[19:17:20.527] Failed to compile.
[19:17:20.527] 
[19:17:20.528] ./node_modules/langchain/node_modules/@langchain/openai/dist/chat_models.js:9:0
[19:17:20.528] Module not found: Package path ./output_parsers/openai_tools is not exported from package /vercel/path0/node_modules/@langchain/core (see exports field in /vercel/path0/node_modules/@langchain/core/package.json)
[19:17:20.528] 
[19:17:20.528] https://nextjs.org/docs/messages/module-not-found
[19:17:20.528] 
[19:17:20.529] Import trace for requested module:
[19:17:20.529] ./node_modules/langchain/node_modules/@langchain/openai/dist/index.js
[19:17:20.529] ./node_modules/langchain/node_modules/@langchain/openai/index.js
[19:17:20.529] ./node_modules/langchain/dist/embeddings/openai.js
[19:17:20.529] ./node_modules/langchain/embeddings/openai.js
[19:17:20.529] ./app/api/retrieval/ingest/route.ts
[19:17:20.529] ./node_modules/next/dist/build/webpack/loaders/next-edge-app-route-loader/index.js?absolutePagePath=private-next-app-dir%2Fapi%2Fretrieval%2Fingest%2Froute.ts&page=%2Fapi%2Fretrieval%2Fingest%2Froute&appDirLoader=bmV4dC1hcHAtbG9hZGVyP25hbWU9YXBwJTJGYXBpJTJGcmV0cmlldmFsJTJGaW5nZXN0JTJGcm91dGUmcGFnZT0lMkZhcGklMkZyZXRyaWV2YWwlMkZpbmdlc3QlMkZyb3V0ZSZwYWdlUGF0aD1wcml2YXRlLW5leHQtYXBwLWRpciUyRmFwaSUyRnJldHJpZXZhbCUyRmluZ2VzdCUyRnJvdXRlLnRzJmFwcERpcj0lMkZ2ZXJjZWwlMkZwYXRoMCUyRmFwcCZhcHBQYXRocz0lMkZhcGklMkZyZXRyaWV2YWwlMkZpbmdlc3QlMkZyb3V0ZSZwYWdlRXh0ZW5zaW9ucz10c3gmcGFnZUV4dGVuc2lvbnM9dHMmcGFnZUV4dGVuc2lvbnM9anN4JnBhZ2VFeHRlbnNpb25zPWpzJmJhc2VQYXRoPSZhc3NldFByZWZpeD0mbmV4dENvbmZpZ091dHB1dD0mcHJlZmVycmVkUmVnaW9uPSZtaWRkbGV3YXJlQ29uZmlnPWUzMCUzRCE%3D&nextConfigOutput=&preferredRegion=&middlewareConfig=e30%3D!
[19:17:20.529] 
[19:17:20.530] 
[19:17:20.530] > Build failed because of webpack errors
[19:17:20.584] error Command failed with exit code 1.
[19:17:20.585] info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
[19:17:20.602] Error: Command "yarn run build" exited with 1
[19:17:21.168] 

Since I didn't make any changes to the code before deploying, I came here to ask for help understanding what is going on.

Thank you


Integrating Vercel KV With LangChain Stream

Howdy all,

I could use some guidance here, as I am trying to integrate the callback feature from "ai" that allows you to write the data to your KV database.

Essentially I am trying to integrate these two blocks of code; the latter block comes from Vercel's Next.js AI starter app. Any help would be great!

const chain = prompt.pipe(model).pipe(outputParser);

const stream = await chain.stream({
  chat_history: formattedPreviousMessages.join('\n'),
  input: currentMessageContent,
});

const stream = OpenAIStream(res, {
  async onCompletion(completion) {
    const title = json.messages[0].content.substring(0, 100);
    const id = json.id ?? nanoid();
    const createdAt = Date.now();
    const path = `/chat/${id}`;
    const payload = {
      id,
      title,
      userId: `uid-${userId}`,
      createdAt,
      path,
      messages: [
        ...messages,
        {
          content: completion,
          role: 'assistant',
        },
      ],
    };
    console.log(payload);
    await kv.hmset(`chat:${id}`, payload);
    await kv.zadd(`user:chat:${userId}`, {
      score: createdAt,
      member: `chat:${id}`,
    });
  },
});
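A hedged sketch of one way to combine them, assuming the chain's output parser emits bytes (e.g. BytesOutputParser): tee the LangChain stream so one branch feeds the HTTP response while the other is accumulated and persisted once the model finishes, mirroring what OpenAIStream's onCompletion does.

import { StreamingTextResponse } from "ai";

// `save` would wrap the kv.hmset / kv.zadd payload logic from the second block.
function streamAndPersist(
  stream: ReadableStream<Uint8Array>,
  save: (completion: string) => Promise<void>,
): StreamingTextResponse {
  const [responseBranch, logBranch] = stream.tee();
  (async () => {
    const reader = logBranch.getReader();
    const decoder = new TextDecoder();
    let completion = "";
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      completion += decoder.decode(value, { stream: true });
    }
    await save(completion);
  })().catch(console.error);
  return new StreamingTextResponse(responseBranch);
}

Usage would then be return streamAndPersist(stream, async (completion) => { /* build payload and write to KV */ });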

Error: failed to pipe response

Hi, I'm using createRetrievalChain from "langchain/chains/retrieval".

When I return StreamingTextResponse I'm getting "Error: failed to pipe response".

Full message:

TypeError [ERR_INVALID_ARG_TYPE]: The "chunk" argument must be of type string or an instance of Buffer or Uint8Array. Received an instance of Object

I'm returning StreamingTextResponse from Next.js POST method in my API route (route.js):

  const response = await retrievalChain.stream({ input: "...my question..." });
  return new StreamingTextResponse(response);

Any ideas what it could be?
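A hedged guess: createRetrievalChain streams partial objects (with keys like input, context, and answer) rather than strings or bytes, while StreamingTextResponse only accepts string or Uint8Array chunks, which matches the ERR_INVALID_ARG_TYPE message. One sketch of a workaround is mapping each chunk to its answer text before returning:

// Sketch; `retrievalChain` and the input are from the snippet above.
const encoder = new TextEncoder();
const objectStream = await retrievalChain.stream({ input: "...my question..." });
const textStream = objectStream.pipeThrough(
  new TransformStream({
    transform(chunk: any, controller) {
      // Only `answer` chunks carry generated text.
      if (typeof chunk.answer === "string") {
        controller.enqueue(encoder.encode(chunk.answer));
      }
    },
  }),
);
return new StreamingTextResponse(textStream);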
