
Open source AI on-call developer 🧙‍♂️ Get relevant context & root cause analysis in seconds about production incidents and make on-call engineers 10x better 🏎️

Home Page: https://merlinn.co/?utm_source=github

License: Apache License 2.0



Merlinn - open-source AI on-call developer


Docs Β· Demo Β· Report Bug Β· Feature Request Β· Blog Β· Slack

Overview 💫

Merlinn is an AI-powered on-call engineer. It automatically jumps into incidents and alerts with you, providing useful, contextual insights and root cause analysis (RCA) in real time.

Why ❓

Most people don't like on-call shifts. They require engineers to be swift and solve problems quickly, and reaching the root cause of a problem takes time. That's why we developed Merlinn: we believe generative AI can help on-call developers solve issues faster.


Key features 🎯

  • Automatic RCA: Merlinn listens to production incidents/alerts and automatically investigates them for you.
  • Slack integration: Merlinn lives inside your Slack. Simply connect it and enjoy an on-call engineer that never sleeps.
  • Integrations: Merlinn integrates with popular observability/incident-management tools such as Datadog, Coralogix, Opsgenie and PagerDuty. It also integrates with tools such as GitHub, Notion, Jira and Confluence to gain insights on incidents.
  • Intuitive UX: Merlinn offers a familiar experience. You can talk to it and ask follow-up questions.
  • Secure: Self-host Merlinn and own your data. Always.
  • Open source: We love open source. Self-host Merlinn and use it for free.

Demo 🎥

Check out our demo video to see Merlinn in action.

Getting started 🚀

To run Merlinn, clone the repo and run the app using Docker Compose.

Prerequisites 📜

Ensure you have the following installed:

  • Docker & Docker Compose - The app runs in Docker containers. You need Docker Desktop, which bundles the Docker CLI, Docker Engine and Docker Compose.
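Before continuing, a minimal pre-flight sketch (POSIX shell) can confirm the toolchain is present; nothing here is specific to Merlinn:

```shell
# Check that the Docker CLI and the Compose v2 plugin are installed.
check_tool() {
  if command -v "$1" > /dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1 (install Docker Desktop)"
  fi
}

check_tool docker
docker compose version > /dev/null 2>&1 \
  && echo "Compose v2: ok" || echo "Compose v2: missing"
```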

Quick installation 🏎️

You can find the installation video here.

  1. Clone the repository:

    git clone git@github.com:merlinn-co/merlinn.git && cd merlinn
  2. Configure LiteLLM Proxy Server:

    We use the LiteLLM Proxy Server to interact with 100+ LLMs through a unified, OpenAI-compatible interface.

    1. Copy the example files:

      cp config/litellm/.env.example config/litellm/.env
      cp config/litellm/config.example.yaml config/litellm/config.yaml
    2. Set your OpenAI key inside config/litellm/.env as OPENAI_API_KEY. You can get your API key here. Rest assured, you won't be charged unless you use the API. For more details on pricing, check here.

  3. Copy the .env.example file:

    cp .env.example .env
  4. Open the .env file in your favorite editor (vim, vscode, emacs, etc):

    vim .env # or emacs or vscode or nano
  5. Update these variables:

    • SLACK_BOT_TOKEN, SLACK_APP_TOKEN and SLACK_SIGNING_SECRET - These variables are needed to talk to Merlinn on Slack. Please follow this guide to create a new Slack app in your organization.

    • (Optional) SMTP_CONNECTION_URL - This variable is needed to invite new members to your Merlinn organization via email and allow them to use the bot. It's not mandatory if you just want to test Merlinn and play with it. If you do want to send invites to your team members, you can use a service like SendGrid or Mailgun. The URL should follow this pattern: smtp://username:password@domain:port.

  6. Launch the project:

    docker compose up -d

That's it. You should be able to visit Merlinn's dashboard at http://localhost:5173. Simply create a user (with the same e-mail as your Slack user) and start configuring your organization. If something doesn't work for you, please check out our troubleshooting guide or reach out to us via our support channels.
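A hedged smoke test after launch (it assumes the default dashboard port and that `curl` is available):

```shell
# Poll a URL until it answers or we give up; prints a hint on failure.
wait_for_url() {
  url="$1"; tries="${2:-10}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS -o /dev/null "$url" 2> /dev/null; then
      echo "up: $url"; return 0
    fi
    i=$((i + 1)); sleep 1
  done
  echo "not reachable: $url (check 'docker compose logs')"; return 1
}

wait_for_url http://localhost:5173 3 || true
```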

The next steps are to configure your organization a bit more (connect incident management tools, build a knowledge base, etc). Head over to the connect & configure section in our docs for more information 💫

Using DockerHub images

If you want, you can pull our Docker images from DockerHub instead of cloning the repo & building from scratch.

To do that, follow these steps:

  1. Download configuration files:

    curl https://raw.githubusercontent.com/merlinn-co/merlinn/main/tools/scripts/download_env_files.sh | sh
  2. Follow steps 2 and 5 above to configure LiteLLM Proxy and your .env file respectively. Namely, you'd need to configure your OpenAI key at config/litellm/.env and configure your Slack credentials in the root .env.

  3. Spin up the environment using docker compose:

    curl https://raw.githubusercontent.com/merlinn-co/merlinn/main/tools/scripts/start.sh | sh

That's it 💫 You should be able to visit Merlinn's dashboard at http://localhost:5173.

Updating Merlinn 🧙‍♂️

  1. Pull the latest changes:

    git pull
  2. Rebuild images:

    docker compose up --build -d

Deployment ☁️

Visit our example guides to deploy Merlinn to your cloud.

Visualize Knowledge Base 🗺️

We use ChromaDB as our vector DB, and VectorAdmin to inspect the ingested documents. To use VectorAdmin, simply run this command:

docker compose up vector-admin -d

This command starts VectorAdmin on port 3001. Head over to http://localhost:3001 and configure your local ChromaDB. Note: since VectorAdmin runs inside a Docker container, enter http://host.docker.internal:8000 in the "host" field instead of http://localhost:8000, because "localhost" inside the container refers to the container itself, not the host.

Moreover, in the "API Header & Key" fields, use "X-Chroma-Token" as the header and the value of CHROMA_SERVER_AUTHN_CREDENTIALS from your .env as the key.
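As a hedged sanity check (assuming ChromaDB's default port and v1 API), you can hit the heartbeat endpoint with that same header:

```shell
# Read the Chroma auth token from an env file (prints nothing if absent).
chroma_token() {
  grep '^CHROMA_SERVER_AUTHN_CREDENTIALS=' "$1" 2> /dev/null | cut -d= -f2
}

TOKEN=$(chroma_token .env)
# Heartbeat call; harmless if the server isn't running yet.
curl -H "X-Chroma-Token: ${TOKEN}" http://localhost:8000/api/v1/heartbeat || true
```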

To learn how to use VectorAdmin, visit the docs.

Support and feedback 👷‍♀️

In order of preference, the best ways to communicate with us are:

  • GitHub Discussions: Contribute ideas, ask for support and report bugs (preferred, as discussions remain a static & permanent record for other community members).
  • Slack: community support. Click here to join.
  • Privately: contact at [email protected]

Contributing to Merlinn ⛑️

If you're interested in contributing to Merlinn, check out our CONTRIBUTING.md file 💫 🧙‍♂️

Troubleshooting ⚒️

If you encounter any problems/errors/issues with Merlinn, check out our troubleshooting guide. We try to update it regularly and fix urgent problems as soon as possible.

Moreover, feel free to reach out to us at our support channels.

Telemetry 🔒

By default, Merlinn automatically sends basic usage statistics from self-hosted instances to our server via PostHog.

This allows us to:

  • Understand how Merlinn is used so we can improve it.
  • Track overall usage for internal purposes and external reporting, such as for fundraising.

Rest assured, the data collected is not shared with third parties and does not include any sensitive information. We aim to be transparent, and you can review the specific data we collect here.

If you prefer not to participate, you can easily opt-out by setting TELEMETRY_ENABLED=false inside your .env.
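A minimal shell sketch of the opt-out (it assumes a standard .env file and adds the flag if it is missing):

```shell
# Set TELEMETRY_ENABLED=false in an env file: rewrite the line if present,
# append it otherwise.
set_telemetry_off() {
  env_file="$1"
  if grep -q '^TELEMETRY_ENABLED=' "$env_file" 2> /dev/null; then
    sed -i.bak 's/^TELEMETRY_ENABLED=.*/TELEMETRY_ENABLED=false/' "$env_file"
  else
    echo 'TELEMETRY_ENABLED=false' >> "$env_file"
  fi
}

set_telemetry_off .env
```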

License 📃

This project is licensed under the Apache 2.0 license - see the LICENSE file for details.

Learn more 🔍

Check out the official website at https://merlinn.co for more information.

Contributors ✨

Built with ❤️ by Dudu & Topaz

Dudu: Github, Twitter

Topaz: Github, Twitter


merlinn's Issues

AWS Integration + webhook for alerts

Integrate as much of AWS as we can to learn about the environment, for example:

  • EC2 - how many instances, OS, versions...
  • S3 - how many buckets...
  • Load balancers
  • AWS Config
  • AWS WAF rules - webhook for alerts
  • CloudTrail
  • CloudWatch
  • SNS topics
  • GuardDuty - webhook
  • Security Hub - webhook
  • Inspector - webhook

SigNoz Integration

Add an integration with SigNoz. The integration, in the initial phase, should do the following:

  • Read alerts information via the API
  • Read logs via the API

Continuous ingestion of data

We would love to be able to ingest data continuously, without having to do it manually on a daily/weekly basis.

Fix mismatches between `.env.example` and `.env`

Overview

We tend to modify our environment variables a lot. We do this by changing our .env locally, propagating the changes to our containers via docker-compose-common.yaml, and then updating .env.example.

However, when people pull the latest changes, their local .env remains the same, since it is not committed to git (it can contain secrets, like the Slack bot token). This creates bugs and mismatches between environments.

We need to find a better way to sync everyone to the latest environment variables setup.
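Until a better mechanism exists, a small shell sketch can flag variables that .env.example defines but a local .env lacks (filenames assume the repo layout):

```shell
# Print variable names present in the first env file but missing from the second.
missing_env_keys() {
  grep -o '^[A-Za-z_][A-Za-z0-9_]*' "$1" 2> /dev/null | sort -u > /tmp/_example_keys
  grep -o '^[A-Za-z_][A-Za-z0-9_]*' "$2" 2> /dev/null | sort -u > /tmp/_local_keys
  comm -23 /tmp/_example_keys /tmp/_local_keys  # keys only in the example file
}

missing_env_keys .env.example .env
```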

Support configurable OpenAI endpoint

Description:
Being able to use other endpoints would greatly help with adoption and flexibility on the user's end. OpenAI-compatible endpoints are abundant in both self-hosted and enterprise solutions. Just being able to define such an endpoint would allow the use of services like AWS Bedrock (via LiteLLM) and LocalAI.

Ask:

  • Allow defining an OPENAI_ENDPOINT (or similarly named) configurable environment variable alongside the API token.
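In the meantime, LiteLLM's proxy config can already route a model to an OpenAI-compatible base URL. A hedged sketch of config/litellm/config.yaml (the model name and api_base value are illustrative, not taken from this repo):

```yaml
model_list:
  - model_name: gpt-4o                     # name the proxy exposes (illustrative)
    litellm_params:
      model: openai/gpt-4o                 # upstream provider/model
      api_base: http://localhost:8080/v1   # any OpenAI-compatible endpoint, e.g. LocalAI
      api_key: os.environ/OPENAI_API_KEY   # read the key from the environment
```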

Langchain throws `Cannot read properties of null`

For some reason, the messages array is sometimes null in the backend, resulting in an error from LangChain:

2024-07-23 09:37:04 TypeError: Cannot read properties of null (reading 'length')
2024-07-23 09:37:04 at new AIMessage (/app/node_modules/@langchain/core/dist/messages/ai.cjs:36:30)
2024-07-23 09:37:04 at openAIResponseToChatMessage (/app/node_modules/@langchain/openai/dist/chat_models.cjs:65:20)
2024-07-23 09:37:04 at ChatOpenAI._generate (/app/node_modules/@langchain/openai/dist/chat_models.cjs:641:30)
2024-07-23 09:37:04 at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
2024-07-23 09:37:04 at async Promise.allSettled (index 0)
2024-07-23 09:37:04 at async ChatOpenAI._generateUncached (/app/node_modules/@langchain/core/dist/language_models/chat_models.cjs:118:25)
2024-07-23 09:37:04 at async ChatOpenAI.invoke (/app/node_modules/@langchain/core/dist/language_models/chat_models.cjs:54:24)
2024-07-23 09:37:04 at async RunnableSequence.invoke (/app/node_modules/@langchain/core/dist/runnables/base.cjs:1066:33)
2024-07-23 09:37:04 at async RunnableAgent.plan (/app/node_modules/langchain/dist/agents/agent.cjs:123:24)
2024-07-23 09:37:04 at async AgentExecutor._call (/app/node_modules/langchain/dist/agents/executor.cjs:374:26)
2024-07-23 09:37:04 at async AgentExecutor.call (/app/node_modules/langchain/dist/chains/base.cjs:120:28)
2024-07-23 09:37:04 at async runAgent (/app/services/api/dist/src/agent/helper.js:66:24)
2024-07-23 09:37:04 at async getCompletions (/app/services/api/dist/src/routers/chat.js:99:47)
2024-07-23 09:37:04 developmentError error: AppError: Cannot read properties of null (reading 'length')
2024-07-23 09:37:04 at getCompletions (/app/services/api/dist/src/routers/chat.js:115:19)
2024-07-23 09:37:04 at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
2024-07-23 09:37:04 statusCode: 500,
2024-07-23 09:37:04 status: 'error',
2024-07-23 09:37:04 internalCode: 33
2024-07-23 09:37:04 }

It seems LangChain has solved this issue in their latest version. We might need to upgrade.

Unable to start docker after first 5 steps of installation guide

I have configured the OpenAI and Slack configs as specified and haven't enabled any integrations; I just wanted to see the dashboard once.

However, docker compose fails:

(.venv) ➜ merlinn git:(main) docker compose up -d
service "data-processor-common" can't be used with extends as it declare depends_on

Google Drive Integration

We would love it if Merlinn could ingest data from Google Drive daily or weekly so we can query our data from there. We would like support for the following formats:

  1. PDF
  2. xlsx
  3. docx
  4. Google Docs
  5. Google Sheets

Advanced Mode Prompt Editing

Description:
Allow advanced users to modify the core prompts to fit their needs more appropriately.

Ask:

  • A page which allows the user to modify the core prompts with templating support
  • A button to reset the prompt to default, should the user need it.
  • Bonus: Prompts are configurable via file or env var to allow for better gitops deployment (e.g. k8s)

Move `TELEMETRY_ENABLED` to the main `.env`

Overview
Right now, the TELEMETRY_ENABLED flag is defined only inside the api service (inside services/api/.env.dev). This is sub-optimal because other services might want to send telemetry as well, and they need to be aware of this variable.

Solution
Move TELEMETRY_ENABLED to the main .env file (first add it to the .env.example file), then propagate it to all the custom services (api, data-processor, slackbot, dashboard) via an environment variable.
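A hedged sketch of the propagation step in docker-compose-common.yaml (service names follow the repo layout; the `true` default is illustrative):

```yaml
# Propagate TELEMETRY_ENABLED from the root .env to each custom service.
services:
  api:
    environment:
      - TELEMETRY_ENABLED=${TELEMETRY_ENABLED:-true}
  data-processor:
    environment:
      - TELEMETRY_ENABLED=${TELEMETRY_ENABLED:-true}
```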
