pfeinsper / drone-swarm-search

The Drone Swarm Search project provides an environment for SAR missions built on PettingZoo, where agents, represented by drones, are tasked with locating targets identified as shipwrecked individuals.

Home Page: https://pfeinsper.github.io/drone-swarm-search/

License: MIT License

Languages: Python 88.23%, TeX 11.77%
Topics: pettingzoo, rl, ai, pygame, python, multi-agent-reinforcement-learning, multiagent-reinforcement-learning

drone-swarm-search's Introduction


Drone Swarm Search Environment (DSSE)

Welcome to the official GitHub repository for the Drone Swarm Search Environment (DSSE). This project offers a comprehensive simulation platform designed for developing, testing, and refining search strategies using drone swarms. Researchers and developers will find a versatile toolset supporting a broad spectrum of simulations, which facilitates the exploration of complex drone behaviors and interactions in dynamic, real-world scenarios.

In this repository, we have implemented two distinct types of environments. The first is a dynamic environment that simulates maritime search and rescue operations for shipwreck survivors. It models the movement of individuals in the sea using a dynamic probability matrix, and the drones' objective is to locate and identify these individuals. The second is an environment built on the Lagrangian particle simulation from the open-source Opendrift library, which incorporates real-world ocean and wind data to create a probability matrix for drone SAR tasks. In this scenario, drones must cover the full search area in the shortest time possible while prioritizing higher-probability areas.

📚 Documentation Links

  • Documentation Site: Access comprehensive documentation, including tutorials and usage examples, for the Drone Swarm Search Environment (DSSE). Ideal for users seeking detailed information about the project's capabilities and how to integrate them into their own applications.

  • Algorithm Details: Explore in-depth discussions and source code for the algorithms powering the DSSE. This section is perfect for developers interested in the technical underpinnings and enhancements of the search algorithms.

  • PyPI Repository: Visit the PyPI page for DSSE to download the latest release, view release histories, and read additional installation instructions.

DSSE - Search Environment

🎥 Visual Demonstrations


Above: A simulation showing how drones adjust their search pattern over a grid.

🎯 Outcome

Two outcomes are illustrated: the target is found, and the target is not found.

⚡ Quick Start

⚙️ Installation

Quickly install DSSE using pip:

pip install DSSE

🛠️ Basic Env Search Usage

from DSSE import DroneSwarmSearch

env = DroneSwarmSearch(
    grid_size=40,
    render_mode="human",
    render_grid=True,
    render_gradient=True,
    vector=(1, 1),
    timestep_limit=300,
    person_amount=4,
    dispersion_inc=0.05,
    person_initial_position=(15, 15),
    drone_amount=2,
    drone_speed=10,
    probability_of_detection=0.9,
    pre_render_time=0,
)


def random_policy(obs, agents):
    actions = {}
    for agent in agents:
        actions[agent] = env.action_space(agent).sample()
    return actions


opt = {
    "drones_positions": [(10, 5), (10, 10)],
    "person_pod_multipliers": [0.1, 0.4, 0.5, 1.2],
    "vector": (0.3, 0.3),
}
observations, info = env.reset(options=opt)

rewards = 0
done = False
while not done:
    actions = random_policy(observations, env.get_agents())
    observations, rewards, terminations, truncations, infos = env.step(actions)
    done = any(terminations.values()) or any(truncations.values())
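
As a rough illustration of how these pieces compose into a small evaluation run, the sketch below reuses the env, opt, and random_policy defined above and sums the per-drone rewards each step. It assumes the standard PettingZoo parallel API, where rewards, terminations, and truncations are dicts keyed by agent name; num_episodes is a hypothetical choice.

# Minimal evaluation sketch reusing `env`, `opt`, and `random_policy` from above.
# Assumes the PettingZoo parallel API: rewards is a dict keyed by agent name.
num_episodes = 3  # hypothetical number of evaluation episodes
for episode in range(num_episodes):
    observations, info = env.reset(options=opt)
    total_reward = 0.0
    done = False
    while not done:
        actions = random_policy(observations, env.get_agents())
        observations, rewards, terminations, truncations, infos = env.step(actions)
        total_reward += sum(rewards.values())
        done = any(terminations.values()) or any(truncations.values())
    print(f"Episode {episode}: total reward = {total_reward}")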

DSSE - Coverage Environment

🎥 Visual Demonstrations


Above: A simulation showing how drones adjust their search pattern over a grid.

⚡ Quick Start

⚙️ Installation

Install DSSE with coverage support using pip:

pip install DSSE[coverage]
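
Note that some shells (for example, zsh) treat square brackets specially, so you may need to quote the argument: pip install "DSSE[coverage]".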

🛠️ Basic Coverage Usage

from DSSE import CoverageDroneSwarmSearch

env = CoverageDroneSwarmSearch(
    drone_amount=3,
    render_mode="human",
    disaster_position=(-24.04, -46.17),  # (lat, long)
    pre_render_time=10, # hours to simulate
)

opt = {
    "drones_positions": [(0, 10), (10, 10), (20, 10)],
}
obs, info = env.reset(options=opt)

step = 0
while env.agents:
    step += 1
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)

print(infos["drone0"])
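
If you want more than the final info of a single drone, a small variation of the loop above can accumulate the reward each drone collects. This is only a sketch, assuming the PettingZoo parallel API, where rewards and infos are dicts keyed by agent name.

from collections import defaultdict

# Sketch: same random-action loop, but tracking cumulative reward per drone.
# Assumes rewards is a dict keyed by agent name (PettingZoo parallel API).
totals = defaultdict(float)
obs, info = env.reset(options=opt)
while env.agents:
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    obs, rewards, terminations, truncations, infos = env.step(actions)
    for agent, reward in rewards.items():
        totals[agent] += reward
print(dict(totals))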

🤝 Contributing

We welcome contributions from developers to improve and expand our repository. Here are some ways you can contribute:

  1. Creating Issues: If you encounter any bugs, have suggestions for new features, or have a question, please create an issue on our GitHub repository. This helps us keep track of what needs to be addressed and prioritize improvements.

  2. Submitting Pull Requests (PRs): We encourage you to fork the repository and make your own modifications. Once you have made changes, submit a pull request for review. Ensure your PR includes a clear description of the changes and any relevant information to help us understand the modifications.

Testing Your Contributions

To maintain code stability, we have a suite of tests that must be run before any code is merged. We use Pytest for testing. Before submitting your pull request, make sure to run these tests to ensure that your changes do not introduce any new issues.

To run the tests, use the following command:

pytest DSSE/tests/

Our test suite is divided into several parts, each serving a specific purpose:

  • Environment Testing: Found in DSSE/tests/test_env.py and DSSE/tests/test_env_coverage.py, these tests ensure that both the search and coverage environments are set up correctly and function as expected. This includes validating the initialization, state updates, and interaction mechanisms for both environments.

  • Matrix Testing: Contained in DSSE/tests/test_matrix.py, these tests validate the correctness and functionality of the probability matrix.
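
Each of these files can also be run on its own while you iterate on a change, for example pytest DSSE/tests/test_matrix.py, or pytest -k with a test name to narrow the run further.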

📖 How to cite this work

If you use this package, please consider citing it with the following BibTeX entry:

@software{Laffranchi_Falcao_DSSE_An_environment_2024,
    author = {
                Laffranchi Falcão, Renato and
                Custódio Campos de Oliveira, Jorás and
                Britto Aragão Andrade, Pedro Henrique and
                Ribeiro Rodrigues, Ricardo and
                Jailson Barth, Fabrício and
                Basso Brancalion, José Fernando
            },
    doi = {10.5281/zenodo.12659848},
    title = {{DSSE: An environment for simulation of reinforcement learning-empowered drone swarm maritime search and rescue missions}},
    url = {https://doi.org/10.5281/zenodo.12659848},
    version = {0.2.5},
    month = jul,
    year = {2024}
}

drone-swarm-search's People

Contributors

enricofd, fbarth, jorasoliveira, leonardodma, lfcarrete, manuel-castanares, pedro2712, renatex333, ricardoribeirorodrigues

drone-swarm-search's Issues

There is something wrong with the episode termination criterion

When running an agent with the REINFORCE algorithm, I noticed the following behavior:

Episode = 491, Actions = 7, Rewards = -1105.0
Episode = 492, Actions = 1, Rewards = -1000
Episode = 493, Actions = 1, Rewards = -1000
Episode = 494, Actions = 1, Rewards = -1000
Episode = 495, Actions = 6, Rewards = -1203.0
Episode = 496, Actions = 6, Rewards = -1203.0
Episode = 497, Actions = 4, Rewards = -1003
Episode = 498, Actions = 2, Rewards = -1100.0
Episode = 499, Actions = 18, Rewards = -1314.0
Episode = 500, Actions = 5, Rewards = -1202.0
Episode = 501, Actions = 4, Rewards = -1102.0
Episode = 502, Actions = 3, Rewards = -1101.0
Episode = 503, Actions = 52, Rewards = -1645.0

This is an environment with a single drone, positioned at [25, 25]. How can an episode end after only 1 action in this scenario?

Bug when running basic_env.py

When basic_env.py is executed, the following error occurs:

RuntimeWarning: invalid value encountered in scalar divide
  normalizedProb = prob / max_matrix
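
The warning points to a division by zero when the probability matrix is all zeros. As a hedged illustration only (normalize_probability is a hypothetical helper, not the project's actual code), a guard of the following shape avoids the NaN result:

import numpy as np

def normalize_probability(prob: np.ndarray) -> np.ndarray:
    # Hypothetical sketch: guard against an all-zero matrix so the
    # division never produces NaN or raises this RuntimeWarning.
    max_value = prob.max()
    if max_value == 0:
        return np.zeros_like(prob)
    return prob / max_value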
