
labs's People

Contributors

ciaranmn, hashdotai, judeallred, nonparibus, renovate[bot], thehabbos007, timdiekmann


labs's Issues

Pyodide Numpy Int Array Bug

User report: The numpy.random.choice function is failing.

The problem is caused by Pyodide not converting NumPy types properly: numpy.random.choice returns a numpy.int32 value, which for some reason gets converted to an empty array.

Mitigation: use random.choice from the standard library, or convert the NumPy result to a plain Python int.
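
A minimal sketch of both mitigations (the option list and variable names are illustrative, not taken from the report):

  import random
  import numpy as np

  options = [1, 2, 3]

  # Option 1: stay in the standard library so no NumPy scalar ever
  # crosses the Pyodide conversion boundary.
  choice = random.choice(options)

  # Option 2: cast the NumPy scalar to a built-in Python int before
  # handing it back to the runtime.
  choice = int(np.random.choice(options))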

🚀 Dependency Updates

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Repository problems

These problems occurred while renovating this repository. View logs.

  • WARN: File contents are invalid JSON but parse using JSON5. Support for this will be removed in a future release, so please change to a supported .json5 file name or ensure correct JSON syntax.

Pending Approval

These branches will be created by Renovate only once you click their checkbox below.

  • Update GitHub Action actions/setup-python to v4.7.1
  • Update GitHub Action Swatinem/rust-cache to v2.7.1
  • Update GitHub Action github/codeql-action to v2.22.7
  • Update GitHub Action taiki-e/install-action to v2.21.18
  • Update dependency pygit2 to v1.13.3
  • Update GitHub Action actions/checkout to v4
  • 🔐 Create all pending approval PRs at once 🔐

Detected dependencies

github-actions
.github/actions/setup-rust-ci/action.yml
  • actions/setup-python v4.7.0@61a6322f88396a6271a6ee3565807d608ecaddd1
.github/workflows/rust.yml
  • actions/checkout v3.6.0@f43a0e5ff2bd294095638e18286ca9a3d1956744
  • actions/checkout v3.6.0@f43a0e5ff2bd294095638e18286ca9a3d1956744
  • Swatinem/rust-cache v2.6.2@e207df5d269b42b69c8bc5101da26f7d31feddb4
  • taiki-e/install-action v2.17.7@cc5a5c56a296ea597e6ea38f551f25dee8be1225
  • github/codeql-action v2.21.5@00e563ead9f72a8461b24876bee2d0c2e8bd2ee8
  • actions/checkout v3.6.0@f43a0e5ff2bd294095638e18286ca9a3d1956744
  • Swatinem/rust-cache v2.6.2@e207df5d269b42b69c8bc5101da26f7d31feddb4
  • taiki-e/install-action v2.17.7@cc5a5c56a296ea597e6ea38f551f25dee8be1225
  • actions/setup-python v4.7.0@61a6322f88396a6271a6ee3565807d608ecaddd1
pip_requirements
.github/scripts/rust/requirements.txt
  • pygit2 == 1.9.2
  • toml == 0.10.2

  • Check this box to trigger a request for Renovate to run again on this repository

Unexpected behaviors running experiments in Python simulation models

Describe the bug

The problem occurs in this simulation when I run an experiment. Running the simulation as an experiment deletes some of the agents and behaviours, and therefore produces wrong results.
The normal run (200 steps) should look like this:
https://user-images.githubusercontent.com/7549404/232717172-dce44228-3383-44a0-bcb1-40fb71b9c73a.mov
The balls (workers) move to the blue grids (walls). The simulation also has black square agents (external walls) and some other agents.

The video below shows the run from the experiment (200 steps):
https://user-images.githubusercontent.com/7549404/232717124-1f7e7251-d5cf-4b68-8643-440179388ca5.mov

The experiment run has the following issues:

  • Balls are not moving
  • External walls are missing
  • Some of the agents are deleted

To reproduce

  1. Go to https://core.hash.ai/@alaabarazi/actorexperiment/main
  2. Select Experiments
  3. Click on "ex" in the menu
  4. Wait
  5. When the run is done, click on "View" to see the results

Compare this with the normal run of the simulation.

Expected behavior

The experiment should not delete agents or their behaviours. The run from the experiment should look quite similar to the normal run of the simulation.

Project URL

https://core.hash.ai/@alaabarazi/actorexperiment/main

Device

MacBook Pro (2019)

Operating system

macOS 13.2.1 (22D68)

Browser

Version 111.0.5563.146 (Official Build) (x86_64)

Additional context

No response

Unexpected behavior running Python Model Experiment

Describe the bug

Whenever I run, as an experiment, a Python model that loops over neighbours, I get an error. The simulation runs normally otherwise (normal run).

For example, in the simulation https://core.hash.ai/@alaabarazi/actorexperiment/main I am getting the following error:

2023-04-21 11:20:56
ERROR simulation: Error with 5 steps taken: Simulation error: Simulation (id: 644247393f4b1a91f8f31cc1) failed with error: Unique("Error in behavior worker_adding_value.py in the Python runtime:\n<class 'TypeError'>: string indices must be integers")

The cause of this error is the line `if neighbor["steps_needed1"][task_key] > 0 and neighbor["position"] == state["position"] and state["status"] in act_at_work_place:`.
When I comment out this line, the experiment works.

  for neighbor in context.neighbors():
      if neighbor["agent_name"] == state["system"]:
          if state["task"]:
              task_key = state["task"]["prev_delivery_key"]
              if neighbor["steps_needed1"][task_key] > 0 and neighbor["position"] == state["position"] and state["status"] in act_at_work_place:
                  ...

Any clue what causes this?
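
One possible defensive workaround, sketched below against the snippet above, is to guard against the field arriving as a serialized string in the experiment runtime; the parsing step is an assumption suggested by the TypeError, not confirmed platform behaviour:

  import json

  # Workaround sketch for the inner condition above. Assumption (not
  # confirmed): under the experiment runtime "steps_needed1" may arrive
  # as a serialized string rather than a dict, which would explain the
  # "string indices must be integers" TypeError.
  steps_needed = neighbor["steps_needed1"]
  if isinstance(steps_needed, str):
      steps_needed = json.loads(steps_needed)

  if (
      steps_needed.get(task_key, 0) > 0      # .get also tolerates a missing key
      and neighbor["position"] == state["position"]
      and state["status"] in act_at_work_place
  ):
      ...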

To reproduce

Go to https://core.hash.ai/@alaabarazi/actorexperiment/main
Run the experiment "ex"
Wait for some time
An error message pops up:
2023-04-21 11:20:56
ERROR simulation: Error with 5 steps taken: Simulation error: Simulation (id: 644247393f4b1a91f8f31cc1) failed with error: Unique("Error in behavior worker_adding_value.py in the Python runtime:\n<class 'TypeError'>: string indices must be integers")

Expected behavior

The experiment should run without errors.

Project URL

https://core.hash.ai/@alaabarazi/actorexperiment/main

Device

MacBook Pro, 2.6 GHz 6-Core Intel Core i7

Operating system

macOS 13.3.1 (22E261)

Browser

Version 112.0.5615.121 (Official Build) (x86_64)

Additional context

No response

`geo_color` is not rendering the color green

User report: if you set the value of "geo_color" to "green", a warning is thrown in the console:

Could not parse color from value '#fac33'

and the color in the geospatial display falls back to black. The geo_color property works in general: values like "red", "black", and "blue" all set the geospatial display appropriately.
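
A workaround sketch until this is fixed (the behavior signature follows the usual HASH Python form; the hex value is an assumed equivalent, not taken from the platform):

  def behavior(state, context):
      # Workaround sketch: use an explicit 6-digit hex code rather than
      # the named color "green", which the report says is mis-parsed.
      state["geo_color"] = "#00ff00"  # assumed hex equivalent of green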

Negative heights don't visualize correctly

Setting a negative height on an agent causes a strange visualization effect, as if the agent were a box with an open bottom.

Desired behavior: Flip the agent 'upside down'.
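
A possible interim workaround, sketched below under the assumption that an agent's base sits on its z-coordinate (as described in the positioning issue further down); this only approximates a flipped agent:

  def behavior(state, context):
      # Workaround sketch: emulate a downward-pointing agent by using the
      # absolute height and lowering the base by the same amount, since
      # an agent's base sits on its z-coordinate.
      h = -1.5                           # the negative height actually wanted
      state["height"] = abs(h)
      x, y, z = state["position"]
      state["position"] = [x, y, z + h]  # h is negative, so this lowers the base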

Unable to run experiments in Python models with specific imports

Describe the bug

The simulation runs normally without errors; however, the experiment run generates an error from step one, as follows:
"ERROR simulation: Error with 0 steps taken: Simulation error: Simulation (id: 643692c93f4b1a91f8eedf61) failed with error: Unique("Cannot reach language worker, shutting down")
Learn more about common errors in our docs"
The public link to the simulation (which includes one experiment) is given below under Project URL.
The simulation uses the Python datetime library, re, and numpy.
Many behaviours are written in Python and some in JavaScript.

To reproduce

  1. Click on the Experiments menu
  2. Click on the experiment "1"
  3. Wait for some time
  4. An error appears (see the attached screenshot, "Screenshot 2023-04-13 at 14 28 02")

Expected behavior

There should be no error, or at least an error message that describes the problem.

Project URL

https://core.hash.ai/@alaabarazi/actor_stable3/main

Device

MacBook Pro (2019), Intel

Operating system

macOS 13.2.1

Browser

Version 111.0.5563.146 (Official Build) (x86_64)

Additional context

No response

Agent position is displayed differently if `scale` is set

Agents are typically centered on their position in the xy plane, with the bottom of an agent's bounding box located on the z-coordinate.

However, if height is 0 but scale is non-zero, the agent is instead fully centered on its position in 3D space.
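
For illustration, a minimal sketch of the two cases (field names follow standard HASH agent state; the numbers are arbitrary):

  # Case 1: height set, scale unset. The bottom of the bounding box
  # sits on the agent's z-coordinate.
  agent_anchored_at_base = {
      "position": [0, 0, 0],
      "height": 2,
  }

  # Case 2: height is 0 but scale is non-zero. The agent is rendered
  # fully centered on its position in 3D space instead.
  agent_fully_centered = {
      "position": [0, 0, 0],
      "height": 0,
      "scale": [1, 1, 2],
  }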

hEngine "ninja_gn_binaries.py" not found?

Not exactly sure where this ninja_gn_binaries.py even comes from or why it's needed, but the build is failing due to it.

Describe the bug

Compiling v8 v0.45.0
error: failed to run custom build command for v8 v0.45.0

Caused by:
process didn't exit successfully: /Users/<>/hash/engine/target/debug/build/v8-1afa013990c9e136/build-script-build (exit status: 101)
--- stdout
cargo:rerun-if-changed=.gn
cargo:rerun-if-changed=BUILD.gn
cargo:rerun-if-changed=src/binding.cc
cargo:rerun-if-env-changed=CCACHE
cargo:rerun-if-env-changed=CLANG_BASE_PATH
cargo:rerun-if-env-changed=DENO_TRYBUILD
cargo:rerun-if-env-changed=DOCS_RS
cargo:rerun-if-env-changed=GENERATE_COMPDB
cargo:rerun-if-env-changed=GN
cargo:rerun-if-env-changed=GN_ARGS
cargo:rerun-if-env-changed=HOST
cargo:rerun-if-env-changed=NINJA
cargo:rerun-if-env-changed=OUT_DIR
cargo:rerun-if-env-changed=RUSTY_V8_ARCHIVE
cargo:rerun-if-env-changed=RUSTY_V8_MIRROR
cargo:rerun-if-env-changed=SCCACHE
cargo:rerun-if-env-changed=V8_FORCE_DEBUG
cargo:rerun-if-env-changed=V8_FROM_SOURCE
cargo:rustc-link-lib=static=rusty_v8

--- stderr
thread 'main' panicked at 'ninja_gn_binaries.py download failed: Os { code: 2, kind: NotFound, message: "No such file or directory" }', /Users/<>/.cargo/registry/src/github.com-1ecc6299db9ec823/v8-0.45.0/build.rs:274:8
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace

To reproduce

  1. Export short-squeeze simulation from hCore
  2. Fork hEngine to local IDE
  3. Change counter.rs -> counter.js and change dependencies.json to not reflect dependency on counter.rs
  4. Follow the instructions in https://github.com/hashintel/hash/tree/main/apps/engine#cli-arguments-and-options under the section "Run a simulation"
  5. Run as described in the hEngine README.md: `cargo run --bin cli -- --project /<directory>/short-squeeze/ single-run --num-steps 5`

Link to HASH Core

No response

Expected behavior

It's supposed to run just fine, since it's just a copy of what's already in hCore.

Rust compiler

1.65.0-nightly (d394408fb 2022-08-07)

Host

aarch64-apple-darwin

Target

nightly-aarch64-apple-darwin

Version

3.8.10

Additional context

Not using any Python here.

Field pull-down in Define New Metric form does not include all fields

Open a model, run it for many steps, then pause.
In the Analysis tab, select "Define new metric". Then, in the "Define new metric" form, open the "FIELD" pull-down.
Since there is no opportunity to select a particular type of agent, I expect to see the full list of all fields used by any type of agent. Instead, I only see a list of out-of-the-box default fields (such as agent_id, agent_name, behaviors, color, direction, etc.). I cannot define a new metric using the agent fields that actually exist, as shown in the attached screen captures.

Attached screen captures: Hash Bug 1, Hash Bug 2
