
mephisto's Introduction

Mephisto

code style: black code style: prettier

Mephisto makes crowdsourcing easier.

We provide a platform for launching, monitoring, and reviewing your crowdsourcing tasks. Tasks made on Mephisto can easily be open sourced as part of other projects. Like the chess-playing automaton we've adopted the name from, Mephisto hides the complexity of projects that need human interaction or input.

You can find complete details about the project on our docs website.

Quickstart

Get started in 10 minutes

How to run projects

Want to help?

Check out our guidelines for contributing and then take a look at some of our tagged issues: good first issue, help wanted.

For library authors, you may also find the how to contribute documentation helpful.

License

Mephisto is MIT licensed. See the LICENSE file for details.

Citation

If you use Mephisto in your work, please consider citing the project. It helps us prioritize features based on our users, and helps others discover implementations that may be relevant to their projects.

@misc{mephisto,
  doi = {10.48550/ARXIV.2301.05154},
  url = {https://arxiv.org/abs/2301.05154},
  author = {Urbanek, Jack and Ringshia, Pratik},
  keywords = {Artificial Intelligence (cs.AI), Human-Computer Interaction (cs.HC), FOS: Computer and information sciences},
  title = {Mephisto: A Framework for Portable, Reproducible, and Iterative Crowdsourcing},
  publisher = {arXiv},
  year = {2023},
  copyright = {arXiv.org perpetual, non-exclusive license}
}

mephisto's People

Contributors

alexandresablayrolles, bottler, dependabot[bot], edwardguo61, ericmichaelsmith, etesam913, facebook-github-bot, gzhihongwei, jackurb, jxmsml, liushh, macota, meganung, meta-paul, mjkanji, mnahinkhan, mojtaba-komeili, moyapchen, mwillwork, preithofer, pringshia, rebecca-qian, salelkafrawy, sghmk12, snyxan, thammegowda, vaibhavad, wayne163, xksteven, yash621


mephisto's Issues

`create-react-app` template for review-based tasks using the Mephisto CLI

Create a package called cra-template-mephisto-review in the /packages/ folder in the repo. This package will be a custom template for create-react-app projects, as described here: https://create-react-app.dev/docs/custom-templates/

The goal of this template package will be to simplify the steps outlined in this PR, specifically Step 2:
#270


Desired final state:

npx create-react-app my-review --template mephisto-review

should replace all of Step 2 in the linked PR above.

Stretch goal: Improve the CSS and design of the initial template as well.

[TODO Roundup] Database Management Decisions

Summary

Some decisions for how to manage the database and data model, possible speed improvements, and more have been put off. It's about time to decide on whether these are important to pursue, or if we should just leave them behind.

Requirements

  • grep '# TODO(#101)' returns no results

[Feature Request] Remove/Edit requester from db

It would be nice to have functionality that does the opposite of db.new_requester(), in case someone enters their requester name or requester type incorrectly, or at least to be able to edit the requester type or name by calling some function. Currently the database only supports new_requester, and if any changes are made under the same requester name, it throws EntryAlreadyExistsException.
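
A minimal sketch of what such an API could look like, using an in-memory stand-in (RequesterRegistry, edit_requester, and remove_requester are invented names for illustration; Mephisto's actual database does not expose these methods):

```python
# Hypothetical sketch only: the class and method names below are invented
# to illustrate the requested API, not Mephisto's real LocalMephistoDB.
class RequesterRegistry:
    def __init__(self):
        self._requesters = {}  # name -> provider_type

    def new_requester(self, name, provider_type):
        if name in self._requesters:
            raise ValueError(f"EntryAlreadyExists: {name}")
        self._requesters[name] = provider_type

    def edit_requester(self, name, new_name=None, new_provider_type=None):
        # Fix a mistyped name or provider type in place
        provider_type = self._requesters.pop(name)
        self._requesters[new_name or name] = new_provider_type or provider_type

    def remove_requester(self, name):
        # The inverse of new_requester()
        del self._requesters[name]
```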


[TODO Roundup] Improved cleanup on task failure

Summary

In general, we should strive to fully clean up tasks when a run ends, either intentionally or unintentionally. Unfortunately, we aren't always able to get everything, so it should be possible to assess whether the resources for a task run have been successfully resolved after a shutdown to be able to force a shutdown again.

One possible implementation would be to write some kind of info for "last live time" on TaskRuns, and then scrub can go through anything that isn't in a final state then check against the last live time to see if we should be killing them.

Requirements

  • Implement a CrowdProvider.scrub() method that will be used to remove and delete any pending tasks that have been disconnected
  • grep '# TODO(102)' returns no results.
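
The "last live time" idea above could be sketched roughly as follows (the record shape, state names, and timeout value are assumptions for illustration, not Mephisto's actual data model):

```python
import time

# Assumed constants for the sketch; real values would be configurable.
STALE_AFTER_SECONDS = 300
FINAL_STATES = {"completed", "expired", "failed"}

def scrub(task_runs, now=None):
    """Force any non-final run that hasn't reported liveness recently
    back through the shutdown path. Each run is a dict with "id",
    "state", and "last_live_time" keys (a stand-in for TaskRun)."""
    now = now if now is not None else time.time()
    killed = []
    for run in task_runs:
        if run["state"] in FINAL_STATES:
            continue
        if now - run["last_live_time"] > STALE_AFTER_SECONDS:
            run["state"] = "failed"  # mark so shutdown can be forced again
            killed.append(run["id"])
    return killed
```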

QA Data Collection Task

Hi,

I want to port QA data collection task from ParlAI to Mephisto.

Here a single MTurk worker will write question-answer pairs relevant to a paragraph. Will this require a different Mephisto blueprint?
In ParlAI there were tasks for data loading, like SQuAD and Wikipedia. Any guidelines on how to include the same functionality in Mephisto?

ParlAI Chat Demo - Something wrong with worker pairing and agent status updates

Hi,

I have been doing some pilot studies for a crowdsourcing task that pairs 2 MTurk workers for a conversation. I noticed some unusual behavior in some assignments. Specifically, there are cases where the worker starts the HIT and gets the partner timeout message within a few seconds. I'll list down the steps to replicate the task setup and provide the logs below.

I forked Mephisto (vaibhavad/Mephisto) (just a few hours ago) and made some minor changes:

  1. Changed the logging (vaibhavad/Mephisto@f5158ed) in supervisor.py, operator.py, and blueprint.py so that issue-specific information is logged.
  2. Statically linked packages (vaibhavad/Mephisto@a18ac6e), as somehow node modules were not working on my system (#325)
  3. Changed task configuration to 10 conversations (vaibhavad/Mephisto@c00c4b4), and using custom_prebuilt.yaml so that bundle.js is picked from webapp.

Running the task -
I used the following steps

git clone https://github.com/vaibhavad/Mephisto.git
pip install parlai
cd Mephisto
pip install -e .
mephisto register mturk_sandbox name=my_mturk_user_sandbox access_key_id=<ACCESS_KEY> secret_access_key=<SECRET_KEY>
cd Mephisto/packages/mephisto-task
npm install; npm run dev
cd Mephisto/packages/bootstrap-chat
npm install; npm run dev
cd Mephisto/examples/parlai_chat_task_demo/webapp
npm install; npm run dev
cd ..
python parlai_test_script.py mephisto/architect=heroku mephisto.provider.requester_name=my_mturk_user_sandbox

I tested the system using three different MTurk Sandbox accounts, so 3 workers are registered. I frequently returned the task midway and started new ones from different accounts to replicate the scenario I observed in production. Here are some observations (with reference to the logs):
mephisto_logs.txt

  1. (Line 79) Task available on Worker Sandbox
  2. (Line 159) Worker 2 returned the HIT as Agent 2, and started to work on a new task as Agent 4, but the status is updated after ~1-2 minutes (Line 194). In some cases we saw that this time to update the agent status is even longer.
  3. (Line 327) As we are testing with only 3 workers, having 6 in_task agent statuses means some of these states are stale.
  4. The specific abnormal behavior we observed is visible from Line 340-347. Assignment 3 was originally launched with Agents 5 and 6 (Line 206). Their statuses changed in Line 331. Agent 11 is created and paired without any waiting with Agent 5 (Line 340), although Agent 5 status was returned, not waiting. Agent 6 status updated from partner disconnect to completed, which is also unusual. Finally, Agent 11 gets partner disconnect within 5 seconds of starting the Assignment (Line 347).
  5. Similar behavior is also observed in Line 410-414. Agent 14 is paired with Agent 5 although Agent 5's status is timeout.

Is there a way to make sure that only agents with status waiting are paired?
How does the Heroku server declare an agent disconnected/returned? I am assuming it must be based on the frequency of alive signals received. Can you point me to that specific code section?
Also, the unusual pairing mentioned above is always preceded by the line Updating a final status, was timeout/returned and want to set to in task, which I'm assuming refers to the status of Agent 5 (Line 339 and 409). It comes from the update_status function in data_model/agent.py, although I don't quite understand the sequence of function calls that leads to it being called.
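
For the first question, the kind of guard being asked for could look something like this sketch (the agent records and status strings are stand-ins for Mephisto's internals, not its real API):

```python
WAITING = "waiting"

def pick_partner(candidates):
    """Return the first agent whose *current* status is 'waiting', else None.

    Re-checking status at pairing time, rather than trusting a possibly
    stale queue entry, avoids matching against agents that have since
    returned or timed out.
    """
    for agent in candidates:
        if agent["status"] == WAITING:
            return agent
    return None
```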

I'll be very grateful if you can help me with this. :)

[Feature Request] MTurkProvider filter workers based on timezone.

Anecdotal evidence suggests that workers active during 9-to-5 working hours are more effective at certain tasks, so there has been an ask to make it possible to target workers based on time zone (or to schedule launches for optimal times).

Important details - must respect an organization's set preferences on allowed workers and locations.

Full scoping to come.

[TODO Roundup] Make Operator more configurable

Summary

The Mephisto Operator is the most powerful class in all of Mephisto, in that it does the heavy lifting of launching and running jobs, and configuring the rest of the Mephisto architecture to do what's intended. One painful area of this flow is actually configuring the launch itself.

For context, Mephisto needs to be able to send configuration options forward to the frontend in order for people to be able to launch jobs from there. These configuration options also need to be persisted somehow to ensure that equivalent jobs can be re-launched later. As such there are a number of flows around trying to send required arguments. See Operator.parse_and_launch_run for the full flow that finds and parses these arguments.

The goal is to simplify the process for launching scripts from argument dicts, and to ensure the contents within are saved to the task_run's data directory for reproducibility in the future. (As of now, the TaskConfig handles part of this, but only does so for the parsed string arguments and not those passed additionally by dict).

As a stretch goal, it would be nice to move away from ArgParser dependence, as this has us locked to a few less-than-optimal coding situations (including converting passed dictionaries to string args, re-parsing, initializing an argparser during a critical path, ++).

Requirements

  • Create parse_arguments_and_launch_run and launch_run_from_argument_dict methods in Operator, which act as wrappers around the basic parse_and_launch_run function. This refactor will involve sharing some code between the two methods (with the former following essentially the same code flow as parse_and_launch_run has now). The big question to solve is how to properly handle ensuring both methods pass through the same assert_args flow when the version from the dict doesn't run through the argparser.
  • Update examples to use the launch_run_from_argument_dict method.
  • Add and wire missing arguments.
  • Write documentation on parse_arguments_and_launch_run.

Wishlist

  • Create a new class (potentially extending ArgumentParser) that handles the process of registering arguments, being able to display registered arguments, showing default values, etc. See argparse_parser.py for our existing solution.
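
The dict-to-string-args round trip criticized above can be illustrated with plain argparse (the argument names here are made up for the example):

```python
import argparse

def dict_to_argv(arg_dict):
    """Flatten {'task_title': 'x', 'num_assignments': 3} into CLI-style
    string args, the conversion step the issue describes as awkward."""
    argv = []
    for key, value in arg_dict.items():
        argv.append(f"--{key.replace('_', '-')}")
        argv.append(str(value))
    return argv

# Illustrative parser; a real launch script would register many more args.
parser = argparse.ArgumentParser()
parser.add_argument("--task-title")
parser.add_argument("--num-assignments", type=int)

args = parser.parse_args(dict_to_argv({"task_title": "demo", "num_assignments": 3}))
```

Every value is stringified and then re-parsed by type converters, which is exactly the kind of indirection the wishlist item about moving off ArgumentParser would remove.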

[TODO Roundup] Improve Operator Flow

Summary

Right now a lot of the functionality of the Operator class has been written 'to get it working' but hasn't been refactored into a useful API for users. One such example of this is the process for waiting for an operator to finish a task run (and dealing with cleanup), which exists in every example file.

It would be useful to make Operator a little more robust and fleshed out by providing a method to suspend a thread until task runs are finished, and to update the existing parse_and_launch_run method to clean up after itself if it fails to initialize (+ logs #93).

Requirements

  • Implement a wait_for_task_runs method (that accepts an argument for a log rate to visibly show a message about tasks that are still waiting) that replicates the behavior currently available in example files
  • improve the wait_for_task_runs method to have more robust error handling that passes through a few stages:
    • On an Exception, it displays the exception context and asks the human if they want to shut down or ignore the exception (with a simple (y)/n input).
    • On an interrupt, it begins the shutdown process, first telling the user how many active tasks there are right now. At this point it asks if we want to interrupt those active tasks or wait for them to complete (also with a simple (y)/n input).
    • Following this interrupt, suspends the user's ability to ctrl-C during the rest of the shutdown process to ensure that the architect, supervisor, and operator are properly shut down.
  • Update the parse_and_launch_run method(s) to clean up after themselves if they run into a failure during the setup of a task.
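
A rough sketch of what a wait_for_task_runs helper might look like, assuming only that the operator exposes a way to list running task runs and a shutdown() method (the interactive y/n prompts described above are omitted for brevity):

```python
import time

def wait_for_task_runs(operator, log_rate=30, poll_interval=1, sleep=time.sleep):
    """Block until no task runs remain, printing a waiting message every
    `log_rate` seconds. `operator` is assumed to expose
    get_running_task_runs() and shutdown(); this is a sketch, not the
    actual Operator API."""
    waited = 0.0
    last_log = 0.0
    try:
        while True:
            remaining = operator.get_running_task_runs()
            if not remaining:
                return
            if waited - last_log >= log_rate:
                print(f"Still waiting on {len(remaining)} task run(s)...")
                last_log = waited
            sleep(poll_interval)
            waited += poll_interval
    except KeyboardInterrupt:
        # First stage of the interrupt flow: report, then shut down.
        remaining = operator.get_running_task_runs()
        print(f"Interrupted with {len(remaining)} active run(s); shutting down.")
        operator.shutdown()
```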

HIT reduction

Hi, I was developing a two-worker chat task with Mephisto. When I tested it on the MTurk sandbox, I found that as soon as a user accepts the task, the number of available HITs decreases immediately. If the user does not finish the task, when will the HIT be added back?

My second question is: in a two-worker task, two workers need to join to trigger the task. If one user joins early and leaves, and then another worker joins, what will happen? Can we set a time-interval constraint between two workers joining the task? When I deployed it on MTurk, I found the task was triggered to begin but no data was collected. I think that was caused by the first worker leaving, since the conversation should be started by the first worker.

Thanks for your help

Installation issue with setuptools package

Related to the pypa/setuptools#2353:

After cloning the repo and running pip install -e ., I'm getting a setuptools error (stack trace below). Solved (at least for the time being) by removing the pyproject.toml. Guessing this is a setuptools version issue?

    Installing collected packages: mephisto
      Running setup.py develop for mephisto
        ERROR: Command errored out with exit status 1:
         command: /private/home/ccross/.conda/envs/conda_parlai/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/home/ccross/ParlAI/Mephisto/setup.py'"'"'; __file__='"'"'/private/home/ccross/ParlAI/Mephisto/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps
             cwd: /private/home/ccross/ParlAI/Mephisto/
        Complete output (3 lines):
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
        ModuleNotFoundError: No module named 'setuptools'
        ----------------------------------------
    ERROR: Command errored out with exit status 1: /private/home/ccross/.conda/envs/conda_parlai/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/home/ccross/ParlAI/Mephisto/setup.py'"'"'; __file__='"'"'/private/home/ccross/ParlAI/Mephisto/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps
    Check the logs for full command output.

[RFC][Draft] First-time User Experience

I think the onboarding experience for Mephisto will be really important in making sure this product sticks and that we maintain a high retention rate.

This RFC is an attempt to gather some initial thoughts on optimizing for the NUX (new user experience) aka first-time user experience and align on proposed action items.

I pose some questions and some answers below, the latter of which is not meant to be exhaustive.

1. The Installation Experience

The question

What should be the method of delivery for this project?

Some answers

  • Option 1: The user can download a package, e.g. pip install mephisto which can give them access to the interface/tool to use (via CLI command).
    • In this case upstream updates can be incorporated by downloading the updated version of the python package.
  • Option 2: The user can clone the repo and use that to access the tools provided within the repository (e.g. similar to projects like ParlAI).
    • In this case upstream updates must be managed by pulling from the git repo periodically. If the user has made some changes, they must manage the merge conflicts.

2. The Quick-Start Experience

The question

How can we empower users to get immediate value? How can we make the user feel empowered and accomplished with the tool, with minimal overhead? How can we dazzle the user with the product's potential?

Some answers

  • Have a task gallery available that shows common templates that already exist, selling on: ease of creating new templates, tools around inspecting templates, diversity of templates (this last one will grow over time and will be a long-term payoff).
  • Create a quick-start guide with a dead-simple use-case, e.g. start a static task locally using a predefined task template, and gather a result.
  • Provide an incrementally more advanced use-case of having a dynamic task.
  • Take a Documentation-Driven Development approach to ensure that we are mindful of the learning curve of features as we build them out.

3. Nailing Down the Touchpoints

The question

What are different opportunities for users to formulate a perception about our product? How can we make sure we win in each of those areas?

Some answers

  1. Github README.md
  2. Website?
  3. Quick Start Guide
  4. Documentation
  5. Github issues

4. The API Experience

The question

How can we ensure that we are being intentional & thoughtful about API design decisions?

Some answers

Properties for agent.observe()

What properties can we use in the dict passed to agent.observe()? From examples/parlai_chat_task_demo/demo_worlds.py, we know there are id, text, task_data, respond_with_form, type, question, and choices. Besides those, are there any other properties?

Which files should I modify if I want to add a new property?

Automated testing on Github

Summary

As in the title, it would be nice to get test signal on uploaded PRs. Ideally, we'd be able to configure it to only run relevant tests (ones that import changed files), but getting any at all is a start. We'd also love to get coverage as well (Perhaps using codecov?).

Add an operational script to synchronize dependencies in package.json for examples/ folder

Problem:

The packages/ folder contains two npm packages that many of the projects in the examples/ directory use: mephisto-task and bootstrap-chat.

When we introduce a new feature or a bugfix to one of those packages, it's important to make sure that we update the version number in the package.json of the projects in the examples/ directory as well.

Solution:

Add a script to the scripts/ folder that allows a project maintainer to set the version number of the mephisto-task and bootstrap-chat packages in all the examples/*/package.json files.

Implementation:

Largely up to the contributor. You may want to implement something similar to the existing script at https://github.com/facebookresearch/Mephisto/blob/master/scripts/update_js_deps.sh which contains useful code that traverses through all the package.json files to update security deps.

Example usage:

scripts/sync_package_version.sh mephisto-task 1.0.13

scripts/sync_package_version.sh bootstrap-chat 1.0.7

Bonus points for factoring out the part of update_js_deps.sh that does the traversing and runs a user-provided script on each package.json, and using it to (1) re-implement the existing update-deps functionality as-is, and (2) set the explicit package version numbers as described in this issue.

Example usage:

$ scripts/traverse_package_dirs.sh 'npm install mephisto-task@1.0.13' --scope examples/ 
# the idea of the --scope flag here is to make sure that this only runs on files in a certain directory

$ scripts/traverse_package_dirs.sh 'update_js_deps.sh'
# this involves removing the traverse out of the current update_js_deps file and moving it into traverse_package_dirs.sh
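
For comparison, the same version-sync logic can be sketched in a few lines of Python (the function name and the assumption that versions live under "dependencies" are based on this issue, not an existing script):

```python
import json
from pathlib import Path

def sync_package_version(examples_dir, package_name, version):
    """Set `package_name` to `version` in every <examples_dir>/*/package.json
    that depends on it, returning the list of files updated."""
    updated = []
    for pkg_json in Path(examples_dir).glob("*/package.json"):
        data = json.loads(pkg_json.read_text())
        deps = data.get("dependencies", {})
        if package_name in deps:
            deps[package_name] = version
            pkg_json.write_text(json.dumps(data, indent=2) + "\n")
            updated.append(str(pkg_json))
    return updated
```

Usage would mirror the shell examples above, e.g. sync_package_version("examples", "mephisto-task", "1.0.13").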

[Feature Request] Argparser print args

It would be helpful if the argparser printed the args that were parsed as a sanity check before launching the task, especially as in some of the example run.py files, many of the args are hard coded.
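
As a sketch of the requested sanity check, vars() on a parsed argparse namespace already exposes everything needed, defaults included (the argument names below are illustrative):

```python
import argparse

# Illustrative arguments; a real run.py would register its own.
parser = argparse.ArgumentParser()
parser.add_argument("--num-tasks", type=int, default=5)
parser.add_argument("--sandbox", action="store_true")
args = parser.parse_args(["--num-tasks", "10"])

# Print every parsed arg (including untouched defaults) before launching.
for name, value in sorted(vars(args).items()):
    print(f"{name} = {value}")
```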

Stale TODO in `data/` dir README

The TODO here seems stale now that we provide the option to specify a data dir via the core.main_data_dir property in ~/.mephisto/config.yml. Perhaps we can update the README to mention that?

[TODO Roundup] Utils cleanup

Summary

Mephisto doesn't have a clear internal utility structure for code and functions that exist outside of the main functionality. We can consider two primary divisions of utilities:

  1. Those that are useful for end-users to be able to do special functionality inside of their own scripts and classes
  2. Those that are useful for Mephisto internals and aren't meant to be used by the majority of users.

I propose splitting these up into tools and utils respectively. Prior to 1.0, it would be relevant to do a pass through the codebase to find functionality that belongs in one of these two, and to make the move.

[TODO Roundup] Improve example starting flow

Summary

The existing examples have a few issues with running on their first launch. As of now, if you want to launch on sandbox and see how that works, you need to manually update constants in the script, and uncomment some lines to ensure you can create a requester, potentially modify the input arguments, etc.

Ideally the arguments and location could be configured on the command line, and there would be no need to create a requester by uncommenting.

Requirements

  • Update the example scripts to be able to set their flags (like USE_LOCAL) using command line flags.
  • Update the example scripts to be able to set the arguments actually passed to the operator using command line flags. (likely after #94 is completed)
  • Create utility functions for forcing the retrieval of a requester by provider type. This flow would involve trying to query for any of them (defaulting to the most recently created) but asking the user to create the requester otherwise (see the flow for getting required requester arguments and registering a new requester from the frontend for an example).
  • Add missing configuration options to some examples.

Stretch

  • Add native JSON parsing for initial task data.

[TODO Roundup] Result Management Functionality

Summary

The mephisto DataBrowser class is just getting started, and needs to be extended with a bunch of common use cases for browsing data.

  • Query by worker
  • Find incomplete assignments from a task run
  • Query workers by qualification #591
  • Find onboarding task for agent #600
  • Find task run details
  • ...
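
Two of the listed queries can be sketched against plain unit records (the field names and status strings here are assumptions, not the real MephistoDataBrowser schema):

```python
def units_for_worker(units, worker_id):
    """Query by worker: every unit a given worker has worked on."""
    return [u for u in units if u["worker_id"] == worker_id]

def incomplete_assignments(units, task_run_id):
    """Find units from a task run that never reached a completed state."""
    return [
        u for u in units
        if u["task_run_id"] == task_run_id and u["status"] != "completed"
    ]
```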

Using MTurk qualifications (age, country, master, etc)

Hi, thanks for the very cool repo!

I am working on speech synthesis and am trying to use Mephisto for Mean Opinion Score (MOS) tests.
I was able to follow the examples to create a MOS task (maybe I would submit a PR later to add it to examples?).
The thing I am not able to do is set qualifications for turkers.

When I use the mturk web-ui it allows me to set qualifications, specifically:

  • Country of origin
  • Age
  • Master turkers
  • Etc

I thought that a natural place to look for such qualifications is the conf file, but I didn't find it in the examples. Could you point me to how it's done, please?

Thanks!

Core `mephisto` directory structure could use work

The current directory structure doesn't have as much clear subdivision as it probably should, likely due to natural shifting of the design. Right now it contains:

- client
   - <apis, clis, python server code>
- core
   - python classes that are any number of steps above the data model
- data_model
   - <elements of the data model> 
   - <some configuration for the data model> 
   - <base forms of the core abstractions>
- providers
   - <provider implementations>
- scripts
   - <arbitrary helper scripts>
- server
   - architects
      - <implementations of the architect class>
   - blueprints
      - <blueprint implementations>
   - channels
      - <Channels for architects to use>
- tasks
   - <empty suggested directory that people can use as a workspace>
- utils
   - file of utilities for people to use in scripts

At the moment the contents of providers and server should likely better display the abstractions. webapp is mostly related to the client, and the contents of data_model, core, and utils don't fully follow the high level architecture design laid out in our main doc.

I propose the following setup:

- client
   - <apis, clis>
   - full_app
      - <python server for the full application>
      - webapp
         - the webapp files for specific frontends
    - review_app
       - ...
    # Shared frontend code between views should exist in `packages` at the top level
- data_model
   - <only data model elements as described in the architecture document>
- abstractions
   - <Blueprint>
   - <Architect>
   - <CrowdProvider>
   - blueprints
      - <implementations of blueprints>
   - architects
      - channels
         - <implementations of channels>
      - <implementations of architects>
   - providers
      - <implementations of crowd providers>
- operations
   - <classes that implement parts of the operations layer, as described by the architecture docs>
- tools
   - <classes that operate on the Mephisto data model, but not as operations, such as the MephistoDataBrowser, and script utils>
- scripts
   - <arbitrary helper scripts>

To make this transition, I'd suggest we add a deprecation notice at v0.2 to the old imports, having them point to the new locations (and do a pass-through import), and then remove them in v0.3.
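
The proposed pass-through deprecation shim could follow the standard pattern sketched here (module and class names are illustrative, condensed into one runnable file; in practice the class would live in the new module and the shim at the old path):

```python
import warnings

# Stand-in for the class at its NEW location, e.g. abstractions/blueprint.py.
class Blueprint:
    pass

# What the shim left behind at the OLD import path would do: emit a
# deprecation notice, then pass through to the new location.
def deprecated_import():
    warnings.warn(
        "this import path is deprecated and will be removed in v0.3; "
        "import from the new location instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return Blueprint

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    cls = deprecated_import()
```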

Open to discussion on what this might look like or if there are better ideas!

DoneButton disappears after episode done

I've received some emails saying that turkers are unable to click the Done with this HIT button in ParlAI chat. cmd:
python ~jingxu23/ParlAI/parlai_internal/mturk/tasks/reverse_persona/parlai_chat_task_demo/parlai_test_script.py
The DoneButton flashes for just a couple of seconds right after episode_done() = True and then disappears.
It appears that it might be related to PR #366.

The above is only my hypothesis; I'm not sure it's the actual cause. My temporary way of fixing it is modifying https://github.com/facebookresearch/Mephisto/blob/master/mephisto/operations/supervisor.py#L801-L806 to the following:

            if status != db_status:
                if status == AgentState.STATUS_COMPLETED:
                    continue
                if status != AgentState.STATUS_DISCONNECT:
                    # Stale or reconnect, send a status update
                    self._send_status_update(self.agents[agent_id])
                    continue  # Only DISCONNECT can be marked remotely, rest are mismatch
                agent.update_status(status)

which renders the DoneButton again, and I can then click on it.
Not sure if it's correct, though.

Bug when running parlai_chat_task_demo on MTurk sandbox

We use the latest Mephisto.
This is the command we use: python parlai_test_script.py conf=onboarding_example mephisto/architect=heroku mephisto/provider=mturk_sandbox. We use this command because the command in the quickstart reports an error.

Then we get the following error:

[09-21 16:32:15] p26949 {operator.py:254} ERROR - Encountered error while launching run, shutting down
Traceback (most recent call last):
File "/home/tao/Mephisto/mephisto/core/operator.py", line 242, in validate_and_run_config_or_die
provider.setup_resources_for_task_run(task_run, run_config, task_url)
File "/home/tao/Mephisto/mephisto/providers/mturk/mturk_provider.py", line 107, in setup_resources_for_task_run
for qualification in task_args.get("qualifications", []):
NameError: name 'task_args' is not defined
[2020-09-21 16:32:15,742][mephisto.core.operator][ERROR] - Encountered error while launching run, shutting down
Traceback (most recent call last):
File "/home/tao/Mephisto/mephisto/core/operator.py", line 242, in validate_and_run_config_or_die
provider.setup_resources_for_task_run(task_run, run_config, task_url)
File "/home/tao/Mephisto/mephisto/providers/mturk/mturk_provider.py", line 107, in setup_resources_for_task_run
for qualification in task_args.get("qualifications", []):
NameError: name 'task_args' is not defined
Heroku: Deleting server: tao-parlai-chat-example-6-cc11
Destroying tao-parlai-chat-example-6-cc11 (including all add-ons)... done
[09-21 16:32:23] p26949 {operator.py:352} ERROR - Ran into error while launching run:
Traceback (most recent call last):
File "/home/tao/Mephisto/mephisto/core/operator.py", line 349, in validate_and_run_config
run_config=run_config, shared_state=shared_state
File "/home/tao/Mephisto/mephisto/core/operator.py", line 263, in validate_and_run_config_or_die
raise e
File "/home/tao/Mephisto/mephisto/core/operator.py", line 242, in validate_and_run_config_or_die
provider.setup_resources_for_task_run(task_run, run_config, task_url)
File "/home/tao/Mephisto/mephisto/providers/mturk/mturk_provider.py", line 107, in setup_resources_for_task_run
for qualification in task_args.get("qualifications", []):
NameError: name 'task_args' is not defined
[2020-09-21 16:32:23,377][mephisto.core.operator][ERROR] - Ran into error while launching run:
Traceback (most recent call last):
File "/home/tao/Mephisto/mephisto/core/operator.py", line 349, in validate_and_run_config
run_config=run_config, shared_state=shared_state
File "/home/tao/Mephisto/mephisto/core/operator.py", line 263, in validate_and_run_config_or_die
raise e
File "/home/tao/Mephisto/mephisto/core/operator.py", line 242, in validate_and_run_config_or_die
provider.setup_resources_for_task_run(task_run, run_config, task_url)
File "/home/tao/Mephisto/mephisto/providers/mturk/mturk_provider.py", line 107, in setup_resources_for_task_run
for qualification in task_args.get("qualifications", []):
NameError: name 'task_args' is not defined
[09-21 16:32:23] p26949 {operator.py:302} INFO - operator shutting down
[2020-09-21 16:32:23,378][mephisto.core.operator][INFO] - operator shutting down

It looks like task_args is not defined in mturk_provider.py.

botocore.errorfactory.RequestError

Hi, I tried to deploy a formal task on Amazon MTurk, but the shell presents the following error message:
botocore.errorfactory.RequestError: An error occurred (RequestError) when calling the CreateHITWithHITType operation: This Requester has insufficient funds in their account to complete this transaction. Please visit https://requester.mturk.com/prepayments/new to purchase Prepaid HITs.
I think this is a log from the Amazon API. My account has AWS billing turned on, so I can't purchase Prepaid HITs anymore. Does Mephisto provide any setting to choose between Prepaid HITs and AWS billing?

deployment of parlai_chat_task_demo on mturk sandbox

Hi, I was trying to build a chat data collection task based on the parlai_chat_task_demo. My task works normally when deployed locally. However, when I deployed it on the MTurk sandbox, the workers couldn't connect to the server correctly (I registered two MTurk sandbox accounts). I also tried the parlai_chat_task_demo example without any modification, but I still get the same error. The screenshots show the page in the browser and the command I used in the shell. Would you kindly provide any suggestions on this case? Thanks for your help.
Screenshot from 2020-11-02 15-45-30

Screenshot from 2020-11-02 15-33-31

[RFC] Using Poetry for package management

https://poetry.eustace.io/

Among other things, poetry also 1) handles the virtualenv for you, 2) creates a hashed lockfile in case you need reproducible builds, and 3) makes contribution easier, since you only have to run "poetry install" after cloning a repo and are ready to go.
- https://stackoverflow.com/a/57676367

I built Poetry because I wanted the one tool to manage my Python projects from start to finish. I wanted something reliable and intuitive that the community could use and enjoy.
- Sébastien Eustace

Uses the pyproject.toml standard, see PEP 518

Thoughts?

6/1 Laundry List

Web UI

  • Denote required fields in the UI when launching tasks
  • Hide the mock architect in the UI since it is only used for testing purposes
  • Should we hide the mock requester account as well? Should we add a local requester instead?
  • Add a better file picker in the UI (but for this we need to have the back-end be able to denote an arg as type file)
  • A task failed because it was run with a local architect w/ an mturk requester. mTurk threw an error because the URL was localhost. This task however still shows up in the launch view. Perhaps change the task to a different state and expire all units?

CLI

  • Make mephisto check auto-register the mock requester if one was created
  • Typecast registered (issue: Mock_REQUESTER was saying registered is 0 instead of False)
  • Mock requester seems to be unregisterable via mock_requester.py. Should we move the registering to mephisto check?
  • Was unable to force quit a task. Kept getting the message: Waiting on 1 task runs, Ctrl-C ONCE to FORCE QUIT. Repro: (blueprint: static_react_task, architect: local, requester: mock)

ParlAI Chat Task Demo error in frontend

Hi,

I tried to run ParlAI Chat Task Demo on local server using python parlai_test_script.py. There is no error in the python logs.

When I open localhost:3000/?worker_id=x&assignment_id=1, the frontend doesn't load. The console of Google Chrome shows the following error (image attached). Cannot read property 'Provider' of undefined. I'm guessing MephistoContext is undefined but I'm not sure how to confirm or fix this.

Screen Shot 2020-12-03 at 7 34 53 PM

Better CLI output

It would be great to think about Mephisto's output from a user-first perspective. Here's a mockup of what a friendlier output stream could look like. In this case, I move away from just using logging as output, as mentioned in #386, and focus on more first-class support for CLI output.

GIF:

rich_print

Still image:

Screen Shot 2021-02-03 at 4 32 52 PM


Mockup source:

Requires the rich package, pip install rich:

from time import sleep
from rich.console import Console
from rich.prompt import Confirm

console = Console()
DELAY_OFF = 0

def print_summary(task):
    from rich.table import Table, Column
    from rich import box
    import locale
    locale.setlocale(locale.LC_ALL, 'en_US.UTF-8') 

    table = Table("property", Column("value", style="magenta bold"), "description", title="Task Launch Preview", box=box.ROUNDED, expand=True, show_lines=True)
    # table = Table("task_name", "blueprint", "architect", "crowd provider", "requester", title="Task Launch Preview")
    # table.add_row("my_custom_task", "simple_static_task", "local", "mturk_sandbox", "demo")
    table.add_row("task_name", "my_custom_task", "ID used to later query all HITs launched in this run.\nConfigure via: task.task_name.")
    table.add_row("blueprint", "simple_static_task", "The base template for your task. Can be found in the folder: mephisto/abstractions/blueprints.")
    table.add_row("architect", "local", "Where this task will be hosted/launched from. Configure via: mephisto/architect=...")
    table.add_row("crowd provider", "mturk_sandbox", "The underlying platform that supplies the crowdworkers (e.g. mturk, mturk_sandbox).")
    table.add_row("requester", "demo", "The account/budget to run the task through. Configure via: mephisto.provider.requester_name=... Or use the mephisto CLI for more info: 'mephisto requesters'")


    console.print(table)

    billing_table = Table("num HITs", "price", "estimated total", title="Estimated Costs")
    billing_table.add_row(str(task['hits']), locale.currency(0.02), f"[bold green]{locale.currency(task['hits'] * 0.02)}[/bold green]")
    console.print(billing_table)

def log(category, message):
    return f"[dim]{category}[/dim] - {message}"

def prepare_router():
    console.log(log("router/install", "npm install (hidden, show with 'build.show_npm_output: true')"))
    sleep(1 * DELAY_OFF)
    console.log(log("router/install", "npm build (hidden, show with 'build.show_npm_output: true')"))
    sleep(2 * DELAY_OFF)
    console.log(log("router/install", "completed npm build"))
    console.log(log("operator/launch", "created a task run under name: 'react-static-task-example'"))

def prepare_architect():
    console.log(log("architect/deploy", "building payload"))
    sleep(1 * DELAY_OFF)
    console.log(log("architect/deploy", "sending payload to final destination"))
    sleep(2 * DELAY_OFF)
    console.log(log("architect/deploy", "deployed successfully"))





def launch_tasks(console):
    from rich.emoji import Emoji
    preview_url = "http://localhost:3000"
    assignment_url = "http://localhost:3000/?worker_id=x&assignment_id="
    for job in range(3):
        # console.log(f":rocket: Launched Assignment {job+1}. Preview: '{preview_url}' | Assignment: '{assignment_url}{job+1}'")
        console.log(log(f":rocket: assignment/launch", f"assignment ID: {job+1}\nPreview: '{preview_url}'\nAssignment: '{assignment_url}{job+1}'"))
        sleep(0.1)
    sleep(2 * DELAY_OFF)


def start():
    task = { "hits": 500}
    print_summary(task)
    confirm = Confirm.ask(f"[bold]Are you sure you would like to proceed launching {task['hits']} HITs?[/bold]")
    if not confirm:
        return

    with console.status("[bold green]Setting up router...") as status:
        prepare_router()
    with console.status("[bold green]Invoking architect...") as status:
        prepare_architect()


    from rich.progress import Progress
    with Progress(transient=True) as progress:
        task = progress.add_task("[bold green]Launching tasks...", total=3)
        sleep(1)

        launch_tasks(progress.console)
        # simulate completion:
        for job in range(3):
            sleep(1.5)
            if job == 1:
                progress.console.log(log(f"[yellow]:hourglass: assignment/update", f"Assignment {job+1} timed-out."))
            else:
                progress.console.log(log(f"[green]:inbox_tray: assignment/update", f"Assignment {job+1} received."))
            progress.advance(task)

        sleep(1)



    console.log("Finished launching 3 tasks.")

    # with console.status("[bold green]Launching tasks...", spinner="earth") as status:
    #     launch_tasks()


start()

[TODO Roundup] Logging overhaul

Summary

Mephisto doesn't yet have a good system for logging information. As such, we'd like to add loggers via the Python Logger class. At a high level, modules should each have their own loggers. Initially we should be able to configure the log level across the whole application (from DEBUG to CRITICAL), and should be able to write both to the terminal and to a file. NOTE - the logger by default should be configured to use NullHandler, and Mephisto should have a setting to toggle this to StreamHandler (for writing to the terminal) or FileHandler (for writing to a file).

Some helpful information for logging best practices can be found across the web (1, 2, 3...)
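The NullHandler-by-default pattern with a toggle could look roughly like this (a minimal sketch; `get_logger` and `set_log_handler` are hypothetical helper names, not Mephisto's actual API):

```python
import logging

def get_logger(name: str, level: int = logging.INFO) -> logging.Logger:
    """Per-module logger that is silent by default (NullHandler)."""
    logger = logging.getLogger(name)
    logger.setLevel(level)
    if not logger.handlers:
        logger.addHandler(logging.NullHandler())
    return logger

def set_log_handler(logger: logging.Logger, mode: str,
                    filename: str = "mephisto.log") -> None:
    """Swap in a StreamHandler (terminal), FileHandler (file), or NullHandler."""
    for old in list(logger.handlers):
        logger.removeHandler(old)
    if mode == "terminal":
        handler: logging.Handler = logging.StreamHandler()
    elif mode == "file":
        handler = logging.FileHandler(filename)
    else:
        handler = logging.NullHandler()
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
    logger.addHandler(handler)

# Silent by default; a single setting flips all output on.
logger = get_logger("mephisto.core.operator")
logger.debug("swallowed by the NullHandler")
set_log_handler(logger, "terminal")
logger.info("now visible on the terminal")
```

Each module would call `get_logger(__name__)` once at import time, and the application-wide toggle would walk the relevant loggers and call `set_log_handler` on them.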

Requirements:

  • Create a core.logger class for Mephisto to store any initializers and helper methods required for creating loggers.
  • Add a logger object to every class in the core, data_model, server, and providers modules.
  • Replace existing print statements with logs, and add additional marked logs.
  • Add logging to unintended except blocks

Missing logs

  • Do a check to see if the database schema is not as expected, and log this to the user.
  • Update the wrapper around the call to heroku functions to only surface the "You have too many apps" message if there are actually too many applications, otherwise surface the contained error. This can be tested by creating a heroku account, launching 5 apps without shutting them down, and then launching another.
  • Do proper logging around failed MTurk Approve and Reject calls, described here

Wishlist

  • Peruse the flow of the Supervisor, Operator, and TaskLauncher classes to see if there are important control flow operations that might be improved with logging.
  • Add logging to the data model for similar important control flow paths, things that might require inspection. (Essentially, while debugging other parts of Mephisto, when you find the important variable to introspect, add that to DEBUG logging. Important control flow goes into INFO, and then errors above that).
  • Update the logging in print_run_details to actually have useful details about the progress of tasks currently being run.
  • Write debug level logging to file, and provide a utility for cleaning up old Mephisto logs.

Shared information between two human workers in one conversation

I'm developing a human-human chat collection task, which needs to present a shared message to both workers before the conversation. There are two problems I don't know how to solve:

  1. In one chat task, two human workers need to share the same message before the conversation.
  2. Shared messages are different across tasks.

I plan to present this message in the MTurkOnboardWorld class. But it looks like this class needs an assignment_id to solve the above problems. I don't know where I can get the assignment_id and pass it to the MTurkOnboardWorld class.

Is there any way to solve these problems?

Add front-end error logging support to the Web UI

Problem

Currently, when an error occurs on a task, the webapp silently fails. The only way the task author is usually notified is via angry worker emails. The task could be running for several hours before the author knows something went wrong. This could result in wasted time & money.

We'd like to be able to proactively catch JS errors on the front-end and log caught errors to the back-end.

Proposal

There are three cases where I can see errors occurring:

1. Errors within a Mephisto context

For these cases, I think we should add a helper utils component called <ErrorBoundary /> (more info here) and expose a handleFatalError method from the useMephistoTask hook:

import { useMephistoTask } from "mephisto-task"
import { ErrorBoundary } from "mephisto-task/utils";

function MyApp() {

  const { handleFatalError, /* ... */  } = useMephistoTask( /* ... */);

  return (
    <ErrorBoundary onError={handleFatalError}>
      <AppThatThrowsException />
    </ErrorBoundary>
  );
}

handleFatalError() will allow users to imperatively handle failures, while the <ErrorBoundary /> util component will allow users to declaratively add the functionality.

Note: There are cases where error boundaries don't get triggered. Luckily I think Option 3 below might be able to handle those cases however.

2. Errors in Mephisto library code

This should require no implementation work on the user's side. Mephisto should either explicitly call handleFatalError internally or let the error bubble up and get handled by the global catch-all mentioned below.

3. Global error-catch all, errors outside of Mephisto context & maybe even React

At this point, we have no knowledge that a connection to Mephisto even exists. Perhaps error handling logic in these cases should be implemented in wrap_crowd_source.js per each provider. We can also add a global event handler such as window.onerror = function() { /* ... */ }

For the error logging mechanism, perhaps we can just use handleSubmitToProvider:

function handleSubmitToProvider(task_data) {

Discussion

To log the error message on the server side, one thing we could do is repurpose handleSubmit. We can create a similar handler called handleFatalError that essentially just calls handleSubmit({ error: errorMessage }).

With this scheme, on the mephisto side, errors are just logged like task submission results. This may or may not be desirable.

Although it is easy in that it leverages the existing data schema, we will then need to add some mechanism for users to filter out these error entries from their dataset.

One concern I have is with window.onerror. Perhaps a small, silly error occurs (maybe a decorative image asset failed to load); in that case it would seem a little extreme to "auto-submit" the task by calling handleSubmitToProvider in the onerror handler. A workaround could be a separate endpoint that can handle being called multiple times over the lifetime of a task; then we could still allow the app to stay up without submission in these cases. However, this would involve building out additional stuff on the back-end.

Any other thoughts on how to persist the errors @JackUrb? Does this approach seem reasonable?

Testing

As a baseline test, the following types of errors should be handled in static_react_task:

onClick={() => onSubmit({ rating: "bad" })}

Here we can replace () => onSubmit({ rating: "bad" }) with () => { throw new Error("test"); } to test Error type 1 above.

Here, for the first line in <MainApp />, we can add throw new Error("test"); this should ideally be caught by Error type 3 above.

How to dynamically publish HITs when previous HITs have been returned or submitted?

Hi,

I have been trying the Static React Task. I found that the number of published HITs is decided by the number of items in shared_state.static_task_data. I was wondering whether there is a way to dynamically publish HITs rather than posting all of them at the beginning. I'm asking because if we post all the HITs (say 1000) at the beginning, there might be too many concurrent working assignments, and the backend might not be able to handle them.

Ideally, it would be better if we could also dynamically publish HITs based on the previously collected HITs.

Sorry to bother you if this is already covered somewhere in the docs or examples.

Nicer looking logging

It would be nice if Mephisto could have some better formatted logging.

I came across this library, which has a drop-in logging handler for formatting:

https://github.com/willmcgugan/rich#logging-handler

image

I tried to implement this in logger_core.py and was able to get it to work to some extent, but ran into issues: certain log outputs were getting left out, and others were getting duplicated (printed both richly and plainly), so maybe someone else can take a look into this. Jack mentioned that the 1.1 release of Hydra should also make logging a bit easier.

Path changes in cleanup scripts

In mephisto/scripts/mturk/cleanup.py: the imports on lines 11-15 are broken due to the move from core and providers into abstractions - I can also submit a PR if that's easier!

Some examples don't run with newer versions of NPM

The cause of this issue is npm install failing on npm v7. See output below.

See the Peer Dependencies section here: https://blog.npmjs.org/post/626173315965468672/npm-v7-series-beta-release-and-semver-major

This causes npm install to fail in certain cases. For example:

Potential Fixes

Output

$ cd mephisto/examples/simple_static_task && python static_test_script.py
[2021-01-27 04:42:55,653][mephisto.operations.operator][INFO] - Creating a task run under task name: html-static-task-example

added 123 packages, and audited 124 packages in 8s

3 high severity vulnerabilities

To address all issues, run:
  npm audit fix --force

Run `npm audit` for details.
[2021-01-27 04:43:03,347][sh.command][INFO] - <Command '/bin/rm -rf /mephisto/data/data/runs/NO_PROJECT/1/build/router', pid 19>: process started
npm ERR! code ERESOLVE
npm ERR! ERESOLVE unable to resolve dependency tree
npm ERR!
npm ERR! While resolving: [email protected]
npm ERR! Found: [email protected]
npm ERR! node_modules/react
npm ERR!   react@"^16.5.2" from the root project
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer react@"16.13.1" from [email protected]
npm ERR! node_modules/mephisto-task
npm ERR!   mephisto-task@"^1.0.14" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
npm ERR!
npm ERR! See /root/.npm/eresolve-report.txt for a full report.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2021-01-27T04_43_04_367Z-debug.log
[2021-01-27 04:43:04,386][mephisto.operations.operator][ERROR] - Encountered error while launching run, shutting down
Traceback (most recent call last):
  File "/mephisto/mephisto/operations/operator.py", line 210, in validate_and_run_config_or_die
    built_dir = architect.prepare()
  File "/mephisto/mephisto/abstractions/architects/local_architect.py", line 132, in prepare
    self.server_dir = build_router(self.build_dir, self.task_run)
  File "/mephisto/mephisto/abstractions/architects/router/build_router.py", line 89, in build_router
    task_builder.build_in_dir(local_server_directory_path)
  File "/mephisto/mephisto/abstractions/blueprints/static_html_task/static_html_task_builder.py", line 62, in build_in_dir
    self.rebuild_core()
  File "/mephisto/mephisto/abstractions/blueprints/static_html_task/static_html_task_builder.py", line 45, in rebuild_core
    raise Exception(
Exception: please make sure npm is installed, otherwise view the above error for more info.
[2021-01-27 04:43:04,389][mephisto.operations.operator][ERROR] - Could not shut down architect: shutdown called before deploy
Traceback (most recent call last):
  File "/mephisto/mephisto/operations/operator.py", line 210, in validate_and_run_config_or_die
    built_dir = architect.prepare()
  File "/mephisto/mephisto/abstractions/architects/local_architect.py", line 132, in prepare
    self.server_dir = build_router(self.build_dir, self.task_run)
  File "/mephisto/mephisto/abstractions/architects/router/build_router.py", line 89, in build_router
    task_builder.build_in_dir(local_server_directory_path)
  File "/mephisto/mephisto/abstractions/blueprints/static_html_task/static_html_task_builder.py", line 62, in build_in_dir
    self.rebuild_core()
  File "/mephisto/mephisto/abstractions/blueprints/static_html_task/static_html_task_builder.py", line 45, in rebuild_core
    raise Exception(
Exception: please make sure npm is installed, otherwise view the above error for more info.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mephisto/mephisto/operations/operator.py", line 262, in validate_and_run_config_or_die
    architect.shutdown()
  File "/mephisto/mephisto/abstractions/architects/local_architect.py", line 175, in shutdown
    assert self.running_dir is not None, "shutdown called before deploy"
AssertionError: shutdown called before deploy
[2021-01-27 04:43:04,389][mephisto.operations.operator][ERROR] - Ran into error while launching run:
Traceback (most recent call last):
  File "/mephisto/mephisto/operations/operator.py", line 412, in validate_and_run_config
    return self.validate_and_run_config_or_die(
  File "/mephisto/mephisto/operations/operator.py", line 268, in validate_and_run_config_or_die
    raise e
  File "/mephisto/mephisto/operations/operator.py", line 210, in validate_and_run_config_or_die
    built_dir = architect.prepare()
  File "/mephisto/mephisto/abstractions/architects/local_architect.py", line 132, in prepare
    self.server_dir = build_router(self.build_dir, self.task_run)
  File "/mephisto/mephisto/abstractions/architects/router/build_router.py", line 89, in build_router
    task_builder.build_in_dir(local_server_directory_path)
  File "/mephisto/mephisto/abstractions/blueprints/static_html_task/static_html_task_builder.py", line 62, in build_in_dir
    self.rebuild_core()
  File "/mephisto/mephisto/abstractions/blueprints/static_html_task/static_html_task_builder.py", line 45, in rebuild_core
    raise Exception(
Exception: please make sure npm is installed, otherwise view the above error for more info.
[2021-01-27 04:43:04,390][mephisto.operations.operator][INFO] - operator shutting down

Control pairing mechanism in dialogue task

I'm working on a dialogue collection task in which speakers have to play a role (user or wizard). We want a certain group of workers to play the wizard role; for example, a certain set of worker IDs must be the wizard. When we tried this on the sandbox, the pairing mechanism looked random. Can Mephisto control the pairing process in a dialogue task?

[TODO Roundup] Possible optimization

Summary

There are a number of core Mephisto components that could potentially be improved upon but due to time constraints were implemented in a potentially inefficient way. This task keeps track of all of these components (with # TODO(103))

[TODO Roundup] InitializationData Loading

Summary

At the moment data loading into Mephisto tasks is managed with a static list of InitializationData that is constructed at the moment of task launch. This doesn't work well with the idea of tasks that launch additional instances as time proceeds. As such if we want dynamic length tasks, we'll need to get the generator version of InitializationData working. We'll also need to update certain core assumptions about task status to determine the difference between a task that has all of its units complete vs one that just has its currently generated units complete (and more may come later).

Requirements

  • Make TaskLauncher __init__ take an additional argument for max_number_of_concurrent_units. Create a version of launch_units that spawns a background thread to launch the units, and then that thread should rely on a generator that only returns a unit when there are less than the max in the "Launched/assigned" state. (There can be local state added to TaskLauncher to keep track of the units that are currently in the state, such that you only need to check on those rather than all units. It will also be useful to keep a list of currently unlaunched units). This thread should be exited after expire_units is called. Write a test case in the task_launcher_test.py that initializes a task launcher with a cap of 1, and ensure it only spawns 1 task at a time (and that completing that one task causes a second to launch)
  • Get the Generator version of InitializationData working (grep InitializationData to get started on extra context). This can be done by instead creating a thread for TaskLauncher that will be responsible for keeping track of both launch_assignments and launch_units. You can break this up into two subfunctions try_launch_more_assignments and try_launch_more_units, which start being called in the main thread after the corresponding launch_x function is called until expire_units is called. (#188)
  • Update Operator to handle properly shutting down when a task run has been completed. At the moment TaskRun.get_is_completed() is used in _track_and_kill_runs to see when a currently running task has no more units to work on before shutting down. This will no longer work properly if the operator is waiting on an assignment to be generated, as all of the already generated assignments may be completed, and this method will shut down a task that may still have assignments to come. As such, the Operator should be checking the TaskLauncher instead, and should only refer to get_is_completed when the TaskLauncher's assignment generator thread is done. Also, the TaskRun should be updated to only call self.db.update_task_run(self.db_id, is_completed=True) and set the cached __is_completed value after the Operator sets a flag on the TaskRun to note that assignment generation is done.
  • Add the reconnection feature for live tasks, which requires updates to Supervisor and Operator. The general flow change will be that, if task_args.get('requeue_incomplete_tasks') is True, a requeue=True flag is passed to TaskLauncher. When this flag is set, upon initializing the task launcher, the launcher will create an empty requeued_assignment_data array, and wrap the initialization_data_iterator in a generator function that prioritizes returning elements from that array before from the initialization data iterator. Then, in the Supervisor, if in the _launch_and_run_assignment call's finally clause, if the TaskLauncher has an initialized requeued_assignment_data array, TaskLauncher.requeue(assignment.get_assignment_data()) should be called to requeue that assignment. In order for this to work, the Job RecordClass will need to be updated to receive a TaskLauncher as well, and the _launch_and_run_assignment call should receive the job rather than just the TaskRunner.
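The concurrency gating in the first bullet could be sketched roughly as follows. This is an illustration only: `Unit` and `launch_gated_units` are hypothetical names, not Mephisto's actual API, and a semaphore stands in for the "fewer than max in the Launched/assigned state" check:

```python
import threading
import time
from typing import Callable, List

class Unit:
    """Stand-in for a Mephisto unit; complete() is called when work finishes."""
    def __init__(self, idx: int, slots: threading.Semaphore):
        self.idx = idx
        self._slots = slots

    def complete(self) -> None:
        self._slots.release()  # frees a launch slot for the background thread

def launch_gated_units(
    num_units: int,
    max_concurrent: int,
    on_launch: Callable[[Unit], None],
) -> threading.Thread:
    """Launch num_units units from a background thread, blocking whenever
    max_concurrent units are launched but not yet complete."""
    slots = threading.Semaphore(max_concurrent)

    def run() -> None:
        for idx in range(num_units):
            slots.acquire()  # blocks while the concurrency cap is reached
            on_launch(Unit(idx, slots))

    thread = threading.Thread(target=run, daemon=True)
    thread.start()
    return thread

# With a cap of 1, the second unit only launches after the first completes.
launched: List[Unit] = []
launch_gated_units(2, 1, launched.append)
time.sleep(0.2)
assert len(launched) == 1   # second unit is still gated
launched[0].complete()
time.sleep(0.2)
assert len(launched) == 2
```

The real TaskLauncher would track in-flight units via the database rather than a semaphore, but the gating pattern (and the cap-of-1 test case described above) is the same.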

Deploy on our own server

From the current instructions, it looks like we have to deploy the service on Heroku. We have our own server. Can we deploy the service on our own server?

Export error on ErrorBoundary

Unable to import {ErrorBoundary} properly:
Tested the command python Mephisto/examples/static_react_task/run_task.py, which imports it in app.jsx, and got a frontend error:
image

I tried copy-pasting the ErrorBoundary class from utils.js directly into packages/mephisto-task/src/index.js and it works. Probably a problem with the export?
