
linguists-client's Introduction

linguists-client

Responsive user interface for an NL-to-SQL system. Empowers non-technical users with direct database access.

Read the paper

[Image: Linguists_Client screenshot]

Comprehensive conceptual view

[Image: Linguists Project conceptual view diagram]

Developer Setup

  1. clone the repo: git clone https://github.com/SethCram/linguists-client.git
  2. install the JavaScript runtime Node.js v16 and npm, the JavaScript package manager
    1. on Windows (recommended), download Node.js from the official website; the installer bundles npm
    2. on Ubuntu-based distributions
      sudo apt install -y nodejs
      sudo apt install -y npm
    3. on RHEL-based distributions
      sudo yum install -y nodejs
  3. run "npm install" in the root folder to install the project dependencies
    cd linguists-client
    npm install
  4. run npm start in the root folder to get the frontend up and running
    1. if an issue is encountered, try installing nvm and updating to the correct Node.js version:
      curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
      command -v nvm
      If "command -v nvm" doesn't output "nvm", logout and relogin, and then:
      nvm install 16.17.1
  5. make a change to the project and notice how it's immediately reflected in the development server after saving
  6. head over to the backend at https://github.com/SethCram/Linguists-NLP-to-SQL/blob/main/README.md#setup and follow the setup steps

Deployment Instructions (on Linux)

  1. first, set up the backend: backend deployment instructions
  2. install Node.js and npm
    1. on Ubuntu-based distributions
      sudo apt install -y nodejs
      sudo apt install -y npm
    2. on RHEL-based distributions
      sudo yum install -y nodejs
  3. clone the repository git clone https://github.com/SethCram/linguists-client.git
  4. install the project dependencies
    cd linguists-client
    npm install
  5. try running the frontend: sudo npm start
    1. if an issue is encountered, try installing nvm and updating to the correct Node.js version:
      curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
      command -v nvm
      If "command -v nvm" doesn't output "nvm", logout and relogin, and then:
      nvm install 16.17.1
  6. cancel the process with Ctrl-C now that we verified that it launched okay
  7. redirect the server traffic to the web application
    sudo vi /etc/nginx/nginx.conf
    inside the "server" block, point the "root" directive at the build output:
    root /var/www/html;
    
    verify the syntax of the config file is okay and start nginx using it
    sudo nginx -t
    sudo service nginx restart
  8. create the proper folder, copy the production build files into it, and make sure SELinux doesn't interfere
    sudo mkdir -p /var/www/html
    sudo cp -r ./build/* /var/www/html
    restorecon -r /var/www/html
  9. navigate to the public IP address over http (e.g. http://[publicIPAddress]) and the frontend should be visible; alternatively, verify with curl http://[publicIPAddress]
  10. verify that the frontend is connected to the backend on port 8000 by opening the database dropdown and confirming it populates without errors
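A minimal sketch of the nginx server block that steps 7–8 describe. The server_name, the single-page-app fallback, and the /api/ proxy location are assumptions for illustration, not the project's verbatim config:

```nginx
server {
    listen 80;
    server_name _;

    # serve the React production build copied in step 8
    root /var/www/html;
    index index.html;

    # single-page app: fall back to index.html for client-side routes
    location / {
        try_files $uri /index.html;
    }

    # assumption: proxy API calls to the backend on port 8000
    location /api/ {
        proxy_pass http://127.0.0.1:8000;
    }
}
```

After editing, the `sudo nginx -t` / `sudo service nginx restart` commands in step 7 validate and apply the change.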

linguists-client's People

Contributors

sethcram


linguists-client's Issues

[ENHANCEMENT] Create a user management system

Is your feature request related to a problem? Please describe.

  • Yes, there's no way to maintain the frontend once it's running

Describe the solution you'd like

  • Create a user management system for at least an admin

Describe alternatives you've considered

  • Create a user management system for all users of the system

Additional context

  • Would likely require creating an entirely separate API
    • Ideally .NET w/ C# to get experience with it

Development

    • ...

[ENHANCEMENT] Allow removal of database file and folder from backend

Is your feature request related to a problem? Please describe.

  • Yes, erroneous or unneeded uploaded files currently can't be removed without access to the source code

Describe the solution you'd like

  • Allow removal of database file and folder from the backend via an API call

Additional context

  • Shouldn't allow just anybody to delete uploaded db files
  • This is mainly where the user management system mentioned in #3 could be useful
    • a non-logged-in user could query any uploaded database (maybe) (or do nothing at all)
    • a logged-in user can only query and delete the databases they uploaded

Development

  • implement backend file deletion method
    • deletes both the sql and database file with the given name
      • it's possible the sql file doesn't exist if the db file was uploaded directly
      • so throw an error only if neither the db nor the sql file is removed
      • otherwise just log what happened
    • bug found: it didn't throw an error when both files were missing
      • caused by a simple mistake in the boolean logic

[ENHANCEMENT] Change website's color scheme

Is your feature request related to a problem? Please describe.

  • Yes, the color scheme is reused from elsewhere

Describe the solution you'd like

  • a new color scheme that goes well with the website

Additional context

  • should change css variables to different values
  • could potentially also change background image

Development

    • ...

[ENHANCEMENT] Allow file deletion from frontend

Is your feature request related to a problem? Please describe.

  • Yes, the API has a file deletion method but no way to access it on the frontend

Describe the solution you'd like

  • Allow file deletion from frontend via a button

Additional context

  • could place a red X on the right-hand side of the database selection dropdown
    • should only appear with a database selected
    • once clicked, should clear the selected db

Development

  • implement a clickable icon only when a db is selected
    • styled similarly to the file upload btn
    • on the opposite side from the file upload btn
  • implement an "isDeleting" variable during deletion
    • only needed if some deletions take a long time
    • or if the trash-can icon should turn into a loading indicator during this time
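The icon-state logic above can be sketched as a tiny pure helper. The function name and the "spinner"/"trash" labels are made up for illustration; the real component would map these onto React state and styled icons:

```javascript
// Hypothetical helper: the delete icon renders only when a db is
// selected, and shows a loading indicator while a deletion runs.
function deleteIconState(selectedDb, isDeleting) {
  if (!selectedDb) {
    return { visible: false, icon: null };
  }
  return { visible: true, icon: isDeleting ? "spinner" : "trash" };
}
```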

[ENHANCEMENT] Deploy the project onto the cloud

Is your feature request related to a problem? Please describe.

  • Yes, the project must be accessible during EXPO online

Describe the solution you'd like

  • Deploy the project onto the cloud

Additional context

  • AWS Lambda and other AWS services should be free
  • Likely done through deploying a Docker image to AWS ECS from Docker Hub
  • Minimum required specs for the API:
    • 32GB of storage
    • 10GB of RAM

Development

  • Research the proper way to deploy the Docker image API to AWS

    • https://www.youtube.com/watch?v=awFLzy0XwXo
      • Set up an EC2 instance running Ubuntu
      • Converted the .pem key into a .ppk key using PuTTYgen
      • SSH'd into the instance using PuTTY + the .ppk key
      • Installed Docker on the EC2 instance
      • He uploaded his repo via FileZilla + using a key
      • Then he used his docker compose file to launch the container
        • I might have to create a docker compose file
      • Navigated to the live UI using the public DNS name and then ":exposedPortNumber"
        • need to make sure this EC2 instance's security group allows the exposed port through the firewall
        • done by adding a new security rule allowing traffic through that port number in the security group tab
    • https://www.youtube.com/watch?v=YDNSItBN15w
      • Created a Dockerfile, uploaded it to an ECR repo, created an ECS cluster, and created a task to deploy the container
      • Can convert a docker-compose file to an equivalent ECS task definition using container-transform
        • but can't run docker-compose on the CLI of an ECS cluster
        • which I think is why the previous tutorial used an EC2 instance instead of an ECS cluster
      • Ends up launching ECS on an EC2 instance
    • https://www.youtube.com/watch?v=lO2wU2rcGUw
      • ECS is for running multiple containers across EC2 instances
      • ECS cluster = group of EC2 instances that are spun up for us
      • Linux-based containers must run on a Linux server
        • because Docker runs on the host OS
      • no VM in ECS
        • just isolated processes running on the Docker engine
      • all EC2 instances use the same security group
        • possible for multiple Docker containers to be running on one host
        • don't want them trying to listen on the same ports
        • so need to create a custom inbound rule for custom TCP over a dynamic port range
        • will need to map the application to those particular ports
      • the ECS cluster spreads itself across multiple availability zones
      • can manually set the number of ECS instances or use auto scaling
      • the only load balancer suited for ECS is the Application Load Balancer
        • catered towards apps and containers
        • should be internet-facing
        • needs a VPC that supports the ECS cluster
        • in front of the load balancer should be a new security group allowing HTTP port 80 for incoming traffic
        • should remove the default listener and target group
      • Can have different task definitions for different aspects of the app
        • such as one for the frontend, one for the backend
    • https://www.youtube.com/watch?v=SgSnz7kW-Ko
      • could use EC2 or Lambda to host FastAPI
      • FastAPI doesn't support HTTPS on its own
      • could use the public IP directly or the public DNS name
  • try to deploy API to AWS

    • Created an AWS account + linked a credit card
      • but chose the free plan, so hopefully won't be billed
    • created an EC2 instance
      • everyone should be able to access the API
        • would prefer HTTPS enabled since FastAPI displays a UI
        • need HTTP for API access from the website
        • need SSH for installing things on the server + getting it set up
      • running Ubuntu since it's what I used on the local machine
        • Amazon Linux may have been easier to interface with
      • Converted the .pem key into a .ppk key using PuTTYgen
      • SSH'd into the instance using PuTTY + the .ppk key
      • Installed Docker on the EC2 instance
    • Shouldn't need ECS unless more than one container runs at a time
    • Tried manually setting up + running the system on the EC2 instance via SSH:
      • ran out of memory after using only 4.9GB even though 8GB was allocated to the volume
      • Destroyed the instance and created a new one with a 30GB volume
        • since 30GB of EBS is the most the free tier allows
        • was able to reuse the same security group and the same .pem key
        • had to upgrade packages, install make, clone the repo, and install docker.io
    • Pulled the Docker eval image:
      • Had to run "sudo usermod -a -G docker $USER" to run docker
        • adds the unprivileged user to the docker group
        • had around 3GB used, so 27GB left
      • the image has historically taken 24±1 GB
      • the Docker image took up 24.7GB and was successfully retrieved
    • Tried running "make serve" on the EC2 instance with the max free-tier EBS volume of 30GB
      • "make serve" failed because it used all the space left on the device
        • the huggingface model it pulled down was another 2.92GB that I didn't have room for
          • was able to pull only 2.90GB of it
        • why does it need this? not sure
        • used "du -sh" to determine that the repo folder takes up 16GB of space on my local machine
          • it was only able to take up 3.4GB of space on the remote machine since only that much was left
          • the Docker image pull and repo setup took up the rest of the 30GB
        • pulled a fresh repo + ran make serve and it only took up 3.4GB
          • so the AWS server must have needed 20MB more
    • Tried attaching EFS since it offers 5GB of free storage
      • would require its own folder though
      • could potentially attach it and use it as the folder that stores databases
        • but not to provide additional storage during the installation process
        • looked at the size of each folder in the root of the repo using "du -sh ./*"
          • transformers_cache is by far the biggest at 3GB
          • believe the Huggingface model is stored here
      • tried attaching EFS as the transformers_cache folder
        • this will also leave some extra room for databases uploaded to the system
        • created an EFS on the same VPC and subnet as the EC2 instance
        • downloaded the EFS mount helper on the EC2 instance
        • ran the mount cmd it gave me after selecting "attach" in the dashboard
        • after it finished downloading the huggingface model into the EFS folder, the SSH connection hung
        • tried attaching EFS as the transformers_cache folder after that folder was mostly filled with the huggingface model
          • hung again once make serve was run
          • tried connecting via SSH through Windows with -v for verbose output and it didn't really give any important info
            • but that hung too
        • restarting the EC2 instance required a remount of the EFS
          • could automate this
      • Checked each mount's storage via "df -h"
        • also used this to verify that the EFS was mounted
      • gave the EC2 instance 35GB and it still hangs after finishing the huggingface download
        • if it hasn't changed for a couple hours, will reload the instance and see how much space is left
        • throws Docker error 137 because it runs out of RAM
        • needs at least 10GB of RAM to run the API
      • Possibly need nginx to map the port out to the internet?
        • could verify by attempting either
      • So theoretically, the storage problem was overcome by EFS mounting
        • wasn't able to verify since I didn't wait for the OOM Docker error
  • test deploy website

    • notes detailed in logbook
    • did so on AWS using RHEL 9 with yum successfully
    • University servers use Rocky Linux 8 with yum
  • ensure the API + frontend can be deployed onto a fresh Rocky Linux 8 VM

    • credentials are the default admin username for now
    • gave it 10/16GB of RAM, 10/12 CPU cores, 35GB of disk
      • says I don't even have 6 cores, so should only give it 2 like I do for Docker Desktop, I guess
    • https://www.youtube.com/watch?v=lyZafK_CZ0Y
    • was able to run all the deployment instructions verbatim on a Minimal installation
    • but couldn't connect to nginx or FastAPI from the host OS to the VM OS
    • had to set up a port-forwarding rule in VirtualBox to get traffic from host to guest via port 80 for http
    • now http://127.0.0.1/ brings up the nginx page, but uvicorn is still inaccessible through http://127.0.0.1/docs#
    • "nohup" didn't work because it threw an error that it wasn't running in a tty
    • But running make serve & > Enter > bg worked
      • but this won't persist
      • sounds like I need gunicorn
      • even running it in the background fails once the next command is executed
    • had to use "nvm" to install the correct Node.js version
    • the initial frontend connection works but:
      • "Ruleset ignored due to bad selector"
      • "Firefox can't establish a connection to the server at ws://127.0.0.1:3000/ws."
      • and then all the API calls start failing
    • even while the backend is running like normal
      • looks like the frontend still had trouble connecting with the backend regardless
      • can't verify this is the case locally since there's no browser
    • can switch between different consoles on a minimal Linux install
      • "Ctrl-Alt-#"
      • where # = a specific terminal number
        • default is 1
    • Verified the Swagger UI locally at http://localhost:8000/docs#/ via curl
    • verified the frontend locally available at http://localhost via curl
    • internet setup:
      • allow the connection for all users:
        nmcli connection modify [CONNECTION-NAME] connection.permissions ''
      • make sure wifi gets enabled at boot time (otherwise you have to log in and enable it):
        nmcli connection modify [CONNECTION-NAME] connection.autoconnect yes
      • give it a high priority number for auto-activation (more than all other connections), so the connection activates before login and stays connected after logout:
        nmcli connection modify [CONNECTION-NAME] connection.autoconnect-priority 10
    • pushed the test API up to git and pulled it onto the VM
      • so now can make changes and immediately see the results
      • launch it with python3 ./code/api.py
      • can test it using Postman, but not through connecting to the frontend
        • keeps refreshing the page because it can't find hot-update.json
        • hopefully this won't be an issue once an actual Docker image is built using the /api/ prefix and allowing all forwarding IPs
    • In the context of servers, 0.0.0.0 means "all IPv4 addresses on the local machine". If a host has two IP addresses, 192.168.1.1 and 10.1.2.1, and a server running on the host listens on 0.0.0.0, it will be reachable at both of those IPs.
    • found my public IP address through Windows, not an online website
      • can connect to the VM from anywhere through it
    • programmatically running gunicorn and using uvicorn workers (1):
      • works locally
      • allows the use of nohup properly
      • also works on VM for external postman traffic
      • docs# still don't work but that's okay
      • added install of gunicorn to dockerfile
    • tried creating a new Docker image with:
      • the api prefix on url paths
      • programmatic gunicorn
    • gunicorn didn't keep the process alive; it only did in the test API
      • because docker was being run with -it, which required an interactive terminal
      • so, created a new docker serve cmd for prod to detach the Docker container
    • seems like gunicorn shouldn't be daemonized since it's being controlled by the Docker daemon
    • still seeing "no supported WebSocket library detected" when using gunicorn
    • running the old stable image no longer works for some reason
      • none of the endpoints can be reached
      • likely because /api/ is not used
    • to keep the Docker container alive, it must be run in detached mode
    • docs# were accessible locally when using gunicorn (with uvicorn too)
    • gunicorn fails on the /ask/ endpoint
      • "WARNING: Logging before InitGoogleLogging() is written to STDERR
        F0504 19:37:05.337496 46 Singleton-inl.h:249] Attempting to use singleton std::shared_ptrfolly::IOExecutor/(anonymous namespace)::GlobalTag in child process after fork
        *** Check failure stack trace: ***
        [2023-05-04 19:37:05 +0000] [1] [WARNING] Worker with pid 45 was terminated due to signal 6"
      • seems due either to problems with the I/O singleton, or to signal 6 because the /ask/ endpoint takes so long
      • could've also derived from a wrong gunicorn version, since uvicorn has a specific version pinned
      • possible fix: tiangolo/uvicorn-gunicorn-fastapi-docker#145 (comment)
    • reverting back to uvicorn instead of gunicorn should fix this, since the other Docker image works with the /ask/ endpoint
      • but need the /api/ prefix on urls, so need a new Docker image
    • built a new Docker image using uvicorn, detached docker run, api url prefixes, and allowing port forwarding
      • works locally + through Postman
      • works okay as long as the frontend stays in development mode
        • tried compiling the frontend like for book-club but kept getting error 403 unauthorized and error 500 internal server error from nginx
        • also had to create the directory that the build files were moved to, so this is possibly why
          • didn't have to do this on AWS
        • the only harm in keeping the dev server is the pm2 setup and constant "can't establish a connection to the server at ws" messages in the console
        • when looking in the nginx error.log, saw that permission to access the index.html file was denied
        • solved the problem through restorecon -r /var/www/html, since SELinux had assigned incorrect labels to the copied build folder
        • reset dir and file permissions using chmod =rwx file/dir and it still worked okay
      • for some reason, FastAPI is now accessible on the web
        • but need to navigate to port 8000 to use it without the /api/ prefix on the url
        • added possible steps for FastAPI deployment to the backend instructions
    • didn't even need a location block for static files since it's only a single-page web app
    • logged out of all terminal sessions and the system is still accessible online
    • "save machine state" == "hibernate" in Windows
    • should verify that docker + the container start up on server restart
      • had to add an instruction to ensure nginx starts on server restart
      • launches perfectly as soon as the server reboots
      • should make sure endpoints work too
        • the backend doesn't work at first, but I think that's because the container is relaunching and make serve takes a long time
        • verified that this is the case and the system just needs time to boot
    • websocket issue solution https://stackoverflow.com/a/69518866/13046931
  • fix the weird extra space at the bottom when the frontend is deployed

    • doesn't happen in development
  • install only non-dev dependencies

  • deploy the API onto the university server

    • had to add Docker install instructions for RHEL-based distros
    • pull-eval-image fails:
      • "Error: writing blob: adding layer with blob "sha256:[imageId]": Error processing tar file(lsetxattr /: operation not supported): exit status 1"
      • docker error 125
      • sounds like adding "--storage-opt "overlay.mount_program=/usr/bin/fuse-overlayfs"" when running the docker cmd should bypass this
        • but it slows down the pull since "Tmpfs (and so rootfs) does not allow all the xattrs (user.* and trusted.*) necessary to function as the upper directory"
        • best to run on an XFS file system
        • "tmpfs is a temporary filesystem that resides in memory and/or swap partition(s). Mounting directories as tmpfs can be an effective way of speeding up accesses to their files, or to ensure that their contents are automatically cleared upon reboot."
      • instead, the problem was solved by running "sudo make serve"
    • "No supported WebSocket library detected. Please use 'pip install uvicorn[standard]', or install 'websockets' or 'wsproto' manually."
    • Looks like uninstalling uvicorn and installing 'uvicorn[standard]' should solve this
    • tried doing so in a running container to see if this fixed the problem
      • I don't think this is the right way to do it
    • nginx configuration used:
       server {
              listen 80;
              server_name nlmapper.dblab.nkn.uidaho.edu;

              location / {
                     proxy_pass http://0.0.0.0:8000;
                     proxy_http_version 1.1;
                     proxy_set_header Host $http_host;
                     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                     proxy_set_header X-Forwarded-Proto $scheme;
                     proxy_set_header Upgrade $http_upgrade;
                     proxy_set_header Connection $connection_upgrade;
                     proxy_cache_bypass $http_upgrade;
              }
       }
    • asked Kallol to delete the old image, re-pull the new image + the changes to the git repo, and see the updated deployment instructions
  • deploy the frontend onto the university server

  • setup HTTPS certificates and nginx config

  • verify that the API gives a valid status code + message on OOM errors

[ENHANCEMENT] Dropdown improvements

Is your feature request related to a problem? Please describe.

  • Yes, the dropdown is fairly primitive

Describe the solution you'd like

  • The dropdown could allow for search
  • The dropdown elements should be ordered alphabetically

Additional context

  • search functionality may require a different dropdown component

Development

  • dropdown allows for search
  • dropdown elements ordered alphabetically
    • sorted names, if any are retrieved from the db, using in-place .sort()
  • dropdown should be closed when clicking outside of it
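The alphabetical ordering amounts to one in-place .sort() call before the names populate the dropdown; a sketch (the function name is illustrative, not from the codebase):

```javascript
// Sort retrieved database names alphabetically, in place, before
// they populate the dropdown.
function sortDatabaseNames(names) {
  if (Array.isArray(names)) {
    names.sort((a, b) => a.localeCompare(b));
  }
  return names;
}
```

localeCompare keeps the ordering sensible for mixed-case and accented names, unlike the default code-point sort.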

[ENHANCEMENT] Notify user when uploaded file not in the proper format

Is your feature request related to a problem? Please describe.

  • Yes, any uploaded file is injected straight into the system
    • it may not be in a compatible format, but it is saved anyway

Describe the solution you'd like

  • Notify the user when an uploaded file is not in the proper format and don't save the file to the FS

Additional context

  • Should be done for db file uploads and for the db file created from an sql upload
  • For db file:
    • Upload into system
    • Generate a relevant (how? or maybe any?) question
      • Could possibly ask user for verification question
      • "How many records are there?"
        • should test if this works before requiring more user input
    • If an error code concerning the file format or file accessibility is thrown
      • Delete the file from the FS + tell the user of the file-format error
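The validation flow above can be sketched with the save/ask/remove steps injected so the control flow is clear. The backend is Python; this JavaScript version and all of its names are purely illustrative:

```javascript
// Hypothetical sketch: save the upload, probe it with a generic
// question, and roll back + report the error if the probe fails.
async function validateUpload(file, { save, ask, remove }) {
  await save(file);
  try {
    await ask(file.name, "How many records are there?");
    return { ok: true };
  } catch (err) {
    // bad format or inaccessible file: delete it and tell the user
    await remove(file.name);
    return { ok: false, error: String(err.message || err) };
  }
}
```

Injecting the three steps keeps the rollback logic testable without a real backend.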

Development

  • perform error checking when sql file uploaded

    • done during process of sql file upload
  • perform error checking when db file uploaded

  • notify frontend user when file upload rejected

    • placed the file-failure msg above the dropdown
      • so the dropdown won't obscure it
  • notify frontend user when a question is rejected

    • haven't pinned down why this happens, but it does
    • used the same styling as the other error msg

[ENHANCEMENT] Allow frontend upload of sql file

Is your feature request related to a problem? Please describe.

  • Yes, only db files can be uploaded

Describe the solution you'd like

  • Allow frontend upload of sql file through the same db file upload btn

Additional context

  • sql file upload should be attempted first
    • if that fails, then db file upload should be tried

Development

  • use a try-catch block to attempt db file upload then sql file upload
  • also made sure the uploaded + selected item is added to the optional names
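The try-catch ordering above, with the two upload calls injected as stand-ins for the real API requests (the names are illustrative):

```javascript
// Attempt the sql-file upload first; if the backend rejects it,
// fall back to uploading the same file as a db file.
async function uploadFile(file, uploadSql, uploadDb) {
  try {
    return await uploadSql(file);
  } catch (err) {
    return await uploadDb(file);
  }
}
```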

[ENHANCEMENT] Provide a description of what's in a database to user

Is your feature request related to a problem? Please describe.

  • Yes, users don't know the contents of pre-existing databases

Describe the solution you'd like

  • Provide a description of what's in a database to user

Describe alternatives you've considered

  • unnecessary for client use but needed for usability of UI

Additional context

  • Could be db diagram or db content description
    • Could provide both if tools found
    • Contents description would likely be more user friendly

Development

    • ...

[ENHANCEMENT] Should set committing applicable changes to Picard

Is your feature request related to a problem? Please describe.

  • Yes, some changes made to the backend branch could be committed back to the main repo since they fix the dockerfile

Describe the solution you'd like

  • Should set committing applicable changes to Picard

Additional context

  • Updated dockerfile for proper image building
  • API development process flow chart
  • Graphical representation of what's contained in the API (T5+3B and Picard and their interplay)
  • Endpoint for file upload
  • Endpoint for all db files retrieval
  • Would require a Pull Request (PR)
    • Members of the team don't seem active regarding the project's development

Development

    • ...

[ENHANCEMENT] Notify frontend user when file upload fails

Is your feature request related to a problem? Please describe.

  • Yes, the user doesn't know when or why their file upload fails

Describe the solution you'd like

  • An error message displayed after getting the results of the file upload

Additional context

  • should appear above file upload

Development

[ENHANCEMENT] Figure out what types of queries/questions the backend has the most trouble generating

Is your feature request related to a problem? Please describe.

  • Yes, it's hard to gauge how much the system should be trusted in different scenarios

Describe the solution you'd like

  • Figure out what types of queries/questions the backend has the most trouble generating
  • Then give a report of them to the client with stats

Additional context

  • Could look at what types of questions/queries it performs the worst on when evaluated against unseen Spider question-sql pairs
    • definitely possible since we have the trained model
    • look through documentation/issues for how

Development

[ENHANCEMENT] Allow the insertion of a schema and populate it with faker if desired

Is your feature request related to a problem? Please describe.

  • Yes, currently a database file must be uploaded

Describe the solution you'd like

  • Allow the insertion of a schema and populate it with faker if desired

Additional context

  • Should it take in the sqlite file, populate it with fake data through insertions, then use a tool to generate the db file from it?
    • one such tool, the sqlite3 CLI with a .read command, was used to generate Chinook.db

Development

  • allow upload of an sql file and convert it to a database file using a tool

    • had to call a subprocess to start up a shell and feed the dot command to it
      • otherwise would need to feed it SQL
      • which could be useful for direct SQL query running, but not here
    • created a new method to fulfill this purpose
    • still need to verify whether the sql should be written in byte mode or not
      • it probably shouldn't be, so need to add an option to copy the file in byte mode or not
    • still need to verify whether shutil.copyobj() overwrites a pre-existing destination file or not
      • to test both of these before committing to an image
      • need to find out how to read a local file into a file object for reading + writing
      • documentation says "If dst specifies a file that already exists, it will be replaced."
    • verified that the sqlite3 tool overwrites the destination file
  • stored uploaded sql files in a new directory

    • added a new SQL folder to contain them in the backend args
    • created it at make_serve execution
      • going to need to figure out how to run this cmd when deploying the API
    • steps:
      • create the db file from the sql file
        • if it fails, throw an error because the sql is not in the proper format
      • if successful, save both files in their respective dirs
    • had to change the ordering of steps to account for saving the sql file before generating the db file
      • due to restrictions of the sqlite3 tool
      • then if an error occurs, the uploaded sql file has to be deleted
    • had to copy in byte mode, even though reading from an SQL file
    • if file copying fails, had to delete the empty file that gets created
    • decided to error out if a pre-existing sql file would be overwritten
    • to add sql_path:
      • added it to the backend args
      • added it to the serve.json config
      • added dir creation + docker mounting of it to the makefile
  • figure out how to generate fake data for an unknown schema or a given schema

    • should be able to pass the file name
    • sql data generators online
      • not very user friendly
    • mockaroo seems promising through their api https://www.mockaroo.com/
  • after fake data is generated, should automatically generate the db file and put it into the proper folder

    • should overwrite any pre-existing file in the folder with the same db file name
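One way to sketch the populate step: generate INSERT statements from a table's columns and a fake-value callback, then feed them (together with the schema) to the sqlite3 tool. Everything here — names, signature, quoting — is a hypothetical illustration, not the project's code:

```javascript
// Hypothetical sketch: emit INSERT statements for a table given
// its column names and a per-column fake-value generator.
function generateInserts(table, columns, rowCount, fakeValue) {
  const statements = [];
  for (let i = 0; i < rowCount; i++) {
    // naive single-quote escaping; real code would use parameters
    const values = columns.map(
      (col) => `'${String(fakeValue(col, i)).replace(/'/g, "''")}'`
    );
    statements.push(
      `INSERT INTO ${table} (${columns.join(", ")}) VALUES (${values.join(", ")});`
    );
  }
  return statements;
}
```

A faker-style library would supply the fakeValue callback per column type.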
