
multi-juicer's Introduction

MultiJuicer, Multi User Juice Shop Platform

Running CTFs and security trainings with OWASP Juice Shop is usually quite tricky: Juice Shop just isn't intended to be used by multiple users at a time. Instructing everybody to start Juice Shop on their own machine works, but takes away too much valuable time.

MultiJuicer gives you the ability to run separate Juice Shop instances for every participant on a central Kubernetes cluster, so you can run events without the need for local Juice Shop instances.

Note: MultiJuicer is now an official part of the OWASP Juice Shop project. As part of this change the repo was recently moved from the iteratec organisation into the official juice-shop GitHub organisation. If you notice or encounter any problems introduced by this change, check the v6.0.0 changelog for possible upgrade steps; if the problems can't be solved by it, please reach out via a GitHub discussion or via Slack.

What it does:

  • dynamically creates new Juice Shop instances when needed
  • runs on a single domain and comes with a LoadBalancer that sends traffic to each participant's Juice Shop instance
  • backs up and automatically reapplies challenge progress in case of Juice Shop container restarts
  • cleans up old & unused instances automatically

MultiJuicer, High Level Architecture Diagram

Installation

MultiJuicer runs on Kubernetes; to install it you'll need Helm (Helm >= 3.7 required).

helm install multi-juicer oci://ghcr.io/juice-shop/multi-juicer/helm/multi-juicer

See the production notes for a checklist of values you'll likely need to configure before using MultiJuicer in real events.

Installation Guides for specific Cloud Providers / Environments

Generally MultiJuicer runs on pretty much any Kubernetes cluster, but to make it easier for anybody who is new to Kubernetes we have some guides on how to set up a Kubernetes cluster with MultiJuicer installed for specific cloud providers.

Customizing the Setup

You have some options on how to set up the stack, including options to customize the Juice Shop instances to your own liking. You can find the default config values under: helm/multi-juicer/values.yaml

Download & save the file, then tell Helm to use your config file instead of the default by running:

helm install -f values.yaml multi-juicer ./multi-juicer/helm/multi-juicer/
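If you don't want to clone the repository just for the config file, recent Helm versions can usually dump a chart's default values directly. This is a sketch assuming the OCI chart reference from the installation section; exact flag support may differ between Helm versions:

```shell
# Write the chart's default values to a local file,
# edit it, then pass it back in with `helm install -f values.yaml ...`.
helm show values oci://ghcr.io/juice-shop/multi-juicer/helm/multi-juicer > values.yaml
```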

Uninstallation

helm delete multi-juicer

FAQ

How much compute resources will the cluster require?

To be on the safe side, calculate with:

  • 1GB memory & 1 CPU overhead for the balancer & co
  • 300MB memory & 0.2 CPU per participant for the individual Juice Shop instances

The numbers above reflect the default resource limits. These can be tweaked; see Customizing the Setup.
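As a back-of-the-envelope sketch, the estimate for an event can be computed from the two numbers above (plain shell arithmetic, not part of MultiJuicer itself):

```shell
# Rough capacity estimate using the default limits:
# 1GB (~1024MB) + 1 CPU overhead, plus 300MB + 0.2 CPU per participant.
PARTICIPANTS=30
MEMORY_MB=$((1024 + 300 * PARTICIPANTS))
CPU_MILLI=$((1000 + 200 * PARTICIPANTS))
echo "~${MEMORY_MB}MB memory, ~${CPU_MILLI}m CPU"
# → ~10024MB memory, ~7000m CPU
```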

How many users can MultiJuicer handle?

There is no real fixed limit. (Even though you can configure one 😉) The custom LoadBalancer, through which all traffic for the individual instances flows, can be replicated as much as you'd like. You can also attach a Horizontal Pod Autoscaler to scale the LoadBalancer automatically.
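The autoscaler mentioned above can be attached with a single kubectl command. The deployment name and thresholds below are illustrative assumptions, not values taken from the chart:

```shell
# Keep between 2 and 6 balancer replicas, scaling on CPU usage.
# "juice-balancer" is a hypothetical deployment name; adjust it to your release.
kubectl autoscale deployment juice-balancer --cpu-percent=80 --min=2 --max=6
```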

Why a custom LoadBalancer?

There are some special requirements which we didn't find to be easily solved with any pre-built load balancer:

  • Restricting access to a deployment to only the members of a certain team.
  • The load balancer's cookie must be safe and not easy to spoof to access another instance.
  • Handling the starting of new instances.

If you have awesome ideas on how to overcome these issues without a custom load balancer, please write us, we'd love to hear from you!

Why a separate kubernetes deployment for every team?

There are some pretty good reasons for this:

  • The ability to delete the instances of a team separately. Scaling down safely, without removing instances of active teams, is really tricky with a scaled deployment: you can only choose the desired scale, not which pods to keep and which to throw away.
  • To ensure that pods are still properly associated with teams after a pod gets recreated. This is a non-problem with separate deployments and really hard with scaled deployments.
  • The ability to embed the team name in the deployment name. This seems like a minor reason but makes debugging SOOO much easier when just using kubectl.

How to manage JuiceShop easily using kubectl?

You can list all Juice Shop instances with relevant information using the custom-columns feature of kubectl. You'll need to download juiceShop.txt from the repository first:

$ wget https://raw.githubusercontent.com/juice-shop/multi-juicer/main/juiceShop.txt

$ kubectl get -l app.kubernetes.io/name=juice-shop -o custom-columns-file=juiceShop.txt deployments
TEAM         SOLVED-CHALLENGES   LAST-REQUEST
foobar       3                   Wed May 4 2042 18:14:22 GMT+0000 (Coordinated Universal Time)
team-42      0                   Wed May 4 2042 18:14:30 GMT+0000 (Coordinated Universal Time)
the-empire   0                   Wed May 4 2042 18:14:46 GMT+0000 (Coordinated Universal Time)

Where did this project come from?

The project started at iteratec, a Germany-based software development company, to run security trainings for their own developers and their clients. The project was open sourced in 2019 and donated to the OWASP organisation / the OWASP Juice Shop project in 2023.

Talk with Us!

You can reach us in the #project-juiceshop channel of the OWASP Slack workspace. We'd love to hear any feedback or usage reports you've got. If you are not already in the OWASP Slack workspace, you can join via this link.

multi-juicer's People

Contributors

adrianeriksen, bkimminich, blucas-accela, coffeemakingtoaster, dependabot[bot], dergut, fwijnholds, j12934, jonasbg, jvmdc, michaeleischer, netr0m, nickmalcolm, orangecola, pseudobeard, rseedorff, saymolet, scornelissen85, sharjeelaziz, skandix, stefan-schaermeli, stuebingerb, sydseter, troygerber, wurstbrot, zadjadr


multi-juicer's Issues

Team passcode reset

We deployed multi-juicer as part of an internal security hackathon and it was great 👍

I noticed that a lot of the teams forgot to write down their passcode when creating their instance. Later on, teams wanted to switch browsers, laptops etc. and couldn't because the passcode was gone.

I think it would be very helpful to add the functionality to reset the passcode either in the admin interface or next to the team display card
https://github.com/iteratec/multi-juicer/blob/9dd8ceb7c28f35d66d150d9a2352f9f130241a6c/juice-balancer/ui/src/pages/JoinPage.js#L66-L68

Run End2End test inside kubernetes

Using GitHub Actions we could run our E2E tests against a proper Kubernetes cluster created using tools like kind.

Ideally this would run the tests against a couple of k8s versions (something like the last 4 maybe?). When run in parallel this shouldn't be too slow.

The E2E tests are currently a bit neglected and don't pass properly 😞
Partly because they are not executed automatically

More ways to deploy

I am trying to set juicy-ctf up on a local on-premise Ubuntu server. Could there be some sort of instructions on how to set this up on pure Ubuntu using just one computer (virtualization or two Docker instances perhaps)?

This would probably need to include instructions on how to install kubernetes and helm to fit this method the best.

Also, as a possibility, if a Docker image was added as a deployment method, that would be nice too (have Docker as a wrapper so that everything is contained inside).

Renaming JuicyCTF

Twitter poll results for renaming JuicyCTF with the goal of disambiguation from the juice-shop-ctf NPM module:

Did any better ideas than "MultiJuicer" come up in the meantime?

Progress Watchdog, hardcoded default Namespace

https://github.com/iteratec/multi-juicer/blob/master/progress-watchdog/main.go#L78 — there the "default" namespace is hardcoded, which does not work if deployed to any namespace other than "default".

Therefore I also see permission errors (as the ServiceAccount, Role, and RoleBinding are in another namespace):

panic: deployments.apps is forbidden: User "system:serviceaccount:juicy-ctf:progress-watchdog" cannot list resource "deployments" in API group "apps" in the namespace "default"
goroutine 1 [running]:
main.createProgressUpdateJobs(0xc000180420, 0xc000271760)
/src/main.go:80 +0x5da
main.main()
/src/main.go:66 +0x2ff

Problem with proxying on local server

Hi,
I've got a certain problem. After creating a team I can't access the newly created pod (after pressing Start Hacking I return to creating a new team). This issue only occurs when I'm using my own config file. I changed only secure: false to secure: true, because I want to launch it on our production server. Do you know what the problem may be?

Publish MultiJuicer Terraform Modules for individual cloud providers

Terraform allows distributing reusable modules which set up cloud infrastructure.
This could be used to provide "ready to go" MultiJuicer setups for different cloud environments which can be installed very easily.

These modules should:

  1. set up Kubernetes
  2. install the MultiJuicer Helm chart
  3. install all components required to direct traffic to the MultiJuicer balancer.

To publish the modules we'd probably have to set up individual repositories per cloud provider.
https://www.terraform.io/docs/modules/index.html

MultiJuicer doesn't run on microk8s without ClusterDNS enabled

Hi,

I have the problem that I can't access a Juice Shop instance. I installed MultiJuicer with Helm. After setting up port-forwarding in my Kubernetes cluster, I can open the start page on port 3000 of the balancer. There I can create a new team. After that I click on "Start Hacking" and the website begins to load and load and load... The Juice Shop won't open and I am stuck on the page, which is always loading.

Here are the logs from my pods:

Juice-Shop Logs:

> [email protected] start /juice-shop
> node app

info: All dependencies in ./package.json are satisfied (OK)
info: Chatbot training data botDefaultTrainingData.json validated (OK)
info: Detected Node.js version v12.18.3 (OK)
info: Detected OS linux (OK)
info: Detected CPU x64 (OK)
info: Required file index.html is present (OK)
info: Required file styles.css is present (OK)
info: Required file main-es2018.js is present (OK)
info: Required file tutorial-es2018.js is present (OK)
info: Required file polyfills-es2018.js is present (OK)
info: Required file runtime-es2018.js is present (OK)
info: Required file vendor-es2018.js is present (OK)
info: Required file main-es5.js is present (OK)
info: Required file tutorial-es5.js is present (OK)
info: Required file polyfills-es5.js is present (OK)
info: Required file runtime-es5.js is present (OK)
info: Required file vendor-es5.js is present (OK)
info: Configuration multi-juicer validated (OK)
info: Port 3000 is available (OK)
info: Server listening on port 3000

Juice-Balancer Logs:

time="2020-12-15T13:39:20.768Z" level="info" msg="JuiceBalancer listening on port 3000!"
time="2020-12-15T13:46:10.266Z" level="info" msg="Team test doesn't have a JuiceShop deployment yet"
time="2020-12-15T13:46:10.280Z" level="info" msg="Reached 0/10 instances"
time="2020-12-15T13:46:11.402Z" level="info" msg="Creating JuiceShop Deployment for team \"test""
time="2020-12-15T13:46:11.532Z" level="info" msg="Created JuiceShop Deployment for team \"test""
time="2020-12-15T13:46:11.760Z" level="info" msg="Awaiting readiness of JuiceShop Deployment for team \"test""
time="2020-12-15T13:46:54.404Z" level="info" msg="JuiceShop Deployment for team \"test" ready"
time="2020-12-15T13:47:21.999Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:49:22.419Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:50:10.444Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:51:23.126Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:52:11.072Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:53:23.762Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:54:11.615Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:55:24.377Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:56:12.203Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:57:25.183Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:58:12.807Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T13:59:25.936Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T14:00:13.754Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T14:01:26.806Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T14:02:14.442Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T14:03:27.412Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T14:04:15.032Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T14:05:28.143Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T14:06:15.750Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T14:08:14.131Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"
time="2020-12-15T14:08:16.503Z" level="warn" msg="Proxy fail \"ENOTFOUND" for: GET /"

I already updated the Helm repo and reinstalled, but it won't work.

Use kubernetes annotations to store instance information instead of redis

MultiJuicer currently depends on Redis as a data store.
MultiJuicer stores 3 pieces of data related to every instance in Redis:

  1. Passcode: bcrypt hash of the instance's passcode
  2. LastRequest: timestamp of the last request proxied to the instance. Updated at most every 10 seconds to reduce load.
  3. ContinueCode: JuiceShop ContinueCode to back up the current instance progress.

All of this data could be stored directly on the instance, as annotations on the instance's Kubernetes deployment or directly on the pod. This would make it possible to remove Redis entirely, which would make the setup easier as no persistent volumes are required, and it removes a potential point of failure.
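For illustration, annotation-stored data could then be read back with plain kubectl. The deployment name and annotation key here are taken from the backup example elsewhere in this document; note that dots inside annotation keys must be escaped in jsonpath expressions:

```shell
# Read the backed-up continue code straight from the deployment's annotations.
kubectl get deployment t-team42-juiceshop \
  -o jsonpath='{.metadata.annotations.multi-juicer\.iteratec\.dev/continueCode}'
```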

Scoreboard / Trainer Dashboard

To give an overview of which teams have already solved particular challenges, MultiJuicer (JuicyCTF 😉) should provide a scoreboard.
The scoreboard is meant for less competitive events and should be focused on providing a helpful overview for the trainer(s) and a fun overview for the players.

This scoreboard should be able to display the following things:

  • Challenge List Page: Show a list of all challenges and show how many users have solved the challenge
  • Challenge Detail Page: See which users have solved a particular challenge
  • (Optional) Category Overview: List all challenges of a category (e.g. XSS) and how many users have solved them.

It might also be nice to add an option to use a more competitive mode. Competition can be fun even during training. For that a ranking could be added with the following pages:

  • Ranking Page: Ranking of users. Each challenge should give points (or maybe just stars) based on its difficulty. The players are then ranked on how many points they have earned
  • Team Detail Page: Show all solved challenges of a single team

Isolate JuiceShop Instances from each other using NetworkPolicies

Currently a user could use RCE or SSRF vulnerabilities to connect to the JuiceShop instances of other users.

This would kind of be an awesome challenge in itself 😅
Like: "Steal the challenge progress from another team"

But as we (currently 😉) don't have the possibility to add new challenges at run time, it would probably be best to prohibit any traffic from a JuiceShop pod to other JuiceShop pods via k8s NetworkPolicies. It might even work to prevent any cluster-internal traffic from the JuiceShop pods entirely; this would have to be tested to ensure that it doesn't cause trouble with the juice-balancer.
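A rough sketch of such a NetworkPolicy, assuming the instances carry the app: juice-shop label shown in the deployment example elsewhere in this document (the balancer label used here is a hypothetical placeholder and would need to match the real chart):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: juice-shop-isolation
spec:
  # Applies to all Juice Shop instance pods.
  podSelector:
    matchLabels:
      app: juice-shop
  policyTypes:
    - Ingress
  ingress:
    # Only accept traffic from the balancer, never from other Juice Shop pods.
    - from:
        - podSelector:
            matchLabels:
              app: juice-balancer # hypothetical label, adjust to the chart
```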

Issues when trying to customize shop with values.yml

Hi there,

I'm trying to build a custom shop look for an internal CTF using Juice Shop. I've been able to modify the shop title and logo in values.yml, but when I try to add the "products" section to it, the Juice Shop instance will not load.

I'm using multi-juicer with kubernetes with the following setup:

minikube start
helm install multi-juicer multi-juicer/multi-juicer -f multi-juicer/helm/multi-juicer/test.yml

I've also tried using one of the custom YAML files provided in the Juice Shop guide (https://pwning.owasp-juice.shop/part1/customization.html) (I tried the Mozilla one), but it will just load the default Juice Shop look.

Any help would be appreciated.

Backup CodingChallenges Progress per Instance

The new CodingChallenges (both "FindIt" and "FixIt" challenges) work in the current MultiJuicer version but are not backed up by the progress-watchdog like normal JuiceShop challenges.

Short recap: the challenge progress is currently "backed up" to the deployment's annotations. A short example (I changed the continue code so that nobody steals it 🦹):

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    multi-juicer.iteratec.dev/challengesSolved: "66"
    multi-juicer.iteratec.dev/continueCode: Q2HBuBhDtJcqIXToCnF8iPSjUKurhwt2IyTJszinfyHxuNhRTaCwFMi5fPSzUlH8Ru55hjRtVYcb4TgjF2ZiqjfVgUYWHjruv2cNYIrQTQoCzmsmqFgDSNjU4ZHo4HwmtQMczpTJvC8rslJi6KfQ3SMLUbmHOBhnnIoZsjE
    multi-juicer.iteratec.dev/lastRequest: "1633681195273"
    multi-juicer.iteratec.dev/lastRequestReadable: Fri Oct 08 2021 08:19:55 GMT+0000
      (Coordinated Universal Time)
    multi-juicer.iteratec.dev/passcode: $2a$12$XcQFCciAaJzLEXQH48qPxO3a8HMAXZ.a3iJ2mMeZ29mKI38CiGXoe
  creationTimestamp: "2021-10-08T08:07:41Z"
  generation: 1
  labels:
    app: juice-shop
    deployment-context: mj
    team: team42
  name: t-team42-juiceshop
  namespace: default
spec:
  ...

This mechanism should be extended to also back up the values used for the "continueCodeFindIt" and "continueCodeFixIt" cookies:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    multi-juicer.iteratec.dev/challengesSolved: "66"
    multi-juicer.iteratec.dev/continueCode: Q2HBuBhDtJcqIXToCnF8iPSjUKurhwt2IyTJszinfyHxuNhRTaCwFMi5fPSzUlH8Ru55hjRtVYcb4TgjF2ZiqjfVgUYWHjruv2cNYIrQTQoCzmsmqFgDSNjU4ZHo4HwmtQMczpTJvC8rslJi6KfQ3SMLUbmHOBhnnIoZsjE
    multi-juicer.iteratec.dev/continueCodeFindIt: continueCodeFindItHere
    multi-juicer.iteratec.dev/continueCodeFixIt: continueCodeFixItHere
    multi-juicer.iteratec.dev/lastRequest: "1633681195273"
    multi-juicer.iteratec.dev/lastRequestReadable: Fri Oct 08 2021 08:19:55 GMT+0000
      (Coordinated Universal Time)
    multi-juicer.iteratec.dev/passcode: $2a$12$XcQFCciAaJzLEXQH48qPxO3a8HMAXZ.a3iJ2mMeZ29mKI38CiGXoe
  creationTimestamp: "2021-10-08T08:07:41Z"
  generation: 1
  labels:
    app: juice-shop
    deployment-context: mj
    team: team42
  name: t-team42-juiceshop
  namespace: default
spec:
  ...

Configuration Problems with Multi-Juicer

Hi,

I'm trying to edit the values.yaml to hide the challenge hints and the hacking instructor and to raise the grace period to 10d. Therefore I deleted the # in front of showChallengeHints and showHackingInstructor in the values.yaml file. After that I tried the installation with your instruction: helm install -f values.yaml multi-juicer ./multi-juicer/helm/multi-juicer/
This command was not useful at all, because I get the following error: Error: path "./multi-juicer/helm/multi-juicer/" not found
The only way this command works is if I change it to: helm install -f values.yaml multi-juicer multi-juicer/multi-juicer
If I do this, I won't be able to open a JuiceShop after joining a team. The error that Kubernetes shows me is:

Readiness probe failed: Get "http://10.1.13.73:3000/rest/admin/application-version": dial tcp 10.1.13.73:3000: connect: connection refused
Back-off restarting failed container

(If I install it without changing the values.yaml file, it is working)

After that I tried to change the values using the --set option. My command was: helm install multi-juicer multi-juicer/multi-juicer --set="juiceShopCleanup.gracePeriod=10d" --set="juiceShop.config.application.showChallengeHints=false" --set="juiceShop.config.application.showHackingInstructor=false"
If I try this command, I get this error:

 coalesce.go:199: warning: destination for config is a table. Ignoring non-table value application:
  logo: JuiceShopCTF_Logo.png
  favicon: favicon_ctf.ico
  # showChallengeHints: false
  showVersionNumber: false
  # showHackingInstructor: false
  showGitHubLinks: false
# ctf:
  # showFlagsInNotifications: true
Error: template: multi-juicer/templates/juice-shop-config-map.yaml:9:37: executing "multi-juicer/templates/juice-shop-config-map.yaml" at <4>: wrong type for value                       ; expected string; got map[string]interface {}

Can you help me to solve this problem?

Regards

Integrate a CTF Platform

Integrate a CTF platform (e.g. CTFd or FBCTF) directly.

This setup should include:

  • Starting up the CTF platform via Helm

Optional but really nice to have:

  • Automatically importing the challenges into the CTF platform, e.g. by using juice-shop-ctf as a library to create the import and then importing them directly via an init container / whatever into the CTF platform
  • Automatically submit flags to the CTF platform
  • Automatically create users for the CTF platform when they create a team in the balancer / sync teams between balancer and CTF platform
  • Proxy requests to the CTF platform so that both JuiceShop and scoreboard are able to run on the same URL

Handle proxy errors properly

When an instance got deleted because of inactivity and the user visits the app, they will get a proxy error.

504 Gateway Time-out
The server didn't respond in time.

This isn't really helpful.
It would be better to redirect the user to the balancer page with a helpful message and give them the ability to recreate the instance.

Application Level Prometheus Metrics and custom Grafana Dashboard

Add Prometheus metrics for application-level information like:

  • Number of registered teams
  • Number of logins (successful, failed)
  • Number of deleted instances
  • Number of challenges solved (Requires #27)
  • Http Response Codes (per path)[for Balancer and Proxied Requests]
  • Http Request Latency from JuiceShop (per path)[for Balancer and Proxied Requests]
  • Number of requests proxied to the instances

The metrics endpoint should not be located at the usual /metrics but at /balancer/metrics so that potential new JuiceShop challenges (juice-shop/juice-shop#1275) are not hindered by this 😉.

An option should be added to the Helm chart to enable / disable the endpoint.

The Helm chart should provide an option to deploy a prometheus-operator ServiceMonitor for the /balancer/metrics endpoint.

The /balancer/metrics endpoint should also be secured using HTTP basic auth. The credentials for it should be located in a Kubernetes secret to make them accessible to both the balancer and the ServiceMonitor.

To provide a baseline dashboard for the new metrics, a Grafana dashboard should be created and placed inside a specially labelled ConfigMap which can be picked up by the Grafana sidecar: https://github.com/helm/charts/tree/master/stable/grafana#import-dashboards

Score Board loading stuck from null pointer error

  1. Launch MultiJuicer v3.3.0
  2. Create a new instance
  3. Visit the /#/score-board page of that instance
  4. The loading animation is shown endlessly while the console spills out
vendor-es2015.js:1 ERROR TypeError: Cannot read property 'showFlagsInNotifications' of null
    at h._next (main-es2015.js:formatted:7467)
    at h.__tryOrUnsub (vendor-es2015.js:1)
    at h.next (vendor-es2015.js:1)
    at c._next (vendor-es2015.js:1)
    at c.next (vendor-es2015.js:1)
    at a._next (vendor-es2015.js:1)
    at a.next (vendor-es2015.js:1)
    at a._next (vendor-es2015.js:1)
    at a.next (vendor-es2015.js:1)
    at a._next (vendor-es2015.js:1)
vn @ vendor-es2015.js:1
handleError @ vendor-es2015.js:1
next @ vendor-es2015.js:1
i @ vendor-es2015.js:1
__tryOrUnsub @ vendor-es2015.js:1
next @ vendor-es2015.js:1
_next @ vendor-es2015.js:1
next @ vendor-es2015.js:1
next @ vendor-es2015.js:1
emit @ vendor-es2015.js:1
(anonymous) @ vendor-es2015.js:1
invoke @ polyfills-es2015.js:1
run @ polyfills-es2015.js:1
runOutsideAngular @ vendor-es2015.js:1
onHandleError @ vendor-es2015.js:1
handleError @ polyfills-es2015.js:1
runTask @ polyfills-es2015.js:1
invokeTask @ polyfills-es2015.js:1
invoke @ polyfills-es2015.js:1
n.args.<computed> @ polyfills-es2015.js:1
setTimeout (async)
a @ polyfills-es2015.js:1
scheduleTask @ polyfills-es2015.js:1
onScheduleTask @ polyfills-es2015.js:1
scheduleTask @ polyfills-es2015.js:1
scheduleTask @ polyfills-es2015.js:1
scheduleMacroTask @ polyfills-es2015.js:1
u @ polyfills-es2015.js:1
(anonymous) @ polyfills-es2015.js:1
i.<computed> @ polyfills-es2015.js:1
i @ vendor-es2015.js:1
__tryOrUnsub @ vendor-es2015.js:1
next @ vendor-es2015.js:1
_next @ vendor-es2015.js:1
next @ vendor-es2015.js:1
_next @ vendor-es2015.js:1
next @ vendor-es2015.js:1
_next @ vendor-es2015.js:1
next @ vendor-es2015.js:1
_next @ vendor-es2015.js:1
next @ vendor-es2015.js:1
notifyNext @ vendor-es2015.js:1
_next @ vendor-es2015.js:1
next @ vendor-es2015.js:1
a @ vendor-es2015.js:1
invokeTask @ polyfills-es2015.js:1
onInvokeTask @ vendor-es2015.js:1
invokeTask @ polyfills-es2015.js:1
runTask @ polyfills-es2015.js:1
invokeTask @ polyfills-es2015.js:1
f @ polyfills-es2015.js:1
p @ polyfills-es2015.js:1

This is the line where the null pointer happens in Juice Shop:

this.allowRepeatNotifications = t.challenges.showSolvedNotifications && t.ctf.showFlagsInNotifications

I think this might happen due to the way the values.yaml overwrites the configuration by default: the ctf property is probably null after this config is applied, overwriting the default settings of Juice Shop itself. The fix is probably to also comment out the line with ctf:, because then I'd expect Juice Shop's own defaults to load.

I'll send a PR with that change, but I can't really test it due to Kubernetes incompetence... :-D

Openshift instance issue

Hi Team,

I have deployed the application on our enterprise OpenShift and I am able to log in as admin as well, but when I try to create a new team, I get "Internal Server Error". I have already tried to reproduce the issue by deploying it in different public cloud environments; the issue is the same everywhere. Kindly help.

Also, it looks like the repo https://iteratec.github.io/multi-juicer/ doesn't exist, please check.

Regards,
Sashi

OpenShift 4 instance issue

Hi team,

I've installed multi-juicer on an OpenShift 4 corporate instance from DockerHub and everything looks fine, except it doesn't create new teams. When I check the pod's log I see the following message:

time="2022-05-25T14:59:03.974Z" level="info" msg="JuiceBalancer listening on port 3000!"
time="2022-05-25T14:59:22.404Z" level="error" msg="Encountered unknown error while checking for existing JuiceShop deployment"
time="2022-05-25T14:59:22.404Z" level="error" msg="deployments.apps "t-team1-juiceshop" is forbidden: User "system:serviceaccount:cp-429991:default" cannot get resource "deployments" in API group "apps" in the namespace "default""

Is there any workaround to fix it? I can create and bind roles only within the namespace of the project.

connectionCache

The checkIfInstanceIsUp function in proxy.js uses req.cleanedTeamname to check for the team in the cache, while the updateLastConnectTimestamp function uses req.teamname to add the team to the cache. I added some logging and noticed that checkIfInstanceIsUp never finds the team in the cache and always calls getJuiceShopInstanceForTeamname(teamname), which adds quite a bit of time to every request. updateLastConnectTimestamp seems to add t-teamname while checkIfInstanceIsUp checks just for teamname. Both functions should use the same version of the team name for the cache.

New Logo

The current logo was created pretty hastily.

Would be nice to have a new and better logo which:

  • can be printed on stickers 😉
  • has a similar graphic style to the JuiceShop icon

General Ideas are to have a mixer / blender spilling over with all the juice it has inside.

Template variables could not be initialized: Datasource named Prometheus was not found

Hi,

I have a problem with setting up MultiJuicer with a monitoring setup. I have followed the guides for the monitoring setup and installed everything.
I installed Grafana with the commands from https://github.com/grafana/helm-charts/tree/main/charts/grafana and set the attribute sidecar.datasource.enabled to true in Grafana's values.yaml file.
Then I opened Grafana in the browser to view the dashboard. The problem is that firstly the error message "Template variables could not be initialized: Datasource named Prometheus was not found" appears, and secondly the data in the dashboard is completely random and does not reflect statistics of any Juice Shop instance.
Do you know how to solve this problem? Basically, I just followed the guides and ended up with the error message.

K8s autoscaling - balancer timeouts

I am running multi-juicer on Google Kubernetes Engine (GKE) and have spotted a bug when running large events.

Spinning up a new Juice Shop container on a node that has capacity takes around 22s. But when a node is at full capacity and you spin up a new container, it takes about 1m30s for a new node to become available and the container to launch. However, in that time the balancer times out with:

GET https://training.test.appsec.tools.bbc.co.uk/balancer/teams/7tguy/wait-till-ready 502
Failed to wait for deployment readiness

You can still log out of the balancer and log back in to your team with your password, but you need to know that you need to do that, and in large team events, the unlucky few who hit this bug don't know that they need to do that.

To replicate:

  1. Operate a GKE cluster that is sized appropriately for multi-juicer i.e. the default node pool is running with very little spare CPU capacity.
  2. Launch another multi-juicer team (juice-shop container); this will force the cluster to autoscale and add a new node.
  3. Looking at the multi-juicer balancer with dev tools open, errors will be reported after about 1m.
  4. Watch the cluster resources page in GKE. The new node and associated juice-shop container should take about 1m30s to become operational.
  5. Back to the multi-juicer balancer with dev tools, the page will fail to refresh/reload and will simply fail at the page saying "Starting a new Juice Shop Instance" with the spinner spinning indefinitely.

Add "reset team passcodes" button to admin page

It would be nice if the passcode for a team on multi-juicer could be stored in the metadata of the team's pod. Then, if everyone in the team forgot their passcode, they could ask the admins of the cluster to recover it by checking the metadata of the team pod.

Like one can for the admin password, as seen in the attached picture, but with the option to attach the team passcode to its pod.

(screenshot attached)
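For comparison, the admin password can be read from a secret today. The secret and key names below are assumptions based on the chart and may differ per release:

```shell
# Assumed secret/key names; adjust to your release and namespace:
kubectl get secret juice-balancer-secret \
  -o=jsonpath='{.data.adminPassword}' | base64 --decode
```

The feature request is essentially to make an equivalent lookup possible per team, e.g. via an annotation on the team's pod or a per-team secret.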

Issue deploying multi-juicer

Helm Version:
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}

I'm trying to deploy multi-juicer using this command:

helm install -f values.yaml multi-juicer ./multi-juicer/helm/multi-juicer

But I get the error: Error: This command needs 1 argument: chart name
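The version output above shows Helm 2, whose `helm install` takes exactly one positional argument (the chart); the release name is passed via `--name`. Because two positional arguments were given, Helm 2 reports "This command needs 1 argument". A likely fix under Helm 2 (note that the chart itself requires Helm >= 3.7, so upgrading Helm is the better path):

```shell
# Helm 2 syntax: release name via --name, chart path as the only positional argument
helm install --name multi-juicer -f values.yaml ./multi-juicer/helm/multi-juicer

# Helm 3 syntax, which matches the command as originally written:
# helm install multi-juicer -f values.yaml ./multi-juicer/helm/multi-juicer
```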

Last Login IP Not working correctly

The last login IP shows the IP of the LoadBalancer, not the IP of the user... 😳

Warning Spoilers:

The challenge to override the Last Login IP will most likely also not work in most cloud setups, as the initial cloud load balancer typically strips away (or rewrites) the X-Forwarded-For headers set by the user.
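To illustrate the spoofing attempt that breaks here (URL hypothetical): a participant sends a forged header, but if the cloud load balancer in front rewrites or strips it, the Juice Shop instance never sees the forged value and the challenge cannot be solved:

```shell
# Forged client IP header; a cloud LB that sanitizes X-Forwarded-For
# will replace this value with the real connection source before it
# reaches the balancer / Juice Shop instance.
curl -H 'X-Forwarded-For: 1.2.3.4' https://multi-juicer.example.com/
```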

Multi Juicer in an offline network

MultiJuicer stops working in an offline network environment. The following steps were followed to install it:

sudo apt install docker.io docker-compose -y
sudo snap install kubectl --classic
sudo curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
sudo curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start
kubectl cluster-info
helm repo add multi-juicer https://iteratec.github.io/multi-juicer/
helm install multi-juicer multi-juicer/multi-juicer
kubectl get pods
wget https://raw.githubusercontent.com/iteratec/multi-juicer/main/guides/k8s/k8s-juice-service.yaml
kubectl apply -f k8s-juice-service.yaml
kubectl port-forward --address 0.0.0.0 service/juice-balancer 3000:3000

When the system is brought into the isolated network, started with kubectl port-forward --address 0.0.0.0 service/juice-balancer 3000:3000 and a client attempts to connect, here is the result:

(screenshot attached)

Any ideas?
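A common cause in air-gapped setups is that the cluster tries to pull container images at pod start and fails without internet access. One approach is to pre-load every required image while still online; the image name below is one example, and the full list can be read from the pod specs (`kubectl get pods -o yaml`):

```shell
# While online: export the images the charts reference
docker pull bkimminich/juice-shop:latest
docker save bkimminich/juice-shop:latest -o juice-shop.tar

# In the offline environment: load them into minikube's container runtime
minikube image load juice-shop.tar
```

Setting the deployments' `imagePullPolicy` to `IfNotPresent` (where it is not already) prevents Kubernetes from attempting a pull for images that are loaded locally.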

New instances are failing with "multi-user-redis"

This used to work fine but now I am seeing this issue. Any ideas?

Events:
  Type     Reason     Age                From                   Message
  ----     ------     ----               ----                   -------
  Normal   Scheduled  36s                default-scheduler      Successfully assigned default/t-val-juiceshop-79b6488494-bftsr to s0025abl5905
  Normal   Pulled     35s                kubelet, s0025abl5905  Container image "bkimminich/juice-shop:latest" already present on machine
  Normal   Created    35s                kubelet, s0025abl5905  Created container juice-shop
  Normal   Started    34s                kubelet, s0025abl5905  Started container juice-shop
  Normal   Pulling    15s (x3 over 34s)  kubelet, s0025abl5905  Pulling image "iteratec/juice-progress-watchdog"
  Normal   Pulled     12s (x3 over 31s)  kubelet, s0025abl5905  Successfully pulled image "iteratec/juice-progress-watchdog"
  Warning  Failed     12s (x3 over 31s)  kubelet, s0025abl5905  Error: secret "multi-juicer-redis" not found
  Warning  Unhealthy  9s (x11 over 29s)  kubelet, s0025abl5905  Readiness probe failed: Get http://10.244.1.121:3000/rest/admin/application-version: dial tcp 10.244.1.121:3000: connect: connection refused
[chandv3_adm@sdcnjc ~]$ curl http://10.244.1.121:3000/rest/admin/application-version
{"version":"9.3.1"}
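The pod events show the progress-watchdog container failing on `secret "multi-juicer-redis" not found`, while the Juice Shop container itself comes up fine (hence the version endpoint answering). Checking, and if necessary recreating, the secret in the namespace where the instances run may unblock it; the key name below is an assumption, so verify it against the chart's templates:

```shell
# Does the secret exist in the instance namespace?
kubectl get secret multi-juicer-redis

# If it was deleted (e.g. by a partial uninstall), recreate it manually;
# the key name `redis-password` is an assumption taken from typical Redis charts:
kubectl create secret generic multi-juicer-redis \
  --from-literal=redis-password="$(openssl rand -hex 16)"
```

Note that recreating the secret with a fresh password also requires restarting Redis (or pointing it at the same password), otherwise the watchdog will authenticate with a password Redis doesn't know.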

Public access deployed on Azure

I am new here; I followed the steps to deploy multi-juicer on AKS and tried port forwarding to access it from my local machine. Unfortunately, it doesn't work. Do I need to change anything on the Azure side to get public access?

Appreciate your help and thank you in advance

Regards,
Karthik
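`kubectl port-forward` only exposes the service on the machine running kubectl. For public access on AKS, one option is to change the balancer service to type `LoadBalancer`, which makes Azure provision a public IP. The service name below is an assumption; check `kubectl get svc` for the actual name in your release:

```shell
# Expose the balancer via an Azure public load balancer (service name assumed):
kubectl patch svc juice-balancer -p '{"spec": {"type": "LoadBalancer"}}'

# Wait for Azure to assign an EXTERNAL-IP:
kubectl get svc juice-balancer --watch
```

For real events, an Ingress controller with TLS in front of the balancer is the more common setup; see the production notes linked in the README.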

Integration Test Suite

Should have a test suite to test:

  • registering as a new user
  • logging in to an already existing team

Should run juice-balancer in a somewhat proper cluster.

Maybe using k3s?

Or spinning up a new test cluster somewhere?
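A throwaway k3s-based cluster per CI run is one way to get the "somewhat proper cluster" cheaply. A sketch using k3d (k3s in Docker); the chart path is assumed and the test command is a placeholder:

```shell
# Create a disposable k3s cluster in Docker
k3d cluster create multi-juicer-test

# Install the chart under test (path assumed)
helm install multi-juicer ./helm/multi-juicer

# ...run the integration tests (registration, login to existing team)
# against the juice-balancer service here...

# Tear the cluster down again
k3d cluster delete multi-juicer-test
```

k3d starts in seconds and needs no cloud credentials, which makes it a good fit for per-PR integration runs compared to spinning up a remote test cluster.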

SecurityContext should support runAsUser

I have an issue with my PodSecurityPolicy specifying runAsNonRoot as true.

The JuiceBalancer already uses the non-root user app but Kubernetes needs a numeric id in order to verify that it is not the root user:

Error: container has runAsNonRoot and image has non-numeric user (app), cannot verify user is non-root

For this to work, the pod securityContext needs to specify runAsUser with the according user id of app.

I would love a change to either:

  1. include the numeric id of the app user in the template:
-- helm/multi-juicer/templates/juice-balancer-deployment.yaml

securityContext:
  runAsUser: 100
  runAsGroup: 101
  2. or include a template variable for the pod securityContext, so that users can modify it themselves:
-- helm/multi-juicer/templates/juice-balancer-deployment.yaml

{{- if .Values.balancer.securityContext }}
securityContext:
  {{ toYaml .Values.balancer.securityContext | indent 8 }}
{{- end }}
-- helm/multi-juicer/values.yaml

balancer:
  securityContext: {}

After clicking "Start Hacking", I am only forwarded back to the start page (asking for a new team name)

Hello,

I installed the latest version on a Kubernetes cluster. External accessibility with the help of the load balancer is not a problem at all.

As soon as I enter a team name, I get to the page with the access code and a message that the shop is starting up. After about 10-15 seconds the "Start Hacking" button appears for the supposed Juice Shop. As soon as I click on it, I am taken back to the start page, where I can select a team name again.

If I use the same team name again, it correctly asks for the access code. But even here I only land on the start page.

If I use the "Admin" team, I get to the admin overview. Here I am told that there are no teams yet.

In Kubernetes, I see the team's matching pod.

What could be the reason?
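Symptoms like this (instance pod running, but every request bounced back to the start page, and the admin view showing no teams) often point to the balancer not receiving its own cookie back, e.g. because a proxy in front drops or rewrites cookies. A debugging sketch; the deployment/service names and URL are assumptions:

```shell
# Check the balancer's own view of incoming requests:
kubectl logs deployment/juice-balancer

# Inspect whether a team cookie is actually set on join (URL assumed);
# -c - prints received cookies to stdout:
curl -v -c - https://multi-juicer.example.com/balancer/
```

If the `Set-Cookie` response header is present but the browser never sends the cookie back, look at the cookie's domain/secure attributes versus how the site is exposed (HTTP vs HTTPS, hostname vs IP).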

Why are successfulJobsHistoryLimit and failedJobsHistoryLimit set to 10?

Both successfulJobsHistoryLimit and failedJobsHistoryLimit default to 10, which seems to consume resources in K8s (100m CPU per process). Is there a reason for setting them to 10 and letting the processes persist after completion, or can I reduce this to a much lower value without any knock-on effects?
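These limits only control how many finished Job objects (and their completed pods) are retained for later inspection, so lowering them mainly trades away debugging history. A fragment showing the relevant fields on a CronJob (the job name is an example; completed pods should not normally consume CPU, so persistent usage is worth investigating separately):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cleanup-job        # example name
spec:
  successfulJobsHistoryLimit: 1   # upstream Kubernetes defaults are 3 / 1
  failedJobsHistoryLimit: 1
  # schedule, jobTemplate, etc. unchanged
```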

Install in non internet environment

Hi,

I am quite new to Juicy-CTF and have problems getting it to work in an offline environment. My cluster is a 2-node Kubernetes cluster that is not connected to the internet. I have never used Helm before, so I just installed the Helm RPM for RHEL 7.

Once the cluster was running, I could not find any resources on how to get Helm to work in my environment.

  1. I cloned this repo and ran the following
    #helm install juicy-ctf ./juicy-ctf-master/helm/juicy-ctf/values.yaml
  2. This created a few pods in the cluster:
NAME                             READY   STATUS      RESTARTS   AGE
cleanup-job-1577419200-n9dvn     0/1     Completed   0          55m
juice-balancer-5fb5d9c77-qbktt   1/1     Running     0          61m
juicy-ctf-redis-master-0         0/1     Pending     0          61m
juicy-ctf-redis-slave-0          0/1     Pending     0          61m
  3. The Redis pods remained Pending, so I investigated the events:
LAST SEEN   TYPE      REASON             OBJECT                                                      MESSAGE
54m         Normal    Scheduled          pod/cleanup-job-1577419200-n9dvn                            Successfully assigned default/cleanup-job-1577419200-n9dvn to s0025abl5905
54m         Normal    Pulling            pod/cleanup-job-1577419200-n9dvn                            Pulling image "iteratec/juice-cleaner:latest"
54m         Normal    Pulled             pod/cleanup-job-1577419200-n9dvn                            Successfully pulled image "iteratec/juice-cleaner:latest"
54m         Normal    Created            pod/cleanup-job-1577419200-n9dvn                            Created container cleanup-job
54m         Normal    Started            pod/cleanup-job-1577419200-n9dvn                            Started container cleanup-job
54m         Normal    SuccessfulCreate   job/cleanup-job-1577419200                                  Created pod: cleanup-job-1577419200-n9dvn
54m         Normal    SuccessfulCreate   cronjob/cleanup-job                                         Created job cleanup-job-1577419200
54m         Normal    SawCompletedJob    cronjob/cleanup-job                                         Saw completed job: cleanup-job-1577419200, status: Complete
48s         Warning   FailedScheduling   pod/juicy-ctf-redis-master-0                                error while running "VolumeBinding" filter plugin for pod "juicy-ctf-redis-master-0": pod has unbound immediate PersistentVolumeClaims
48s         Warning   FailedScheduling   pod/juicy-ctf-redis-slave-0                                 error while running "VolumeBinding" filter plugin for pod "juicy-ctf-redis-slave-0": pod has unbound immediate PersistentVolumeClaims
12s         Normal    FailedBinding      persistentvolumeclaim/redis-data-juicy-ctf-redis-master-0   no persistent volumes available for this claim and no storage class is set
12s         Normal    FailedBinding      persistentvolumeclaim/redis-data-juicy-ctf-redis-slave-0    no persistent volumes available for this claim and no storage class is set

I did not understand the message, so I just deleted the juicy-ctf release:
helm delete juicy-ctf
I assumed the problem could be with the persistent volumes, so I created them myself in redis-pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-data-juicy-ctf-redis-master-0
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /opt/appsec/redis-cluster-master-0
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-data-juicy-ctf-redis-slave-0
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /opt/appsec/redis-cluster-slave-0

And applied the above:
kubectl apply -f redis-pv.yaml
After which I created the claims:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistentvolumeclaim/redis-data-juicy-ctf-redis-master-0
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistentvolumeclaim/redis-data-juicy-ctf-redis-slave-0
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

When I check the created storage, I see:

NAME                                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS   REASON   AGE
redis-data-juicy-ctf-redis-master-0   20Gi       RWO            Retain           Bound    default/juicy-ctf-redis-master-0   manual                  53m
redis-data-juicy-ctf-redis-slave-0    20Gi       RWO            Retain           Bound    default/juicy-ctf-redis-slave-0    manual                  53m

After this, if I install juicy-ctf again, I still get the same error. I am not sure how to fix this. Can you please help?
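Two issues are visible in the quoted manifests. First, the PVC `metadata.name` values contain a slash (`persistentvolumeclaim/redis-data-...`), which is not a valid Kubernetes object name, so those claims were never the ones the StatefulSet uses anyway. Second, the PVs declare `storageClassName: manual` while the chart's own PVCs request no storage class, so they never match. A sketch of a PV that the chart's claims can bind to, assuming the cluster has no default StorageClass (paths and sizes are examples):

```yaml
# A PV with no storageClassName can satisfy a PVC that also requests
# no storage class, when no default StorageClass exists in the cluster.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-data-juicy-ctf-redis-master-0
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /opt/appsec/redis-cluster-master-0
```

Alternatively, if the chart exposes a storage-class value for Redis persistence, setting it to `manual` would make the chart-generated PVCs match the existing PVs.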

Improve team name validation and handling

Validation currently allows uppercase letters, but the endpoint fails because uppercase letters aren't supported in deployment / service names.

Maybe support more characters in the team name; since we can't save the full name in the labels, use a reduced version in the deployment name.
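The constraint the endpoint hits is that Kubernetes deployment and service names must be valid RFC 1123 DNS labels: lowercase alphanumerics and `-`, starting and ending with an alphanumeric, at most 63 characters. A hypothetical pre-check mirroring that rule, so validation and the endpoint agree:

```shell
#!/bin/sh
# is_valid_team_name NAME: succeeds iff NAME is a valid DNS-1123 label,
# i.e. usable directly as a Kubernetes deployment/service name.
is_valid_team_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$'
}
```

To allow a richer character set in team names, the displayed name would have to be stored separately (e.g. in an annotation, which allows arbitrary strings, unlike labels) while a reduced DNS-safe slug is used for the deployment name.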
