
open-match's People

Contributors

0verbyte, alekser, alexey-kremsa-globant, andrewgrundy, calebatwd, clementcapart, dependabot[bot], govargo, gyusang, hazward, hsorellana, ihrankouski, jeremyje, joeholley, kemurayama, micah-gcp, mikeseese, mridulji, rib, rmb938, samix73, sawagh, sealtv, smoya, syntxerror, tbble, tscrypter, watsonso, yeukovichd, yfei1

open-match's Issues

Choose the form of the requests/responses for MMFs running the gRPC harness

It doesn't have to be the final one, but we should definitely have a pass over it before it ends up in the repo, as it doesn't conform to any of Jeremy's changes or incorporate any of Caleb's input. What we have right now is:

message MmfSpec {
    string name = 1;  // Arbitrary developer-chosen, human-readable string.  Optional.
    string host = 2;  // Host or DNS name for service providing this MMF.  Must be resolvable by the backend API.
    int32 port = 3;   // Port number for service providing this MMF.
    enum Type {
        // Generic serving types.
        GRPC = 0;
        REST = 1;
        // Knative types will be KNATIVEREST, KNATIVEGRPC, etc.
 
        // Deprecated types
        K8SJOB = 7;
    }   
    Type type = 4;    // Type of MMF
}
 
message CreateMatchRequest {
  MatchObject matchobject = 1;
  MmfSpec mmfspec = 2;
}
 
message ListMatchesRequest {
  MatchObject matchobject = 1;
  MmfSpec mmfspec = 2;
}

The MatchObject message didn't change, so I didn't include it here.

The Backend API then generates the necessary database record keys and calls the function gRPC API at the host and port it reads from the MmfSpec message:

service Function {
 
  // The assumption is that there will be one service for each MMF that is 
  // being served.  Build your MMF in the appropriate serving harness, deploy it
  // to the K8s cluster with a unique service name, then connect to that service
  // and call 'Run()' to execute the function.
  rpc Run(messages.Arguments) returns (messages.Result) {}
 
}

Its messages look like this, and definitely need to get updated to be in line with Jeremy's changes.

// Messages for gRPC-served matchmaking functions.
 
// The message for passing in the per-request identifiers
// to a matchmaking function; used so it knows which records to
// write/update in state storage.  Internal to Open Match; this is populated by
// the Backend API and sent to your gRPC-served MMF.
message Request {
    string profile_id = 1;  // Developer-chosen profile name, a copy of the MatchObject name.
    string request_id = 2;  // Per-request generated unique ID, by convention XID.
    string proposal_id = 3; // Intermediate result proposal ID. By convention, the string 'proposal.' prepended to the result_id.
    string result_id = 4;   // Final result ID. By convention, request_id.profile_id
    int64 timestamp = 5;    // Reserved for future use.
}
 
// The message for passing all the necessary arguments to a 
// matchmaking function.
message Arguments {
    Request request = 1;
    MatchObject matchobject = 2;
}
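For illustration, here is roughly how the Backend API side would call an MMF with these messages. This is a sketch only: the pb import path, generated identifiers, and the callMMF helper are placeholders, not the final generated code.

import (
	"context"
	"fmt"

	"google.golang.org/grpc"

	// Hypothetical location of the Go code generated from the protos above.
	pb "github.com/GoogleCloudPlatform/open-match/internal/pb"
)

// callMMF dials the host/port from the MmfSpec and invokes Run() with the
// per-request IDs the Backend API generated. Sketch only.
func callMMF(ctx context.Context, spec *pb.MmfSpec, match *pb.MatchObject, profileID, requestID string) (*pb.Result, error) {
	conn, err := grpc.Dial(fmt.Sprintf("%s:%d", spec.Host, spec.Port), grpc.WithInsecure())
	if err != nil {
		return nil, err
	}
	defer conn.Close()

	resultID := requestID + "." + profileID // By convention, request_id.profile_id.
	args := &pb.Arguments{
		Request: &pb.Request{
			ProfileId:  profileID,
			RequestId:  requestID,
			ProposalId: "proposal." + resultID,
			ResultId:   resultID,
		},
		Matchobject: match,
	}
	return pb.NewFunctionClient(conn).Run(ctx, args)
}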

So, let me know how we want it to look and I'll get to work on updating it for inclusion in the 0.4.0 RC.

Proposal: Hermetic and Reproducible Dockerfiles

Currently the Open Match Dockerfiles pull code from multiple sources.
Let's take backendapi as an example.

# Pulls code from internal and config from 040wip from upstream.
RUN svn export https://github.com/GoogleCloudPlatform/open-match/branches/040wip/internal
RUN svn export https://github.com/GoogleCloudPlatform/open-match/branches/040wip/config
WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/backendapi
# Copies the main file from the local repository.
COPY . .
# Pulls dependencies from head.
RUN go get -d -v

The problem with this is that the code for Open Match is pulled from multiple places: a bit from the local checkout and most from the upstream branch of Open Match. This makes development more difficult and the build output inconsistent.

I'd like to propose two ways to build Docker images.

Build within Docker

Example: https://github.com/GoogleCloudPlatform/open-match/pull/98/files
With this approach, all of the building happens within a Docker image. The approach in the example is the following:

FROM golang:1.12
ENV GO111MODULE=on

WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match
COPY . .
RUN go mod init github.com/GoogleCloudPlatform/open-match
RUN go mod tidy
RUN go mod edit -require k8s.io/[email protected]
RUN go mod vendor

This builds a baseline image that has the codebase and dependencies.

FROM open-match-base-build as builder

WORKDIR /go/src/github.com/GoogleCloudPlatform/open-match/cmd/backendapi/
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .

FROM gcr.io/distroless/static  
COPY --from=builder /go/src/github.com/GoogleCloudPlatform/open-match/cmd/backendapi/backendapi .

ENTRYPOINT ["/backendapi"]

This creates a minimal image that contains just the output binary. Everything is built within Docker.

Advantages

  • Hermetic builds: the build is isolated from developer-machine customizations that could otherwise creep in.
  • Easier for new engineers to build, since the development environment only requires Docker to be installed.

Build Locally Copy to Docker Image

An alternative approach is to build outside of docker and simply copy the binary inside.
Example: https://github.com/jeremyje/open-match/blob/buildsys/cmd/backendapi/Dockerfile

# Golang application builder steps
FROM alpine:3.8

ARG APP_ROOT=/go/src/github.com/GoogleCloudPlatform/open-match
ARG APP_NAME=backendapi

RUN apk --update add ca-certificates \
  && adduser -D openmatch \
  && mkdir -p ${APP_ROOT}/config \
  && mkdir -p ${APP_ROOT}/cmd/${APP_NAME} \
  && chown -R openmatch:openmatch /go

COPY --chown=openmatch:root ./cmd/${APP_NAME}/${APP_NAME} ${APP_ROOT}/cmd/${APP_NAME}/${APP_NAME}

USER openmatch
WORKDIR ${APP_ROOT}
ENTRYPOINT ["/go/src/github.com/GoogleCloudPlatform/open-match/cmd/backendapi/backendapi"]

This example does two things: it creates a non-root user (which we should do anyway), and it assumes the binary has already been built outside of Docker.

Advantages

  • Simpler Dockerfiles.
  • Reduces dependency downloads; go mod tidy/download is very expensive, so this also significantly speeds up CI builds.
  • More development friendly; works with build-on-save workflows.

Proposal: openmatch.dev as the main website URL.

We secured multiple domains for Open Match and we are getting close to publishing a website for it. Currently the GitHub repository is github.com/GoogleCloudPlatform/open-match. For go get this is problematic because hyphenated final path elements are not allowed; see #114 for details.

Given this restriction and the desire to keep this as consistent as possible I'd like to propose http://openmatch.dev/ as the primary URL for the website.

We can have http://open-match.dev/ as well, but we'll need to work around the trailing-hyphen problem that open-match has with Go.

PHP MMF example build issue

Ran into this today

Step 13/14 : RUN composer install
 ---> Running in 6d98c9c5c406
Do not run Composer as root/super user! See https://getcomposer.org/root for details
Composer could not find a composer.json file in /usr/src/open-match/examples/functions/php/mmlogic-simple
To initialize a project, please create a composer.json file as described in the https://getcomposer.org/ "Getting Started" section
The command '/bin/sh -c composer install' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1

Probably need a little tweak to the Dockerfile to sort it out.

Fatal error reading config file

Hi. I used Google Cloud Platform.

kubectl logs om-backendapi-568cb94d86-b5x9p
time="2019-01-07T03:52:14Z" level=fatal msg="Fatal error reading config file" app=openmatch caller=config/main.go component=config error="While parsing config: invalid character '.' looking for beginning of value"
kubectl get all
NAME                                  READY     STATUS              RESTARTS   AGE
pod/om-backendapi-568cb94d86-b5x9p    0/1       CrashLoopBackOff    2          1m
pod/om-frontendapi-85b7457b89-f2fh2   0/1       CrashLoopBackOff    1          42s
pod/om-mmforc-687f45b6ff-55kcz        0/1       ContainerCreating   0          35s
pod/om-mmlogicapi-55c6d8766d-t5m4l    0/1       ContainerCreating   0          21s
pod/redis-master-b56bcf67d-sgddc      1/1       Running             0          3h

Evaluate other config storage options

As mentioned in the docs, right now the 'common' config used by various components is in a JSON file that gets added to the component containers at build time. This was always intended as an interim solution; we should discuss and evaluate moving config to a more 'kubernetes-native' format.

Requirements

  • Needs to be accessible by all components
  • Should be able to be set using 'standard' resource/file formats like json or yaml

Nice to have

  • Would be good to have some mechanism for securing values in the config. For example, how k8s configmaps can include values from k8s secrets.

Unneeded

  • Update at runtime. Expectation is that the items stored in this config are the kind you read at startup and don't update.
  • Ability to store multiple 'configs' or config 'versioning' within OM. Expectation is that you will keep your configs alongside your code in your repo, and 'apply' them when you deploy an OM instance.

Some eval criteria

In the past we've discussed Kubernetes ConfigMaps, and those might be a good fit, but someone needs to evaluate them. This isn't an exhaustive list, just what springs to mind immediately.

  • How easy are they to manage? Do they require a new set of operator knowledge that we don't expect OM users to have yet?
  • What is the mechanism for the components 'reading' the config if it's in a ConfigMap? (One option is sketched after this list.)
  • Are components talking directly to the k8s API to get it, or is it getting sent via env vars set in the component's k8s resource definitions?
    • If talking directly to the API, are we putting a requirement on customizable components that they include the necessary pieces to read from the k8s API (modules in their language of choice, etc.)?
    • How does this impact running components outside of k8s (i.e. dev env on local machines?)
  • What happens if the config isn't there yet at startup?
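To make the 'reading the config' question above concrete, here is a minimal sketch of the ConfigMap-mounted-as-a-file option, assuming the components keep using Viper; the mount path is illustrative and would be set via a volume/volumeMount on the ConfigMap in each component's deployment.

package config

import (
	"github.com/spf13/viper"
)

// Read loads the shared config from a file. When the config lives in a
// ConfigMap, the ConfigMap is mounted into the container (e.g. at /config)
// and nothing else in the component has to change; outside of k8s the same
// code falls back to a local file, which keeps local dev working.
func Read() (*viper.Viper, error) {
	cfg := viper.New()
	cfg.SetConfigName("matchmaker_config") // matchmaker_config.yaml or .json
	cfg.AddConfigPath("/config")           // ConfigMap mount point (illustrative)
	cfg.AddConfigPath(".")                 // local checkout, for dev outside k8s
	if err := cfg.ReadInConfig(); err != nil {
		return nil, err
	}
	return cfg, nil
}

The alternative is injecting individual values as env vars from the ConfigMap in the resource definitions, which avoids file mounts but means maintaining the key list in every deployment manifest.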

Move public API protos to pkg/pb/ directory.

As our APIs stabilize and we want to start exposing them to be consumed publicly, we'll need a directory that indicates "safe to depend on". Typically this is the pkg/ directory for Go applications, as stated by golang-standards/project-layout/pkg. The official Go website does not have an opinion on this, but using a well-understood pattern is, I think, a reasonable fallback here.

At this point it's unclear when we'll reach that state, but we should at least discuss it and maybe put a doc.go file there to reserve the package for those changes. At the very latest it should be populated once we start pushing v1.0.0 release candidates.

Some examples of stuff that should go into pkg/.

  1. Publicly consumable protocol buffer definitions like frontend.proto.
  2. Useful libraries that can be used to extend Open Match.
  3. The Go implementation of the gRPC harness.

frontendclient ignores resultsChan

In frontendclient, resultsChan is only used to print:

		case a := <-resultsChan:
			pretty.PrettyPrint(a)

And the Assignment is always empty:

	player := players[0]
	if connect && player.Assignment != "" {
		err = udpClient(context.Background(), player.Assignment)

It should be:

		case a := <-resultsChan:
			pretty.PrettyPrint(a)
			udpClient(context.Background(), a.Assignment)

Create a vanity URL for the golang package, go get open-match.dev/open-match.

I'd like to propose creating a vanity url for Open Match to abstract away the location of the repository and make it easier for users to access the code.

Benefits

  1. This will be the last time users will have to change the package name. There's an imminent package name change happening anyway because of #83.
  2. Alignment with Agones.
  3. Simplified support for versioning in the future, a la gopkg.in style.
  4. The URL will align well with the website.
  5. No corporate branding within the URL itself.

Something similar to agones.dev/agones. This is also tied to the website hosting, which I'll need to discuss with the Agones folks.

The changes consumers of Open Match will need to make are the following:

import "openmatch.dev/openmatch"

Example of getting the repository:

go get -u openmatch.dev/openmatch
cd src/openmatch.dev/openmatch/

To be clear, this does not change the physical location of the repository but only adds a layer of indirection. For contributing to Open Match you'll still need to be aware of the repository path.

In addition, I'd like to also propose that we remove the hyphen from the package name because it causes issues in Go.

The following happens when you create a doc.go file with open-match as the package name.

./doc.go:18:13: syntax error: unexpected -, expecting semicolon or newline

It is possible to work around this limitation, but it means we'll need to name the package openmatch within .go files, like the following.

import openmatch "open-match.dev/open-match"

MMFs on Knative

In prep for the glorious serverless function futuretimes, we'd like to get a prototype working that runs MMFs as Knative functions instead of k8s jobs (as mentioned in the roadmap). Plan is currently to get this in for 0.4.0 RC in late Feb 2019.

Goals:

  • Stay as close to the simplicity of the current MMFs-as-jobs system as possible. From the perspective of a user developing against OM we'd like to stay as close to 'write code | trigger build | push code | run code' as possible. This may mean some extra scaffolding via gRPC harnesses.
  • Attempt to make building/runtime harnesses for both knative and k8s job MMFs. The goal will be to give a shared foundation such that the custom code we expect devs to write could easily be used in either way. (This is related to the next point.)
  • Establish a new way to pass in the environment variable values. Env vars won't be a good way to communicate with serverless functions, so we should see if we can unify both knative and k8s job MMFs to use a common way of passing in the critical state storage keys to read and write. The current prototype passes the env vars to a serverless function through an input argument, but this bifurcates the implementation into 'MMFs that read from input arguments' and 'MMFs that read from env vars'. This can probably be handled using k8s job harnesses that put env vars into an input argument, but it's open for discussion.

Current implementation thoughts:

  • Everything else in OM uses gRPC and the latest knative serving release just added limited gRPC support. We should see if we can build on this and avoid having to support multiple protocols for knative functions.
  • For now we probably don't want to deal with knative functions hosted outside of the OM k8s cluster, so for the first pass we likely want all the knative services to be run as cluster-local
  • The core contributors agree that we don't want to take on knative eventing at this time. MMF results have a very small amount of time before they become stale and if knative serving isn't ready to run a function, in most cases we would prefer to get a failure rather than queueing the request. We can adjust which MMFs are being called on-the-fly to get better results this way. (This one probably merits more discussion)

Things to evaluate:

  • We need to seriously evaluate knative build and see if it can be useful for MMFs.
  • Are different MMFs different knative services? Different knative revisions? Do we make 'families' of MMFs that are different services with multiple members of the family as different revisions?
  • Additional resource overhead of running the knative components in the cluster should be watched.
  • Istio and Knative have opinions about metrics collection (Prometheus) and tracing (Jaeger). We should check to see what if any changes need to be made to OM to fit with the systems they are using. We definitely want to avoid a situation where someone using Knative MMFs has two prometheus instances running: one for knative and one for OM.

I'm sure we'll add to this as we go and lots more thoughts will come up once 0.4.0 RC comes out with an implementation we can all mess around with, so please add your comments as you encounter things we should consider.

Fix Prometheus k8s resource ordering issue

In the part of the dev guide that covers applying the kubernetes configs for the prometheus operator, it has:

kubectl apply -f prometheus_operator.json
kubectl apply -f prometheus.json
kubectl apply -f prometheus_service.json
kubectl apply -f metrics_servicemonitor.json

A couple of users have noticed errors creating these services that go away on applying them a second time. There's probably a resource that is out of order in one of the files, so it's not getting added to the cluster before another resource that depends on it. This could probably be fixed with a few minutes of trial and error. Low priority, obviously, since everything works if the user just re-runs the offending kubectl apply, which is probably the first thing everyone will try.

Watcher exponential backoff + jitter

The watcher functions in internal/statestorage/redis/redispb/matchobject.go and internal/statestorage/redis/redispb/player.go both currently 'query every 2 seconds' and are going to generate a lot of the overall Redis load on OM. It would be better to have them set up with (configurable) exponential backoff and jitter. There are a few implementations of exponential backoff + jitter as Go modules; we need someone to evaluate them and then test the one we pick.
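For reference, the shape of the change would be roughly the following. This is a sketch only, not the actual redispb code: the durations are illustrative and would come from config, and the real watchers would keep their existing query logic inside poll().

import (
	"context"
	"math/rand"
	"time"
)

// watch polls with capped exponential backoff plus full jitter instead of a
// fixed 2-second sleep. poll returns true when it observed a change, which
// resets the backoff.
func watch(ctx context.Context, poll func() bool) {
	base := 50 * time.Millisecond
	max := 5 * time.Second
	backoff := base
	for {
		select {
		case <-ctx.Done():
			return
		default:
		}
		if poll() {
			backoff = base
			continue
		}
		// Full jitter: sleep a random duration in [0, backoff).
		time.Sleep(time.Duration(rand.Int63n(int64(backoff))))
		if backoff *= 2; backoff > max {
			backoff = max
		}
	}
}

Whichever library we pick, the base and cap should be exposed as config keys so operators can tune Redis load.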

Evaluate switching various object IDs from UUIDv4 to xid

https://github.com/rs/xid

This is an internal representation thing - it shouldn't have any impact outside of the core components, or in any custom functions/evaluators/backend clients/frontend clients. It will drop the string size for the unique IDs from 36 characters to 20 characters, a pretty significant memory savings in Redis. We'll also be passing around less data this way, and it should help with the 64-character Job name limit in Kubernetes when kicking off evaluators and MMFs as well.
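A quick illustration of the size difference, using github.com/rs/xid:

package main

import (
	"fmt"

	"github.com/rs/xid"
)

func main() {
	id := xid.New().String()
	fmt.Println(id, len(id)) // e.g. "9m4e2mr0ui3e8a215n4g" 20
	// A UUIDv4 string such as "123e4567-e89b-42d3-a456-426655440000" is 36
	// characters, so every ID stored in Redis or embedded in a k8s Job name
	// gets 16 characters shorter.
}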

Create HA Redis deployment as an option

Basic work here would be something like:

  • Put together the k8s resources necessary to stand up an HA Redis deployment (HA proxy + multiple Redis instances set up for replication, something like the 'alternative solution' in this github repo readme)
  • Test that this works correctly with OM against the 0.2.0 release. Validate that failover works.
  • If possible, see if there's a way to configure this such that k8s deployments recover downed instances and have them automatically added to the HA configuration without operator intervention.
  • If this all works, rename the k8s service that Redis lives behind from redis-sentinel to just redis and update the OM codebase to match this change (so future users won't be confused by the naming).
  • Finalize a k8s resource file that is a 'drop-in' replacement for the existing deployments/k8s/redis-deployment.json/redis-service.json and install/yaml/01-redis.yaml that stands up an HA Redis deployment that users can elect to use if they need it.

K8s manifests for a minimum permission MMFOrc SA

A minimum-permission k8s service account for creating k8s jobs for the MMFOrc would be nice for folks running in a cluster managed by another team or otherwise modified from the default SA permissions. Along with this, we would need a change to the manifests for the MMFOrc deployments to use that SA. This would avoid the following MMFOrc error message, which is caused by a default k8s SA account without job creation permission:

{"app":"openmatch","caller":"mmforc/main.go","component":"mmforc","error":"jobs.batch is forbidden: User \"system:serviceaccount:default:default\" cannot create jobs.batch in the namespace \"default\"","level":"error","msg":"Couldn't create k8s job!","time":"2018-12-21T00:13:28Z"}

Proposal: Use go mod compatible semver for release tags.

Go is now mid-way through rolling out Go modules; they'll be the default in the next version, ~6 months away. An important requirement that go mod enforces is the use of semantic version tags for releases. This means that if a game studio wants to depend on Open Match 0.4.0, the recommended way in Go 1.13+ will be to run go get -u github.com/GoogleCloudPlatform/open-match@v0.4.0. Using branches and tag formats that are not vX.Y.Z will fail. There is a fallback for repositories that do not follow this standard, which involves using the git commit hash instead, which is ugly. See https://github.com/golang/go/wiki/Modules#version-selection for more information.

I'd like to propose that we start using semver with the 0.4.0 release. This change is tied to proposal #97, which allows Open Match to conform to typical GitHub workflows.

Proposal: After 040wip release we should do development directly on master branch.

Typically in a git repository, the master branch is where active development occurs for cutting a new release. This is outlined in the GitHub flow process. In addition, once a version is cut there are a few things that should happen:

  1. The version is tagged using a semantic-version-compliant tag like v1.0.0. See proposal #110.
  2. A release branch is created for v1.0.x releases. Example, release-v1.0.0.

Challenges for the old pattern:

  1. It's surprising behavior to go get -u github.com/GoogleCloudPlatform/open-match and not really know what version you're working with.
  2. go get will only read from master. This means consumers of Open Match will be pinned to a very old version of Open Match.
  3. Other tools make assumptions about repository layout that we currently break away from, like godoc.org and goreportcard.com, which read from master, not branches. There is support for reading semver tags against the master branch now.

How do we get there?

  1. Merge in (forcefully or not) 040wip changes into master.
  2. Tag the commit with v0.4.0
  3. Create a branch called release-v0.4.0
  4. New pull requests will always be accepted into master. If a change is required in a patch release, then two PRs are necessary: one against master and one against the release branch for the hotfix.

With this change we'll be modeling the development pattern that Kubernetes uses.

Why are (*frontendAPI).DeleteRequest() and (*frontendAPI).DeleteAssignment() identical?

I noticed that the implementations of (*frontendAPI).DeleteRequest() and (*frontendAPI).DeleteAssignment() in cmd/frontendapi/apisrv/apisrv.go are identical. I am wondering what the difference between these two gRPC methods is:

service API {
    rpc DeleteRequest(Group) returns (Result) {}
    rpc DeleteAssignment(PlayerId) returns (Result) {}
}

message Group {
  string id = 1;            // By convention, string of space-delimited playerIDs
  string properties = 2;    // By convention, a JSON-encoded string
}

message PlayerId {
    string id = 1;          // By convention, a UUID
}

Group.properties is not used in DeleteRequest().

Proposal: Split up matchmaker_config.yaml

The matchmaker_config.yaml has grown to be a very large file, and Open Match is a set of different servers with different contexts. We should consider splitting up the configuration so that each server is only provided the configuration necessary to run properly.

This should improve reliability: it forces users to have a separate ConfigMap for each configuration, so a bad config change will only impact a subset of server types instead of all of them.

evaluator should remove rejected players from the 'proposed' ignorelists

Evaluator needs a pass either way; it's crufty and hasn't gotten much of an update since 0.1.0. If 0.4.0 turns it into a long-lived process then I'll take care of this then. Otherwise, just leaving this as a reminder so I don't forget to fix this. Belongs around this line - that rejected proposed ID needs to be removed from the proposed ignorelist so it will be seen by later MMFs.

Mini Match: Create a single binary that hosts Open Match.

We should have an executable binary that can basically run Open Match as a minified version for testing in other languages. It should be something simple that users can shell out to and then kill via HTTP.

$ ./minimatch -grpc_port 10000 -http_port 10001
Running Mini Match on 10000 and http://localhost:10001
$ curl -O http://localhost:10001/kill
Mini Match is closed.

Mini Match will basically use real Open Match code but with an in-memory Redis; all servers are hosted on the same port from the same binary, and configuration is derived from context rather than read from a file.
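A rough sketch of the 'kill it via HTTP' piece, just to make the shape concrete; the endpoint and ports come from the example above, while the names and wiring are illustrative only:

import (
	"fmt"
	"net/http"
	"sync"
)

// serveKillEndpoint exposes /kill on the HTTP port so test harnesses in any
// language can shut Mini Match down cleanly. main() would wait on shutdown
// and then stop the in-process gRPC/HTTP servers and the in-memory Redis.
func serveKillEndpoint(httpPort int, shutdown chan<- struct{}) {
	var once sync.Once
	mux := http.NewServeMux()
	mux.HandleFunc("/kill", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Mini Match is closed.")
		once.Do(func() { close(shutdown) })
	})
	// Errors ignored here for brevity; a real binary would log them.
	go http.ListenAndServe(fmt.Sprintf(":%d", httpPort), mux)
}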

Create a Security Best Practices/Design Document for Open Match

As Open Match expands, we'll need to provide guidance to customers on how to deploy it securely. More importantly, we'll need best practices so we can make security-informed decisions during our future designs and make sure that OM is secure by default.

I'll publish a draft for community review/edits once it's ready. As a preview, it'll simply be an index of all the best practices and recommendations for the project's dependencies and how they relate to the project.

Move builds to a 'base' image and associated changes

Since we want public images in 0.3.0, we'd like to start the work to un-jank-ify the Dockerfiles and cloudbuild.yaml files. This should be doable by anyone with a decent understanding of docker, cloudbuild, and kubernetes (to test that the results work correctly). Basically:

  1. First thing to know: this should probably come after the issue to evaluate using configmaps for the OM config. Removing the need to include the config file in the images will be necessary to keep from having to have base images for languages other than golang.
  2. For all the golang container images that use internal modules (this will be all components, though not necessarily all examples or test clients), make a cloudbuild.yaml and Dockerfile that build off the golang base image and add the internal directory. After making this new 'open match base image', the Dockerfile and cloudbuild.yaml file to build it should be the only ones left in the repo root, I think.
  3. Refactor all the other cloudbuild.yaml files and Dockerfiles in the repo root to be in the directories of the associated components, examples, and test clients.
  4. [Optional] Probably have some make script or the like to run 'all' the builds that can be run from the repo root.

Proposal: Use Go Modules

Go modules were launched in Go 1.11, have matured in 1.12, and are being pushed as the new way to manage dependencies in Go.

What's great about them is that they're integrated into the go tool and incorporate all the learnings of the many dependency management tools Go has had over the years.

I think the main benefit here will be improved reproducibility of builds since all dependencies are pulled at predefined versions.

It also looks like Agones is moving towards Go modules as well. googleforgames/agones#625

Frontend API Kubernetes Ingress guide

Had three different users ask about proper configuration of a Kubernetes nginx ingress for the Frontend API. It would be great if someone could add a quick guide to the important annotations and such required to get this set up correctly for the HTTP/2 gRPC Frontend API service. It could optionally cover more broadly the topic of making the Frontend API public, and should definitely cover which use cases require the (relatively complex) nginx ingress over a simple public service configured using type: LoadBalancer.

Add support for MMFs set up as a gRPC service.

Currently, Open Match can be configured with MMFs as containers that get run as K8s jobs. This mechanism does not scale well and will be deprecated. The recommended mechanism would be to host the MMF as a gRPC service and to pass the connection details of this service along to Open Match.

Open Match will prescribe the service proto, create clients to this service, and call into it to execute MMFs.

Open Match will also provide a gRPC harness that handles the server setup bit, with instructions to stand up a gRPC server and samples of implementing the MMF that can plug into this harness.
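To make the developer-facing side concrete, here is a sketch of what plugging an MMF into such a harness might look like, reusing the Function service sketched earlier on this page. The pb package path and type names are assumptions, not a final API.

import (
	"context"

	pb "github.com/GoogleCloudPlatform/open-match/internal/pb" // hypothetical path
)

// Sketch only: the developer implements Run(); the provided harness owns the
// grpc.Server setup, listening, and registration boilerplate.
type myMatchFunction struct{}

func (myMatchFunction) Run(ctx context.Context, args *pb.Arguments) (*pb.Result, error) {
	// args.Request carries the state storage keys to read/write, and
	// args.Matchobject carries the profile this MMF should try to fill.
	// ... query player pools, build a proposal, write it back ...
	return &pb.Result{}, nil
}

// The harness would do roughly this on the developer's behalf:
//   s := grpc.NewServer()
//   pb.RegisterFunctionServer(s, myMatchFunction{})
//   s.Serve(lis)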

Catch exception when pools are empty in python 3 MMF example

Retrieving pool 'defaultPool'.
'defaultPool': count 000000 | elapsed 0.002
Retrieving pool 'supportPool'.
'supportPool': count 000000 | elapsed 0.002
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/random.py", line 262, in choice
    i = self._randbelow(len(seq))
  File "/usr/local/lib/python3.5/random.py", line 239, in _randbelow
    r = getrandbits(k)          # 0 <= r < 2**k
ValueError: number of bits must be greater than zero

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./harness.py", line 80, in <module>
    results = mmf.makeMatches(profile_dict, player_pools)
  File "/usr/src/open-match/examples/functions/python3/mmlogic-simple/mmf.py", line 34, in makeMatches
    player['id']  = random.choice(list(player_pools[player['pool']]))
  File "/usr/local/lib/python3.5/random.py", line 264, in choice
    raise IndexError('Cannot choose from an empty sequence')
IndexError: Cannot choose from an empty sequence

Should catch and exit cleanly.

Local development instructions

For folks that need local development of Open Match, instructions and possibly a Makefile for Kubernetes in Docker (like the one Agones has) or even just minikube would be great. Just instructions on getting everything compiled and deployed into a local environment is all that's needed, I think.

frontend.DeletePlayer should remove player from all ignorelists

This should probably be added in deletePlayer as another async call after the call to playerindices.DeleteMeta on line 180. Otherwise we could get into a situation where the player has deleted their object via the frontend API but still can't get into a game (because something put them on an ignorelist that never got cleaned up).

If someone wants to submit a PR it can go in as a bugfix and make 0.3.5, otherwise I'll put it in 0.4.0 and backport it later.
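Roughly the shape of the fix; treat this as a behavioral sketch with raw redigo calls and assumed sorted-set ignorelists rather than the actual implementation, since the real code would go through OM's Redis helper packages:

import (
	"fmt"

	"github.com/gomodule/redigo/redis"
)

// deleteFromAllIgnorelists removes the player from every configured
// ignorelist so a deleted player can't be stuck on a stale ignorelist.
// Sketch only: the ignorelist names would come from config.
func deleteFromAllIgnorelists(pool *redis.Pool, ignorelists []string, playerID string) error {
	conn := pool.Get()
	defer conn.Close()
	for _, il := range ignorelists {
		if _, err := conn.Do("ZREM", il, playerID); err != nil {
			return fmt.Errorf("failed to remove %v from ignorelist %v: %v", playerID, il, err)
		}
	}
	return nil
}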

Vendor-locking and new golang.org/x/sys/unix build error

Ran into some issues building Dockerfile.mmforc. Looks like: grpc/grpc-go#2181

Step 16/17 : RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo .
 ---> Running in 209353b544c8
# google.golang.org/grpc/internal/channelz
/go/src/google.golang.org/grpc/internal/channelz/types_linux.go:41:15: undefined: unix.GetsockoptLinger
/go/src/google.golang.org/grpc/internal/channelz/types_linux.go:44:15: undefined: unix.GetsockoptTimeval
/go/src/google.golang.org/grpc/internal/channelz/types_linux.go:47:15: undefined: unix.GetsockoptTimeval

We currently don't do vendor-locking. We could update the Dockerfile to check out the particular commit of golang.org/x/sys/unix like it does for k8s.io, but I think we should go ahead and bite the bullet and investigate doing dependency locking.
