
go-scim's People

Contributors

afedyk-sugarcrm, imulab, johejo, requaos


go-scim's Issues

Failed to delete non-last member of group

When I create a new group and add 3 members, the content I store in the database is as follows:

{
	"schemas": ["urn:ietf:params:scim:schemas:core:2.0:Group"],
	"id": "a608ff25-b6f8-4bd6-a78a-771f8e039088",
	"externalId": "aaaaaa",
	"meta": {
		"resourceType": "Group",
		"created": "2023-04-20T15:33:35",
		"lastModified": "2023-04-20T16:10:58",
		"location": "/Groups/a608ff25-b6f8-4bd6-a78a-771f8e039088",
		"version": "W/\"02915506ddc93d91d2929c82b582749328925188\""
	},
	"displayName": "Group1DisplayName2",
	"members": [{
		"value": "1f01127a-076e-400c-b4d9-a30bb63e8c30"
	}, {
		"value": "fdab798c-8cea-4ab1-8c48-68b1969a1b16"
	}, {
		"value": "3d1d7217-e900-4ec9-a9ea-bb58c8beade6"
	}]
}

Next, when I try to delete the first and second members of the group, an error occurs as follows:

{
    "schemas": [
        "urn:ietf:params:scim:api:messages:2.0:Error"
    ],
    "status": 400,
    "scimType": "noTarget",
    "detail": "noTarget: no target at index '2' from 'members'"
}

The request body looks like this:

{
	"schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
	"Operations": [{
		"op": "remove",
		"path": "members[value eq \"fdab798c-8cea-4ab1-8c48-68b1969a1b16\"]"
	}]
}

Removing the last member, however, succeeds.

How can I fix this? Deleting a member only works when it is the last one. Thank you so much!

compose missing linux_amd64/scim folder

make docker compose results in this error:

 => ERROR [stage-1 2/3] COPY --from=builder /build/scim/bin/linux_amd64/scim /usr/bin/scim           0.0s 
------                                                                                                    
 > [stage-1 2/3] COPY --from=builder /build/scim/bin/linux_amd64/scim /usr/bin/scim:                      
------                                                                                                    
failed to compute cache key: "/build/scim/bin/linux_amd64/scim" not found: not found
make: *** [docker] Error 1

Azure AD Patch request issues

Hey, Azure posts the following when trying to PATCH a user:

{
"schemas":["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
"Operations":[{
        "op":"Add","path":"urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager",
        "value":"45619541-95de-4d5e-9872-571b5d2c5577"}]
}

We are unable to parse it and get the following error from go-scim:

{
    "schemas": [
        "urn:ietf:params:scim:api:messages:2.0:Error"
    ],
    "status": 400,
    "scimType": "invalidPath",
    "detail": "invalidPath: error compiling path"
}

I think the issue might be that manager is a complex attribute under the enterprise schema, but is sent over with a simple ID.

Do you know if this is a bug in go-scim or if we should format the PATCH request differently? Can you provide an example of what this request should look like so it works in go-scim?

Best Regards!
Plamen

Multiple Attributes Selection

Go-Scim can't seem to handle multiple attributes selection. A request like:

GET http://{{Server}}{{Port}}/{{Api}}/Users?attributes=userName,emails

Returns:

{
"schemas": [
"urn:ietf:params:scim:api:messages:2.0:Error"
],
"status": 400,
"scimType": "invalidPath",
"detail": "invalidPath: error compiling path"
}

It breaks on the comma ',' between userName and emails.
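
A minimal sketch, assuming the parameter is split before path compilation (this is not go-scim's actual parsing code), of treating attributes as a comma-separated list:

package main

import (
	"fmt"
	"strings"
)

// splitAttributes treats the "attributes" query parameter as a comma-separated
// list so each entry can be compiled as a path on its own. Illustrative only;
// it ignores the corner case of commas inside bracketed filters.
func splitAttributes(raw string) []string {
	var paths []string
	for _, p := range strings.Split(raw, ",") {
		if p = strings.TrimSpace(p); p != "" {
			paths = append(paths, p)
		}
	}
	return paths
}

func main() {
	fmt.Println(splitAttributes("userName,emails")) // [userName emails]
}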

This test is from the Microsoft SCIM Reference Code Postman test collection: https://github.com/AzureAD/SCIMReferenceCode/wiki/Test-Your-SCIM-Endpoint

Support the enterprise extension by default

As Microsoft AD use cases seem popular with this library, it would be beneficial to support the enterprise extension out of the box in the showcase project so users can test it out.

Error on startup

I'm trying to get this running, but I can't seem to get it to start up.

I have a MongoDB 4.0.8 server running locally on localhost:27017,

but when I run the SCIM server I get:

[thawkins@localhost example]$ go run server.go
panic: no reachable servers

goroutine 1 [running]:
github.com/davidiamyou/go-scim/handlers.ErrorCheck(...)
/home/thawkins/go/src/github.com/davidiamyou/go-scim/handlers/shared.go:55
main.initConfiguration()
/home/thawkins/go/src/github.com/davidiamyou/go-scim/example/server.go:69 +0x19ce
main.main()
/home/thawkins/go/src/github.com/davidiamyou/go-scim/example/server.go:102 +0x26
exit status 2

Any ideas?

Fedora 29, using the latest Go.

MongoDB operator constant is wrong, and fixing it breaks tests


This should be mongoNot = "$not", rather than mongoNot = "$nor".

I'm working with a fork of this, and changing it caused some tests to fail. It makes me concerned that the tests are wrong, since all the functions involved say not rather than nor. I don't have time to fix it and submit a PR here, unfortunately.
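
For reference, the two operators are not interchangeable in MongoDB; a small illustration using mgo-style bson.M maps (the field and value below are made up):

package main

import (
	"fmt"

	"gopkg.in/mgo.v2/bson" // bson.M exists in the official mongo-driver as well
)

func main() {
	// $not negates an operator expression on a single field...
	notQuery := bson.M{"userName": bson.M{"$not": bson.M{"$eq": "david"}}}
	// ...while $nor takes a list of whole query documents and matches documents
	// that satisfy none of them, so the two are not interchangeable.
	norQuery := bson.M{"$nor": []bson.M{{"userName": "david"}}}
	fmt.Println(notQuery, norQuery)
}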

ResourceType and Schema should be treated as Resource as well

Currently, we treat ResourceType and Schema (by inclusion, also Attribute) as ordinary Go structures. This is for good reason. As these data models are frequently used as metadata, modelling them in a dynamic data structure like Resource would have a negative impact on performance.

However, this creates a few problems when it comes to rendering for endpoints like /ResourceTypes, /ResourceType/:id, /Schemas and /Schema/:id. The responses are actually application/scim+json formatted, which is rather awkward to produce using Go's native JSON mechanism, but maps rather well onto the SCIM mechanism in /v2/pkg/json.

Hence, we propose the following enhancement:

  • Add spec.internal.MetaSchema variable, which is the schema that describes Schema
  • Add spec.internal.ResourceTypeSchema variable, which is the schema that describes ResourceType.
  • Delegate JSON decoding to /v2/pkg/json for ResourceType and Schema through Resource, and use the navigator to transfer values to the Go structure (in a hard-coded way)
  • Maintain reference to the original Resource that holds the value in ResourceType and Schema.
  • Delegate JSON encoding to /v2/pkg/json for ResourceType and Schema, through the internal Resource reference.
  • Add GetRawResource method to both ResourceType and Schema that returns the reference to the internal Resource.

InvalidDBRef

What did I do:

curl --location --request PATCH 'http://localhost:5000/Groups/f709a088-20ef-4f31-8986-a72c3e65a4c6' \
--header 'Content-Type: application/javascript' \
--data-raw '{
  "schemas":[
    "urn:ietf:params:scim:api:messages:2.0:PatchOp"
  ],
  "Operations":[
    {
    "op":"add",
    "path":"members",
    "value":[{"value":"d3f3bb44-4583-41ba-920f-3319266ed435"}]
    }
  ]
}'

What did you expect:
The groupsync worker to add the group to the user.

What did you get:

{"level":"error","error":"(InvalidDBRef) The DBRef $ref field must be followed by a $id field","resourceId":"d3f3bb44-4583-41ba-920f-3319266ed435","resourceVersion":"W/\"08f6f4be51ef1c744e1c809dc7ca8b8c4136fea2\"","time":"2020-01-14T15:16:35Z","message":"failed to replace resource in mongo"}
{"level":"error","error":"(InvalidDBRef) The DBRef $ref field must be followed by a $id field","message":"&{GroupID:f709a088-20ef-4f31-8986-a72c3e65a4c6 MemberID:d3f3bb44-4583-41ba-920f-3319266ed435 Trial:1}","time":"2020-01-14T15:16:35Z","message":"error encountered while syncing user group, will requeue"}
{"level":"error","limit":1,"message":"{\"group_id\":\"f709a088-20ef-4f31-8986-a72c3e65a4c6\",\"member_id\":\"d3f3bb44-4583-41ba-920f-3319266ed435\",\"trial\":2}","trial":2,"time":"2020-01-14T15:16:35Z","message":"message had exceeded trial limit, will drop message"}

Redis support

@imulab I would like to work on an alternative to mongo by implementing a redis module that satisfies the same interface.
This thread should serve the purpose of discussing the advantages and/or disadvantages of doing this.
My motivation is based on past experience in maintaining and managing mongo clusters at production scale. I still have nightmares...

Visitor for ByPropertyToByResource adapter skips remaining ByProperty filters if one is not supported

Problem code:

// Visit performs a DFS visit on the resource and sequentially invokes the ByProperty filters on each visited property
// in the resource. Any visit or filtering error is returned.
func Visit(ctx context.Context, resource *prop.Resource, filters ...ByProperty) error {
	n := flexNavigator{stack: []prop.Property{resource.RootProperty()}}
	v := syncVisitor{
		resourceNav: &n,
		visitFunc: func(resourceNav prop.Navigator, referenceNav prop.Navigator) error {
			for _, filter := range filters {
				if !filter.Supports(resourceNav.Current().Attribute()) {
					return nil
				}
				if err := filter.Filter(ctx, resource.ResourceType(), resourceNav); err != nil {
					return err
				}
			}
			return nil
		},
	}
	return resource.Visit(&v)
}

In the following loop, returning nil when Supports returns false exits the visitFunc entirely:

for _, filter := range filters {
	if !filter.Supports(resourceNav.Current().Attribute()) {
		return nil
	}
	if err := filter.Filter(ctx, resource.ResourceType(), resourceNav); err != nil {
		return err
	}
}

continue should be used instead to respect the remaining filters.
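
A corrected version of the loop with the continue suggested above:

for _, filter := range filters {
	if !filter.Supports(resourceNav.Current().Attribute()) {
		continue // skip only this filter; the remaining filters still run
	}
	if err := filter.Filter(ctx, resource.ResourceType(), resourceNav); err != nil {
		return err
	}
}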

Support for Custom Fields

Hi @imulab

I am looking to add support for custom fields, such as this:

        {
            "id": "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:customField1",
            "name": "customField1",
            "type": "string",
            "_index": 0,
            "_path": "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:customField1"
        },
        {
            "id": "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:customField2",
            "name": "customField2",
            "type": "string",
            "_index": 0,
            "_path": "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:customField2"
        },

Would you recommend adding this to one of the existing schemas (user or enterpriseUser)? Should I create a separate custom schema for this instead? What other changes would I need to make to support this?

Thank you!

ResourceTypes endpoint responds with incorrect body

I am using the built-in ResourceTypes handler. It is hardcoded to respond with a plain list of the resource types. According to the spec (https://tools.ietf.org/html/rfc7644#section-4) this is wrong. Am I missing something?

It should be a list response:

  {
    "totalResults":2,
    "itemsPerPage":10,
    "startIndex":1,
    "schemas":["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
    "Resources":[{
      "schemas": ["urn:ietf:params:scim:schemas:core:2.0:ResourceType"],
      "id":"User",
      "name":"User",
      "endpoint": "/Users",
      "description": "User Account",
      "schema": "urn:ietf:params:scim:schemas:core:2.0:User",
      "schemaExtensions": [{
        "schema":
          "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User",
        "required": true
      }],
      "meta": {
        "location":"https://example.com/v2/ResourceTypes/User",
        "resourceType": "ResourceType"
      }
    },
   {
     "schemas": ["urn:ietf:params:scim:schemas:core:2.0:ResourceType"],
     "id":"Group",
     "name":"Group",
     "endpoint": "/Groups",
     "description": "Group",
     "schema": "urn:ietf:params:scim:schemas:core:2.0:Group",
     "meta": {
       "location":"https://example.com/v2/ResourceTypes/Group",
       "resourceType": "ResourceType"
     }
   }]
  }

Update and search operations fail when attributes contain dot

Hi @davidiamyou,

I noticed some issues when using attributes that contain a dot in their name (i.e. when I use the URN for extension attributes, treating urn:ietf:params:scim:schemas:extension:enterprise:2.0:User like a complex attribute).

MongoDB does not allow the . within a key name, so:

  • Update fails if one of the attributes of your resource contains a .
  • Search fails if you are searching for an attribute that contains a .

I think a solution would be to set up a mapping between the attribute's actual name and the key used within the DB (avoiding the use of . in the DB key string for MongoDB; maybe another DB could have issues with other characters).

Could "assist" custom property suitable to support this mapping?

Have you some tips?
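
A minimal sketch of that mapping idea (the placeholder character is an arbitrary choice for illustration, not something go-scim does today):

package main

import (
	"fmt"
	"strings"
)

// '.' is not allowed in MongoDB key names, so it is swapped for a placeholder
// on the way into the database and restored on the way out.
const dotPlaceholder = "\uFF0E" // fullwidth full stop

func toMongoKey(attrName string) string { return strings.ReplaceAll(attrName, ".", dotPlaceholder) }

func fromMongoKey(key string) string { return strings.ReplaceAll(key, dotPlaceholder, ".") }

func main() {
	name := "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
	key := toMongoKey(name) // the "2.0" segment contains the dot MongoDB would reject
	fmt.Println(key, fromMongoKey(key) == name)
}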

Why is filter query param required?

Am I reading the spec wrong? This project requires a filter query param on the GET /Users endpoint, but the spec says: "A query against a server root indicates that all resources within the server SHALL be included, subject to filtering."

Another question about attribute syntax.

Hi there, sorry if I am spamming you. I'm seeing the following 2 test failures in https://github.com/AzureAD/SCIMReferenceCode/wiki/Test-Your-SCIM-Endpoint

The below 2 URIs

http://localhost:5001//Users?attributes=emails%5Btype+eq+%22work%22%5D
http://localhost:5001//Users?attributes=emails%5Bvalue+eq+%22emailName357%22%5D

result in

{
    "schemas": [
        "urn:ietf:params:scim:api:messages:2.0:Error"
    ],
    "status": 400,
    "scimType": "invalidPath",
    "detail": "invalidPath: error compiling path"
}

It is not clear to me whether these are valid requests. Replacing the + chars with spaces also breaks the same way.

http://localhost:5001/Users?attributes=emails.type eq "work"
http://localhost:5001/Users?attributes=emails[value eq "emailName357"]

however these work fine:

Are these valid use cases, or are these bad tests? Also, is it likely for these to be used at all if my SCIM endpoint is only to receive data from Azure AD? The only reason I could think of for these complex queries to be required is if they are somehow used for syncing state between the SCIM API and Azure AD.

V2

v1 of go-scim was my first attempt at approaching an open source project and it was a great experience. However, to be honest, I found myself not entirely up to the task of maintaining this version, as it has several problems:

  • Reflection: extensive use of reflection makes the code harder to maintain
  • Resource model: the resource model was decoupled from the schema attributes, making it awkward to walk the tree with the assistance of attributes. This affects all further operations (i.e. validation, modification) on the resource.
  • Filter/Path: filter and path parsing could be improved to be more dependable; the URN prefix feature is somewhat hardcoded.
  • JSON: JSON serialization is based on reflection while we already have the type information available in the schema; JSON deserialization relies on an intermediate data structure.
  • BSON: like JSON, serialization to MongoDB also relies on the intermediate data type of bson.M, which is essentially a map. MongoDB then picks it up and encodes it to bytes using reflection. Again, it is wasteful to do the same thing twice.
  • SPI: I like the idea of using SPI, and that this library serves as a foundation for server implementations rather than being an implementation itself. However, not providing a realistic (more than a toy) initial setup makes it rather inconvenient for people to test and customize this software. I would like to keep SPIs as extension points, but provide workable defaults out of the box for v2.

Comments and ideas are welcome.

Should AzureAD Tests Pass?

Hello!

Have you seen this test suite for SCIM endpoints? I ran it and got a fair amount of failing tests, around 50%:

https://github.com/AzureAD/SCIMReferenceCode/wiki/Test-Your-SCIM-Endpoint

Digging into the failures, some of them do not seem very significant, and changes in route/response structure, etc. make the tests pass. I'm not sure how significant this is, and whether it might impede actual Azure AD integration.

Which SCIM providers have you tested this with, or know of it running successfully with? GSuite, Okta?

Thank you so much!
Plamen

Is NoSQL Injection Possible?

Question: do we know if the MongoDB Golang driver takes care of the potential for NoSQL injection for us? Or is that something we need to account for?

Alternative way to approach schema assisted validation, rendering, etc

The current approach of putting everything into what is essentially a map provides great extensibility, but it is too generic. In most use cases, we still want to access the core attributes as is, but with some sort of schema extensions.

Should we explore a way to assist processing with Go Tags and also allow embedding to enable user-defined extensions?
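
A rough sketch of what that could look like (the scim struct tag and the field layout here are assumptions for discussion, not an existing go-scim feature):

package main

// Core attributes become plain struct fields processed via tags, while
// user-defined extensions hang off a generic map.
type Email struct {
	Value   string `scim:"value"`
	Primary bool   `scim:"primary"`
}

type User struct {
	ID         string                            `scim:"id"`
	UserName   string                            `scim:"userName"`
	Emails     []Email                           `scim:"emails"`
	Extensions map[string]map[string]interface{} `scim:"-"`
}

func main() {}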

Error locating old resource for replacement in DB interface

Description

The Replace method of the DB interface only allows a single *prop.Resource parameter, which is supposed to be the updated resource. This does not allow the implementation to locate/lock the old resource in the database, which was retrieved at the start of the replacement API call for reference usage, because meta.version was changed prior to calling Replace.

Proposal

Change the Replace method signature to Replace(ctx context.Context, ref *prop.Resource, replacement *prop.Resource) to allow the implementation to see the reference resource retrieved earlier.

Thanks to @requaos for noticing this issue and providing an initial solution.

Lessons learnt so far and refactor work

Description

Through implementing the server module, I got the chance to experience the API from the core module and the protocol module, and I have a few reflections.

Project structure

The core module and protocol module are, most of the time, going to be referenced and used together for any useful APIs. The separation of these two modules is a bit unwieldy, and it would probably introduce more headaches when their module versions diverge.

SPI

The idea of SPI is nice. It allows users to use their own implementations to plug in the functions required by the API. However, through the implementation of the server module, I feel that a good SPI design is quite hard. There are always some edge cases that a standard SPI interface won't be able to cover. For instance, when designing the http.Request SPI, specifically the PathParam method, I am really assuming the usage pattern of the router library I had in mind. Such an SPI will probably introduce more problems down the road.

Event callbacks

The event callback mechanism in the service implementations has a similar defect. This mechanism is supposed to provide a hook for developers to run more customized processing logic at specific events. For instance, if developers decide on an architecture where the Group resource and the User.groups attribute achieve eventual consistency, then the callback is a good place to submit a sync job to a middleware after the Group resource has been changed/persisted. However, this has two problems.

  • One is that the implementations must not be blocking, although the mechanism really has no way of enforcing that.
  • Another is that I am not sure how any errors should be treated. Should the error be returned to the caller, or should it be ignored since persistence was already successful?

Resource event propagation

The propagation mechanism achieves its goal, but the design is troublesome. We are forced to maintain a two-way reference in the resource structure (i.e. between parent and child properties), and there is no way to NOT react to an event once the subscribers are in place.

Proposed solution

In this refactoring effort, we propose the following solutions to the above issues:

The pkg module and change of entry point

Merge the majority of the core module and the protocol module into a pkg module. The top-level entry point would be services instead of handlers. The services can be used directly by user implementations of handlers, which can choose whatever library the user sees fit. The pkg module would be independently developed and versioned.

This also addresses the SPI issue. Now, since users build their own handlers, they are no longer restricted by the SPI. HTTP router libs and logging libs can be used directly without having to wrap them in an SPI implementation.

Services exposed as interfaces

SCIM services will be exposed as interfaces instead of plain structures. This allows users to create facade implementations that wrap them and provide additional logic based on the results from the wrapped service.

Users have full control over whether that logic is blocking or non-blocking, and can decide what to do with any errors.

Enhanced navigators

Because navigators actually maintain the stack from the root of the resource down to the property of interest, they are the perfect place to implement the event mechanism. Navigators will expose Add, Replace and Delete methods which delegate to those of the Property, but perform event notification along the stack afterwards.

When an operation does not wish to cause event propagation, it can simply use nav.Current().Add() instead of nav.Add(), for instance.

The resulting benefit is a simpler resource structure. Properties no longer have to maintain a reference to their container.
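
A minimal sketch of the idea, with stand-in types that are illustrative rather than go-scim's actual API:

package main

import "fmt"

// Property and Navigator here are illustrative stand-ins, not the library's types.
type Property struct {
	Name   string
	Values []interface{}
}

func (p *Property) Add(v interface{}) { p.Values = append(p.Values, v) }

func (p *Property) Notify(source string) { fmt.Printf("%s notified of change at %s\n", p.Name, source) }

type Navigator struct{ stack []*Property }

func (n *Navigator) Current() *Property { return n.stack[len(n.stack)-1] }

// Add applies the mutation to the current property, then notifies every property
// on the navigator's own stack from the leaf back up to the root. Calling
// n.Current().Add(...) directly skips the notification step entirely, so the
// properties themselves no longer need a back-reference to their container.
func (n *Navigator) Add(v interface{}) {
	n.Current().Add(v)
	for i := len(n.stack) - 1; i >= 0; i-- {
		n.stack[i].Notify(n.Current().Name)
	}
}

func main() {
	root := &Property{Name: "user"}
	emails := &Property{Name: "emails"}
	nav := &Navigator{stack: []*Property{root, emails}}
	nav.Add("foo@example.com")
}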

Other improvements

  • Parameterized annotations: annotations accept parameters. Subscribers and Filters can use these parameters to customize their behaviour. This would reduce the number of configurations and annotations needed.
  • Use the Go 1.13 error wrapping feature; use errors.Unwrap to determine the root cause.
  • Address #37 by introducing a ref parameter in the DB.Replace method.
  • Largely simplify the synchronizing visitor used by filters. Implement a navigator which pushes an OutOfSync marker property onto the stack when going out of sync, freeing the visitor from the complexity of maintaining both stacks and sync status.

invalidValue: 'schemas' is required - when 'schemas' is present

Hi, I'm just trying to give this library a quick spin with Okta (syncing users and groups to MongoDB). I just did the quick make docker compose and then started having Okta send requests. Creating users and groups seems to work, but a few of the PATCH requests are failing.

After successfully sending a PATCH operation to add members to the group, Okta then does a GET on the group, followed by this PATCH:

{
    "schemas": [
        "urn:ietf:params:scim:api:messages:2.0:PatchOp"
    ],
    "Operations": [
        {
            "op": "replace",
            "value": {
                "id": "79fe65ca-65db-4840-85a9-12283543b2b8",
                "displayName": "Jeff.Scim.Test"
            }
        }
    ]
}

It's unclear to me why Okta thinks it needs to patch the group (because the name of the group has not changed). Regardless, I'm getting this confusing response:

{
    "schemas": [
        "urn:ietf:params:scim:api:messages:2.0:Error"
    ],
    "status": 400,
    "scimType": "invalidValue",
    "detail": "invalidValue: 'schemas' is required"
}

But it looks like the request includes the "schemas" array and it's populated with a value. Any insight into where this error is coming from and what it's trying to tell me?

Thanks!

Docker build does not use cache for dependency downloading

The way the Dockerfile is currently written doesn't leverage caching, because the dependency download step runs in the same layer as the binary build step. Using the Dockerfile below will improve the dev experience.

##########################################################################
# GO BUILDER
##########################################################################
FROM golang:1.13-buster AS builder

WORKDIR /build/scim
COPY go.mod ./
COPY pkg/v2/go.mod ./pkg/v2/go.mod
COPY mongo/v2/go.mod ./mongo/v2/go.mod
COPY Makefile ./
RUN make deps

COPY . ./
RUN make binary

##########################################################################
# FINAL IMAGE
##########################################################################
FROM debian:buster-slim

# copy binary
COPY --from=builder /build/scim/bin/linux_amd64/scim /usr/bin/scim

# copy public files
COPY --from=builder /build/scim/public /usr/share/scim/public

# run
CMD ["/usr/bin/scim"]

Schema Path typo

The schema for the addresses sub-attributes refers to 'photos' under _path; it should be 'addresses'.

      "id": "urn:ietf:params:scim:schemas:core:2.0:User:addresses",
      "name": "addresses",
      "type": "complex",
      "multiValued": true,
      "_index": 116,
      "_path": "addresses",
      "_annotations": {
        "@AutoCompact": {},
        "@ExclusivePrimary": {},
        "@ElementAnnotations": {
          "@StateSummary": {}
        }
      },
      "subAttributes": [
        {
          "id": "urn:ietf:params:scim:schemas:core:2.0:User:addresses.formatted",
          "name": "formatted",
          "type": "string",
          "_index": 0,
          "_path": "photos.formatted"
        },
        {
          "id": "urn:ietf:params:scim:schemas:core:2.0:User:addresses.streetAddress",
          "name": "streetAddress",
          "type": "string",
          "_index": 1,
          "_path": "photos.streetAddress",
          "_annotations": {
            "@Identity": {}
          }
        },

DateTime parsing is incorrect

SCIM defines its dateTime field (https://datatracker.ietf.org/doc/html/rfc7643#section-2.3.5) as following the W3C XSD dateTime type (https://www.w3.org/TR/xmlschema11-2/#dateTime). Currently, dateTime validation fails for values with explicit timezone offsets such as +00:00. Modifying the fromISO8601 method as below seems to do the trick.

func (p *dateTimeProperty) fromISO8601(str string) (time.Time, error) {
	const DateTimeFormat     = "2006-01-02T15:04:05.999999999-07:00" // explicit offset, e.g. +00:00
	const DateTimeNoTimezone = "2006-01-02T15:04:05.999999999"       // no timezone designator
	const DateTimeUTC        = "2006-01-02T15:04:05.999999999Z"      // UTC designator 'Z'
	var (
		val time.Time
		err error
		z   = str[len(str)-6] // '+' or '-' when an explicit offset is present
		utc = str[len(str)-1] // 'Z' when the UTC designator is used
	)

	if z == '+' || z == '-' {
		val, err = time.Parse(DateTimeFormat, str)
	} else if utc == 'Z' {
		val, err = time.Parse(DateTimeUTC, str)
	} else {
		val, err = time.Parse(DateTimeNoTimezone, str)
	}

	if err != nil {
		return time.Time{}, fmt.Errorf("%w, value for '%s' does not conform to ISO8601", spec.ErrInvalidValue, p.attr.Path())
	}

	return val, nil
}
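
As a quick standard-library spot check, the explicit-offset layout used above does accept a +00:00 timezone, which is the case reported to fail:

package main

import (
	"fmt"
	"time"
)

func main() {
	// The explicit-offset layout from the sketch above accepts a "+00:00" zone.
	const layout = "2006-01-02T15:04:05.999999999-07:00"
	t, err := time.Parse(layout, "2023-04-20T15:33:35+00:00")
	fmt.Println(t, err) // parses successfully, err is nil
}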

Validation fails in case of integer attributes

Hi @davidiamyou,

I noticed that if I create a resource with some integer attributes, validation fails.
This issue seems related to how JSON numbers are represented: all numbers in JavaScript are floating point, according to IEEE 754.

So when you perform JSON unmarshalling, the type of integer attributes will be float64. And when you perform validation with reflection, comparing this misleading type with the one declared in the schema, it always fails.
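
A small demonstration of the underlying behaviour with the standard library:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// encoding/json decodes any JSON number into float64 when the destination is
	// interface{}, so a reflection-based check against an "integer" schema type
	// sees float64 rather than an integer kind.
	var doc map[string]interface{}
	_ = json.Unmarshal([]byte(`{"minLength": 8}`), &doc)
	fmt.Printf("%T\n", doc["minLength"]) // float64
}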

400 returned instead of 409 on non-unique userName

When a non-unique userName is used in a POST to /Users, the code returns 400, but should return 409 as per the RFC: https://tools.ietf.org/html/rfc7644

If the service provider determines that the creation of the requested
resource conflicts with existing resources (e.g., a "User" resource
with a duplicate "userName"), the service provider MUST return HTTP
status code 409 (Conflict) with a "scimType" error code of
"uniqueness", as per Section 3.12.

go-scim returns this error message with the 400: {"level":"error","error":"invalidValue: value of 'userName' is not unique","time":"2020-05-01T13:37:36-04:00","message":"error when creating resource"}
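
A sketch of the expected mapping (ErrUniqueness and writeError are hypothetical names, not go-scim's actual error type or handler code; the behaviour follows the RFC quote above):

package main

import (
	"encoding/json"
	"errors"
	"net/http"
)

// ErrUniqueness stands in for whatever error the create service surfaces on a
// duplicate userName.
var ErrUniqueness = errors.New("value of 'userName' is not unique")

func writeError(w http.ResponseWriter, err error) {
	status := http.StatusBadRequest
	scimType := "invalidValue"
	if errors.Is(err, ErrUniqueness) {
		status = http.StatusConflict // 409 instead of 400
		scimType = "uniqueness"
	}
	w.Header().Set("Content-Type", "application/scim+json")
	w.WriteHeader(status)
	_ = json.NewEncoder(w).Encode(map[string]interface{}{
		"schemas":  []string{"urn:ietf:params:scim:api:messages:2.0:Error"},
		"status":   status,
		"scimType": scimType,
		"detail":   err.Error(),
	})
}

func main() {}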

Included and excluded attributes ignore super paths

When the included path emails is used, the rendering outcome actually produces an empty JSON array, because the traversal logic does not consider its sub-attributes as included.

Similar things would happen for excluded attributes.

More intelligent traversal logic is required when using the visitor for JSON serialization.
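
A sketch of the inclusion rule being asked for (illustrative only, not go-scim code): a path is rendered if it matches an attributes entry exactly, is a descendant of one, or is an ancestor of one:

package main

import (
	"fmt"
	"strings"
)

// included returns true when the property at path should survive an "attributes"
// projection: exact match, descendant of an entry (super path included), or
// ancestor of an entry (so the container survives when a sub-attribute is asked for).
func included(path string, attributes []string) bool {
	p := strings.ToLower(path)
	for _, a := range attributes {
		a = strings.ToLower(a)
		if p == a || strings.HasPrefix(p, a+".") || strings.HasPrefix(a, p+".") {
			return true
		}
	}
	return false
}

func main() {
	attrs := []string{"emails"}
	fmt.Println(included("emails", attrs))       // true
	fmt.Println(included("emails.value", attrs)) // true: its super path "emails" is included
	fmt.Println(included("userName", attrs))     // false
}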

How to use extensions

Hi @davidiamyou
I'm wandering through the go-scim example, trying to understand how to use extensions.

For instance, I would like to use the enterpriseUser extension.

  1. Do I have to modify user_internal.json, adding the fields of the extension, or do I have to create enterprise_user_internal.json?
  2. How can I reference the schema of the extension during the creation of the resource (validation, etc)?
  3. Do I have to create a new resource EnterpriseUser?

Could you please provide a small example?

Scim filter with complex attribute query

When trying to use a complex attribute query in a filter, e.g.

/Users?filter= emails%5Btype+eq+%22work%22%5D

The filter exits with: invalid character '[' around position 0 (hint:invalid character in path)

How can I handle Bearer tokens in the repository layer?

I have many different teams/apps that are going to use our company's SCIM APIs.

We have had great success so far leveraging go-scim's handlers; we just needed to implement custom logic inside our Repository structs in order to CRUD SCIM resources into our own resources.

We need, however, to identify which team these resources are part of so they can be managed accordingly.

How can I read the Authorization header from a request and use its value in the repository layer?
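
One common pattern, sketched with standard-library pieces only (the names and the context key are illustrative; go-scim does not prescribe this):

package main

import (
	"context"
	"fmt"
	"net/http"
	"strings"
)

// tokenKey is an illustrative context key type.
type tokenKey struct{}

// AuthMiddleware pulls the bearer token off the Authorization header and stores
// it on the request context, which is eventually passed down to the repository.
func AuthMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		token := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
		next.ServeHTTP(w, r.WithContext(context.WithValue(r.Context(), tokenKey{}, token)))
	})
}

// TokenFromContext is what a repository method would call on the ctx it already
// receives, before mapping the token to the owning team.
func TokenFromContext(ctx context.Context) (string, bool) {
	tok, ok := ctx.Value(tokenKey{}).(string)
	return tok, ok
}

func main() {
	handler := AuthMiddleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		tok, _ := TokenFromContext(r.Context())
		fmt.Fprintln(w, "team token:", tok)
	}))
	http.Handle("/Users", handler)
}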

Thanks!

Deserialize does not unescape characters

meta.version is usually in the form of "W/\"100\"". When deserialized, it becomes "W/\\\"100\\\"" internally, which corrupts the original value when it is serialized again.
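
For comparison, decoding the JSON string token with the standard library yields the unescaped value, and re-encoding it reproduces the original escaping rather than doubling it:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	raw := `"W/\"100\""` // the escaped form as it appears inside the JSON document
	var version string
	_ = json.Unmarshal([]byte(raw), &version)
	fmt.Println(version) // W/"100" -- the unescaped value expected internally

	out, _ := json.Marshal(version)
	fmt.Println(string(out)) // "W/\"100\"" -- re-serializing reproduces the original
}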

Study the benefit of using a binary data tool to embed resources

@requaos proposed a binary data tool pkger in #38

The original intention was to avoid too many COPY commands creating too many layers. The stock files have since been placed in a single folder, reducing the total number of COPY commands needed to 2 (1 for the binary, 1 for the public folder).

However, the use of binary data tool may still be beneficial, if:

  • It reduces COPY commands even more
  • It does not add too much complexity to the building process (e.g. by requiring external software to be installed)
  • It can help with resource references in tests. Currently, tests reference files by relative paths, which is a fragile practice. It would be much better to be able to reference absolute paths from the root of the module (see the sketch below).
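
A sketch of what that could look like with pkger as proposed in #38 (the file path is illustrative):

package main

import (
	"fmt"
	"io/ioutil"
	"log"

	"github.com/markbates/pkger"
)

func main() {
	// Paths are resolved from the module root, so a test can open the same file
	// regardless of which package directory it runs from. The path below is
	// illustrative, not an actual file in this repository.
	f, err := pkger.Open("/public/schemas/user.json")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	data, err := ioutil.ReadAll(f)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(data), "bytes")
}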

JSON parsing fails for empty JSON arrays.

Defined custom resource:

{
  "name": "default",
  "description": "Default AbsurdLab Password Control Policy",
  "maxLength": 32,
  "minLength": 8,
  "minAlphas": 2,
  "minNumerals": 1,
  "minAlphaNumerals": 1,
  "minSpecialChars": 1,
  "maxSpecialChars": 0,
  "minUpperCase": 1,
  "minLowerCase": 1,
  "maxRepeatedChars": 4,
  "startsWithAlpha": true,
  "firstNameDisallowed": true,
  "lastNameDisallowed": true,
  "userNameDisallowed": true,
  "minPasswordAgeInDays": 1,
  "warningAfterDays": 25,
  "expiresAfterDays": 30,
  "disallowedChars": "",
  "disallowedSubStrings": [],
  "passwordHistorySize": 5,
  "maxIncorrectAttempts": 5
}

Parsing fails right after finishing disallowedSubStrings; the error is "attribute name is expected". If I change "disallowedSubStrings": [] to "disallowedSubStrings": null, parsing succeeds.

I suspect the scanner's stepping in the empty-array case isn't right.
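
For contrast, a small standard-library example showing that after the '[' delimiter the very next token can legitimately be ']' with no value in between:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Walk the tokens of a document containing an empty array: the '[' delimiter
	// is immediately followed by ']', with no value or attribute name between them.
	dec := json.NewDecoder(strings.NewReader(`{"disallowedSubStrings": []}`))
	for {
		tok, err := dec.Token()
		if err != nil {
			break
		}
		fmt.Printf("%T %v\n", tok, tok)
	}
}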
