apispec's Issues

Marshmallow extension does not support plural schema instances

I expected

get:
  description: Get the list of pets.
  responses:
    200:
      description: A list of pet objects.
      schema:
        type: array
        items: PetSchema

to produce

"get": {
  "description": "Get the list of pets.",
  "responses": {
    "200": {
    "description": "A list of pet objects.",
    "schema": {
      "type": "array",
      "items": {
        "$ref": "#/definitions/Pet"
      }
    }
  }
}

but got

"get": {
  "description": "Get the list of pets.",
  "responses": {
    "200": {
    "description": "A list of pet objects.",
    "schema": {
      "type": "array",
      "items": "PetSchema"
    }
  }
}

The issue is that apispec/ext/marshmallow/__init__.py#L77 only checks whether many is set to True on the class, but the more common marshmallow pattern is to pass many=True when constructing a schema instance. This also fails in the reverse case, where the class sets many=True but the instance used for the view is constructed with many=False (although that seems like an anti-pattern).

Since many passed to the instance overrides the class attribute, the class-level value should probably be treated as just a default; in other words, the instance's setting should win and the output I expected above should be produced.
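
A minimal sketch of the check I have in mind (an assumption about the fix, not apispec's actual code): look at many on the schema object itself, so an instance constructed with many=True is detected too.

from marshmallow import Schema, fields

class PetSchema(Schema):
    name = fields.Str()

def is_plural(schema):
    # Covers both a class-level many and an instance constructed with many=True.
    return bool(getattr(schema, 'many', False))

assert is_plural(PetSchema(many=True))
assert not is_plural(PetSchema())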

Default values for webargs

Now that webargs is using marshmallow internally, we specify default argument values using the missing parameter:

arg = fields.Str(missing='foo')

But apispec doesn't check the missing key when introspecting parameter defaults; it only checks default. To get the correct swagger default, we have to specify both parameters:

arg = fields.Str(missing='foo', default='foo')

Which isn't ideal.

Since we're using the same logic to introspect fields and schemas that are used for serialization and deserialization, one approach would be to tell field2property and related methods whether they're introspecting load or dump. For the specific case of parameter defaults, we'd check default for dump and missing for load.
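
A sketch of that idea (my assumption, not apispec's implementation): pick the documented default according to the direction being introspected.

from marshmallow import fields, missing as missing_

def parameter_default(field, dump=True):
    # Serialization uses ``default``; deserialization (load) uses ``missing``.
    value = field.default if dump else field.missing
    return None if value is missing_ else value

arg = fields.Str(missing='foo')
assert parameter_default(arg, dump=False) == 'foo'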

Support Schemas for Request Body / Query Params

The marshmallow plugin already nicely supports schemas for response bodies, but it would be nice if it could also support schemas for request bodies/query params. I have a PR that could support this, but wanted to get feedback on the format. Here's what I was thinking:

Body:

This would operate similarly to the existing response schema handling, where the schema would get replaced by a $ref if found in the definitions.

post:
    description: Add your favorite pet
    parameters:
        - name: body
          schema: PetSchema
    responses:
        200:
            description: The pet that was added
            schema: PetSchema

Result:

{
  "paths": {
    "/pet": {
      "post": {
        "description": "Add your favorite pet",
        "parameters": [
          {
            "in": "body",
            "name": "body",
            "schema": {
              "$ref": "#/definitions/Pet"
            }
          }
        ],
        "responses": {
          "200": {
            "schema": {
              "$ref": "#/definitions/Pet"
            },
            "description": "The pet that was added"
          }
        }
      }
    }
  }
}

Query params:

This gets a little weird, since OpenAPI doesn't really support refs for query params AFAICT. What I was thinking is that we'd iterate through all the fields in the schema and make each one a parameter.

get:
    description: Search by pet id
    parameters:
        - name: query
          schema: PetSearchSchema
    responses:
        200:
            description: The pet that was found with the search
            schema: PetsSchema

Result:

{
  "paths": {
    "/pet": {
      "get": {
        "description": "Search by pet id",
        "parameters": [
          {
            "type": "string",
            "name": "pet_id",
            "in": "query"
          }
        ],
        "responses": {
          "200": {
            "schema": {
              "$ref": "#/definitions/Pet"
            },
            "description": "The pet that was found with the search"
          }
        }
      }
    }
  }
}

How to specify attributes (such as dump_only,...) on Nested fields?

Say you have a dump_only Nested field in your model.

The model reads:

from marshmallow import Schema
from marshmallow.fields import Nested

class Parent(Schema):

    child = Nested(Child, dump_only=True)

Assuming you added Child to the definitions, the spec then looks like

    "Parent": {
      "properties": {
        "child": {
          "$ref": "#/definitions/Child", 
          "readOnly": true
        },
      }, 
      "type": "object"
    }, 

Unfortunately, this is invalid OpenAPI spec: swagger-api/swagger-js#402

I get this warning in ReDoc:

Other properties are defined at the same level as $ref at "#/definitions/Parent/properties/child". They are IGNORED according to the JsonSchema spec

Has anyone here faced this already?

Should we use the allOf trick described here?
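
For reference, a sketch of that allOf workaround (illustrative only, not current apispec output), written as the Python dict that would be emitted for the child property: wrapping the $ref means the sibling readOnly keyword is no longer ignored.

child_property = {
    "allOf": [{"$ref": "#/definitions/Child"}],
    "readOnly": True,
}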

Ignore `load_only` fields if `dump=True` in `fields2jsonschema`?

Hi,

I'm making use of https://github.com/jmcarp/flask-apispec to automatically generate docs in a personal Flask project. That library uses your apispec swagger extension to generate docs for requests and responses. I noticed that responses in the generated docs were including marshmallow's load_only fields, which (at least in my case) is not desirable. In the line linked below you exclude dump_only fields when dump=False is passed to that method. Do you think it would be a good idea to also ignore load_only fields when dump=True?

https://github.com/marshmallow-code/apispec/blob/dev/apispec/ext/marshmallow/swagger.py#L492

I'm opening this for discussion, and I'll be happy to create a PR in case you're OK with the proposed functionality.
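
A sketch of the proposed filter (my assumption of the change, not apispec's code):

def include_field(field_obj, dump):
    """Return True if the field should appear in the generated schema."""
    if dump:
        return not field_obj.load_only   # hide load-only fields in responses
    return not field_obj.dump_only       # hide dump-only fields in requests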

Marshmallow 1.2 is not supported

The docs state that apispec requires marshmallow>=1.2, but I get an error with marshmallow==1.2.6:

Traceback (most recent call last):
  File "api_spec.py", line 12, in <module>
    spec.definition('UserProfile', schema=ProfileSchema)
  File "/Users/etataurov/Envs/click_server_asyncio/lib/python3.5/site-packages/apispec/core.py", line 191, in definition
    ret.update(func(self, name, **kwargs))
  File "/Users/etataurov/Envs/click_server_asyncio/lib/python3.5/site-packages/apispec/ext/marshmallow/__init__.py", line 30, in schema_definition_helper
    return swagger.schema2jsonschema(schema, spec=spec)
  File "/Users/etataurov/Envs/click_server_asyncio/lib/python3.5/site-packages/apispec/ext/marshmallow/swagger.py", line 276, in schema2jsonschema
    return fields2jsonschema(fields, schema, spec=spec, use_refs=use_refs, dump=dump)
  File "/Users/etataurov/Envs/click_server_asyncio/lib/python3.5/site-packages/apispec/ext/marshmallow/swagger.py", line 333, in fields2jsonschema
    if field_name in exclude or (field_obj.dump_only and not dump):
AttributeError: 'String' object has no attribute 'dump_only'

This is because dump_only was added in marshmallow 2.0: https://marshmallow.readthedocs.org/en/latest/changelog.html

I guess it is probably OK to drop 1.2 support, but the docs should be fixed in that case.
Thanks

Schema Refs

Today, to reference another Schema's definition from a nested field, we must pass a ref parameter, like so:

cat_with_ref = fields.Nested(CategorySchema, ref='Category', description="A category")

To me, it looks like we're repeating Category (which can be derived by stripping 'Schema' from the CategorySchema class name).

I think we should do it the other way around: automatically add a ref such as #/definitions/Category for any fields.Nested(CategorySchema), unless use_refs=False is passed to the APISpec.

What do you think?
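
A sketch of the naming rule I have in mind (an assumption, not apispec's code): derive the definition name from the nested schema's class name when no explicit ref is given.

from marshmallow import Schema, fields

class CategorySchema(Schema):
    name = fields.Str()

def default_ref(nested_field):
    nested = nested_field.nested
    schema_cls = nested if isinstance(nested, type) else type(nested)
    name = schema_cls.__name__
    if name.endswith('Schema'):
        name = name[:-len('Schema')]
    return {'$ref': '#/definitions/' + name}

assert default_ref(fields.Nested(CategorySchema)) == {'$ref': '#/definitions/Category'}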

Getting Assert when using many=True

I get this assertion error when I use a Marshmallow schema with many=True:

Schemas with many=True are only supported for 'json' location (aka 'in: body')

The assertion is in apispec/ext/marshmallow/swagger.py, around line 230.

I am using the following decorator:
@use_kwargs(MySchema(many=True), locations=('json',))

Marshmallow schema partial=True

When I have a schema with required fields and then use partial=True, on a PUT route for instance, the swagger spec still shows all of them as required. Is this a bug, or should I be doing something differently?

Thanks!

Bad serialization with load_operations_from_docstring

Hello,

I noticed a problem while using the apispec.utils.load_operations_from_docstring method.

While trying to parse the following docstring:

"""
Fetch multiple stuff

---
  get:
    description: Returns stuff
      responses:
        200:
          description: A list of stuff.
            produces: [ application/json ]
            schema:
              type: array
              items:
                $ref: "#/definitions/Stuff"
"""

Using this code:

spec.add_path(path=Path(path="/api/stuff", operations=load_operations_from_docstring(method.__doc__)))

The 200 response key was serialized as an int, not as a string, so the resulting JSON was not valid.
I wrote a little workaround, a path helper you can find below. Maybe this should be included as a default path helper, or the behaviour should be fixed somehow.

def yaml_serializer(apispec, **kwargs):
    def replace_nums(d):
        # Iterate over a copy so keys can be deleted while walking the dict.
        for k, v in list(d.items()):
            if isinstance(k, int):
                d[str(k)] = v
                del d[k]
            if isinstance(v, dict):
                replace_nums(v)

    replace_nums(kwargs['path'])
    return kwargs['path']

spec.register_path_helper(yaml_serializer)

(Also, writing 'items: StuffSchema' in the docstring didn't work as expected, so I had to add the "$ref" line manually.)

I'm not at ease enough with the project to make a PR yet, but I thought I should tell you guys!

EDIT: along with the fix for the badly formatted integer keys, here is the fix for the badly formatted "schema" value:

def yaml_serializer(apispec, **kwargs):
    def replace_nums(d):
        # Iterate over a copy so keys can be deleted while walking the dict.
        for k, v in list(d.items()):
            if isinstance(k, int):
                d[str(k)] = v
                del d[k]
            if isinstance(v, dict):
                replace_nums(v)

    def add_schema_ref(d):
        for k, v in d.items():
            if k == "schema":
                if not isinstance(v, dict):
                    # Bare schema name, e.g. 'StuffSchema' -> $ref
                    d[k] = {'$ref': '#/definitions/' + v.replace('Schema', '')}
                elif v.get('items') and not isinstance(v['items'], dict):
                    # Array schema whose 'items' is a bare schema name
                    schema = v['items']
                    v['items'] = {'$ref': '#/definitions/' + schema.replace('Schema', '')}
            elif isinstance(v, dict):
                add_schema_ref(v)

    replace_nums(kwargs['path'])
    add_schema_ref(kwargs['path'])
    return kwargs['path']

spec.register_path_helper(yaml_serializer)

Self-referential nested fields not parsed correctly

Apologies if I'm missing something obvious here as I'm still figuring out Marshmallow and APISpec.

When we have a Model with a self-referential relationship, apispec is not able to parse the child relationships successfully. Here's an example (SQLAlchemy, Marshmallow-SQLAlchemy, APISpec, and Flask)

Models:

from sqlalchemy import Column, ForeignKey, Integer, String, Table
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

Parent_Child = Table('Parent_Child', Base.metadata,
    Column('parent_id', Integer, ForeignKey('Entity.id')),
    Column('child_id', Integer, ForeignKey('Entity.id'))
)

class Entity(Base):
    __tablename__ = 'Entity'
    id = Column(Integer, primary_key=True)
    name = Column(String(80))
    children = relationship('Entity', secondary='Parent_Child',
                            primaryjoin=(id == Parent_Child.c.parent_id),
                            secondaryjoin=(id == Parent_Child.c.child_id),
                            lazy='joined', backref='parent')

Schema:

from marshmallow import fields
from marshmallow_sqlalchemy import ModelSchema

class EntitySchema(ModelSchema):
    children = fields.Nested('self', many=True)

    class Meta:
        model = Entity
        sqla_session = session
        include_fk = True

EntitySerializer = EntitySchema()

APISpec to_dict output (note the unresolved object in Entity.children):

{
    'parameters': {},
    'paths': {
        '/entities': {}
    },
    'swagger': '2.0',
    'info': {
        'version': '1.0.0',
        'title': 'M2M Nested Example'
    },
    'description': 'M2M Nested Example',
    'definitions': {
        'Entity': {
            'properties': {
                'children': <function at 0x00000000048BF7B8>,
                'id': {
                    'format': 'int32',
                    'type': 'integer'
                },
                'parent': {
                    'type': 'array',
                    'items': {
                        'type': 'string'
                    }
                },
                'name': {
                    'type': 'string'
                }
            }
        }
    }
}

Again, apologies if I've missed something obvious or this is a known issue. I can provide a complete sample app via gist if that would be helpful.

Thanks!

Support for MethodView

I'm using Flask's MethodViews for an API.

Basically, the idea is to have one class per endpoint and one class method for each HTTP method. The method name is the HTTP method, lowercase.

from flask.views import MethodView

class UserAPI(MethodView):

    def get(self):
        """User detail view.
        ---
        get:
            responses:
                200:
                    schema: UserSchema
        """
        users = User.query.all()
        ...

    def post(self):
        """User post.
        ---
        post:
            responses:
                201:
                    schema: UserSchema
        """
        user = User.from_form_data(request.form)
        ...

app.add_url_rule('/users/', view_func=UserAPI.as_view('users'))

AFAIU, there is currently no support for Flask's MethodView or Flask's View; only plain view functions are supported. Is this correct?

I think I managed to get apispec to produce the correct OpenAPI file using the following patch:

dev...Nobatek:dev_method_views

The idea is that if the view is a MethodView, we should not use the view's docstring but loop over the docstrings of each method.

Thinking of it, maybe the only difference is that this allows the user to keep each docstring in each method rather than grouping all of them at the top of the class. This can be seen as cosmetic, but I think it would be better this way.

I'd be happy to get any feedback.

If there is interest in this feature, I may clean this up, add tests and all, but before I move any further, I'd rather be sure I'm on the right track.

[marshmallow plugin] Inspect validators

Many of marshmallow's validators can be inspected to provide additional data when constructing OpenAPI entities.

from marshmallow import fields, validate
from apispec.ext.marshmallow.swagger import field2property

field = fields.Int(validate=validate.Range(min=3, max=42))

prop = field2property(field)
assert prop['minimum'] == 3
assert prop['maximum'] == 42

circular dependency triggers stackoverflow

The following schemas:

from marshmallow import Schema, fields

class SampleSchema(Schema):
    runs = fields.Nested('RunSchema', many=True, exclude=('sample',))

class RunSchema(Schema):
    sample = fields.Nested(SampleSchema, exclude=('runs',), required=True)

cause an infinite recursion when apispec tries to resolve the two schemas.

I am trying to debug the problem, but I am lost right now. Can someone point me in the right direction?
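
For what it's worth, a sketch of the kind of guard I would expect (my assumption, not apispec's actual resolution logic): remember schemas that are already being resolved and emit a $ref instead of descending into them again.

def resolve_definition(schema_cls, definitions, in_progress=None):
    in_progress = set() if in_progress is None else in_progress
    name = schema_cls.__name__.replace('Schema', '')
    if name in in_progress or name in definitions:
        return {'$ref': '#/definitions/' + name}
    in_progress.add(name)
    # ... build the JSON Schema for schema_cls here, recursing into Nested
    # fields with the same ``in_progress`` set so cycles short-circuit ...
    definitions[name] = {'type': 'object'}
    return {'$ref': '#/definitions/' + name}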

Apispec doesn't play nicely with schemas whose fields are generated or modified on the fly

The following use-case doesn't work for me:

from marshmallow import Schema, fields
from apispec.ext.marshmallow.swagger import schema2jsonschema

class A(Schema):
    q = fields.Integer()
    def __init__(self, default_q=None):
        super(A, self).__init__()
        self.fields['q'].default = default_q

class B(Schema):
    a = fields.Nested(A(default_q=1))

schema2jsonschema(B)

The output jsonschema is:

{
    'properties': {
        'a': {
             'properties': {
                 'q': {
                     'format': 'int32',
                     'type': 'integer'
                 }
            }
        }
    }
}

I expect to see:

{
    'properties': {
        'a': {
             'properties': {
                 'q': {
                     'format': 'int32',
                     'type': 'integer',
                     'default': 1
                 }
            }
        }
    }
}

Here are the relevant source code lines:

Only the Nested schema class (not the instance) is analyzed for Nested fields:
https://github.com/marshmallow-code/apispec/blob/dev/apispec/ext/marshmallow/swagger.py#L105

Only fields declared on the class definition are analyzed when running schema2parameters and schema2jsonschema.

How should I add an existing Path instance to an APISpec instance

I tried to do something like:

spec = APISpec(**kwargs)
spec.add_path(Path(**kwargs))

And I received the following error (as I should have)

File "/Users/Josh/Developer/Kaplan/AWS-Lambda-APIGateway-POC/env/lib/python2.7/site-packages/apispec/core.py", line 169, in add_path
    self._paths.setdefault(path.path, path).update(path)
TypeError: unhashable type: 'Path'

Is there an easy way to add an existing Path object or do I need to duplicate the logic of self._paths.setdefault(path.path, path).update(path)?

If this functionality seems worthwhile, I can submit a PR to update APISpec.add_path to accept Path objects.

marshmallow/swagger.py - custom field mapping

Hi.

Assuming a model uses custom Marshmallow fields, those do not appear in FIELD_MAPPING and are therefore not documented properly: the default (type, format) is ('string', None).

If _get_json_type_for_field used isinstance rather than type, fields inheriting from Marshmallow fields would at least be treated like their parents, but even that wouldn't be totally satisfying.

The OpenAPI spec allows custom formats for fields, as apispec already does for Email, for instance:

marshmallow.fields.Email: ('string', 'email'),

Is there a way to pass custom field mappings to apispec?

If not, would this be considered a relevant feature request?

A dict of custom mappings (CUSTOM_FIELD_MAPPING) could be appended to FIELD_MAPPING. Or better, CUSTOM_FIELD_MAPPING could be checked first, so as to allow overriding existing mappings in FIELD_MAPPING.
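
A sketch of what I mean (CUSTOM_FIELD_MAPPING is a hypothetical name, not an apispec feature, and DEFAULT_FIELD_MAPPING stands in for apispec's real FIELD_MAPPING): check a user-supplied mapping first, then fall back to the built-in one, walking the MRO so custom fields are at least treated like their parent marshmallow field.

from marshmallow import fields

class ColorField(fields.String):
    """A custom field."""

CUSTOM_FIELD_MAPPING = {ColorField: ('string', 'color')}
DEFAULT_FIELD_MAPPING = {fields.String: ('string', None), fields.Integer: ('integer', 'int32')}

def json_type_for_field(field):
    for cls in type(field).__mro__:
        if cls in CUSTOM_FIELD_MAPPING:
            return CUSTOM_FIELD_MAPPING[cls]
        if cls in DEFAULT_FIELD_MAPPING:
            return DEFAULT_FIELD_MAPPING[cls]
    return ('string', None)

assert json_type_for_field(ColorField()) == ('string', 'color')
assert json_type_for_field(fields.Str()) == ('string', None)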

discriminator support?

Swagger 2.0 has a 'discriminator' field, which is used to choose which schema applies based on the contents of a field. At least apispec 0.6 does not support this (nor does marshmallow itself, apart from marshmallow-polyfield, which is similar but chooses the type of a single field rather than the whole object's schema).

Any plans on this?

Allow several schemas, equivalent from client POV, to resolve to the same definition

It is possible in an application to have multiple serialization/deserialization schemas that look identical to the client but behave differently in the application.

This is the case in a MongoDB application of mine where I use an Object Document Mapper. My app code may return an object, in which case it gets marshalled using an "OO <-> JSON" schema. But sometimes I have to sort of skip the ODM (because I'm using the aggregation framework), and then I have a "MongoDB <-> JSON" schema.

You could think of other examples, like multi-DB and an object that can be stored in both.

The point is all the schema attributes are the same (same fields are dump_only, etc.) but they may deserialize with different field names, for instance, or one schema may instantiate an object and the other just return plain json.

This is internal stuff, it should not be exposed to the client. So I'd like to be able to register both schemas with the same definition in the docs.

One would be the master, the one used by schema2jsonschema to provide a definition in the docs. The other one(s) would just refer to that definition.

I realize this might not be a common use case, but I don't think it would add too much complexity.

It could be

1/ Implicit

spec.definition('User', schema=UserObjectSchema)
spec.definition('User', schema=UserDBSchema)

On the second call, schema_definition_helper knows a Schema is already declared and considers the first as the master, the second as the clone.

2/ Explicit

spec.definition('User', schema=UserObjectSchema)
spec.definition('User', schema=UserDBSchema, clone=True)

but I don't really see the need for that explicit version.

This would impact

  • schema_definition_helper: check whether a schema with the same name is already registered
  • APISpec.definition: add new exception allowing a definition_helper to stop the registration process so that no definition is actually added.

Any comment before I submit a PR?

fields2jsonschema warning about Schema Object

Hi! Can you help me to understand the warning I'm getting.

... swagger.py:450: UserWarning: Only explicitly-declared fields will be included in the Schema Object. Fields defined in Meta.fields or Meta.additional are ignored.
  "Only explicitly-declared fields will be included in the Schema Object. "

My Schema

class BaseUserSchema(ModelSchema):
    """
    Base user schema exposes only the most general fields.
    """

    class Meta:
        # pylint: disable=missing-docstring
        model = User
        fields = (
            User.id.key,
            User.username.key,
            User.first_name.key,
            User.middle_name.key,
            User.last_name.key,
        )
        dump_only = (
            User.id.key,
        )

And here is the sample of code where I'm getting the warning.

from apispec.ext.marshmallow.swagger import schema2jsonschema
pprint(schema2jsonschema(BaseUserSchema))

... swagger.py:450: UserWarning: Only explicitly-declared fields will be included in the Schema Object. Fields defined in Meta.fields or Meta.additional are ignored.
  "Only explicitly-declared fields will be included in the Schema Object. "
{'properties': {'first_name': {'maxLength': 30, 'type': 'string'},
                'id': {'format': 'int32', 'type': 'integer'},
                'last_name': {'maxLength': 30, 'type': 'string'},
                'middle_name': {'maxLength': 30, 'type': 'string'},
                'username': {'maxLength': 80, 'type': 'string'}},
 'required': ['username'],
 'type': 'object'}

The result of the function is as expected. But if I understand the warning correctly, my fields declared in Meta.fields should have been ignored.

Can you help me to understand what is this warning about?

/cc @frol

Question about fields.Nested and APISpec.definition() call order

What is the correct way to handle nested schemas with apispec? For example, I have something like this:

from marshmallow import Schema, fields


class BarSchema(Schema):
    baz = fields.String()


class FooSchema(Schema):
    bars = fields.Nested(BarSchema, many=True)


spec.definition('Foo', schema=FooSchema)
spec.definition('Bar', schema=BarSchema)
spec.to_dict()

The generated definition for Foo includes Bar as a nested object instead of referencing the existing definition. Changing the order of the definitions helps:

spec.definition('Bar', schema=BarSchema)
spec.definition('Foo', schema=FooSchema)

but what about situations with circular dependencies, where FooSchema nests BarSchema and vice versa?

Marshmallow and webargs working together

These two libraries have a significant amount of overlap; I'd say they're almost competing, if it weren't that @sloria is responsible for both of them.

This is a common pattern for us:

# curl -H "Content-Type: application/json" -X POST -d '{"name": "World"}' http://localhost:5000/

from flask import Flask
from marshmallow import fields, Schema
from webargs import Arg, ValidationError
from webargs.flaskparser import parser, use_args

app = Flask(__name__)
app.config.update(PROPAGATE_EXCEPTIONS=True, DEBUG=True)

@parser.location_handler('json_body')
def parse_data(request, name, arg):
    return request.json

class Hello(Schema):

    name = fields.Str(required=True)

def validate_schema(val):
    if val.errors:
        raise ValidationError('')
    return True

hello_args = {
    'hello': Arg(
        Hello().load, required=True, location='json_body', validate=validate_schema)
}

@app.route('/', methods=['POST'])
@use_args(hello_args)
def hello_world(args):
    return 'Hello {}'.format(args['hello'].data['name'])

if __name__ == '__main__':
    app.run()

While this isn't onerous to set up, there are a couple of features that would be nice, along with this being an "officially blessed" pattern (and not just something in our own libraries/projects).

A json body location

Probably the biggest wart in our usage pattern right now. The way we're using webargs isn't ideal, with the hello key in the above example being a little meaningless. This does suggest that using webargs jointly with marshmallow in this manner might be altogether foolish?

A Marshmallow Arg

But continuing onwards, foolishly or not: this sort of thing would improve expressiveness:

hello_args = {
    'hello': SmoreArg(Hello, required=True, location='json_body')
}

Obviously this would handle the Hello().load and validate=validate_schema parts of the example. But it could also add functionality, for example adding the request [1] to the context of the schema and formatting errors. And of course it could work with the more advanced features being experimented on in smore.


More than happy to get my hands dirty with some of this if we can work out a nice API to be implemented.

[1] Users and database connections are attributes on our requests which would be really handy for validation.

Combine marshmallow and tornado schema reference.

With the marshmallow plugin it is possible to reference a schema in a docstring by class name alone, but with the tornado plugin one has to write the full path to the definition.
When I point to a marshmallow class in a tornado docstring, it isn't converted into '$ref': '#/definitions/...' in the swagger JSON.

How to integrate with swagger ui

I would like to understand how this works with swagger. I generated swagger.json by writing out the output of spec.to_dict() and then opened the swagger.json file in swagger ui. No errors are reported, but no documentation is generated either.

Note: is it because everything in my spec is just marshmallow schemas and no API endpoints?

dump_only attributes passed in Meta options are not marked as readOnly

Reported by @lafrech in #81 (comment):


I noticed this only adds readOnly if the field was given the dump_only attribute directly. If the field appears in the dump_only list in Meta, readOnly is not added.

Is this known/intended?

More precisely, it "does not work" if spec.definition is passed a marshmallow.schema.SchemaMeta (i.e. the schema class), but it "works" if it is passed a Schema instance:

class GistSchema(ma.Schema):
    id = ma.fields.Int()
    content = ma.fields.Str()

    class Meta:
        dump_only = ('content',)

readOnly not added

spec.definition('Gist', schema=GistSchema)

readOnly added

spec.definition('Gist', schema=GistSchema())

I've just begun experimenting with apispec and I'm not sure passing an instance is the right thing to do; in the docs, a SchemaMeta is passed.

schema2parameters - body + header

Hey,

I have a schema where default_in is 'body', but one of the fields has a 'headers' location.
schema2parameters puts the header field inside the body instead of returning one body parameter and one header parameter.

What do I do?

from marshmallow import Schema, fields, validate

class AccountsSchema(Schema):
    user_id = fields.Int(required=True, location="headers")
    name = fields.Str(required=True, validate=validate.Length(min=1, max=255), location="json")
    status = fields.Str(required=True, validate=validate.OneOf(choices=["Active", "Paused"]), location="json")

For reference, here is the fields2parameters implementation in question:
def fields2parameters(fields, schema=None, spec=None, use_refs=True, dump=True,
                      default_in='body', name='body', required=False):
    """Return an array of OpenAPI parameters given a mapping between field names and
    :class:`Field <marshmallow.Field>` objects. If `default_in` is "body", then return an array
    of a single parameter; else return an array of a parameter for each included field in
    the :class:`Schema <marshmallow.Schema>`.

    https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#parameterObject
    """
    if default_in == 'body':
        if schema is not None:
            # Prevent circular import
            from apispec.ext.marshmallow import resolve_schema_dict
            prop = resolve_schema_dict(spec, schema, dump=dump)
        else:
            prop = fields2jsonschema(fields, spec=spec, use_refs=use_refs, dump=dump)

        return [{
            'in': default_in,
            'required': required,
            'name': name,
            'schema': prop,
        }]

    assert not getattr(schema, 'many', False), \
        "Schemas with many=True are only supported for 'json' location (aka 'in: body')"

    exclude_fields = getattr(getattr(schema, 'Meta', None), 'exclude', [])

    return [
        field2parameter(
            field_obj,
            name=_observed_name(field_obj, field_name),
            spec=spec,
            use_refs=use_refs,
            dump=dump,
            default_in=default_in
        )
            for field_name, field_obj in iteritems(fields)
            if (
                (field_name not in exclude_fields)
                and not (field_obj.dump_only and not dump)
            )
    ]
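
A sketch of the behaviour I'd like instead (my assumption about how it could work, not actual apispec code): split the fields by their location before building parameters, so only the 'json' fields end up in the single body parameter.

def split_fields_by_location(schema_fields, body_location='json'):
    body_fields, other_fields = {}, {}
    for name, field in schema_fields.items():
        # webargs' ``location=...`` kwarg typically ends up in the field's metadata.
        location = field.metadata.get('location', body_location)
        target = body_fields if location == body_location else other_fields
        target[name] = field
    return body_fields, other_fields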

dump_only is ignored in nested schemas for parameters

Given the following:

from marshmallow import Schema, fields

class ParentSchema(Schema):
    id = fields.Int(dump_only=True)
    name = fields.String()
    child = fields.Nested('ChildSchema')

class ChildSchema(Schema):
    id = fields.Int(dump_only=True)
    name = fields.String()

The parameter spec is generated:

"parameters": [
  {
    "in": "body", 
    "name": "body", 
    "required": false, 
    "schema": {
      "properties": {
        "child": {
          "properties": {
            "id": {
              "format": "int32", 
              "type": "integer"
            }, 
            "name": {
              "type": "string"
            }
          }, 
          "type": "object"
        }, 
        "name": {
          "type": "string"
        }
      }, 
      "type": "object"
    }
  }
]

The Parent "id" is omitted but the Child "id" remains.

Will submit a PR on this.

Inspect parameters from tornado handler method with coroutine

When @gen.coroutine or a similar decorator is used on a handler method:

@gen.coroutine
def get(self, item_id):

the inspect.getargspec call in the tornado plugin returns the decorator's arguments, and we end up with an empty list:

args = inspect.getargspec(method).args[1:]

and spec.add_path then raises an error like this:

raise APISpecError('Path template is not specified')

It is fixed by using a more general way of getting the signature from the inspect module, for example:

args = list(inspect.signature(method).parameters.keys())[1:]

Support `enum` property of JSON Schema

Now that webargs is using marshmallow and has access to the OneOf and ContainsOnly validators, smore can populate the enum property of JSON Schema fields. If this seems reasonable, I'm down to send a patch @sloria.
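
A sketch of the kind of introspection this would involve (an assumption, not the eventual implementation): read the choices off a OneOf validator attached to a field.

from marshmallow import fields, validate

field = fields.Str(validate=validate.OneOf(['asc', 'desc']))

enum_values = [
    choice
    for validator in field.validators
    if isinstance(validator, validate.OneOf)
    for choice in validator.choices
]
assert enum_values == ['asc', 'desc']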

minLength/maxLength being added to all types

Marshmallow allows me to add length validation to a nested schema, as follows:

from marshmallow import Schema, fields
from marshmallow.validate import Length

class FooSchema(Schema):
    bars = fields.Nested('BarSchema', validate=Length(min=1), many=True)

It validates that the bars array contains at least one object, but apispec generates the following spec for this schema:

{
  "definitions": {
    "FooSchema": {
      "properties": {
        "bars": {
          "items": {
            "$ref": "#/definitions/BarSchema"
          },
          "minLength": 1,
          "type": "array"
        }
      }
    }
  }
}

According to the JSON Schema validation spec referenced by swagger, http://json-schema.org/latest/json-schema-validation.html#anchor26 (5.6/5.7), minLength and maxLength apply only to string types; in other words, these keywords can't be used with arrays. So field2length needs to check whether the field maps to an array and emit the appropriate keywords (minItems/maxItems) instead.
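
A minimal sketch of that check (my assumption of the fix, not apispec's actual field2length): choose the length keywords based on whether the property is a string or an array.

def length_attributes(is_array, minimum=None, maximum=None):
    min_key, max_key = ('minItems', 'maxItems') if is_array else ('minLength', 'maxLength')
    attributes = {}
    if minimum is not None:
        attributes[min_key] = minimum
    if maximum is not None:
        attributes[max_key] = maximum
    return attributes

assert length_attributes(is_array=True, minimum=1) == {'minItems': 1}
assert length_attributes(is_array=False, maximum=255) == {'maxLength': 255}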

APISpec.add_path overwrites paths

I was toying around with writing a WSGI middleware that used smore while fleshing out some ideas and noticed that smore.apispec.APISpec.add_path does a full replace via dict.update when adding a new path that matches an existing path. It ends up being a last-one-wins situation.

What do you think about storing the Path object in _paths and doing a recursive merge of all of its properties when calling add_path and then converting to dict when smore.apispec.APISpec.to_dict is called?
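
A sketch of the recursive merge being proposed (my assumption about the approach, operating on plain dicts rather than apispec's actual Path objects):

def deep_update(target, updates):
    # Merge nested dicts key by key instead of replacing whole sub-dicts.
    for key, value in updates.items():
        if isinstance(value, dict) and isinstance(target.get(key), dict):
            deep_update(target[key], value)
        else:
            target[key] = value
    return target

existing = {'/pets': {'get': {'responses': {'200': {}}}}}
deep_update(existing, {'/pets': {'post': {'responses': {'201': {}}}}})
assert set(existing['/pets']) == {'get', 'post'}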

[RFC] Pluggable API documentation generator

Now that smore has many of the lower-level functions for converting marshmallow Schemas and webargs Args to swagger definitions, the next step is to implement a system for generating full API docs.

Ideas for initial iteration:

  • Based on Swagger 2.0 spec. This will allow us to leverage the latest Swagger-UI
  • Pluggable. The documentation generator will work with any web framework, with or without webargs, etc. etc. Plugins provide helpers for generating metadata.
  • Easy way to serve swagger docs. Possibly part of the Flask plugin.

Ideas for the future:

  • Generate swagger-based docs from docstrings.
  • Sphinx extension?

Proof of concept

I wrote up a simple proof-of-concept in this gist: https://gist.github.com/sloria/dc1b2d2e43fbcea866ae

Prior art

Should marshmallow helpers respect `only`, `exclude`?

When introspecting Nested fields, we don't check for the only or exclude attributes. For example:

from marshmallow import Schema, fields

class FooSchema(Schema):
    bar = fields.Str()
    baz = fields.Str()

class BobSchema(Schema):
    foo = fields.Nested(FooSchema, only=('bar', ))

If we call schema2jsonschema(BobSchema), the nested foo will include both the bar and baz fields, even though baz will never be included in this case. That isn't necessarily a problem, unless baz is required:

class FooSchema(Schema):
    bar = fields.Str()
    baz = fields.Str(required=True)

In this case, users of apispec will likely return JSON that doesn't validate against the definition schema, since a required field will be missing. I haven't actually encountered this situation and I don't know how often it will come up; I just wanted to raise it for discussion.
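
A sketch of how the helpers could honour these attributes (an assumption, not apispec's code), given the nested schema's declared fields as a dict:

def visible_fields(declared_fields, only=None, exclude=()):
    result = dict(declared_fields)
    if only:
        result = {name: field for name, field in result.items() if name in only}
    return {name: field for name, field in result.items() if name not in exclude}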

Rename and release as "apispec"

For all intents and purposes, this project's sole focus is the pluggable API doc generation proposed in #1.

I propose renaming this repo/package to apispec.

Note: this code still remains experimental; the new name is only meant to reflect the primary goals of this project.

Tornado getargspec with decorated handler method arguments.

The current implementation uses the deprecated inspect.getargspec() method to get argument names.

This causes issues with decorated handler methods: you get the decorator's arguments instead of the method's arguments, which breaks swagger path generation.

Core path does not support swagger basePath

Related to #69.

Flask allows configuring the APPLICATION_ROOT, which provides the base path for the routes. #70 fixed this issue so that the flask extension reports the fully qualified path.

When the specs are dumped as swagger specs using to_dict(), the full path is shown on every route, even though the swagger specification supports the basePath field on the root object.

I think it is important to keep the meaningful portion of the route URL separate from the base path to maintain readable documentation. #70 was the right fix for the flask extension since the apispec core assumes paths are absolute, but the fact that both flask and swagger support a configurable base path indicates that the apispec core should probably internalize the concept of base paths and expose them to extensions.

Examples of base path in similar libraries:

Edit: Updated to reflect that generating swagger specs is a core functionality of apispec, not an extension.

Inspect `attribute` and/or `load_from`/`dump_to` properties for marshmallow extension

If I create the following schema:

from marshmallow import Schema, fields

class User(Schema):
    first_name = fields.String(attribute='firstName')
    last_name = fields.String(attribute='lastName')

and register it

spec = APISpec(
    title='Users',
    version='1.0.0',
    plugins=[
        'apispec.ext.marshmallow',
    ],
)
spec.definition('User', schema=User)

the generated APISpec still lists the properties as first_name and last_name instead of firstName and lastName. Similarly, the load_from and dump_to parameters are also ignored.

Would there be interest/would it make sense to change the properties in the generated APISpec to reflect the attribute, load_from, and/or dump_to marshmallow field parameters?

Support for plugins to add multiple paths at once

#68 shows the need for plugins to add multiple paths at one time.

I'm currently implementing an APISpec.add_paths() function, but I don't feel this is the best way to do it.

I've been thinking that allowing path helpers to return a list of Path objects, in addition to a single Path object, would be better and easier. Perhaps refactor the code that adds a Path object to the spec into an internal method, and leave in add_path() only the code that extracts the Path objects and passes them to that internal method.

@sloria What do you think?

Integration tests with swagger-tools

I caught a few minor issues (see #6, #7) running output through the validator in swagger-tools:

swagger-tools validate spec.json

It could be helpful to add integration tests using swagger-tools to catch subtle or poorly documented issues like this. I'm down to write this if you think it would be worthwhile @sloria.

Support custom Marshmallow fields

Currently, all custom fields automatically become type="string", format=None due to the hard-coded FIELD_MAPPING. Thus, I cannot extend any marshmallow type, e.g. Dict or Number, with apispec support. Right now, the only way to extend apispec support is to patch the FIELD_MAPPING dict.

`APISpecError: Could not find endpoint for view` when using flask MethodViews

I've seen #85 which seems to suggest Flask's MethodViews should be supported. I've added a URL rule with:

app.add_url_rule('/articles/<publication>', view_func=Articles.as_view('articles'))

But when I try to add a path for my schema with:

spec.definition('Article', schema=ArticleSchema)
spec.add_path(view=Articles)

I get an error:

apispec.exceptions.APISpecError: Could not find endpoint for view <class 'mypackage.resources.articles.Articles'>

I've tried passing various combinations of Articles, Articles.as_view('articles'), etc to add_path but nothing works. I've also tried the function in #85, but it still doesn't work.

I'm assuming this is a bug?

Flask extension does not support blueprints

I have organized my views with Flask blueprints, but am unable to document them in the view file, because it is not executed in the application context.

# app/views/example.py

from flask import Blueprint
example_view = Blueprint('example_view', __name__)

from app.spec import spec

@example_view.route('/example', methods=['GET'])
def get_example():
    """An example.
    ---
    get:
        description: Get an example
        responses:
            200:
                description: An example
                schema: ExampleSchema
    """
    return 'example', 200

spec.add_path(view=get_example)
  ...
  File "/usr/local/lib/python2.7/site-packages/apispec/core.py", line 170, in add_path
    self, path=path, operations=operations, **kwargs
  File "/usr/local/lib/python2.7/site-packages/apispec/ext/flask.py", line 62, in path_from_view
    rule = _rule_for_view(view)
  File "/usr/local/lib/python2.7/site-packages/apispec/ext/flask.py", line 38, in _rule_for_view
    view_funcs = current_app.view_functions
  File "/usr/local/lib/python2.7/site-packages/werkzeug/local.py", line 343, in __getattr__
    return getattr(self._get_current_object(), name)
  File "/usr/local/lib/python2.7/site-packages/werkzeug/local.py", line 302, in _get_current_object
    return self.__local()
  File "/usr/local/lib/python2.7/site-packages/flask/globals.py", line 34, in _find_app
    raise RuntimeError('working outside of application context')
RuntimeError: working outside of application context

Is there a way to keep the spec declarations with the blueprint? (It seems like there might not be.)

Do you think it would be useful to add the ability to add all the views from a blueprint at once?

I noticed that the flask extension seems to acknowledge that a view could map to multiple paths, but assumes it only maps to one: apispec/ext/flask.py#L46

Maybe something like spec.add_paths() could be added to handle compound view objects?

Meta.fields ignored

/lib/python3.5/site-packages/apispec/ext/marshmallow/swagger.py:450: UserWarning: Only explicitly-declared fields will be included in the Schema Object. Fields defined in Meta.fields or Meta.additional are ignored.
"Only explicitly-declared fields will be included in the Schema Object. "

I'm not clear what I'm supposed to do to resolve this. I'm just trying to use Meta.fields with Flask-Marshmallow. Am I doomed to walk this Earth for all eternity, suffering so greatly with each passing step?

Separate plugins into separate packages?

It might be beneficial from a maintenance perspective to have the bundled plugins, e.g. apispec.ext.marshmallow and apispec.ext.flask, exist as separate packages. That would allow them to have their own release cycles independent of apispec core, which is currently pretty stable.

Feedback welcome.
