encode / orm
An async ORM.
Home Page: https://www.encode.io/orm
License: BSD 3-Clause "New" or "Revised" License
I installed orm in a clean virtualenv, and when I imported orm.fields it raised a ModuleNotFoundError.
-> python3 -m venv venv
-> source venv/bin/activate
-> pip install orm
Collecting orm
Using cached https://files.pythonhosted.org/packages/c6/a6/4a867c58f4bcab01cf782f91a58f37b4a93f390aba89a7f1c3410f9454fe/orm-0.1.2.tar.gz
Collecting databases (from orm)
Using cached https://files.pythonhosted.org/packages/f5/0b/e4959843279b6e8ebe638f9f52f4fb403c33ae621f3fbd5b6a8c973c7752/databases-0.2.1.tar.gz
Collecting typesystem (from orm)
Using cached https://files.pythonhosted.org/packages/ee/a3/2220dc25ddc8461ac405fe3f2f02cc2dd7360c9be597128aa905eda1c5b7/typesystem-0.2.0.tar.gz
Collecting sqlalchemy (from databases->orm)
Using cached https://files.pythonhosted.org/packages/85/29/d7a5687d0d21ea8133f2d4ef02dfb4d191afe7ebc8bd9f962d99bdf595e1/SQLAlchemy-1.3.1.tar.gz
Installing collected packages: sqlalchemy, databases, typesystem, orm
Running setup.py install for sqlalchemy ... done
Running setup.py install for databases ... done
Running setup.py install for typesystem ... done
Running setup.py install for orm ... done
Successfully installed databases-0.2.1 orm-0.1.2 sqlalchemy-1.3.1 typesystem-0.2.0
You are using pip version 18.1, however version 19.0.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
-> python
Python 3.7.2 (default, Dec 27 2018, 07:35:06)
[Clang 10.0.0 (clang-1000.11.45.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from orm import fields
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "~/test-orm/venv/lib/python3.7/site-packages/orm/__init__.py", line 2, in <module>
    from orm.fields import Boolean, Integer, Float, String, Text, Date, Time, DateTime, JSON, ForeignKey
  File "~/test-orm/venv/lib/python3.7/site-packages/orm/fields.py", line 5, in <module>
    from attr import attrs
ModuleNotFoundError: No module named 'attr'
>>>
I want to create a directory containing all the models as such
models/user.py
from orm import Model, String
from models.base import Base

class User(Model, Base):
    __tablename__ = 'users'
    id = String(primary_key=True, max_length=100)
    username = String(max_length=100, allow_null=False)
    password = String(max_length=100, allow_null=False)
And
models/account.py
from orm import Model, String, ForeignKey, Float
from models.base import Base
from models.user import User

class Account(Model, Base):
    __tablename__ = 'accounts'
    id = String(primary_key=True, max_length=100)
    balance = Float(allow_null=False)
    user = ForeignKey(User)
with the Base class containing a createIfNotExists method:
models/base.py
from abc import ABCMeta
from databases import Database
from sqlalchemy import create_engine, MetaData

class Base(object):
    __metaclass__ = ABCMeta
    __database__ = Database('sqlite://path-to-db.sqlite')
    __metadata__ = MetaData()

    @classmethod
    def createIfNotExists(cls) -> bool:
        engine = create_engine(str(cls.__database__.url))
        cls.__metadata__.create_all(engine)
        return True
But when I run the following script to create these tables, it raises sqlalchemy.exc.NoReferencedTableError:
from models.user import User
from models.account import Account
User.createIfNotExists()
Account.createIfNotExists()
sqlalchemy.exc.NoReferencedTableError: Foreign key associated with column 'accounts.user' could not find table 'users' with which to generate a foreign key to target column 'id'
Is there a way to keep these model classes in separate modules and keep the relationship when creating tables?
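I'm not certain of the root cause inside orm's metaclass, but the underlying SQLAlchemy requirement is that both tables end up registered on the same MetaData instance before create_all() runs, which means every model module must share one MetaData and all model modules must be imported before tables are created. A pure-Python sketch of that ordering requirement (Registry and its methods are illustrative, not orm's API):

```python
# Illustrative only: a toy registry showing why a foreign key can only be
# resolved if the referenced table was registered on the SAME registry
# (MetaData) before create_all() runs.

class Registry:
    def __init__(self):
        self.tables = {}

    def register(self, name, foreign_keys=()):
        self.tables[name] = foreign_keys

    def create_all(self):
        # Every foreign key target must already be known to this registry.
        for name, fks in self.tables.items():
            for fk in fks:
                if fk not in self.tables:
                    raise LookupError(f"could not find table {fk!r} referenced by {name!r}")
        return list(self.tables)

registry = Registry()
registry.register("accounts", foreign_keys=("users",))
# Calling registry.create_all() here would raise LookupError: 'users' is unknown.
registry.register("users")
print(registry.create_all())  # ['accounts', 'users']
```

The practical takeaway: create the tables in one script that first imports every model module, and double-check that all models really share a single MetaData rather than each getting its own.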
Hey! I've been searching for a way to add a SQLAlchemy-model-style ORM to our production Starlette project and stumbled upon this.
I see that there used to be a lot of activity, and my question is whether it is safe to use and whether anyone is actively developing it.
I am willing to contribute if needed.
Cheers,
F
Especially useful for integrations with REST API frameworks. Let's discuss.
Draft
class Decimal(ModelField, typesystem.Decimal):
    def get_column_type(self):
        return sqlalchemy.Numeric(scale=self.scale, precision=self.precision)

    def __init__(self, scale: int, precision: int, **kwargs):
        assert scale, "scale is required"
        assert precision, "precision is required"
        self.precision = precision
        self.scale = scale
        super().__init__(**kwargs)
Usage
price = orm.Decimal(allow_null=True, scale=8, precision=18)
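For reference, a stdlib-only illustration of what the two parameters mean for a NUMERIC(18, 8) column: precision is the total number of significant digits, and scale is the number of digits after the decimal point.

```python
# Stdlib illustration of NUMERIC(precision=18, scale=8).
from decimal import Decimal

price = Decimal("1234567890.12345678")  # 18 digits total, 8 of them fractional
digits = price.as_tuple().digits        # all significant digits
exponent = price.as_tuple().exponent    # negative count of fractional digits
print(len(digits), -exponent)  # 18 8
```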
Well, if this is OK, can I continue and offer a pull request? Or maybe you'd prefer Django's style max_digits and decimal_places?
I'm running into this error:
Traceback (most recent call last):
  File "switch.py", line 24, in <module>
    loop.run_until_complete(switch(args.trigger_id))
  File "/usr/lib/python3.6/asyncio/base_events.py", line 473, in run_until_complete
    return future.result()
  File "switch.py", line 8, in switch
    trigger = await Trigger.objects.get(pk=trigger_id)
  File "/home/foxmask/DjangoVirtualEnv/yeoboseyo/lib/python3.6/site-packages/orm/models.py", line 182, in get
    return await self.filter(**kwargs).get()
  File "/home/foxmask/DjangoVirtualEnv/yeoboseyo/lib/python3.6/site-packages/orm/models.py", line 127, in filter
    column = self.table.columns[key]
  File "/home/foxmask/DjangoVirtualEnv/yeoboseyo/lib/python3.6/site-packages/sqlalchemy/util/_collections.py", line 194, in __getitem__
    return self._data[key]
KeyError: 'pk'
switch.py
# coding: utf-8
import argparse
import asyncio

from yeoboseyo.models import Trigger


async def switch(trigger_id):
    """
    :param trigger_id: trigger id to switch on/off
    :return:
    """
    trigger = await Trigger.objects.get(pk=trigger_id)
    status = not trigger.status
    await trigger.update(status=status)
    print(f"Successfully switched Trigger '{trigger.description}' to {status}")


if __name__ == '__main__':
    print('여보세요! Switch')
    parser = argparse.ArgumentParser(description='Switch status of one trigger')
    parser.add_argument('trigger_id',
                        metavar='N',
                        type=int,
                        help='provide the id of the trigger to switch on/off')
    args = parser.parse_args()
    loop = asyncio.get_event_loop()
    loop.run_until_complete(switch(args.trigger_id))
    loop.close()
and the models
class Trigger(orm.Model):
    __tablename__ = "trigger"
    __database__ = database
    __metadata__ = metadata

    id = orm.Integer(primary_key=True)
    rss_url = orm.String(max_length=255)
    joplin_folder = orm.String(max_length=80)
    description = orm.String(max_length=200)
    date_created = orm.DateTime(default=datetime.datetime.now)
    date_triggered = orm.DateTime(allow_null=True)
    status = orm.Boolean(default=False)
    result = orm.Text(allow_null=True)
    date_result = orm.DateTime(allow_null=True)
    provider_failed = orm.Integer(allow_null=True)
    consumer_failed = orm.Integer(allow_null=True)
If I use trigger = await Trigger.objects.get(id=trigger_id) instead of trigger = await Trigger.objects.get(pk=trigger_id), everything works fine.
Hello, thanks for the good library. Is it possible to allow optional fields in a Model when using Model.objects.create() (i.e. just don't pass those fields to the INSERT INTO)? I can work on this feature.
My use case is this: I have a table that automatically updates/sets some fields (e.g. dt_created, dt_changed) and I would like to not pass datetime.now every time I need to insert a new row. The table SQL is below (PostgreSQL):
CREATE TABLE tokens (
    id bigserial NOT NULL,
    user_id text NOT NULL CHECK(trim(user_id) != ''),
    scope text NOT NULL,
    access text NOT NULL CHECK(trim(access) != ''),
    refresh text NOT NULL CHECK(trim(refresh) != ''),
    expires_at bigint NOT NULL,
    dt_created timestamp with time zone NOT NULL DEFAULT NOW(),
    dt_changed timestamp with time zone NOT NULL DEFAULT NOW(),
    dt_deleted timestamp with time zone NULL,
    PRIMARY KEY (id)
);
Right now I don't even add the dt_* fields to the model, but I would like to have access to them without the need to specify dt_created/dt_changed when calling Model.objects.create().
Model right now:
class Token(Model):
    __tablename__ = "tokens"
    __database__ = database
    __metadata__ = metadata

    id = Integer(primary_key=True)
    user_id = Text()
    scope = Text(allow_blank=True)
    access = Text()
    refresh = Text()
    expires_at = Integer()
Model I would like to use:
class Token(Model):
    __tablename__ = "tokens"
    __database__ = database
    __metadata__ = metadata

    id = Integer(primary_key=True)
    user_id = Text()
    scope = Text(allow_blank=True)
    access = Text()
    refresh = Text()
    expires_at = Integer()
    dt_created = DateTime(optional=True)
    dt_changed = DateTime(optional=True)
    dt_deleted = DateTime(allow_null=True, optional=True)
E.g. if username is a unique field, then User.objects.create(username=...) should ideally detect the conflict and handle it gracefully. This is far more helpful than raising an integrity error.
A Django ORM field can have a choices argument to validate data.
from django.db import models


class Student(models.Model):
    FRESHMAN = 'FR'
    SOPHOMORE = 'SO'
    JUNIOR = 'JR'
    SENIOR = 'SR'
    YEAR_IN_SCHOOL_CHOICES = (
        (FRESHMAN, 'Freshman'),
        (SOPHOMORE, 'Sophomore'),
        (JUNIOR, 'Junior'),
        (SENIOR, 'Senior'),
    )
    year_in_school = models.CharField(
        max_length=2,
        choices=YEAR_IN_SCHOOL_CHOICES,
        default=FRESHMAN,
    )

    def is_upperclass(self):
        return self.year_in_school in (self.JUNIOR, self.SENIOR)
I tried to add a choices argument, but I don't know how to do it right. My code looks like this:
class ChoiceField(ModelField, typesystem.Choice):
    def __init__(self, **kwargs):
        if 'choices' in kwargs:
            choices = kwargs.pop('choices')
            super(typesystem.Choice, self).__init__(choices=choices)
        super().__init__(**kwargs)


class String(ChoiceField, typesystem.String):
    def __init__(self, **kwargs):
        ....

    def validate(self, value, *, strict: bool = False):
        # overload validate
        if self.choices:
            super(typesystem.Choice, self).validate(value)
        return super(typesystem.String, self).validate(value)
Although we can use the Choice schema class to validate choice data, the inheritance makes things confusing, and we have to overload the validate function.
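As a possible alternative to the MRO juggling above, here is a composition-based sketch (pure Python; ChoiceString and ValidationError are hypothetical names, not orm's or typesystem's API) that performs the choices check inside a single validate method:

```python
# Illustrative sketch: validate "choices" by composition rather than
# multiple inheritance, so one validate() method owns all the checks.

class ValidationError(ValueError):
    pass


class ChoiceString:
    def __init__(self, max_length, choices=None):
        self.max_length = max_length
        self.choices = choices

    def validate(self, value):
        # Basic string validation first, then the optional choices check.
        if not isinstance(value, str) or len(value) > self.max_length:
            raise ValidationError(f"expected a string of at most {self.max_length} characters")
        if self.choices is not None and value not in self.choices:
            raise ValidationError(f"{value!r} is not one of {self.choices}")
        return value


year = ChoiceString(max_length=2, choices=("FR", "SO", "JR", "SR"))
print(year.validate("FR"))  # FR
```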
This is labeled as a convenience function, and I found it useful to have around many times when I was working in Django. I think it would be great to duplicate here.
Reference: https://docs.djangoproject.com/en/3.0/ref/models/querysets/#get-or-create
Hi, guys!
I have a question about how to do pagination with the orm package. I see there is issue #30 adding offset support, but it has not been merged yet.
Thanks
Could .update() return the model instance, like .create() does?
Hi, when importing the ORM I get this error:
import orm
  File "/home/foxmask/[...]/lib/python3.6/site-packages/orm/__init__.py", line 2, in <module>
    from orm.fields import Boolean, Integer, Float, String, Text, Date, Time, DateTime, JSON, ForeignKey
  File "/home/foxmask/[...]/lib/python3.6/site-packages/orm/fields.py", line 90, in <module>
    class JSON(ModelField, typesystem.Any):
AttributeError: module 'typesystem' has no attribute 'Any'
$ pip freeze --local |grep orm
orm==0.1.4
Running pip install orm installs the dependencies from setup.py, but once databases is actually used ...
  File "/home/foxmask/[...]/lib/python3.6/site-packages/databases/backends/sqlite.py", line 5, in <module>
    import aiosqlite
ModuleNotFoundError: No module named 'aiosqlite'
I think it's related to https://github.com/encode/orm/blob/master/setup.py#L53, which does not specify any DB backend extra like we do when running pip install databases[sqlite].
Limit which fields are selected with .fields()
There is already a Query class at https://github.com/sqlalchemy/sqlalchemy/blob/master/lib/sqlalchemy/orm/query.py with all the powerful stuff included. I mean it could be based on that.
I have a table that I'm trying to adapt from sqlalchemy in which a column name is the Python builtin open. In my sqlalchemy table, I use open_ = Column("open", Float) so as to avoid shadowing the builtin.
In my orm.Model, I'm unable to find a way to get an equivalent mapping for the mymodel.open_ attribute.
I tried open_ = orm.Float(name="open"), but that isn't a supported kwarg.
open_ = orm.Float(title="open") doesn't work either, resulting in asyncpg.exceptions.UndefinedColumnError: column tiingo_daily_historical.open_
Am I overlooking a way to do this?
Many thanks in advance!
Add support for eg. User.objects.filter(username="tomchristie").exists()
I noticed that if the id of the record does not exist, get will raise an exception, so the calling code needs to try/except it, which is not convenient. I also found some solutions on Stack Overflow (https://stackoverflow.com/questions/3090302/how-do-i-get-the-object-if-it-exists-or-none-if-it-does-not-exist, https://stackoverflow.com/questions/10867138/model-objects-get-or-none). The method obj = Model.objects.filter(id=1).first() did not work because QuerySet does not have a first method.
Finally I made it work using a classmethod get_or_none. A runnable example is below.
import asyncio

import databases
import orm
import sqlalchemy
from orm import NoMatch

database = databases.Database("sqlite:///db.sqlite")
metadata = sqlalchemy.MetaData()


class Note(orm.Model):
    __tablename__ = "notes"
    __database__ = database
    __metadata__ = metadata

    id = orm.Integer(primary_key=True)
    text = orm.String(max_length=100)
    completed = orm.Boolean(default=False)

    @classmethod
    async def get_or_none(cls, **kwargs):
        try:
            return await cls.objects.get(**kwargs)
        except NoMatch:
            return None


async def do_something():
    # Create the database
    engine = sqlalchemy.create_engine(str(database.url))
    metadata.create_all(engine)

    # .create()
    await Note.objects.create(text="Buy the groceries.", completed=False)
    await Note.objects.create(text="Call Mum.", completed=True)
    await Note.objects.create(text="Send invoices.", completed=True)

    # .all()
    notes = await Note.objects.all()

    # .filter()
    notes = await Note.objects.filter(completed=True).all()

    # exact, iexact, contains, icontains, lt, lte, gt, gte, in
    notes = await Note.objects.filter(text__icontains="mum").all()
    print(notes)

    # .get()
    note = await Note.objects.get(id=1)

    # .update()
    await note.update(completed=True)

    # .delete()
    await note.delete()

    # 'id' always refers to the primary key
    note = await Note.objects.get(id=2)
    print(note.id)  # 2

    # get would raise NoMatch here, so we use get_or_none instead
    note = await Note.get_or_none(id=100)
    print(note)  # None


if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(do_something())
    loop.close()
I hope a similar method like get_or_none could be implemented in the core code.
Add support for eg. User.objects.order_by('-created')
It needs to support field_name, -field_name, and multiple fields.
Do you plan to support UUID type and allow it to be used as a primary key?
I just got an exception when I tried to create a new database entry via await SomeModel.objects.create(...):
asyncpg.exceptions.NotNullViolationError: null value in column "id" violates not-null constraint
DETAIL: Failing row contains (null, John, Doe, [email protected], 1997, M, null, f, null, f, 0, 0, null, 0, 0).
Model:
class SomeModel(Model):
    __tablename__ = DATABASE_TABLE
    __database__ = DATABASE
    __metadata__ = DATABASE_METADATA

    id = Integer(primary_key=True)
    # Some other fields
According to the examples it should work without providing the id field manually. Maybe I am missing something?
Sorry for the kind of straightforward question, but has the project become abandoned?
Hello! I've run into an issue using select_related. I've noticed that where a relation is null and the related table is listed in select_related, no record is retrieved for the original model. Using these models for example:
class Track(orm.Model):
    # fields omitted, not relevant


class Post(orm.Model):
    # snip...
    track = orm.ForeignKey(Track, allow_null=True)
Let's say I have two posts with IDs 1 and 2. Post 2 does not have a related track. If I write the query Post.objects.select_related('track').all(), I only get post 1 returned.
However, if the select_from in build_select_expression is modified to include isouter:
select_from = sqlalchemy.sql.join(select_from, model_cls.__table__, isouter=True)
this seems to do the trick.
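A stdlib-only (sqlite3) illustration of why the outer join matters here: an INNER JOIN drops posts whose track is NULL, while a LEFT OUTER JOIN keeps them.

```python
# Minimal sqlite3 demo of INNER JOIN vs LEFT OUTER JOIN with a nullable FK.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE track (id INTEGER PRIMARY KEY);
    CREATE TABLE post (id INTEGER PRIMARY KEY, track INTEGER REFERENCES track(id));
    INSERT INTO track (id) VALUES (1);
    INSERT INTO post (id, track) VALUES (1, 1), (2, NULL);
""")

inner = conn.execute(
    "SELECT post.id FROM post JOIN track ON track.id = post.track ORDER BY post.id"
).fetchall()
outer = conn.execute(
    "SELECT post.id FROM post LEFT JOIN track ON track.id = post.track ORDER BY post.id"
).fetchall()
print(inner)  # [(1,)] -- post 2 (no track) is dropped
print(outer)  # [(1,), (2,)] -- post 2 is kept, with a NULL track
```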
I need an efficient way to add records to a db in bulk. I don't think it matters so much what the API looks like for accomplishing that. If there is an efficient way to do that already, even if it's not a QS-level method, please let me know.
Currently, there is no way to apply constraints spanning two columns, e.g. values may repeat within either column alone, but a duplicated pair should cause a uniqueness violation.
This can be solved by allowing parameters to be passed through to the underlying sqlalchemy table.
This would be very useful in our project.
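A stdlib sqlite3 sketch of the desired behaviour, expressed as a table-level UNIQUE (a, b) constraint, which is the kind of thing that passing parameters through to the underlying sqlalchemy Table would enable:

```python
# Demo: a composite UNIQUE constraint rejects duplicate pairs,
# while each column on its own may repeat freely.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (a TEXT, b TEXT, UNIQUE (a, b))")
conn.execute("INSERT INTO members VALUES ('x', '1')")
conn.execute("INSERT INTO members VALUES ('x', '2')")  # same a, different b: allowed
try:
    conn.execute("INSERT INTO members VALUES ('x', '1')")  # duplicate (a, b) pair
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print("pair rejected:", rejected)
```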
Hello all,
I tried to use the ORM to fetch rows older than 15 minutes:
Post.objects.filter(created_at < time_point).all()
But I got this exception instead:
NameError: name 'created_at' is not defined
How to query like this in ORM?
Best regards
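For what it's worth, the lookup list in the get_or_none example elsewhere in this thread (exact, lt, lte, gt, gte, ...) suggests the Django-style double-underscore spelling, i.e. Post.objects.filter(created_at__lt=time_point).all(). A pure-Python sketch of how such field__lookup keyword arguments map to comparisons (matches and OPS are illustrative, not orm internals):

```python
# Illustrative only: translate Django-style "field__lookup" kwargs into
# comparison operators and test them against a plain dict row.
import operator

OPS = {
    "exact": operator.eq,
    "lt": operator.lt,
    "lte": operator.le,
    "gt": operator.gt,
    "gte": operator.ge,
}

def matches(row, **lookups):
    """Return True if every field__lookup=value condition holds for the row."""
    for key, target in lookups.items():
        field, sep, op = key.rpartition("__")
        if not sep or op not in OPS:  # plain "field=value" means an exact match
            field, op = key, "exact"
        if not OPS[op](row[field], target):
            return False
    return True

post = {"created_at": 5}
print(matches(post, created_at__lt=10))  # True
print(matches(post, created_at__lt=3))   # False
```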
Is cockroachdb supported?
I didn't see any updates after Apr 2020.
There needs to be an efficient way to modify all records matching a QuerySet.
I would imagine this would best be implemented as an update method on the QuerySet that combines the QuerySet's filter_clauses with a set_clause to issue an UPDATE statement.
It would also be useful to have something like Django's bulk_update for efficiently performing more targeted updates, though I have less of a concept of either how Django does it or how it should be done here. I think this is lower priority than just getting update implemented in the first place.
I want to select rows with a query where the values are not None when I've set a field as allow_null=True. It's very possible that I'm missing something and that this can be achieved in another way.
When the primary key is e.g. String, the model instance returned from model.objects.create has its primary key as a sequential integer. Once the instance is fetched again, the primary key is as expected, type and value wise.
Postgres behaviour is similar: the returned instance's primary key in that case is actually None. The same goes for the UUID type. Once fetched again, everything is as expected.
It could be that the problem originates from the databases library, or from incorrect usage of it.
Two copy/paste tests.
import random

import databases
import pytest
import sqlalchemy

import orm
from tests.settings import DATABASE_URL
from tests.test_columns import async_adapter

database = databases.Database(DATABASE_URL, force_rollback=True)
metadata = sqlalchemy.MetaData()


def key():
    return "".join(random.choice("abcdefgh123456") for _ in range(8))


class Model(orm.Model):
    __tablename__ = "model"
    __metadata__ = metadata
    __database__ = database

    id = orm.String(primary_key=True, default=key, max_length=8)
    name = orm.String(max_length=32)


@pytest.fixture(autouse=True, scope="function")
def create_test_database():
    engine = sqlalchemy.create_engine(DATABASE_URL)
    metadata.create_all(engine)
    yield
    metadata.drop_all(engine)


@async_adapter
async def test_pk_1():
    model = await Model.objects.create(name="NAME")
    assert isinstance(model.id, str)


@async_adapter
async def test_pk_2():
    model = await Model.objects.create(name="NAME")
    assert await Model.objects.all() == [model]
Problem: if I create a new model instance and want to save it, I need to use await ModelClass.objects.create(column_name=model.column_name, ...).
Solution: just call something like model.save(), the same way model.update() is used right now. model.save() could also fall back to model.update() if the id is set.
Maybe I am missing something and there is another easy and clean way to add a new row; I am ready to use it or implement one.
I have used this convenience method from Django less frequently than get_or_create (#53), but it would still be nice to have it here too.
Reference: https://docs.djangoproject.com/en/3.0/ref/models/querysets/#update-or-create
Hey! Do you have any plans to use pydantic instead of typesystem, or to add support for it?
Also, it would be great to have the option to use a UUID field and server_default.
Check Travis log for details https://travis-ci.org/encode/orm/jobs/523520208
Add support for eg. User.objects.count()
Hi,
I've heard Tom mention that "orm doesn't have support for M2M yet" (and indeed it does not currently support M2M), so I figured I'd open an issue since there isn't one yet either.
I will be playing around with potential APIs for M2M in orm, and report back here if I find anything. :-)
Hello again. I have an issue with using multiple tables in select_related that results in incorrect SQL being generated.
Using these tables as an example (irrelevant fields omitted):
class User(orm.Model, BaseUser):
    pass


class Track(orm.Model):
    pass


class Post(orm.Model):
    track = orm.ForeignKey(Track, allow_null=True)
    author = orm.ForeignKey(User)
And using this query to illustrate:
posts = await Post.objects.select_related(['track','author']).all()
With this query, some JOIN expressions are dropped from the resulting SQL, so it looks something like this:
SELECT [fields] FROM track, post JOIN "user" ON "user".id = post.author
What I'd expect to see is the track table also getting a JOIN expression, rather than just appearing in the FROM. This results in the incorrect behaviour of, as far as I can make out, joining every combination of track and post together. (I didn't even realise this was valid SQL, so I've learned something.)
I believe the issue is the redefinition of select_from here, in models.py:
def build_select_expression(self):
    tables = [self.table]
    select_from = self.table

    for item in self._select_related:
        model_cls = self.model_cls
        select_from = self.table  # this is the culprit
        for part in item.split("__"):
            model_cls = model_cls.fields[part].to
            select_from = sqlalchemy.sql.join(select_from, model_cls.__table__)
            tables.append(model_cls.__table__)
I think the core problem is the redefinition of select_from on line 76, as this stops the joins from being built up across iterations of the loop. If that line is removed, it seems to solve the issue.
Django's approach leads to ugly code (order_line__variant__product__name__icontains=... etc.), does not allow explicit aliasing, and breaks things like static type checking and autocomplete (PyCharm has tons of Django-specific code just to get basic autocomplete working).
I think it should be possible to use SQLAlchemy's query builder, given that it works in GINO:
https://docs.sqlalchemy.org/en/latest/orm/tutorial.html#common-filter-operators
Any plans to support it?