
datanucleus-rdbms's Introduction

datanucleus-rdbms

DataNucleus support for persistence to RDBMS Datastores. This plugin makes use of JDBC drivers for the datastores supported.

This project is built using Maven, by executing mvn clean install, which installs the built jar in your local Maven repository.

KeyFacts

License : Apache 2 licensed
Issue Tracker : http://github.com/datanucleus/datanucleus-rdbms/issues
Javadocs : 6.0, 5.2, 5.1, 5.0, 4.1, 4.0
Download : Maven Central
Dependencies : See file pom.xml
Support : DataNucleus Support Page

Datastore Adapters

Each supported datastore has an associated "adapter" stored under org.datanucleus.store.rdbms.adapter, so if you plan to add or improve support for an RDBMS database this is the place to look (as well as plugin.xml).

Mappings

DataNucleus RDBMS maps fields to columns using a mapping. Each Java type has a JavaTypeMapping variant under org.datanucleus.store.rdbms.mapping.java, and each of these has one or more ColumnMapping variants (which map approximately onto JDBC types) under org.datanucleus.store.rdbms.mapping.column.

Later DataNucleus releases make less use of these mappings and more use of the internal TypeConverter mechanism, so mappings now exist only for some of the more basic types, or where complicated handling is required, with the remaining types using TypeConverterMapping.
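
As a rough illustration of the TypeConverter route, here is a minimal sketch of a converter between a Java type and a String column, assuming the TypeConverter interface from datanucleus-core with toDatastoreType/toMemberType methods; the class name and how the converter gets registered are not taken from this project.

import java.util.Currency;
import org.datanucleus.store.types.converters.TypeConverter;

// Hedged sketch only: converts java.util.Currency to its ISO code String for the column,
// so a generic TypeConverterMapping (rather than a dedicated JavaTypeMapping) can handle it.
public class CurrencyISOStringConverter implements TypeConverter<Currency, String>
{
    public String toDatastoreType(Currency memberValue)
    {
        return memberValue != null ? memberValue.getCurrencyCode() : null;
    }

    public Currency toMemberType(String datastoreValue)
    {
        return datastoreValue != null ? Currency.getInstance(datastoreValue) : null;
    }
}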

SQL Generation

All SQL generated by datanucleus-rdbms uses an SQL API, located under org.datanucleus.store.rdbms.sql. There you have classes representing SELECT, UPDATE, DELETE and INSERT statements, providing an API for building the SQL.

Table Groups

A table group is a group of tables in the SQL statement. The SQL statement is composed of one or more table groups, and a table group equates to an object in an object-based query language. For example, the candidate object is in the first table group; when a relation is navigated, the related object is in its own table group; and so on. All tables down an inheritance tree use the same table group. By that we mean that if you have a class Person and a class Employee which extends Person, each with its own table in the datastore, then when referring to the candidate object of type Employee the tables PERSON and EMPLOYEE will be in the same table group.
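
To make the Person/Employee case concrete, here is a hedged sketch of such a model using standard JDO annotations (the class and field names are illustrative, not taken from this project); each class has its own table, yet a query with Employee as candidate places both tables in one table group.

import javax.jdo.annotations.Inheritance;
import javax.jdo.annotations.InheritanceStrategy;
import javax.jdo.annotations.PersistenceCapable;

@PersistenceCapable(table = "PERSON")
class Person
{
    String name;
}

@PersistenceCapable(table = "EMPLOYEE")
@Inheritance(strategy = InheritanceStrategy.NEW_TABLE)
class Employee extends Person
{
    double salary;
}

// A query with Employee as the candidate joins PERSON and EMPLOYEE; both aliases
// (e.g. A0 and A1 under the default alpha-scheme described below) share one table group.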

Table Naming

With the SQLStatement API a developer can define the aliases of tables in the SQL statement. If no alias is defined then one is generated using a DataNucleus extension. The plugin-point org.datanucleus.store.rdbms.sql_tablenamer defines an interface to be implemented by plugins for naming of tables. The default option is alpha-scheme.

alpha-scheme will name tables based on the table group they are in and the number of the table within that group. So you will get table aliases like A0, A1, A2, B0, B1, C0, D0. In this case we have a candidate object in the query with 3 tables (A0, A1, A2) and relations to an object with 2 tables (B0, B1) and these have relations to other objects with a single table (C0), (D0).

t-scheme will name tables based on the table number in the statement as a whole and doesn't use the table group for anything. So you get table aliases like T0, T1, T2, T3, T4, T5, etc.

table-name will use the table name instead of an alias. Clearly this will not work if you have joins to the same table name, but is useful in situations where, for example, the RDBMS doesn't support aliases in an UPDATE/DELETE statement.

To define which namer plugin to use, set the extension table-naming-strategy on the SQLStatement, or set the persistence property datanucleus.rdbms.sqlTableNamingStrategy at construction of the PMF/EMF.
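
For example, selecting the namer globally at PMF construction might look like the following (a hedged sketch; the property name is the one above, while the connection settings are placeholders and other required properties are omitted).

import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManagerFactory;

Properties props = new Properties();
props.setProperty("javax.jdo.option.ConnectionURL", "jdbc:h2:mem:test");    // placeholder datastore
props.setProperty("datanucleus.rdbms.sqlTableNamingStrategy", "t-scheme");  // or "alpha-scheme" / "table-name"
PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);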


datanucleus-rdbms's Issues

Split RDBMSManager ClassAdder into separate class(es) to modularise schema management

The RDBMSManager schema management process is embodied in the "ClassAdder" process. This operates in its own transaction using a separate connection. The structure of RDBMSManager is overly complex as a result. We should simply provide a separate process giving it access to the key schema information, and have methods to do specific things

  1. add class/classes to persistence
  2. delete class from persistence
  3. update a mapping in the datastore with new columns, e.g. when a new implementation of an interface is found.

Optimizing the fetch of map fields (1:N relations) could significantly reduce the number of SQL statements

The load of a Map can currently involve more than one SQL statement. This is embodied in SCOUtils.populateMapDelegateWithStoreData().
In this method we read in the keys (if persistable), then the values (if persistable), and then the "entries" (ids of keys and values) so we can associate the keys with the values. For a Map<Persistable, Persistable> this means 3 SQL statements.

Issue 282 also had the following, which is effectively the same area.
When we have a Map and want to get the entries (Map.entrySet()), we currently select the "map table". When using a join table to form the relation this will be the join table. When the key / value has its own table we simply have a FK to the key table or value table respectively. We don't join across right now (although there is some code in there that doesn't work for all situations).
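
As a hedged illustration of the 3-statement case described above, a join-table map between two persistable classes could be modelled as follows (all names here are hypothetical).

import java.util.HashMap;
import java.util.Map;
import javax.jdo.annotations.Join;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;

@PersistenceCapable
class Account
{
    // Map<Persistable, Persistable> held via a join table: loading it currently reads
    // the keys, then the values, then the "entries" rows, i.e. three SQL statements.
    @Persistent
    @Join
    Map<Customer, Address> contacts = new HashMap<>();
}

@PersistenceCapable
class Customer { String name; }

@PersistenceCapable
class Address { String street; }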

Query handling of CAST only caters for some situations, and doesn't always add a discriminator clause

If we have something like
class Base (SINGLE_TABLE, with DISCRIMINATOR)
class Sub extends Base (SUPERCLASS_TABLE)

and we have a query like

SELECT b FROM Base b WHERE TREAT(b AS Sub).someField = value

then this currently ignores the TREAT (cast) whereas it should add a DISCRIMINATOR clause.

The problem is that we can only add the discriminator clause to a BooleanExpression ... i.e. let it propagate back up to the
{...}.someField = value
and add the discriminator constraint there.
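
For reference, a hedged sketch of the inheritance mapping being described, using standard JPA annotations (the discriminator column name and field names are illustrative only).

import javax.persistence.DiscriminatorColumn;
import javax.persistence.DiscriminatorValue;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Inheritance;
import javax.persistence.InheritanceType;

@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "DTYPE")
@DiscriminatorValue("BASE")
class Base
{
    @Id long id;
}

@Entity
@DiscriminatorValue("SUB")
class Sub extends Base
{
    String someField;
}

// For a filter like TREAT(b AS Sub).someField = :value the generated SQL should also
// restrict the discriminator, e.g. "... AND DTYPE = 'SUB'", which is currently missing.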

Wrong SQL is generated for slightly complex logic containing an 'OR' condition

I've encountered a bug. This only happens if the query is slightly complex.
The abstract pattern of the query is:

predicateA && (predicateB || predicateC)

Where predicateB is a 'contains' statement, and predicateC is a combination of three additional conditions. If I query for predicateA && predicateB && predicateC, it works.
Please check the attached test case (test-datanucleus.zip).

I have extracted the problem following the same data structure as my real project.

The problem I faced is the SQL generated from JDO is incorrect.

I tried to replicate the same scenario in the test case, but there it raises other problems.

  1. If running 'mvn clean compile test' as specified in the template you provided, the tables generated in the database differ from those generated by the second way below

  2. Generate with my ant script (running datanucleus_schema_tool in my attached 'mybuild.xml')

My expectation is that the generated tables should look the same as with the second way. We use the ant script to generate the tables in our production.

Because I defined some fields as type ArrayList and also specified them in the jdo file, the join table is not generated with method 1).

Anyway, this also seems to be a problem.

When running the test, it gives the following error:

Exception thrown when executing query : SELECT DISTINCT 'mydomain.model.Watch' AS NUCLEUS_TYPE,A0.KEY,A0.NAME FROM WATCH A0 CROSS JOIN STRATEGY VAR_VARTOPSTRATEGY WHERE A0.COMPANY_MANUFACTURE_KEY_EID = ? AND ((EXISTS (SELECT 1 FROM STRATEGY A0_SUB WHERE A0_SUB.PRIMARYSTRATEGIES_KEY_OWN = A0.KEY AND A0_SUB.KEY = ?)) OR (EXISTS (SELECT 1 FROM STRATEGY A0_SUB INNER JOIN STRATEGY B0_SUB ON A0_SUB.KEY = B0_SUB.KEY WHERE A0_SUB.SECONDARYSTRATEGIES_KEY_OWN = A0.KEY) AND EXISTS (SELECT 1 FROM STRATEGY A0_SUB WHERE A0_SUB.DOWNLINEMARKETSTRATEGIES_KEY_OWN = VAR_VARTOPSTRATEGY.KEY AND A0_SUB.KEY = B0_SUB.KEY) AND VAR_VARTOPSTRATEGY.KEY = ?))

I believe my data structure is clearly defined.

Ray added a comment - 20/Jul/15 04:32 PM
Please check out the test case.

Andy Jefferson added a comment - 05/Aug/15 09:59 AM
Attached is your testcase using the DataNucleus template.
Moved the package.xxx file to src/main/resources (where all non-source files have to be with Maven).
Changed the package.xxx file to be an ORM file since that will override what is in all annotations.
Added a persistence property.
Includes the log file obtained when running "mvn clean test".

All passes.

Andy Jefferson added a comment - 05/Aug/15 10:02 AM
Can't see any issue.
If you don't get join tables then look in the log and work out why ... some metadata is not overriding the higher level, so put it in the ORM file (which is where it should be anyway IMHO).
If you start up a PMF and want all tables to be known about then you either use an auto-start, or persistence.xml and specify the persistence property datanucleus.persistenceUnitLoadClasses as per the docs.

I see no exception from the query.

Ray added a comment - 05/Aug/15 02:54 PM
I see your changes, and also that you changed the version to 4.0.0-release.

I just downloaded your modified test case and tried to run it; it throws an exception (no javax.jdo.xxx).

I added a dependency

<dependency>
    <groupId>org.datanucleus</groupId>
    <artifactId>javax.jdo</artifactId>
    <version>3.2.0-m1</version>
</dependency>

But when I run "mvn clean test" it still throws exceptions:

Nested Throwables StackTrace:
java.lang.NullPointerException
at org.datanucleus.api.jdo.metadata.JDOAnnotationReader.processMemberAnnotations(JDOAnnotationReader.java:1083)
at org.datanucleus.metadata.annotations.AbstractAnnotationReader.getMetaDataForClass(AbstractAnnotationReader.java:225)
at org.datanucleus.metadata.annotations.AnnotationManagerImpl.getMetaDataForClass(AnnotationManagerImpl.java:167)
at org.datanucleus.metadata.MetaDataManagerImpl.loadAnnotationsForClass(MetaDataManagerImpl.java:2793)
at org.datanucleus.metadata.MetaDataManagerImpl.loadPersistenceUnit(MetaDataManagerImpl.java:1075)
at org.datanucleus.enhancer.DataNucleusEnhancer.getFileMetadataForInput(DataNucleusEnhancer.java:782)
at org.datanucleus.enhancer.DataNucleusEnhancer.enhance(DataNucleusEnhancer.java:500)
at org.datanucleus.enhancer.DataNucleusEnhancer.main(DataNucleusEnhancer.java:1152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.datanucleus.maven.AbstractDataNucleusMojo.executeInJvm(AbstractDataNucleusMojo.java:331)
at org.datanucleus.maven.AbstractEnhancerMojo.enhance(AbstractEnhancerMojo.java:281)
at org.datanucleus.maven.AbstractEnhancerMojo.executeDataNucleusTool(AbstractEnhancerMojo.java:81)

You said you ran the test and saw no errors. Did you see the log line

">> Watch result set = 2" ?

JDOQL : "JDOHelper.getObjectId" ought to select the discriminator when present

Currently when a query has JDOHelper.getObjectId it just considers the identity value and not the class. It should also have a mapping for the discriminator, or equivalent (when using union). In particular

  1. when in the SELECT clause
  2. when in the WHERE clause

Need to add use-cases that this is aimed at, since in some cases it would not apply.
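
A hedged example of case 1 (the SELECT clause), using an illustrative candidate class, where the identity alone does not distinguish the concrete subclass.

import java.util.List;
import javax.jdo.Query;

// Sketch only: assumes an open PersistenceManager "pm"; "mydomain.Payment" is a hypothetical
// candidate with subclasses sharing a table/discriminator (or a UNION of tables).
Query q = pm.newQuery("SELECT JDOHelper.getObjectId(this) FROM mydomain.Payment");
List<Object> ids = (List<Object>) q.execute();
// Each returned id should ideally carry the discriminator (or equivalent union type info)
// so the concrete class is known, both here and when getObjectId is used in a WHERE clause.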

NUCRDBMS-1012 updates to join table PK creation were incomplete. Need further improvement

The problem arises from the fact that the ownerMapping is being created first, and at that point it thinks a PK is required. Later on the elementMapping is created and it then knows not to have a PK, but ColumnImpl has no method to disable the PK on the owner.

A workaround for this issue, for those using 5.0.0-m2, is to add metadata that turns off the PK for ALL join table columns.

SQLServer : Handle limiting of SELECTs for SQLServer < 2012 - support ROW_NUMBER() OVER() for range restriction on SELECTs

Seems to be like DB2
http://msdn.microsoft.com/en-us/library/ms186734.aspx

This comment shows how the generated SQL could look.

Using the ROW_NUMBER() and OVER() functions requires either an order clause or a partition clause.
For this use-case the order clause should be defined. I'd suggest ordering by the primary key if there is no ordering present in the JDOQL/JPQL, and otherwise using the user-defined ordering.

Note that the ROW_NUMBER() function is 1-based, so to express the JDOQL range (fromIncl, toExcl) one could use

WHERE RowNumber > fromIncl AND RowNumber <= toExcl

or

-- SQLServer between is inclusive
RowNumber BETWEEN fromIncl+1 AND toExcl

The actual query would use an inline-view either expressed using the WITH keyword or as sub-query for the from-expression.

-- Example using WITH:

WITH OrderedQuery AS
(
SELECT t.Field1, t.Field2, t.Field3,
ROW_NUMBER() OVER (ORDER BY t.IdField) AS 'RowNumber'
FROM "dbo"."tblFirmenAnsprechpartner" t
)
SELECT *
FROM OrderedQuery
WHERE RowNumber > fromIncl AND RowNumber <= toExcl

-- ###################################

-- Example using from-sub-query

SELECT * FROM
(
SELECT t.Field1, t.Field2, t.Field3,
ROW_NUMBER() OVER (ORDER BY t.IdField) AS 'RowNumber'
FROM "dbo"."tblFirmenAnsprechpartner" t
) AS sub
WHERE RowNumber > fromIncl AND RowNumber <= toExcl

SELECT * FROM
(
SELECT t.Field1, t.Field2, t.Field3,
ROW_NUMBER() OVER (ORDER BY t.UserOrderField1, t.UserOrderField2 DESC) AS 'RowNumber'
FROM "dbo"."tblFirmenAnsprechpartner" t
) AS sub
WHERE RowNumber BETWEEN fromIncl+1 AND toExcl

Unfortunately I was not able to find out which SQLServer versions support this.
I've successfully tested with SQLServer 2008 and found some posts saying it will work at least down to SQLServer 2000.

Hope this was useful.

Note that this is for SQLServer prior to 2012. With SQLServer 2012 there is support for the SQL:2008 standard OFFSET/FETCH, which is added by NUCRDBMS-733.

Map.values() SetStore using join table doesn't allow for NULL values in a map

If we have a Map<Simple, PC> with the map stored in a join table, such as in jdo/identity HashMapTest.testNormalPutNullValues we get a DB structure like this

CREATE TABLE HASHMAP1
(
IDENTIFIERA INTEGER NOT NULL,
IDENTIFIERB VARCHAR(255) NOT NULL,
CONSTRAINT HASHMAP1_PK PRIMARY KEY (IDENTIFIERA,IDENTIFIERB)
)

CREATE TABLE HASHMAP1_ITEMS
(
IDENTIFIERA_OID INTEGER NOT NULL,
IDENTIFIERB_OID VARCHAR(255) NOT NULL,
"KEY" VARCHAR(255) NOT NULL,
IDENTIFIERA_VID INTEGER NULL,
IDENTIFIERB_VID VARCHAR(255) NULL,
CONSTRAINT HASHMAP1_ITEMS_PK PRIMARY KEY (IDENTIFIERA_OID,IDENTIFIERB_OID,"KEY")
)

CREATE TABLE CONTAINERITEM
(
IDENTIFIERA INTEGER NOT NULL,
IDENTIFIERB VARCHAR(255) NOT NULL,
...
CONSTRAINT CONTAINERITEM_PK PRIMARY KEY (IDENTIFIERA,IDENTIFIERB)
)

The persist step issues
INSERT INTO HASHMAP1_ITEMS (IDENTIFIERA_VID,IDENTIFIERB_VID,IDENTIFIERA_OID,IDENTIFIERB_OID,"KEY")
VALUES (,,<-604059026>,<'-290476856'>,<'Key2'>)
INSERT INTO HASHMAP1_ITEMS (IDENTIFIERA_VID,IDENTIFIERB_VID,IDENTIFIERA_OID,IDENTIFIERB_OID,"KEY")
VALUES (,,<-604059026>,<'-290476856'>,<'Key1'>)
INSERT INTO HASHMAP1_ITEMS (IDENTIFIERA_VID,IDENTIFIERB_VID,IDENTIFIERA_OID,IDENTIFIERB_OID,"KEY")
VALUES (,,<-604059026>,<'-290476856'>,<'Key5'>)
INSERT INTO HASHMAP1_ITEMS (IDENTIFIERA_VID,IDENTIFIERB_VID,IDENTIFIERA_OID,IDENTIFIERB_OID,"KEY")
VALUES (,,<-604059026>,<'-290476856'>, <'Key4'>)
INSERT INTO HASHMAP1_ITEMS (IDENTIFIERA_VID,IDENTIFIERB_VID,IDENTIFIERA_OID,IDENTIFIERB_OID,"KEY")
VALUES (,,<-604059026>,<'-290476856'>, <'Key3'>)

and then if we try MapField.values().iterator() it generates

SELECT 'org.jpox.samples.types.container.ContainerItem ' AS NUCLEUS_TYPE,A0.IDENTIFIERA,A0.IDENTIFIERB,A0."NAME",A0.STATUS,A0."VALUE"
FROM CONTAINERITEM A0
INNER JOIN HASHMAP1_ITEMS B0 ON A0.IDENTIFIERA = B0.IDENTIFIERA_VID AND A0.IDENTIFIERB = B0.IDENTIFIERB_VID
WHERE B0.IDENTIFIERA_OID = <-604059026> AND B0.IDENTIFIERB_OID = <'-290476856'>

Since it is selecting the VALUE table it cannot find the NULL values. It would need to select the JOIN table and join across to the value table.

The entrySet() operation works fine

Remove is not cascading, instead printing warning

I have defined my entities like this:

@Entity
public class Person {
    // id and other stuff
    @ManyToOne(cascade = CascadeType.ALL)
    private Address address;
}
@Entity
public class Address {
    // id and other stuff
}

When I am doing a delete of a person, I expect that the address is deleted, too. But instead, when I am doing

entityManager.remove(person);

I get a warning:

Jun 15, 2016 6:50:16 PM org.datanucleus.store.rdbms.mapping.java.PersistableMapping preDelete
WARNING: Delete of my.sample.Person@50669be4 needs delete of related object at my.sample.Person.address but cannot delete it direct since FK is here

My persistence.xml looks like this:

<persistence version="2.1" xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
    <persistence-unit name="my-sample-pu" transaction-type="RESOURCE_LOCAL">
        <provider>org.datanucleus.api.jpa.PersistenceProviderImpl</provider>
        <class>my.sample.Address</class>
        <class>my.sample.Person</class>
        <properties>
            <property name="javax.persistence.jdbc.url" value="jdbc:postgresql://localhost:5432/sample"/>
            <property name="javax.persistence.jdbc.user" value="sample"/>
            <property name="javax.persistence.jdbc.password" value="sample"/>
            <property name="javax.persistence.jdbc.driver" value="org.postgresql.Driver"/>
            <property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
        </properties>
    </persistence-unit>
</persistence>

PostgreSQL : Support Enum persisted as datastore enum

Support persisting of Java enums as database enums.

PostgreSQL: http://www.postgresql.org/docs/9.3/static/datatype-enum.html
MySQL: https://dev.mysql.com/doc/refman/5.0/en/enum.html

Not supported by all DBs though; Firebird, H2, SQLServer and Oracle, for example, do not have enum types.

The preferred handling is to use a CHECK constraint on a VARCHAR column, as per https://stackoverflow.com/a/9366855/8558216, and DataNucleus already supports that via the extension "enum-check-constraint" specified on the ColumnMetaData.
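
A hedged sketch of that preferred handling via the column extension (the enum, class and field names are illustrative, and the extension value shown is an assumption; only the extension key comes from the metadata mentioned above).

import javax.jdo.annotations.Column;
import javax.jdo.annotations.Extension;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;

@PersistenceCapable
class Account
{
    enum Status { ACTIVE, SUSPENDED, CLOSED }

    // Persist the enum as a VARCHAR and ask DataNucleus to add a CHECK constraint
    // restricting the column to the enum's values.
    @Persistent
    @Column(jdbcType = "VARCHAR",
            extensions = {@Extension(vendorName = "datanucleus", key = "enum-check-constraint", value = "true")})
    Status status;
}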

Support definition of foreign keys in CREATE TABLE statement

Some RDBMS allow specification of FOREIGN KEYs at the end of a CREATE TABLE statement. We should allow it.

CREATE TABLE TBL1
(
ID INT NOT NULL,
COL1 INT,
FOREIGN KEY (COL1) REFERENCES TBL2 (COLX)
)

The complication is that the related table needs to exist, so we would need to introduce ordering into table creation, whereas without this we don't (we just send the CREATE FK statement once both tables exist).

Add support for bulk-fetch of "joined"

We provide support for "bulk-fetch" of multi-valued fields when the user executes a query and the multi-valued field is in the fetch plan. Our current support involves issuing a
SELECT ... FROM element WHERE EXISTS (SELECT id FROM owner WHERE element.owner_id = owner.id AND (where clause of query))

We should also allow the user to request an (INNER) JOIN rather than EXISTS. This would involve applying the query WHERE clause directly to the element SELECT somehow, so will likely involve some modification to QueryToSQLMapper.

Wrong transactionalVersion for hollow objects leads to NucleusOptimisticException

While analysing incorrect NucleusOptimisticExceptions in my application I detected this bug:

Preconditions: Optimistic locking with VersionStrategy.VERSION_NUMBER. Version field mapped to class member. Reference to other object obj.other. Object not in L2 cache.

Accessing obj.other for the hollow object obj sets the transactionalVersion of obj to 0, which leads to a NucleusOptimisticException at commit.
See the attached test case (test-jdo(1).zip).

Error moving items between persistent Lists with javax.jdo.option.Optimistic=true and datanucleus.cache.level2.type=none and javax.jdo.annotations.Order(mappedBy="pos")

I try to move items from one persistent object's List to the List of another persistent object.
With
javax.jdo.option.Optimistic=true and
datanucleus.cache.level2.type=none and
an index mapping with javax.jdo.annotations.Order(mappedBy="pos")
there seems to be a problem with this move.
Some of the items lose their connection to the parent List. They are not contained in the source List and not contained in the destination List either.
See the attached test case (test-jdo.zip).

JDOQL : Support compilation of ":param.collField.contains()" for (implicit) named parameters

If we want to do something like

SELECT FROM MyClass WHERE :myObj.someField.contains(this.field)

and then try to compile it, we get a message like

org.datanucleus.store.rdbms.sql.expression.IllegalExpressionOperationException: Cannot perform operation ".contains" on org.datanucleus.store.rdbms.sql.expression.NullLiteral@2ea41516
at org.datanucleus.store.rdbms.sql.expression.SQLExpression.invoke(SQLExpression.java:601)
at org.datanucleus.store.rdbms.query.QueryToSQLMapper.processInvokeExpression(QueryToSQLMapper.java:3585)
at org.datanucleus.query.evaluator.AbstractExpressionEvaluator.compilePrimaryExpression(AbstractExpressionEvaluator.java:213)
at org.datanucleus.query.evaluator.AbstractExpressionEvaluator.compileUnaryExpression(AbstractExpressionEvaluator.java:182)
at org.datanucleus.query.evaluator.AbstractExpressionEvaluator.compileAdditiveMultiplicativeExpression(AbstractExpressionEvaluator.java:161)
at org.datanucleus.query.evaluator.AbstractExpressionEvaluator.compileRelationalExpression(AbstractExpressionEvaluator.java:136)
at org.datanucleus.query.evaluator.AbstractExpressionEvaluator.compileOrAndExpression(AbstractExpressionEvaluator.java:78)
at org.datanucleus.query.evaluator.AbstractExpressionEvaluator.evaluate(AbstractExpressionEvaluator.java:46)
at org.datanucleus.query.expression.Expression.evaluate(Expression.java:338)
at org.datanucleus.store.rdbms.query.QueryToSQLMapper.compileFilter(QueryToSQLMapper.java:495)
at org.datanucleus.store.rdbms.query.QueryToSQLMapper.compile(QueryToSQLMapper.java:416)
at org.datanucleus.store.rdbms.query.JDOQLQuery.compileQueryFull(JDOQLQuery.java:918)
at org.datanucleus.store.rdbms.query.JDOQLQuery.compileInternal(JDOQLQuery.java:344)
at org.datanucleus.store.query.Query.compile(Query.java:1669)

This is because the parameter doesn't have a value at the point of compilation, hence you have the equivalent of an NPE but in a query. Need a better way of handling this

relation-discriminator extension should work with N-1 relationships

relation-discriminator-column should apply to n-1 join tables. Currently, only ElementContainerTable deals with it, which means that the parent must have a Collection of children for it to work. Since N-1 is a subset of N-M, I think it should be fairly trivial to implement this issue

When asked to implement it due to it "being trivial", the response was along the lines of "oh, I can't do things like that". Raised on the old DN forum.

Support SoftDelete of objects

Support for core-180.
Allow soft deletion of objects by configuring in metadata that deleted instances of class/interface X are soft-deleted.

e.g.

@PersistenceCapable
@SoftDelete
public class MyClass
{
    ...
}

We would need to add a surrogate column for the soft-delete flag (boolean), update retrieval of objects to check on the flag, and change DeleteRequest to do an UPDATE setting the flag.
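
For comparison, a hedged sketch of what applications typically do by hand today, and which the surrogate column plus the modified DeleteRequest would automate (the field and filter names are illustrative).

import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;

@PersistenceCapable
class MyClass
{
    // Manual soft-delete flag: every "delete" becomes an update of this field, and
    // every query has to add a filter such as "deleted == false" by hand.
    @Persistent
    boolean deleted = false;
}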

Bulk Fetch : Low performance SQL are generated for Objects that are having lists of other objects

Description
For a one-to-many relationship, after migrating to DataNucleus 4 the JDO queries have very low performance. DataNucleus brings all sub-objects into memory, and for tables with millions of entries this takes the CPU to 100%.

Model example

@PersistenceCapable(detachable = "true")
public class Activity {
    @Persistent(valueStrategy = IdGeneratorStrategy.UUIDHEX)
    @Column(jdbcType = "VARCHAR", length = 32)
    private String encodedKey;

    @Persistent(defaultFetchGroup = "true")
    @Element(dependent = "true")
    private List<FieldChangeItem> fieldChanges = null;
.....
}

When fetching the activities without any filter, all field changes are loaded into memory.
Generated SQL that is causing problems:

SELECT 
    '....model.FieldChangeItem' AS `NUCLEUS_TYPE`,
    ....
FROM
    `FIELDCHANGEITEM` `A0`
WHERE
    `A0`.`FIELDCHANGES_INTEGER_IDX` >= 0
        AND EXISTS( SELECT 
            '.....model.Activity' AS `NUCLEUS_TYPE`,
                `A0_SUB`.`ENCODEDKEY` AS `DN_APPID`
        FROM
            `ACTIVITY` `A0_SUB`
        WHERE
            `A0`.`FIELDCHANGES_ENCODEDKEY_OWN` = `A0_SUB`.`ENCODEDKEY`)
ORDER BY `NUCORDER0`

Note
The problem is present for defined fetch groups as well, not only for defaultFetchGroup = "true".

Used Workaround
Removed defaultFetchGroup = "true" and made fetching of this list lazy and programmatic. In this case, queries filtered by the parent key were generated when the field change items were fetched.
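
A hedged sketch of that workaround applied to the model above (annotations as in the original example; whether further fetch-plan tuning is needed depends on the application).

    // Keep the list out of the default fetch group so the main query no longer pulls
    // every FieldChangeItem into memory; it is then loaded per owner on first access.
    @Persistent(defaultFetchGroup = "false")
    @Element(dependent = "true")
    private List<FieldChangeItem> fieldChanges = null;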

Clarification regarding connection pool sizes

Apache uses DataNucleus RDBMS libs (version 3.2.9), with BoneCP connection pooling (0.8.0-RELEASE). I am trying to find out what the default connection pool size is.
I dug around a bit in both the DataNucleus code and the BoneCP code, and what I see in reality does not match the logic in the code.
Using "datanucleus.connectionpool.maxPoolSize" seems to have no effect on the max number of connections to the backend DB. Using "datanucleus.connectionpool.minPoolSize", the total number of connections to the backend DB is 2x the value set for this property.
I have played around with these values a little bit in a number of different clusters in-house and the observations were consistent.
For example:
minPoolSize=10, maxPoolSize=15: total number of connections to the backend DB was 20.
minPoolSize=20, maxPoolSize=30: total number of connections to the backend DB was 40.
Without setting these values, I see the total number of connections to the DB is 10.

In a different environment,
the default number of connections was 14;
when minPoolSize=12 and maxPoolSize=30, we observed a total of 42 connections to the backend DB.

Looking at the code, neither behavior makes any sense.

The code seems to be using the minPoolSize and maxPoolSize values to set setMinConnectionsPerPartition()/setMaxConnectionsPerPartition() on the DataStore.

https://github.com/wwadge/bonecp/blob/74bc3287025fc137ca28909f0f7693edae37a15d/bonecp/src/main/java/com/jolbox/bonecp/BoneCPConfig.java#L64
https://github.com/wwadge/bonecp/blob/74bc3287025fc137ca28909f0f7693edae37a15d/bonecp/src/main/java/com/jolbox/bonecp/BoneCPConfig.java#L66
https://github.com/wwadge/bonecp/blob/74bc3287025fc137ca28909f0f7693edae37a15d/bonecp/src/main/java/com/jolbox/bonecp/BoneCPConfig.java#L70

So the default values are:
partitionCount=1 (the recommended value for partitionCount is 2-4)
minConnectionsPerPartition=1
maxConnectionsPerPartition=2

So when using defaults across the board, we should only see a max of 2 connections to the backend DB. I am seeing 10 in my env and 14 in a customer env.

When minPoolSize=12, maxPoolSize=30, we should see a max of 30 connections, but we are seeing 42.
When minPoolSize=10, maxPoolSize=15, we should see a max of 15, but we are seeing 20 even under no-load scenarios.

Could you please explain what is being enforced and what is not? Thanks.
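
For reference, a hedged sketch of how these pool properties are typically supplied at PMF construction (the property names are the ones discussed above; the pool type value and the omission of connection settings are assumptions).

import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManagerFactory;

Properties props = new Properties();
props.setProperty("datanucleus.connectionpool.type", "BoneCP");      // assumed value; check the docs for your version
props.setProperty("datanucleus.connectionpool.minPoolSize", "10");
props.setProperty("datanucleus.connectionpool.maxPoolSize", "15");
// ... connection URL/driver/user properties omitted ...
PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);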

Support a class that references itself with a compound identity declaring the self-reference as part of the primary key

The log of the test :
10:08:19,171 (main) DEBUG [DataNucleus.Datastore.Schema] - Column "pymodule.name" added to internal representation of table.
10:08:19,171 (main) DEBUG [DataNucleus.Datastore.Schema] - Field [metamicro.jet.core.persistency.modules.PyModule.name] -> Column(s) [pymodule.name] using mapping of type "org.datanucleus.store.mapped.mapping.StringMapping" (org.datanucleus.store.rdbms.mapping.VarCharRDBMSMapping)
10:08:19,171 (main) DEBUG [DataNucleus.Datastore.Schema] - An error occurred while auto-creating schema elements - rolling back
10:08:19,171 (main) ERROR [DataNucleus.SchemaTool] - An exception was thrown during the operation of SchemaTool. Please refer to the log for full details. The following may help : Unsupported relationship with field metamicro.jet.core.persistency.modules.PyModule.parent
Unsupported relationship with field metamicro.jet.core.persistency.modules.PyModule.parent
org.datanucleus.exceptions.NucleusException: Unsupported relationship with field metamicro.jet.core.persistency.modules.PyModule.parent
at org.datanucleus.store.rdbms.table.ClassTable.initializePK(ClassTable.java:1024)
at org.datanucleus.store.rdbms.table.ClassTable.preInitialize(ClassTable.java:252)
at org.datanucleus.store.rdbms.RDBMSManager$ClassAdder.addClassTable(RDBMSManager.java:2563)
at org.datanucleus.store.rdbms.RDBMSManager$ClassAdder.addClassTables(RDBMSManager.java:2354)
at org.datanucleus.store.rdbms.RDBMSManager$ClassAdder.addClassTablesAndValidate(RDBMSManager.java:2625)
at org.datanucleus.store.rdbms.RDBMSManager$ClassAdder.run(RDBMSManager.java:2279)
at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:113)
at org.datanucleus.store.rdbms.RDBMSManager.addClasses(RDBMSManager.java:912)
at org.datanucleus.store.rdbms.SchemaTool.createSchema(SchemaTool.java:673)
at org.datanucleus.store.rdbms.SchemaTool.main(SchemaTool.java:289)
jet-core-persistency-modules.jdo.gz

Map<PC, PC> with dependent key/value on delete of owner when optimistic txn can cause error

Look at "test.jdo.application" DependentFieldTest "testDependentFieldsInverseMapsDeletion".
This passes when using pessimistic transactions, but when switching to optimistic it causes

testDependentFieldsInverseMapsDeletion(org.datanucleus.tests.DependentFieldTest) Time elapsed: 0.463 sec <<< ERROR!
javax.jdo.JDOUserException: Cannot write fields to a deleted object
FailedObject:2
at org.datanucleus.api.jdo.state.PersistentDeleted.transitionWriteField(PersistentDeleted.java:126)
at org.datanucleus.state.AbstractStateManager.transitionWriteField(AbstractStateManager.java:584)
at org.datanucleus.state.JDOStateManagerImpl.preWriteField(JDOStateManagerImpl.java:4662)
at org.datanucleus.state.JDOStateManagerImpl.setObjectField(JDOStateManagerImpl.java:2625)
at org.datanucleus.state.JDOStateManagerImpl.setObjectField(JDOStateManagerImpl.java:2521)
at org.datanucleus.store.mapped.scostore.FKMapStore.removeValue(FKMapStore.java:701)
at org.datanucleus.store.mapped.scostore.FKMapStore.remove(FKMapStore.java:658)
at org.datanucleus.store.mapped.scostore.FKMapStore.clear(FKMapStore.java:734)
at org.datanucleus.store.types.sco.queued.ClearMapOperation.perform(ClearMapOperation.java:35)
at org.datanucleus.store.types.sco.queued.ClearMapOperation.perform(ClearMapOperation.java:26)
at org.datanucleus.store.types.sco.queued.OperationQueue.performAll(OperationQueue.java:137)
at org.datanucleus.store.types.sco.backed.HashMap.flush(HashMap.java:248)
at org.datanucleus.store.mapped.mapping.MapMapping.preDelete(MapMapping.java:250)
at org.datanucleus.store.rdbms.request.DeleteRequest.execute(DeleteRequest.java:178)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.deleteTable(RDBMSPersistenceHandler.java:492)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.deleteObject(RDBMSPersistenceHandler.java:461)
at org.datanucleus.state.JDOStateManagerImpl.internalDeletePersistent(JDOStateManagerImpl.java:4518)
at org.datanucleus.state.JDOStateManagerImpl.flush(JDOStateManagerImpl.java:4868)
at org.datanucleus.ObjectManagerImpl.flushInternal(ObjectManagerImpl.java:3227)
at org.datanucleus.ObjectManagerImpl.flush(ObjectManagerImpl.java:3167)
at org.datanucleus.ObjectManagerImpl.preCommit(ObjectManagerImpl.java:3308)

So it gets to the remove method where it will check on dependent key/value but then tries to delete the value. Probably needs to flush things to the datastore first

Throw exception if user attempts to model embedded class with collection element

We do not support an embedded object with a Collection of non-embedded objects, so best to advise the user at schema generation. The problem here is that the embedded object has no "id" and so it is debatable what to put in the join table as the owner id; one option would be the id of the owner of the embedded object, but that embedded type could be involved in other relations.
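
A hedged sketch of the shape that should be rejected at schema generation (all names here are illustrative).

import java.util.ArrayList;
import java.util.Collection;
import javax.jdo.annotations.Embedded;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;

@PersistenceCapable(embeddedOnly = "true")
class ContactDetails
{
    // The embedded object has no identity of its own, so there is no obvious owner id
    // to put in a join table for this collection of non-embedded elements.
    Collection<PhoneNumber> numbers = new ArrayList<>();
}

@PersistenceCapable
class Person
{
    @Persistent
    @Embedded
    ContactDetails contact;
}

@PersistenceCapable
class PhoneNumber { String number; }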

Order metadata not populated for the list member

Hi everyone, we have a scenario where sometimes the order metadata of a list member is not populated. After a server application restart the error disappears, and later it can appear again.

Error message:

Class "LoanAccount" has collection field "funds" and this has no mapping in the table for the index of the element class "InvestorFund". Maybe you declared the field as a java.util.Collection and instantiated it as a java.util.List yet omitted the element in the MetaData ? javax.jdo.JDOUserException: Class "LoanAccount" has collection field "funds" and this has no mapping in the table for the index of the element class "InvestorFund". Maybe you declared the field as a java.util.Collection and instantiated it as a java.util.List yet omitted the element in the MetaData ? at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:636) at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:720) at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:740)

JDO models:

@PersistenceCapable(detachable = "true", table = "GUARANTY")
@Discriminator(strategy = DiscriminatorStrategy.VALUE_MAP, column = BaseSecurity.DISCRIMINATOR_COLUMN, value = "BASE_SECURITY")
public abstract class BaseSecurity implements Serializable {

static final String DISCRIMINATOR_COLUMN = "DISCRIMINATOR";

...

}

@PersistenceCapable(detachable = "true")
@Discriminator(strategy = DiscriminatorStrategy.VALUE_MAP, column = BaseSecurity.DISCRIMINATOR_COLUMN, value = "GUARANTY")
public class Guaranty extends BaseSecurity {

...

}

@PersistenceCapable(detachable = "true")
@Discriminator(strategy = DiscriminatorStrategy.VALUE_MAP, column = BaseSecurity.DISCRIMINATOR_COLUMN, value = "INVESTOR_FUND")
public class InvestorFund extends BaseSecurity{

....

}

@PersistenceCapable(detachable = "true")
public class LoanAccount {

...
    @Element(dependent = "true")
@Persistent
@Column(name = "GUARANTEES_ENCODEDKEY_OWN", target = "Guaranty")
@Order(column = "GUARANTEES_INTEGER_IDX")
private List<Guaranty> guarantees = null;

@Element(dependent = "true")
@Persistent
@Column(name = "FUNDS_ENCODEDKEY_OWN", target = "InvestorFound")
@Order(column = "FUNDS_INTEGER_IDX")
private List<InvestorFund> funds = null;
...

}

Note
The error is raised only for the funds field of LoanAccount, never for the guarantees field.

Any help is appreciated, thanks!

Oracle : support writing BLOB columns in join tables

Oracle (for storing large amounts of data, offline, in a BLOB column) requires some whacky process of inserting EMPTY_BLOB() on an INSERT and then retrieving and setting the actual value of the BLOB field. This is only implemented for tables of classes currently, and not for join tables.

The v6.0 updates mean that we have OracleCollectionMapping, and the element mapping would be something like OracleBlobColumnMapping (in the join table). The add-element of the backing store fires off any INSERT (or UPDATE if it were to do one). This would need to call mapping.performSetPostProcessing(...).

The difficult part of this is tied to OracleBlobColumnMapping.setPostProcessing (and equivalent Clob method). This needs to do
SELECT {blob} FROM join_tbl WHERE ID = ?
but with a join table we don't have an "id"; we have the owner, but that only restricts to all elements of the collection. We also may (with an indexed List) have an index column. We need to restrict to a particular element of the collection (or a particular key/value of the map).

Note that we could allow a BLOB to store less than 4k bytes (?) by just putting the value into the INSERT statement, but why use a BLOB in that case?

Support JPA "MapsId"

AbstractMemberMetaData has "getMapsIdAttribute" from annotations/XML. Need to make use of it.

Could be useful if fully supporting '@EmbeddedId', for example

@Entity
public class Employee
{
    @Id 
    long id;
}
@Entity
@IdClass(DependentId.class)
public class Dependent
{
    @EmbeddedId
    DependentId id;

    @MapsId("employeePK")
    @ManyToOne 
    Employee employee;
}
@Embeddable
public class DependentId
{
    String name; // matches name of @Id attribute
    long employeePK; // matches type of Employee PK
    ...
}
