
datanucleus-mongodb's People

Contributors

andyjefferson, rayman245, renataogarcia, tikhomirovsergey

datanucleus-mongodb's Issues

Support native MongoDB queries

We could potentially allow
pm.newQuery("MongoDB", "db.MYCOLL.findOne({'_id':'" + oid + "'})");

em.createNativeQuery("db.MYCOLL.findOne({'_id':'" + oid + "'})", MyClass.class);

Obviously this would have to support only a subset of the MongoDB API calls possible from DB.getCollection(...), namely find and findOne. Similarly, the returned DBObjects would need unmapping to return the Object[] for a row of results.

The alternative is already available via NucleusConnection.
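
For reference, a minimal sketch of that existing alternative via the datastore connection, assuming the native connection object exposed for this store is the driver's DB (as with the 2.x/3.x driver API):

import javax.jdo.PersistenceManager;
import javax.jdo.datastore.JDOConnection;
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBObject;

public class NativeAccessExample {
    // Sketch only: fetch the raw document for a given _id using the underlying driver connection.
    public static DBObject findOneNatively(PersistenceManager pm, Object oid) {
        JDOConnection jdoConn = pm.getDataStoreConnection();
        try {
            DB db = (DB) jdoConn.getNativeConnection();    // assumed native connection type for this store
            return db.getCollection("MYCOLL").findOne(new BasicDBObject("_id", oid));
        } finally {
            jdoConn.close();    // hand the connection back to DataNucleus
        }
    }
}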

Resolution of inherited attributes broken if class is marked as non-cacheable

I'm currently migrating from DN core 5.0.8 to 5.1.8 and datanucleus-mongodb
5.0.4 to 5.1.0-release. I get the following exception when trying to read an object containing a reference to a sub-object:

...
Caused by: java.lang.NullPointerException
    at org.datanucleus.store.mongodb.MongoDBUtils.getClassNameForIdentity(MongoDBUtils.java:216)
    at org.datanucleus.store.mongodb.MongoDBStoreManager.getClassNameForObjectID(MongoDBStoreManager.java:190)
    at org.datanucleus.ExecutionContextImpl.getClassNameForObjectId(ExecutionContextImpl.java:3741)
    at org.datanucleus.ExecutionContextImpl.getClassDetailsForId(ExecutionContextImpl.java:3635)
    at org.datanucleus.ExecutionContextImpl.findObject(ExecutionContextImpl.java:3502)
    at org.datanucleus.identity.IdentityUtils.getObjectFromIdString(IdentityUtils.java:474)
    at org.datanucleus.store.mongodb.fieldmanager.FetchFieldManager.getValueForContainerRelationField(FetchFieldManager.java:968)
    at org.datanucleus.store.mongodb.fieldmanager.FetchFieldManager.fetchNonEmbeddedObjectField(FetchFieldManager.java:735)
    at org.datanucleus.store.mongodb.fieldmanager.FetchFieldManager.fetchObjectField(FetchFieldManager.java:683)
    at org.datanucleus.state.StateManagerImpl.replacingObjectField(StateManagerImpl.java:1953)
    at de.mpg.ipp.codac.business.schema.teamwork.UserGroup.dnReplaceField(UserGroup.java)
    at de.mpg.ipp.codac.business.schema.common.W7XDbNamedObject.dnReplaceFields(W7XDbNamedObject.java)
    at org.datanucleus.state.StateManagerImpl.replaceFields(StateManagerImpl.java:4302)
    at org.datanucleus.state.StateManagerImpl.replaceFields(StateManagerImpl.java:4327)
    at org.datanucleus.store.mongodb.MongoDBUtils$1.fetchFields(MongoDBUtils.java:796)
    at org.datanucleus.state.StateManagerImpl.loadFieldValues(StateManagerImpl.java:3719)
    at org.datanucleus.state.StateManagerImpl.initialiseForHollow(StateManagerImpl.java:383)
    at org.datanucleus.state.ObjectProviderFactoryImpl.newForHollow(ObjectProviderFactoryImpl.java:113)
    at org.datanucleus.ExecutionContextImpl.findObject(ExecutionContextImpl.java:3194)
    at org.datanucleus.store.mongodb.MongoDBUtils.getObjectUsingApplicationIdForDBObject(MongoDBUtils.java:791)
    at org.datanucleus.store.mongodb.MongoDBUtils.getPojoForDBObjectForCandidate(MongoDBUtils.java:753)
    at org.datanucleus.store.mongodb.query.LazyLoadQueryResult.getNextObject(LazyLoadQueryResult.java:324)
    at org.datanucleus.store.mongodb.query.LazyLoadQueryResult$QueryResultIterator.next(LazyLoadQueryResult.java:449)
    at de.mpg.ipp.codac.persistence.DataNucleusProvider.doGetObjects(DataNucleusProvider.java:362)
    ... 30 more

I'm testing against MongoDB 3.0.15. I'll provide a test case demonstrating the issue and have attached two test documents for it.
DN.zip
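
For context, a hypothetical sketch of the kind of mapping assumed to trigger this, with the fetched class marked non-cacheable via JDO annotations (class names are illustrative, not the actual model from the stack trace):

import javax.jdo.annotations.Cacheable;
import javax.jdo.annotations.PersistenceCapable;

@PersistenceCapable(detachable = "true")
public abstract class NamedObject {              // superclass providing the inherited attributes
    protected String name;
}

@PersistenceCapable(detachable = "true")
@Cacheable("false")                              // marking the class non-cacheable is what exposes the NPE on read
public class Group extends NamedObject {
    private NamedObject parent;                  // reference to a sub-object, resolved via its identity string
}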

Support @OneToMany with no List of ids stored at the "1" side.

I'm not sure if this was a conscious decision for efficiency, or if this is a bug, but I found it surprising.
I have the following entity classes (only showing relevant portions):

import java.io.Serializable;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

@Entity
public class TestEntity implements Serializable {
    @OneToMany(mappedBy = "testEntity")
    private List<RelatedListElementEntity> relatedListElements;
}

@Entity
public class RelatedListElementEntity implements Serializable {
    @ManyToOne
    private TestEntity testEntity;
}

What I expected:

When I persist RelatedListElementEntity I expect to see something like:
{
"_id" : "8a998109363148060136314806f30002",
"testEntity" : "8a998109363148060136314806f30001"
},
{
"_id" : "8a99810936314806013631484c590003",
"testEntity" : "8a998109363148060136314806f30001"
}

When I persist TestEntity I expect to see something like:
{
"_id" : "8a998109363148060136314806f30001",
}

What I got:

The RelatedListElementEntity looked good, but for TestEntity I see:
{
"_id" : "8a998109363148060136314806f30001",
"relatedListElements" : [ "8a998109363148060136314806f30002", "8a99810936314806013631484c590003" ],
}

(the list elements have been persisted into the document AND into the "owning" property).

Is this by design or a bug? If a bug, I could look into correcting it. It seems StoreFieldManager is the correct place to look, and we may want to add some checks to storeObjectField() to prevent this from being stored. One thought is to add more checks to isStorable() (which is called by storeObjectField()).

What do you think?
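
A rough sketch of the kind of check meant above, assuming the DataNucleus metadata API (AbstractMemberMetaData.getMappedBy()); this is not the actual StoreFieldManager code, just the shape of the idea:

// Hypothetical extra guard in isStorable(...): skip the non-owner side of a
// bidirectional relation (the field declares mappedBy), so the element ids are
// not duplicated into the "1" side document.
protected boolean isStorable(AbstractMemberMetaData mmd) {
    if (mmd.getMappedBy() != null) {
        return false;    // owned by the other side; ids already live in the element documents
    }
    // ... the existing checks (persistence-modifier, embedded handling, etc.) would remain here
    return true;
}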

Incongruent exception on primary expressions

When trying to sort a result using a field that is not part of the entity model (perhaps because of a typo), compilation of the sort section of the query eventually reaches QueryToMongoDBMapper.processPrimaryExpression(), which calls getFieldNameForPrimary() to get the actual name of the field in the mongo documents. Since the field isn't registered in the entity's metadata, the lookup returns null.
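
For illustration, a hypothetical query of the kind that triggers this, where the field named in the ordering clause is a typo and does not exist on the candidate class (class and field names are made up):

import java.util.List;
import javax.jdo.Query;

Query<Product> q = pm.newQuery(Product.class);
q.setOrdering("naem ascending");    // "naem" is not a field of Product, so the metadata lookup returns null
List<Product> results = q.executeList();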

This then causes a NucleusException with a message like "Primary Expressions are not supported by this mapper" (since that is the only thing the superclass does), even though QueryToMongoDBMapper does support primary expressions. Furthermore, there is a log message warning the user that the field isn't part of the entity and that the sorting won't take place in the datastore, which suggests the query compilation should succeed, just with unordered results.

So I'm unsure whether this is the desired behaviour, whether an exception with a clearer message should be thrown, or whether it should just work. In any case it is very confusing; I'd suggest giving this point some attention.

Thanks

Support CompoundIdentity

We need to support the case where the PK is formed with a persistable object. Currently it tries to serialise the related object, which is wrong.
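
A minimal sketch of the kind of model meant here (JDO compound identity, where a persistable object forms part of the primary key); class names are illustrative and the objectIdClass is omitted for brevity:

import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.PrimaryKey;

@PersistenceCapable
public class Customer {
    @PrimaryKey
    private String customerId;
}

@PersistenceCapable    // objectIdClass for the compound key omitted for brevity
public class Account {
    @PrimaryKey
    private Customer customer;        // persistable object forming part of the PK; currently it gets serialised rather than stored as a reference
    @PrimaryKey
    private String accountNumber;
}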

NPE when trying to find rootTable

In org.datanucleus.store.mongodb.MongoDBUtils.getClassNameForIdentity(Object, AbstractClassMetaData, ExecutionContext, ClassLoaderResolver) I get a NullPointerException when it tries to retrieve the table for an abstract superclass.

Table rootTable = storeMgr.getStoreDataForClass(rootCmd.getFullClassName()).getTable();

getStoreDataForClass returns null for an abstract superclass. I inspected the storeDataManager's table and it only contains entries for concrete classes. Am I doing something wrong in my annotations?

@PersistenceCapable(detachable = "true", identityType = IdentityType.APPLICATION, objectIdClass = Id.class)
@Discriminator(strategy = DiscriminatorStrategy.CLASS_NAME)
@Inheritance(strategy = InheritanceStrategy.COMPLETE_TABLE)
public abstract class SuperClass { ..

Support use of @Converter on String field

With these classes:

import java.io.Serializable;
import java.util.List;
import javax.jdo.annotations.PersistenceCapable;

@PersistenceCapable
public class Hub implements Serializable {
    private String name;
    private List<Schedule> schedules;
}

And

import java.io.Serializable;
import javax.jdo.annotations.Convert;
import javax.jdo.annotations.EmbeddedOnly;
import javax.jdo.annotations.PersistenceCapable;
import mydomain.model.converter.LocalTimeConverter;

@PersistenceCapable
@EmbeddedOnly
public class Schedule implements Serializable {
    private String day;
    @Convert(LocalTimeConverter.class)
    private String opens;
    @Convert(LocalTimeConverter.class)
    private String closes;
}

The converter does not work: the stored value should be a number/Long rather than the raw String field value, since a converter is declared. The converter code can be found here: https://github.com/cyberquarks/test-jdo-converter/blob/master/src/main/java/mydomain/model/converter/LocalTimeConverter.java

Here's the test case: https://github.com/cyberquarks/test-jdo-converter
The issue affects the latest version of DataNucleus / DataNucleus JDO.
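
The actual converter is in the linked repository; for context, a sketch of the general shape assumed for such a converter (a JDO AttributeConverter between the String attribute and a Long datastore value, here storing seconds-of-day):

import java.time.LocalTime;
import javax.jdo.AttributeConverter;

// Sketch only: persist the "HH:mm" style String attribute as seconds-of-day (a Long) in MongoDB.
public class LocalTimeConverter implements AttributeConverter<String, Long> {
    @Override
    public Long convertToDatastore(String attributeValue) {
        return attributeValue == null ? null : (long) LocalTime.parse(attributeValue).toSecondOfDay();
    }

    @Override
    public String convertToAttribute(Long datastoreValue) {
        return datastoreValue == null ? null : LocalTime.ofSecondOfDay(datastoreValue).toString();
    }
}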

Add support for "authentication database"

Feature request: there needs to be a property to set the MongoDB authentication database when accessing a MongoDB deployment that has access control enabled.

Bug Report : https://github.com/mahesh-pardeshi17/test-jdo

As per the documentation provided by DataNucleus for the MongoDB datastore, I couldn't find any property to set up an authentication database, and the MongoDB standard connection string format mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/[database][?options]] isn't fully supported either: from what I can tell, DataNucleus drops the [?options] part.

Ultimately the native MongoDB connection URL mongodb://username:password@host:port/database?authSource=admin works fine for connecting to the database with authentication (using the extra option authSource=authdatabase), but the same thing does not work through DataNucleus.
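
For comparison, a minimal sketch of the connection that works when going straight to the driver (the gap is in how DataNucleus builds its connection URL/options, so no DataNucleus property is shown here because none exists yet):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;

// Works directly against the driver: authSource names the authentication database.
MongoClient client = MongoClients.create("mongodb://username:password@host:27017/database?authSource=admin");
MongoDatabase db = client.getDatabase("database");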

Support JPA @OrderBy

When we retrieve a List from MongoDB we retrieve it in the exact same order as it was persisted (it is stored as a collection in the owner object's document). If a user puts an @OrderBy annotation on the field, they may want the list reordered.

This applies to ALL non-RDBMS datastores, so creating some generic code that reorders them in-memory, maybe using a Comparator, would make sense.
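
A rough sketch of the in-memory reordering meant here, assuming the @OrderBy spec (say @OrderBy("surname ASC, firstName DESC")) has already been turned into a Comparator; the entity, its getters, and the relatedPeople list are hypothetical:

import java.util.Comparator;

// Hypothetical post-load step: reorder the fetched List according to the @OrderBy spec.
Comparator<Person> byOrderBy =
        Comparator.comparing(Person::getSurname)
                  .thenComparing(Person::getFirstName, Comparator.reverseOrder());
relatedPeople.sort(byOrderBy);    // reorder in memory after reading the collection from MongoDB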

Support MongoDB map/reduce capabilities through JDO/JPA query interface

The MongoDB Java driver has mapReduce() methods that take the reduction definition. We need to take in that definition, perhaps as query extension(s)/hint(s).

A user may have their own conversion function for the data, so this would be used as a bulk-update type of thing.
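
One possible shape for this, sketched with hypothetical extension names (these hints do not exist; only the driver's mapReduce() underneath does):

import javax.jdo.Query;

// Hypothetical query extensions carrying the map and reduce functions as JavaScript strings.
Query q = pm.newQuery("SELECT FROM mydomain.MyClass");
q.addExtension("datanucleus.mongodb.mapFunction",
        "function() { emit(this.category, this.value); }");
q.addExtension("datanucleus.mongodb.reduceFunction",
        "function(key, values) { return Array.sum(values); }");
Object results = q.execute();
// Internally the store plugin would pass these strings to the driver's mapReduce(map, reduce, ...) call.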

On a general note, GAE v1 stored data in a particular mode (with FK-type info in the elements). In v2 it is now stored in a new mode (with child information in a property of the parent). Moving from v1 to v2 would be a migration process expressed as a particular map function; again, like a bulk update.

Obviously people more familiar with map-reduce than me will likely have their own uses: things that can't be expressed using JDOQL/JPQL, perhaps.

Lazy load query results : add handler to get size by query

With the current LazyLoadQueryResult class, when the user calls size() on the results it loads all results (since that is currently the only supported way of getting the size). An alternative would be to do what we provide for RDBMS: add a method that executes an additional query returning the count, which avoids loading all results of the query.
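
A rough sketch of that alternative, under the assumption that the filter built for the query can simply be re-issued as a count; the method and field names here are illustrative, not the actual LazyLoadQueryResult code:

// Hypothetical "size by query" path: issue a count with the same filter instead of loading every result.
protected int getSizeUsingCountQuery() {
    DBCollection coll = db.getCollection(table.getName());    // same collection the query ran against (illustrative fields)
    return (int) coll.count(filterObject);                     // single round trip, no object instantiation
}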

Schema evolution support : support field addition, change and removal

During the lifecycle of a model class a user could do the following things to it

  1. Add a field. Need to cater for objects in the datastore without this new field.
  2. Update the type of a field. Need to cater for migrating the data when accessed.
  3. Delete a field. Need to remove this redundant data from existing objects (see the driver sketch below).

We also have class rename as a possible scenario.
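
For the removal and rename cases the driver already has primitives a migration step could use; a sketch assuming the 3.x+ sync driver API, with an illustrative collection and field names:

import com.mongodb.client.model.Updates;
import org.bson.Document;

// Delete a field: strip the redundant data from every existing document in the collection.
collection.updateMany(new Document(), Updates.unset("obsoleteField"));

// Field (or class-driven) rename: move the old field name to the new one in place.
collection.updateMany(new Document(), Updates.rename("oldName", "newName"));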

Load of query doesn't ensure that all SCOs are wrapped

When we execute a query and use FetchFieldManager, we have to use the constructor taking an ExecutionContext (i.e. the ObjectProvider is not yet known). This means that we cannot easily wrap any SCO fields in FetchFieldManager. Consequently we need to call

ObjectProvider.replaceAllLoadedSCOFieldsWithWrappers()

just after the FetchFieldManager process.

Upgrade to v4.x of the MongoDB sync driver, support "MongoDatabase" rather than "DB"

"DB" is deprecated in MongoDB java driver 3.1+, and people should use MongoDatabase instead. This will involve a lot of method updates.

Refer to https://mongodb.github.io/mongo-java-driver/4.3/driver/getting-started/installation/ which states
"we recommend that new applications depend on the mongodb-driver-sync module."
so that means adding this to the POM:

    <dependency>
        <groupId>org.mongodb</groupId>
        <artifactId>mongodb-driver-sync</artifactId>
        <version>4.3.1</version>
    </dependency>

and the primary classes, such as MongoClient and MongoDatabase, are now in the package com.mongodb.client.
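
For reference, the basic shape of the new API (these classes and calls are in the 4.x sync driver; database and collection names are just examples):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

MongoClient client = MongoClients.create("mongodb://localhost:27017");
MongoDatabase database = client.getDatabase("mydb");                  // replaces the deprecated DB
MongoCollection<Document> coll = database.getCollection("MYCOLL");    // replaces DBCollection
Document doc = coll.find(new Document("_id", "someId")).first();      // Document replaces DBObject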

Support optimised delete of objects

datanucleus-core 3.0 M3 introduces the ability to receive all objects to be deleted in a single call to StorePersistenceHandler.deleteObjects(). This allows us to remove the objects from MongoDB in fewer calls. The MongoDB Java driver doesn't currently provide a bulk delete method, so this can't be done just yet.
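
For what it's worth, a sketch of how a fewer-calls delete could look against newer drivers that do offer it (an assumption about later driver versions, not what was available when this was filed), using an $in filter over the ids received by deleteObjects():

import java.util.List;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.Document;

// Remove every object whose id was passed to deleteObjects() in a single driver call.
void deleteAllByIds(MongoCollection<Document> coll, List<Object> ids) {
    coll.deleteMany(Filters.in("_id", ids));
}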
