adobe / aepp
License: Apache License 2.0
Hello, are you planning/working on descriptor API support?
Thank you!
deleteEntity should delete the profile. A merge policy should be provided, or set as a default value, so the API can successfully delete the profile.
{'message': 'The provided merge policy is not found. Provide a valid merge policy and try again.', 'statusCode': 422, 'type': 'http://ns.adobe.com/aep/errors/UPLIB-101221-404', 'title': 'The provided merge policy is not found. Provide a valid merge policy and try again.', 'error-code': 'UPLIB-101221-404', 'error-message': 'The provided merge policy is not found. Provide a valid merge policy and try again.', 'status': 422}
myProfiles.deleteEntity(
schema_name=schema,
entityId=entityID,
entityIdNS=ENITTY_ID_N
)
We're thinking about future usage of aepp in the context of App Builder.
As far as I understand, App Builder doesn't support Python. Are there any thoughts or Adobe discussions about that?
Is there any consideration for implementing a module-based design for aepp in the future? A module-based design, similar to packages like pandas, would allow importing the entire aepp package and making all its modules and contents accessible. For instance, importing pandas as pd provides access to all of its functions, letting you perform various operations without explicitly importing different classes from submodules. I'm curious whether there are plans to explore such a design approach for aepp. (pandas achieves this by importing all of its classes in its __init__.py, which makes the library approachable even for Python beginners, although I'm uncertain about the level of development effort this would require for aepp.)
Hello Team,
I tried to use aepp on a Linux x64 machine (Python 3.10.4) under an Azure Function App service plan. The aepp library is not working due to: "[Error] ImportError: cannot import name 'Sequence' from 'collections' (/usr/local/lib/python3.10/collections/__init__.py)"
aepp is the root cause of this issue. Could you help me figure out what is wrong with installing it on a Linux machine?
Would like to easily check whether a connection is set up as a Destination or a Source, in order to build an overview report on an AEP instance for clients.
When reviewing a client's existing connections, the API does not directly tell you whether each one is a Destination or a Source.
The current workaround is to download the existing connections and then inspect each one's connectionSpec attributes
for the isDestination
or isSource flag.
Proposal: provide a connectionType attribute in the FlowManager that automatically checks these attributes to determine the type.
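Such a helper could look like the following sketch. This is a hypothetical function, not part of aepp's FlowManager today; the isSource / isDestination attribute names are taken from the connectionSpec flags mentioned above.

```python
def connection_type(connection_spec: dict) -> str:
    """Classify a connectionSpec as Source, Destination, or both,
    based on its isSource / isDestination attributes.
    Hypothetical helper illustrating the requested FlowManager feature."""
    attrs = connection_spec.get("attributes", {})
    is_source = bool(attrs.get("isSource", False))
    is_destination = bool(attrs.get("isDestination", False))
    if is_source and is_destination:
        return "Source & Destination"
    if is_destination:
        return "Destination"
    if is_source:
        return "Source"
    return "Unknown"

print(connection_type({"attributes": {"isDestination": True}}))  # Destination
```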
I am trying to set up an OAuth2 based authentication. Expected behavior is to authenticate to the targeted sandbox based on an OAuth2 server-to-server credentials.
Cannot authenticate. Jupyter console reporting:
Issue retrieving token
{'error': 'unauthorized_client'}
import aepp
aepp.createConfigFile(destination='template_config.json', auth_type="oauth")
{
"org_id": "<orgID>",
"client_id": "<client_id>",
"secret": "<YourSecret>",
"sandbox-name": "playhouse",
"environment": "prod",
"auth_code": "<auth_code>"
}
import aepp
config1 = aepp.importConfigFile('sanoma.dev.json',auth_type='oauth',connectInstance=True)
from aepp import schema
mySchemaInstance = schema.Schema(config=config1)
KeyError Traceback (most recent call last)
Cell In[9], line 7
3 config1 = aepp.importConfigFile('sanoma.dev.json',auth_type='oauth',connectInstance=True)
5 from aepp import schema
----> 7 mySchemaInstance = schema.Schema(config=config1)
File c:\users\decaro\onedrive - adobe\documents\dev\python\jupiter\notebook\lib\site-packages\aepp\schema.py:110, in Schema.__init__(self, containerId, config, header, loggingObject, **kwargs)
108 header = config.getConfigHeader()
109 config = config.getConfigObject()
--> 110 self.connector = connector.AdobeRequest(
111 config=config,
112 header=header,
113 loggingEnabled=self.loggingEnabled,
114 logger=self.logger,
115 )
116 self.header = self.connector.header
117 self.header["Accept"] = "application/vnd.adobe.xed+json"
File c:\users\decaro\onedrive - adobe\documents\dev\python\jupiter\notebook\lib\site-packages\aepp\connector.py:86, in AdobeRequest.__init__(self, config, header, endpoints, verbose, loggingEnabled, logger, retry, **kwargs)
79 token_info = self.get_jwt_token_and_expiry_for_config(
80 config=self.config,
81 verbose=verbose,
82 aepScope=kwargs.get("aepScope"),
83 privacyScope=kwargs.get("privacyScope"),
84 )
85 else:
---> 86 token_info = self.get_oauth_token_and_expiry_for_config(
87 config=self.config,
88 verbose=verbose
89 )
90 self.token = token_info.token
91 self.config["token"] = self.token
File c:\users\decaro\onedrive - adobe\documents\dev\python\jupiter\notebook\lib\site-packages\aepp\connector.py:142, in AdobeRequest.get_oauth_token_and_expiry_for_config(self, config, verbose, save)
134 config = config.getConfigObject()
135 oauth_payload = {
136 "grant_type": "authorization_code",
137 "client_id": config["client_id"],
138 "client_secret": config["secret"],
139 "code": config["auth_code"]
140 }
141 response = requests.post(
--> 142 config["oauthTokenEndpoint"], data=oauth_payload, verify=False
143 )
144 return self._token_postprocess(response=response, verbose=verbose, save=save)
KeyError: 'oauthTokenEndpoint'
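Until the library supplies a default for this key, one possible (untested) workaround is to inject it into the config mapping before instantiating any aepp class. This assumes the connector reads the endpoint straight from the config; the URL shown is an assumption (Adobe IMS's public token endpoint) and may not match what the library expects internally.

```python
# Hypothetical workaround for the KeyError above: add the missing
# 'oauthTokenEndpoint' key to the config before creating schema.Schema.
# Both the key handling and the endpoint URL below are assumptions.
config = {
    "org_id": "<orgID>",
    "client_id": "<client_id>",
    "secret": "<YourSecret>",
    "auth_code": "<auth_code>",
}
config.setdefault("oauthTokenEndpoint", "https://ims-na1.adobelogin.com/ims/token/v3")
print(config["oauthTokenEndpoint"])
```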
Expected behavior: I can copy all field groups.
Actual behavior: copyFieldGroup fails for customized out-of-the-box field groups.
latest aepp, Linux
import aepp
from aepp import schema
original_config = aepp.importConfigFile(path="credentials.json", connectInstance=True)
original_instance = schema.Schema(config=original_config)
fg_original = original_instance.getFieldGroup("_my.mixins.7dc518b25875a5562fd9aad33ca2fa0fd951c08c95faebd1")
fg_copy = original_instance.copyFieldGroup(fg_original, title="new", tenantId="_other")
Traceback (most recent call last):
File "/home/xxx/work/aepdeploy/xxx/../bug.py", line 9, in <module>
fg_copy = original_instance.copyFieldGroup(fg_original, title="new", tenantId="other")
File "/home/xxx/.local/lib/python3.9/site-packages/aepp/schema.py", line 1219, in copyFieldGroup
obj["definitions"]["property"]["properties"][tenantId] = obj["definitions"]["property"]["properties"][oldTenant]
KeyError: '_my'
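The failing line indexes the old tenant container unconditionally; for customized out-of-the-box field groups the custom fields apparently do not live under that key, hence the KeyError. A defensive remap could look like this sketch (a hypothetical standalone helper, not the actual copyFieldGroup code):

```python
def remap_tenant(field_group_def: dict, old_tenant: str, new_tenant: str) -> dict:
    """Move the tenant container to a new key only if the old key exists,
    instead of indexing it unconditionally (the KeyError above).
    Hypothetical sketch of a fix for copyFieldGroup."""
    props = (
        field_group_def.get("definitions", {})
        .get("property", {})
        .get("properties", {})
    )
    if old_tenant in props:
        props[new_tenant] = props.pop(old_tenant)
    return field_group_def

fg = {"definitions": {"property": {"properties": {"_my": {"type": "object"}}}}}
remap_tenant(fg, "_my", "_other")
print(list(fg["definitions"]["property"]["properties"]))  # ['_other']
```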
To enhance our understanding of a dataset's structure, I propose making platform_sdk.dataset_reader accessible. This would enable us to unpack the entire dataset and view it comprehensively, including the nested fields. Currently, aepp supports data loading through the queryservice module by specifying a SQL query, which loads the data into a pandas dataframe. However, each column in the dataframe only represents the first level of the nested object hierarchy in the schema, unless we manually unpack a given object in the query. For example, "select web.* from table_abc" returns the fields nested at the second level under the "web" object.
By utilizing platform_sdk.dataset_reader, we could effortlessly load the data with its nested fields unpacked, giving a more complete view of the dataset. This makes the data's structure clearer, since all of its fields are accessible, and it improves the efficiency of querying, processing, and manipulating the data, since we no longer need to manually unpack individual objects and values are no longer nested inside each field.
Example of using the SDK dataset reader, which automatically unpacks all the nested fields under the "web" object.
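For comparison, the flattening behavior being requested can already be approximated on an in-memory query result with pandas.json_normalize, which expands nested objects into dotted column names (the record below is a toy example shaped like an XDM event, not real AEP data):

```python
import pandas as pd

# Toy record: "web" holds nested objects, as in an XDM ExperienceEvent.
records = [
    {"_id": "1", "web": {"webPageDetails": {"name": "home", "URL": "https://example.com"}}},
]

# json_normalize flattens nested dicts into dotted columns, e.g.
# 'web.webPageDetails.name', mirroring the unpacking requested above.
df = pd.json_normalize(records)
print(list(df.columns))
```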
Expected: running pip install aepp
on Python 3.10 does not affect the behavior of subsequent pip installs.
#9 was also observed when using IBM Spark 3.3 with a Python 3.10 runtime (but not with Spark 3.3 on Python 3.9).
Observed in some Python 3.10 environments, e.g. #9, and also in IBM Watson Studio runtimes with Spark and Python 3.10. It does not seem to be reproducible in every Python 3.10 environment.
pip install aepp
pip install adlfs
...
Installing collected packages: pathlib, pathlib2, aepp
Successfully installed aepp-0.3.2.post1 pathlib-1.0.1 pathlib2-2.3.7.post1
Traceback (most recent call last):
File "/home/spark/shared/conda/envs/python/bin/pip", line 7, in <module>
from pip._internal.cli.main import main
File "/opt/ibm/conda/miniconda/lib/python/site-packages/pip/_internal/cli/main.py", line 9, in <module>
from pip._internal.cli.autocompletion import autocomplete
File "/opt/ibm/conda/miniconda/lib/python/site-packages/pip/_internal/cli/autocompletion.py", line 10, in <module>
from pip._internal.cli.main_parser import create_main_parser
File "/opt/ibm/conda/miniconda/lib/python/site-packages/pip/_internal/cli/main_parser.py", line 9, in <module>
from pip._internal.build_env import get_runnable_pip
File "/opt/ibm/conda/miniconda/lib/python/site-packages/pip/_internal/build_env.py", line 6, in <module>
import pathlib
File "/opt/ibm/conda/miniconda/lib/python/site-packages/pathlib.py", line 10, in <module>
from collections import Sequence
ImportError: cannot import name 'Sequence' from 'collections' (/opt/ibm/conda/miniconda/lib/python3.10/collections/__init__.py)
The pathlib.py file that is added to site-packages when pathlib-1.0.1 is installed by aepp tries to import Sequence from collections, but that import was deprecated in Python 3.3 and stopped working in 3.10. Since a built-in pathlib is available from Python 3.4 onward and the PyPI module pathlib-1.0.1 is just a backport of its features, it is suggested to drop the explicit references to pathlib and pathlib2 from this library's requirements and setup.
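A quick way to check whether an environment is affected: the stdlib pathlib lives under the interpreter's own lib directory, so if the imported module resolves to site-packages, the 1.0.1 backport pulled in alongside aepp is shadowing it (the failure mode in the traceback above).

```python
import pathlib

# Diagnostic: print where 'pathlib' actually resolves from. A path inside
# site-packages means the pathlib-1.0.1 backport is shadowing the stdlib.
print(pathlib.__file__)
print("shadowed by backport:", "site-packages" in pathlib.__file__)
```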
Expected behavior: addFieldGroupToSchema works.
Actual error:
File "/home/xxx/.local/lib/python3.9/site-packages/aepp/schema.py", line 694, in addFieldGroupToSchema
res = self.putSchema(schemaDef)
File "/home/xxx/.local/lib/python3.9/site-packages/aepp/schema.py", line 476, in putSchema
if schemaId.startswith("https://"):
AttributeError: 'dict' object has no attribute 'startswith'
my sample code:
res = target_instance.addFieldGroupToSchema(schemaId=existing_schema_id, fieldGroupIds=new_field_group_ids)
Linux/Latest
Issue may be in
https://github.com/adobe/aepp/blob/main/aepp/schema.py#L694
Maybe an easy fix would be to use
res = self.putSchema(schemaId=schemaId, schemaDef=schemaDef)
instead?
In the streamMessage method of the DataIngestion class, data
is expected to be a dict. The check is as follows:
if data is None and type(data) != dict:
raise Exception("Require a dictionary to be send for ingestion")
The exception tells the user that it expects data
to be a dictionary. The same pattern appears in streamMessages, where the expected type is a list.
Rereading the code,
if data is None and type(data) != dict:
raise Exception("Require a dictionary to be send for ingestion")
When data is not None, the condition always evaluates to False regardless of the type of data, so no exception is raised when data is, say, a string: the and should be an or (or better, a single isinstance check). One common pitfall is passing data as a string instead of a dict.
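A corrected guard could look like this sketch (a standalone illustration, not the library's actual code), rejecting anything that is not a dict, including None:

```python
def validate_stream_payload(data) -> None:
    # isinstance covers both None and any non-dict value such as a string,
    # which the original `is None and type(...) != dict` check let through.
    if not isinstance(data, dict):
        raise TypeError("Require a dictionary to be sent for ingestion")

validate_stream_payload({"event": "pageView"})  # OK: no exception raised
```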
SSL certificate verification should be enabled by default, unless the user takes specific action to disable it for very specific scenarios.
SSL certificate verification is disabled currently.