
opal-tools's Issues

encodingMappings not working on Windows

During the setup.cmd phase, the generated file opal-installer.json contains an invalid mapping:

  "encodingMappings": [
    {
      "encoding": "UTF8",
      "matchRegEx": "\\\\sql\\\\.*apex.*/.*f*sql"
    }

it should be:

  "encodingMappings": [
    {
      "encoding": "UTF8",
      "matchRegEx": "\\\\sql\\\\.*apex.*\\\\.*f*sql"
    }

Prefix for registry tables not working

The installer has a bug when using registry tables: the patch finish script calls the fixed table name “opal_installer_patches”. I have configured a different prefix, “MOYPATCH”, so the tables are no longer prefixed with OPAL. With MOYPATCH everything works until the installer tries to update the “end date”.

Exporter: use lists for pre and post sql files

The opal-exporter currently uses a single SQL file to initialize the session and a custom export file which is executed at the end. Both should accept lists of files that are executed in order.

using environment variables in sqlcl script

When exporting the APEX app, this is currently the biggest blocker: I cannot set a relative path for exporting the application, e.g. cd %OPAL_TOOLS_SRC_SQL_DIR%\apex. SQLcl does not support system environment variables, at least not on Windows.

display timings at the end of the export

Since the script ran for almost an hour to export everything (9929 rows), I would also like to see timings such as “Elapsed: 01:02:00.000” to know how much time to expect on the next run 😊
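
Measuring and formatting the elapsed time is straightforward in Java; a minimal sketch (class name and output format are assumptions, not the actual opal-tools code):

import java.time.Duration;
import java.time.Instant;

// Hypothetical sketch: measure the runtime of the export and print it as hh:mm:ss.SSS.
public class ElapsedTime {

    public static void main(String[] args) throws InterruptedException {
        Instant start = Instant.now();

        Thread.sleep(1500); // placeholder for the actual export work

        Duration elapsed = Duration.between(start, Instant.now());
        System.out.printf("Elapsed: %02d:%02d:%02d.%03d%n",
                elapsed.toHours(), elapsed.toMinutesPart(),
                elapsed.toSecondsPart(), elapsed.toMillisPart());
    }
}

(Duration.toMinutesPart, toSecondsPart and toMillisPart are available since Java 9.)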

Refactor logging

Currently, there are different calls to output information on the command line and in the logs; these should be unified. A new class Logger should combine them into a single call:

Msg.println()
logfile.append();
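
A possible shape for such a unified class; a minimal sketch, assuming the constructor takes the logfile path (the method names are assumptions, not the actual opal-tools API):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical sketch: one call writes to the console and appends to the logfile.
public class Logger {

    private final Path logfile;

    public Logger(Path logfile) {
        this.logfile = logfile;
    }

    public void println(String message) {
        // replaces the separate Msg.println() call
        System.out.println(message);
        // replaces the separate logfile.append() call
        try {
            Files.writeString(logfile, message + System.lineSeparator(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            System.err.println("could not write to logfile: " + e.getMessage());
        }
    }
}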

Timing information missing in logs

The installer log file does not contain the timings of the whole script and how many files were processed. This is displayed on the command line but not in the logs.

Naming convention for logfiles

All logfiles are stored in the subdirectory logs. They are generic and contain a timestamp in the filename.

It would be better to be able to influence the naming of the logfile, similar to Apache HTTP Server logfiles, Apache Tomcat logfiles, etc.

Variables to consider:

  • ConnectionPool.name
  • Mode: executePatch|copyFiles
  • Mode parameter: VALIDATE_ONLY|EXECUTE
  • Timestamp
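
A sketch of how such a filename could be assembled from those variables (the format and the helper itself are assumptions for illustration):

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Hypothetical sketch: build a logfile name from connection pool name, mode, mode parameter and timestamp.
public class LogfileName {

    public static String build(String connectionPoolName, String mode, String modeParameter) {
        String timestamp = LocalDateTime.now()
                .format(DateTimeFormatter.ofPattern("yyyy-MM-dd_HH-mm-ss"));
        // e.g. logs/executePatch_VALIDATE_ONLY_jri_test_2020-11-06_17-30-00.log
        return String.format("logs/%s_%s_%s_%s.log",
                mode, modeParameter, connectionPoolName, timestamp);
    }
}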

Solution for REF_CONSTRAINTS in separate files

Typically, REF_CONSTRAINTS are dependent objects of a table.
They are not part of the user_objects view and thus cannot be spooled into a different directory.

We need a solution for this: when you install tables with dependencies, the tables should be installed in a first pass and the dependencies in a second pass.

export error with JOBs

While exporting I had errors only with the “LOB” and “JOB” object types (we have almost 100 jobs in that schema, none of them were exported), whereas schema2ddl exports them.

Almost all of them failed with the following error:
export: SEND_FLEXMAIL_QUEUE[JOB] => 20:00:20 (Exporter,ERROR) sql error: ORA-31600: invalid input value SEND_FLEXMAIL_QUEUE for parameter NAME in function GET_DDL
ORA-06512: at "SYS.DBMS_METADATA", line 6681
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.DBMS_METADATA", line 6558
ORA-06512: at "SYS.DBMS_METADATA", line 9672
ORA-06512: at line 1

Refactor version.txt - use only a single one

The current application version is stored in the file version.txt and is set through the build process. Currently, I use multiple files for that.

There should be only one.

Add switch --skip-export to exporter

With this switch the actual export of objects is skipped ... and thus only the pre and post scripts are run.

This is useful when we want to create a separate batch script in order to export the APEX application separately.

Display list of files that are copied and statistics at the end

During "copyPatchFiles" we can currently only see the directives that are processed, but not the files that are actually copied. A rewrite is required here so that we iterate over all files manually in code instead of delegating to the Apache Commons library (see the sketch after the example output below).

copy files from:c:\app\project1\sql
           to  :C:\app\project1\patches\2020\2020-11-06-test 123\sql

process patch file listing in: PatchFiles.txt

Mapping: jri_test/synonyms => jri_test/010_preinstall
Mapping: jri_test/sequences => jri_test/020_sequences
Mapping: jri_test/types => jri_test/030_types
Mapping: jri_test/tables => jri_test/040_tables
Mapping: jri_test/ref_constraints => jri_test/050_ref_constraints
Mapping: jri_test/views => jri_test/060_views
Mapping: jri_test/procedures => jri_test/070_procedures
Mapping: jri_test/functions => jri_test/080_functions
Mapping: jri_test/packages => jri_test/090_packages
  *.pks
Mapping: jri_test/packages => jri_test/100_package_bodies
  *.pkb
Mapping: jri_test/triggers => jri_test/110_triggers

Also, include statistics like for the exporter:

*** The export finished in 0:14 [mm:ss] and exported 13/13 objects successfully.

*** done.
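
A minimal sketch of the manual iteration, using java.nio so that each copied file can be listed and counted (class and method names are assumptions, not the actual opal-installer code):

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

// Hypothetical sketch: copy a source tree file by file, print each copied file and return the count.
public class CopyWithListing {

    public static int copyTree(Path source, Path target) throws IOException {
        AtomicInteger count = new AtomicInteger();
        try (Stream<Path> files = Files.walk(source)) {
            files.filter(Files::isRegularFile).forEach(file -> {
                try {
                    Path dest = target.resolve(source.relativize(file));
                    Files.createDirectories(dest.getParent());
                    Files.copy(file, dest, StandardCopyOption.REPLACE_EXISTING);
                    System.out.println("  copied: " + source.relativize(file));
                    count.incrementAndGet();
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
        return count.get();
    }
}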

Problem with blank space in username

Uroš has the username c:\Users\TRT Uros....

So setting up the environment was challenging.

I have sent you an example of a working file in a zip with the correct settings. Basically, the setting for OPAL_TOOLS_JAVA_ARGS was missing double quotes, as in the following sample:

set PROJECT_ROOT=C:\Users\TRT Uros\Documents\auctim_rep
set OPAL_TOOLS_HOME_DIR=%PROJECT_ROOT%\01_Project_Standards\opal-tools
set OPAL_TOOLS_JAVA_ARGS=-Dlog4j.configurationFile="%OPAL_TOOLS_HOME_DIR%\conf\log4j2.xml" -Djava.util.logging.config.file="%OPAL_TOOLS_HOME_DIR%\conf\log4j.properties" -Dfile.encoding=UTF-8

Using encrypted passwords in the connection pools

Support for using encrypted passwords:

  • during installation the passwords are automatically encrypted
  • also, when the connection pool is loaded and the password is NOT encrypted, it will automatically be encrypted.
  • thus, you can later change the passwords in clear text ... and they will be encrypted on-the-fly

Also, during setup the echo for the passwords should be turned off or replaced with "***".
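
A minimal sketch of the encrypt-on-load behavior described above (the marker prefix, method names and the Base64 placeholder are assumptions; the real implementation would use a proper cipher):

import java.util.Base64;

// Hypothetical sketch: encrypt a clear-text password when the connection pool file is loaded.
public class PasswordHandling {

    private static final String ENCRYPTED_PREFIX = "{enc}"; // assumed marker, not the real format

    public static String ensureEncrypted(String password) {
        if (password.startsWith(ENCRYPTED_PREFIX)) {
            return password; // already encrypted, keep as-is
        }
        // clear-text password found: encrypt it on the fly, the caller
        // then writes the updated value back to the connection pool file
        return ENCRYPTED_PREFIX + encrypt(password);
    }

    private static String encrypt(String clearText) {
        // placeholder only: Base64 is an encoding, not encryption
        return Base64.getEncoder().encodeToString(clearText.getBytes());
    }
}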

export error with LOBs

While exporting I had errors only with the “LOB” and “JOB” object types (we have almost 100 jobs in that schema, none of them were exported), whereas schema2ddl exports them.

And a few LOBs:
export: TRANSLATIONS_LOB[LOB] => 20:03:17 (Exporter,ERROR) sql error: ORA-31600: invalid input value LOB for parameter OBJECT_TYPE in function GET_DDL (LOB was not exported with schema2ddl)
ORA-06512: at "SYS.DBMS_METADATA", line 6681
ORA-06512: at "SYS.DBMS_METADATA", line 6466
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.DBMS_METADATA_INT", line 4166
ORA-06512: at "SYS.DBMS_METADATA_INT", line 4643
ORA-06512: at "SYS.DBMS_METADATA_INT", line 9656
ORA-06512: at "SYS.DBMS_METADATA", line 6409
ORA-06512: at "SYS.DBMS_METADATA", line 6522
ORA-06512: at "SYS.DBMS_METADATA", line 9672
ORA-06512: at line 1

Omitting the "right" schema when exporting synonyms

In the pre-script we choose to NOT emit the schema name for the exported objects:

DBMS_METADATA.SET_TRANSFORM_PARAM( DBMS_METADATA.SESSION_TRANSFORM, 'EMIT_SCHEMA', false );        --undocumented remove schema

This is a good thing in most cases, e.g. it will turn

create table <schema>.<table>(column type);

into

create table <table>(column type);

But when you want to export synonyms referencing objects in different schemas, this is a problem.

Sure, the workaround would be to use (or remove the setting altogether):

DBMS_METADATA.SET_TRANSFORM_PARAM( DBMS_METADATA.SESSION_TRANSFORM, 'EMIT_SCHEMA', true );        --undocumented remove schema

but this will affect ALL objects and not just the synonyms.

This is NOT a bug but typically unwanted behavior. This is the default behavior of DBMS_METADATA.GET_DDL.

New command line switch to change the filename per object type

Currently, we have the following switches to influence the name and location of the generated file:

--extension-mappings default:log package:sql "package body:pkb"
--directory-mappings "package body:packages"
--filename-template #schema#/#object_type#/#object_name#.#ext#

This can all be unified into a single switch with a list of filename settings per object_type:

--filename-templates object_type:#schema#/#object_type#/#object_name#.#ext#

e.g.:

"ref constraint":#schema#/#object_type#/#object_name#_ref.#ext#
"package":#schema#/package/#object_name#.pks
"package body":#schema#/package/#object_name#.pkb
"default:#schema#/#object_type#/#object_name#.#ext#"

ToDo:

  • remove obsolete switches
  • create new switch --filename-templates
  • modify export shell scripts to pick up the change
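
A sketch of how the per-object-type templates could be resolved with a fallback to the default (the placeholder syntax follows the examples above; the helper itself is an assumption):

import java.util.Map;

// Hypothetical sketch: pick the filename template for an object type and fill in the placeholders.
public class FilenameTemplates {

    public static String resolve(Map<String, String> templates,
                                 String schema, String objectType, String objectName, String ext) {
        String template = templates.getOrDefault(objectType.toLowerCase(),
                templates.get("default"));
        return template
                .replace("#schema#", schema)
                .replace("#object_type#", objectType.toLowerCase())
                .replace("#object_name#", objectName)
                .replace("#ext#", ext);
    }
}

With the templates above, resolving a hypothetical object (schema jri_test, object type PACKAGE BODY, object name xlib_log) would yield jri_test/package/xlib_log.pkb.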

Performance optimization

Exporting from the database is slow when using --dependent-objects. It affects a combination of object types, but mostly tables and views.

With --dependent-objects:
*** The export finished in 3:28 [mm:ss] and exported 123/123 objects successfully.

Without --dependent-objects:
*** The export finished in 0:23 [mm:ss] and exported 123/123 objects successfully.

Tables and views are SLOW; packages, sequences, jobs and synonyms are fast.

Only packages:
*** The export finished in 1:15 [mm:ss] and exported 422/422 objects successfully.

Full export without --dependent-objects:
*** The export finished in 27:37 [mm:ss] and exported 9925/9928 objects successfully.

Full export with --dependent-objects (only a trigger on a table was removed):
*** The export finished in 55:50 [mm:ss] and exported 9867/9867 objects successfully.

Calling set-user-environment.cmd first

  • call set-user-environment.cmd first
  • during setup.cmd / setup.sh we will prompt for this file including the full pathname

This way you can use (depending on your use case):

  • putting the script on the system path so that it can be found without a path
  • using a relative path
  • using a path based on a system variable

Changing the connect string in the connection pool file

Here are some choices on how to specify the jdbc url for Oracle: https://www.razorsql.com/articles/oracle_jdbc_connect.html

ORDS prompts for everything separately: user, password, host, port, SID or service name.
SQLcl uses:
<connect_identifier> can be in the form of Net Service Name
or Easy Connect.

 @[<net_service_name> | [//]Host[:Port]/<service_name>]

e.g. sql scott/tiger@localhost:1521:xe

Since I store user and pwd separately, I guess I could allow:

<server>:<port>:<sid>
and
<server>:<port>/<service_name>

Then I would construct this in the code to:

jdbc:oracle:thin:@<server>:<port>:<sid>

Should I also make sure that the existing config files don't break? When I find "jdbc:oracle:thin:@" I would just remove it. I could allow both or I could even modify the file on the fly.
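
A minimal sketch of the normalization described above (an assumed helper, not the actual opal-tools code):

// Hypothetical sketch: accept <server>:<port>:<sid>, <server>:<port>/<service_name>
// or an already complete JDBC url, and always return a complete JDBC url.
public class ConnectString {

    private static final String JDBC_PREFIX = "jdbc:oracle:thin:@";

    public static String toJdbcUrl(String connectString) {
        if (connectString.startsWith(JDBC_PREFIX)) {
            // existing config files keep working unchanged
            return connectString;
        }
        // both the SID form and the service name form can simply be prefixed
        return JDBC_PREFIX + connectString;
    }
}

For example, toJdbcUrl("localhost:1521/XEPDB1") would return jdbc:oracle:thin:@localhost:1521/XEPDB1.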

opal-tools\bin\export scripts reference wrong schema

If you have multiple schemas, then during installation, when the opal-tools\bin export scripts are generated, the correct schema is not set; the first schema is used for all export scripts.

--connection-pool-name schema1 ^

is wrong for the other scripts; it should be schema2..n. The same applies to the setting that controls which folder to open at the end for schema2..n.

Enable hide/echo during setup for passwords

Sometimes it is better to enter passwords where nobody can see them in clear text, for example when you do the setup together with other people or have a screen share or screen recording running.

But most of the time it is better to see what you type, because then you are sure that the password is correct.

Thus, it could become a switch when running the setup command or you could be prompted whether you want to see the passwords or hide them during the setup procedure.
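
A minimal sketch of such a prompt using java.io.Console (readPassword suppresses the echo; System.console() returns null when no terminal is attached, so a visible fallback is needed; class and method names are assumptions):

import java.io.Console;
import java.util.Scanner;

// Hypothetical sketch: prompt for a password, either hidden or echoed, depending on a setup switch.
public class PasswordPrompt {

    public static String prompt(boolean hideInput) {
        Console console = System.console();
        if (hideInput && console != null) {
            // input is not echoed to the screen
            char[] pwd = console.readPassword("Password: ");
            return new String(pwd);
        }
        // fall back to a normal, visible prompt
        System.out.print("Password: ");
        return new Scanner(System.in).nextLine();
    }
}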

Create exporter module similar to scheme2ddl

Why not use the existing one?

  • easier to interface with installer
  • not flexible enough (e.g. specifying the objects that you want to export)
  • adding SQLcl support, specifically for exporting the APEX application TOGETHER with the PL/SQL code
  • working around certain exceptions in Oracle was not possible

Add command line switch --no-logging to installer

Currently, logs are always generated during validation and installation phases.
The validation logs don't add much value and thus could be omitted.

A switch will be added to suppress logs when calling the installer. This might also be helpful when you only want to record the installation on production systems but not test/qa environments.

Switch:

--no-logging

Default configuration for the export

What should the default configuration look like?

  • should indexes be part of the table script or not?
  • should indexes be both ... part of the table script AND exported separately?
  • should triggers be part of the table script or not?
  • should triggers be both ... part of the table script AND exported separately?

Performance optimization - dependent_ddl

Currently, for each dependent object type a separate call to get_dependent_ddl is issued. This is necessary because an ORA exception is thrown when no dependent data is available for that type; thus, I cannot easily do a union all for the dependent queries.

But I could generate a pl/sql block that executes the get_ddl statement, catches the exception and adds the result to the CLOB for the ddl.
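
A sketch of what such a generated block could look like, assembled here as a Java string (the object types, bind names and the broad exception handler are simplified assumptions):

import java.util.List;

// Hypothetical sketch: build one anonymous PL/SQL block that collects the dependent DDL
// for several object types and ignores the exception thrown when a type has no dependent data.
public class DependentDdlBlock {

    public static String build(List<String> dependentTypes) {
        StringBuilder sql = new StringBuilder();
        sql.append("declare\n");
        sql.append("  l_ddl clob := '';\n");
        sql.append("begin\n");
        for (String type : dependentTypes) {
            sql.append("  begin\n");
            sql.append("    l_ddl := l_ddl || dbms_metadata.get_dependent_ddl('")
               .append(type).append("', :object_name, :schema);\n");
            sql.append("  exception\n");
            sql.append("    when others then null; -- no dependent objects of this type\n");
            sql.append("  end;\n");
        }
        sql.append("  :result := l_ddl;\n");
        sql.append("end;");
        return sql.toString();
    }
}

The block would then be executed once per base object, with :object_name, :schema and the OUT bind :result supplied by the caller.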

Initial release

The initial release contains all the features we have been using for the last six months in two actual client projects.

Add config file

Create a new command line switch

--config-file <file>

for a Java properties file or a JSON file, in order to move the many command line switches into a single config file.

Benefit:

  • easier sharing of configs / layout
  • when people create multiple batch files, the basic config is put redundantly into many files; if anything changes, many places will have to be updated
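
A minimal sketch for the Java properties variant (property and class names are assumptions; a JSON variant would work analogously):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Hypothetical sketch: load defaults from --config-file, then let command line switches override them.
public class ConfigFileLoader {

    public static Properties load(String configFile, Properties commandLineArgs) throws IOException {
        Properties merged = new Properties();
        try (FileInputStream in = new FileInputStream(configFile)) {
            merged.load(in); // defaults from the config file
        }
        merged.putAll(commandLineArgs); // switches on the command line win
        return merged;
    }
}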

Replace * with % for filter elements

Currently, we are using LIKE statements to generate the object query. Therefore, we have to use % on the command line. This is somewhat problematic on Windows systems, because % is a special character and thus has to be escaped. This is done by doubling or sometimes quadrupling this character, e.g. XLIB_%%%% as the proper filter in a batch script.

Thus, we want to be able to use * in addition to % for the filter. If we encounter a *, we will automatically convert that to a %.
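
The conversion itself is a one-liner; a minimal sketch (the helper name is an assumption):

// Hypothetical sketch: allow * as an alternative wildcard and convert it to % for the LIKE query.
public class FilterWildcards {

    public static String toLikePattern(String filter) {
        // e.g. XLIB_* on the command line becomes XLIB_% in the generated query
        return filter.replace('*', '%');
    }
}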

Configure color coding of shell scripts in setup routine

We can use color codes in the different shell scripts to make it obvious into which environment we install the files, e.g. green for development, yellow for integration/test and red for production environments.

For Windows we use:

@REM Pick shell color using command "color <background><foreground>"
@REM e.g. black background with green foreground: "color 0A"
@REM call help for details "color ?"
@REM 
@REM green : "color 0A"
@REM yellow: "color 0E"
@REM red   : "color 0C"

For Linux / Mac systems we use:

# Pick shell tput setaf using command "tput setaf color"
# e.g. green foreground: "tput setaf 2"
# 
# green : "tput setaf 2"
# yellow: "tput setaf 3"
# red   : "tput setaf 1"

This can be prompted for during the setup script, e.g.:

List of shell colors for the environments (comma-separated, e.g. green,yellow,red)
  • The default should be: green,yellow,red
  • Those typical colors will be used internally and translated to the correct color code.
  • All other color codes will be inserted into the script files as is
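
A sketch of the translation the setup routine could perform, based on the codes listed above (class and method names are assumptions):

import java.util.Map;

// Hypothetical sketch: translate the well-known color names into the platform-specific codes,
// pass any other value through unchanged so it is inserted into the scripts as is.
public class ShellColors {

    private static final Map<String, String> WINDOWS = Map.of(
            "green", "0A", "yellow", "0E", "red", "0C");
    private static final Map<String, String> LINUX = Map.of(
            "green", "2", "yellow", "3", "red", "1");

    public static String windowsColorCommand(String color) {
        return "color " + WINDOWS.getOrDefault(color, color);
    }

    public static String linuxColorCommand(String color) {
        return "tput setaf " + LINUX.getOrDefault(color, color);
    }
}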

Refactoring: splitting the configuration json file into two separate classes

Currently there is only one class to read the JSON config file. Over time, the connection pool configuration has been separated from the opal-installer.json file, but under the hood a single class is still used for both. This leads to a number of issues, so the connection pools should be separated from the general config classes.
