Using the Database Migration Tool

For your convenience, we have included the documentation of the Database Migration Tool here. The content shown here is the same as that distributed with the tool itself.

Database schemas for the infrastructure functions:

The schemas must be checked with each new Cadenza version and updated if necessary. The Database Migration Tool is available for this purpose. A backup before using the tool is strongly recommended, as the structures of the schemas may change!

Aliases in JasperReports reports: In the SQL statements of data queries, the aliases are derived from the column names of the table (if a column name occurs in multiple tables, the aliases are numbered consecutively). Aliases or SQL statements may therefore have changed. It is consequently necessary to check whether aliases of the Cadenza SQL statement are used in integrated JasperReports reports and, if necessary, to adapt the reports.

Purpose

The Database Migration Tool can be used to automatically create the schema structures required by Cadenza. For existing production systems, it can be used to upgrade the schema structures to a state compatible with the Cadenza distribution with which the tool was released.

Configuration

The Database Migration Tool requires connection details such as username, password, jdbcURL, and schemaName to access the target database. These parameters can be provided as command-line parameters or in a configuration file. A configuration file can provide details for more than one database, so a single tool execution can handle the migration of a whole Cadenza deployment.

The Database Migration Tool can also use Cadenza and its configuration to migrate the databases without any extra configuration. To do so, use Option 3: Using Cadenza Configurations and provide the same environment variables/system properties for the migration tool that you set when running Cadenza.

Option 1: Using a Configuration File

Parameter Short name M/O Description and Notes

--config-file=<arg>

-cf

M

Configuration file path. This should be an absolute path. The format of the configuration is defined in Configuration File Format.

--mode=<arg>

-m

O

Operating mode, one of:

  • update (default) - in this mode the tool will apply all necessary changes to bring databases to the required schema version

  • validate - the tool will check if the configured databases are compatible with the current Cadenza version.

This validation is based solely on the migration changelog table within the schema. The tool verifies that all necessary migration scripts have been logged as executed successfully, but it does not check the existence or correctness of schema objects, such as tables or columns. Therefore, as long as the changelog table is up-to-date, it will not detect inconsistencies where a schema object was manually deleted or not created due to improper database restoration.
  • generateSQL - the tool will generate a separate SQL script for every provided database, containing all changes that would be applied in update mode. If a database is up-to-date, no SQL file is generated, as there are no changes to apply. This mode should only be used to preview the changes that update mode would apply to the database.

O = Optional; M = Mandatory

Configuration File Format

Example configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<dbMigrationConfiguration>
 <outputDirectory>/usr/local/generated</outputDirectory>
 <databases>
  <database>
    <id>repodb_postgresql</id>
    <jdbcURL>jdbc:postgresql://localhost:5432/db</jdbcURL>
    <user>user1</user>
    <password>$SYSTEM{REPODB_PASS}</password>
    <schemaName>repodb_schema</schemaName>
    <databaseUseCase>REPODB</databaseUseCase>
   </database>
   <database>
    <id>configdb_postgresql</id>
    <jdbcURL>jdbc:postgresql://localhost:5432/db</jdbcURL>
    <user>configdbuser</user>
    <password>d9bcdfadc8bc</password>
    <schemaName>configdb_schema</schemaName>
    <databaseUseCase>CONFIGDB</databaseUseCase>
   </database>
 </databases>
 <secretHandling>
   <processor>decode</processor>
 </secretHandling>
</dbMigrationConfiguration>

Configuration file properties:

  • outputDirectory - directory where all generated SQL scripts are stored. Relevant only for generateSQL mode. Must always be an absolute path

  • id - unique ID of the given database. Used as the name for the corresponding generated SQL file and in the program execution summary

  • jdbcURL - JDBC URL for a given database

  • user - Username to access the database. The provided user must have all privileges required to modify the database schema structure

  • password - Password to access the database. It is expected to be encoded in the same way as Cadenza Datasource passwords. If a variable is used, the variable is resolved first and the password is then decoded. Variable contents must therefore also be encoded

  • schemaName - Database schema name that should be migrated. If not specified, the default schema for the given database vendor is used. Due to a bug in the Oracle JDBC driver, the schema name should be shorter than 28 characters.

  • databaseUseCase - Database use case: one of

    • REPODB

    • CDS

    • AUTHENTICATION

    • AUDITLOG

    • JOBSCHEDULING

    • USERPREFS

    • CONFIGDB

    • MORATORIUM

  • processor - Optional setting that controls how secrets in the configuration file are interpreted. One of:

    • pass-through: Interpret all secrets as they are present in the configuration file without further processing.

    • decode: Interpret all secrets as obscured using PasswordEncoder. The default option.

The configuration file can contain references to environment and system variables. The supported format is the same as the one used in the Cadenza configuration, i.e. $SYSTEM{ENV_VARIABLE}.
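As a minimal sketch (the variable name and value below are placeholders), the variable referenced as $SYSTEM{REPODB_PASS} in the example configuration above simply has to exist in the tool's environment before execution:

```shell
# Export the variable that the configuration file references as
# $SYSTEM{REPODB_PASS}. The value itself is a placeholder and, like any
# password, is expected to be encoded in the same way as Cadenza Datasource
# passwords.
export REPODB_PASS="encoded-password-value"

# Verify the variable is visible to child processes such as the tool
echo "$REPODB_PASS"
```

The tool then substitutes the variable's contents when reading the configuration file.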

Option 2: Using Command-Line Parameters

Parameter Short name M/O Description and Notes

--mode=<arg>

-m

O

Operating mode, one of:

  • update (default) - in this mode the tool will apply all necessary changes to bring databases to the required schema version,

  • validate - the tool will check if the given databases are compatible with the given Cadenza version.

This validation is based solely on the migration changelog table within the schema. The tool verifies that all necessary migration scripts have been logged as executed successfully, but it does not check the existence or correctness of schema objects, such as tables or columns. Therefore, as long as the changelog table is up-to-date, it will not detect inconsistencies where a schema object was manually deleted or not created due to improper database restoration.
  • generateSQL - the tool will generate a separate SQL script for every provided database, containing all changes that would be applied in update mode. If a database is up-to-date, no SQL file is generated, as there are no changes to apply. This mode should only be used to preview the changes that update mode would apply to the database.

--connection=<arg>

-c

M

Database connection string. Ignored if configuration file is provided.

--username=<arg>

-u

M

Username to use during migration. Ignored if configuration file is provided.

--database-use-case=<arg>

-d

M

Database use case

--password=<arg>

-p

O

Connection password to use during migration; if not provided, the program will ask for it during execution. Ignored if configuration file is provided.

--secret-processor=<arg>

-sp

O

The secret processor to use for the password ('-p') parameter. One of 'pass-through', 'decode' (default). Ignored if configuration file is provided.

--schema=<arg>

-s

O

Database schema name; if skipped, the default one for the given database vendor will be used. Due to a bug in the Oracle JDBC driver, it should be shorter than 28 characters. Ignored if configuration file is provided.

--output-file=<arg>

-o

O

Output file name if mode is 'generateSQL', defaults to generated.sql in the current directory. Ignored if configuration file is provided.

O = Optional; M = Mandatory

Option 3: Using Cadenza Configurations

Parameter Short name M/O Description and Notes

--cadenza-mode

-cm

M

Uses the configuration from Cadenza.

--mode=<arg>

-m

O

Operating mode, one of:

  • update (default) - in this mode the tool will apply all necessary changes to bring databases to the required schema version,

  • validate - the tool will check if the given databases are compatible with the given Cadenza version. Exception for CDS: it will succeed only if REPODB is up-to-date.

This validation is based solely on the migration changelog table within the schema. The tool verifies that all necessary migration scripts have been logged as executed successfully, but it does not check the existence or correctness of schema objects, such as tables or columns. Therefore, as long as the changelog table is up-to-date, it will not detect inconsistencies where a schema object was manually deleted or not created due to improper database restoration.
  • generateSQL - the tool will generate a separate SQL script for every provided database, containing all changes that would be applied in update mode. If a database is up-to-date, no SQL file is generated, as there are no changes to apply. This mode should only be used to preview the changes that update mode would apply to the database. Exception for CDS: it will succeed only if REPODB is up-to-date.

Migrating a single database use case is also possible with --cadenza-mode by following Option 2: Using Command-Line Parameters and adding the --cadenza-mode parameter. There are some exceptions for the CDS database use case:

  • When you update the CDS, the REPODB is also updated.

  • When you validate the CDS, the REPODB is also validated, and only if REPODB succeeds can the status of the CDS be determined.

  • When you run generateSQL for the CDS, it will only work if REPODB is up-to-date.

Usage

Before execution

The Database Migration Tool should be used to apply all schema changes required by a given Cadenza version. As these changes often cannot be reverted, a database backup should be created before tool execution.

When the Database Migration Tool is running, it must not be killed under any circumstances. Leave the tool running until it finishes its execution; this prevents all problems related to the maintenance of database locks.

You will need to use the migration tool in a version matching your Cadenza distribution. As these versions change over time, we refer to the Cadenza version as <version> throughout this document. Please make sure to replace <version> with the correct version in the scripts below, otherwise they will fail.

To check which Cadenza version is supported by a given tool, please execute the following command:

$ DatabaseMigrationTool -v
Supported cadenza version: Cadenza <version>

Provide database drivers if needed

Right now, the Database Migration Tool contains the drivers for all supported databases except SAP Hana. If you have SAP Hana databases that you need to migrate, you first have to add the corresponding driver to the drivers folder of the Database Migration Tool. Please create the directory if it does not exist.

For SAP Hana, this is normally the ngdbc.jar.
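The steps above can be sketched as follows; the paths are placeholders, and the jar file is only created here as a stand-in for copying the real SAP Hana driver:

```shell
# Placeholder location of the Database Migration Tool installation
TOOL_HOME="./database-migration-tool-demo"

# Create the drivers directory if it does not exist
mkdir -p "${TOOL_HOME}/drivers"

# In a real setup you would copy the SAP Hana driver here, e.g.:
#   cp /path/to/ngdbc.jar "${TOOL_HOME}/drivers/"
# For this self-contained sketch we just create an empty stand-in file.
touch "${TOOL_HOME}/drivers/ngdbc.jar"

ls "${TOOL_HOME}/drivers"
```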

If the Database Migration Tool is run as a Docker container, the directory with the additional libraries has to be mounted into /usr/local/migration/drivers.

This can be achieved by adding the following parameter to the docker command:

--mount src="<somedir-with-jars>",target="/usr/local/migration/drivers",type=bind

Commandline application execution

The following examples show how the tool can be executed with a configuration file located in /usr/local/migration/conf/config.xml or with connection details passed as command-line arguments.

Note the absence of the password parameter in the command-line parameter examples. The user will be asked to enter the password. The password must not be encoded! This way, passwords won’t be stored in the history of executed commands.

Database validation

The tool was executed against two empty databases: repodb_postgresql and cds_postgresql. As both databases were empty, the execution result is marked as FAILED for both.

$ ./bin/DatabaseMigrationTool -m validate -cf /usr/local/migration/conf/config.xml
Execution results:
repodb_postgresql: FAILED
cds_postgresql: FAILED

The tool was executed against an empty database that will later be used as the database repository (repodb). The database was empty, so the execution result is marked as FAILED.

$ ./bin/DatabaseMigrationTool -c jdbc:postgresql://localhost:5432/test -u repodb -d repodb -m validate
Enter password for repodb user:
Execution results:
Database as cmd params: FAILED

SQL script generation

This mode is still in development, and we cannot yet recommend using it to generate a real upgrade script. Please use this mode only to generate a preview that allows you to verify what will happen in a migration. Use the update mode to perform the actual update of the schema.

The tool was executed against two empty databases: repodb_postgresql and cds_postgresql. Execution was successful for both databases. In the output directory, two files should have been created: repodb_postgresql.sql and cds_postgresql.sql.

$ ./bin/DatabaseMigrationTool -m generateSQL -cf /usr/local/migration/conf/config.xml
Execution results:
repodb_postgresql: SUCCESS
cds_postgresql: SUCCESS

The tool was executed against an empty database that will later be used as the database repository (repodb). The execution result is marked as SUCCESS. The program output shows the path to the file where the generated SQL script was stored.

$ ./bin/DatabaseMigrationTool -c jdbc:postgresql://localhost:5432/test -u repodb -d repodb -m generateSQL -o /home/user/database-migration-tool/Database_Migration_Tool/generated.sql
Enter password for repodb user:
2022-09-23T09:04:11,788 INFO  [main] net.disy.cadenza.database.migration.tool.DatabaseMigrationTool - Schema update SQL script successfully written to /home/user/database-migration-tool/Database_Migration_Tool/generated.sql.
2022-09-23T09:04:11,789 INFO  [main] net.disy.cadenza.database.migration.tool.DatabaseMigrationTool - Execution finished.
Execution results:
Database as cmd params: SUCCESS

Database migration

The tool was executed against two empty databases: repodb_postgresql and cds_postgresql. After successful execution both databases should have required schema structure.

$ ./bin/DatabaseMigrationTool -m update -cf /usr/local/migration/conf/config.xml
Execution results:
repodb_postgresql: SUCCESS
cds_postgresql: SUCCESS

The tool was executed against an empty database that will later be used as the database repository (repodb). The execution result is marked as SUCCESS; the database can now be used with the given Cadenza version.

$ ./bin/DatabaseMigrationTool -c jdbc:postgresql://localhost:5432/test -u repodb -d repodb -m update
2022-09-23T09:15:22,585 INFO  [main] net.disy.cadenza.database.migration.tool.DatabaseMigrationTool - Application started.
Enter password for repodb user:
2022-09-23T09:14:13,297 INFO  [main] net.disy.cadenza.database.migration.tool.DatabaseMigrationTool - Migration finished successfully for database: Database as cmd params.
2022-09-23T09:14:13,298 INFO  [main] net.disy.cadenza.database.migration.tool.DatabaseMigrationTool - Execution finished.
Execution results:
Database as cmd params: SUCCESS

The tool was executed in --cadenza-mode. The -cm parameter alone is sufficient, provided that the environment variables/system properties are set properly.

$ ./bin/DatabaseMigrationTool -cm
Execution results:
Database use case: REPODB, and url: jdbc:h2:file:/...: SUCCESS
Database use case: REPODB, and url: jdbc:h2:file:/...: SUCCESS
Database use case: AUTHENTICATION, and url: jdbc:h2:file:/...: SUCCESS
Database use case: JOBSCHEDULING, and url: jdbc:h2:file:/...: SUCCESS
Database use case: AUDITLOG, and url: jdbc:h2:file:/...: SUCCESS
Database use case: USERPREFS, and url: jdbc:h2:file:/...: SUCCESS
Database use case: CONFIGDB, and url: jdbc:h2:file:/...: SUCCESS
Database use case: CDS, and url: jdbc:h2:file:/...: SUCCESS

Docker image usage

The configuration file should always be mounted into the volume /usr/local/migration/configuration. The output directory/file should always be configured to the directory /usr/local/migration/generated.

The following examples show how the tool can be executed with a configuration file located in /usr/local/migration/conf/config.xml or with connection details passed as command-line arguments.

Note the absence of the password parameter in the examples. To allow user interaction with the running container, the image must be run with the -it flag. A password prompt then appears. The password must not be encoded! This way, passwords won’t be stored in the history of executed commands.

Database validation

The tool was executed against two empty databases: repodb_postgresql and cds_postgresql. As both databases were empty, both execution results are marked as FAILED. The configuration file was mounted into the volume /usr/local/migration/configuration.

$ docker run --rm -v /usr/local/migration/conf/config.xml:/usr/local/migration/configuration/config.xml cadenza/database-migration-tool:<version> -m validate -cf /usr/local/migration/configuration/config.xml
2022-09-23T09:12:57,735 INFO  [main] net.disy.cadenza.database.migration.tool.DatabaseMigrationTool - Execution finished.
Execution results:
repodb_postgresql: FAILED
cds_postgresql: FAILED

The tool was executed against an empty database that will later be used as the database repository (repodb). The database was empty, so the execution result is marked as FAILED.

$ docker run --rm -it cadenza/database-migration-tool:<version> -c jdbc:postgresql://localhost:5432/test -u repodb -d repodb -m validate
Enter password for repodb user:
Execution results:
Database as cmd params: FAILED

SQL script generation

This mode is still in development, and we cannot yet recommend using it to generate a real upgrade script. Please use this mode only to generate a preview that allows you to verify what will happen in a migration. Use the update mode to perform the actual update of the schema.

The tool was executed against two empty databases: repodb_postgresql and cds_postgresql. Execution was successful for both databases. In the mapped output directory two files should have been created: repodb_postgresql.sql and cds_postgresql.sql.

$ docker run --rm -v /usr/local/migration/conf/config.xml:/usr/local/migration/configuration/config.xml -v  /usr/local/migration/generated:/usr/local/migration/generated cadenza/database-migration-tool:<version> -m generateSQL -cf /usr/local/migration/configuration/config.xml
Execution results:
repodb_postgresql: SUCCESS
cds_postgresql: SUCCESS

The tool was executed against an empty database that will later be used as the database repository (repodb). The execution result is marked as SUCCESS. The program output shows the path to the file where the generated SQL script was stored.

$ docker run --rm -it -v /usr/local/migration/generated:/usr/local/migration/generated cadenza/database-migration-tool:<version> -c jdbc:postgresql://localhost:5432/test -u repodb -d repodb -m generateSQL -o /usr/local/migration/generated/generated.sql
2022-09-23T09:25:02,132 INFO  [main] net.disy.cadenza.database.migration.tool.DatabaseMigrationTool - Application started.
Enter password for repodb user:
2022-09-23T09:24:18,498 INFO  [main] net.disy.cadenza.database.migration.tool.DatabaseMigrationTool - Schema update SQL script successfully written to /usr/local/migration/generated/generated.sql.
2022-09-23T09:24:18,498 INFO  [main] net.disy.cadenza.database.migration.tool.DatabaseMigrationTool - Execution finished.
Execution results:
Database as cmd params: SUCCESS

Database migration

The tool was executed against two empty databases: repodb_postgresql and cds_postgresql. After successful execution, both databases should have the required schema structure applied. The configuration file was mounted into volume /usr/local/migration/configuration.

$ docker run --rm -v /usr/local/migration/conf/config.xml:/usr/local/migration/configuration/config.xml cadenza/database-migration-tool:<version> -m update -cf /usr/local/migration/configuration/config.xml
Execution results:
repodb_postgresql: SUCCESS
cds_postgresql: SUCCESS

The tool was executed against an empty database that will later be used as the database repository (repodb). The execution result is marked as SUCCESS; the database can now be used with the given Cadenza version.

$ docker run --rm -it cadenza/database-migration-tool:<version> -c jdbc:postgresql://localhost:5432/test -u repodb -d repodb -m update
2022-09-23T09:15:22,585 INFO  [main] net.disy.cadenza.database.migration.tool.DatabaseMigrationTool - Application started.
Enter password for repodb user:
2022-09-23T09:14:13,297 INFO  [main] net.disy.cadenza.database.migration.tool.DatabaseMigrationTool - Migration finished successfully for database: Database as cmd params.
2022-09-23T09:14:13,298 INFO  [main] net.disy.cadenza.database.migration.tool.DatabaseMigrationTool - Execution finished.
Execution results:
Database as cmd params: SUCCESS

The tool was executed in --cadenza-mode. The -cm parameter alone is sufficient, provided that the environment variables/system properties are set properly. You need to mount the configuration directory and set the corresponding environment variables/system properties. The example below shows both an environment variable and a system property.

$ docker run --rm --mount type=bind,source="/config-path/",target=/cadenza-config -e CADENZA_CONFIG_PATH=/cadenza-config --mount type=bind,source="/license-folder-path/",target=/license-folder -e JAVA_OPTS="-DCADENZA_LICENSE_CONFIG_FOLDER=file:/license-folder" -it cadenza/database-migration-tool:9.4.0 -cm
Application started.
Starting execution..
Executing migration for database: database use case: REPODB, and url: jdbc:h2:file:/.../.
Migration finished successfully for database: database use case: REPODB, and url: jdbc:h2:file:/.../.
Executing migration for database: database use case: REPODB, and url: jdbc:h2:file:/.../.
Migration finished successfully for database: database use case: REPODB, and url: jdbc:h2:file:/.../.
Executing migration for database: database use case: AUTHORIZATION, and url: jdbc:h2:file:/.../.
Migration finished successfully for database: database use case: AUTHORIZATION, and url: jdbc:h2:file:/.../.
Executing migration for database: database use case: AUTHENTICATION, and url: jdbc:h2:file:/.../.
Migration finished successfully for database: database use case: AUTHENTICATION, and url: jdbc:h2:file:/.../.
Executing migration for database: database use case: AUDITLOG, and url: jdbc:h2:file:/.../.
Migration finished successfully for database: database use case: AUDITLOG, and url: jdbc:h2:file:/.../.
Executing migration for database: database use case: JOBSCHEDULING, and url: jdbc:h2:file:/.../.
Migration finished successfully for database: database use case: JOBSCHEDULING, and url: jdbc:h2:file:/.../.
Executing migration for database: database use case: USERPREFS, and url: jdbc:h2:file:/.../.
Migration finished successfully for database: database use case: USERPREFS, and url: jdbc:h2:file:/.../.
Executing migration for database: database use case: CONFIGDB, and url: jdbc:h2:file:/.../.
Migration finished successfully for database: database use case: CONFIGDB, and url: jdbc:h2:file:/.../.
Executing migration for database: database use case: CDS, and url: jdbc:h2:file:/.../.
Migration finished successfully for database: database use case: CDS, and url: jdbc:h2:file:/.../.
Executing migration for database: database use case: CDS, and url: jdbc:h2:file:/.../.
Migration finished successfully for database: database use case: CDS, and url: jdbc:h2:file:/.../.
Execution finished.
Execution results:
Database use case: REPODB, and url: jdbc:h2:file:/...: SUCCESS
Database use case: REPODB, and url: jdbc:h2:file:/...: SUCCESS
Database use case: AUTHENTICATION, and url: jdbc:h2:file:/...: SUCCESS
Database use case: AUTHORIZATION, and url: jdbc:h2:file:/...: SUCCESS
Database use case: JOBSCHEDULING, and url: jdbc:h2:file:/...: SUCCESS
Database use case: AUDITLOG, and url: jdbc:h2:file:/...: SUCCESS
Database use case: USERPREFS, and url: jdbc:h2:file:/...: SUCCESS
Database use case: CONFIGDB, and url: jdbc:h2:file:/...: SUCCESS
Database use case: CDS, and url: jdbc:h2:file:/...: SUCCESS
Database use case: CDS, and url: jdbc:h2:file:/...: SUCCESS

Running the tool as a job on Kubernetes

If your Cadenza deployment is running on Kubernetes, it might be necessary to execute the migration tool as a pod on Kubernetes in order to use the same firewall rules as the application itself. This can be done using a Job definition together with a ConfigMap that contains the config.xml.

To run this job in your environment, please make sure to adjust all appropriate places in the ConfigMap and in the job definition:

ConfigMap

  • adjust all connection details and add additional CDS databases

  • jdbcURL: make sure to configure the correct database URL. Normally the database URLs used in Cadenza are the correct ones to use. If the database is running as a deployment in Kubernetes the hostname for the database URL is the Kubernetes service name.

  • password: the password needs to be encoded. See the section about "Encoding Passwords for Use with Cadenza" in the Administrator Documentation.

Job definition

  • replace the image name in the container definition with the correct image name for your environment and use the correct target version for your destination version (registry-ext.disy.net/cadenza/database-migration-tool:<exact-cadenza-target-version-here>-release)

  • this job only works for PostgreSQL and Oracle databases. If you want to use it to migrate any other supported database, you will need to ensure that the appropriate JDBC driver is available to the tool, either by creating your own image derived from the Disy image or by mounting the necessary JAR file to the appropriate place.

apiVersion: v1
kind: ConfigMap
metadata:
  name: migrationtool
data:
  config.xml: |
    <?xml version="1.0" encoding="UTF-8"?>
    <dbMigrationConfiguration>
      <outputDirectory>/tmp</outputDirectory>
      <databases>
        <database>
          <id>configdb</id>
          <jdbcURL>jdbc:postgresql://host:port/database</jdbcURL>
          <user>here-goes-the-user</user>
          <password>the-disy-encoded-password</password>
          <schemaName>here-goes-the-schema</schemaName>
          <databaseUseCase>CONFIGDB</databaseUseCase>
        </database>
        <database>
          <id>authentication</id>
          <jdbcURL>jdbc:postgresql://host:port/database</jdbcURL>
          <user>here-goes-the-user</user>
          <password>the-disy-encoded-password</password>
          <schemaName>here-goes-the-schema</schemaName>
          <databaseUseCase>AUTHENTICATION</databaseUseCase>
        </database>
        <database>
          <id>authorization</id>
          <jdbcURL>jdbc:postgresql://host:port/database</jdbcURL>
          <user>here-goes-the-user</user>
          <password>the-disy-encoded-password</password>
          <schemaName>here-goes-the-schema</schemaName>
          <databaseUseCase>AUTHORIZATION</databaseUseCase>
        </database>
        <database>
          <id>auditlog</id>
          <jdbcURL>jdbc:postgresql://host:port/database</jdbcURL>
          <user>here-goes-the-user</user>
          <password>the-disy-encoded-password</password>
          <schemaName>here-goes-the-schema</schemaName>
          <databaseUseCase>AUDITLOG</databaseUseCase>
        </database>
        <database>
          <id>dbrepo</id>
          <jdbcURL>jdbc:postgresql://host:port/database</jdbcURL>
          <user>here-goes-the-user</user>
          <password>the-disy-encoded-password</password>
          <schemaName>here-goes-the-schema</schemaName>
          <databaseUseCase>REPODB</databaseUseCase>
        </database>
        <database>
          <id>cds</id>
          <jdbcURL>jdbc:postgresql://host:port/database</jdbcURL>
          <user>here-goes-the-user</user>
          <password>the-disy-encoded-password</password>
          <schemaName>here-goes-the-schema</schemaName>
          <databaseUseCase>CDS</databaseUseCase>
        </database>
        <database>
          <id>userpreferences</id>
          <jdbcURL>jdbc:postgresql://host:port/database</jdbcURL>
          <user>here-goes-the-user</user>
          <password>the-disy-encoded-password</password>
          <schemaName>here-goes-the-schema</schemaName>
          <databaseUseCase>USERPREFS</databaseUseCase>
        </database>
        <database>
          <id>jobscheduling</id>
          <jdbcURL>jdbc:postgresql://host:port/database</jdbcURL>
          <user>here-goes-the-user</user>
          <password>the-disy-encoded-password</password>
          <schemaName>here-goes-the-schema</schemaName>
          <databaseUseCase>JOBSCHEDULING</databaseUseCase>
        </database>
      </databases>
    </dbMigrationConfiguration>
---
apiVersion: batch/v1
kind: Job
metadata:
  name: migrationtool
spec:
  template:
    metadata:
      name: migrationtool
    spec:
      securityContext:
        fsGroup: 2000
      containers:
      - name: migrationtool
        image: registry-ext.disy.net/cadenza/database-migration-tool:<exact-cadenza-target-version-here>-release
        args:
        # - "-m"
        # - "validate"
        - "-cf"
        - "/config/config.xml"
        resources:
          requests:
            cpu: 1000m
            memory: 1Gi
          limits:
            cpu: 1000m
            memory: 1Gi
        volumeMounts:
        - name: migrationtool-config
          mountPath: /config
      restartPolicy: Never
      volumes:
      - name: migrationtool-config
        configMap:
          name: migrationtool
  backoffLimit: 3 # Number of retries

Troubleshooting

For more detailed information about errors, please check the log files in the logs directory.
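As a minimal sketch (the log file name below is hypothetical; use whatever files you find in the logs directory), error entries can be filtered out with grep:

```shell
# Create a stand-in log file so this sketch is self-contained; in practice the
# tool writes its own log files into the logs directory.
mkdir -p logs
printf '2022-09-23 INFO  Application started.\n2022-09-23 ERROR Connection refused\n' > logs/database-migration-tool.log

# Show only the error entries of the log
grep "ERROR" logs/database-migration-tool.log
```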