Yann Neuhaus

dbi services technical blog

Dbvisit Standby 9 : Do you know the new snapshot feature?

Thu, 2020-01-16 10:23

The Dbvisit snapshot option is a new feature available starting from version 9.0.06. I have tested this option and in this blog I describe the tasks I performed.
The configuration I am using is the following:
dbvisit1 : primary server
dbvisit2 : standby server
orcl : Oracle 19c database
We suppose that the dbvisit environment is already set up and that the replication is running fine. See our previous blog for setting up Dbvisit Standby.
There are two options:
-Snapshot Groups
-Single Snapshots

Snapshot Groups
This option is ideal for companies using Oracle Standard Edition (SE, SE1, SE2) that would like to have a “database that is read-only, but also being kept up to date”. It allows you to create 2 or more read-only snapshots of the standby at a time interval. Each snapshot is opened in read-only mode after its creation. A new service pointing to the latest snapshot is created and is automatically registered with the listener.
Using the feature is very easy. As it requires managing filesystems (mount and umount), the user creating the snapshots needs some admin privileges.
In our case we have configured the user with full sudo privileges, but you can find all required privileges in the documentation.

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] sudo grep oracle /etc/sudoers
oracle ALL=(ALL) NOPASSWD:ALL
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

The option can be used from the command line or from the graphical dbvserver console, but it is highly recommended to use the graphical tool.
When the snapshot option is available with your license, you will see the following tab in the dbvserver console

To create a snapshot group we just have to choose the option NEW SNAPSHOT GROUP

And then fill in the required information. In this example:
-We will create a service named snap_service
-We will generate a snapshot every 15 minutes (900 seconds)
-There will be a maximum of 2 snapshots. When the limit is reached, the oldest snapshot is deleted, but only once the new snapshot has been successfully created. That's why we can sometimes see 3 snapshots.
-The snapshots will be prefixed by MySna

Once the snapshot group is created without errors, we have to start the snapshot generation by clicking on the green button

And then we can see the status

At OS level we can verify that the first snapshot was created and that the corresponding instance is started

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] ps -ef | grep pmon
oracle    2300     1  0 09:05 ?        00:00:00 ora_pmon_orcl
oracle    6765     1  0 09:57 ?        00:00:00 ora_pmon_MySna001
oracle    7794  1892  0 10:00 pts/0    00:00:00 grep --color=auto pmon
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

We can also verify that the service snap_service is registered in the listener. This service will be automatically pointed to the latest snapshot of the group.

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] lsnrctl status

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 16-JAN-2020 10:01:49

Copyright (c) 1991, 2019, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbvisit2)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                16-JAN-2020 09:05:07
Uptime                    0 days 0 hr. 56 min. 42 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/dbvisit2/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbvisit2)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
…
…
…
Service "snap_service" has 1 instance(s).
  Instance "MySna001", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

To connect to this service, we just have to create an alias like the following:

snapgroup =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbvisit2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = snap_service)
    )
  )

15 minutes later we can see that a new snapshot was generated

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] ps -ef | grep pmon
oracle    2300     1  0 09:05 ?        00:00:00 ora_pmon_orcl
oracle    6765     1  0 09:57 ?        00:00:00 ora_pmon_MySna001
oracle   11355     1  0 10:11 ?        00:00:00 ora_pmon_MySna002
oracle   11866  1892  0 10:13 pts/0    00:00:00 grep --color=auto pmon
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

Note that we can only open the snapshot in read-only mode

oracle@dbvisit1:/home/oracle/ [orcl (CDB$ROOT)] sqlplus sys/*****@snapgroup as sysdba

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           MOUNTED
         4 PDB2                           MOUNTED
SQL> alter pluggable database all open;
alter pluggable database all open
*
ERROR at line 1:
ORA-65054: Cannot open a pluggable database in the desired mode.


SQL>  alter pluggable database all open read only;

Pluggable database altered.

Of course you have probably seen that you can stop, start, pause and remove snapshots from the GUI.

Single Snapshot
This option allows creating read-only as well as read-write snapshots of the database.
To create a single snapshot, we just have to choose the option NEW SINGLE SNAPSHOT.
In the following example the snapshot will be opened in read-write mode

At the end of the creation we can see the status

We can verify that a service SingleSn was also created

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] lsnrctl status | grep Singl
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
Service "SingleSn" has 1 instance(s).
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
  Instance "SingleSn", status READY, has 1 handler(s) for this service

And that the instance SingleSn is started

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] ps -ef | grep pmon
oracle    3294     1  0 16:04 ?        00:00:00 ora_pmon_SingleSn
oracle    3966  1748  0 16:08 pts/0    00:00:00 grep --color=auto pmon
oracle   14349     1  0 13:57 ?        00:00:00 ora_pmon_orcl
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

We can also note that at OS level, a filesystem is mounted for the snapshot. So there must be enough free space in the LVM volume group that hosts the database.

oracle@dbvisit2:/home/oracle/ [MySna004 (CDB$ROOT)] df -h | grep snap
/dev/mapper/ora_data-SingleSn   25G   18G  6.3G  74% /u01/app/dbvisit/standby/snap/orcl/SingleSn
oracle@dbvisit2:/home/oracle/ [MySna004 (CDB$ROOT)]

Using the alias

singlesnap =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbvisit2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = SingleSn)
    )
  )

We can see that the new snapshot is opened in read-write mode

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
SINGLESN  READ WRITE

SQL>
Conclusion

What we will retain is that a snapshot group is a group of snapshots taken at a regular time interval; these snapshots can only be opened in read-only mode.
A single snapshot can be created in read-only or read-write mode. When opened in read-write mode, it can somewhat be compared to an Oracle snapshot standby.
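For comparison, here is a minimal sketch of how an Oracle snapshot standby (an Enterprise Edition / Data Guard feature) is handled; it assumes a mounted physical standby with managed recovery stopped:

SQL> alter database convert to snapshot standby;
SQL> alter database open;
SQL> -- later, to discard all changes made in read-write mode and resume recovery:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database convert to physical standby;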
The Dbvisit snapshot option requires a license and is only supported on Linux.


ARRAYSIZE or ROWPREFETCH in sqlplus?

Wed, 2020-01-08 12:15

What is the difference between the well-known sqlplus setting ARRAYSIZE and the new SQL*Plus 12.2 feature ROWPREFETCH? In the blog
https://blog.dbi-services.com/oracle-fasttrue-in-sqlplus-some-thoughts-about-rowprefetch/ I showed a case where ROWPREFETCH helps to reduce the logical IOs.

Here are the definitions of ARRAYSIZE and ROWPREFETCH according to the documentation:

arraysize:

SET System Variable Summary: Sets the number of rows, called a batch, that SQL*Plus will fetch from the database at one time. Valid values are 1 to 5000. A large value increases the efficiency of queries and subqueries that fetch many rows, but requires more memory. Values over approximately 100 provide little added performance. ARRAYSIZE has no effect on the results of SQL*Plus operations other than increasing efficiency

About SQL*Plus Script Tuning: The effectiveness of setting ARRAYSIZE depends on how well Oracle Database fills network packets and your network latency and throughput. In recent versions of SQL*Plus and Oracle Database, ARRAYSIZE may have little effect. Overlarge sizes can easily take more SQL*Plus memory which may decrease overall performance.

REMARK: The ARRAYSIZE setting also has an impact on the COPY command together with the COPYCOMMIT setting (a commit is performed every n ARRAYSIZE batches of records).

rowprefetch:

SET System Variable Summary: Sets the number of rows that SQL*Plus will prefetch from the database at one time. The default value is 1 (max is 32767).
Note: The amount of data contained in the prefetched rows should not exceed the maximum value of 2147483648 bytes (2 Gigabytes). The setting in the oraaccess.xml file can override the SET ROWPREFETCH setting in SQL*Plus.

Differences between ARRAYSIZE and ROWPREFETCH

During my tests, one of the important differences between ARRAYSIZE and ROWPREFETCH turned out to be that ROWPREFETCH allows Oracle to transfer query results already on return from its internal OCI execute call. I.e. in a 10046 trace the first FETCH shows ROWPREFETCH rows fetched, regardless of the ARRAYSIZE setting. E.g. with the default settings of ROWPREFETCH 1 and ARRAYSIZE 15 I can see the following numbers of rows fetched (see the r= in the trace):

FETCH #139623638001936:c=448,e=1120,p=0,cr=8,cu=0,mis=0,r=1,dep=0,og=1,plh=3403427028,tim=110487525476
...
FETCH #139623638001936:c=66,e=66,p=0,cr=1,cu=0,mis=0,r=15,dep=0,og=1,plh=3403427028,tim=110487525830
...
FETCH #139623638001936:c=15,e=15,p=0,cr=1,cu=0,mis=0,r=15,dep=0,og=1,plh=3403427028,tim=110487526093
...

I.e. 1, 15, 15,…

With ROWPREFETCH 3, ARRAYSIZE 15 the rows fetched are 3, 15, 15, …

The following table shows the number of rows fetched with different settings of ROWPREFETCH and ARRAYSIZE from a query that returns 70 rows:


ROWPREFETCH ARRAYSIZE ROWS_FETCH1 ROWS_FETCH2 ROWS_FETCH3 ROWS_FETCH4 ROWS_FETCH5 ROWS_FETCH6 ROWS_FETCH7 ROWS_FETCH8 
 1          15         1          15          15          15          15          9
 2          15         2          15          15          15          15          8
20          15        20          30          20
16          15        16          30          24
 6           5         6          10          10          10          10          10          10          4
 9           5         9          10          10          10          10          10          10          1
10          10        10          20          20          20          0
10           5        10          15          15          15          15          0
16           3        16          18          18          18          0

We can see 3 things here:
- The first FETCH (from the internal OCI execute) always contains the number of rows defined by the ROWPREFETCH setting.
- The second FETCH (and all subsequent fetches) contains a multiple of the ARRAYSIZE setting. The following code fragment should show the logic (see the sanity check after this list):

2nd_Fetch_Rows = if ROWPREFETCH < ARRAYSIZE 
                 then ARRAYSIZE 
                 else (TRUNC(ROWPREFETCH/ARRAYSIZE)+1)*ARRAYSIZE


- If a fetch does not detect the end of the data in the cursor, an additional fetch is necessary. In 3 cases above, the last fetch returned 0 rows.
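
The formula can be quickly sanity-checked against the table above with plain SQL arithmetic, e.g. for ROWPREFETCH 10 and ARRAYSIZE 5:

SQL> select case when 10 < 5 then 5
  2              else (trunc(10/5)+1)*5
  3         end as second_fetch_rows
  4  from dual;
 
SECOND_FETCH_ROWS
-----------------
               15

which matches the 15 rows of the second fetch for that combination in the table.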

Memory required by the client

With the Linux pmap command I checked how much memory the client requires for different ROWPREFETCH and ARRAYSIZE settings.

Testcase:


SQL> connect cbleile/cbleile@orclpdb1
Connected.
SQL> create table big_type (a varchar2(2000), b varchar2(2000), c varchar2(2000), d varchar2(2000), e varchar2(2000));
 
Table created.
 
SQL> insert into big_type select 
  2  rpad('X',2000,'Y'),
  3  rpad('X',2000,'Y'),
  4  rpad('X',2000,'Y'),
  5  rpad('X',2000,'Y'),
  6  rpad('X',2000,'Y') from xmltable('1 to 4100');
 
4100 rows created.
 
SQL> commit;
 
Commit complete.
 
SQL> exec dbms_stats.gather_table_stats(user,'BIG_TYPE');
SQL> select avg_row_len from tabs where table_name='BIG_TYPE';
 
AVG_ROW_LEN
-----------
      10005

Before the test:


oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] ps -ef | grep sqlplus
oracle    31537  31636  3 17:49 pts/2    00:01:20 sqlplus   as sysdba
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 31537 | grep anon
0000000000be6000   1580K rw---   [ anon ]
00007f7efcda6000    516K rw---   [ anon ]
...
 
SQL> show rowprefetch arraysize
rowprefetch 1
arraysize 15
SQL> set arraysize 1000 pages 2 pause on lines 20000
SQL> select * from big_type;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 31537 | grep anon
0000000000be6000   1580K rw---   [ anon ]
00007f7efc40f000  10336K rw---   [ anon ]
...
 
Ctrl-C <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 31537 | grep anon
0000000000be6000   1580K rw---   [ anon ]
00007f7efcda6000    516K rw---   [ anon ]
 
SQL> set arraysize 1
SQL> set rowprefetch 1000
SQL> select * from big_type;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 31537 | grep anon
0000000000be6000  12664K rw---   [ anon ]
00007f7efcda6000    516K rw---   [ anon ]
 
Ctrl-C <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 31537 | grep anon
0000000000be6000  12664K rw---   [ anon ]
00007f7efcda6000    516K rw---   [ anon ]
 
SQL> set rowprefetch 1
SQL> set arraysize 1000
SQL> select * from big_type;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 31537 | grep anon
0000000000be6000  22472K rw---   [ anon ]
00007f7efcda6000    516K rw---   [ anon ]
 
Ctrl-C <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 31537 | grep anon
0000000000be6000  12660K rw---   [ anon ]
00007f7efcda6000    516K rw---   [ anon ]
 
SQL> set rowprefetch 501 arraysize 500 pages 502
SQL> select * from big_type;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 31537 | grep anon
0000000000be6000  17568K rw---   [ anon ]
00007f7efcda6000    516K rw---   [ anon ]

New table with just 1 Byte per column:


SQL> create table big_type_small_data as select * from big_type where 1=2;
 
Table created.
 
SQL> insert into  big_type_small_data select 'X','X','X','X','X' from big_type;
 
4100 rows created.
 
SQL> commit;
 
Commit complete.
 
SQL> exec dbms_stats.gather_table_stats(user,'BIG_TYPE_SMALL_DATA');
 
PL/SQL procedure successfully completed.
 
SQL> select avg_row_len from tabs where table_name='BIG_TYPE_SMALL_DATA';
 
AVG_ROW_LEN
-----------
	 10
 
Client-Memory before the test:
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 14193 | grep anon
00000000014d1000   1580K rw---   [ anon ]
00007f3c3b8d2000    516K rw---   [ anon ]
 
SQL> show rowprefetch
rowprefetch 1
SQL> show array
arraysize 15
SQL> set arraysize 1000 rowprefetch 1 pages 2 pause on lines 20000
SQL> select * from big_type_small_data;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 14193 | grep anon
00000000014d1000   1580K rw---   [ anon ]
00007f3c3af3b000  10336K rw---   [ anon ]
 
--> 9.6MB allocated. 
 
Ctrl-C <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 14193 | grep anon
00000000014d1000   1580K rw---   [ anon ]
00007f3c3b8d2000    516K rw---   [ anon ]
 
--> All memory released.
 
SQL> set arraysize 1 rowprefetch 1000
SQL> select * from big_type_small_data;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 14193 | grep anon
00000000014d1000   1852K rw---   [ anon ]
00007f3c3b8d2000    516K rw---   [ anon ]
 
--> Only 272K allocated.
 
Ctrl-C <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 15227 | grep anon
0000000001fba000   1852K rw---   [ anon ]
00007f38c5ef9000    516K rw---   [ anon ]
 
--> Memory not released.
 
Back to previous setting:
SQL> set arraysize 1000 rowprefetch 1
SQL> select * from big_type_small_data;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 15227 | grep anon
0000000001fba000  11644K rw---   [ anon ]
00007f38c5ef9000    516K rw---   [ anon ]
 
--> 9.6MB additional memory allocated.
 
Ctrl-C <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 15227 | grep anon
0000000001fba000   1832K rw---   [ anon ]
00007f38c5ef9000    516K rw---   [ anon ]
 
--> Memory released, but not to the initial value. I.e. it seems the memory for the rowprefetch is still allocated.
 
Back to the settings with rowprefetch:
 
SQL> set arraysize 1 rowprefetch 1000
SQL> select * from big_type_small_data;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 15227 | grep anon
0000000001fba000   1832K rw---   [ anon ]
00007f38c5ef9000    516K rw---   [ anon ]
 
--> It obviously reused the previous memory.
 
SQL> set arraysize 500 rowprefetch 501 pages 503
SQL> select * from big_type_small_data;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 15227 | grep anon
0000000001fba000   6752K rw---   [ anon ]
00007f38c5ef9000    516K rw---   [ anon ]
 
--> Relogin
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 16334 | grep anon
00000000010b0000   1580K rw---   [ anon ]
00007ff8cc272000    516K rw---   [ anon ]
 
SQL> set arraysize 500 rowprefetch 501 pages 503 pause on lines 20000
SQL> select * from big_type_small_data;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 16334 | grep anon
00000000010b0000   1720K rw---   [ anon ]
00007ff8cbda4000   5436K rw---   [ anon ]
 
Ctrl-C <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 16334 | grep anon
00000000010b0000   1720K rw---   [ anon ]
00007ff8cc272000    516K rw---   [ anon ]
 
SQL> select * from big_type_small_data;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 16334 | grep anon
00000000010b0000   6608K rw---   [ anon ]
00007ff8cc272000    516K rw---   [ anon ]
 
Ctrl-C <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 16334 | grep anon
00000000010b0000   6608K rw---   [ anon ]
00007ff8cc272000    516K rw---   [ anon ]
 
--> This time the memory for the arraysize has not been released.
 
--> Relogin
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 17005 | grep anon
0000000001c40000   1580K rw---   [ anon ]
00007f90ea747000    516K rw---   [ anon ]
 
SQL> set arraysize 1 rowprefetch 32767 pages 3 pause on lines 20000
SQL> select * from big_type_small_data;
 
--> 2 times <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 17005 | grep anon
0000000001c40000   8312K rw---   [ anon ]
00007f90ea6a2000   1176K rw---   [ anon ]
 
Ctrl-C <RETURN>
 
oracle@oracle-19c-vagrant:/home/oracle/Blogs/rowprefetch_test/ [ORCLCDB (CDB$ROOT)] pmap 17005 | grep anon
0000000001c40000   8308K rw---   [ anon ]
00007f90ea6a2000   1176K rw---   [ anon ]
 
--> almost nothing released.

So the tests showed that ARRAYSIZE allocates more memory than ROWPREFETCH (i.e. it allocates according to the data type sizes and not according to the actual data in the columns), but in contrast to ROWPREFETCH, the memory is (often) released with ARRAYSIZE once the SQL has finished fetching.

Summary

So when should ROWPREFETCH and ARRAYSIZE be used? As with all fetch-size settings (e.g. for the JDBC driver), both can be used to reduce the number of network roundtrips and logical IOs on the DB when lots of data has to be transferred between the server and the client. According to my tests, ROWPREFETCH requires less memory on the client, but does not release the memory after the query has finished. ARRAYSIZE requires more memory, but often releases it once the query has finished. ROWPREFETCH = 2 is very useful when a query returns only 1 row, because the row is returned with the internal OCI execute call (the first fetch in the 10046 trace) and no subsequent fetch is required to realize that all data has already been fetched. I.e. it saves 1 network roundtrip.

A good compromise is the use of

ROWPREFETCH = 2
ARRAYSIZE = 100

That setting is actually also used when starting sqlplus with -F(AST). If lots of data has to be transferred to the client, then higher ROWPREFETCH or ARRAYSIZE settings can be used to reduce the number of logical IOs and network roundtrips. But the best setting also depends on the amount of data to transfer per row, and the client memory requirements may vary with higher ROWPREFETCH or ARRAYSIZE settings, e.g. depending on whether sqlplus runs a batch job with many queries or only a few. As usual, the best setting when transferring lots of data through sqlplus has to be found by testing the queries and scripts of your environment with different settings.
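
If you want such settings without specifying -F(AST) on every start, they can, for example, be put into your login.sql or glogin.sql (just a sketch; the values below are the -FAST defaults and should be adapted to your workload):

-- e.g. in login.sql / glogin.sql
set rowprefetch 2
set arraysize 100
set lobprefetch 16384
set statementcache 20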


Oracle FAST=TRUE in sqlplus? Some thoughts about rowprefetch

Sun, 2020-01-05 16:56

During my time as a consultant working on tuning tasks, I had the feeling that many people think there is an Oracle parameter “FAST=TRUE” to speed up the performance and throughput of database calls. Unfortunately such a parameter is not available, but since version 12cR2 Oracle provides the option “-F” or “-FAST” for sqlplus, which looks like a “FAST=TRUE” setting. Here is an excerpt from the documentation:


The FAST option improves general performance. This command line option changes the values of the following default settings:
 
- ARRAYSIZE = 100
- LOBPREFETCH = 16384
- PAGESIZE = 50000
- ROWPREFETCH = 2
- STATEMENTCACHE = 20

I was interested in where the rowprefetch-setting could result in an improvement.

The documentation about rowprefetch is as follows:


SET ROWPREFETCH {1 | n}
 
Sets the number of rows that SQL*Plus will prefetch from the database at one time. The default value is 1.
 
Example
 
To set the number of prefetched rows to 200, enter
 
SET ROWPREFETCH 200
 
If you do not specify a value for n, the default is 1 row. This means that rowprefetching is off.
 
Note: The amount of data contained in the prefetched rows should not exceed the maximum value of 2147483648 bytes (2 Gigabytes). The setting in the oraaccess.xml file can override the SET ROWPREFETCH setting in SQL*Plus. For more information about oraaccess.xml, see the Oracle Call Interface Programmer's Guide.

A simple test where rowprefetch can make a difference is the use of hash clusters (see the Buffers column in the execution plan below). E.g.


SQL> create cluster DEMO_CLUSTER(CUST_ID number) size 4096 single table hashkeys 1000 ;
 
Cluster created.
 
SQL> create table DEMO cluster DEMO_CLUSTER(CUST_ID) as select * from CUSTOMERS;
 
Table created.
 
SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
 
PL/SQL procedure successfully completed.
 
SQL> select num_rows,blocks from user_tables where table_name='DEMO';
 
  NUM_ROWS     BLOCKS
---------- ----------
     55500	 1035
 
SQL> show rowprefetch
rowprefetch 1
SQL> select /*+ gather_plan_statistics */ rowid,cust_id from DEMO where cust_id=101;
 
ROWID		      CUST_ID
------------------ ----------
AAAR4qAAMAAAAedAAA	  101
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
-----------------------------------------
SQL_ID	9g2nyr9h2ytk4, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ rowid,cust_id from DEMO where
cust_id=101
 
Plan hash value: 3286081706
 
------------------------------------------------------------------------------------
| Id  | Operation	  | Name | Starts | E-Rows | A-Rows |	A-Time	 | Buffers |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |	 |	1 |	   |	  1 |00:00:00.01 |	 2 |
|*  1 |  TABLE ACCESS HASH| DEMO |	1 |	 1 |	  1 |00:00:00.01 |	 2 |
------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - access("CUST_ID"=101)
 
SQL> set rowprefetch 2
SQL> select /*+ gather_plan_statistics */ rowid,cust_id from DEMO where cust_id=101;
 
ROWID		      CUST_ID
------------------ ----------
AAAR4qAAMAAAAedAAA	  101
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
-----------------------------------------
SQL_ID	9g2nyr9h2ytk4, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ rowid,cust_id from DEMO where
cust_id=101
 
Plan hash value: 3286081706
 
------------------------------------------------------------------------------------
| Id  | Operation	  | Name | Starts | E-Rows | A-Rows |	A-Time	 | Buffers |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |	 |	1 |	   |	  1 |00:00:00.01 |	 1 |
|*  1 |  TABLE ACCESS HASH| DEMO |	1 |	 1 |	  1 |00:00:00.01 |	 1 |
------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - access("CUST_ID"=101)

Due to the prefetch of 2 rows, Oracle detects that there is actually only 1 row and avoids the second logical IO (a second fetch).
If cust_id is unique, then I would have created a unique (or primary) key constraint here, which would avoid a second fetch as well (because Oracle knows from the constraint that there can be at most 1 row per cust_id), but in that case I have to maintain the created index.
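
Just as a sketch of that alternative (the constraint name is only an example):

SQL> alter table DEMO add constraint DEMO_CUST_ID_UK unique (CUST_ID);
 
Table altered.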

I made a couple of tests comparing the behaviour with different settings of ROWPREFETCH and ARRAYSIZE in sqlplus (what is actually the difference between the 2 settings?). That will be the subject of a future blog.


Documentum – Java exception stack on iAPI/iDQL login

Sun, 2020-01-05 02:00

Recently, I was doing some sanity checks on a Documentum Server and I saw a Java exception stack while logging in to a Repository using iAPI/iDQL. It was reproducible for all Repositories. I had never seen something like that before (or at least I don't remember it), so I was a little bit surprised. Whenever there are errors upon login, it is usually Documentum error messages that are printed, without any exception stack. Since it took me some effort to find the root cause, I thought about sharing it.

The exception stack displayed was the following one:

[dmadmin@cs-0 ~]$ echo "quit" | iapi gr_repo -Udmadmin -Pxxx

        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2016
        All rights reserved.
        Client Library Release 7.3.0040.0025

Connecting to Server using docbase gr_repo
DfException:: THREAD: main; MSG: [DM_SESSION_I_SESSION_START]info:  "Session 0112d687800c9214 started for user dmadmin."; ERRORCODE: 100; NEXT: null
        at com.documentum.fc.client.impl.docbase.DocbaseExceptionMapper.newException(DocbaseExceptionMapper.java:57)
        at com.documentum.fc.client.impl.connection.docbase.MessageEntry.getException(MessageEntry.java:39)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseMessageManager.getExceptionForAllMessages(DocbaseMessageManager.java:176)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.getExceptionForAllMessages(DocbaseConnection.java:1518)
        at com.documentum.fc.client.impl.session.Session.getExceptionsForAllMessages(Session.java:1603)
        at com.documentum.fc.client.impl.session.SessionHandle.getExceptionsForAllMessages(SessionHandle.java:1301)
        at com.documentum.dmcl.impl.ApiContext.addMessages(ApiContext.java:423)
        at com.documentum.dmcl.impl.ApiContext.collectExceptionsForReporting(ApiContext.java:370)
        at com.documentum.dmcl.impl.GetMessageHandler.get(GetMessageHandler.java:23)
        at com.documentum.dmcl.impl.DmclApi.get(DmclApi.java:49)
        at com.documentum.dmcl.impl.DmclApiNativeAdapter.get(DmclApiNativeAdapter.java:145)
        at com.documentum.dmcl.impl.DmclApiNativeAdapter.get(DmclApiNativeAdapter.java:130)


Connected to Documentum Server running Release 7.3.0050.0039  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@cs-0 ~]$

 

The login was successful but still, a strange exception stack appeared. The first thing I did was to check the Repository log file, but there was nothing out of the ordinary in it except for one thing:

[dmadmin@cs-0 ~]$ cd $DOCUMENTUM/dba/log
[dmadmin@cs-0 log]$
[dmadmin@cs-0 log]$ grep -A3 "Agent Exec" gr_repo.log
Wed Sep 11 10:38:29 2019 [INFORMATION] [AGENTEXEC 1477] Detected during program initialization: Agent Exec connected to server gr_repo:  DfException:: THREAD: main; MSG: [DM_SESSION_I_SESSION_START]info:  "Session 0112d687800c8904 started for user dmadmin."; ERRORCODE: 100; NEXT: null
        at com.documentum.fc.client.impl.docbase.DocbaseExceptionMapper.newException(DocbaseExceptionMapper.java:57)
        at com.documentum.fc.client.impl.connection.docbase.MessageEntry.getException(MessageEntry.java:39)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseMessageManager.getExceptionForAllMessages(DocbaseMessageManager.java:176)
        at com.documentu
[dmadmin@cs-0 log]$

 

While starting, the Agent Exec was therefore facing the same behavior, with the exact same stack (it is cut at the 4th line, but it is the same stack until then, so it is safe to assume it is the same). To dig deeper and find when the issue started exactly, I checked the logs from the agentexec jobs, since these are kept until removed by the log purge and since the jobs do log in to the Repository:

[dmadmin@cs-0 log]$ cd $DOCUMENTUM/dba/log/gr_repo/agentexec
[dmadmin@cs-0 agentexec]$
[dmadmin@cs-0 agentexec]$ # Check the last file
[dmadmin@cs-0 agentexec]$ cat $(ls -tr job_* | tail -1)
Wed Sep 11 18:00:21 2019 [INFORMATION] [LAUNCHER 3184] Detected while preparing job dm_ConsistencyChecker for execution: Agent Exec connected to server gr_repo:  DfException:: THREAD: main; MSG: [DM_SESSION_I_SESSION_START]info:  "Session 0112d687800c8974 started for user dmadmin."; ERRORCODE: 100; NEXT: null
        at com.documentum.fc.client.impl.docbase.DocbaseExceptionMapper.newException(DocbaseExceptionMapper.java:57)
        at com.documentum.fc.client.impl.connection.docbase.MessageEntry.getException(MessageEntry.java:39)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseMessageManager.getExceptionForAllMessages(DocbaseMessageManager.java:176)
        at com.documentu
[dmadmin@cs-0 agentexec]$
[dmadmin@cs-0 agentexec]$ # Finding the first file with the error
[dmadmin@cs-0 agentexec]$ for f in $(ls -tr); do r=$(grep "_I_SESSION_START.*ERRORCODE" "${f}"); if [[ "${r}" != "" ]]; then echo "${r}"; break; fi; done
Tue Sep 10 18:00:06 2019 [INFORMATION] [LAUNCHER 31113] Detected while preparing job dm_ConsistencyChecker for execution: Agent Exec connected to server gr_repo:  DfException:: THREAD: main; MSG: [DM_SESSION_I_SESSION_START]info:  "Session 0112d687800c8827 started for user dmadmin."; ERRORCODE: 100; NEXT: null
[dmadmin@cs-0 agentexec]$

 

All the job's session files contained the same stack (or rather a piece of the stack). At first, I didn't understand where this was coming from; all I knew was that it was somehow linked to the login to the Repository and that it appeared for the first time on the date returned by my last command above. It was not really an error message, since it wasn't showing any “_E_” messages, but it was still printing an exception.

Knowing when it appeared for the first time, I looked at all the files that had been modified on that day, and among them, besides the expected log files which can be ignored, there was the dfc.properties file. This gave me the reason for this message: it was actually due to enabling the diagnostic mode in the dfc.properties of the Documentum Server. To be exact, it was due to the “dfc.diagnostics.exception.include_stack=true” entry:

[dmadmin@cs-0 agentexec]$ tail -5 $DOCUMENTUM_SHARED/config/dfc.properties
dfc.session.secure_connect_default=secure
dfc.time_zone=UTC
dfc.diagnostics.resources.enable=true
dfc.diagnostics.exception.include_stack=true
dfc.tracing.print_exception_stack=true
[dmadmin@cs-0 agentexec]$
[dmadmin@cs-0 agentexec]$ echo "quit" | iapi gr_repo -Udmadmin -Pxxx

        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2016
        All rights reserved.
        Client Library Release 7.3.0040.0025

Connecting to Server using docbase gr_repo
DfException:: THREAD: main; MSG: [DM_SESSION_I_SESSION_START]info:  "Session 0112d687800c9235 started for user dmadmin."; ERRORCODE: 100; NEXT: null
        at com.documentum.fc.client.impl.docbase.DocbaseExceptionMapper.newException(DocbaseExceptionMapper.java:57)
        at com.documentum.fc.client.impl.connection.docbase.MessageEntry.getException(MessageEntry.java:39)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseMessageManager.getExceptionForAllMessages(DocbaseMessageManager.java:176)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.getExceptionForAllMessages(DocbaseConnection.java:1518)
        at com.documentum.fc.client.impl.session.Session.getExceptionsForAllMessages(Session.java:1603)
        at com.documentum.fc.client.impl.session.SessionHandle.getExceptionsForAllMessages(SessionHandle.java:1301)
        at com.documentum.dmcl.impl.ApiContext.addMessages(ApiContext.java:423)
        at com.documentum.dmcl.impl.ApiContext.collectExceptionsForReporting(ApiContext.java:370)
        at com.documentum.dmcl.impl.GetMessageHandler.get(GetMessageHandler.java:23)
        at com.documentum.dmcl.impl.DmclApi.get(DmclApi.java:49)
        at com.documentum.dmcl.impl.DmclApiNativeAdapter.get(DmclApiNativeAdapter.java:145)
        at com.documentum.dmcl.impl.DmclApiNativeAdapter.get(DmclApiNativeAdapter.java:130)


Connected to Documentum Server running Release 7.3.0050.0039  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@cs-0 agentexec]$
[dmadmin@cs-0 agentexec]$ sed -i 's,^dfc.diagnostics.exception.include_stack,#&,' $DOCUMENTUM_SHARED/config/dfc.properties
[dmadmin@cs-0 agentexec]$
[dmadmin@cs-0 agentexec]$ echo "quit" | iapi gr_repo -Udmadmin -Pxxx

        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2016
        All rights reserved.
        Client Library Release 7.3.0040.0025

Connecting to Server using docbase gr_repo
[DM_SESSION_I_SESSION_START]info:  "Session 0112d687800c9237 started for user dmadmin."

Connected to Documentum Server running Release 7.3.0050.0039  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@cs-0 agentexec]$

 

As you can see above, commenting out the line “dfc.diagnostics.exception.include_stack=true” (meaning setting it back to false, the default value) caused the exception stack to disappear. Since I was curious about this stack and wanted confirmation that this is “expected”, I opened a case with OpenText Support (#4331438) and they confirmed after a few days that it isn't considered an “ERROR”, it is more of an “INFO” message. It's a strange way to display informative messages but hey, who am I to judge!

 


Documentum – Connection to docbrokers and Repositories inside K8s from an external DFC Client

Sat, 2020-01-04 02:00

How can you connect an external DFC Client to docbrokers and Repositories hosted on Kubernetes Pods? That seems to be a very simple question, yet it might prove difficult… Let's talk about this challenge and possible solutions/workarounds in this blog.

As you all know, Kubernetes uses containers, so just like for a basic Docker container, you won't be able to access it from the outside by default. With Docker, you need to expose some ports and then you can interact with whatever is running behind them. For Kubernetes, it's the same principle, but it obviously adds other layers on top, which makes it even more complicated. Therefore, if you want to be able to connect to a docbroker inside a K8s Pod from outside of K8s, you will need a few things:

  • at the container level, the ports 1489/1490 opened (the default ones, you can change them obviously)
  • a K8s Service to expose these ports inside K8s (a minimal sketch follows below)
  • an Nginx Ingress Controller for which the TCP ports 1489/1490 have been configured for external accesses (or other ports if these are already used for another namespace, for example)
  • a “Load Balancer” K8s Service (still at the Nginx Ingress Controller level) which exposes these ports using an external IP
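
As an illustration of the K8s Service part only, here is a minimal sketch (the service name and the app: cs selector label are assumptions, adapt them to your deployment; the Ingress Controller and Load Balancer parts are not shown):

$ kubectl apply -n dbi-ns01 -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: cs-docbroker        # hypothetical name
spec:
  selector:
    app: cs                 # assumption: the CS pod carries this label
  ports:
    - name: docbroker
      port: 1489
      targetPort: 1489
    - name: docbroker-secure
      port: 1490
      targetPort: 1490
EOF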

 

Once you have that, you should be able to communicate with a docbroker that is inside a K8s Pod. If you want to have a chance to talk to a Repository, then you will also need to do the same thing for the Repository ports. When you install a repository, you specify in /etc/services the ports it should use (just like for the docbroker).

For this example, let’s start simple with the same ports internally and externally:

  • DFC Client host: vm
  • K8s pod short name (hostname): cs-0
  • K8s pod full name (headless service / full hostname): cs-0.cs.dbi-ns01.svc.cluster.local
  • K8s pod IP: 1.1.1.100
  • K8s pod docbroker port: 1489/1490
  • K8s pod Repositories port: gr_repo=49400/49401    //    REPO1=49402/49403
  • K8s external hostname/lb: k8s-cs-dbi-ns01.domain.com
  • K8s external IP: 2.2.2.200
  • K8s external docbroker port: 1489/1490
  • K8s external Repositories port: gr_repo=49400/49401    //    REPO1=49402/49403

 

Considering the above setup (both the docbroker and Repositories ports configured on K8s), you can already talk to the docbroker properly:

[dmadmin@vm ~]$ grep "dfc.docbroker" dfc.properties
dfc.docbroker.host[0]=k8s-cs-dbi-ns01.domain.com
dfc.docbroker.port[0]=1489
[dmadmin@vm ~]$
[dmadmin@vm ~]$ nc -v k8s-cs-dbi-ns01.domain.com 1489
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 2.2.2.200:1489.
^C
[dmadmin@vm ~]$ 
[dmadmin@vm ~]$ nc -v k8s-cs-dbi-ns01.domain.com 49402
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 2.2.2.200:49402.
^C
[dmadmin@vm ~]$ 
[dmadmin@vm ~]$ dmqdocbroker -t k8s-cs-dbi-ns01.domain.com -p 1489 -c ping
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Using specified port: 1489
Successful reply from docbroker at host (cs-0) on port(1490) running software version (16.4.0170.0234  Linux64).
[dmadmin@vm ~]$ 
[dmadmin@vm ~]$ dmqdocbroker -t k8s-cs-dbi-ns01.domain.com -p 1489 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Using specified port: 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : cs-0
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 01 2d3 01010164 cs-0 1.1.1.100
Docbroker version         : 16.4.0170.0234  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : gr_repo
Docbase id          : 1234567
Docbase description : dbi-ns01 dev k8s gr
Govern docbase      :
Federation name     :
Server version      : 16.4.0170.0234  Linux64.Oracle
Docbase Roles       : Global Registry
Docbase Dormancy Status     :
--------------------------------------------
Docbase name        : REPO1
Docbase id          : 1234568
Docbase description : dbi-ns01 dev k8s repo1
Govern docbase      :
Federation name     :
Server version      : 16.4.0170.0234  Linux64.Oracle
Docbase Roles       :
Docbase Dormancy Status     :
--------------------------------------------
[dmadmin@vm ~]$

 

So as you can see above, the docbroker does respond properly with the list of Repositories that it is aware of (repository name, ID, hostname, …), and for that purpose there is no need for the Repositories' ports to be opened; the docbroker port alone is enough. However, as soon as you want to go further and start talking to the Repositories, you will obviously need to open these additional ports as well. Above, I used 49402/49403 for the REPO1 Repository (both internally and externally). When trying to log in to a target Repository, it will fail:

[dmadmin@vm ~]$ echo "quit" | iapi REPO1 -Udmadmin -P${dm_pw}

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0000.0185

Connecting to Server using docbase REPO1
^C[dmadmin@vm ~]$
[dmadmin@vm ~]$

 

Why is that? Well, the reason is that to connect to a docbroker, a DFC Client uses the value from the well-known “dfc.properties” file. By reading it, it knows where the docbroker can be found: in our case, it's “k8s-cs-dbi-ns01.domain.com:1489“. The docbroker then replies with the list of Repositories it knows, and it also replies with the “host” that should be used to communicate with these Repositories. That's because the Repositories might not be on the same host as the docbroker, and therefore the docbroker needs to provide this information to the DFC Client. However, this “host” is actually an IP! When a Repository registers itself with a docbroker, the docbroker records the source IP of the request and will then forward this IP to any DFC Client that wants to talk to this Repository.

The problem here is that the Repositories are installed on K8s Pods and therefore the IP that the docbroker knows is actually the IP of the K8s Pod… which is, therefore, not reachable from outside of K8s!
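
You can see the address the docbroker hands out by querying its server map for a repository (output omitted here), e.g.:

[dmadmin@vm ~]$ dmqdocbroker -t k8s-cs-dbi-ns01.domain.com -p 1489 -c getservermap REPO1

The reply contains the connection address of the REPO1 server, i.e. the internal Pod IP (1.1.1.100 in this setup), which the external client cannot reach.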

 

1. IP Forwarding, a solution?

If you want to validate a setup or do some testing, it's pretty simple on Linux: you can quickly set up IP forwarding between the IP of the K8s Pod (which points to nothing from the outside) and the IP of the K8s LB Service that you configured previously for the docbroker and Repository ports. Here is an example:

[dmadmin@vm ~]$ nslookup k8s-cs-dbi-ns01.domain.com
Server: 1.1.1.10
Address: 1.1.1.10#53

k8s-cs-dbi-ns01.domain.com     canonical name = k8s-cluster-worker2.domain.com.
Name:   k8s-cluster-worker2.domain.com
Address: 2.2.2.200
[dmadmin@vm ~]$
[dmadmin@vm ~]$ external_ip=2.2.2.200
[dmadmin@vm ~]$ ping -c 1 ${external_ip}
PING 2.2.2.200 (2.2.2.200) 56(84) bytes of data.
64 bytes from 2.2.2.200: icmp_seq=1 ttl=63 time=0.980 ms

--- 2.2.2.200 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.980/0.980/0.980/0.000 ms
[dmadmin@vm ~]$
[dmadmin@vm ~]$ internal_ip=1.1.1.100
[dmadmin@vm ~]$ ping -c 1 ${internal_ip}
PING 1.1.1.100 (1.1.1.100) 56(84) bytes of data.

--- 1.1.1.100 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
[dmadmin@vm ~]$
[dmadmin@vm ~]$ echo "quit" | iapi REPO1 -Udmadmin -P${dm_pw}

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0000.0185

Connecting to Server using docbase REPO1
^C[dmadmin@vm ~]$
[dmadmin@vm ~]$
[dmadmin@vm ~]$
[dmadmin@vm ~]$ sudo sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
[dmadmin@vm ~]$ 
[dmadmin@vm ~]$ sudo iptables -t nat -A OUTPUT -d ${internal_ip} -j DNAT --to-destination ${external_ip}
[dmadmin@vm ~]$ 
[dmadmin@vm ~]$ echo "quit" | iapi REPO1 -Udmadmin -P${dm_pw}

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0000.0185

Connecting to Server using docbase REPO1
[DM_SESSION_I_SESSION_START]info:  "Session 0112d6888000152a started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0170.0234  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@vm ~]$

 

As you can see above, as soon as you configure IP forwarding from the Pod IP to the K8s LB IP, the Repository connection is successful. Here, I just executed a “quit” command to close the iAPI session, but it shows that the session creation is working, so you can be sure that the end-to-end communication is fine.

Obviously, that is just for testing… Indeed, your Pod IP is going to change in the future (after each restart of the Pod, for example), which means that the IP forwarding will be broken at that point. It also has to be set up on each client directly, because the DFC Client will try to communicate with a specific IP that most probably doesn't point to anything. The only way to make it work is therefore to set this up either on the client or on the network layer, which is super annoying and isn't really reliable anyway, so this isn't a solution.
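
As a side note, the test rule added above can be removed again with the matching iptables delete (same rule specification, -D instead of -A):

[dmadmin@vm ~]$ sudo iptables -t nat -D OUTPUT -d ${internal_ip} -j DNAT --to-destination ${external_ip}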

 

2. Docbroker Translation, a solution?

Several years ago, a feature was introduced in the docbroker that was initially intended to handle blocking rules on a firewall: IP and port translation. I believe it was introduced in Documentum 6.5, but I might be wrong, it was a long time ago… Since the issue here with K8s is pretty similar to what would happen with a firewall blocking the IP, we can actually use this feature to help us. Contrary to the IP forwarding, which is done on the client side, the translation is done on the server side and is therefore global for all clients. This has an obvious advantage: you only need to do it once for all clients (or rather, you will need to re-do this configuration at each start of your K8s Pod, since the IP will change). However, it also has a drawback: there is no exception, all communications will be translated, even K8s-internal ones… so this might be a problem. There is a KB describing how it works (KB7701941) and you can also look at the documentation. However, the documentation might not be really correct. Indeed, if you look at the CS 7.1 documentation, you will find this definition:

[TRANSLATION]
port=inside_firewall_port=outside_firewall_port
{,inside_firewall_port=outside_firewall_port}
host=inside_firewall_IP=outside_firewall_IP
{,inside_firewall_IP=outside_firewall_IP}

 

If you look at the CS 16.4 documentation, you will find this definition:

[TRANSLATION]
port=inside_firewall_port=outside_firewall_port
{,inside_firewall_port=outside_firewall_port}
host=outside_firewall_IP=inside_firewall_IP
{,outside_firewall_IP=inside_firewall_IP}

 

Finally, if you look at the CS 16.7 documentation, you will find yet another definition:

[TRANSLATION]port=["]outside_firewall_port=inside_firewall_port
{,outside_firewall_port=inside_firewall_port}["]
host=["]outside_firewall_ip=inside_firewall_ip
{,outside_firewall_ip=inside_firewall_ip}["]

 

Three documentations on the same feature, three different definitions :D. In addition to that, there is an example in the documentation which is also wrong, in all three versions. The real definition is the last one, after fixing the formatting errors that is… So in short, this is what you can do with the docbroker translation:

[TRANSLATION]
port=["]ext_port_1=int_port_1{,ext_port_2=int_port_2}{,ext_port_3=int_port_3}{,...}["]
host=["]ext_ip_1=int_ip_1{,ext_ip_2=int_ip_2}{,ext_ip_3=int_ip_3}{,...}["]

 

From what I could see, the double quotes aren’t mandatory but you can use them if you want to…

Let’s test all that after removing the IP Forwarding, obviously:

[dmadmin@vm ~]$ dmqdocbroker -t k8s-cs-dbi-ns01.domain.com -p 1489 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Using specified port: 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : cs-0
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 01 2d3 01010164 cs-0 1.1.1.100
Docbroker version         : 16.4.0170.0234  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : gr_repo
Docbase id          : 1234567
Docbase description : dbi-ns01 dev k8s gr
Govern docbase      :
Federation name     :
Server version      : 16.4.0170.0234  Linux64.Oracle
Docbase Roles       : Global Registry
Docbase Dormancy Status     :
--------------------------------------------
Docbase name        : REPO1
Docbase id          : 1234568
Docbase description : dbi-ns01 dev k8s repo1
Govern docbase      :
Federation name     :
Server version      : 16.4.0170.0234  Linux64.Oracle
Docbase Roles       :
Docbase Dormancy Status     :
--------------------------------------------
[dmadmin@vm ~]$
[dmadmin@vm ~]$ echo "quit" | iapi REPO1 -Udmadmin -P${dm_pw}

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0000.0185

Connecting to Server using docbase REPO1
^C[dmadmin@vm ~]$
[dmadmin@vm ~]$

 

On the docbroker side (k8s), let’s configure the translation properly and restart for the new configuration to be applied:

[dmadmin@cs-0 ~]$ cd $DOCUMENTUM/dba
[dmadmin@cs-0 dba]$ cat Docbroker.ini
[DOCBROKER_CONFIGURATION]
secure_connect_mode=dual
[dmadmin@cs-0 dba]$
[dmadmin@cs-0 dba]$ external_ip=2.2.2.200
[dmadmin@cs-0 dba]$ external_port=1489
[dmadmin@cs-0 dba]$ internal_ip=1.1.1.100
[dmadmin@cs-0 dba]$ internal_port=1489
[dmadmin@cs-0 dba]$
[dmadmin@cs-0 dba]$ echo "[TRANSLATION]" >> Docbroker.ini
[dmadmin@cs-0 dba]$ echo "port=${external_port}=${internal_port}" >> Docbroker.ini
[dmadmin@cs-0 dba]$ echo "host=${external_ip}=${internal_ip}" >> Docbroker.ini
[dmadmin@cs-0 dba]$
[dmadmin@cs-0 dba]$ cat Docbroker.ini
[DOCBROKER_CONFIGURATION]
secure_connect_mode=dual
[TRANSLATION]
port=1489=1489
host=2.2.2.200=1.1.1.100
[dmadmin@cs-0 dba]$
[dmadmin@cs-0 dba]$ ./dm_stop_Docbroker; sleep 1; ./dm_launch_Docbroker
./dmshutdown 16.4.0000.0248  Linux64 Copyright (c) 2018. OpenText Corporation.
Shutdown request was processed by Docbroker on host cs-0 (INET_ADDR: 01 2d3 01010164 cs-0 1.1.1.100)
Reply status indicates a success: OK
starting connection broker on current host: [cs-0.cs.dbi-ns01.svc.cluster.local]
with connection broker log: [$DOCUMENTUM/dba/log/docbroker.cs-0.cs.dbi-ns01.svc.cluster.local.1489.log]
connection broker pid: 18219
[dmadmin@cs-0 dba]$
[dmadmin@cs-0 dba]$ head -7 log/docbroker.cs-0.cs.dbi-ns01.svc.cluster.local.1489.log
OpenText Documentum Connection Broker (version 16.4.0170.0234  Linux64)
Copyright (c) 2018. OpenText Corporation
HOST TRANSLATION TABLE:
    [1] From(1.1.1.100), to(2.2.2.200)
PORT TRANSLATION TABLE:
    [1] From(1489), to(1489)
2019-12-15T10:25:22.307379 [DM_DOCBROKER_I_START]info:  "Docbroker has started.  Process id: 18219"
[dmadmin@cs-0 dba]$

 

Once that is done, back on the DFC Client side, trying to connect to the Repository:

[dmadmin@vm ~]$ dmqdocbroker -t k8s-cs-dbi-ns01.domain.com -p 1489 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Using specified port: 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : cs-0
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 01 2d3 01010164 cs-0 1.1.1.100
Docbroker version         : 16.4.0170.0234  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : gr_repo
Docbase id          : 1234567
Docbase description : dbi-ns01 dev k8s gr
Govern docbase      :
Federation name     :
Server version      : 16.4.0170.0234  Linux64.Oracle
Docbase Roles       : Global Registry
Docbase Dormancy Status     :
--------------------------------------------
Docbase name        : REPO1
Docbase id          : 1234568
Docbase description : dbi-ns01 dev k8s repo1
Govern docbase      :
Federation name     :
Server version      : 16.4.0170.0234  Linux64.Oracle
Docbase Roles       :
Docbase Dormancy Status     :
--------------------------------------------
[dmadmin@vm ~]$

 

As you can see above, dmqdocbroker still prints the internal IP (1.1.1.100); that's fine/normal. However, the Repository connection now works:

[dmadmin@vm ~]$ echo "quit" | iapi REPO1 -Udmadmin -P${dm_pw}

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0000.0185

Connecting to Server using docbase REPO1
[DM_SESSION_I_SESSION_START]info:  "Session 0112d6888000175b started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0170.0234  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@vm ~]$

 

So as you can see above, using the docbroker translation mechanism is indeed a solution to be able to connect to a Repository that is inside a K8s Pod. There are drawbacks, as mentioned above, but at least it's a valid workaround.

 

3. Using different ports externally

Above, I have always been using the same ports internally and externally. However, in a real case, you will probably end up with hundreds or even thousands of CS Pods. So how do you manage that? Well, you saw above that the docbroker translation can be used to translate an external port into an internal port, but it's not just for the docbroker port! You can actually use it for the Repository ports as well.

Let’s say for this example that I have a second namespace (dbi-ns02) with the following:

  • DFC Client Host: vm
  • K8s pod short name (hostname): cs-0
  • K8s pod full name (headless service / full hostname): cs-0.cs.dbi-ns02.svc.cluster.local
  • K8s pod IP: 1.1.1.200
  • K8s pod docbroker port: 1489/1490
  • K8s pod Repositories port: gr_repo=49400/49401    //    REPO2=49402/49403
  • K8s external hostname/lb: k8s-cs-dbi-ns02.domain.com
  • K8s external IP: 2.2.2.200
  • K8s external docbroker port: 1491/1492
  • K8s external Repositories port: gr_repo=49404/49405    //    REPO2=49406/49407

 

The external IP is still the same because it's the same K8s cluster, but the external ports are now different. The internal IP is also different because it's another namespace. With the default docbroker configuration (no translation), we therefore have the same issue as before: the iAPI session hangs and never responds because the IP doesn't exist.

So if we try to set up the basic docbroker translation just like what we did above, then on the K8s pod, we will have the following:

[dmadmin@cs-0 ~]$ cd $DOCUMENTUM/dba
[dmadmin@cs-0 dba]$
[dmadmin@cs-0 dba]$ ifconfig | grep inet | grep -v 127.0.0.1
        inet 1.1.1.200  netmask 255.255.255.255  broadcast 0.0.0.0
[dmadmin@cs-0 dba]$
[dmadmin@cs-0 dba]$ cat Docbroker.ini
[DOCBROKER_CONFIGURATION]
secure_connect_mode=dual
[TRANSLATION]
port=1491=1489
host=2.2.2.200=1.1.1.200
[dmadmin@cs-0 dba]$

 

With this configuration, an external DFC Client will be able to talk to the docbroker (assuming you have all the K8s port redirections properly in place) but won't be able to talk to the Repository:

[dmadmin@vm ~]$ dmqdocbroker -t k8s-cs-dbi-ns02.domain.com -p 1491 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Using specified port: 1491
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : cs-0
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 3d4 02020286 cs-0 1.1.1.200
Docbroker version         : 16.4.0170.0234  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : gr_repo
Docbase id          : 1234569
Docbase description : dbi-ns02 dev k8s gr
Govern docbase      :
Federation name     :
Server version      : 16.4.0170.0234  Linux64.Oracle
Docbase Roles       : Global Registry
Docbase Dormancy Status     :
--------------------------------------------
Docbase name        : REPO2
Docbase id          : 1234570
Docbase description : dbi-ns02 dev k8s repo2
Govern docbase      :
Federation name     :
Server version      : 16.4.0170.0234  Linux64.Oracle
Docbase Roles       :
Docbase Dormancy Status     :
--------------------------------------------
[dmadmin@vm ~]$
[dmadmin@vm ~]$ echo "quit" | iapi REPO2 -Udmadmin -P${dm_pw}

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0000.0185

Connecting to Server using docbase REPO2
:Wrong docbase id: (1234570) expecting: (1234568)

Could not connect
[dmadmin@vm ~]$

 

The reason for that is that I have been talking to the docbroker on the external port 1491, which is therefore the docbroker 1489 of the second namespace ("dbi-ns02"). This docbroker replied to the DFC Client that the Repository is using the ports 49402/49403, which is true, but only internally… My DFC Client has therefore been trying to connect to the Repository REPO2 (from the second namespace) using a port which, externally, is the one used by REPO1 (from the first namespace), hence the mismatch in the Repository ID.

For that purpose, you can update the docbroker translation to include the Repositories ports as well:

[dmadmin@cs-0 dba]$ cat Docbroker.ini
[DOCBROKER_CONFIGURATION]
secure_connect_mode=dual
[TRANSLATION]
port=1491=1489,49404=49400,49405=49401,49406=49402,49407=49403
host=2.2.2.200=1.1.1.200
[dmadmin@cs-0 dba]$

 

With this new docbroker translation configuration, the external DFC Client should be able to communicate properly with the repository:

[dmadmin@vm ~]$ echo "quit" | iapi REPO2 -Udmadmin -P${dm_pw}

        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0000.0185

Connecting to Server using docbase REPO2
[DM_SESSION_I_SESSION_START]info:  "Session 0112d68a80001403 started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0170.0234  Linux64.Oracle
Session id is s0
API> Bye
[dmadmin@vm ~]$
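
A side note on maintainability: with dozens of Repositories per pod, writing the port= line of the [TRANSLATION] section by hand quickly becomes error-prone. A small sketch like the one below can generate the whole Docbroker.ini from a list of external=internal pairs (the pairs, IPs and the generator file name are just this example's values; adapt them to your setup). The docbroker then needs to be restarted to pick up the change.

[dmadmin@cs-0 dba]$ cat > gen_docbroker_ini.sh <<'EOF'
#!/bin/bash
# external=internal pairs: the docbroker ports first, then the Repositories' native/secure ports
pairs=( "1491=1489" "49404=49400" "49405=49401" "49406=49402" "49407=49403" )
{
  echo "[DOCBROKER_CONFIGURATION]"
  echo "secure_connect_mode=dual"
  echo "[TRANSLATION]"
  # join all the pairs with commas into a single port= line
  echo "port=$(IFS=,; echo "${pairs[*]}")"
  echo "host=2.2.2.200=1.1.1.200"
} > Docbroker.ini
EOF
[dmadmin@cs-0 dba]$ chmod +x gen_docbroker_ini.sh && ./gen_docbroker_ini.sh && cat Docbroker.ini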

 

As an alternative to all that, you might want to take a look at Traefik or Istio, which can also help you configure the correct communications from the outside of K8s to the inside. I had a case opened with the OpenText Support so that they could correct the documentation for all versions.

 

This article Documentum – Connection to docbrokers and Repositories inside K8s from an external DFC Client appeared first on Blog dbi services.

WebLogic Server – Automatic/Silent setup of a SAML2 SSO

Thu, 2020-01-02 15:31

In a previous blog, I explained how it is possible to create an LDAP/LDAPs Authentication Provider on WebLogic. My initial goal wasn't just to set up an LDAP/LDAPs on WebLogic Server: that was only a first step needed in order to automate the setup of a SAML2 Single Sign-On linked with the authentication from an LDAPs. Therefore, in this blog, we will take a look at that second part. Just like for the LDAP Authentication Provider, there are plenty of examples on the internet to do just that, but they all use the GUI. When I searched for it, I didn't find a single one explaining how it could be done without it. Maybe there are some but, if so, they are pretty well hidden. You might also think about just recording the steps in the WebLogic Administration Console so that it would create the needed WLST scripts for you (just like for the LDAPs provider creation). Unfortunately, it's not that simple: recording doesn't work for everything and, in addition, most of the steps that will be needed happen outside of an edit session and therefore can't be recorded.

In this blog, I will use SAML 2.0 and I will assume that there is already an Identity Provider ("Server side") that has been configured; I will configure a WebLogic Server ("Client side" = Service Provider) to use this Identity Provider through a WebSSO partner. In the WebLogic examples provided with the OFM full installation, there is a complete example for SAML2 on both Server and Client sides. For the Client side, they are however using a manual creation of the IdP Partner, importing the SSL Certificate, defining the URLs, and so on. A simpler and faster approach is to use a metadata file that can be extracted/exported from the Server side, which contains all this information, and then imported into the Client side. That's what I will show below, so it is pretty different from what is done in the example.

Alright, so the first thing to be done is to create a new Authentication Provider using the SAML2IdentityAsserter type. Because this change requires a full restart of the WebLogic Server, I usually do it together with the LDAP Authentication Provider, but for this example I will split things and only talk about the SAML2 part. Just like in the previous blog, I will use a properties file and a WLST script. You can disregard the LDAP Authentication Providers parameters, they are only used for the LDAP part in the other blog, except ATN_NAME which I still use below, but that's only in case you do have an LDAP/LDAPs Authentication Provider in addition to the SAML2 one that you want to create:

[weblogic@weblogic-server-0 ~]$ cat domain.properties
# AdminServer parameters
CONFIG_FILE=/home/weblogic/secure/configfile.secure
KEY_FILE=/home/weblogic/secure/keyfile.secure
ADMIN_URL=t3s://weblogic-server-0.domain.com:8443
# LDAP Authentication Providers parameters
ATN_NAME=Internal_LDAP
ATN_FLAG=SUFFICIENT
ATN_HOST=ldap.domain.com
ATN_PORT=636
ATN_PRINCIPAL=ou=APP,ou=applications,ou=intranet,dc=dbi services,dc=com
ATN_CREDENTIAL=T3stP4ssw0rd
ATN_SSL=true
ATN_BASE_DN=ou=people,ou=intranet,dc=dbi services,dc=com
ATN_USER_FILTER=(&(uid=%u)(objectclass=person))
ATN_USER_CLASS=person
ATN_USER_AS_PRINCIPAL=true
ATN_GROUP_FILTER=(&(cn=%g)(objectclass=groupofuniquenames))
ATN_TIMEOUT=30
# IdP Partner parameters
IDA_NAME=APP_SAML2_IDAsserter
IDP_NAME=APP_SAML2_IDPartner
IDP_METADATA=/home/weblogic/idp_metadata.xml
IDP_ENABLED=true
IDP_REDIRECT_URIS=['/D2-01/*','/D2-02/*']
# Managed Servers SSO parameters
SSO_MS=msD2-01,msD2-02
SSO_URLS=https://lb_url1/saml2,https://lb_url2/saml2
SSO_ENTITY_IDS=APP_SAML2_Entity_ID_01,APP_SAML2_Entity_ID_02
SSO_SP_ENABLED=true
SSO_SP_BINDING=HTTP/POST
[weblogic@weblogic-server-0 ~]$
[weblogic@weblogic-server-0 ~]$
[weblogic@weblogic-server-0 ~]$ cat createSAML2AuthenticationProviders.wlst
##################################################################
#
# Authors: Morgan Patou    
# Version: 1.4 - 30/08/2019
#
# File: createSAML2AuthenticationProviders.wlst
# Purpose: Script to create SAML2 Authentication Providers
# Parameters: input properties file (optional)
# Output:
#
##################################################################

# Get operating system (for vars)
import os

# Read the domain properties file
try:
  if len(sys.argv) == 2:
    domainProperties=sys.argv[1]
  else:
    domainProperties=os.path.realpath(os.path.dirname(sys.argv[0])) + "/domain.properties"
  loadProperties(domainProperties)
  print ">>> Loaded the properties file: " + domainProperties
  print

except:
  exit(exitcode=1)

try:
  redirect('/dev/null','false')
  # Connect to AdminServer
  connect(userConfigFile=CONFIG_FILE,userKeyFile=KEY_FILE,url=ADMIN_URL)
  print ">>> Connected to the AdminServer."

  # Start Edit Session
  edit()
  startEdit()
  stopRedirect()
  print ">>> Edit Session started."

  # Get default Realm
  realm=cmo.getSecurityConfiguration().getDefaultRealm()

  # Create Authentication Providers
  saml2IdA=realm.lookupAuthenticationProvider(IDA_NAME)
  if saml2IdA != None:
    realm.destroyAuthenticationProvider(saml2IdA)
  saml2IdA=realm.createAuthenticationProvider(IDA_NAME,'com.bea.security.saml2.providers.SAML2IdentityAsserter')
  print ">>> Authentication Provider created."

  # Reorder Authentication Providers
  defaultAtn=realm.lookupAuthenticationProvider('DefaultAuthenticator')
  defaultIdA=realm.lookupAuthenticationProvider('DefaultIdentityAsserter')
  iplanetAtn=realm.lookupAuthenticationProvider(ATN_NAME)
  realm.setAuthenticationProviders(jarray.array([saml2IdA,iplanetAtn,defaultAtn,defaultIdA],weblogic.management.security.authentication.AuthenticationProviderMBean))
  print ">>> Authentication Providers re-ordered."

except Exception, e:
  print "ERROR... check error messages for cause."
  print e
  exit(exitcode=1)

redirect('/dev/null','false')
save()
activate()
disconnect()
exit(exitcode=0)
[weblogic@weblogic-server-0 ~]$

 

So let’s execute this script then:

[weblogic@weblogic-server-0 ~]$ ls
configServiceProviders.wlst  createSAML2AuthenticationProviders.wlst  createWebSSOIdPPartners.wlst  domain.properties  idp_metadata.xml
[weblogic@weblogic-server-0 ~]$
[weblogic@weblogic-server-0 ~]$ $ORACLE_HOME/oracle_common/common/bin/wlst.sh createSAML2AuthenticationProviders.wlst

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

>>> Loaded the properties file: /home/weblogic/domain.properties
>>> Connected to the AdminServer.
>>> Edit Session started.
>>> Authentication Provider created.
>>> Authentication Providers re-ordered.
[weblogic@weblogic-server-0 ~]$

 

As mentioned previously, you will need to restart the WebLogic Domain at this point.
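
A minimal restart sketch could look like the one below; stopManagedWebLogic.sh, stopWebLogic.sh and startWebLogic.sh are the standard domain scripts (adapt this if a Node Manager is used), while the Managed Server names and the Admin URL are just this example's values:

[weblogic@weblogic-server-0 ~]$ # stop the Managed Servers first, then the AdminServer
[weblogic@weblogic-server-0 ~]$ $DOMAIN_HOME/bin/stopManagedWebLogic.sh msD2-01 t3s://weblogic-server-0.domain.com:8443
[weblogic@weblogic-server-0 ~]$ $DOMAIN_HOME/bin/stopManagedWebLogic.sh msD2-02 t3s://weblogic-server-0.domain.com:8443
[weblogic@weblogic-server-0 ~]$ $DOMAIN_HOME/bin/stopWebLogic.sh
[weblogic@weblogic-server-0 ~]$ # start everything again
[weblogic@weblogic-server-0 ~]$ nohup $DOMAIN_HOME/bin/startWebLogic.sh > /dev/null 2>&1 &
[weblogic@weblogic-server-0 ~]$ nohup $DOMAIN_HOME/bin/startManagedWebLogic.sh msD2-01 t3s://weblogic-server-0.domain.com:8443 > /dev/null 2>&1 &
[weblogic@weblogic-server-0 ~]$ nohup $DOMAIN_HOME/bin/startManagedWebLogic.sh msD2-02 t3s://weblogic-server-0.domain.com:8443 > /dev/null 2>&1 &

Once the Domain is back up, you can continue with the next part, which is to create the IdP Partner, using the same properties file and another WLST script: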

[weblogic@weblogic-server-0 ~]$ cat createWebSSOIdPPartners.wlst
##################################################################
#
# Authors: Morgan Patou    
# Version: 1.4 - 30/08/2019
#
# File: createWebSSOIdPPartners.wlst
# Purpose: Script to create a WebSSO IdP Partner
# Parameters: input properties file (optional)
# Output:
#
##################################################################

# Get operating system (for vars)
import os

# Read the domain properties file
try:
  if len(sys.argv) == 2:
    domainProperties=sys.argv[1]
  else:
    domainProperties=os.path.realpath(os.path.dirname(sys.argv[0])) + "/domain.properties"
  loadProperties(domainProperties)
  print ">>> Loaded the properties file: " + domainProperties
  print

except:
  exit(exitcode=1)

try:
  redirect('/dev/null','false')
  # Connect to AdminServer
  connect(userConfigFile=CONFIG_FILE,userKeyFile=KEY_FILE,url=ADMIN_URL)
  print ">>> Connected to the AdminServer."
  stopRedirect()

  # Get default Realm
  realm=cmo.getSecurityConfiguration().getDefaultRealm()

  # Config Web SSO IdP Partner
  saml2IdA=realm.lookupAuthenticationProvider(IDA_NAME)
  if saml2IdA != None:
    if saml2IdA.idPPartnerExists(IDP_NAME):
      saml2IdA.removeIdPPartner(IDP_NAME)
    idpPartner=saml2IdA.consumeIdPPartnerMetadata(IDP_METADATA)
    idpPartner.setName(IDP_NAME)
    idpPartner.setEnabled(Boolean(IDP_ENABLED))
    idpPartner.setRedirectURIs(array(eval(IDP_REDIRECT_URIS),java.lang.String))
    saml2IdA.addIdPPartner(idpPartner)
  print ">>> Web SSO IdP Partner created."

except Exception, e:
  print "ERROR... check error messages for cause."
  print e
  exit(exitcode=1)

redirect('/dev/null','false')
disconnect()
exit(exitcode=0)
[weblogic@weblogic-server-0 ~]$

 

As you can see above, this one doesn't require an edit session and therefore can't be recorded. The key part is the "consumeIdPPartnerMetadata(…)" method, which loads the metadata file that was generated by the Identity Provider ("Server side"). It takes care of setting up the SSL Certificate for the Identity Provider as well as all the usable URLs, and so on. The path and name of this input metadata file can be found in the properties file. The execution of the WLST script is simple and smooth:

[weblogic@weblogic-server-0 ~]$ $ORACLE_HOME/oracle_common/common/bin/wlst.sh createWebSSOIdPPartners.wlst

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

>>> Loaded the properties file: /home/weblogic/domain.properties
>>> Connected to the AdminServer.
>>> Web SSO IdP Partner created.
[weblogic@weblogic-server-0 ~]$
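
Before moving on, it can be reassuring to verify that the IdP Partner really exists in the realm. The read-only sketch below only reuses methods already present in the creation script (loadProperties, lookupAuthenticationProvider, idPPartnerExists); the temporary file name is arbitrary:

[weblogic@weblogic-server-0 ~]$ cat > /tmp/checkIdPPartner.wlst <<'EOF'
# read-only check: does the IdP Partner exist on the SAML2 Identity Asserter?
loadProperties('/home/weblogic/domain.properties')
connect(userConfigFile=CONFIG_FILE,userKeyFile=KEY_FILE,url=ADMIN_URL)
realm=cmo.getSecurityConfiguration().getDefaultRealm()
saml2IdA=realm.lookupAuthenticationProvider(IDA_NAME)
print ">>> IdP Partner " + IDP_NAME + " exists: " + str(saml2IdA.idPPartnerExists(IDP_NAME))
disconnect()
exit(exitcode=0)
EOF
[weblogic@weblogic-server-0 ~]$ $ORACLE_HOME/oracle_common/common/bin/wlst.sh /tmp/checkIdPPartner.wlst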

 

The next step is then to configure your Managed Servers by creating the Service Providers and defining the published URL, the Entity ID and other parameters, and then to generate an output metadata file for each of your Managed Servers. These output metadata files will need to be imported into the Identity Provider to close the SAML2 SSO chain. Again, a new WLST script for this last part:

[weblogic@weblogic-server-0 ~]$ cat configServiceProviders.wlst
##################################################################
#
# Authors: Morgan Patou    
# Version: 1.4 - 30/08/2019
#
# File: configServiceProviders.wlst
# Purpose: Script to configure SSO Service Providers
# Parameters: input properties file (optional)
# Output:
#
##################################################################

# Get operating system (for vars)
import os

# Read the domain properties file
try:
  if len(sys.argv) == 2:
    domainProperties=sys.argv[1]
  else:
    domainProperties=os.path.realpath(os.path.dirname(sys.argv[0])) + "/domain.properties"
  loadProperties(domainProperties)
  print ">>> Loaded the properties file: " + domainProperties
  print

except:
  exit(exitcode=1)

try:
  redirect('/dev/null','false')
  # Connect to AdminServer
  connect(userConfigFile=CONFIG_FILE,userKeyFile=KEY_FILE,url=ADMIN_URL)
  print ">>> Connected to the AdminServer."

  # Start Edit Session
  edit()
  startEdit()
  stopRedirect()
  print ">>> Edit Session started."

  # Config SSO Service Providers
  publishedSiteURLs=SSO_URLS.split(',')
  entityIDs=SSO_ENTITY_IDS.split(',')
  id=0
  for ssoServerName in SSO_MS.split(','):
    ssoServer=cmo.lookupServer(ssoServerName)
    ssoService=ssoServer.getSingleSignOnServices()
    ssoService.setPublishedSiteURL(publishedSiteURLs[id])
    ssoService.setEntityID(entityIDs[id])
    ssoService.setServiceProviderEnabled(Boolean(SSO_SP_ENABLED))
    ssoService.setServiceProviderPreferredBinding(SSO_SP_BINDING)
    id=id+1
  print ">>> SSO Service Providers configured."

except Exception, e:
  print "ERROR... check error messages for cause."
  print e
  exit(exitcode=1)

redirect('/dev/null','false')
save()
activate()

try:
  # Start Runtime Session
  domainRuntime()
  stopRedirect()
  print ">>> Runtime Session started."

  # Export Service Providers metadata
  for ssoServerName in SSO_MS.split(','):
    cd('/ServerRuntimes/'+ssoServerName)
    cmo.getSingleSignOnServicesRuntime().publish('/tmp/'+ssoServerName+'_sp_metadata.xml',false)
  print ">>> Service Providers metadata files exported."

except Exception, e:
  print "ERROR... check error messages for cause."
  print e
  exit(exitcode=1)

redirect('/dev/null','false')
disconnect()
exit(exitcode=0)
[weblogic@weblogic-server-0 ~]$

 

So as mentioned above, the first section loops on the Managed Servers list from the parameters to configure the SAML2 SSO for all of them; this part requires an edit session. The second section exports the Service Providers metadata files under /tmp; this doesn't need an edit session but a runtime session instead. Again, the execution:

[weblogic@weblogic-server-0 ~]$ $ORACLE_HOME/oracle_common/common/bin/wlst.sh configServiceProviders.wlst

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

>>> Loaded the properties file: /home/weblogic/domain.properties
>>> Connected to the AdminServer.
>>> Edit Session started.
>>> SSO Service Providers configured.
>>> Runtime Session started.
>>> Service Providers metadata files exported.
[weblogic@weblogic-server-0 ~]$
[weblogic@weblogic-server-0 ~]$ ls /tmp/*metadata.xml
/tmp/msD2-01_sp_metadata.xml  /tmp/msD2-02_sp_metadata.xml
[weblogic@weblogic-server-0 ~]$
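
A quick look inside the exported files lets you verify that the Entity IDs defined in the properties file were taken into account; entityID is a standard attribute of the SAML 2.0 EntityDescriptor element, so a simple grep is enough (a sketch):

[weblogic@weblogic-server-0 ~]$ for f in /tmp/msD2-0?_sp_metadata.xml; do
    echo "${f}: $(grep -o 'entityID="[^"]*"' ${f})"
  done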

 

At that point, the WebLogic Server acting as Service Provider is fully configured. You can now transfer these metadata files to the Identity Provider side and import them there.

There is one last thing that I didn't talk about: the configuration of the Application itself, if needed, to use the SAML2 SSO. In the case of Documentum D2, it does support the LDAP + SAML2 SSO; you just need some basic configuration in the web.xml and weblogic.xml. There is an example I wrote a little bit more than two years ago: here.

 

This article WebLogic Server – Automatic/Silent setup of a SAML2 SSO appeared first on Blog dbi services.

WebLogic Server – Automatic/Silent creation of an LDAP Authentication Provider

Thu, 2020-01-02 15:20

In a previous blog, I explained how it is possible to create an LDAP/LDAPs connection from a Documentum Content Server automatically/silently (without any need for a GUI). So I thought I would do the same thing from a WebLogic Server, to have the full chain from the Application to the backend all connected to the LDAP/LDAPs. This blog isn't linked to Documentum, it is really just WebLogic Server specific, so if you want to do the same for another application, this is also what you need to do. There are plenty of blogs on the internet about how to configure WebLogic, but they are (almost?) always using the GUI, which is good because it's simple, but also annoying because you cannot really automate that.

As mentioned in this subsequent blog, my goal was a little bit more than just an LDAP setup, so I first searched for any hints on what would be needed to set up everything. The only thing I found that was a little bit helpful was actually the examples that are shipped with the OFM (if you included them). We usually install only the minimal requirements so we don't have the examples, but you can choose to include them when you install the binaries: in the silent properties file, you can just set the install type to include "… With Examples". Inside these examples, there is a SAML2 SSO one which seems pretty complex. There is a plethora of files for the purpose of the example, but most of that is completely useless outside of this scope. Also, from what I could see, it was designed for a WebLogic Server 9, so it seemed to be pretty old… Since I was using WLS 12c, I obviously expected a lot of things to go wrong. It was nonetheless a good starting point to find some details about where the needed elements can be found in WLST, but you will still need a lot of knowledge in WLS and WLST to be able to make something out of it. That's where this blog comes in.

For the LDAP Authentication Provider creation, you can also record the execution from the Administration Console: it will give you good information about what needs to be done (at least for this part).

The first thing to do to set up an LDAPs connection (this doesn't apply to a plain LDAP) is to add the LDAPs SSL Certificate chain into the WebLogic Server's trust store:

[weblogic@weblogic-server-0 ~]$ cert_location="/tmp/certs"
[weblogic@weblogic-server-0 ~]$ ssl_ldap_root_ca_file="LDAP_Root_CA.cer"
[weblogic@weblogic-server-0 ~]$ ssl_ldap_int_ca_file="LDAP_Int_CA.cer"
[weblogic@weblogic-server-0 ~]$ tks_file="$DOMAIN_HOME/certs/trust.jks"
[weblogic@weblogic-server-0 ~]$ tks_pwd="MyP4ssw0rd"
[weblogic@weblogic-server-0 ~]$
[weblogic@weblogic-server-0 ~]$ $JAVA_HOME/bin/keytool -import -trustcacerts -alias ssl_ldap_root_ca -file ${cert_location}/${ssl_ldap_root_ca_file} -keystore ${tks_file} -storepass ${tks_pwd} -noprompt
Certificate was added to keystore
[weblogic@weblogic-server-0 ~]$ $JAVA_HOME/bin/keytool -import -trustcacerts -alias ssl_ldap_int_ca -file ${cert_location}/${ssl_ldap_int_ca_file} -keystore ${tks_file} -storepass ${tks_pwd} -noprompt
Certificate was added to keystore
[weblogic@weblogic-server-0 ~]$
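
To double-check that both CAs actually landed in the trust store, you can list its entries (same variables as above):

[weblogic@weblogic-server-0 ~]$ # both aliases should show up as trustedCertEntry
[weblogic@weblogic-server-0 ~]$ $JAVA_HOME/bin/keytool -list -keystore ${tks_file} -storepass ${tks_pwd} | grep ssl_ldap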

 

Once that is done, you can start the creation of the LDAP Authentication Provider. To be able to automate that, the best way for me is to use a WLST script. Make sure the AdminServer is up, running and reachable before trying to execute a WLST script. I put all my parameters in a properties file and I load this file in the WLST script so that it creates the correct object with all the needed parameters. Here are the properties and the WLST script to create the LDAP Authentication Provider (you can disregard the IdP and Managed Servers parameters, they are only used for the SAML2 SSO part in the other blog):

[weblogic@weblogic-server-0 ~]$ cat domain.properties
# AdminServer parameters
CONFIG_FILE=/home/weblogic/secure/configfile.secure
KEY_FILE=/home/weblogic/secure/keyfile.secure
ADMIN_URL=t3s://weblogic-server-0.domain.com:8443
# LDAP Authentication Providers parameters
ATN_NAME=Internal_LDAP
ATN_FLAG=SUFFICIENT
ATN_HOST=ldap.domain.com
ATN_PORT=636
ATN_PRINCIPAL=ou=APP,ou=applications,ou=intranet,dc=dbi services,dc=com
ATN_CREDENTIAL=T3stP4ssw0rd
ATN_SSL=true
ATN_BASE_DN=ou=people,ou=intranet,dc=dbi services,dc=com
ATN_USER_FILTER=(&(uid=%u)(objectclass=person))
ATN_USER_CLASS=person
ATN_USER_AS_PRINCIPAL=true
ATN_GROUP_FILTER=(&(cn=%g)(objectclass=groupofuniquenames))
ATN_TIMEOUT=30
# IdP Partner parameters
IDA_NAME=APP_SAML2_IDAsserter
IDP_NAME=APP_SAML2_IDPartner
IDP_METADATA=/home/weblogic/idp_metadata.xml
IDP_ENABLED=true
IDP_REDIRECT_URIS=['/D2-01/*','/D2-02/*']
# Managed Servers SSO parameters
SSO_MS=msD2-01,msD2-02
SSO_URLS=https://lb_url1/saml2,https://lb_url2/saml2
SSO_ENTITY_IDS=APP_SAML2_Entity_ID_01,APP_SAML2_Entity_ID_02
SSO_SP_ENABLED=true
SSO_SP_BINDING=HTTP/POST
[weblogic@weblogic-server-0 ~]$
[weblogic@weblogic-server-0 ~]$
[weblogic@weblogic-server-0 ~]$ cat createLDAPAuthenticationProviders.wlst
##################################################################
#
# Authors: Morgan Patou    
# Version: 1.4 - 30/08/2019
#
# File: createLDAPAuthenticationProviders.wlst
# Purpose: Script to create LDAP/LDAPs Authentication Providers
# Parameters: input properties file (optional)
# Output:
#
##################################################################

# Get operating system (for vars)
import os

# Read the domain properties file
try:
  if len(sys.argv) == 2:
    domainProperties=sys.argv[1]
  else:
    domainProperties=os.path.realpath(os.path.dirname(sys.argv[0])) + "/domain.properties"
  loadProperties(domainProperties)
  print ">>> Loaded the properties file: " + domainProperties
  print

except:
  exit(exitcode=1)

try:
  redirect('/dev/null','false')
  # Connect to AdminServer
  connect(userConfigFile=CONFIG_FILE,userKeyFile=KEY_FILE,url=ADMIN_URL)
  print ">>> Connected to the AdminServer."

  # Start Edit Session
  edit()
  startEdit()
  stopRedirect()
  print ">>> Edit Session started."

  # Get default Realm
  realm=cmo.getSecurityConfiguration().getDefaultRealm()

  # Create Authentication Providers
  iplanetAtn=realm.lookupAuthenticationProvider(ATN_NAME)
  if iplanetAtn != None:
    realm.destroyAuthenticationProvider(iplanetAtn)
  iplanetAtn=realm.createAuthenticationProvider(ATN_NAME,'weblogic.security.providers.authentication.IPlanetAuthenticator')
  print ">>> Authentication Provider created."

  # Config Authentication Providers
  iplanetAtn.setControlFlag(ATN_FLAG)
  iplanetAtn.setHost(ATN_HOST)
  iplanetAtn.setPort(int(ATN_PORT))
  iplanetAtn.setPrincipal(ATN_PRINCIPAL)
  iplanetAtn.setCredential(ATN_CREDENTIAL)
  iplanetAtn.setSSLEnabled(Boolean(ATN_SSL))
  iplanetAtn.setUserBaseDN(ATN_BASE_DN)
  iplanetAtn.setUserFromNameFilter(ATN_USER_FILTER)
  iplanetAtn.setUserObjectClass(ATN_USER_CLASS)
  iplanetAtn.setUseRetrievedUserNameAsPrincipal(Boolean(ATN_USER_AS_PRINCIPAL))
  iplanetAtn.setGroupBaseDN(ATN_PRINCIPAL)
  iplanetAtn.setGroupFromNameFilter(ATN_GROUP_FILTER)
  iplanetAtn.setConnectTimeout(int(ATN_TIMEOUT))
  print ">>> Authentication Provider configured."

  # Reorder Authentication Providers
  defaultAtn=realm.lookupAuthenticationProvider('DefaultAuthenticator')
  defaultIdA=realm.lookupAuthenticationProvider('DefaultIdentityAsserter')
  realm.setAuthenticationProviders(jarray.array([iplanetAtn,defaultAtn,defaultIdA],weblogic.management.security.authentication.AuthenticationProviderMBean))
  print ">>> Authentication Providers re-ordered."

except Exception, e:
  print "ERROR... check error messages for cause."
  print e
  exit(exitcode=1)

redirect('/dev/null','false')
save()
activate()
disconnect()
exit(exitcode=0)
[weblogic@weblogic-server-0 ~]$

 

With the above, you have everything needed to simply create an LDAP Authentication Provider. I won't really describe what the WLST script is doing; I believe it is pretty self-explanatory and there is a comment line before each section which describes the purpose of the commands. If you have any questions, please feel free to ask them in the comments below! I used an IPlanet Authenticator but you can obviously choose something else. I also set the group base DN to my principal because I don't need the groups, but you can set whatever you want/need. There are other properties as well that you can set; just check them in WLST to get the correct method name (or use the record method as mentioned previously). In the above WLST script, the last thing done is to re-order the Authentication Providers so that the newly created LDAP one is the first to be checked. The control flag is set to "SUFFICIENT", meaning that if the authentication is successful in the LDAP, then WebLogic can proceed. For the LDAP user's principal and password, you can also use an encrypted file containing the username and password with the "setEncrypted(…)" method instead.

To execute the WLST script and therefore create the LDAP Authentication Provider, just execute the script:

[weblogic@weblogic-server-0 ~]$ ls
createLDAPAuthenticationProviders.wlst  domain.properties
[weblogic@weblogic-server-0 ~]$
[weblogic@weblogic-server-0 ~]$ $ORACLE_HOME/oracle_common/common/bin/wlst.sh createLDAPAuthenticationProviders.wlst

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

>>> Loaded the properties file: /home/weblogic/domain.properties
>>> Connected to the AdminServer.
>>> Edit Session started.
>>> Authentication Provider created.
>>> Authentication Provider configured.
>>> Authentication Providers re-ordered.
[weblogic@weblogic-server-0 ~]$

 

As shown above, you can pass a parameter to the script with the full path and name of the properties file to be loaded. Alternatively, if you do not provide any parameter, it will assume that the properties file is located right beside the WLST script under a certain name ("domain.properties" by default). In all cases, once the LDAP Authentication Provider has been created, you will need to restart the full Domain. That's all there is to do to create an LDAP/LDAPs connection on WebLogic Server.
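
For instance, to run the same script against a properties file stored elsewhere (the path is purely illustrative):

[weblogic@weblogic-server-0 ~]$ $ORACLE_HOME/oracle_common/common/bin/wlst.sh createLDAPAuthenticationProviders.wlst /app/weblogic/properties/dev_domain.properties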

 

This article WebLogic Server – Automatic/Silent creation of an LDAP Authentication Provider appeared first on Blog dbi services.

One Host, multiple PostgreSQL Clusters – Monitoring using EDB PEM

Tue, 2019-12-24 07:21

At a customer we implemented the EDB Postgres Enterprise Manager (PEM). As they run multiple PostgreSQL clusters on one server, they of course wanted to monitor all the clusters using PEM. This blog should give you short guidance on how to add databases to PEM.

Starting position

– PEM Server installed, up and running
– PEM Agent installed on the hosts, that should be monitored
– Hosts already registered in the PEM
– pg_hba.conf of all databases prepared to accept connections from PEM

host    postgres        enterprisedb    192.168.22.53/32           trust
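
Each cluster that should be monitored needs such an entry in its own pg_hba.conf, followed by a configuration reload. A small sketch, assuming two clusters with illustrative data directories and ports:

$ for pgdata in /u02/pgdata/cluster1 /u02/pgdata/cluster2; do
    echo "host    postgres        enterprisedb    192.168.22.53/32           trust" >> ${pgdata}/pg_hba.conf
  done
$ # reload each cluster so the new entries are taken into account
$ psql -p 5432 -c "select pg_reload_conf()" postgres
$ psql -p 5433 -c "select pg_reload_conf()" postgres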
First impression

As you can see, the hosts where the agent is installed and registered with PEM are visible in PEM, but the clusters are not automatically detected.

Add a cluster

Let's add a PostgreSQL cluster.


First of all, make sure to choose a self-documenting name. By default, this name is not unique, so you can add many servers with the same name, which is very confusing.

In the connection tab, enter the hostname or IP address and the other connection settings for the PostgreSQL cluster.

In case you run EFM High Availability, you can configure this on the "Advanced" tab.


Add the second cluster

Use the same steps as in "Add a cluster"; just take another name and enter the correct port.

That's it, really simple and straightforward. All clusters and their databases are shown in a nice manner.

This article One Host, multiple PostgreSQL Clusters – Monitoring using EDB PEM appeared first on Blog dbi services.

A Ruthless Repository Shutdown Utility, Part II

Wed, 2019-12-18 15:51
Stopping the unreachable repositories

Suppose that the docbroker has been stopped prematurely and that we want to shut down the repositories, but the out-of-the-box dm_shutdown_repository script is not effective. Why is it so, by the way? If we look closely inside the shutdown script, we quickly notice the reason:

#!/bin/sh
################## DOCUMENTUM SERVER SHUTDOWN FILE ######################
#
# 1994-2018 OpenText Corporation. All rights reserved
# Version 16.4 of the Documentum Server.
#
# A generated server shutdown script for a repository.
# This file was generated on Fri Aug 30 12:15:10 CEST 2019 by user dmadmin.
#
check_connect_status() {
status=$1
if [ ! $status = 0 ] ; then
  cat <<-END
  ***** $0: ERROR
  ***** Unable to complete shutdown - unable to connect
  ***** to the server to issue $2 request.
END
  exit 1
fi
}
...
# Stop the server
echo Stopping Documentum server for repository: [dmtestgr02]
echo ''
DM_DMADMIN_USER=dmadmin
#
# Get the pid for the root process
#
DM_PID=`./iapi dmtestgr02 -U$DM_DMADMIN_USER -P -e << EOF  | grep 'root_pid' | sed -e 's/ .*[: A-Za-z]//'
apply,s0,NULL,LIST_SESSIONS
next,s0,q0
dump,s0,q0
exit
EOF`
status=$?
check_connect_status $status LIST_SESSIONS
...
            kill -9 $child_pid
...
  kill -9 $DM_PID
...
         kill -9 $child_pid
...

On line 29, the shutdown script first attempts to connect to the repository in order to retrieve the root pid of the server processes. On line 36, this attempt’s result is checked by the function check_connect_status defined earlier in the script at line 10. If something went wrong during the connection, iapi’s return status will be != 0 and check_connect_status will simply exit the script on line 18. So, if a repository has gone berserk, or no free sessions are available, or the docbroker is unreachable, the script will not be able to stop it. That logic is quite restrictive and we must fall back to killing the repository’s processes ourselves anyway.
Strangely enough, the script is not scared of killing processes, it does this from several places, but it rather looks like it is a bit shy in identifying the right ones and therefore relies on the server itself or, ultimately, on the user, for help in this area.
Admittedly, it is not always easy to pinpoint the right processes from the list returned by the command ps, especially if the repository is running in HA on the same machine, or if several repositories share the same machine, so extra care must be used in order not to kill the wrong ones. The dm_shutdown_docbase avoids this difficulty altogether by asking the content server (aka CS) its root pid and that is why it aborts if it cannot contact it.
Historically, the "kill" command could only "kill -9" (SIGKILL, the forceful, coercive kill) but nowadays it has been generalized to send signals and could just as well have been renamed to "signal" or "send". So, can a signal be sent to the main executable ${DM_HOME}/bin/documentum to ask it to cleanly shut down the repository? We wish, but this has not been implemented. Signals such as SIGQUIT, SIGTRAP, SIGINT and SIGABRT are indeed trapped, but they will only kill the server after printing the last executed SQL or the call stack trace to the server's log, e.g. after a SIGINT was sent:

2019-10-11T13:14:14.045467 24429[24429] 0100c35080004101 Error: dm_bear_trap: Unexpected exception, (SIGINT: interrupt: (2) at (Connection Failure)), during new session creation in module dmapply.cxx after line 542. Process exiting.
Last SQL statement executed by DB was:
 
 
Last SQL statement executed by DB was:
Last SQL statement executed by DB was:
 
 
 
 
Last SQL statement executed by DB was:
 
 
(23962) Outer Exception handler caught exception: SIGINT: interrupt: (2) at (RPC MAIN)

Thus, a corruption is theoretically possible while using any of those signals, just as it is when a SIGKILL signal is issued.
According to OTX Support, a trap handler that cleanly shuts down the repository has not been implemented because it needs a session to invoke the shutdown server method. OK, and what if a hidden session were opened at startup time and kept around just for such administrative cases? How about a handler to immediately force a projection to the available docbrokers instead of waiting for the next checkpoint cycle? As you see, there are ways to make the shutdown more resilient, but my overall feeling is that there is a lack of willingness to improve the content server.
Therefore, if waiting about 5 minutes for the repository to project to a docbroker is not acceptable, there is no alternative other than to kill -9 the repository's processes, start the docbroker(s) and then the repository. Other signals can work, but not always, and are not any safer.
In order to use that command, one needs to know the content server's root pid and, since the CS does not accept any connection at this point, one must get it from another source. Once the root pid is available, it can be given to the kill command with a slight subtlety: in order to include its child processes, the root pid must be negated, e.g.:

# use the standalone /bin/kill command;
$ /bin/kill --signal SIGKILL -12345
# or use bash's kill builtin:
$ command kill -s SIGKILL -12345

This will transitively kill the process with pid 12345 and all the others in same group, which are the ones it started itself, directly or indirectly.
If a numeric signal is preferred, the equivalent command is:

$ /bin/kill -9 -12345

I leave it to you to decide which one is more readable.
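
Before actually sending the signal, you can also double-check which processes would be affected by listing the members of the process group first (a sketch; 12345 again stands for the root pid):

$ # list all processes whose process group id equals the content server's root pid
$ ps -eo pid,pgid,cmd | gawk -v pgid=12345 '$2 == pgid'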
So now, we need to identify the repository's root process. Once found, we can send the SIGKILL signal to its negated value, which will propagate to all the child processes. Let's see now how to identify this root process.

Identifying the content server’s root process

Ordinarily, the LIST_SESSIONS server method returns a collection containing the root_pid attribute among other valuable information, e.g.:

API> apply,c,NULL,LIST_SESSIONS
...
q0
API> next,c,q0
...
OK
API> dump,c,q0
...
USER ATTRIBUTES
 
  root_start                      : 12/11/2019 22:53:19
  root_pid                        : 25329
  shared_mem_id                   : 2588691
  semaphore_id                    : 0
  session                      [0]: 0100c3508000a11c
                               [1]: 0100c3508000a102
                               [2]: 0100c3508000a101
  db_session_id                [0]: 272
                               [1]: 37
                               [2]: 33
  typelockdb_session_id        [0]: -1
                               [1]: -1
                               [2]: -1
  tempdb_session_ids           [0]: -1
                               [1]: 45
                               [2]: 36
  pid                          [0]: 17686
                               [1]: 26512
                               [2]: 26465
  user_name                    [0]: dmadmin
                               [1]: dmadmin
                               [2]: dmadmin
  user_authentication          [0]: Trusted Client
                               [1]: Password
                               [2]: Trusted Client
  client_host                  [0]: docker
                               [1]: 172.19.0.3
                               [2]: docker
  client_lib_ver               [0]: 16.4.0070.0035
                               [1]: 16.4.0070.0035
                               [2]: 16.4.0070.0035
...

But in our case, the CS is not reachable so it cannot be queried.
An easy alternative is to simply look into the CS’s log:

dmadmin@docker:/app/dctm$ less /app/dctm/dba/log/dmtest.log
 
    OpenText Documentum Content Server (version 16.4.0080.0129  Linux64.Oracle)
    Copyright (c) 2018. OpenText Corporation
    All rights reserved.
 
2019-12-11T22:53:19.757264      25329[25329]    0000000000000000        [DM_SERVER_I_START_SERVER]info:  "Docbase dmtest attempting to open"
 
2019-12-11T22:53:19.757358      25329[25329]    0000000000000000        [DM_SERVER_I_START_KEY_STORAGE_MODE]info:  "Docbase dmtest is using database for cryptographic key storage"
...

The number 25329 is the root_pid. It can be extracted from the log file as shown below:

$ grep "\[DM_SERVER_I_START_SERVER\]info" /app/dctm/dba/log/dmtest.log | gawk '{if (match($2, /\[[0-9]+\]/)) {print substr($2, RSTART + 1, RLENGTH - 2); exit}}'
25329
# or, more compactly:
gawk '{if (match($0, /\[([0-9]+)\].+\[DM_SERVER_I_START_SERVER\]info/, root_pid)) {print root_pid[1]; exit}}' /app/dctm/dba/log/dmtest.log
25329

The extracted root_pid can be confirmed by the ps command with options ajxf showing a nice tree-like view of the running processes. E.g.:

dmadmin@docker:/app/dctm$ ps_pgid 25329
 PPID   PID  PGID   SID TTY      TPGID STAT   UID   TIME COMMAND
    1 25329 25329 25329 ?           -1 Ss    1001   0:01 ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
25329 25370 25329 25329 ?           -1 S     1001   0:00  \_ /app/dctm/product/16.4/bin/mthdsvr master 0xe901fc83, 0x7f084db15000, 0x223000 50000  5 25329 dmtest /app/dctm/dba/log
25370 25371 25329 25329 ?           -1 Sl    1001   0:05  |   \_ /app/dctm/product/16.4/bin/mthdsvr worker 0xe901fc83, 0x7f084db15000, 0x223000 50000  5 0 dmtest /app/dctm/dba/log
25370 25430 25329 25329 ?           -1 Sl    1001   0:05  |   \_ /app/dctm/product/16.4/bin/mthdsvr worker 0xe901fc83, 0x7f084db15000, 0x223000 50000  5 1 dmtest /app/dctm/dba/log
25370 25451 25329 25329 ?           -1 Sl    1001   0:05  |   \_ /app/dctm/product/16.4/bin/mthdsvr worker 0xe901fc83, 0x7f084db15000, 0x223000 50000  5 2 dmtest /app/dctm/dba/log
25370 25464 25329 25329 ?           -1 Sl    1001   0:05  |   \_ /app/dctm/product/16.4/bin/mthdsvr worker 0xe901fc83, 0x7f084db15000, 0x223000 50000  5 3 dmtest /app/dctm/dba/log
25370 25482 25329 25329 ?           -1 Sl    1001   0:05  |   \_ /app/dctm/product/16.4/bin/mthdsvr worker 0xe901fc83, 0x7f084db15000, 0x223000 50000  5 4 dmtest /app/dctm/dba/log
25329 25431 25329 25329 ?           -1 S     1001   0:00  \_ ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
25329 25432 25329 25329 ?           -1 S     1001   0:00  \_ ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
25329 25453 25329 25329 ?           -1 S     1001   0:00  \_ ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
25329 25465 25329 25329 ?           -1 S     1001   0:00  \_ ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
25329 25489 25329 25329 ?           -1 S     1001   0:00  \_ ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
25329 26439 25329 25329 ?           -1 Sl    1001   0:11  \_ ./dm_agent_exec -docbase_name dmtest.dmtest -docbase_owner dmadmin -sleep_duration 0
25329 26465 25329 25329 ?           -1 S     1001   0:00  \_ ./documentum -docbase_name dmtest -security acl -init_file /app/dctm/dba/config/dmtest/server.ini
    1 10112 25329 25329 ?           -1 Rl    1001   0:03 ./dm_agent_exec -docbase_name dmtest.dmtest -docbase_owner dmadmin -trace_level 0 -job_id 0800c3508000218b -log_directory /app/dctm/dba/log -docbase_id 50000

On line 3, the CS for docbase dmtest was started with pid 25329 and the same value for its pgid. This process then started a few child processes, all with the pgid 25329.
ps_pgid on line 1 is a bash function defined in ~/.bashrc as follows:

# returns the lines from ps -ajxf with given gpid;
# the ps command's header line is printed only if at least 1 entry is found;
function ps_pgid {
   pgid=$1
   ps -ajxf | gawk -v pgid=$pgid 'BEGIN {getline; header = $0; h_not_printed = 1} {if ($3 == pgid) {if (h_not_printed) {print header; h_not_printed = 0}; print}}'
}

The command does not show the method server nor the docbroker as they were started separately from the CS.
Thus, if we execute the command below:

$ kill --signal SIGKILL -25329

the CS will be killed along with all its child processes, which is exactly what we want.

Putting both commands together, we get:

kill --signal SIGKILL -$(grep "\[DM_SERVER_I_START_SERVER\]info" /app/dctm/dba/log/dmtest.log | gawk '{if (match($2, /\[[0-9]+\]/)) {print substr($2, RSTART + 1, RLENGTH - 2); exit}}')

It may be worth defining a bash function for it too:

function kill_cs {
   repo=$1
   kill --signal SIGKILL -$(grep "\[DM_SERVER_I_START_SERVER\]info" /app/dctm/dba/log/${repo}.log | gawk '{if (match($2, /\[[0-9]+\]/)) {print substr($2, RSTART + 1, RLENGTH - 2); exit}}')
}
 
# source it:
. ~/.bashrc
 
# call it:
kill_cs dmtest

where dmtest is the content server to kill.
The naive way to search for the running content server via the command "ps -ef | grep docbase_name" can be too ambiguous in case of multiple content servers for the same repository (e.g. in a high-availability installation) or when docbase_name is the stem of a family of docbases (e.g. dmtest_1, dmtest_2, …, dmtest_10, etc…). Besides, even if no ambiguity were possible, it would return too many processes to be killed individually. xargs could do it at once, sure, but why risk killing the wrong ones? The above ps_pgid function directly looks for the given group id, which is the root_pid of the content server of interest taken straight out of its log file; no ambiguity here.

Hardening start-stop.sh

This ruthless kill functionality could be added to the start-stop script presented in Part I, either as a command-line option to the stop parameter (say, -k, like in the dm_shutdown_repository script) or as a full parameter on a par with the stop | start | status ones, i.e.:

start-stop.sh stop | start | status | kill ...

or, simply, by deciding that a stop should always succeed and forcing a kill if needed. In such a variant, the stop_docbase() function becomes:

stop_docbase() {
   docbase=$1
   echo "stopping $docbase"
   ./dm_shutdown_${docbase}
   if [[ $? -eq 1 ]]; then
      echo "killing docbase $docbase"
      kill_cs $docbase
   fi
   echo "docbase $docbase stopped"
}
Conclusion

If the content server were open source, we wouldn't have this article's title. Instead, it would be "Forcing impromptu projections to docbrokers through signal handling in content server: an implementation" or "Shutting down a content server by sending a signal: a proposal". We could send this request to the maintainers and probably receive a positive answer. Or we could implement the changes ourselves and submit them as an RFC. This model does not work so well with closed, commercial source, which evolves following its own marketing agenda. Nonetheless, this situation gives us the opportunity to rant about it and find work-arounds. Imagine a world where all software were flawless: would it be as fun?

This article A Ruthless Repository Shutdown Utility, Part II appeared first on Blog dbi services.

A Ruthless Repository Shutdown Utility, Part I

Wed, 2019-12-18 13:16

You have finally completed that migration and need to restart all the Documentum processes. So, you shut down the docbroker and move on to the repositories, but then you receive an error message about them not being reachable any more. Or conversely, you want to start all the Documentum processes and you start the repositories first and the docbrokers later. Next, you want to connect to one repository and you receive the same error message. Of course, you finally remember: since the docbroker is a requirement for the repositories, it must be started first and shut down last, but it is too late now. How to get out of this annoying situation? You could just (re)start the docbroker and wait for the next repositories' checkpoint, at most 5 minutes by default. If this is not acceptable, at first sight there is no other solution than to "kill -9" the repositories' processes, start the docbroker and only then the repositories. Let's see if we can find a better way. Spoiler alert: to end this insufferable suspense, I must say up front that there is no other way, sorry, but there are a few ways to alleviate this inconvenience.

A quick clarification

Let's first clarify a point of terminology here: there is a difference between docbases/repositories and content servers. A docbase encompasses the actual content with its persistent data and technical information, whereas the content server is the set of running processes that give access to and manage one docbase. It is very similar to Oracle's databases and instances, where one database can be served by several instances, providing parallelism and high availability. A docbase can be served by more than one content server, generally spread over different machines, each with its own set of dm_start_docbase and dm_shutdown_docbase scripts and server.ini. A docbase knows how many content servers use it because each of them has its own dm_server_config object. If there is just one content server, both docbase and content server can be used interchangeably, but when there are several content servers for the same docbase, saying "stopping the docbase" really means "stopping one particular content server", and this is the meaning that will be used in the rest of the article. If the docbase has more than one content server, just extend the presented manipulations to each of them.

Connecting to the repositories without a docbroker

If one could connect to a repository without a running docbroker (the situation that triggered this article), things would be much easier. In the ancient, simpler times, the dmcl.ini parameters below could help working around an unavailable docbroker:

[DOCBROKER_DEBUG]
docbase_id = <id of docbase as specified in its server.ini file>
host =  <host's name the docbase server is running on>
port = <docbase's port as specified in /etc/services>
service = <docbase's service name as specified in /etc/services>

and they used to work.
After the switch to the dfc.properties file, those parameters were renamed as follows:

dfc.docbroker.debug.docbase_id=<id of docbase as specified in its server.ini file>
dfc.docbroker.debug.host=<host's name the docbase server is running on>
dfc.docbroker.debug.port=<docbase's port as specified in /etc/services>
dfc.docbroker.debug.service=<docbase's service name as specified in /etc/services>

Unfortunately, they don’t work any more. Actually, although they are still documented in the dfcfull.properties, they have not been implemented and will never be according to OTX. Moreover, they will be removed in the future. Too bad, that would have been such a cheap way to extricate oneself from an uncomfortable situation.

Preventing the situation

The best solution is obviously to prevent it from happening. This can easily be achieved by using a central script for stopping and starting the Documentum stack and, while we are at it, inquiring its status.
Documentum already provides such a script, e.g. see here: Linux scripts for automatic startup and shutdown of Documentum Content Server. Here is another, more sophisticated implementation:

#!/bin/bash
#
# See Usage() function below for explanations; 
# cec - dbi-services - April 2019
#

general_status=0

Usage() {
   cat <<EoU
Usage:
    start-stop.sh [(help) | start | stop | status] [(all) | docbases | docbrokers | docbase={,} | docbroker={,} | method_server]
 E.g.:
    display this help screen:
       start-stop.sh
    start all:
       start-stop.sh start [all]
    stop all:
       start-stop.sh stop [all]
    status all:
       start-stop.sh status [all]
    start docbroker01:
       start-stop.sh start docbroker=docbroker01
    start docbases global_registry and dmtest01:
       start-stop.sh start docbase=global_registry,dmtest01
    start all the docbases:
       start-stop.sh start docbases
    start all the docbrokers:
       start-stop.sh start docbrokers
EoU
}

start_docbroker() {
   docbroker=$1
   echo "starting up docbroker $docbroker ..."
   ./dm_launch_${docbroker}
}

start_all_docbrokers() {
   echo "starting the docbrokers ..."
   DOCBROKERS=`ls -1 dm_launch_* 2>/dev/null | cut -f3 -d_`
   nb_items=0
   for docbroker in $DOCBROKERS; do
      start_docbroker $docbroker
      (( nb_items++ ))
   done
   echo "$nb_items docbrokers started"

}

start_docbase() {
   docbase=$1
   echo "starting $docbase"
   ./dm_start_${docbase}
}

start_all_docbases() {
   echo "starting the repositories ..."
   DOCBASES=`ls -1 config 2>/dev/null `
   nb_items=0
   for docbase in $DOCBASES; do
      start_docbase $docbase
      (( nb_items++ ))
   done
   echo "$nb_items repositories started"
}

start_method_server() {
   echo "starting the method server ..."
   cd ${DOCUMENTUM}/${JBOSS}/server
   nohup ${DOCUMENTUM}/${JBOSS}/server/startMethodServer.sh 2>&1 > /tmp/nohup.log &
   echo "method server started"
}

start_all() {
   echo "starting all the documentum processes ..."
   start_all_docbrokers
   start_all_docbases
   start_method_server
}

status_docbroker() {
   docbroker_name=$1
   docbroker_host=$(grep "^host=" /app/dctm/dba/dm_launch_${docbroker_name} | cut -d= -f2)
   docbroker_port=$(grep "dmdocbroker -port " /app/dctm/dba/dm_launch_${docbroker_name} | cut -d\  -f3)
   dmqdocbroker -t $docbroker_host -p $docbroker_port -c ping 2> /dev/null 1> /dev/null
   local_status=$?
   if [ $local_status -eq 0 ]; then
      echo "$(date +"%Y/%m/%d %H:%M:%S"): successfully pinged docbroker $docbroker_name listening on port $docbroker_port on host $docbroker_host"
   else
      echo "$(date +"%Y/%m/%d %H:%M:%S"): docbroker $docbroker_name listening on port $docbroker_port on host $docbroker_host is unhealthy"
      general_status=1
   fi
   echo "status for docbroker $docbroker_name:$docbroker_port: $local_status, i.e. $(if [[ $local_status -eq 0 ]]; then echo OK; else echo NOK;fi)"
}

status_all_docbrokers() {
   DOCBROKERS=`ls -1 dm_launch_* 2>/dev/null | cut -f3 -d_`
   DOCBROKERS_PORTS=`grep -h "./dmdocbroker" dm_launch_* | cut -f3 -d\ `
   for f in `ls -1 dm_launch_* 2>/dev/null `; do
      docbroker_name=`echo $f | cut -f3 -d_`
      docbroker_port=`grep "./dmdocbroker" $f | cut -f3 -d\ `
      status_docbroker $docbroker_name $docbroker_port
   done
   echo "general status for all docbrokers: $general_status, i.e. $(if [[ $general_status -eq 0 ]]; then echo OK; else echo NOK;fi)"
}

status_docbase() {
   docbase=$1
   timeout --preserve-status 30s idql $docbase -Udmadmin -Pxx 2> /dev/null 1> /dev/null <<eoq
     quit
eoq
   local_status=$?
   if [[ $local_status -eq 0 ]]; then
      echo "$(date +"%Y/%m/%d %H:%M:%S"): successful connection to repository $docbase"
   else
      echo "$(date +"%Y/%m/%d %H:%M:%S"): repository $docbase is unhealthy"
      general_status=1
   fi
   echo "status for docbase $docbase: $local_status, i.e. $(if [[ $local_status -eq 0 ]]; then echo OK; else echo NOK;fi)"
}

status_all_docbases() {
   DOCBASES=`ls -1 config 2>/dev/null `
   for docbase in $DOCBASES; do
      status_docbase $docbase
   done
   echo "general status for all docbases: $general_status, i.e. $(if [[ $general_status -eq 0 ]]; then echo OK; else echo NOK;fi)"
}

status_method_server() {
   # check the method server;
   curl --silent --fail -k http://${HOSTNAME}:9080/DmMethods/servlet/DoMethod 2>&1 > /dev/null
   local_status=$?
   if [ $local_status -eq 0 ]; then
      echo "$(date +"%Y/%m/%d %H:%M:%S"): method server successfully contacted"
   else
      echo "$(date +"%Y/%m/%d %H:%M:%S"): method server is unhealthy"
      general_status=1
   fi
   echo "status for method_server: $local_status, i.e. $(if [[ $local_status -eq 0 ]]; then echo OK; else echo NOK;fi)"
}

status_all() {
   status_all_docbrokers
   status_all_docbases
   status_method_server
   echo "General status: $general_status, i.e. $(if [[ $general_status -eq 0 ]]; then echo OK; else echo NOK;fi)"
}

stop_docbase() {
   docbase=$1
   echo "stopping $docbase"
   ./dm_shutdown_${docbase}
   echo "docbase $docbase stopped"
}

stop_all_docbases() {
   echo "stopping the repositories ..."
   DOCBASES=`ls -1 config 2>/dev/null `
   nb_items=0
   for docbase in $DOCBASES; do
      stop_docbase $docbase
      (( nb_items++ ))
   done
   echo "$nb_items repositories stopped"
}

stop_docbroker() {
   docbroker=$1
   echo "stopping docbroker $docbroker ..."
   ./dm_stop_${docbroker}
   echo "docbroker $docbroker stopped"
}

stop_all_docbrokers() {
   echo "stopping the docbrokers ..."
   DOCBROKERS=`ls -1 dm_stop_* 2>/dev/null | cut -f3 -d_`
   nb_items=0
   for docbroker in $DOCBROKERS; do
      stop_docbroker $docbroker
      (( nb_items++ ))
   done
   echo "$nb_items docbrokers stopped"
}

stop_method_server() {
   echo "stopping the method server ..."
   ${DOCUMENTUM}/${JBOSS}/server/stopMethodServer.sh
   echo "method server stopped"
}

stop_all() {
   echo "stopping all the documentum processes ..."
   stop_all_docbases
   stop_method_server
   stop_all_docbrokers
   echo "all documentum processes stopped"
   ps -ajxf | egrep '(PPID|doc|java)' | grep -v grep | sort -n -k2,2
}

# -----------
# main;
# -----------
   [[ -f ${DM_HOME}/bin/dm_set_server_env.sh ]] && . ${DM_HOME}/bin/dm_set_server_env.sh
   cd ${DOCUMENTUM}/dba
   if [[ $# -eq 0 ]]; then
      Usage
      exit 0
   else
      while [[ $# -ge 1 ]]; do
         case $1 in
	    help)
	       Usage
	       exit 0
	    ;;
            start|stop|status)
	       cmd=$1
	       shift
	       if [[ -z $1 || $1 = "all" ]]; then
	          ${cmd}_all
	       elif [[ $1 = "docbases" ]]; then
	          ${cmd}_all_docbases
	       elif [[ $1 = "docbrokers" ]]; then
	          ${cmd}_all_docbrokers
	       elif [[ ${1%%=*} = "docbase" ]]; then
	          docbases=`echo ${1##*=} | gawk '{gsub(/,/, " "); print}'`
                  for docbase in $docbases; do
	             ${cmd}_docbase $docbase
	          done
	       elif [[ ${1%%=*} = "docbroker" ]]; then
	          docbrokers=`echo ${1##*=} | gawk '{gsub(/,/, " "); print}'`
                  for docbroker in $docbrokers; do
	             ${cmd}_docbroker $docbroker
	          done
	       elif [[ $1 = "method_server" ]]; then
                  ${cmd}_method_server
               fi
               exit $general_status
            ;;
            *)
               echo "syntax error"
	       Usage
	       exit 1
	    ;;
         esac
         shift
      done
   fi

See lines 11 to 29 for its usage.
Note on line 110 the timeout command when attempting to connect to a docbase to check its status; see the article Adding a timeout in monitoring probes for an explanation.
We couldn’t help but add the option to address each component individually, or a few of them, in addition to all of them at once. So the script lets us stop, start and query the status of one particular docbroker, docbase or method server, a list of docbrokers or docbases, or everything at once.
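For instance, given the option parsing shown in the script’s main section, individual components can be targeted like this (the docbase and docbroker names are placeholders):

$ start-stop.sh status docbase=repo1,repo2
$ start-stop.sh stop docbroker=docbroker01
$ start-stop.sh start method_server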
For a maintenance task, to stop all the Documentum processes at once, the command below could be used:

$ start-stop.sh stop all

Similarly, to start everything:

$ start-stop.sh start all

Thus, the proper order is guaranteed and human error is prevented. By standardizing on such a script and using it as shown, the aforementioned problem won’t occur anymore.

That is fine, but if we didn’t use the script and find ourselves in the situation where no docbroker is running and we must shut down the repositories, is there a way to do it easily and cleanly? Well, easily, certainly, but cleanly, no. Please continue reading in Part II.

This article, A Ruthless Repository Shutdown Utility, Part I, appeared first on Blog dbi services.

Power BI Report Server – Kerberos Advanced configuration

Mon, 2019-12-16 12:21
Introduction

Following the basic configuration explained in a previous blog (Link), I describe here how to make more advanced configurations in specific cases that are nevertheless often met at customers. It only complements what has been described in the previous blog, for specific situations needing additional or alternative configurations.

Configuration using HTTPS protocol with DNS alias

If you are requested to secure access to your Power BI Report Server, the solution is of course to use the HTTPS protocol, but first you will need a server certificate installed on your Power BI Report Server host. The aim of this blog is not to explain how to create a certificate, but just to give you the trick to make it compliant with Kerberos delegation when using a DNS alias for your server.
In that case, the URL used to access your Power BI Report Server portal is based on a DNS alias: you have to generate a certificate with a CN matching your URL, but do not forget to also specify a subject alternative name of type DNS matching your DNS alias.

After having your certificate issued and installed on your server, you can use it for your Web Service URL and your Web Portal URL using the Report Server Configuration Manager.

Finally, do not forget to create the HTTP service SPN for the Power BI Report Server service account using your certificate URL.

SetSpn -a http/PowerBIRS.dbi-test.local PBIRSServiceAccount
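To verify the registration afterwards, the same tool can list the SPNs of the service account or search for a specific one (same account and URL as above):

SetSpn -l PBIRSServiceAccount
SetSpn -q http/PowerBIRS.dbi-test.local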

 

Using Data Sources on SQL Server AlwaysOn with read-only ApplicationIntent

If your report data sources are linked to SQL Server databases participating in availability groups, with replicas set as read-only, you probably wish your reporting system to read from those replicas in order to minimize the load on your primary node.
To force the reports to query the data from the read-only replica, the parameter ApplicationIntent=ReadOnly is specified in the connection string of the data source (Data Source=;Initial Catalog=;ApplicationIntent=ReadOnly;MultiSubnetFailover=True;Encrypt=True).
In this case a redirection is made by the SQL Server availability group listener to the dedicated read-only node.
In this context, if you use integrated security with Kerberos, you have to deal with the SPN of the read-only node, since reading will be redirected to it.
Additional SPNs must therefore be created for each SQL Server Database Engine instance name (or DNS alias) participating in the targeted availability group. I recommend creating all the involved SPNs to cover all cases where the roles of your replicas change.
To illustrate this case, see below the SPNs created for the SQL Server service account:

SetSPN -a MSSQLSvc/LiDBSrc001:1433 svc_srvsql
SetSPN -a MSSQLSvc/LiDBSrc001.dbi-test.local:1433 svc_srvsql
SetSPN -a MSSQLSvc/DBSrcIn01r1 svc_srvsql
SetSPN -a MSSQLSvc/DBSrcIn01r1.dbi-test.local:1433 svc_srvsql
SetSPN -a MSSQLSvc/DBSrcIn01r2 svc_srvsql
SetSPN -a MSSQLSvc/DBSrcIn01r2.dbi-test.local:1433 svc_srvsql
SetSPN -a MSSQLSvc/DBSrcIn01r3 svc_srvsql
SetSPN -a MSSQLSvc/DBSrcIn01r3.dbi-test.local:1433 svc_srvsql
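As a quick check that the read-only routing behaves as expected, a connection can be opened with the same ReadOnly intent using sqlcmd (a sketch assuming LiDBSrc001 is the availability group listener and MyAGDatabase a database of the group); the query should return the name of a secondary replica, not the primary:

sqlcmd -S LiDBSrc001 -d MyAGDatabase -E -K ReadOnly -M -Q "select @@servername"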

If you are using constrained delegation, do not forget to add all these services to the list of published services your Power BI Report Server service account is trusted to delegate to.

This article, Power BI Report Server – Kerberos Advanced configuration, appeared first on Blog dbi services.

Documentum – NoSuchMethodError on setResourceBundle with D2-REST 16.4 on WebLogic

Sat, 2019-12-14 02:00

In the scope of an upgrade project, with some colleagues, we have been deploying some D2-REST applications on Kubernetes pods using WebLogic Server. As explained in a previous blog, we first tried to upgrade our D2-REST 4.x into 16.4 but faced a small error. I don’t know if you have already used/deployed D2-REST, but it seems to me that the deployment is always kinda chaotic. Sometimes you will need to apply some steps, then for the next version they are not needed anymore, but later they will be needed again, and so on. So in the end, we always try to deploy the OOTB package with some small improvements and, whenever we face an error, we try to fix it for this specific version and this version only. Never assume that a fix for one version is good for all versions.

Below, I will be using some scripts and properties files that are present in the “dbi_resources” folder: these are utilities that we use for automation and to simplify our lives. So we tried to deploy the D2-REST 16.4 on our WebLogic Servers 12.2.1.x:

[weblogic@wsd2rest-0 ~]$ cd $APPLICATIONS
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ wlst_cmd="$ORACLE_HOME/oracle_common/common/bin/wlst.sh"
[weblogic@wsd2rest-0 dbi]$ wlst_script="${dbi_resources}/manageApplication.wls"
[weblogic@wsd2rest-0 dbi]$ domain_prop="${dbi_resources}/domain.properties"
[weblogic@wsd2rest-0 dbi]$ deploy_prop="${dbi_resources}/D2-REST.deploy"
[weblogic@wsd2rest-0 dbi]$ undeploy_prop="${dbi_resources}/D2-REST.undeploy"
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ ${wlst_cmd} ${wlst_script} ${domain_prop} ${deploy_prop}

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

>>> Loaded the properties file: ${dbi_resources}/domain.properties
>>> Loaded the properties file: ${dbi_resources}/D2-REST.deploy
>>> Connected to the AdminServer.
>>> Edit Session started.

<Dec 11, 2019 4:49:19 PM UTC> <Info> <J2EE Deployment SPI> <BEA-260121> <Initiating deploy operation for application, D2-REST [archive: $APPLICATIONS/D2-REST.war], to msD2-REST-02 msD2-REST-01 .>
ERROR... check error messages for cause.
Error occurred while performing activate : Error while Activating changes. : java.lang.NoSuchMethodError: org.apache.log4j.Logger.setResourceBundle(Ljava/util/ResourceBundle;)V
Use dumpStack() to view the full stacktrace :
Problem invoking WLST - Traceback (innermost last):
  File "${dbi_resources}/manageApplication.wls", line 77, in ?
  File "<iostream>", line 569, in stopEdit
  File "<iostream>", line 553, in raiseWLSTException
WLSTException: Error occurred while performing stopEdit : Cannot call stopEdit without an edit session in progress

[weblogic@wsd2rest-0 dbi]$

 

At this point, the application has been deployed but it cannot be started properly. It will therefore be stuck in the “New” status on the WebLogic side. In the D2-REST log file, the error message looks like this:

2019-12-11 16:49:57,112 UTC [ERROR] ([ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)') - o.springframework.web.context.ContextLoader   : Context initialization failed
java.lang.NoSuchMethodError: org.apache.log4j.Logger.setResourceBundle(Ljava/util/ResourceBundle;)V
        at com.documentum.fc.common.DfLogger.<clinit>(DfLogger.java:622) ~[dfc-16.4.jar:na]
        at com.documentum.fc.common.impl.logging.LoggingConfigurator.onPreferencesInitialized(LoggingConfigurator.java:178) ~[dfc-16.4.jar:na]
        at com.documentum.fc.common.DfPreferences.initialize(DfPreferences.java:71) ~[dfc-16.4.jar:na]
        at com.documentum.fc.common.DfPreferences.getInstance(DfPreferences.java:43) ~[dfc-16.4.jar:na]
        at com.documentum.fc.client.DfSimpleDbor.getDefaultDbor(DfSimpleDbor.java:78) ~[dfc-16.4.jar:na]
        at com.documentum.fc.client.DfSimpleDbor.<init>(DfSimpleDbor.java:66) ~[dfc-16.4.jar:na]
        at com.documentum.fc.client.DfClient$ClientImpl.<init>(DfClient.java:350) ~[dfc-16.4.jar:na]
        at com.documentum.fc.client.DfClient.<clinit>(DfClient.java:766) ~[dfc-16.4.jar:na]
        at com.emc.documentum.rest.context.WebAppContextInitializer.getDfcVersion(WebAppContextInitializer.java:104) ~[_wl_cls_gen.jar:na]
        at com.emc.documentum.rest.context.WebAppContextInitializer.collectInfo(WebAppContextInitializer.java:81) ~[_wl_cls_gen.jar:na]
        at com.emc.documentum.rest.context.WebAppContextInitializer.preloadAppEnvironment(WebAppContextInitializer.java:67) ~[_wl_cls_gen.jar:na]
        at com.emc.documentum.rest.context.WebAppContextInitializer.initialize(WebAppContextInitializer.java:38) ~[_wl_cls_gen.jar:na]
        at com.emc.documentum.rest.context.WebAppContextInitializer.initialize(WebAppContextInitializer.java:31) ~[_wl_cls_gen.jar:na]
        at org.springframework.web.context.ContextLoader.customizeContext(ContextLoader.java:482) ~[spring-web-4.3.10.RELEASE.jar:4.3.10.RELEASE]
        at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:442) ~[spring-web-4.3.10.RELEASE.jar:4.3.10.RELEASE]
        at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:325) ~[spring-web-4.3.10.RELEASE.jar:4.3.10.RELEASE]
        at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107) [spring-web-4.3.10.RELEASE.jar:4.3.10.RELEASE]
        at weblogic.servlet.internal.EventsManager$FireContextListenerAction.run(EventsManager.java:705) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:328) [com.oracle.weblogic.security.subject.jar:12.2.1.3]
        at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197) [com.oracle.weblogic.security.subject.jar:12.2.1.3]
        at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.EventsManager.executeContextListener(EventsManager.java:251) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.EventsManager.notifyContextCreatedEvent(EventsManager.java:204) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.EventsManager.notifyContextCreatedEvent(EventsManager.java:192) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.WebAppServletContext.preloadResources(WebAppServletContext.java:1921) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.WebAppServletContext.start(WebAppServletContext.java:3106) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.WebAppModule.startContexts(WebAppModule.java:1843) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.servlet.internal.WebAppModule.start(WebAppModule.java:884) [com.oracle.weblogic.servlet.jar:12.2.1.3]
        at weblogic.application.internal.ExtensibleModuleWrapper$StartStateChange.next(ExtensibleModuleWrapper.java:360) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.ExtensibleModuleWrapper$StartStateChange.next(ExtensibleModuleWrapper.java:356) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:45) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.ExtensibleModuleWrapper.start(ExtensibleModuleWrapper.java:138) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.flow.ModuleListenerInvoker.start(ModuleListenerInvoker.java:124) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.flow.ModuleStateDriver$3.next(ModuleStateDriver.java:233) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.flow.ModuleStateDriver$3.next(ModuleStateDriver.java:228) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:45) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.flow.ModuleStateDriver.start(ModuleStateDriver.java:78) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.flow.StartModulesFlow.activate(StartModulesFlow.java:52) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.BaseDeployment$2.next(BaseDeployment.java:752) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:45) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.BaseDeployment.activate(BaseDeployment.java:262) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.SingleModuleDeployment.activate(SingleModuleDeployment.java:52) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.application.internal.DeploymentStateChecker.activate(DeploymentStateChecker.java:165) [com.oracle.weblogic.application.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.AppContainerInvoker.activate(AppContainerInvoker.java:90) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.operations.AbstractOperation.activate(AbstractOperation.java:631) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.operations.ActivateOperation.activateDeployment(ActivateOperation.java:171) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doCommit(ActivateOperation.java:121) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.operations.AbstractOperation.commit(AbstractOperation.java:348) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentCommit(DeploymentManager.java:907) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.DeploymentManager.activateDeploymentList(DeploymentManager.java:1468) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.DeploymentManager.handleCommit(DeploymentManager.java:459) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.commit(DeploymentServiceDispatcher.java:181) [com.oracle.weblogic.deploy.jar:12.2.1.3]
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doCommitCallback(DeploymentReceiverCallbackDeliverer.java:217) [com.oracle.weblogic.deploy.service.jar:12.2.1.3]
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$100(DeploymentReceiverCallbackDeliverer.java:14) [com.oracle.weblogic.deploy.service.jar:12.2.1.3]
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$2.run(DeploymentReceiverCallbackDeliverer.java:69) [com.oracle.weblogic.deploy.service.jar:12.2.1.3]
        at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:678) [com.bea.core.weblogic.workmanager.jar:12.2.1.3]
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352) [com.bea.core.utils.full.jar:12.2.1.3]
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337) [com.bea.core.utils.full.jar:12.2.1.3]
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57) [com.oracle.weblogic.work.jar:12.2.1.3]
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41) [com.bea.core.weblogic.workmanager.jar:12.2.1.3]
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:652) [com.bea.core.weblogic.workmanager.jar:12.2.1.3]
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:420) [com.bea.core.weblogic.workmanager.jar:12.2.1.3]
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:360) [com.bea.core.weblogic.workmanager.jar:12.2.1.3]

 

The solution is quite simple: it’s just a conflict between the log4j jars that come with the OOTB war file provided by OpenText… So you just need to undeploy the application, remove the conflicting jar and redeploy it afterwards. If you are facing the error above, then it’s linked to the “log4j-over-slf4j” jar file and you can solve it like this:

[weblogic@wsd2rest-0 dbi]$ ${wlst_cmd} ${wlst_script} ${domain_prop} ${undeploy_prop}

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

>>> Loaded the properties file: ${dbi_resources}/domain.properties
>>> Loaded the properties file: ${dbi_resources}/D2-REST.undeploy
>>> Connected to the AdminServer.
>>> Edit Session started.

<Dec 11, 2019 4:54:46 PM UTC> <Info> <J2EE Deployment SPI> <BEA-260121> <Initiating undeploy operation for application, D2-REST [archive: null], to msD2-REST-01 msD2-REST-02 .>

Current Status of your Deployment:
Deployment command type: undeploy
Deployment State : completed
Deployment Message : no message
None

>>> Execution completed.
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ jar -tvf D2-REST.war | grep "WEB-INF/lib/log4j"
481535 Tue Jan 05 05:02:00 UTC 2016 WEB-INF/lib/log4j-1.2.16.jar
 12359 Mon Dec 12 03:29:02 UTC 2016 WEB-INF/lib/log4j-over-slf4j-1.6.1.jar
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ zip -d D2-REST.war WEB-INF/lib/log4j-over-slf4j*
deleting: WEB-INF/lib/log4j-over-slf4j-1.6.1.jar
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ ${wlst_cmd} ${wlst_script} ${domain_prop} ${deploy_prop}

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

>>> Loaded the properties file: ${dbi_resources}/domain.properties
>>> Loaded the properties file: ${dbi_resources}/D2-REST.deploy
>>> Connected to the AdminServer.
>>> Edit Session started.

<Dec 11, 2019 4:56:05 PM UTC> <Info> <J2EE Deployment SPI> <BEA-260121> <Initiating deploy operation for application, D2-REST [archive: $APPLICATIONS/D2-REST.war], to msD2-REST-02 msD2-REST-01 .>

Current Status of your Deployment:
Deployment command type: deploy
Deployment State : completed
Deployment Message : no message
None

>>> Execution completed.
[weblogic@wsd2rest-0 dbi]$

 

As you can see above, the D2-REST 16.4 is now successfully deployed. You can access it and work with it without any issues.

[weblogic@wsd2rest-0 dbi]$ curl -s -k https://lb_url/D2-REST/product-info | python -mjson.tool
{
    "links": [
        {
            "href": "https://lb_url/D2-REST/product-info",
            "rel": "self"
        }
    ],
    "name": "documentum-rest-services-product-info",
    "properties": {
        "build_number": "0511",
        "major": "16.4",
        "minor": "0000",
        "product": "Documentum D2 REST Services",
        "product_version": "16.4.0000.0511",
        "revision_number": "NA"
    }
}
[weblogic@wsd2rest-0 dbi]$

 

This article, Documentum – NoSuchMethodError on setResourceBundle with D2-REST 16.4 on WebLogic, appeared first on Blog dbi services.

Documentum – Cast trouble with D2-REST 16.5.x on WebLogic

Sat, 2019-12-14 02:00

In the scope of an upgrade project, with some colleagues, we have been deploying some D2-REST applications on Kubernetes pods using WebLogic Server. At the beginning, we started using D2-REST 16.4 and that was working properly (once the issue described here is fixed, along with some others linked to FIPS 140-2, and so on). After that, we tried to switch to higher versions (16.5.0 Pxx, 16.5.1 P00 or P04) but it stopped working with an error. We were able to replicate the issue with WebLogic Server 12.2.1.3 and 12.2.1.4, so it’s not specific to one small use case; it seems global to the D2-REST 16.5.x versions on WebLogic. It might impact other application servers as well; that would need some testing.

Upon accessing the D2-REST URL (e.g. https://lb_url/D2-REST), the service seemed to be working, but when going further, for example to the product information page (e.g. https://lb_url/D2-REST/product-info), the following error was always displayed:

<error xmlns="http://identifiers.emc.com/vocab/documentum" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <status>500</status>
  <code>E_INTERNAL_SERVER_ERROR</code>
  <message>An internal server error occurs.</message>
  <details>
    org.owasp.esapi.reference.DefaultSecurityConfiguration cannot be cast to com.emc.d2.web.security.D2SecurityConfiguration
  </details>
  <id>51872a76-g47f-4d6e-9d47-e9fa5d8c1291</id>
</error>

 

The error generated on the D2-REST logs at that time was:

java.lang.ClassCastException: org.owasp.esapi.reference.DefaultSecurityConfiguration cannot be cast to com.emc.d2.web.security.D2SecurityConfiguration
	at com.emc.d2.web.security.D2HttpUtilities.getHeader(D2HttpUtilities.java:40)
	at com.emc.documentum.d2.rest.filter.AppInfoFilter.getRemoteAddr(AppInfoFilter.java:82)
	at com.emc.documentum.d2.rest.filter.AppInfoFilter.doFilter(AppInfoFilter.java:36)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.emc.documentum.rest.security.filter.RepositoryNamingFilter.doFilter(RepositoryNamingFilter.java:40)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.emc.documentum.rest.filter.RestCorsFilter.doFilterInternal(RestCorsFilter.java:47)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:197)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.emc.documentum.rest.filter.CompressionFilter.doFilter(CompressionFilter.java:73)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.emc.documentum.rest.log.MessageLoggingFilter.doFilter(MessageLoggingFilter.java:69)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.emc.documentum.rest.security.filter.ExceptionHandlerFilter.doFilterInternal(ExceptionHandlerFilter.java:31)
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3797)
	at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3763)
	at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:344)
	at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
	at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
	at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
	at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2451)
	at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2299)
	at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2277)
	at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1720)
	at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1680)

 

Since we couldn’t find anything obvious, we opened an OpenText Support case (#4322241). There is a KB (KB14050670) around Internal Server Error but it didn’t help us in this case. After some research on the OpenText side, it seems that this is a known issue and there is a solution for it, but it is not documented at the moment: that’s the whole purpose of this blog. The solution is going to be in the next version of the D2FS REST Services Development Guide, therefore if you look on the OpenText Support Site, you won’t find anything related to this error yet. Don’t ask me why it will be in the Development Guide; maybe they didn’t find another suitable location.

So the solution is very simple: you just have to add a small piece to the D2-REST web.xml file:

[weblogic@wsd2rest-0 ~]$ cd $APPLICATIONS
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ jar -xvf D2-REST.war WEB-INF/web.xml
 inflated: WEB-INF/web.xml
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ cat WEB-INF/web.xml
<?xml version="1.0" encoding="UTF-8"?>

<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
         http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0"
         metadata-complete="true">

  <display-name>D2-REST</display-name>
  <description>D2-REST</description>
  <error-page>
    <error-code>404</error-code>
    <location>/errors/redirect/404</location>
  </error-page>
  <error-page>
    <error-code>500</error-code>
    <location>/errors/redirect/500</location>
  </error-page>
</web-app>
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ sed -i 's,</web-app>,  <listener>\n&,' WEB-INF/web.xml
[weblogic@wsd2rest-0 dbi]$ sed -i 's,</web-app>,    <listener-class>com.emc.d2.rest.context.WebAppContextListener</listener-class>\n&,' WEB-INF/web.xml
[weblogic@wsd2rest-0 dbi]$ sed -i 's,</web-app>,  </listener>\n&,' WEB-INF/web.xml
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ cat WEB-INF/web.xml
<?xml version="1.0" encoding="UTF-8"?>

<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
         http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0"
         metadata-complete="true">

  <display-name>D2-REST</display-name>
  <description>D2-REST</description>
  <error-page>
    <error-code>404</error-code>
    <location>/errors/redirect/404</location>
  </error-page>
  <error-page>
    <error-code>500</error-code>
    <location>/errors/redirect/500</location>
  </error-page>
  <listener>
    <listener-class>com.emc.d2.rest.context.WebAppContextListener</listener-class>
  </listener>
</web-app>
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ jar -uvf D2-REST.war WEB-INF/web.xml
adding: WEB-INF/web.xml(in = 753) (out= 326)(deflated 56%)
[weblogic@wsd2rest-0 dbi]$
[weblogic@wsd2rest-0 dbi]$ rm -rf WEB-INF/
[weblogic@wsd2rest-0 dbi]$

 

As you can see above, it’s all about adding a new listener for the “WebAppContextListener” class into the web.xml file. This class, based on its name, has absolutely nothing to do with the error shown above, and yet adding this listener solves the cast issue. So just redeploy/update your application in WebLogic and that’s it, the issue should be gone.
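Once the updated war file is redeployed, the product-info page used earlier to reproduce the problem makes a quick sanity check (same hypothetical load balancer URL as before):

[weblogic@wsd2rest-0 dbi]$ curl -s -k https://lb_url/D2-REST/product-info | python -mjson.tool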

 

This article, Documentum – Cast trouble with D2-REST 16.5.x on WebLogic, appeared first on Blog dbi services.

Documentum – MigrationUtil – 5 – Change Installation Owner

Thu, 2019-12-12 12:59

The Documentum installation owner is the operating system user that owns the server executable and other related files, along with the OS process when the server is running. It is originally determined when the server is installed; in fact, it is the logged-in user that performed the Documentum installation. Of course, it is preferable to install Documentum and never change the installation owner. However, sometimes company policies change and dictate that the original installation owner must be changed, for example because the user name does not conform to a new naming policy.

This blog is the last one of the MigrationUtil series (please find the links to the other blogs below). I will show how to change the installation owner using the MigrationUtil. If you want to change only the password, please read this blog.

The installation owner user is important both at the operating system level and at the Docbase/Content Server level; it is given the following privileges:

  • Operating System:
    – Rights to start Documentum Services such as Docbase, Docbroker, Java Method Server and other installed Documentum products.
    – Permission to change the Content Server configuration (i.e. upgrade, create, and delete docbases).
    – Folder level permission to view data, configuration, and many log files located under the DOCUMENTUM_HOME directory.
  • Docbase and Content Server:
    – Superuser and System Administrator rights.
    – Set as the r_install_owner value in the dm_server_config object.
    – Set as the operating system user to run several Administrative jobs.

As you can deduce, changing the installation owner is not a minor change within Documentum; it is very critical. That’s why you have to prepare this operation very well and determine the best approach to execute it.

Two levels of change have to be done:

  • OS Level change:
    – Create the new install owner at the operating system level; it should correspond to the user_os_name of the new docbase user.
  • Docbase level change:
    – Create a new user in the docbase to be the installation owner and reassign the previous installation owner’s objects to the new user. The MigrationUtil takes care of this part.
Preparation Before any change
  • Clean the environment:
    Run the Consistency Checker job: The report gives you a list of bad data within your system. Cleaning up inconsistent data before making the change will speed up the process.
    Purge all old log files: Changing the installation owner requires updating permissions on Documentum data and log files. The purge will reduce work on “unneeded data” and will greatly speed up the process.
  • Back up:
    – Back up the whole impacted environment (Content Server files and the database) before performing any major change within Documentum.
Create new user – OS Level

Add the new installation user at the OS level, in the same group as the current installation user:

[root@vmtestdctm01 ~]# useradd -g 1004 dmdocbase1
[root@vmtestdctm01 ~]# passwd dmdocbase1
Changing password for user dmdocbase1.
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@vmtestdctm01 ~]# 

To find the group of the current installation user:

[root@vmtestdctm01 ~]# cat /etc/passwd
...
dmadmin:x:1002:1004::/home/dmadmin:/bin/bash
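Alternatively, the id command returns the same information without browsing /etc/passwd:

[root@vmtestdctm01 ~]# id dmadmin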
Create new user – Docbase Level

You need to create the user in all docbases, using the DQL query below:

CREATE dm_user OBJECT
SET user_name = 'dmdocbase1',
SET user_password = 'install164',
SET user_login_name = 'dmdocbase1',
SET user_address = 'dmdocbase1@dbi-services.com',
SET description = 'This User is the owner of docbase1',
SET user_os_name = 'dmdocbase1',
SET client_capability = 8,
SET user_privileges = 16,
SET user_xprivileges = 56,
SET user_source = 'inline password'

Result:

object_created  
----------------
1101e24080000500
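A quick DQL query can confirm that the user carries the expected attributes (the same ones set in the CREATE statement above):

select user_name, user_os_name, user_privileges, user_xprivileges from dm_user where user_name = 'dmdocbase1'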
Configure the MigrationUtil

Adapt the MigrationUtil configuration file as shown below:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/config.xml 
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">docbase1</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>

...

<entry key="ChangeInstallOwner">yes</entry>
<entry key="InstallOwner">dmadmin</entry>
<entry key="NewInstallOwner">dmdocbase1</entry>
<entry key="NewInstallOwnerPassword">install164</entry>
...
Migration
Stop Docbase(s) and Docbroker(s)

Before you execute the migration you have to stop the docbase(s) and the docbroker(s).

$DOCUMENTUM/dba/dm_shutdown_Docbase1
$DOCUMENTUM/dba/dm_stop_DocBroker
Execute the migration script

Once everything is stopped, you can execute the migration script:

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Skipping Docbase ID Changes...

Skipping Host Name Change...

Changing Install Owner...
Created new log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/InstallOwnerChange.log
Finished changing Install Owner...Please check log file for more details/errors
Finished changing Install Owner...

Skipping Server Name Change...

Skipping Docbase Name Change...

Skipping Docker Seamless Upgrade scenario...

Migration Utility completed.

Have a look at the migration log:

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/InstallOwnerChange.log
Start: 2019-12-12 05:14:37.191
Changing Install Owner
=====================
InstallOwner: dmadmin
New InstallOwner: dmdocbase1
Changing InstallOwner for docbase: Docbase1
Retrieving server.ini path for docbase: Docbase1
Found path: /app/dctm/product/16.4/dba/config/Docbase1/server.ini

Database Details:
Database Vendor:oracle
Database Name:dctmdb.local
Databse User:Docbase1
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/dctmdb.local
Successfully connected to database....

Processing Database Changes for docbase: Docbase1
Created database backup File '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/InstallOwnerChange_Docbase1_DatabaseRestore.sql'
Processing _s table...
select r_object_id,object_name from dm_sysobject_s where object_name = 'dmadmin'
update dm_sysobject_s set object_name = 'dmdocbase1' where r_object_id = '0c01e24080000105'
select r_object_id,r_install_owner from dm_server_config_s where r_install_owner = 'dmadmin'
update dm_server_config_s set r_install_owner = 'dmdocbase1' where r_object_id = '3d01e24080000102'
select r_object_id,user_name from dm_user_s where user_name = 'dmadmin'
update dm_user_s set user_name = 'dmdocbase1' where r_object_id = '1101e24080000102'
select r_object_id,user_os_name from dm_user_s where user_os_name = 'dmadmin'
update dm_user_s set user_os_name = 'dmdocbase1' where r_object_id = '1101e24080000102'
...
update dm_workflow_r set r_last_performer = 'dmdocbase1' where r_last_performer = 'dmadmin'
update dm_workflow_s set r_creator_name = 'dmdocbase1' where r_creator_name = 'dmadmin'
update dm_workflow_s set supervisor_name = 'dmdocbase1' where supervisor_name = 'dmadmin'
Successfully updated database values...
Committing all database operations...
Finished processing database changes for docbase: Docbase1

Processing server.ini changes for docbase: Docbase1
Backed up '/app/dctm/product/16.4/dba/config/Docbase1/server.ini' to '/app/dctm/product/16.4/dba/config/Docbase1/server.ini_install_dmadmin.backup'
Updated server.ini file:/app/dctm/product/16.4/dba/config/Docbase1/server.ini
Updating acs.properties for docbase: Docbase1
Backed up '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties' to '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties_install_dmadmin.backup'
Updated acs.properties: /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties
Backed up '/app/dctm/product/16.4/dba/dm_shutdown_Docbase1' to '/app/dctm/product/16.4/dba/dm_shutdown_Docbase1_install_dmadmin.backup'
Updated shutdown script: /app/dctm/product/16.4/dba/dm_shutdown_Docbase1
...
Processing Services File Changes...
Backed up '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/DmMethods.war/WEB-INF/web.xml' to '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/DmMethods.war/WEB-INF/web.xml_install_dmadmin.backup'
Updated web.xml: /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/DmMethods.war/WEB-INF/web.xml
WARNING...File /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/bpm.ear/bpm.war/WEB-INF/web.xml doesn't exist
No need to update method server startup script: /app/dctm/product/16.4/wildfly9.0.1/server/startMethodServer.sh
Finished processing File changes...

Finished changing Install Owner...
End: 2019-12-12 05:14:39.815
Change permissions

Change the permissions of all folders and files under the DOCUMENTUM_HOME directory. If your content storage directories are not located under DOCUMENTUM_HOME, change the permissions on each content storage directory as well.

[root@vmtestdctm01 ~]$ chown -R dmdocbase1 /app/dctm
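To double-check that no file kept the old ownership (for instance content storage located on another filesystem), a find command can list any leftovers; an empty result means the chown covered everything (assuming the same installation path):

[root@vmtestdctm01 ~]$ find /app/dctm ! -user dmdocbase1 -ls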
Start Docbase(s) and Docbroker(s)

Start the Docbroker and the docbase:

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_Docbase1
Post Migration

Check the docbase logs:

...
2019-12-12T05:30:09.982774      13301[13301]    0000000000000000        [DM_MQ_I_DAEMON_START]info:  "Message queue daemon (pid : 13570, session 0101e24080000456) is started sucessfully."
2019-12-12T05:30:20.255917      13569[13569]    0101e24080000003        [DM_DOCBROKER_I_PROJECTING]info:  "Sending information to Docbroker located on host (vmtestdctm01) with port (1490).  Information: (Config(Docbase1), Proximity(1), Status(Open), Dormancy Status(Active))."
Wed Dec 12 05:30:32 2019 [INFORMATION] [AGENTEXEC 13628] Detected during program initialization: Version: 16.4.0000.0248  Linux64
Wed Dec 12 05:30:35 2019 [INFORMATION] [AGENTEXEC 13628] Detected during program initialization: Agent Exec connected to server Docbase1:  [DM_SESSION_I_SESSION_START]info:  "Session 0101e24080000500 started for user dmdocbase1."

Try to connect to Docbase1 with the old installation owner through idql:

...
Connecting to Server using docbase Docbase1
Could not connect
[DM_SESSION_E_AUTH_FAIL]error:  "Authentication failed for user dmadmin with docbase Docbase1."

This is the expected behavior: the old installation owner is no longer active.
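Conversely, a connection with the new installation owner should now succeed; a sketch using the idql syntax with the inline password defined earlier:

idql Docbase1 -Udmdocbase1 -Pinstall164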

The environment I used for this test is a very simple one (only one docbase, no HA, no full-text, and so on), created only for this purpose. It worked fine in my environment, but be careful if you have a more complex one!

This article, Documentum – MigrationUtil – 5 – Change Installation Owner, appeared first on Blog dbi services.

SQL Server – Collecting last backup information in an AlwaysOn environment

Thu, 2019-12-12 09:02
Introduction

Sometimes you face interesting challenges with unusual environments. One of my customers needed an automated and flexible backup solution. Said like that, nothing very complex, you will say. But if I mention that some databases were 60TB big, with more than 30 filegroups and around 600 data files each, and moreover synchronized in an AlwaysOn availability group, it is not the same story anymore, and you can easily imagine that a standard backup strategy would not be viable. Therefore I worked on implementing a solution using partial full, partial differential and read-only filegroup backups to minimize the time needed.
Well, this post does not explain the whole solution, but only a way to collect the last backup information of my databases, especially for the ones located in an AlwaysOn availability group and whose filegroup states changed.
If you have already worked with partial backups and read-only filegroup backups, you know that the backup sequence is very important; if you haven’t, you will quickly notice it when you need to restore, and you can easily understand why this last backup information is crucial. As the backups always have to run on the primary replica, you have to collect the information on all replicas in case a failover occurred and the primary changed, to ensure that you execute the right backups at the right moment and do not make unnecessary backups (remember the data volumes).
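For reference, here is a minimal T-SQL sketch of the backup types this strategy relies on (database, filegroup and path names are placeholders): partial full and partial differential backups cover only the read-write filegroups, while each read-only filegroup is backed up once after being switched to read-only:

BACKUP DATABASE [BigDb] READ_WRITE_FILEGROUPS TO DISK = N'R:\backup\BigDb_partial_full.bak';
BACKUP DATABASE [BigDb] READ_WRITE_FILEGROUPS TO DISK = N'R:\backup\BigDb_partial_diff.bak' WITH DIFFERENTIAL;
BACKUP DATABASE [BigDb] FILEGROUP = N'FG_RO_2018' TO DISK = N'R:\backup\BigDb_FG_RO_2018.bak';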

 

Explanation of the solution and code

Another thing to mention: because of security policies, it was forbidden to use linked servers, but fortunately xp_CmdShell was possible. I wanted each replica to work independently, and needed a way to query the remote replicas to collect the last backup information on each SQL Server instance involved. Because the backup history might be cleaned, I need to store this information in local tables. I created 2 tables, one to store the last database backup information and the other to store the last read-only filegroup backup information. Additionally I created 2 tables to temporarily collect the information coming from all replicas.

Creation of the last backup information tables:

--########################################################
--###Backup generator - backup last date info temporary table
--########################################################

USE [<YourDatabaseName>]
GO
/*
if OBJECT_ID('[dbo].[bakgen_backuplastdt_databases_temp]') is not null
	drop table [dbo].[bakgen_backuplastdt_databases_temp]
*/
create table [dbo].[bakgen_backuplastdt_databases_temp] (
	ServerName sysname not null,
	SqlInstanceName sysname  not null,
	SqlServerName sysname  not null,
	ServiceBrokerGuid uniqueidentifier not null,
	DatabaseCreationDate datetime  not null,
	DatabaseName sysname  not null,
	BackupType char(1) not null,
	LastBackupDate datetime  not null,
	LastBackupSize numeric(20,0) not null,
	is_primary bit null,
	insertdate datetime  not null
)
GO
create unique clustered index idx_bakgen_backuplastdt_databases_temp on [dbo].[bakgen_backuplastdt_databases_temp](DatabaseCreationDate,DatabaseName,BackupType,ServerName,SqlInstanceName)



--########################################################
--###Backup generator - backup last date info
--########################################################

USE [<YourDatabaseName>]
GO
/*
if OBJECT_ID('[dbo].[bakgen_backuplastdt_databases]') is not null
	drop table [dbo].[bakgen_backuplastdt_databases]
*/
create table [dbo].[bakgen_backuplastdt_databases] (
	ServerName sysname  not null,
	SqlInstanceName sysname  not null,
	SqlServerName sysname  not null,
	ServiceBrokerGuid uniqueidentifier not null,
	DatabaseCreationDate datetime  not null,
	DatabaseName sysname  not null,
	BackupType char(1) not null,
	LastBackupDate datetime  not null,
	LastBackupSize numeric(20,0) not null,
	is_primary bit null,
	insertdate datetime  not null
)
GO
create unique clustered index idx_bakgen_backuplastdt_databases on [dbo].[bakgen_backuplastdt_databases](DatabaseCreationDate,DatabaseName,BackupType,ServerName,SqlInstanceName)

I finally decided to work with a stored procedure calling PowerShell scripts to remotely execute the queries on the replicas.
The stored procedure lists the existing replicas and collects the last database backup information, then the read-only filegroup backup information, creating 2 different queries to execute locally on the server and storing the data in the temp tables first. It then creates similar queries, excluding the databases not involved in availability groups, and executes them on the remote replicas via xp_CmdShell running PowerShell scripts. The PowerShell scripts are dynamically created from the generated T-SQL queries. They use one function of the well-known dbatools module, so you will have to install it first.
You will notice that, for logging purposes, the generated scripts are nicely formatted so that they are easier to read and debug. But before executing your PowerShell script through xp_CmdShell, you need to apply some string formatting, like the 2 lines I added to avoid the execution failing:

set @PSCmd = replace(replace(@PSCmd, nchar(13), N''), nchar(10), N' ')
set @PSCmd = replace(@PSCmd, '>', N'^>')

Do not forget to escape some characters, otherwise the execution will fail; in my case, omitting to escape the '>' sign raised an “Access is denied” message in the output of the xp_CmdShell execution.
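To give an idea of the shape of such a remote call, here is a minimal, purely illustrative sketch; Invoke-DbaQuery is assumed as the dbatools function used (the instance name and query are placeholders):

exec xp_CmdShell 'powershell.exe -NoProfile -Command "Invoke-DbaQuery -SqlInstance srv2 -Query ''select @@servername''"'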

After that, the code compares what has been collected in the temp tables with the final information and updates it if needed.

Here is the complete code of the stored procedure:

use [<YourDatabaseName>]
if OBJECT_ID('dbo.bakgen_p_getbakinfo') is not null
            drop procedure dbo.bakgen_p_getbakinfo 
go

CREATE PROCEDURE dbo.bakgen_p_getbakinfo 
AS
/************************************
*   dbi-services SA, Switzerland    *
*   http://www.dbi-services.com        *
*************************************
    Group/Privileges..: DBA
    Script Name......:       bakgen_p_getbakinfo.sql
    Author...........:          Christophe Cosme
    Date.............:           2019-09-20
    Version..........:          SQL Server 2016 / 2017
    Description......:        Get the backup information locally but also on the replica involved

    Input parameters.: 

            Output parameter: 
                                               
    Called by........:         Stored Procedure : [dbo].[bakgen_p_bakexe]
************************************************************************************************
    Historical
    Date        Version    Who    Whats                  Comments
    ----------  -------    ---    --------    -----------------------------------------------------
    2019-09-30  1.0        CHC    Creation
************************************************************************************************/ 
BEGIN 

BEGIN TRY
            
            set nocount on

            declare 
    @ErrorMessage  NVARCHAR(4000), 
    @ErrorSeverity INT, 
    @ErrorState    INT;

            declare @ModuleName sysname,
                                    @ProcName sysname,
                                    @InfoLog nvarchar(max),
                                    @Execute char(1)
                        
            set @ModuleName = 'BakGen'
            set @ProcName = OBJECT_NAME(@@PROCID)
            set @Execute = 'A'

            set @InfoLog = 'Retrieve backup information'
            execute dbo.bakgen_p_log       
                        @ModuleName = @ModuleName,
                        @ProcedureName = @ProcName,
                        @ExecuteMode = @Execute,
                        @LogType = 'INFO',
                        @DatabaseName = null,
                        @Information = @InfoLog,
                        @Script = null


            --###variable to store error message
            declare @errmsg varchar(4000)
            --###variable with the current datetime
            declare @cdt datetime = getdate()

            --###variabler to store the sql and powershell commands to execute
            declare @sqllocalDB nvarchar(4000),
                                    @sqllocalFG nvarchar(4000),
                                    @sqlremoteDB nvarchar(4000),
                                    @sqlremoteFG nvarchar(4000),
                                    @PSCmd nvarchar(4000)

            --###variable to store the local SQL server name
            declare @LocalSqlServerName sysname
            --###variable to store the list of replicas
            declare @TAgReplica table (AgReplicaName sysname)
            --###variable for the cursors
            declare @AgReplicaName sysname

            --###set the local SQL Server name
            set @LocalSqlServerName = lower(convert(sysname,serverproperty('ServerName')))
                        

            --############################################################################
            --### check if tables exist
            --############################################################################
            if object_id('[dbo].[bakgen_backuplastdt_databases_temp]') is null
            begin
                        set @errmsg = 'Get Backup info : table not found'
                        set @errmsg += '          table name = [dbo].[bakgen_backuplastdt_databases_temp]' 
                        raiserror (@errmsg,11,1);
            end
            if object_id('[dbo].[bakgen_backuplastdt_fgreadonly_temp]') is null
            begin
                        set @errmsg = 'Get Backup info : table not found'
                        set @errmsg += '          table name = [dbo].[bakgen_backuplastdt_fgreadonly_temp]' 
                        raiserror (@errmsg,11,1);                      
            end

            if object_id('[dbo].[bakgen_backuplastdt_databases]') is null
            begin
                        set @errmsg = 'Get Backup info : table not found'
                        set @errmsg += '          table name = [dbo].[bakgen_backuplastdt_databases]' 
                        raiserror (@errmsg,11,1);
            end
            if object_id('[dbo].[bakgen_backuplastdt_fgreadonly]') is null
            begin
                        set @errmsg = 'Get Backup info : table not found'
                        set @errmsg += '          table name = [dbo].[bakgen_backuplastdt_fgreadonly]' 
                        raiserror (@errmsg,11,1);                      
            end


            
            --############################################################################
            --### select the replicas involved adding first the local server
            --############################################################################
            insert into @TAgReplica (AgReplicaName ) select @LocalSqlServerName

            --###check if alwayson feature is activated
            if (serverproperty('IsHadrEnabled') = 1)
            begin
                        insert into @TAgReplica (AgReplicaName )
                        select lower(agr.replica_server_name) from sys.availability_replicas agr
                                    where agr.replica_server_name <> @LocalSqlServerName
            end


            --############################################################################
            --### construct the SQL command to execute on the local SQL Server
            --############################################################################
            set @sqllocalDB = ''
            set @sqllocalDB +='

            declare @Tbi table (
                        ServerName sysname,
                        SqlInstanceName sysname,
                        SqlServerName sysname,
                        ServiceBrokerGuid uniqueidentifier,
                        DatabaseCreationDate datetime,
                        DatabaseName sysname,
                        BackupType char(1),
                        LastBackupDate datetime,
                        is_primary bit null,
                        insertdate datetime       
            )


            insert into @Tbi (
                        [ServerName],
                        [SqlInstanceName],
                        [SqlServerName],
                        [ServiceBrokerGuid],
                        [DatabaseCreationDate],
                        [DatabaseName],
                        [BackupType],
                        [LastBackupDate],
                        [is_primary],
                        [insertdate])
            select  
                        lower(convert(sysname,serverproperty(''machinename''))) as ServerName,
                        lower(convert(sysname,serverproperty(''InstanceName''))) as SqlInstanceName,
                        lower(convert(sysname,serverproperty(''ServerName''))) as SqlServerName,
                        db.service_broker_guid as ServiceBrokerGuid,
                        db.create_date as DatabaseCreationDate,
                        bs.database_name as DatabaseName,
                        bs.type as BackupType,
                        max(bs.backup_finish_date) as LastBackupDate,
                        sys.fn_hadr_is_primary_replica(bs.database_name) as is_primary,
                        ''' + convert(varchar,@cdt,120) + '''   
            from msdb.dbo.backupset bs
                        inner join sys.databases db on db.name = bs.database_name
                        where bs.type in (''D'',''I'',''P'',''Q'')
                                    and bs.is_copy_only = 0
                                    and coalesce(sys.fn_hadr_is_primary_replica(bs.database_name),-1) in (-1,0,1)
                        group by
                                    db.service_broker_guid,
                                    db.create_date,
                                    bs.database_name,
                                    bs.type, 
                                    sys.fn_hadr_is_primary_replica(bs.database_name)

            insert into [dbo].[bakgen_backuplastdt_databases_temp] (
                        [ServerName],
                        [SqlInstanceName],
                        [SqlServerName],
                        [ServiceBrokerGuid],
                        [DatabaseCreationDate],
                        [DatabaseName],
                        [BackupType],
                        [LastBackupDate],
                        [LastBackupSize],
                        [is_primary],
                        [insertdate])
            select  
                        t.[ServerName],
                        t.[SqlInstanceName],
                        t.[SqlServerName],
                        t.[ServiceBrokerGuid],
                        t.[DatabaseCreationDate],
                        t.[DatabaseName],
                        t.[BackupType],
                        t.[LastBackupDate],
                        bs.[backup_size],
                        t.[is_primary],
                        t.[insertdate]
            from @Tbi t
                        inner join msdb.dbo.backupset bs on 
                                    bs.backup_finish_date = t.LastBackupDate  
                                    and bs.database_name collate database_default = t.DatabaseName collate database_default
                                    and bs.type collate database_default = t.BackupType collate database_default
'




            set @sqllocalFG = ''
            set @sqllocalFG +='

            insert into [dbo].[bakgen_backuplastdt_fgreadonly_temp]
           ([ServerName],
           [SqlInstanceName],
           [SqlServerName],
                           [ServiceBrokerGuid],
                           [DatabaseCreationDate],
           [DatabaseName],
           [BackupType],
           [filegroup_name],
           [file_logicalname],
           [filegroup_guid],
           [file_guid],
           [LastBackupDate],
                           [LastBackupReadOnlyLsn],
           [is_primary],
                           [insertdate])
            select  
                        lower(convert(sysname,serverproperty(''machinename''))) as ServerName,
                        lower(convert(sysname,serverproperty(''InstanceName''))) as SqlInstanceName,
                        lower(convert(sysname,serverproperty(''ServerName''))) as SqlServerName,
                        db.service_broker_guid as ServiceBrokerGuid,
                        db.create_date as DatabaseCreationDate,
                        bs.database_name as DatabaseName,
                        bs.type as BackupType,
                        bf.filegroup_name,
                        bf.logical_name as file_logicalname,
                        bf.filegroup_guid,
                        bf.file_guid,
                        max(bs.backup_finish_date) as LastBackupDate,
                        max(bf.read_only_lsn) as LastBackupReadOnlyLsn,
                        sys.fn_hadr_is_primary_replica(bs.database_name) as is_primary, 
                        ''' + convert(varchar,@cdt,120) + '''   
            from msdb.dbo.backupset bs
                                    inner join msdb.dbo.backupfile bf on  bf.backup_set_id = bs.backup_set_id
                                    inner join sys.databases db on db.name = bs.database_name 
                        where 
                                    bs.backup_finish_date >= db.create_date 
                                    and bs.type in (''F'')
                                    and bs.is_copy_only = 0
                                    and coalesce(sys.fn_hadr_is_primary_replica(bs.database_name),-1) in (-1,0,1)
                                    and bf.is_present = 1
                                    and bf.is_readonly = 1
                                    and bf.file_type = ''D''
                        group by
                                    db.service_broker_guid,
                                    db.create_date,
                                    bs.database_name, 
                                    bs.type,
                                    bf.filegroup_name,
                                    bf.logical_name, 
                                    bf.filegroup_guid,
                                    bf.file_guid,
                                    sys.fn_hadr_is_primary_replica(bs.database_name)
'


            
            --############################################################################
            --### construct the SQL command to execute on the remote SQL Server
            --############################################################################
            set @sqlremoteDB = ''
            set @sqlremoteDB +='

            declare @Tbi table (
                        ServerName sysname,
                        SqlInstanceName sysname,
                        SqlServerName sysname,
                        ServiceBrokerGuid uniqueidentifier,
                        DatabaseCreationDate datetime, 
                        DatabaseName sysname,
                        BackupType char(1),
                        LastBackupDate datetime,
                        is_primary bit null,
                        insertdate datetime       
            )

            insert into @Tbi (
                        [ServerName],
                        [SqlInstanceName],
                        [SqlServerName],
                        [ServiceBrokerGuid],
                        [DatabaseCreationDate],
                        [DatabaseName],
                        [BackupType],
                        [LastBackupDate],
                        [is_primary],
                        [insertdate])
            select  
                        lower(convert(sysname,serverproperty(''machinename''))) as ServerName,
                        lower(convert(sysname,serverproperty(''InstanceName''))) as SqlInstanceName,
                        lower(convert(sysname,serverproperty(''ServerName''))) as SqlServerName,
                        db.service_broker_guid as ServiceBrokerGuid,
                        db.create_date as DatabaseCreationDate,
                        bs.database_name as DatabaseName,
                        bs.type as BackupType,
                        max(bs.backup_finish_date) as LastBackupDate,
                        sys.fn_hadr_is_primary_replica(bs.database_name) as is_primary, 
                        ''' + convert(varchar,@cdt,120) + '''     
            from msdb.dbo.backupset bs
                        inner join sys.databases db on db.name = bs.database_name 
                        where bs.type in (''D'',''I'',''P'',''Q'')
                                    and bs.is_copy_only = 0
                                    and coalesce(sys.fn_hadr_is_primary_replica(bs.database_name),-1) in (0,1)
                        group by
                                    db.service_broker_guid,
                                    db.create_date,
                                    bs.database_name,
                                    bs.type,
                                    sys.fn_hadr_is_primary_replica(bs.database_name) 

            select  
                        t.[ServerName],
                        t.[SqlInstanceName],
                        t.[SqlServerName],
                        t.[ServiceBrokerGuid],
                        t.[DatabaseCreationDate],
                        t.[DatabaseName],
                        t.[BackupType],
                        t.[LastBackupDate],
                        bs.[backup_size],
                        t.[is_primary],
                        t.[insertdate]
            from @Tbi t
                        inner join msdb.dbo.backupset bs on 
                                    bs.backup_finish_date = t.LastBackupDate 
                                    and bs.database_name collate database_default = t.DatabaseName collate database_default
                                    and bs.type collate database_default = t.BackupType collate database_default

'

            set @sqlremoteFG = ''
            set @sqlremoteFG +='

            select  
                        lower(convert(sysname,serverproperty(''machinename''))) as ServerName,
                        lower(convert(sysname,serverproperty(''InstanceName''))) as SqlInstanceName,
                        lower(convert(sysname,serverproperty(''ServerName''))) as SqlServerName,
                        db.service_broker_guid as ServiceBrokerGuid,
                        db.create_date as DatabaseCreationDate,
                        bs.database_name as DatabaseName,
                        bs.type as BackupType,
                        bf.filegroup_name,
                        bf.logical_name as file_logicalname,
                        bf.filegroup_guid,
                        bf.file_guid,
                        max(bs.backup_finish_date) as LastBackupDate,
                        max(bf.read_only_lsn) as LastBackupReadOnlyLsn,
                        sys.fn_hadr_is_primary_replica(bs.database_name) as is_primary, 
                        ''' + convert(varchar,@cdt,120) + '''   
            from msdb.dbo.backupset bs
                                    inner join msdb.dbo.backupfile bf on  bf.backup_set_id = bs.backup_set_id
                                    inner join sys.databases db on db.name = bs.database_name 
                        where 
                                    bs.backup_finish_date >= db.create_date 
                                    and bs.type in (''F'')
                                    and bs.is_copy_only = 0
                                    and coalesce(sys.fn_hadr_is_primary_replica(bs.database_name),-1) in (0,1)
                                    and bf.is_present = 1
                                    and bf.is_readonly = 1
                                    and bf.file_type = ''D''
                        group by
                                    db.service_broker_guid,
                                    db.create_date, 
                                    bs.database_name, 
                                    bs.type,
                                    bf.filegroup_name,
                                    bf.logical_name, 
                                    bf.filegroup_guid,
                                    bf.file_guid,
                                    sys.fn_hadr_is_primary_replica(bs.database_name) 
'

            --############################################################################
            --### delete all records in the backup info tables
            --############################################################################
            delete from [dbo].[bakgen_backuplastdt_databases_temp]
            delete from [dbo].[bakgen_backuplastdt_fgreadonly_temp]

            --############################################################################
            --### loop for all replicas involved
            --############################################################################
            declare cur_replica cursor
            static local forward_only
            for 
                        select AgReplicaName
                        from @TAgReplica
                 
            open cur_replica
            fetch next from cur_replica into 
                        @AgReplicaName                    


            while @@fetch_status = 0
            begin 
                                    
                        if @LocalSqlServerName = @AgReplicaName
                        begin 

                                    set @InfoLog = 'Get database backup information on local SQL Server instance ' + QUOTENAME(@AgReplicaName)
                                    execute dbo.bakgen_p_log       
                                               @ModuleName = @ModuleName,
                                               @ProcedureName = @ProcName,
                                                @ExecuteMode = @Execute,
                                               @LogType = 'INFO',
                                               @DatabaseName = null,
                                               @Information = @InfoLog,
                                               @Script = @sqllocalDB
                                    execute sp_executesql @sqllocalDB

                                    set @InfoLog = 'Get read-only filegroup backup information on local SQL Server instance ' + QUOTENAME(@AgReplicaName)
                                    execute dbo.bakgen_p_log       
                                               @ModuleName = @ModuleName,
                                               @ProcedureName = @ProcName,
                                               @ExecuteMode = @Execute,
                                               @LogType = 'INFO',
                                               @DatabaseName = null,
                                               @Information = @InfoLog,
                                               @Script = @sqllocalFG
                                    execute sp_executesql @sqllocalFG

                        end 
                        else
                        begin
                                    --############################################################################
                                    --### construct the PowerShell command to execute on the remote SQL Server
                                    --############################################################################
                                    set @PSCmd  = ''
                                    set @PSCmd += 'PowerShell.exe '
                                    set @PSCmd += '-Command "'
                                    set @PSCmd += '$qrydb = \"' + @sqlremoteDB + '\"; ' 
                                    set @PSCmd += '$qryfg = \"' + @sqlremoteFG + '\"; ' 
                                    set @PSCmd += '$rdb = Invoke-DbaQuery -SqlInstance ' + @AgReplicaName + ' -Query $qrydb; '
                                    set @PSCmd += '$rfg = Invoke-DbaQuery -SqlInstance ' + @AgReplicaName + ' -Query $qryfg; '
                                    set @PSCmd += 'if ($rdb -ne $null) { '
                                    set @PSCmd += 'Write-DbaDbTableData -SqlInstance ' + @LocalSqlServerName + ' -Database ' + db_name() + ' -Schema dbo -Table bakgen_backuplastdt_databases_temp -InputObject $rdb;'
                                    set @PSCmd += '} '
                                    set @PSCmd += 'if ($rfg -ne $null) { '
                                    set @PSCmd += 'Write-DbaDbTableData -SqlInstance ' + @LocalSqlServerName + ' -Database ' + db_name() + ' -Schema dbo -Table bakgen_backuplastdt_fgreadonly_temp -InputObject $rfg;'
                                    set @PSCmd += '} '
                                    set @PSCmd += '"'

                                    set @InfoLog = 'Get backup information on replica SQL Server instance ' + QUOTENAME(@AgReplicaName) + ' executing master..xp_cmdshell PowerShell script'
                                    execute dbo.bakgen_p_log       
                                               @ModuleName = @ModuleName,
                                               @ProcedureName = @ProcName,
                                               @ExecuteMode = @Execute,
                                               @LogType = 'INFO',
                                               @DatabaseName = null,
                                               @Information = @InfoLog,
                                               @Script = @PSCmd

                                    --###remove CRLF for xp_cmdshell and PowerShell 
                                    set @PSCmd = replace(replace(@PSCmd, nchar(13), N''), nchar(10), N' ')
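                                    --###escape '>' so that cmd.exe does not interpret it as output redirection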
                                    set @PSCmd = replace(@PSCmd, '>', N'^>')
                                    --###Execute the powershell command on the replica and store the result in the temporary tables
                                    exec master..xp_cmdshell @PSCmd
                        end
                        
                        fetch next from cur_replica into 
                                    @AgReplicaName                    


            end
            close cur_replica
            deallocate cur_replica


            --############################################################################
            --### Update and insert backup information in final tables
            --############################################################################

            --###Update first the database creation date with the local ones
            Update t
                        set t.DatabaseCreationDate = db.create_date
            from [dbo].[bakgen_backuplastdt_databases_temp] t
                        inner join sys.databases db 
                                    on db.name collate database_default = t.DatabaseName collate database_default 
                                               and db.service_broker_guid = t.ServiceBrokerGuid

            Update t
                        set t.DatabaseCreationDate = db.create_date
            from [dbo].[bakgen_backuplastdt_fgreadonly_temp] t
                        inner join sys.databases db 
                                    on db.name collate database_default = t.DatabaseName collate database_default 
                                               and db.service_broker_guid = t.ServiceBrokerGuid




            BEGIN TRY

                        begin transaction 

                        delete f
                                    from [dbo].[bakgen_backuplastdt_databases_temp] t
                                               inner join [dbo].[bakgen_backuplastdt_databases] f 
                                                           on f.DatabaseCreationDate = t.DatabaseCreationDate
                                                                       and f.DatabaseName = t.DatabaseName 
                                                                       and f.BackupType = t.BackupType 
                                                                       and f.ServerName = t.ServerName 
                                                                       and t.SqlInstanceName = f.SqlInstanceName
                                               where f.LastBackupDate < t.LastBackupDate

                        Insert into [dbo].[bakgen_backuplastdt_databases] (
                                    ServerName,
                                    SqlInstanceName,
                                    SqlServerName,
                                    DatabaseCreationDate,
                                    DatabaseName,
                                    BackupType,
                                    LastBackupDate,
                                    LastBackupSize,
                                    is_primary,
                                    insertdate 
                        )
                        select 
                                    t.ServerName,
                                    t.SqlInstanceName,
                                    t.SqlServerName,
                                    t.DatabaseCreationDate,
                                    t.DatabaseName,
                                    t.BackupType,
                                    t.LastBackupDate,
                                    t.LastBackupSize,
                                    t.is_primary,
                                    t.insertdate 
                                    from [dbo].[bakgen_backuplastdt_databases_temp] t
                                               where not exists (select 1 from [dbo].[bakgen_backuplastdt_databases] f 
                                                                                                                      where f.DatabaseName = t.DatabaseName 
                                                                                                                                  and f.BackupType = t.BackupType 
                                                                                                                                  and f.ServerName = t.ServerName 
                                                                                                                                  and t.SqlInstanceName = f.SqlInstanceName)
                                    
                        
                        commit

                        begin transaction

                        delete f
                                    from [dbo].[bakgen_backuplastdt_fgreadonly_temp] t
                                               inner join [dbo].[bakgen_backuplastdt_fgreadonly] f 
                                                           on f.DatabaseName = t.DatabaseName 
                                                                       and f.BackupType = t.BackupType 
                                                                       and f.filegroup_name = t.filegroup_name
                                                                       and f.ServerName = t.ServerName 
                                                                       and f.SqlInstanceName = t.SqlInstanceName
                                               where f.LastBackupDate < t.LastBackupDate


                        Insert into [dbo].[bakgen_backuplastdt_fgreadonly] (
                                    ServerName,     
                                    SqlInstanceName,
                                    SqlServerName,           
                                    DatabaseCreationDate,
                                    DatabaseName,            
                                    BackupType,
                                    filegroup_name,
                                    file_logicalname,          
                                    filegroup_guid, 
                                    file_guid,          
                                    LastBackupDate,          
                                    LastBackupReadOnlyLsn,
                                    is_primary,
                                    insertdate                     
                        )
                        select 
                                    t.ServerName,   
                                    t.SqlInstanceName,
                                    t.SqlServerName,
                                    t.DatabaseCreationDate,
                                    t.DatabaseName,          
                                    t.BackupType,
                                    t.filegroup_name,
                                    t.file_logicalname,        
                                    t.filegroup_guid,           
                                    t.file_guid,        
                                    t.LastBackupDate,        
                                    t.LastBackupReadOnlyLsn,
                                    t.is_primary,
                                    t.insertdate                   
                        from [dbo].[bakgen_backuplastdt_fgreadonly_temp] t                                        
                                    where not exists (
                                               select 1 from  [dbo].[bakgen_backuplastdt_fgreadonly] f 
                                               where f.DatabaseName = t.DatabaseName 
                                                                       and f.BackupType = t.BackupType 
                                                                       and f.filegroup_name = t.filegroup_name
                                                                       and f.ServerName = t.ServerName 
                                                                       and t.SqlInstanceName = f.SqlInstanceName)

                        
                        commit
            END TRY
            BEGIN CATCH
                SELECT 
                                    @ErrorMessage = ERROR_MESSAGE(), 
                                    @ErrorSeverity = ERROR_SEVERITY(), 
                                    @ErrorState = ERROR_STATE();

                        IF @@TRANCOUNT > 0
                                    ROLLBACK
                        
                        raiserror(@ErrorMessage, @ErrorSeverity, @ErrorState);

            END CATCH



RETURN;

END TRY
BEGIN CATCH
    SELECT 
        @ErrorMessage = ERROR_MESSAGE(), 
        @ErrorSeverity = ERROR_SEVERITY(), 
        @ErrorState = ERROR_STATE();

            set @InfoLog = '@ErrorState = ' + convert(nvarchar, @ErrorState) + '/@ErrorSeverity = ' + convert(nvarchar, @ErrorSeverity) + '/@ErrorMessage = ' + @ErrorMessage
            execute dbo.bakgen_p_log       
                        @ModuleName = @ModuleName,
                        @ProcedureName = @ProcName,
                        @ExecuteMode = @Execute,
                        @LogType = 'ERROR',
                        @DatabaseName = null,
                        @Information = @InfoLog,
                        @Script = null

    raiserror(@ErrorMessage, @ErrorSeverity, @ErrorState);
END CATCH;

RETURN
END
Other Objects needed

As mentioned above, I used the dbatools Write-DbaDbTableData function, so the module needs to be installed before you can run the above stored procedure.
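
If the dbatools module is not present yet, installing it from the PowerShell Gallery is usually a one-liner. A minimal sketch, assuming internet access and an elevated PowerShell session (offline installation is also possible, see the dbatools documentation):

# Run on every instance that will execute the stored procedure.
# -Scope AllUsers makes the module visible to the account xp_cmdshell runs under.
Install-Module -Name dbatools -Scope AllUsers
# Quick check that the two functions used by the procedure are available:
Get-Command Invoke-DbaQuery, Write-DbaDbTableData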

I also share the two other objects used in the above stored procedure, but of course you can adapt the code to your needs.

Creation of the log table:

--########################################################
--###Backup generator - logs
--########################################################

USE [<YourDatabaseName>]
GO
/*
if OBJECT_ID('[dbo].[bakgen_logs]') is not null
	drop table [dbo].[bakgen_logs]
*/
create table [dbo].[bakgen_logs] (
	id bigint identity(1,1) not null,
	LogDate datetime,
	SqlServerName sysname,
	ModuleName sysname,
	ProcedureName sysname,
	ExecuteMode char(1),
	LogType nvarchar(50),
	DatabaseName sysname null,
	Information nvarchar(max) null,
	Scripts nvarchar(max) null,
CONSTRAINT [PK_bakgen_logs] PRIMARY KEY CLUSTERED 
(
	[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
)
GO

Creation of the stored procedure to write the logs:

use [<YourDatabaseName>]
if OBJECT_ID('dbo.bakgen_p_log') is not null
	drop procedure dbo.bakgen_p_log 
go

CREATE PROCEDURE dbo.bakgen_p_log 
(
	@ModuleName sysname,
	@ProcedureName sysname,
	@ExecuteMode char(1),
	@LogType nvarchar(50),
	@DatabaseName sysname = null,
	@Information nvarchar(max) =  null,
	@Script nvarchar(max)  = null
)

AS
/************************************
*   dbi-services SA, Switzerland    *
*   http://www.dbi-services.com        *
*************************************
    Group/Privileges..: DBA
    Script Name......:	bakgen_p_log.sql
    Author...........:	Christophe Cosme
    Date.............:	2019-09-20
    Version..........:	SQL Server 2016 / 2017
    Description......:	write information to the log table to keep trace of the step executed

    Input parameters.: 

	Output parameter: 
				
************************************************************************************************
    Historical
    Date        Version    Who    Whats		Comments
    ----------  -------    ---    --------	-----------------------------------------------------
    2019-10-14  1.0        CHC    Creation
************************************************************************************************/ 
BEGIN 

BEGIN TRY
	
	--###variable to store error message
	declare @errmsg varchar(4000)

	if OBJECT_ID('[dbo].[bakgen_logs]') is null
	begin
		set @errmsg = 'bakgen_p_log : table not found - be sure the table exists'
		set @errmsg += '	table name = [dbo].[bakgen_logs]' 
		raiserror (@errmsg,11,1);
	end		

	insert into [dbo].[bakgen_logs] (
		LogDate,
		SqlServerName,
		ModuleName,
		ProcedureName,
		ExecuteMode,
		LogType,
		DatabaseName,
		Information,
		Scripts
		)
	values(
		getdate(),
		convert(sysname,SERVERPROPERTY('servername')),
		@ModuleName,
		@ProcedureName,
		@ExecuteMode,
		@LogType,
		@DatabaseName,
		@Information,
		@Script
		)


RETURN;

END TRY
BEGIN CATCH
	declare 
    @ErrorMessage  NVARCHAR(4000), 
    @ErrorSeverity INT, 
    @ErrorState    INT;
    SELECT 
        @ErrorMessage = ERROR_MESSAGE(), 
        @ErrorSeverity = ERROR_SEVERITY(), 
        @ErrorState = ERROR_STATE();
 
    -- return the error inside the CATCH block
    raiserror(@ErrorMessage, @ErrorSeverity, @ErrorState);
END CATCH;

RETURN
END
Conclusion

Triggering PowerShell from a stored procedure did the trick for my special case and is very practical. But finding the right syntax to make the script run through xp_cmdshell was not trivial; I admit I spent quite some time figuring out what was causing the issues.
In the end, I definitely enjoyed the solution for retrieving information outside the local SQL Server instance.


Upgrade your Power BI Report Server

Wed, 2019-12-11 10:08
Introduction

Even if upgrading your Power BI Report Server is straightforward, I have been asked so many times where to find the installation files and how to run the upgrade that I thought it was worth a blog.

Before you start
Before upgrading your Power BI Report Server, it is recommended to perform some backup steps:
  • Back up the encryption keys
  • Back up the report server databases
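
For the report server databases, one way to script the backup is for example the dbatools PowerShell module (a sketch, not the official procedure; ReportServer and ReportServerTempDB are the default database names, the instance name and backup path are placeholders):

# Hedged sketch: full backup of the two report server databases with dbatools.
Backup-DbaDatabase -SqlInstance localhost `
    -Database ReportServer, ReportServerTempDB `
    -Path 'D:\Backup\PBIRS' -Type Full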

Back up the configuration files (in the default installation location folders)

C:\Program Files\Microsoft Power BI Report Server\PBIRS\ReportServer

  • Rsreportserver.config
  • Rssrvpolicy.config
  • Web.config

C:\Program Files\Microsoft Power BI Report Server\PBIRS\ReportServer\bin

  • Reportingservicesservice.exe.config

C:\Program Files\Microsoft Power BI Report Server\PBIRS\RSHostingService

  • config.json
  • RSHostingService.exe.config
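
A small PowerShell sketch to copy all of the configuration files listed above into a dated backup folder (default installation paths are assumed; the destination path is a placeholder):

# Hedged sketch: back up the PBIRS configuration files before the upgrade.
$pbirs = 'C:\Program Files\Microsoft Power BI Report Server\PBIRS'
$dest  = "D:\Backup\PBIRS-config-$(Get-Date -Format 'yyyyMMdd')"   # placeholder path
New-Item -ItemType Directory -Path $dest -Force | Out-Null
Copy-Item -Path @(
    "$pbirs\ReportServer\rsreportserver.config",
    "$pbirs\ReportServer\rssrvpolicy.config",
    "$pbirs\ReportServer\web.config",
    "$pbirs\ReportServer\bin\ReportingServicesService.exe.config",
    "$pbirs\RSHostingService\config.json",
    "$pbirs\RSHostingService\RSHostingService.exe.config"
) -Destination $dest
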
Download the latest version

Most of the time, when searching for the latest Power BI Report Server version to download, you will land on this site: https://powerbi.microsoft.com/en-us/report-server/. As I was asked several times, I find it useful to mention that you have to select “Advanced download option” to reach the actual download site.

Notice the Power BI version available, then click Download.

Select the install files you want. I always advise selecting both the server and the desktop version at the same time, and distributing the desktop version to your report developers to avoid surprises later when publishing reports on the portal.

Upgrade the report server

Execute the PowerBIReportServer.exe you downloaded and click on “Upgrade Power BI Report Server”. Accept the license terms and click “Upgrade”.

When the upgrade is completed, you can close the application.

Check if the version has been installed correctly either using the Power BI Report Server configuration manager…

…or directly within the web portal.

Be aware that if you do not see the new version installed, restart the upgrade process; you will probably be requested to restart your computer first.


Documentum – LDAP Config Object “certdb_location” has invalid value

Sun, 2019-12-08 02:00

In a previous blog, I talked about the automation of the LDAP/LDAPs creation. However, the first time I applied these steps, I actually faced an issue and couldn’t really get my head around it at first. This will be a rather short post, but I still wanted to share my thoughts because it might save you some headache. The issue is only linked to the SSL part of the setup, so there is no problem with basic non-secure LDAP communication.

So, after applying all the steps, everything went fine and I therefore tried to run the dm_LDAPSynchronization job to validate the setup. Doing so, the generated log file wasn’t so great:

[dmadmin@content-server-0 ~]$ cat $DOCUMENTUM/dba/log/repo01/sysadmin/LDAPSynchronizationDoc.txt
LDAPSynchronization Report For DocBase repo01 As Of 2019/09/22 12:45:57

2019-09-22 12:45:56:124 UTC [default task-79]: LDAP Synchronization Started @ Sun Sep 22 12:45:56 UTC 2019
2019-09-22 12:45:56:124 UTC [default task-79]:
2019-09-22 12:45:56:124 UTC [default task-79]: $JMS_HOME/server/DctmServer_MethodServer/deployments/ServerApps.ear/lib/dmldap.jar
2019-09-22 12:45:56:125 UTC [default task-79]: ---------------------------------------------------------------------------------
2019-09-22 12:45:56:125 UTC [default task-79]: Product-Name : Content Server-LDAPSync
2019-09-22 12:45:56:125 UTC [default task-79]: Product-Version : 16.4.0110.0167
2019-09-22 12:45:56:125 UTC [default task-79]: Implementation-Version : 16.4.0110.0167
2019-09-22 12:45:56:125 UTC [default task-79]: ---------------------------------------------------------------------------------
2019-09-22 12:45:56:125 UTC [default task-79]:
2019-09-22 12:45:56:126 UTC [default task-79]: Preparing LDAP Synchronization...
2019-09-22 12:45:57:101 UTC [default task-79]: INFO: Job Status: [LDAP Synchronization Started @ Sun Sep 22 12:45:56 UTC 2019]
2019-09-22 12:45:57:120 UTC [default task-79]: INFO: Job Status updated
2019-09-22 12:45:58:415 UTC [default task-79]: INFO: List of Ldap Configs chosen for Synchronization
2019-09-22 12:45:58:415 UTC [default task-79]: INFO:    >>>0812d6878000252c - Internal_LDAP<<<
2019-09-22 12:45:58:415 UTC [default task-79]: INFO:
2019-09-22 12:45:58:418 UTC [default task-79]:
2019-09-22 12:45:58:418 UTC [default task-79]: ==================================================================================
2019-09-22 12:45:58:420 UTC [default task-79]: Starting Sychronization for ldap config object >>>Internal_LDAP<<< ...
2019-09-22 12:45:58:425 UTC [default task-79]: Unexpected Error. Caused by: [DM_LDAP_SYNC_E_EXCEPTION_ERROR]error:  "Ldap Config Property "certdb_location" has invalid value "ldap_chain"."
2019-09-22 12:45:58:426 UTC [default task-79]: ERROR: DmLdapException:: THREAD: default task-79; MSG: [DM_LDAP_SYNC_E_EXCEPTION_ERROR]error:  "Ldap Config Property "certdb_location" has invalid value "ldap_chain"."; ERRORCODE: 100; NEXT: null
        at com.documentum.ldap.internal.sync.SynchronizationContextBuilder.getCertDbLocation(SynchronizationContextBuilder.java:859)
        at com.documentum.ldap.internal.sync.SynchronizationContextBuilder.setCertDbLocation(SynchronizationContextBuilder.java:225)
        at com.documentum.ldap.internal.sync.SynchronizationContextBuilder.buildSynchronizationContext(SynchronizationContextBuilder.java:49)
        at com.documentum.ldap.LDAPSync.prepareSync(LDAPSync.java:438)
        at com.documentum.ldap.LDAPSync.processJob(LDAPSync.java:238)
        at com.documentum.ldap.LDAPSync.execute(LDAPSync.java:80)
        at com.documentum.mthdservlet.DfMethodRunner.runIt(Unknown Source)
        at com.documentum.mthdservlet.AMethodRunner.runAndReturnStatus(Unknown Source)
        at com.documentum.mthdservlet.DoMethod.invokeMethod(Unknown Source)
        at com.documentum.mthdservlet.DoMethod.doPost(Unknown Source)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:86)
        at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62)
        at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36)
        at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131)
        at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
        at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
        at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:58)
        at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:72)
        at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
        at io.undertow.security.handlers.SecurityInitialHandler.handleRequest(SecurityInitialHandler.java:76)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
        at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:282)
        at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:261)
        at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:80)
        at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:172)
        at io.undertow.server.Connectors.executeRootHandler(Connectors.java:199)
        at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:774)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

2019-09-22 12:45:58:426 UTC [default task-79]: WARNING:   **** Skipping Ldap Config Object - Internal_LDAP ****
2019-09-22 12:45:58:775 UTC [default task-79]: Synchronization of ldap config object >>>Internal_LDAP<<< is finished
2019-09-22 12:45:58:775 UTC [default task-79]: ==================================================================================
2019-09-22 12:45:58:775 UTC [default task-79]:
2019-09-22 12:45:58:786 UTC [default task-79]: INFO: Job Status: [dm_LDAPSynchronization Tool had ERRORS at 2019/09/22 12:45:58. Total duration was 2 seconds.View the job's report for details.]
2019-09-22 12:45:58:800 UTC [default task-79]: INFO: Job Status updated
2019-09-22 12:45:58:800 UTC [default task-79]: LDAP Synchronization Ended @ Sun Sep 22 12:45:58 UTC 2019
2019-09-22 12:45:58:800 UTC [default task-79]: Session s2 released successfully
Report End  2019/09/22 12:45:58
[dmadmin@content-server-0 ~]$

 

After a bunch of checks inside the repository, everything seemed fine: all the objects had the correct content, the correct references, and so on. However, there was one thing that wasn’t exactly as per the KB6321243, and that was the extension of the Trust Chain file. If you look at the basics of SSL Certificate encodings, there are two main possibilities: DER (binary = not readable) or PEM (ASCII = readable). In addition to that, you can also have files with CRT or CER extensions, but they are always either DER or PEM encoded. The KB asks you to have a PEM encoded SSL Certificate, so this file can technically have a “.pem”, “.cer” or “.crt” extension; these are almost synonymous. Therefore, here I was, thinking that I could keep my “.crt” extension for the PEM encoded SSL Trust Chain.
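
As a side note, openssl can tell you which encoding a certificate file actually uses, and convert it if needed. A quick sketch using the file name from this setup:

[dmadmin@content-server-0 ~]$ # If this prints the subject, the file is PEM encoded:
[dmadmin@content-server-0 ~]$ openssl x509 -in ldap_chain.crt -noout -subject
[dmadmin@content-server-0 ~]$ # If it errors out, the file is most likely DER encoded - convert it to PEM:
[dmadmin@content-server-0 ~]$ openssl x509 -in ldap_chain.crt -inform DER -out ldap_chain.pem -outform PEM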

To validate that this was the issue, I switched my file to the “.pem” extension and updated the “dm_location” Object:

[dmadmin@content-server-0 ~]$ cd $DOCUMENTUM/dba/secure/ldapdb
[dmadmin@content-server-0 ldapdb]$ mv ldap_chain.crt ldap_chain.pem
[dmadmin@content-server-0 ldapdb]$ 
[dmadmin@content-server-0 ldapdb]$ iapi repo01 -U${USER} -Pxxx


        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0110.0058

Connecting to Server using docbase repo01
[DM_SESSION_I_SESSION_START]info:  "Session 0112d68780001402 started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0110.0167  Linux64.Oracle
Session id is s0
API> retrieve,c,dm_location where object_name='ldap_chain'
...
3a12d68780002522
API> get,c,l,file_system_path
...
$DOCUMENTUM/dba/secure/ldapdb/ldap_chain.cer
API> set,c,l,file_system_path
SET> $DOCUMENTUM/dba/secure/ldapdb/ldap_chain.pem
...
OK
API> get,c,l,file_system_path
...
$DOCUMENTUM/dba/secure/ldapdb/ldap_chain.pem
API> save,c,l
...
OK
API> ?,c,UPDATE dm_job OBJECTS set run_now=true, set a_next_invocation=DATE(NOW) WHERE object_name='dm_LDAPSynchronization'
objects_updated
---------------
              1
(1 row affected)
[DM_QUERY_I_NUM_UPDATE]info:  "1 objects were affected by your UPDATE statement."

API> exit
Bye
[dmadmin@content-server-0 ldapdb]$

 

With the above, I just changed the extension of the file on the file system and its reference in the “dm_location” Object. The last iAPI command triggered a run of the dm_LDAPSynchronization job. Checking the new log file, the issue was indeed solved, which confirmed that having a PEM encoded Trust Chain wasn’t enough: there is a hardcoded check inside Documentum which forces you to use the “.pem” extension and nothing else:

[dmadmin@content-server-0 ldapdb]$ cat $DOCUMENTUM/dba/log/repo01/sysadmin/LDAPSynchronizationDoc.txt
LDAPSynchronization Report For DocBase repo01 As Of 2019/09/22 13:19:33

2019-09-22 13:19:30:360 UTC [default task-87]: LDAP Synchronization Started @ Sun Sep 22 13:19:30 UTC 2019
2019-09-22 13:19:30:360 UTC [default task-87]:
2019-09-22 13:19:30:361 UTC [default task-87]: $JMS_HOME/server/DctmServer_MethodServer/deployments/ServerApps.ear/lib/dmldap.jar
2019-09-22 13:19:30:367 UTC [default task-87]: ---------------------------------------------------------------------------------
2019-09-22 13:19:30:367 UTC [default task-87]: Product-Name : Content Server-LDAPSync
2019-09-22 13:19:30:367 UTC [default task-87]: Product-Version : 16.4.0110.0167
2019-09-22 13:19:30:367 UTC [default task-87]: Implementation-Version : 16.4.0110.0167
2019-09-22 13:19:30:367 UTC [default task-87]: ---------------------------------------------------------------------------------
2019-09-22 13:19:30:367 UTC [default task-87]:
2019-09-22 13:19:30:370 UTC [default task-87]: Preparing LDAP Synchronization...
2019-09-22 13:19:32:425 UTC [default task-87]: INFO: Job Status: [LDAP Synchronization Started @ Sun Sep 22 13:19:30 UTC 2019]
2019-09-22 13:19:32:453 UTC [default task-87]: INFO: Job Status updated
2019-09-22 13:19:34:292 UTC [default task-87]: INFO: List of Ldap Configs chosen for Synchronization
2019-09-22 13:19:34:292 UTC [default task-87]: INFO:    >>>0812d6878000252c - Internal_LDAP<<<
2019-09-22 13:19:34:292 UTC [default task-87]: INFO:
2019-09-22 13:19:34:294 UTC [default task-87]:
2019-09-22 13:19:34:294 UTC [default task-87]: ==================================================================================
2019-09-22 13:19:34:297 UTC [default task-87]: Starting Sychronization for ldap config object >>>Internal_LDAP<<< ...
2019-09-22 13:19:35:512 UTC [default task-87]: INFO: Directory Type: Sun ONE Directory Server ...
2019-09-22 13:19:35:517 UTC [default task-87]: INFO: Ldap Connection: SSL connection
2019-09-22 13:19:35:517 UTC [default task-87]: INFO: ldap://ldap.domain.com:636
2019-09-22 13:19:35:517 UTC [default task-87]: INFO: {java.naming.provider.url=ldap://ldap.domain.com:636, java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory}
2019-09-22 13:19:35:597 UTC [Thread-91752]: INFO: DM_LDAP_IGNORE_HOSTNAME_CHECK environment variable is enabled.
2019-09-22 13:19:35:598 UTC [Thread-91752]: INFO: Skipping hostname check
2019-09-22 13:19:35:598 UTC [Thread-91752]: INFO: DctmTrustMangaer.checkServerTrusted(): Successfully validated the certificate chain sent from server.
2019-09-22 13:19:35:635 UTC [default task-87]: INFO: DM_LDAP_IGNORE_HOSTNAME_CHECK environment variable is enabled.
2019-09-22 13:19:35:635 UTC [default task-87]: INFO: Skipping hostname check
2019-09-22 13:19:35:635 UTC [default task-87]: INFO: DctmTrustMangaer.checkServerTrusted(): Successfully validated the certificate chain sent from server.
2019-09-22 13:19:35:663 UTC [default task-87]: INFO: LDAP Search Retry: is_child_context = true
2019-09-22 13:19:35:663 UTC [default task-87]: INFO: LDAP Search Retry: Retry count = 1
2019-09-22 13:19:35:665 UTC [default task-87]: Starting the group synchronization...
...
2019-09-22 13:19:35:683 UTC [default task-87]: Group synchronization finished.
2019-09-22 13:19:35:683 UTC [default task-87]:
2019-09-22 13:19:35:683 UTC [default task-87]: INFO: Updating Last Run Time: [20190922131935Z]
2019-09-22 13:19:35:683 UTC [default task-87]: INFO: Updating Last Change No: [null]
2019-09-22 13:19:35:749 UTC [default task-87]: INFO: Ldap Config Object >>>>Internal_LDAP<<<< updated
2019-09-22 13:19:35:751 UTC [default task-87]: Disconnected from LDAP Server successfully.
2019-09-22 13:19:36:250 UTC [default task-87]: Synchronization of ldap config object >>>Internal_LDAP<<< is finished
2019-09-22 13:19:36:250 UTC [default task-87]: ==================================================================================
2019-09-22 13:19:36:250 UTC [default task-87]:
2019-09-22 13:19:36:265 UTC [default task-87]: INFO: Job Status: [dm_LDAPSynchronization Tool Completed with WARNINGS at 2019/09/22 13:19:36. Total duration was 6 seconds.]
2019-09-22 13:19:36:278 UTC [default task-87]: INFO: Job Status updated
2019-09-22 13:19:36:278 UTC [default task-87]: LDAP Synchronization Ended @ Sun Sep 22 13:19:36 UTC 2019
2019-09-22 13:19:36:278 UTC [default task-87]: Session s2 released successfully
Report End  2019/09/22 13:19:36
[dmadmin@content-server-0 ldapdb]$

 

A pretty annoying design, but there is nothing you can do about it. Fortunately, it’s not hard to fix the issue once you know what the problem is!

 


Documentum – Automatic/Silent creation of LDAP/LDAPs Server Config Objects

Sun, 2019-12-08 02:00

If you have been working with Documentum, then you probably already created/configured an LDAP/LDAPs Server Config Object (or several) so that your users can be globally managed in your organization. There are several compatible LDAP Servers, so I will just take one (Sun One/Netscape/iPlanet Directory Server). To create this LDAP/LDAPs Server Config Object, you probably used Documentum Administrator because it’s simple and quick to set up; however, that’s not enough for automation. In this blog, I will show and explain the steps needed to configure the same but without any need for DA.

The problem with DA is that it usually does some magic and you cannot always do exactly the same without it. Here, this also applies but to a smaller extent since it is only the SSL part (LDAPs) that needs specific steps. For this, there is a KB created by EMC some years ago (migrated to OpenText): KB6321243.

Before starting, let’s setup some parameters that will be used in this blog:

[dmadmin@content-server-0 ~]$ repo="repo01"
[dmadmin@content-server-0 ~]$ dm_location_name="ldap_chain"
[dmadmin@content-server-0 ~]$ file_path="$DOCUMENTUM/dba/secure/ldapdb/${dm_location_name}.pem"
[dmadmin@content-server-0 ~]$ ldap_server_name="Internal_LDAP"
[dmadmin@content-server-0 ~]$ ldap_host="ldap.domain.com"
[dmadmin@content-server-0 ~]$ ldap_ssl=1 #0 for LDAP, 1 for LDAPs
[dmadmin@content-server-0 ~]$ ldap_port=636
[dmadmin@content-server-0 ~]$ location=`if ((${ldap_ssl} == 1)); then echo ${dm_location_name}; else echo "ldapcertdb_loc"; fi`
[dmadmin@content-server-0 ~]$ ldap_principal="ou=APP,ou=applications,ou=intranet,dc=dbi services,dc=com"
[dmadmin@content-server-0 ~]$ ldap_pwd="T3stP4ssw0rd"
[dmadmin@content-server-0 ~]$ ldap_user_filter="objectclass=person"
[dmadmin@content-server-0 ~]$ ldap_user_class="person"
[dmadmin@content-server-0 ~]$ ldap_group_filter="objectclass=groupofuniquenames"
[dmadmin@content-server-0 ~]$ ldap_group_class="groupofuniquenames"

 

1. Preparation steps for LDAPs

The steps in this section are only needed in case you want to configure SSL communications between your LDAP Server and Documentum. It can be done upfront without any issues. So let’s start with setting up the environment. Without DA, the only way to import/trust an SSL Certificate for the LDAPs connection is to add an environment variable named “DM_LDAP_CERT_FILE” and set it to “1”. This will allow Documentum to use certificate files for the trust chain instead of doing what DA is doing (the magic part), which we cannot replicate.

It is a little bit out of scope for this blog, but a second variable is often needed, “DM_LDAP_IGNORE_HOSTNAME_CHECK”, which drives the validation of the hostname. Setting this to “1” will disable the hostname check and therefore allow you to use an LDAP Server that is behind a Proxy or a Load Balancer. This would also be needed with an LDAP (non-secure).

[dmadmin@content-server-0 ~]$ grep DM_LDAP ~/.bash_profile
[dmadmin@content-server-0 ~]$ echo $DM_LDAP_CERT_FILE -- $DM_LDAP_IGNORE_HOSTNAME_CHECK
--
[dmadmin@content-server-0 ~]$
[dmadmin@content-server-0 ~]$ echo "export DM_LDAP_CERT_FILE=1" >> ~/.bash_profile
[dmadmin@content-server-0 ~]$ echo "export DM_LDAP_IGNORE_HOSTNAME_CHECK=1" >> ~/.bash_profile
[dmadmin@content-server-0 ~]$
[dmadmin@content-server-0 ~]$ grep DM_LDAP ~/.bash_profile
export DM_LDAP_CERT_FILE=1
export DM_LDAP_IGNORE_HOSTNAME_CHECK=1
[dmadmin@content-server-0 ~]$
[dmadmin@content-server-0 ~]$ source ~/.bash_profile
[dmadmin@content-server-0 ~]$ echo $DM_LDAP_CERT_FILE -- $DM_LDAP_IGNORE_HOSTNAME_CHECK
1 -- 1
[dmadmin@content-server-0 ~]$

 

For the variables to take effect, you will need to restart the Repositories. I usually set everything up (LDAPs specific pieces + LDAP steps) and only then restart the repositories so it’s done once at the very end of the setup.

The next step is then to create/prepare the Trust Chain. In DA, you can import the Trust Chain one certificate at a time, the Root first and then the Intermediate one. While using “DM_LDAP_CERT_FILE=1” (so without DA), you can unfortunately use only one file per LDAP and therefore this file will need to contain the full Trust Chain. To do that, simply put the content of both the Root and the Intermediate SSL Certificates in one file, one after the other. In the end, your file should contain something like this:

[dmadmin@content-server-0 ~]$ vi ${dm_location_name}.pem
[dmadmin@content-server-0 ~]$ cat ${dm_location_name}.pem
-----BEGIN CERTIFICATE-----
<<<content_of_root_ca>>>
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
<<<content_of_intermediate_ca>>>
-----END CERTIFICATE-----
[dmadmin@content-server-0 ~]$
[dmadmin@content-server-0 ~]$ mv ${dm_location_name}.pem ${file_path}
[dmadmin@content-server-0 ~]$

 

Once you have the file, you can put it wherever you want with the name that you want, but it absolutely needs to have a “.pem” extension. You can check this blog, which explains what happens if this isn’t the case and what needs to be done to fix it. As you can see above, I chose to put the file where DA is putting them as well.
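
A quick sanity check of the resulting file never hurts. A small sketch; for a Root + Intermediate chain, the certificate count should be 2:

[dmadmin@content-server-0 ~]$ # Count the certificates contained in the chain file:
[dmadmin@content-server-0 ~]$ grep -c "BEGIN CERTIFICATE" ${file_path}
2
[dmadmin@content-server-0 ~]$ # Print subject/issuer of the first certificate in the file:
[dmadmin@content-server-0 ~]$ openssl x509 -in ${file_path} -noout -subject -issuer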

The last step for this SSL specific part is then to create a “dm_location” Object that references the file that has been created, so that the LDAP Server Config Object can use it and trust the target LDAP Server. Contrary to the LDAP Certificate Database Management in DA, which is global to all Repositories (so it needs to be done only once), here you will need to create the “dm_location” Object in all the Repositories that are going to use the LDAP Server. This can be done very easily via iAPI:

[dmadmin@content-server-0 ~]$ iapi ${repo} -U${USER} -Pxxx << EOF
create,c,dm_location
set,c,l,object_name
${dm_location_name}
set,c,l,path_type
file
set,c,l,file_system_path
${file_path}
save,c,l
exit
EOF


        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0110.0058

Connecting to Server using docbase repo01
[DM_SESSION_I_SESSION_START]info:  "Session 0112d6878000111b started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0110.0167  Linux64.Oracle
Session id is s0
API> ...
3a12d68780002522
API> SET> ...
OK
API> SET> ...
OK
API> SET> ...
OK
API> ...
OK
API> Bye
[dmadmin@content-server-0 ~]$

 

The name of the “dm_location” Object doesn’t have to be the same as the name of the Trust Chain file. I’m just using the same here so it’s simpler to see the relation between both. These are the only steps that are specific to SSL communications between your LDAP Server and Documentum.

 

2. Global steps for LDAP

This section applies to all cases: whether you are setting up an LDAP or an LDAPs Server, you will need to create the “dm_ldap_config” Object and everything else described below. As mentioned previously, I’m using one type of LDAP Server for this example (value of “dm_ldap_config.a_application_type”). If you aren’t very familiar with the settings inside the Repository, the simplest way to find out which parameters you need (and the associated values) is to create one LDAP Config Object using DA. Once done, just dump it and you can re-use that same configuration in the future.
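
Dumping such a reference object is quick in iAPI. A sketch using the variables defined at the beginning of this blog:

[dmadmin@content-server-0 ~]$ iapi ${repo} -U${USER} -Pxxx << EOF
retrieve,c,dm_ldap_config where object_name='${ldap_server_name}'
dump,c,l
exit
EOF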

So let’s start with creating a sample LDAP Server Config Object:

[dmadmin@content-server-0 ~]$ iapi ${repo} -U${USER} -Pxxx << EOF
create,c,dm_ldap_config
set,c,l,object_name
${ldap_server_name}
set,c,l,map_attr[0]
user_name
set,c,l,map_attr[1]
user_login_name
set,c,l,map_attr[2]
user_address
set,c,l,map_attr[3]
group_name
set,c,l,map_attr[4]
client_capability
set,c,l,map_attr[5]
user_xprivileges
set,c,l,map_attr[6]
default_folder
set,c,l,map_attr[7]
workflow_disabled
set,c,l,map_val[0]
uniqueDisplayName
set,c,l,map_val[1]
uid
set,c,l,map_val[2]
mail
set,c,l,map_val[3]
cn
set,c,l,map_val[4]
2
set,c,l,map_val[5]
32
set,c,l,map_val[6]
/Home/${uniqueDisplayName}
set,c,l,map_val[7]
false
set,c,l,map_attr_type[0]
dm_user
set,c,l,map_attr_type[1]
dm_user
set,c,l,map_attr_type[2]
dm_user
set,c,l,map_attr_type[3]
dm_group
set,c,l,map_attr_type[4]
dm_user
set,c,l,map_attr_type[5]
dm_user
set,c,l,map_attr_type[6]
dm_user
set,c,l,map_attr_type[7]
dm_user
set,c,l,map_val_type[0]
A
set,c,l,map_val_type[1]
A
set,c,l,map_val_type[2]
A
set,c,l,map_val_type[3]
A
set,c,l,map_val_type[4]
V
set,c,l,map_val_type[5]
V
set,c,l,map_val_type[6]
E
set,c,l,map_val_type[7]
V
set,c,l,ldap_host
${ldap_host}
set,c,l,port_number
${ldap_port}
set,c,l,person_obj_class
${ldap_user_class}
set,c,l,group_obj_class
${ldap_group_class}
set,c,l,per_search_base
${ldap_principal}
set,c,l,grp_search_base
${ldap_principal}
set,c,l,per_search_filter
${ldap_user_filter}
set,c,l,grp_search_filter
${ldap_group_filter}
set,c,l,bind_dn
${ldap_principal}
set,c,l,user_subtype
dm_user
set,c,l,deactivate_user_option
T
set,c,l,import_mode
groups
set,c,l,bind_type
bind_by_dn
set,c,l,ssl_mode
${ldap_ssl}
set,c,l,ssl_port
${ldap_port}
set,c,l,certdb_location
${location}
set,c,l,map_rejection[0]
2
set,c,l,map_rejection[1]
2
set,c,l,map_rejection[2]
0
set,c,l,map_rejection[3]
2
set,c,l,map_rejection[4]
0
set,c,l,map_rejection[5]
0
set,c,l,map_rejection[6]
2
set,c,l,map_rejection[7]
0
set,c,l,retry_count
3
set,c,l,retry_interval
3
set,c,l,failover_use_interval
300
set,c,l,r_is_public
F
set,c,l,a_application_type
netscape
set,c,l,a_full_text
T
save,c,l
exit
EOF


        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0110.0058

Connecting to Server using docbase repo01
[DM_SESSION_I_SESSION_START]info:  "Session 0112d68780001123 started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0110.0167  Linux64.Oracle
Session id is s0
API> ...
0812d6878000252c
API> SET> ...
OK
...
...
...
[dmadmin@content-server-0 ~]$

 

Once the LDAP Server Config Object has been created, you can register it in the “dm_server_config” Objects. In our silent scripts, we use the r_object_id of the object just created so that we are sure it is the correct value, but below, for simplicity, I’m using a select to retrieve the r_object_id based on the LDAP Object Name (so make sure it’s unique if you use the following):

[dmadmin@content-server-0 ~]$ iapi ${repo} -U${USER} -Pxxx << EOF
?,c,update dm_server_config object set ldap_config_id=(select r_object_id from dm_ldap_config where object_name='${ldap_server_name}')
exit
EOF


        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0110.0058

Connecting to Server using docbase repo01
[DM_SESSION_I_SESSION_START]info:  "Session 0112d6878000112f started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0110.0167  Linux64.Oracle
Session id is s0
API> objects_updated
---------------
              1
(1 row affected)
[DM_QUERY_I_NUM_UPDATE]info:  "1 objects were affected by your UPDATE statement."

API> Bye
[dmadmin@content-server-0 ~]$

 

Then, it is time to encrypt the password of the LDAP Account that is used for the “bind_dn” (${ldap_principal} above):

[dmadmin@content-server-0 ~]$ crypto_docbase=`grep ^dfc.crypto $DOCUMENTUM_SHARED/config/dfc.properties | tail -1 | sed 's,.*=[[:space:]]*,,'`
[dmadmin@content-server-0 ~]$ 
[dmadmin@content-server-0 ~]$ iapi ${crypto_docbase} -U${USER} -Pxxx << EOF
encrypttext,c,${ldap_pwd}
exit
EOF


        OpenText Documentum iapi - Interactive API interface
        Copyright (c) 2018. OpenText Corporation
        All rights reserved.
        Client Library Release 16.4.0110.0058

Connecting to Server using docbase gr_repo
[DM_SESSION_I_SESSION_START]info:  "Session 0112d68880001135 started for user dmadmin."

Connected to OpenText Documentum Server running Release 16.4.0110.0167  Linux64.Oracle
Session id is s0
API> ...
DM_ENCR_TEXT_V2=AAAAEHQfx8vF52wIC1Lg8KoxAflW/I7ZnbHwEDJCciKx/thqFZxAvIFNtpsBl6JSGmI4XKYCCuUl/NMY7BTsCa2GeIdUebL2LYfA/nJivzuikqOt::gr_repo
API> Bye
[dmadmin@content-server-0 ~]$

 

Finally, the only thing left is to create the file “$DOCUMENTUM/dba/config/${repo}/ldap_${dm_ldap_config_id}.cnt” and put the encrypted password in it (the whole line “DM_ENCR_TEXT_V2=…::gr_repo”). As mentioned previously, after a small restart of the Repository, you should then be able to run your dm_LDAPSynchronization job. You might want to configure the job with some specific properties, but that’s up to you.
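
The file creation itself is then a simple echo. A sketch: the r_object_id is the one returned when the “dm_ldap_config” Object was created above, and the placeholder stands for the full DM_ENCR_TEXT_V2 line returned by encrypttext:

[dmadmin@content-server-0 ~]$ # r_object_id of the dm_ldap_config object created earlier:
[dmadmin@content-server-0 ~]$ dm_ldap_config_id="0812d6878000252c"
[dmadmin@content-server-0 ~]$ echo 'DM_ENCR_TEXT_V2=<encrypted_password>::gr_repo' > $DOCUMENTUM/dba/config/${repo}/ldap_${dm_ldap_config_id}.cnt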

With all the commands above, you already have a very good basis to automate the creation/setup of your LDAP/LDAPs Server without issue. In our automation, instead of printing the result of the iAPI commands to the console, we usually write it to a log file. With that, we can automatically retrieve the result of the previous commands and continue the execution based on the outcome, so there is no need for any human interaction (a sketch of that pattern is shown below). In the scope of this blog, it was much more human-friendly to display it directly.
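A hedged sketch of that logging pattern could look like the following (the log file name is an example; Documentum error messages contain the “_E_” marker, e.g. DM_SESSION_E_AUTH_FAIL, which is what we grep for):

#!/bin/bash
# Sketch: capture the iAPI output in a log file and check the outcome
log_file="/tmp/ldap_setup_$(date +%Y%m%d%H%M%S).log"
iapi ${repo} -U${USER} -Pxxx << EOF > ${log_file} 2>&1
?,c,select r_object_id from dm_ldap_config where object_name='${ldap_server_name}'
exit
EOF
# Continue only if no Documentum error was raised
if grep -q "_E_" ${log_file}; then
  echo "iAPI command failed, please check ${log_file}"
  exit 1
fi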

Maybe one final note: the above steps are for a Primary Content Server. In case you are trying to do the same thing on a Remote Content Server (RCS/CFS), some steps aren’t needed. For example, you will still need to put the Trust Chain in the correct location, but you won’t need to create the “dm_location” or “dm_ldap_config” Objects since they are stored inside the Repository and are therefore already present.


Cet article Documentum – Automatic/Silent creation of LDAP/LDAPs Server Config Objects est apparu en premier sur Blog dbi services.

Real time replication from Oracle to PostgreSQL using Data Replicator from DBPLUS

Mon, 2019-12-02 07:15

I’ve done quite a few real time logical replication projects in the past, either using Oracle GoldenGate or EDB Replication Server. Built-in logical replication in PostgreSQL (available since PostgreSQL 10) can be used as well when both the source and the target are PostgreSQL instances. While being at the DOAG conference and exhibition 2019 I got in contact with people from DBPLUS, and they provide a product called “Data Replicator”. The interesting use case for me is real time replication from Oracle to PostgreSQL, as the next project for such a setup is already in the pipe, so I thought I’d give it a try.

The “Data Replicator” software needs to be installed on a Windows machine and all traffic will go through that machine. The following picture is stolen from the official “Data Replicator” documentation and it pretty well describes the architecture when the source system is Oracle:

As “Data Replicator” uses Oracle LogMiner, no triggers need to be installed on the source system. Installing something on a validated system might become tricky, so this already is a huge benefit compared to some other solutions, e.g. SymmetricDS. When you know GoldenGate, the overall architecture is not so much different: what GoldenGate calls the extract is the “Reader” in Data Replicator, and the replicat becomes the “Applier”.

The installation on the Windows machine is so simple that I’ll just provide the screenshots without any further comments:





In the background three new services have been created and started by the installation program:

There is the replication manager, which is responsible for creating replication processes, and then there are two more services for reading from the source and writing data to the target. In addition, the graphical user interface was installed (it could also run on another Windows machine), which looks like this once you start it up:

Before connecting with the GUI you should do the basic configuration by using the “DBPLUS Replication Manager Configuration” utility:

Once that is done you can go back to the client and connect:

The initial screen does not have much content except for the possibility to create a new replication, and I really like that: no overloaded interface that is hard to understand initially, but an easy and tidy one. With only one choice it is easy to go forward, so let’s create a new replication:

Same concept here: very clean interface, only 5 steps to follow. My source system is Oracle 19.3 EE and all I have to do is provide the connection parameters, the admin user and a new user/password combination I want to use for the logical replication:

Asking “Data Replicator” to create the replication user, and all is fine:

SQL> r
  1* select username,profile from dba_users where username = 'REPLUSR'

USERNAME                       PROFILE
------------------------------ ------------------------------
REPLUSR                        DEFAULT

Of course some system privileges have been granted to the user that got created:

SQL> select privilege from dba_sys_privs where grantee = 'REPLUSR';

PRIVILEGE
----------------------------------------
SELECT ANY TRANSACTION
LOGMINING
SELECT ANY DICTIONARY
SELECT ANY TABLE

Proceeding with the target database, which is PostgreSQL 12.1 in my case:

As you can see there is no option to create a user on the target. What I did is this:

postgres=# create user replusr with login password 'xxxxxxx';
CREATE ROLE
postgres=# create database offloadoracle with owner = 'replusr';
CREATE DATABASE
postgres=# 

Once done, the connection succeeds and can be saved:

That’s all for the first step and we can proceed to step two:

I have installed the Oracle sample schemas for this little demo and as I only want to replicate these I’ve changed the selection to “REPLICATE ONLY SELECTED SCHEMAS AND TABLES”.

Once more this is all that needs to be done and the next step would be to generate the report for getting an idea of possible issues:

The reported issues totally make sense and you even get the commands to fix them, except for the complaints about the unique keys, of course (if you go for logical replication you should make sure anyway that each table contains either a primary key or at least a unique key). Once the Oracle database is in archive log mode and supplemental log data has been added, the screen looks fine (I will ignore the two warnings as they are not important for this demo):
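For reference, a quick way to verify the current state, the standard Oracle commands behind the two fixes (archive log mode and minimal supplemental logging), and a check for tables lacking a primary or unique key would be along these lines:

SQL> select log_mode, supplemental_log_data_min from v$database;
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> alter database add supplemental log data;
SQL> select owner, table_name from dba_tables where owner in ('HR','SH') minus select owner, table_name from dba_constraints where constraint_type in ('P','U');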

The next step is to define the “Start Options” and when you select “automatic” you’ll have to specify the options for the transfer server:

There is a small configuration utility for that as well:

When you are happy with it, provide the details in the previous screen and complete the replication setup by providing a name in the last step:

That’s all you need to do and the replication is ready to be started:

… and then it immediately fails because we do not have a valid license. For getting a trial license you need to provide the computer ID which can be found in the information section:

Provide that to DBPLUS and request a trial license. Usually they respond very fast:

Starting the replication once more:

You’ll see new processes on the PostgreSQL side:

postgres@centos8pg:/home/postgres/ [121] ps -ef | grep postgres
root      1248   769  0 12:58 ?        00:00:00 sshd: postgres [priv]
postgres  1252     1  0 12:58 ?        00:00:00 /usr/lib/systemd/systemd --user
postgres  1256  1252  0 12:58 ?        00:00:00 (sd-pam)
postgres  1262  1248  0 12:58 ?        00:00:00 sshd: postgres@pts/0
postgres  1263  1262  0 12:58 pts/0    00:00:00 -bash
postgres  1667     1  0 12:58 ?        00:00:00 /u01/app/postgres/product/12/db_0/bin/postgres -D /u02/pgdata/12
postgres  1669  1667  0 12:58 ?        00:00:00 postgres: checkpointer   
postgres  1670  1667  0 12:58 ?        00:00:00 postgres: background writer   
postgres  1671  1667  0 12:58 ?        00:00:00 postgres: walwriter   
postgres  1672  1667  0 12:58 ?        00:00:00 postgres: autovacuum launcher   
postgres  1673  1667  0 12:58 ?        00:00:00 postgres: stats collector   
postgres  1674  1667  0 12:58 ?        00:00:00 postgres: logical replication launcher   
postgres  2560  1667  0 14:40 ?        00:00:00 postgres: replusr offloadoracle 192.168.22.1(40790) idle
postgres  2562  1667  0 14:40 ?        00:00:00 postgres: replusr offloadoracle 192.168.22.1(40800) idle
postgres  2588  1263  0 14:40 pts/0    00:00:00 ps -ef
postgres  2589  1263  0 14:40 pts/0    00:00:00 grep --color=auto postgres

… and you’ll see LogMiner processes on the Oracle side:

LOGMINER: summary for session# = 2147710977
LOGMINER: StartScn: 2261972 (0x00000000002283d4)
LOGMINER: EndScn: 18446744073709551615 (0xffffffffffffffff)
LOGMINER: HighConsumedScn: 0
LOGMINER: PSR flags: 0x0
LOGMINER: Session Flags: 0x4000441
LOGMINER: Session Flags2: 0x0
LOGMINER: Read buffers: 4
LOGMINER: Region Queue size: 256
LOGMINER: Redo Queue size: 4096
LOGMINER: Memory LWM: limit 10M, LWM 12M, 80%
LOGMINER: Memory Release Limit: 0M
LOGMINER: Max Decomp Region Memory: 1M
LOGMINER: Transaction Queue Size: 1024
2019-11-22T14:05:54.735533+01:00
LOGMINER: Begin mining logfile for session -2147256319 thread 1 sequence 8, /u01/app/oracle/oradata/DB1/onlinelog/o1_mf_2_gxh8fbhr_.log
2019-11-22T14:05:54.759197+01:00
LOGMINER: End   mining logfile for session -2147256319 thread 1 sequence 8, /u01/app/oracle/oradata/DB1/onlinelog/o1_mf_2_gxh8fbhr_.log

In the details tab there is more information about what is currently going on:

Although it looked quite good at the beginning, there is the first issue:

Oracle data type is unknown: OE.CUST_ADDRESS_TYP
Stack trace:
System.ArgumentException: Oracle data type is unknown: OE.CUST_ADDRESS_TYP
   at DbPlus.DataTypes.Oracle.OracleDataTypes.Get(String name)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass22_0.g__MapSourceColumnType|1(TableColumn sourceColumn, String targetColumnName)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass25_0.g__GetColumnMapping|4(TableColumn sourceColumn)
   at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext()
   at System.Linq.Enumerable.WhereEnumerableIterator`1.MoveNext()
   at System.Linq.Buffer`1..ctor(IEnumerable`1 source)
   at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass25_0.b__5()
   at DbPlus.Replicator.Alerts.AsyncTransientErrorHandler.Execute[T](Func`1 operation)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.GetTableCopyParameters(ReplicatedTable sourceTable)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.b__61_2(ReplicatedTable table)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ExecuteOneWithTableLock(Func`1 source, Action`1 operation, Nullable`1 timeout)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ExecuteAllWithTableLock(Func`1 source, Action`1 operation, Nullable`1 timeout)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.b__61_0()
   at DbPlus.Replicator.Alerts.AsyncTransientErrorHandler.Block(Action action)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.StartDataTransfer()
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ProcessRemoteOperations()
   at DbPlus.Tasks.Patterns.TaskTemplates.c__DisplayClass0_0.<g__Run|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at DbPlus.Tasks.Patterns.TaskGroup.Run(CancellationToken cancellationToken)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.Run()
   at DbPlus.Replicator.ComponentModel.Component.RunInternal()

As with all logical replication solutions, custom types are tricky and usually not supported. What I will do now is replicate only the “HR” and “SH” schemas, which do not contain any custom type:
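To spot such columns up front, a simple dictionary query helps (a sketch; it lists columns whose data type is a user-defined object type):

SQL> select c.owner, c.table_name, c.column_name, c.data_type_owner||'.'||c.data_type as data_type
  2  from dba_tab_columns c
  3  where (c.data_type_owner, c.data_type) in (select owner, type_name from dba_types)
  4  and c.owner in ('HR','OE','SH');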

Once again, starting the replication, next issue:

Oracle data type is unknown: ROWID
Stack trace:
System.ArgumentException: Oracle data type is unknown: ROWID
   at DbPlus.DataTypes.Oracle.OracleDataTypes.Get(String name)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass22_0.g__MapSourceColumnType|1(TableColumn sourceColumn, String targetColumnName)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass25_0.g__GetColumnMapping|4(TableColumn sourceColumn)
   at System.Linq.Enumerable.WhereSelectEnumerableIterator`2.MoveNext()
   at System.Linq.Enumerable.WhereEnumerableIterator`1.MoveNext()
   at System.Linq.Buffer`1..ctor(IEnumerable`1 source)
   at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.c__DisplayClass25_0.b__5()
   at DbPlus.Replicator.Alerts.AsyncTransientErrorHandler.Execute[T](Func`1 operation)
   at DbPlus.Replicator.Tracking.TableCopyParameterCreator.GetTableCopyParameters(ReplicatedTable sourceTable)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.b__61_2(ReplicatedTable table)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ExecuteOneWithTableLock(Func`1 source, Action`1 operation, Nullable`1 timeout)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ExecuteAllWithTableLock(Func`1 source, Action`1 operation, Nullable`1 timeout)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.b__61_0()
   at DbPlus.Replicator.Alerts.AsyncTransientErrorHandler.Block(Action action)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.StartDataTransfer()
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.ProcessRemoteOperations()
   at DbPlus.Tasks.Patterns.TaskTemplates.c__DisplayClass0_0.<g__Run|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at DbPlus.Tasks.Patterns.TaskGroup.Run(CancellationToken cancellationToken)
   at DbPlus.Replicator.Tracking.ReplicatedTablesTracker.Run()
   at DbPlus.Replicator.ComponentModel.Component.RunInternal()

Let’s check which columns and tables those are:

SQL> SELECT owner, table_name, column_name from dba_tab_columns where data_type = 'ROWID' and owner in ('HR','SH');

OWNER                TABLE_NAME                     COLUMN_NAME
-------------------- ------------------------------ ------------------------------
SH                   DR$SUP_TEXT_IDX$U              RID
SH                   DR$SUP_TEXT_IDX$K              TEXTKEY

Such columns can be easily excluded:



Starting over again, next issue:

At least the schemas need to exist on the target, so:

postgres=# \c offloadoracle postgres
You are now connected to database "offloadoracle" as user "postgres".
offloadoracle=# create schema sh;
CREATE SCHEMA
offloadoracle=# create schema hr;
CREATE SCHEMA
offloadoracle=# 

Next try:

On the source side:

SQL> grant flashback any table to REPLUSR;

Grant succeeded.

SQL> 

On the target side:

offloadoracle=# grant all on schema hr to replusr;
GRANT
offloadoracle=# grant all on schema sh to replusr;
GRANT

Finally most of the tables are replicating fine now:

There are a few warnings about missing unique keys and some tables cannot be replicated at all:

For now I am just going to exclude the failed tables as this is fine for the scope of this post:

… and my replication is fine. A quick check on the target:

offloadoracle=# select * from sh.products limit 3;
 prod_id |           prod_name           |           prod_desc           | prod_subcategory | prod_subcategory_id | prod_subcategory_desc |        prod_category        | prod_category_id |     prod_category_desc      | prod_weight_class | prod_unit_of_measure | prod_pac>
---------+-------------------------------+-------------------------------+------------------+---------------------+-----------------------+-----------------------------+------------------+-----------------------------+-------------------+----------------------+--------->
      13 | 5MP Telephoto Digital Camera  | 5MP Telephoto Digital Camera  | Cameras          |     2044.0000000000 | Cameras               | Photo                       |   204.0000000000 | Photo                       |                 1 | U                    | P       >
      14 | 17" LCD w/built-in HDTV Tuner | 17" LCD w/built-in HDTV Tuner | Monitors         |     2035.0000000000 | Monitors              | Peripherals and Accessories |   203.0000000000 | Peripherals and Accessories |                 1 | U                    | P       >
      15 | Envoy 256MB - 40GB            | Envoy 256MB - 40Gb            | Desktop PCs      |     2021.0000000000 | Desktop PCs           | Hardware                    |   202.0000000000 | Hardware                    |                 1 | U                    | P       >
(3 rows)


… confirms the data is there. As this post is already long enough, here are some final thoughts: the installation of “Data Replicator” is a no-brainer. I really like the simple interface, and setting up a replication between Oracle and PostgreSQL is quite easy. Of course you need to know the issues you can run into with logical replication (missing unique or primary keys, unsupported data types, …), but this topic is the same for all solutions. What I can say for sure is that I have never been as fast in setting up a demo replication as with “Data Replicator”. More testing to come …

Cet article Real time replication from Oracle to PostgreSQL using Data Replicator from DBPLUS est apparu en premier sur Blog dbi services.

AEM Forms – FIPS 140-2 Support

Mon, 2019-12-02 00:00

Around summer last year, one of the projects I was working on started a new integration with AEM Forms for the Digital Signatures and Reader Extensions components. It was already using AEM Forms before, but for other purposes. With this new requirement came new problems (obviously). This project was still using AEM Forms 6.4 JEE on WebLogic Server 12.2.1.3.

As mentioned a few times already, our policy is security by default, unless the customer has specific requirements that would prevent us from doing that. Since we are usually working for critical businesses, that’s normally not a problem at all (quite the opposite). Therefore, when we install a WebLogic Server, we always apply our best practices on top of it. One of these best practices is to enable FIPS (Federal Information Processing Standards) 140-2 support and, as much as possible, its compliance. A software stack is FIPS 140-2 compliant if all its components support FIPS 140-2 and they can all restrict their operations to FIPS 140-2 methods/transactions only. If a single piece of the stack isn’t FIPS 140-2 compliant, then the whole software stack isn’t.
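For context, enabling FIPS 140-2 mode on a WebLogic Server 12.2.1.3 essentially means putting the FIPS 140-2 certified RSA Crypto-J libraries at the head of the server classpath and allowing WebLogic to use them. A minimal sketch of what this looks like in setDomainEnv.sh (exact jar names and locations can differ depending on the installation):

# Prepend the FIPS 140-2 certified RSA Crypto-J libraries to the classpath
PRE_CLASSPATH="${WL_HOME}/server/lib/jcmFIPS.jar:${WL_HOME}/server/lib/sslj.jar${PRE_CLASSPATH:+:${PRE_CLASSPATH}}"
export PRE_CLASSPATH
# Let WebLogic use the Crypto-J provider for JCE verification and PRNG
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.security.allowCryptoJDefaultJCEVerification=true -Dweblogic.security.allowCryptoJDefaultPRNG=true"
export JAVA_OPTIONS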

Alright, so why am I mentioning all that? Well, we try as much as possible to have fully FIPS 140-2 compliant installations and therefore we always use restricted ciphers, encryption algorithms, protocols, and so on. In this FIPS 140-2 compliant AEM Forms installation, we tried to add Digital Signatures & Reader Extensions on PDFs, but while doing some testing in the AEM Workbench, we encountered the following error pop-up:

FIPS 140-2 Reader Extension error

The complete error stack can be seen in the AEM Workbench logs:

ALC-DSC-003-000: com.adobe.idp.dsc.DSCInvocationException: Invocation error.
	at com.adobe.idp.dsc.component.impl.DefaultPOJOInvokerImpl.invoke(DefaultPOJOInvokerImpl.java:152)
	at com.adobe.idp.dsc.interceptor.impl.InvocationInterceptor.intercept(InvocationInterceptor.java:140)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.DocumentPassivationInterceptor.intercept(DocumentPassivationInterceptor.java:53)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor$1.doInTransaction(TransactionInterceptor.java:74)
	at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.execute(EjbTransactionCMTAdapterBean.java:357)
	at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.doSupports(EjbTransactionCMTAdapterBean.java:227)
	at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.__WL_invoke(Unknown Source)
	at weblogic.ejb.container.internal.SessionLocalMethodInvoker.invoke(SessionLocalMethodInvoker.java:33)
	at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.doSupports(Unknown Source)
	at com.adobe.idp.dsc.transaction.impl.ejb.EjbTransactionProvider.execute(EjbTransactionProvider.java:104)
	at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor.intercept(TransactionInterceptor.java:72)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.InvocationStrategyInterceptor.intercept(InvocationStrategyInterceptor.java:55)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.InvalidStateInterceptor.intercept(InvalidStateInterceptor.java:37)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.AuthorizationInterceptor.intercept(AuthorizationInterceptor.java:188)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.JMXInterceptor.intercept(JMXInterceptor.java:48)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.engine.impl.ServiceEngineImpl.invoke(ServiceEngineImpl.java:121)
	at com.adobe.idp.dsc.routing.Router.routeRequest(Router.java:131)
	at com.adobe.idp.dsc.provider.impl.base.AbstractMessageReceiver.routeMessage(AbstractMessageReceiver.java:93)
	at com.adobe.idp.dsc.provider.impl.vm.VMMessageDispatcher.doSend(VMMessageDispatcher.java:225)
	at com.adobe.idp.dsc.provider.impl.base.AbstractMessageDispatcher.send(AbstractMessageDispatcher.java:69)
	at com.adobe.idp.dsc.clientsdk.ServiceClient.invoke(ServiceClient.java:215)
	at com.adobe.workflow.engine.PEUtil.invokeAction(PEUtil.java:893)
	at com.adobe.idp.workflow.dsc.invoker.WorkflowDSCInvoker.transientInvoke(WorkflowDSCInvoker.java:356)
	at com.adobe.idp.workflow.dsc.invoker.WorkflowDSCInvoker.invoke(WorkflowDSCInvoker.java:159)
	at com.adobe.idp.dsc.interceptor.impl.InvocationInterceptor.intercept(InvocationInterceptor.java:140)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.DocumentPassivationInterceptor.intercept(DocumentPassivationInterceptor.java:53)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor$1.doInTransaction(TransactionInterceptor.java:74)
	at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.execute(EjbTransactionCMTAdapterBean.java:357)
	at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.doRequiresNew(EjbTransactionCMTAdapterBean.java:299)
	at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.__WL_invoke(Unknown Source)
	at weblogic.ejb.container.internal.SessionLocalMethodInvoker.invoke(SessionLocalMethodInvoker.java:33)
	at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.doRequiresNew(Unknown Source)
	at com.adobe.idp.dsc.transaction.impl.ejb.EjbTransactionProvider.execute(EjbTransactionProvider.java:143)
	at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor.intercept(TransactionInterceptor.java:72)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.InvocationStrategyInterceptor.intercept(InvocationStrategyInterceptor.java:55)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.InvalidStateInterceptor.intercept(InvalidStateInterceptor.java:37)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.AuthorizationInterceptor.intercept(AuthorizationInterceptor.java:188)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.interceptor.impl.JMXInterceptor.intercept(JMXInterceptor.java:48)
	at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
	at com.adobe.idp.dsc.engine.impl.ServiceEngineImpl.invoke(ServiceEngineImpl.java:121)
	at com.adobe.idp.dsc.routing.Router.routeRequest(Router.java:131)
	at com.adobe.idp.dsc.provider.impl.base.AbstractMessageReceiver.invoke(AbstractMessageReceiver.java:329)
	at com.adobe.idp.dsc.provider.impl.soap.axis.sdk.SoapSdkEndpoint.invokeCall(SoapSdkEndpoint.java:153)
	at com.adobe.idp.dsc.provider.impl.soap.axis.sdk.SoapSdkEndpoint.invoke(SoapSdkEndpoint.java:91)
	at sun.reflect.GeneratedMethodAccessor753.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.axis.providers.java.RPCProvider.invokeMethod(RPCProvider.java:397)
	at org.apache.axis.providers.java.RPCProvider.processMessage(RPCProvider.java:186)
	at org.apache.axis.providers.java.JavaProvider.invoke(JavaProvider.java:323)
	at org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:32)
	at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118)
	at org.apache.axis.SimpleChain.invoke(SimpleChain.java:83)
	at org.apache.axis.handlers.soap.SOAPService.invoke(SOAPService.java:454)
	at org.apache.axis.server.AxisServer.invoke(AxisServer.java:281)
	at org.apache.axis.transport.http.AxisServlet.doPost(AxisServlet.java:699)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
	at org.apache.axis.transport.http.AxisServletBase.service(AxisServletBase.java:327)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
	at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
	at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
	at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
	at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
	at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.adobe.idp.dsc.provider.impl.soap.axis.InvocationFilter.doFilter(InvocationFilter.java:43)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at com.adobe.idp.um.auth.filter.ParameterFilter.doFilter(ParameterFilter.java:105)
	at com.adobe.idp.um.auth.filter.CSRFFilter.invokeNextFilter(CSRFFilter.java:141)
	at com.adobe.idp.um.auth.filter.CSRFFilter.doFilter(CSRFFilter.java:132)
	at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
	at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3701)
	at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3667)
	at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:326)
	at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
	at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
	at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
	at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2443)
	at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2291)
	at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2269)
	at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1705)
	at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1665)
	at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272)
	at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
	at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
	at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
	at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
	at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:644)
	at weblogic.work.ExecuteThread.execute(ExecuteThread.java:415)
	at weblogic.work.ExecuteThread.run(ExecuteThread.java:355)
Caused by: com.adobe.livecycle.readerextensions.client.exceptions.ReaderExtensionsException: ALC-RES-001-008: Unable to apply the requested usage rights to the given document.
	at com.adobe.livecycle.readerextensions.ReaderExtensionsImplementation.applyUsageRights(ReaderExtensionsImplementation.java:125)
	at com.adobe.livecycle.readerextensions.ReaderExtensionsService.applyUsageRights(ReaderExtensionsService.java:166)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.adobe.idp.dsc.component.impl.DefaultPOJOInvokerImpl.invoke(DefaultPOJOInvokerImpl.java:118)
	... 102 more
Caused by: com.adobe.livecycle.readerextensions.client.ProcessingException: ALC-RES-001-008: Unable to apply the requested usage rights to the given document.
	... 109 more
Caused by: com.adobe.internal.pdftoolkit.core.exceptions.PDFInvalidParameterException: Exception encountered when applying the signature
	at com.adobe.internal.pdftoolkit.services.digsig.SignatureManager.applyUsageRights(SignatureManager.java:1803)
	at com.adobe.livecycle.readerextensions.ReaderExtensionsImplementation.applyUsageRights(ReaderExtensionsImplementation.java:110)
	... 108 more
Caused by: com.adobe.internal.pdftoolkit.core.exceptions.PDFSignatureException: com.adobe.idp.cryptoprovider.CryptoProviderException: Unknown Error in CryptoProvider ALC-CRP-302-002 (in the operation : sign)
 Caused By: ALC-DSS-310-048 Could not sign PKCS7 data (in the operation : sign)
  Caused By: Algorithm not allowable in FIPS140 mode: SHA1/RSA(null-1)
	at com.adobe.idp.cryptoprovider.LCPKCS7Signer.sign(LCPKCS7Signer.java:128)
	at com.adobe.internal.pdftoolkit.services.digsig.digsigframework.impl.SignatureHandlerPPKLite.writeSignatureAfterSave(SignatureHandlerPPKLite.java:816)
	at com.adobe.internal.pdftoolkit.services.digsig.impl.SigningUtils.doSigning(SigningUtils.java:801)
	at com.adobe.internal.pdftoolkit.services.digsig.SignatureManager.applyUsageRights(SignatureManager.java:1797)
	... 109 more
Caused by: com.adobe.idp.cryptoprovider.CryptoProviderException: Unknown Error in CryptoProvider ALC-CRP-302-002 (in the operation : sign)
 Caused By: ALC-DSS-310-048 Could not sign PKCS7 data (in the operation : sign)
  Caused By: Algorithm not allowable in FIPS140 mode: SHA1/RSA(null-1)
	... 113 more


From the AEM Forms side, the error that could be seen at the same time was:

####<Jun 18, 2018 10:14:54,073 AM UTC> <Warning> <com.adobe.idp.cryptoprovider.CryptoProviderException> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '56' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <7dc476bf-0258-4e62-96d4-e9bcf5274954-000001bd> <1529316894073> <[severity-value: 16] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <ALC-DSS-310-048 Could not sign PKCS7 data (in the operation : sign)
Caused By: Algorithm not allowable in FIPS140 mode: SHA1/RSA(null-1)>
####<Jun 18, 2018 10:14:54,074 AM UTC> <Error> <com.adobe.idp.cryptoprovider.CryptoProviderException> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '56' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <7dc476bf-0258-4e62-96d4-e9bcf5274954-000001bd> <1529316894074> <[severity-value: 8] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <Unknown Error in CryptoProvider ALC-CRP-302-002 (in the operation : sign)
Caused By: ALC-DSS-310-048 Could not sign PKCS7 data (in the operation : sign)
  Caused By: Algorithm not allowable in FIPS140 mode: SHA1/RSA(null-1)>
####<Jun 18, 2018 10:14:54,079 AM UTC> <Error> <com.adobe.livecycle.readerextensions.ReaderExtensionsImplementation> <aem-node-1> <msAEM-01> <[ACTIVE] ExecuteThread: '56' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <7dc476bf-0258-4e62-96d4-e9bcf5274954-000001bd> <1529316894079> <[severity-value: 8] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <ALC-RES-001-008: Unable to apply the requested usage rights to the given document.
com.adobe.internal.pdftoolkit.core.exceptions.PDFInvalidParameterException: Exception encountered when applying the signature
        at com.adobe.internal.pdftoolkit.services.digsig.SignatureManager.applyUsageRights(SignatureManager.java:1803)
        at com.adobe.livecycle.readerextensions.ReaderExtensionsImplementation.applyUsageRights(ReaderExtensionsImplementation.java:110)
        at com.adobe.livecycle.readerextensions.ReaderExtensionsService.applyUsageRights(ReaderExtensionsService.java:166)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.adobe.idp.dsc.component.impl.DefaultPOJOInvokerImpl.invoke(DefaultPOJOInvokerImpl.java:118)
        at com.adobe.idp.dsc.interceptor.impl.InvocationInterceptor.intercept(InvocationInterceptor.java:140)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.DocumentPassivationInterceptor.intercept(DocumentPassivationInterceptor.java:53)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor$1.doInTransaction(TransactionInterceptor.java:74)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.execute(EjbTransactionCMTAdapterBean.java:357)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.doSupports(EjbTransactionCMTAdapterBean.java:227)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.__WL_invoke(Unknown Source)
        at weblogic.ejb.container.internal.SessionLocalMethodInvoker.invoke(SessionLocalMethodInvoker.java:33)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.doSupports(Unknown Source)
        at com.adobe.idp.dsc.transaction.impl.ejb.EjbTransactionProvider.execute(EjbTransactionProvider.java:104)
        at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor.intercept(TransactionInterceptor.java:72)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.InvocationStrategyInterceptor.intercept(InvocationStrategyInterceptor.java:55)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.InvalidStateInterceptor.intercept(InvalidStateInterceptor.java:37)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.AuthorizationInterceptor.intercept(AuthorizationInterceptor.java:188)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.JMXInterceptor.intercept(JMXInterceptor.java:48)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.engine.impl.ServiceEngineImpl.invoke(ServiceEngineImpl.java:121)
        at com.adobe.idp.dsc.routing.Router.routeRequest(Router.java:131)
        at com.adobe.idp.dsc.provider.impl.base.AbstractMessageReceiver.routeMessage(AbstractMessageReceiver.java:93)
        at com.adobe.idp.dsc.provider.impl.vm.VMMessageDispatcher.doSend(VMMessageDispatcher.java:198)
        at com.adobe.idp.dsc.provider.impl.base.AbstractMessageDispatcher.send(AbstractMessageDispatcher.java:69)
        at com.adobe.idp.dsc.clientsdk.ServiceClient.invoke(ServiceClient.java:215)
        at com.adobe.livecycle.readerextensions.client.ReaderExtensionsServiceClient.invoke(ReaderExtensionsServiceClient.java:58)
        at com.adobe.livecycle.readerextensions.client.ReaderExtensionsServiceClient.applyUsageRights(ReaderExtensionsServiceClient.java:105)
        at com.adobe.livecycle.readerextensions.servlet.ApplyRightsServlet.applyRights(ApplyRightsServlet.java:241)
        at com.adobe.livecycle.readerextensions.servlet.ApplyRightsServlet.doOperation(ApplyRightsServlet.java:189)
        at com.adobe.livecycle.readerextensions.servlet.ApplyRightsServlet.doPost(ApplyRightsServlet.java:80)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
        at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.adobe.idp.um.auth.filter.ParameterFilter.doFilter(ParameterFilter.java:105)
        at com.adobe.idp.um.auth.filter.CSRFFilter.invokeNextFilter(CSRFFilter.java:141)
        at com.adobe.idp.um.auth.filter.CSRFFilter.doFilter(CSRFFilter.java:132)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3701)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3667)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:326)
        at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
        at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
        at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2443)
        at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2291)
        at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2269)
        at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1705)
        at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1665)
        at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:644)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:415)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:355)
Caused By: com.adobe.internal.pdftoolkit.core.exceptions.PDFSignatureException: com.adobe.idp.cryptoprovider.CryptoProviderException: Unknown Error in CryptoProvider ALC-CRP-302-002 (in the operation : sign)
Caused By: ALC-DSS-310-048 Could not sign PKCS7 data (in the operation : sign)
  Caused By: Algorithm not allowable in FIPS140 mode: SHA1/RSA(null-1)
        at com.adobe.idp.cryptoprovider.LCPKCS7Signer.sign(LCPKCS7Signer.java:128)
        at com.adobe.internal.pdftoolkit.services.digsig.digsigframework.impl.SignatureHandlerPPKLite.writeSignatureAfterSave(SignatureHandlerPPKLite.java:816)
        at com.adobe.internal.pdftoolkit.services.digsig.impl.SigningUtils.doSigning(SigningUtils.java:801)
        at com.adobe.internal.pdftoolkit.services.digsig.SignatureManager.applyUsageRights(SignatureManager.java:1797)
        at com.adobe.livecycle.readerextensions.ReaderExtensionsImplementation.applyUsageRights(ReaderExtensionsImplementation.java:110)
        at com.adobe.livecycle.readerextensions.ReaderExtensionsService.applyUsageRights(ReaderExtensionsService.java:166)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.adobe.idp.dsc.component.impl.DefaultPOJOInvokerImpl.invoke(DefaultPOJOInvokerImpl.java:118)
        at com.adobe.idp.dsc.interceptor.impl.InvocationInterceptor.intercept(InvocationInterceptor.java:140)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.DocumentPassivationInterceptor.intercept(DocumentPassivationInterceptor.java:53)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor$1.doInTransaction(TransactionInterceptor.java:74)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.execute(EjbTransactionCMTAdapterBean.java:357)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.doSupports(EjbTransactionCMTAdapterBean.java:227)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.__WL_invoke(Unknown Source)
        at weblogic.ejb.container.internal.SessionLocalMethodInvoker.invoke(SessionLocalMethodInvoker.java:33)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.doSupports(Unknown Source)
        at com.adobe.idp.dsc.transaction.impl.ejb.EjbTransactionProvider.execute(EjbTransactionProvider.java:104)
        at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor.intercept(TransactionInterceptor.java:72)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.InvocationStrategyInterceptor.intercept(InvocationStrategyInterceptor.java:55)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.InvalidStateInterceptor.intercept(InvalidStateInterceptor.java:37)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.AuthorizationInterceptor.intercept(AuthorizationInterceptor.java:188)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.JMXInterceptor.intercept(JMXInterceptor.java:48)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.engine.impl.ServiceEngineImpl.invoke(ServiceEngineImpl.java:121)
        at com.adobe.idp.dsc.routing.Router.routeRequest(Router.java:131)
        at com.adobe.idp.dsc.provider.impl.base.AbstractMessageReceiver.routeMessage(AbstractMessageReceiver.java:93)
        at com.adobe.idp.dsc.provider.impl.vm.VMMessageDispatcher.doSend(VMMessageDispatcher.java:198)
        at com.adobe.idp.dsc.provider.impl.base.AbstractMessageDispatcher.send(AbstractMessageDispatcher.java:69)
        at com.adobe.idp.dsc.clientsdk.ServiceClient.invoke(ServiceClient.java:215)
        at com.adobe.livecycle.readerextensions.client.ReaderExtensionsServiceClient.invoke(ReaderExtensionsServiceClient.java:58)
        at com.adobe.livecycle.readerextensions.client.ReaderExtensionsServiceClient.applyUsageRights(ReaderExtensionsServiceClient.java:105)
        at com.adobe.livecycle.readerextensions.servlet.ApplyRightsServlet.applyRights(ApplyRightsServlet.java:241)
        at com.adobe.livecycle.readerextensions.servlet.ApplyRightsServlet.doOperation(ApplyRightsServlet.java:189)
        at com.adobe.livecycle.readerextensions.servlet.ApplyRightsServlet.doPost(ApplyRightsServlet.java:80)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
        at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.adobe.idp.um.auth.filter.ParameterFilter.doFilter(ParameterFilter.java:105)
        at com.adobe.idp.um.auth.filter.CSRFFilter.invokeNextFilter(CSRFFilter.java:141)
        at com.adobe.idp.um.auth.filter.CSRFFilter.doFilter(CSRFFilter.java:132)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3701)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3667)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:326)
        at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
        at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
        at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2443)
        at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2291)
        at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2269)
        at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1705)
        at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1665)
        at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:644)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:415)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:355)
Caused By: com.adobe.idp.cryptoprovider.CryptoProviderException: Unknown Error in CryptoProvider ALC-CRP-302-002 (in the operation : sign)
Caused By: ALC-DSS-310-048 Could not sign PKCS7 data (in the operation : sign)
  Caused By: Algorithm not allowable in FIPS140 mode: SHA1/RSA(null-1)
        at com.adobe.idp.cryptoprovider.LCPKCS7Signer.sign(LCPKCS7Signer.java:128)
        at com.adobe.internal.pdftoolkit.services.digsig.digsigframework.impl.SignatureHandlerPPKLite.writeSignatureAfterSave(SignatureHandlerPPKLite.java:816)
        at com.adobe.internal.pdftoolkit.services.digsig.impl.SigningUtils.doSigning(SigningUtils.java:801)
        at com.adobe.internal.pdftoolkit.services.digsig.SignatureManager.applyUsageRights(SignatureManager.java:1797)
        at com.adobe.livecycle.readerextensions.ReaderExtensionsImplementation.applyUsageRights(ReaderExtensionsImplementation.java:110)
        at com.adobe.livecycle.readerextensions.ReaderExtensionsService.applyUsageRights(ReaderExtensionsService.java:166)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.adobe.idp.dsc.component.impl.DefaultPOJOInvokerImpl.invoke(DefaultPOJOInvokerImpl.java:118)
        at com.adobe.idp.dsc.interceptor.impl.InvocationInterceptor.intercept(InvocationInterceptor.java:140)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.DocumentPassivationInterceptor.intercept(DocumentPassivationInterceptor.java:53)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor$1.doInTransaction(TransactionInterceptor.java:74)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.execute(EjbTransactionCMTAdapterBean.java:357)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterBean.doSupports(EjbTransactionCMTAdapterBean.java:227)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.__WL_invoke(Unknown Source)
        at weblogic.ejb.container.internal.SessionLocalMethodInvoker.invoke(SessionLocalMethodInvoker.java:33)
        at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapter_yjcxi4_ELOImpl.doSupports(Unknown Source)
        at com.adobe.idp.dsc.transaction.impl.ejb.EjbTransactionProvider.execute(EjbTransactionProvider.java:104)
        at com.adobe.idp.dsc.transaction.interceptor.TransactionInterceptor.intercept(TransactionInterceptor.java:72)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.InvocationStrategyInterceptor.intercept(InvocationStrategyInterceptor.java:55)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.InvalidStateInterceptor.intercept(InvalidStateInterceptor.java:37)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.AuthorizationInterceptor.intercept(AuthorizationInterceptor.java:188)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.interceptor.impl.JMXInterceptor.intercept(JMXInterceptor.java:48)
        at com.adobe.idp.dsc.interceptor.impl.RequestInterceptorChainImpl.proceed(RequestInterceptorChainImpl.java:60)
        at com.adobe.idp.dsc.engine.impl.ServiceEngineImpl.invoke(ServiceEngineImpl.java:121)
        at com.adobe.idp.dsc.routing.Router.routeRequest(Router.java:131)
        at com.adobe.idp.dsc.provider.impl.base.AbstractMessageReceiver.routeMessage(AbstractMessageReceiver.java:93)
        at com.adobe.idp.dsc.provider.impl.vm.VMMessageDispatcher.doSend(VMMessageDispatcher.java:198)
        at com.adobe.idp.dsc.provider.impl.base.AbstractMessageDispatcher.send(AbstractMessageDispatcher.java:69)
        at com.adobe.idp.dsc.clientsdk.ServiceClient.invoke(ServiceClient.java:215)
        at com.adobe.livecycle.readerextensions.client.ReaderExtensionsServiceClient.invoke(ReaderExtensionsServiceClient.java:58)
        at com.adobe.livecycle.readerextensions.client.ReaderExtensionsServiceClient.applyUsageRights(ReaderExtensionsServiceClient.java:105)
        at com.adobe.livecycle.readerextensions.servlet.ApplyRightsServlet.applyRights(ApplyRightsServlet.java:241)
        at com.adobe.livecycle.readerextensions.servlet.ApplyRightsServlet.doOperation(ApplyRightsServlet.java:189)
        at com.adobe.livecycle.readerextensions.servlet.ApplyRightsServlet.doPost(ApplyRightsServlet.java:80)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
        at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.adobe.idp.um.auth.filter.ParameterFilter.doFilter(ParameterFilter.java:105)
        at com.adobe.idp.um.auth.filter.CSRFFilter.invokeNextFilter(CSRFFilter.java:141)
        at com.adobe.idp.um.auth.filter.CSRFFilter.doFilter(CSRFFilter.java:132)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3701)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3667)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:326)
        at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
        at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
        at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2443)
        at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2291)
        at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2269)
        at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1705)
        at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1665)
        at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:644)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:415)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:355)
>


Based on all these logs, it is pretty clear that a specific component of the AEM Forms signature process was trying to use an algorithm that isn’t supported under FIPS 140-2. Since our WebLogic Servers restrict this kind of weak algorithm, the method failed on the AEM Forms server side, which was then obviously propagated to the AEM Workbench.

According to the Adobe documentation, AEM Forms is supposed to be FIPS compliant. Globally, it does seem to support FIPS, but for this specific piece it doesn’t, and therefore AEM Forms isn’t FIPS 140-2 compliant. Since there was nothing we could do on our side to change that, we opened an SR with the Adobe Support (#160202). After almost three months of explaining the situation and our requirements as well as investigation on the Adobe side, they finally found the piece of code that was still using SHA-1.

Adobe then started a PoC to change this and deliver a fully (hopefully) FIPS 140-2 compliant software. The fix was tested and validated in November 2018 and it was therefore included in the next release: AEM Forms 6.4.3 (6.4 SP3), published on December 20, 2018. We installed it on day one after it was released, since it addressed our requirement, and after that the issue was indeed gone. Therefore, if you need to work with AEM Forms in a FIPS compliant environment, you should use an AEM version released after that date.


Cet article AEM Forms – FIPS 140-2 Support est apparu en premier sur Blog dbi services.
