Yann Neuhaus

dbi services technical blog

PostgreSQL partitioning (1): Preparing the data set

Sun, 2019-06-02 08:20

This is the start of a series about partitioning in PostgreSQL. For this and the following posts I will use PostgreSQL 12 (which currently is in beta), so some things might not work if you are on PostgreSQL 11 or even on PostgreSQL 10, when declarative partitioning was introduced. This very first post is only about preparing the data set, and I decided to separate it from the next post because it shows various things around loading data in PostgreSQL and therefore might be useful on its own.

When you are looking for free data sets, “The home of the U.S. Government’s open data” is a great source. It provides free data sets for research, development or just data you can play with for whatever reason. For the scope of this and the following posts we will use the Traffic Violations data set. It contains more than 1.5 million rows and is a good starting point for diving into PostgreSQL partitioning.

The initial import goes into a table which only contains text columns and from there we can load our partitions later on. The structure of the table is:

create table traffic_violations
( seqid text
, date_of_stop text
, time_of_stop text
, agency text
, subagency text
, description text
, location text
, latitude text
, longitude text 
, accident text
, belts text
, personal_injury text
, property_damage text
, fatal text
, commercial_license text
, hazmat text
, commercial_vehicle text
, alcohol text
, workzone text
, state text
, vehicletype text
, year text
, make text
, model text
, color text
, violation_type text
, charge text
, article text
, contributed_to_accident text
, race text
, gender text
, driver_city text
, driver_state text
, dl_state text
, arrest_type text
, geolocation text
, council_districts text
, councils text
, communities text
, zip_codes text
, municipalities text
);

For loading the table the easiest way is to use PostgreSQL’s copy command:

postgres=# \! ls -l Traffic_Violations.csv
-rw-rw-r--. 1 postgres postgres 634161687 May 28 11:28 Traffic_Violations.csv
postgres=# copy traffic_violations from '/home/postgres/Traffic_Violations.csv' with ( format csv, header true, null 'null', delimiter ',');
COPY 1528078
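
Note that a server-side copy like the one above requires the file to be readable by the PostgreSQL server process. If the file only exists on your client machine, psql’s \copy meta-command is an alternative: it reads the file on the client and streams it to the server (a sketch using the same options):

postgres=# \copy traffic_violations from 'Traffic_Violations.csv' with ( format csv, header true, null 'null', delimiter ',' )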

So we have around 1.5 million rows of real data. As this table contains only text columns we will create a materialized view with data types that fit the data. Looking at one row of the table:

postgres=# \x
Expanded display is on.
postgres=# select * from traffic_violations limit 1;
-[ RECORD 1 ]-----------+-------------------------------------------------------------------------
seqid                   | e13d2082-55a7-4a93-8836-173be19d2648
date_of_stop            | 06/01/2019
time_of_stop            | 13:38:00
agency                  | MCP
subagency               | 2nd District, Bethesda
description             | DRIVER FAILURE TO YIELD RIGHT-OF-WAY TO VEH. UPON EMERGING FROM DRIVEWAY
location                | RANDOLPH RD / NEW HAMPSHIRE
latitude                | 39.07592
longitude               | -77.0011316666667
accident                | Yes
belts                   | No
personal_injury         | No
property_damage         | Yes
fatal                   | No
commercial_license      | No
hazmat                  | No
commercial_vehicle      | No
alcohol                 | No
workzone                | No
state                   | MD
vehicletype             | 02 - Automobile
year                    | 2014
make                    | HYUN
model                   | TK
color                   | SILVER
violation_type          | Warning
charge                  | 21-705(c)
article                 | Transportation Article
contributed_to_accident | false
race                    | WHITE
gender                  | F
driver_city             | SILVER SPRING
driver_state            | MD
dl_state                | MD
arrest_type             | A - Marked Patrol
geolocation             | (39.07592, -77.0011316666667)
council_districts       | 5
councils                | 5
communities             | 26
zip_codes               | 12
municipalities          | 1

The materialized view becomes:

create materialized view mv_traffic_violations
( seqid 
, date_of_stop  
, time_of_stop 
, agency  
, subagency  
, description 
, location 
, latitude 
, longitude 
, accident 
, belts 
, personal_injury 
, property_damage 
, fatal 
, commercial_license 
, hazmat 
, commercial_vehicle 
, alcohol 
, workzone 
, state 
, vehicletype 
, year 
, make 
, model 
, color 
, violation_type 
, charge 
, article 
, contributed_to_accident 
, race 
, gender 
, driver_city 
, driver_state 
, dl_state 
, arrest_type 
, geolocation 
, council_districts 
, councils 
, communities 
, zip_codes 
, municipalities 
)
as
select seqid
     , to_date(date_of_stop,'MM/DD/YYYY')
     , time_of_stop::time
     , agency
     , subagency 
     , description
     , location
     , latitude::numeric
     , longitude::numeric
     , accident
     , belts::boolean
     , personal_injury::boolean
     , property_damage::boolean
     , fatal::boolean
     , commercial_license::boolean
     , hazmat::boolean
     , commercial_vehicle::boolean
     , alcohol::boolean
     , workzone::boolean
     , state
     , vehicletype
     , case year
         when '' then null
         else year::smallint
       end
     , make
     , model
     , color
     , violation_type
     , charge
     , article
     , contributed_to_accident::boolean
     , race
     , gender
     , driver_city
     , driver_state
     , dl_state
     , arrest_type
     , geolocation::point
     , case council_districts
         when '' then null
         else council_districts::smallint
       end
     , case councils
         when '' then null
         else councils::smallint
       end
     , case communities
         when '' then null
         else communities::smallint
       end
     , case zip_codes
         when '' then null
         else zip_codes::smallint
       end
     , case municipalities
         when '' then null
         else municipalities::smallint
       end
  from traffic_violations;

The statement contains some “case” expressions to test for empty strings. If you do not do this, you will get errors like the following when you cast to a specific data type:

psql: ERROR:  invalid input syntax for type smallint: ""
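
As a side note, the same empty-string handling can also be written more compactly with nullif(), which returns null when both arguments are equal. A minimal sketch of the idea (not the full view definition):

select nullif(year,'')::smallint as year
     , nullif(council_districts,'')::smallint as council_districts
  from traffic_violations
 limit 3;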

The beauty of a materialized view is that you can refresh it whenever the underlying data set changes, e.g.:

postgres=# refresh materialized view mv_traffic_violations WITH data;
REFRESH MATERIALIZED VIEW

Note that this will block access to the materialized view for the duration of the refresh. You could avoid that by refreshing concurrently, but that only works when you have a unique index on the materialized view. With this data set we cannot create a unique index because some of the rows are duplicates, e.g.:

postgres=# select * from mv_traffic_violations where seqid='b87c908c-ce2d-4c10-89fa-ca48735af485' and date_of_stop = '2012-11-07' and time_of_stop = '05:07:00' and description = 'FAILURE OF VEH. DRIVER IN ACCIDENT TO LOCATE AND NOTIFY OWNER OF UNATTENDED VEH. OF DAMAGE' and charge = '20-105(b)' and violation_type = 'Citation';
-[ RECORD 1 ]-----------+-------------------------------------------------------------------------------------------
seqid                   | b87c908c-ce2d-4c10-89fa-ca48735af485
date_of_stop            | 2012-11-07
time_of_stop            | 05:07:00
agency                  | MCP
subagency               | 2nd District, Bethesda
description             | FAILURE OF VEH. DRIVER IN ACCIDENT TO LOCATE AND NOTIFY OWNER OF UNATTENDED VEH. OF DAMAGE
location                | IFO 2531 ROSS RD
latitude                | 39.0222578333333
longitude               | -77.04575825
accident                | Yes
belts                   | f
personal_injury         | f
property_damage         | t
fatal                   | f
commercial_license      | f
hazmat                  | f
commercial_vehicle      | f
alcohol                 | f
workzone                | f
state                   | MD
vehicletype             | 02 - Automobile
year                    | 2002
make                    | SUBARU
model                   | FORESTER
color                   | SILVER
violation_type          | Citation
charge                  | 20-105(b)
article                 | Transportation Article
contributed_to_accident | f
race                    | HISPANIC
gender                  | M
driver_city             | SILVER SPRING
driver_state            | MD
dl_state                | MD
arrest_type             | A - Marked Patrol
geolocation             | (39.022257833333,-77.04575825)
council_districts       | 5
councils                | 5
communities             | 10
zip_codes               | 17
municipalities          | 1
-[ RECORD 2 ]-----------+-------------------------------------------------------------------------------------------
seqid                   | b87c908c-ce2d-4c10-89fa-ca48735af485
date_of_stop            | 2012-11-07
time_of_stop            | 05:07:00
agency                  | MCP
subagency               | 2nd District, Bethesda
description             | FAILURE OF VEH. DRIVER IN ACCIDENT TO LOCATE AND NOTIFY OWNER OF UNATTENDED VEH. OF DAMAGE
location                | IFO 2531 ROSS RD
latitude                | 39.0222578333333
longitude               | -77.04575825
accident                | Yes
belts                   | f
personal_injury         | f
property_damage         | t
fatal                   | f
commercial_license      | f
hazmat                  | f
commercial_vehicle      | f
alcohol                 | f
workzone                | f
state                   | MD
vehicletype             | 02 - Automobile
year                    | 2002
make                    | SUBARU
model                   | FORESTER
color                   | SILVER
violation_type          | Citation
charge                  | 20-105(b)
article                 | Transportation Article
contributed_to_accident | f
race                    | HISPANIC
gender                  | M
driver_city             | SILVER SPRING
driver_state            | MD
dl_state                | MD
arrest_type             | A - Marked Patrol
geolocation             | (39.022257833333,-77.04575825)
council_districts       | 5
councils                | 5
communities             | 10
zip_codes               | 17
municipalities          | 1
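
If you want to check for yourself how many duplicates there are before giving up on a unique index, a simple aggregation does the job. A sketch that counts identical seqid/charge combinations:

select seqid, charge, count(*)
  from mv_traffic_violations
 group by seqid, charge
having count(*) > 1
 order by count(*) desc
 limit 5;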

So now we have the data set we can use for the partitioning examples in the next posts. To summarize what we did:

  • https://www.data.gov/ is a great source for free data sets and we used one of them to load data into PostgreSQL
  • copy is great for loading data into PostgreSQL
  • The “case” expression is great for testing various conditions in a statement
  • materialized views are great when you want to refresh from an underlying data set that changes, but you will need a unique index for refreshing concurrently
  • You might need to cast one data type into another

In the next post we will look at the first partitioning strategy: Range partitioning.


PostgreSQL 12: New partition reporting functions

Sun, 2019-06-02 01:00

PostgreSQL 10 introduced declarative partitioning (with some limitations), PostgreSQL 11 improved that a lot (Updating the partition key now works in PostgreSQL 11, Insert…on conflict with partitions finally works in PostgreSQL 11, Local partitioned indexes in PostgreSQL 11, Hash Partitioning in PostgreSQL 11) and PostgreSQL 12 goes even further. Besides the fact that foreign keys can now reference partitioned tables, three new functions made it into PostgreSQL 12 that will give you information about your partitioned tables.

Our little demo setup consists of a list partitioned table with three partitions:

postgres=# create table animals ( id int generated always as identity ( cache 10 ),
postgres(#                        name text unique,
postgres(#                        primary key(id,name)
postgres(#                      ) 
postgres-#                      partition by list (name);
CREATE TABLE
postgres=# create table animals_elephants
postgres-#   partition of animals
postgres-#   for values in ('elephant');
CREATE TABLE
postgres=# create table animals_cats
postgres-#   partition of animals
postgres-#   for values in ('cats');
CREATE TABLE
postgres=# create table animals_dogs
postgres-#   partition of animals
postgres-#   for values in ('dogs');
CREATE TABLE

What already worked in PostgreSQL 11 is that psql will give you information about your partitions:

postgres=# \d animals
                   Partitioned table "public.animals"
 Column |  Type   | Collation | Nullable |           Default            
--------+---------+-----------+----------+------------------------------
 id     | integer |           | not null | generated always as identity
 name   | text    |           | not null | 
Partition key: LIST (name)
Indexes:
    "animals_pkey" PRIMARY KEY, btree (id, name)
    "animals_name_key" UNIQUE CONSTRAINT, btree (name)
Number of partitions: 3 (Use \d+ to list them.)

Using “\d+” will even show you more information:

postgres=# \d+ animals
                                       Partitioned table "public.animals"
 Column |  Type   | Collation | Nullable |           Default            | Storage  | Stats target | Description 
--------+---------+-----------+----------+------------------------------+----------+--------------+-------------
 id     | integer |           | not null | generated always as identity | plain    |              | 
 name   | text    |           | not null |                              | extended |              | 
Partition key: LIST (name)
Indexes:
    "animals_pkey" PRIMARY KEY, btree (id, name)
    "animals_name_key" UNIQUE CONSTRAINT, btree (name)
    "animals_i1" btree (name)
Partitions: animals_cats FOR VALUES IN ('cats'),
            animals_dogs FOR VALUES IN ('dogs'),
            animals_elephants FOR VALUES IN ('elephant')

Now with PostgreSQL 12 there are three new functions which you can use to get information about your partitioned tables and the partitions themselves. The first one will give you the partition tree:

postgres=# select pg_partition_tree('animals');
        pg_partition_tree        
---------------------------------
 (animals,,f,0)
 (animals_elephants,animals,t,1)
 (animals_dogs,animals,t,1)
 (animals_cats,animals,t,1)
(4 rows)
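
Since pg_partition_tree() returns a set of records with the columns relid, parentrelid, isleaf and level, you can also put it into the FROM clause and combine it with other functions. A small sketch that adds the size of each relation in the tree:

select relid, parentrelid, isleaf, level
     , pg_size_pretty(pg_relation_size(relid)) as size
  from pg_partition_tree('animals');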

The second one will give you the ancestor relations of the given partition:

postgres=# select pg_partition_ancestors('animals_dogs');
 pg_partition_ancestors 
------------------------
 animals_dogs
 animals
(2 rows)

The third one will give you the root for a given partition:

postgres=# select pg_partition_root('animals_cats');
 pg_partition_root 
-------------------
 animals
(1 row)

All of them do also work for partitioned indexes:

postgres=# create index animals_i1 on animals (name);
CREATE INDEX
postgres=# select pg_partition_tree('animals_i1');
              pg_partition_tree              
---------------------------------------------
 (animals_i1,,f,0)
 (animals_cats_name_idx,animals_i1,t,1)
 (animals_dogs_name_idx,animals_i1,t,1)
 (animals_elephants_name_idx,animals_i1,t,1)
(4 rows)

Nice.


WebLogic – Upgrade of Java

Sat, 2019-06-01 03:05

If you are used to Java, you probably know that important security fixes are published every quarter in the Java Critical Patch Update (CPU) and that it's really important to keep your installation up to date. Upgrading the Java installation itself is always easy: you just deploy the new version. In most cases, it's also pretty easy to update the components that are using Java (e.g. Application Servers like Apache Tomcat), since they are, most of the time, relying on an environment variable (JAVA_HOME) to know which Java should be used. On the other hand, there is the WebLogic case and that's where all the "fun" is…

In a previous blog, I talked about the JAVA_HOME management inside the WebLogic Server files. If you want to upgrade the Java version that is used by your WebLogic Server, then you basically have three choices so let’s see that in detail:

 

I. Use a symlink for the JAVA_HOME

You might be tempted to do something like that:

[weblogic@weblogic_server_01 ~]$ cd /app/Java/
[weblogic@weblogic_server_01 Java]$
[weblogic@weblogic_server_01 Java]$ ls -l
total 0
drwxr-xr-x. 1 weblogic weblogic 17 May 18 13:55 jdk1.8.0_151
drwxr-xr-x. 1 weblogic weblogic 17 May 18 14:02 jdk1.8.0_192
[weblogic@weblogic_server_01 Java]$
[weblogic@weblogic_server_01 Java]$ ln -s jdk1.8.0_151/ jdk
[weblogic@weblogic_server_01 Java]$ ls -l
total 0
lrwxrwxrwx. 1 weblogic weblogic 13 May 18 14:05 jdk -> jdk1.8.0_151/
drwxr-xr-x. 1 weblogic weblogic 17 May 18 13:55 jdk1.8.0_151
drwxr-xr-x. 1 weblogic weblogic 17 May 18 14:02 jdk1.8.0_192
[weblogic@weblogic_server_01 Java]$
[weblogic@weblogic_server_01 Java]$ JAVA_HOME=/app/Java/jdk
[weblogic@weblogic_server_01 Java]$
[weblogic@weblogic_server_01 Java]$ $JAVA_HOME/bin/java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
[weblogic@weblogic_server_01 Java]$
[weblogic@weblogic_server_01 Java]$ rm jdk
[weblogic@weblogic_server_01 Java]$ ln -s jdk1.8.0_192/ jdk
[weblogic@weblogic_server_01 Java]$ ls -l
total 0
lrwxrwxrwx. 1 weblogic weblogic 13 May 18 14:06 jdk -> jdk1.8.0_192/
drwxr-xr-x. 1 weblogic weblogic 17 May 18 13:55 jdk1.8.0_151
drwxr-xr-x. 1 weblogic weblogic 17 May 18 14:02 jdk1.8.0_192
[weblogic@weblogic_server_01 Java]$
[weblogic@weblogic_server_01 Java]$ $JAVA_HOME/bin/java -version
java version "1.8.0_192"
Java(TM) SE Runtime Environment (build 1.8.0_192-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.192-b12, mixed mode)
[weblogic@weblogic_server_01 Java]$

 

And then you would use “/app/Java/jdk” (a symlink) for WebLogic. Please don’t, it’s a terrible idea!

You might think that using a symlink is a good idea and I thought that too at the beginning… But Oracle officially doesn’t support it and worse than that, it will actually not stay. You can technically use a symlink for your WebLogic Server, it will run (even if it’s not officially supported) but as soon as you are going to run an installer/configurator, it will actually overwrite and replace the symlink path (“/app/Java/jdk“) in WebLogic files with the path of the target folder (“/app/Java/jdk1.8.0_192“).

It’s unfortunate, really: WebLogic will run as planned and use “/app/Java/jdk/bin/java” at the beginning, but as soon as you upgrade WebLogic or run an installer, you will find out that WebLogic switched to “/app/Java/jdk1.8.0_192/bin/java” on its own and overwrote the configuration that you put in place.

If you are using WLS 12.2.1.3 and you therefore have access to the setProperty.sh script (see section II.a below), it will also not accept a symlink, it will just set the JAVA_HOME value to the target folder behind the symlink:

[weblogic@weblogic_server_01 bin]$ ./setProperty.sh -name JAVA_HOME -value /app/Java/jdk
Property JAVA_HOME successfully set to "/app/Java/jdk1.8.0_192"
[weblogic@weblogic_server_01 bin]$

 

So, in summary, don’t use a symlink: it will be pretty for a few minutes/hours/days but in the end it won’t work anymore and you will therefore lose the benefit of using it.

 

II. Manually replace file content with the new JAVA_HOME

Here you have two choices again:

a. Minimal update

As mentioned in my previous blog, there are some specific configuration files that are used when starting components. If you want to change the path of your JAVA_HOME without too much work, a quick (& a little bit dirty) way is to update only the necessary files:

  • ${DOMAIN_HOME}/bin/setDomainEnv.sh
  • ${DOMAIN_HOME}/nodemanager/nodemanager.properties
  • (WLS 12.1.x) ${MW_HOME}/oracle_common/common/bin/commEnv.sh
  • (WLS 12.2.x) ${DOMAIN_HOME}/bin/setNMJavaHome.sh
  • (WLS 12.2.1.2 and below) ${MW_HOME}/oracle_common/common/bin/commBaseEnv.sh
  • (WLS 12.2.1.3 and above) ${MW_HOME}/oui/.globalEnv.properties

So, as you can see, if you want to update the JAVA_HOME in a very simple WebLogic Server, you need to change three or four files, and which ones differ depending on the version of WebLogic that you are using…

If you are using WLS 12.2.1.3, you don’t have to update the file “${MW_HOME}/oui/.globalEnv.properties” directly; there is actually a utility provided since 12.2.1.3 that will do it for you (more information here):

[weblogic@weblogic_server_01 ~]$ cd ${MW_HOME}/oui/bin/
[weblogic@weblogic_server_01 bin]$
[weblogic@weblogic_server_01 bin]$ cat ../.globalEnv.properties
#This file is automatically generated
#Sat May 18 14:34:24 UTC 2019
JAVA_HOME=/app/Java/jdk1.8.0_151
JAVA_HOME_1_8=/app/Java/jdk1.8.0_151
JVM_64=
[weblogic@weblogic_server_01 bin]$
[weblogic@weblogic_server_01 bin]$ ./getProperty.sh JAVA_HOME
/app/Java/jdk1.8.0_151
[weblogic@weblogic_server_01 bin]$
[weblogic@weblogic_server_01 bin]$ ./setProperty.sh -name JAVA_HOME -value /app/Java/jdk1.8.0_192
Property JAVA_HOME successfully set to "/app/Java/jdk1.8.0_192"
[weblogic@weblogic_server_01 bin]$
[weblogic@weblogic_server_01 bin]$ cat ../.globalEnv.properties
#This file is automatically generated
#Sat May 18 15:02:54 UTC 2019
JAVA_HOME=/app/Java/jdk1.8.0_192
JAVA_HOME_1_8=/app/Java/jdk1.8.0_151
JVM_64=
[weblogic@weblogic_server_01 bin]$

 

This solution works but it’s not very clean. You will be left with references to the old path in a lot of files, so at some point you (or someone else) might wonder which Java is actually used. That’s why I personally don’t really like this solution.

b. Cleaner update

Instead of updating only the minimal files, what about being more thorough and updating all the files that mention the old Java path? This requires some (not so complicated) commands to find all relevant files, excluding logs, previous patch files, binaries and a few other things, and then you can just replace the old Java path with the new one.

This is a better solution but it has a few drawbacks:

  • You absolutely need to know what you are doing and you need to test it thoroughly. Don’t try sed commands (or whatever you prefer) to update files directly if you aren’t sure about what you are doing in the first place
  • It takes some time to list all files and update them properly, especially if there are a lot of files under $DOMAIN_HOME or $MW_HOME/$ORACLE_HOME. Even if you automate it, since your Operating System still needs to go through all the existing files and since this needs to be done while WebLogic isn’t running, it will increase the downtime needed
  • This would need to be done for each and every Java patch/update

Despite these drawbacks, I still prefer this solution because it’s much cleaner. No end-user would care if there are still references to an old Java in the WebLogic files, as long as the Application is working for them, but I would know that it’s there and it would drive me crazy (a little bit :D).

Below are some example commands that can be used to do that. Please test them and update them as per your need! If you are using different components or additional ones, you might want to take that into account.

[weblogic@weblogic_server_01 ~]$ OLD_JAVA="/app/Java/jdk1.8.0_151"
[weblogic@weblogic_server_01 ~]$ NEW_JAVA="/app/Java/jdk1.8.0_192"
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ grep -R "${OLD_JAVA}" $DOMAIN_HOME | grep -vE "^[^:]*.log:|^[^:]*/logs/|^[^:]*/nohupLogs/|^[^:]*/.patch_storage/|^Binary file " > ~/listJavaBefore_DOMAIN_HOME
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ grep -R "${OLD_JAVA}" $ORACLE_HOME | grep -vE "^[^:]*.log:|^[^:]*/logs/|^[^:]*/nohupLogs/|^[^:]*/.patch_storage/|^Binary file " > ~/listJavaBefore_ORACLE_HOME
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ awk -F':' '{print $1}' ~/listJavaBefore_DOMAIN_HOME | sort -u; echo
...

[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ awk -F':' '{print $1}' ~/listJavaBefore_ORACLE_HOME | sort -u; echo
...

[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ while read line; do FILE_TO_UPDATE=`echo ${line} | awk -F':' '{print $1}'`; sed -i "s,${OLD_JAVA},${NEW_JAVA},g" ${FILE_TO_UPDATE}; done < ~/listJavaBefore_DOMAIN_HOME
[weblogic@weblogic_server_01 ~]$
[weblogic@weblogic_server_01 ~]$ while read line; do FILE_TO_UPDATE=`echo ${line} | awk -F':' '{print $1}'`; sed -i "s,${OLD_JAVA},${NEW_JAVA},g" ${FILE_TO_UPDATE}; done < ~/listJavaBefore_ORACLE_HOME
[weblogic@weblogic_server_01 ~]$

 

The grep commands above list all occurrences of the $OLD_JAVA path, the awk commands list all the files that contain one or more occurrences and the while loops replace all occurrences with the value of $NEW_JAVA. Once that’s done, you can check again with the grep command using both OLD_JAVA and NEW_JAVA to ensure that all references to the old JAVA_HOME have been properly replaced with the new JAVA_HOME. Instead of OLD_JAVA, you can also use a regex, if you prefer, in case you had several Java versions in the past, so it matches all of them and not just the last one. Something like ‘grep “${JAVA_BASE}/[a-zA-Z0-9._]*” …‘ for example or ‘grep “${JAVA_BASE}/jdk[0-9._]*” …‘
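
To double-check the result, the same grep commands can be re-run against both the old and the new path; the counts for the old path should now be 0 (a sketch based on the commands above):

[weblogic@weblogic_server_01 ~]$ grep -R "${OLD_JAVA}" $DOMAIN_HOME | grep -vE "^[^:]*.log:|^[^:]*/logs/|^[^:]*/nohupLogs/|^[^:]*/.patch_storage/|^Binary file " | wc -l
[weblogic@weblogic_server_01 ~]$ grep -R "${OLD_JAVA}" $ORACLE_HOME | grep -vE "^[^:]*.log:|^[^:]*/logs/|^[^:]*/nohupLogs/|^[^:]*/.patch_storage/|^Binary file " | wc -l
[weblogic@weblogic_server_01 ~]$ grep -R "${NEW_JAVA}" $DOMAIN_HOME $ORACLE_HOME | grep -vE "^[^:]*.log:|^[^:]*/logs/|^[^:]*/nohupLogs/|^[^:]*/.patch_storage/|^Binary file " | wc -l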

 

III. Use a generic folder

In my opinion, none of the above solutions really matters because this one is the best and it should be the one that everybody uses. What I mean by a “generic folder” is simply a folder whose name never changes. For that purpose, I’m usually very happy with symlinks for other Applications/Application Servers but for WebLogic this needs to be a real folder.

In case of an upgrade, you will need to execute both this section III as well as section II.b so that all references point to the generic folder. On the other hand, in case of a new installation, it’s just all about using the generic folder’s path while installing WebLogic and then you will never have anything else to do. You can see this section III as being somehow the opposite of section I.

Below, I’m just taking the status from section I where we have a symlink named “jdk” that points to “jdk1.8.0_192“. I’m changing that: removing the symlink, renaming the folder “jdk1.8.0_151” into “jdk” and then creating a new symlink named “jdk1.8.0_151” that points to “jdk”, so it’s the other way around. With that, we have a generic folder named “jdk” that can hold any Java version and we will never have to change the JAVA_HOME again:

[weblogic@weblogic_server_01 Java]$ ls -l
total 0
lrwxrwxrwx. 1 weblogic weblogic 13 May 18 14:06 jdk -> jdk1.8.0_192/
drwxr-xr-x. 1 weblogic weblogic 17 May 18 13:55 jdk1.8.0_151
drwxr-xr-x. 1 weblogic weblogic 17 May 18 14:02 jdk1.8.0_192
[weblogic@weblogic_server_01 Java]$
[weblogic@weblogic_server_01 Java]$ $JAVA_HOME/bin/java -version
java version "1.8.0_192"
Java(TM) SE Runtime Environment (build 1.8.0_192-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.192-b12, mixed mode)
[weblogic@weblogic_server_01 Java]$
[weblogic@weblogic_server_01 Java]$ rm jdk
[weblogic@weblogic_server_01 Java]$ mv jdk1.8.0_151 jdk
[weblogic@weblogic_server_01 Java]$ ln -s jdk/ jdk1.8.0_151
[weblogic@weblogic_server_01 Java]$
[weblogic@weblogic_server_01 Java]$ ls -l
total 0
drwxr-xr-x. 1 weblogic weblogic 17 May 18 13:55 jdk
lrwxrwxrwx. 1 weblogic weblogic 13 May 18 16:22 jdk1.8.0_151 -> jdk/
drwxr-xr-x. 1 weblogic weblogic 17 May 18 14:02 jdk1.8.0_192
[weblogic@weblogic_server_01 Java]$
[weblogic@weblogic_server_01 Java]$ $JAVA_HOME/bin/java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
[weblogic@weblogic_server_01 Java]$

 

As mentioned above, at this point, if you are going to install a new WebLogic Server, then just use “/app/Java/jdk” as the JAVA_HOME and you will never have to touch any WebLogic files. If you already have a WebLogic Server installed and it is using something other than “/app/Java/jdk“, then you will have to do section II.b. Section II.b is designed to be executed each time you change your Java version but here, since the folder name will always stay the same, you only need to do it once.

To upgrade your version of Java once you are using the generic folder, it’s that simple:

[weblogic@weblogic_server_01 Java]$ ls -l
total 0
drwxr-xr-x. 1 weblogic weblogic 17 May 18 13:55 jdk
lrwxrwxrwx. 1 weblogic weblogic 13 May 18 16:22 jdk1.8.0_151 -> jdk/
drwxr-xr-x. 1 weblogic weblogic 17 May 18 14:02 jdk1.8.0_192
[weblogic@weblogic_server_01 Java]$
[weblogic@weblogic_server_01 Java]$ $JAVA_HOME/bin/java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
[weblogic@weblogic_server_01 Java]$
[weblogic@weblogic_server_01 Java]$ rm jdk1.8.0_151
[weblogic@weblogic_server_01 Java]$ mv jdk jdk1.8.0_151
[weblogic@weblogic_server_01 Java]$ mv jdk1.8.0_192 jdk
[weblogic@weblogic_server_01 Java]$ ln -s jdk/ jdk1.8.0_192
[weblogic@weblogic_server_01 Java]$
[weblogic@weblogic_server_01 Java]$ ls -l
total 0
drwxr-xr-x. 1 weblogic weblogic 17 May 18 14:02 jdk
drwxr-xr-x. 1 weblogic weblogic 17 May 18 13:55 jdk1.8.0_151
lrwxrwxrwx. 1 weblogic weblogic 13 May 18 16:28 jdk1.8.0_192 -> jdk/
[weblogic@weblogic_server_01 Java]$
[weblogic@weblogic_server_01 Java]$ $JAVA_HOME/bin/java -version
java version "1.8.0_192"
Java(TM) SE Runtime Environment (build 1.8.0_192-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.192-b12, mixed mode)
[weblogic@weblogic_server_01 Java]$

 

Once that’s done, simply restart the WebLogic processes and you are done: Java has been upgraded without effort, the WebLogic files still reference the correct JAVA_HOME and it will stay clean, efficient and working, forever.

 


WebLogic – JAVA_HOME in WebLogic, a nightmare?

Sat, 2019-06-01 03:00

Everybody knows Java but not everybody loves Java. With everything Oracle is currently doing regarding Java, I can’t say I’m blaming them… But that’s not the topic of this blog! hurrah. Well actually I’m not sure the topic of this blog is much better since I will talk about the management of the JAVA_HOME environment variable with WebLogic and more specifically a very simple WebLogic Server in standalone, nothing more. I always wanted to write a blog about how to properly upgrade Java when using WebLogic so I thought about writing this blog first, as an introduction.

Before going deeply into how WebLogic manages the JAVA_HOME (and therefore which Java version is used), in case you are wondering below, these are some possible values for JAVA_VENDOR: Oracle, HP, IBM, Sun. I assume most people aren’t using this environment variable; they are just using JAVA_HOME and expect it to work. Let’s see what WebLogic thinks about that…

 

I. JAVA_HOME in WLS 12.1.3

I didn’t check for all versions since it’s a quite lengthy and boring process, but I believe this section applies to all 12.1.x versions. In all sections below, ${MW_HOME} is the same as ${ORACLE_HOME} and ${WL_HOME} is ${ORACLE_HOME}/wlserver.

If you want to start a NodeManager, this is what WebLogic 12.1.3 will do in regard to the JAVA_HOME variable:

  • execute ${DOMAIN_HOME}/bin/startNodeManager.sh
    • execute ${WL_HOME}/server/bin/startNodeManager.sh
      • source ${MW_HOME}/oracle_common/common/bin/commEnv.sh ~~> If the environment variable JAVA_HOME is set, that’s good… But if JAVA_VENDOR isn’t set as well, then your JAVA_HOME isn’t used and instead it uses a hardcoded value set in this file directly. If the new JAVA_HOME’s folder (It specifically checks if it’s a FOLDER !!!) doesn’t exist, then it takes the environment variable JAVA_HOME that you defined but if the folder exists, it keeps the JAVA_HOME with the hardcoded value.

I believe this was done to hide what a misery it is to handle the JAVA_HOME in WebLogic… Basically, if you upgraded your Java and replaced the JAVA_HOME environment variable without touching the WebLogic files, WebLogic would try to use the old Java, and if the folder isn’t there anymore because you removed it, then it would use the value coming from your JAVA_HOME… Why would they do that? Don’t ask me.

 

If you want to start a Managed Server (it’s the same thing for the AdminServer, except the first line), this is what WebLogic 12.1.3 will do in regard to the JAVA_HOME variable:

  • execute ${DOMAIN_HOME}/bin/startManagedWebLogic.sh
    • execute ${DOMAIN_HOME}/bin/startWebLogic.sh
      • source ${DOMAIN_HOME}/bin/setDomainEnv.sh ~~> It doesn’t care if JAVA_HOME is set, it overwrites it. The value will depend on VM_TYPE and JAVA_VENDOR but it overwrites it using the hardcoded value from this file directly. At this point, JAVA_HOME is set and JAVA_VENDOR is overwritten, except if VM_TYPE=JRockit in which case JAVA_HOME is “@DEFAULT_BEA_JAVA_HOME” and JAVA_VENDOR isn’t set…
        • source ${WL_HOME}/common/bin/commEnv.sh
          • source ${MW_HOME}/oracle_common/common/bin/commEnv.sh ~~> Same script as for the NodeManager, except that this time, JAVA_HOME was overwritten by the setDomainEnv.sh script already… So, if JAVA_HOME OR JAVA_VENDOR isn’t set, then JAVA_HOME is again overwritten by yet another hardcoded value…
        • source ${DOMAIN_HOME}/bin/setStartupEnv.sh
        • source ${DOMAIN_HOME}/bin/setUserOverrides.sh

Summary: It’s a nightmare isn’t it? If you want to be sure that your environment variable JAVA_HOME is used in the end without being overwritten, then… Good luck because that’s not possible. For the NodeManager, it would be possible since you just need to define both JAVA_HOME and JAVA_VENDOR (or make sure you remove any old JDK from the FileSystem so it falls back to your JAVA_HOME) but for the Managed Servers, it’s not possible. Well actually it would be possible by setting your JAVA_HOME variable into the “${DOMAIN_HOME}/bin/setUserOverrides.sh” file… Since this file is loaded at the very end, it would use your values but please don’t do that, it’s so ugly!
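
For completeness, this is roughly what that (ugly, discouraged) setUserOverrides.sh workaround would look like; a sketch only, the path used here being just an example:

# appended to ${DOMAIN_HOME}/bin/setUserOverrides.sh -- forces the values at the very end of the chain
JAVA_HOME="/app/Java/jdk1.8.0_192"
JAVA_VENDOR="Oracle"
export JAVA_HOME JAVA_VENDOR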

You think that’s complicated? Please read below, there is more.

 

II. JAVA_HOME in WLS 12.2.1.2

In WLS 12.2, Oracle changed the way the JAVA_HOME is handled for the NodeManager and for the Managed Servers by introducing several things:

  • A new script is used to set the Java Home for the NodeManager: {DOMAIN_HOME}/bin/setNMJavaHome.sh
  • Two new scripts are used to set the environment: the script ${MW_HOME}/oracle_common/common/bin/commEnv.sh now doesn’t contain anything anymore but instead, it loads ${MW_HOME}/oracle_common/common/bin/commBaseEnv.sh as well as ${MW_HOME}/oracle_common/common/bin/commExtEnv.sh for both the NodeManager and the Managed Servers

 

If you want to start a NodeManager, this is what WebLogic 12.2.1.2 will do in regard to the JAVA_HOME variable:

  • execute ${DOMAIN_HOME}/bin/startNodeManager.sh
    • source ${DOMAIN_HOME}/bin/setNMJavaHome.sh ~~> It doesn’t care if JAVA_HOME is set, it overwrites it. The value will depend on VM_TYPE and JAVA_VENDOR but it overwrites it using the hardcoded value from this file directly. At this point, JAVA_HOME is set and JAVA_VENDOR is overwritten, except if VM_TYPE=JRockit in which case JAVA_HOME is empty and JAVA_VENDOR isn’t set…
    • execute ${WL_HOME}/server/bin/startNodeManager.sh
      • source ${MW_HOME}/oracle_common/common/bin/commEnv.sh
        • source ${MW_HOME}/oracle_common/common/bin/commBaseEnv.sh ~~> Different script but same piece of code as for the NodeManager in WLS 12.1.3, except that this time, JAVA_HOME was overwritten by the setNMJavaHome.sh script already… So, if JAVA_HOME OR JAVA_VENDOR isn’t set, then JAVA_HOME is again overwritten by yet another hardcoded value…
        • source ${MW_HOME}/oracle_common/common/bin/commExtEnv.sh

 

If you want to start a Managed Server, this is what WebLogic 12.2.1.2 will do in regard to the JAVA_HOME variable:

  • execute ${DOMAIN_HOME}/bin/startManagedWebLogic.sh
    • execute ${DOMAIN_HOME}/bin/startWebLogic.sh
      • source ${DOMAIN_HOME}/bin/setDomainEnv.sh ~~> It doesn’t care if JAVA_HOME is set, it overwrites it. The value will depend on VM_TYPE and JAVA_VENDOR but it overwrites it using the hardcoded value from this file directly. At this point, JAVA_HOME is set and JAVA_VENDOR is overwritten, except if VM_TYPE=JRockit in which case JAVA_HOME is “@DEFAULT_BEA_JAVA_HOME” and JAVA_VENDOR isn’t set…
        • source ${MW_HOME}/oracle_common/common/bin/commEnv.sh
          • source ${MW_HOME}/oracle_common/common/bin/commBaseEnv.sh ~~> Same script as for the NodeManager, except that this time, JAVA_HOME was overwritten by the setDomainEnv.sh script already… So, if JAVA_HOME OR JAVA_VENDOR isn’t set, then JAVA_HOME is again overwritten by yet another hardcoded value…
          • source ${MW_HOME}/oracle_common/common/bin/commExtEnv.sh
        • source ${DOMAIN_HOME}/bin/setStartupEnv.sh
        • source ${DOMAIN_HOME}/bin/setUserOverrides.sh

 

III. JAVA_HOME in WLS 12.2.1.3

In WLS 12.2.1.3, Oracle again changed the way the JAVA_HOME is handled for the NodeManager and for the Managed Servers by introducing one more thing: instead of hardcoding the JAVA_HOME in ${MW_HOME}/oracle_common/common/bin/commBaseEnv.sh, it now retrieves the value using a script (${MW_HOME}/oui/bin/getProperty.sh) that reads a specific file (${MW_HOME}/oui/.globalEnv.properties).

If you want to start a NodeManager, this is what WebLogic 12.2.1.3 will do in regard to the JAVA_HOME variable:

  • execute ${DOMAIN_HOME}/bin/startNodeManager.sh
    • source ${DOMAIN_HOME}/bin/setNMJavaHome.sh ~~> Same script as for the NodeManager in WLS 12.2.1.2
    • execute ${WL_HOME}/server/bin/startNodeManager.sh
      • source ${MW_HOME}/oracle_common/common/bin/commEnv.sh
        • source ${MW_HOME}/oracle_common/common/bin/commBaseEnv.sh ~~> (Almost) Same script as for the NodeManager in WLS 12.2.1.2, except that this time, JAVA_HOME is again overwritten by retrieving properties value using the new script
          • execute ${MW_HOME}/oui/bin/getProperty.sh JAVA_HOME
            • read ${MW_HOME}/oui/.globalEnv.properties ~~> Always overwrite JAVA_HOME with the hardcoded value
        • source ${MW_HOME}/oracle_common/common/bin/commExtEnv.sh

 

If you want to start a Managed Server, this is what WebLogic 12.2.1.3 will do in regard to the JAVA_HOME variable:

  • execute ${DOMAIN_HOME}/bin/startManagedWebLogic.sh
    • execute ${DOMAIN_HOME}/bin/startWebLogic.sh
      • source ${DOMAIN_HOME}/bin/setDomainEnv.sh ~~> Same script as for the Managed Servers in WLS 12.2.1.2
        • source ${MW_HOME}/oracle_common/common/bin/commEnv.sh
          • source ${MW_HOME}/oracle_common/common/bin/commBaseEnv.sh ~~> (Almost) Same script as for the Managed Servers in WLS 12.2.1.2, except that this time, JAVA_HOME is again overwritten by retrieving properties value using the new script
            • execute ${MW_HOME}/oui/bin/getProperty.sh JAVA_HOME
              • read ${MW_HOME}/oui/.globalEnv.properties ~~> Always overwrite JAVA_HOME with the hardcoded value
          • source ${MW_HOME}/oracle_common/common/bin/commExtEnv.sh
        • source ${DOMAIN_HOME}/bin/setStartupEnv.sh
        • source ${DOMAIN_HOME}/bin/setUserOverrides.sh
        • source ${DOMAIN_HOME}/bin/setUserOverridesLate.sh

 

Easy right? Joke aside, even if it’s a huge mess, with WLS 12.2.1.3, you now have the file “${MW_HOME}/oui/.globalEnv.properties” which is used by both the NodeManager as well as the Admin/Managed Servers and therefore if you overwrite the value in this file, you would think that you are good to go right? Well… “Why would it be so easy?”, that’s probably what Oracle thought when they were thinking about how they should handle the JAVA_HOME internally ;).

Instead of going top to bottom, let’s go the other way around. Let’s say that you updated the value of JAVA_HOME inside the file “${MW_HOME}/oui/.globalEnv.properties”. This file is read using the “${MW_HOME}/oui/bin/getProperty.sh” script which is executed by “${MW_HOME}/oracle_common/common/bin/commBaseEnv.sh”. To execute the getProperty.sh, you need either the JAVA_HOME or the JAVA_VENDOR to be not set or empty.

Before the commBaseEnv.sh script, the only scripts that contain references to JAVA_HOME or JAVA_VENDOR are:

  • For the NodeManager: ${DOMAIN_HOME}/bin/setNMJavaHome.sh
    • JAVA_HOME can be empty only if VM_TYPE=JRockit, otherwise it’s always overwritten with the hardcoded values
    • JAVA_VENDOR can be empty only if VM_TYPE=JRockit, otherwise it’s always overwritten with “Oracle”
    • Conclusion: To be able to load the value from “${MW_HOME}/oui/.globalEnv.properties”, JAVA_HOME or JAVA_VENDOR needs to be empty so you need to use VM_TYPE=JRockit if you want to use this new “feature” added in WLS 12.2.1.3… And obviously, if you aren’t using a JRockit VM (which, by the way, doesn’t exist anymore: latest stable release was 7 years ago…), then the NodeManager won’t start so good luck trying to use this “${MW_HOME}/oui/.globalEnv.properties” file because it’s impossible ;).
  • For the Managed Servers: ${DOMAIN_HOME}/bin/setDomainEnv.sh
    • JAVA_HOME cannot be empty or not set because of the hardcoded values in this file and even if VM_TYPE=JRockit, then JAVA_HOME is set to “@DEFAULT_BEA_JAVA_HOME”
    • JAVA_VENDOR can be empty only if VM_TYPE=JRockit, otherwise it’s always overwritten with “Oracle”
    • Conclusion: To be able to load the value from “${MW_HOME}/oui/.globalEnv.properties”, JAVA_VENDOR needs to be empty so you need to use VM_TYPE=JRockit… Same conclusion as for the NodeManager, it’s not possible.

So, why did they introduce this new file in WLS 12.2.1.3? Well I can only hope that it is used somewhere! One thing is certain, it’s not for the NodeManager, AdminServer or Managed Servers (there is not much left… :D). If you are aware of anything that would make some use of that, don’t hesitate to share!

So, in summary, WebLogic basically doesn’t care if you set a JAVA_HOME environment variable; it will just use its hardcoded values anyway, contrary to most Application Servers, which trust the people installing/configuring them to set the JAVA_HOME to the expected value. In a future blog, I will talk about the upgrade process of Java in the scope of a WebLogic Server, to make it smoother (well, as much as technically possible… ;)).
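
If you ever need to verify which Java a running WebLogic process really ended up with, regardless of what your environment variables say, looking at the process itself is a quick check. A sketch (the process name pattern is an assumption, adjust it to your server names):

[weblogic@weblogic_server_01 ~]$ pgrep -f "weblogic.Name=" | while read pid; do echo "${pid}: $(readlink /proc/${pid}/exe)"; done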

 


Documentum – DOCUMENTUM_SHARED is dead?

Sat, 2019-06-01 02:30

In June last year, I did my first manual installation (so without docker) of Documentum 16.4 and I was testing it with PostgreSQL. I quickly realized that there were some changes in Documentum and, unfortunately, I don’t believe that it’s for the best! In this blog, I will talk about the DOCUMENTUM_SHARED environment variable. I tested that almost a year ago with a PostgreSQL binary but it’s the same for all Documentum 16.4 binaries. This isn’t a very technical blog, it’s more like a small reflection about what OpenText is currently doing.

 

I. DOCUMENTUM_SHARED is dead

 

In Documentum 7.3 or below, you could define an environment variable named DOCUMENTUM_SHARED before installing Documentum (see this blog for example) which would then be used to define where the Shared components of a Content Server should be installed. This includes mainly the following:

  • The DFC properties and libraries
  • The Java Method Server (JBoss/WildFly + all Applications)
  • The Java binaries

Starting with Documentum 16.4, this environment variable has been deprecated (see KB11002330) and Documentum will simply ignore it. So, you will end up with all the above components being installed right under $DOCUMENTUM, with everything else. I don’t like that because on Linux we are used to splitting things and therefore used to having only a few folders under $DOCUMENTUM and a few others under $DOCUMENTUM_SHARED. Now everything is put under $DOCUMENTUM, even the DFC files/folders. By default in your 16.4 dfc.properties, you will have a definition of “dfc.data.dir=…” which points to $DOCUMENTUM as well ($DOCUMENTUM_SHARED before), so you will end up with a lot of ugly stuff right under $DOCUMENTUM and it becomes messy! These are the DFC files/folders I’m talking about:

  • $DOCUMENTUM/apptoken/
  • $DOCUMENTUM/cache/
  • $DOCUMENTUM/checkout/
  • $DOCUMENTUM/export/
  • $DOCUMENTUM/identityInterprocessMutex.lock
  • $DOCUMENTUM/local/
  • $DOCUMENTUM/logs/

Obviously you can change the definition of the “dfc.data.dir” so this will be put elsewhere and you should really do that for all dfc.properties files, but that’s kind of surprising. When I’m doing a review of an environment or an assessment of some sort, the first thing I’m always doing is going to the $DOCUMENTUM folder and listing its content. If this folder is clean (no log file, no backup, no temp files, no cache files, aso…), then there is a good chance that the complete installation is more or less clean as well. If there is a lot of mess even on the $DOCUMENTUM folder, then I know that it’ll be a long day.
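
To keep $DOCUMENTUM clean, the dfc.data.dir change mentioned above is straightforward. A hypothetical excerpt of a dfc.properties (the target path is an assumption):

# relocate the DFC data folders (cache, checkout, export, local, ...) outside of $DOCUMENTUM
dfc.data.dir=/app/dctm/dfc_data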

 

II. long live DOCUMENTUM_SHARED! (for now)

 

So why am I saying that? Well, as always when you try to deprecate something, there are leftovers here and there and it’s pretty hard to change people’s minds… Take for example “docbase” vs “repository”… Since Documentum 7.0, a “docbase” is officially called a “repository”, yet a lot of people still use “docbase” and even Documentum does (there are a lot of remaining references everywhere). I believe it will be the same for DOCUMENTUM_SHARED.

At the moment in Documentum 16.4, there are the following references to DOCUMENTUM_SHARED:

  • D2 16.4 still uses DOCUMENTUM_SHARED to know where the components are installed. This is used to deploy D2 libraries into the JMS mainly. I didn’t check but I guess it will be the same for the BPM/xCP
  • MigrationUtil (change docbase ID, docbase name, server config name, aso…) still uses DOCUMENTUM_SHARED to know where the dfc.properties is, where the JMS is, aso…
  • dm_set_server_env scripts still use DOCUMENTUM_SHARED for defining other variables like LD_LIBRARY_PATH or CLASSPATH

Because of these remaining references (and probably many more), OpenText didn’t just completely remove the DOCUMENTUM_SHARED variable… No, it’s still there, but they put it, with a hardcoded value (same as $DOCUMENTUM), directly into the dm_set_server_env scripts so the other references keep working properly.

OpenText probably just didn’t want to remove the environment variable completely right away, so they are proceeding step by step: first ignoring it, and they will probably remove it completely in a future major version. Until then, I will continue to define my DOCUMENTUM_SHARED environment variable but for Documentum 16.4 I will set it to the same value as DOCUMENTUM because we never know, maybe in the next version the variable will come back… ;).
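
A sketch of what that could look like in the installation owner’s profile for a 16.4 environment (paths are assumptions):

export DOCUMENTUM=/app/dctm/server
# deprecated in 16.4 but kept for the remaining references (D2, MigrationUtil, dm_set_server_env, ...)
export DOCUMENTUM_SHARED=$DOCUMENTUM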

 


PostgreSQL 12: Control when generic plans are used

Fri, 2019-05-31 13:02

When you are using prepared statements in PostgreSQL you might get a custom or a generic plan. Custom plans come with the overhead of re-planning while generic plans avoid re-planning of the statement. As soon as you get a generic plan, that plan will be used for the lifetime of your session, and there are situations when you do not want that, as a generic plan might be more expensive than a custom plan. Starting with PostgreSQL 12 (which currently is in beta) you have more control over this.

Let’s use the same simple test case as in the previous post about custom and generic plans:

postgres=# select version();
                                                  version                                                  
-----------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
(1 row)

postgres=# create table demo ( a int, b text );
CREATE TABLE
postgres=# insert into demo select i, 'aaa' from generate_series (1,100) i;
INSERT 0 100
postgres=# insert into demo select i, 'bbb' from generate_series (101,200) i;
INSERT 0 100
postgres=# insert into demo select i, 'ccc' from generate_series (201,300) i;
INSERT 0 100
postgres=# analyze demo;
ANALYZE
postgres=# 

When we prepare and then execute the statement below six times, we will get a generic plan for the sixth execution (note the “Filter: (b = $1)” in the last plan):

postgres=# prepare my_stmt as select * from demo where b = $1;
PREPARE
postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.027..0.107 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.431 ms
 Execution Time: 0.198 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.032..0.113 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.196 ms
 Execution Time: 0.155 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.032..0.113 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.168 ms
 Execution Time: 0.154 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.041..0.135 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.238 ms
 Execution Time: 0.193 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.031..0.112 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.169 ms
 Execution Time: 0.154 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.033..0.120 rows=100 loops=1)
   Filter: (b = $1)
   Rows Removed by Filter: 200
 Planning Time: 0.163 ms
 Execution Time: 0.163 ms
(5 rows)

PostgreSQL 12 will come with a new parameter to control that:

postgres=# show plan_cache_mode;
 plan_cache_mode 
-----------------
 auto
(1 row)
postgres=# select extra_desc from pg_settings where name = 'plan_cache_mode';
                                                                            extra_desc                                                                             
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Prepared statements can have custom and generic plans, and the planner will attempt to choose which is better.  This can be set to override the default behavior.
(1 row)

The default value is “auto”, which means the same behavior as in previous versions of PostgreSQL. If you want to force custom plans you can set it to “force_custom_plan”, or you can set it to “force_generic_plan” if you only want to see generic plans. Using the same example from above but setting the parameter to “force_custom_plan”, we will not see a generic plan anymore:

postgres=# set plan_cache_mode = force_custom_plan;
SET
postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.034..0.127 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.209 ms
 Execution Time: 0.172 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.152..0.236 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.170 ms
 Execution Time: 0.279 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.031..0.112 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.170 ms
 Execution Time: 0.152 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.029..0.122 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.206 ms
 Execution Time: 0.162 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.038..0.133 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.200 ms
 Execution Time: 0.244 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.032..0.114 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.169 ms
 Execution Time: 0.155 ms
(5 rows)

postgres=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.034..0.117 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning Time: 0.301 ms
 Execution Time: 0.162 ms
(5 rows)
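
The opposite setting works the same way: with “force_generic_plan” even the very first execution of the prepared statement uses the generic plan, which you can recognize by the parameter placeholder in the filter (“Filter: (b = $1)”) instead of the literal value. A sketch (plan output omitted):

postgres=# set plan_cache_mode = force_generic_plan;
SET
postgres=# explain (analyze) execute my_stmt ( 'aaa' );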

Nice addition that can help when you have generic plans that actually perform worse than custom plans.


Securely store passwords in PostgreSQL

Fri, 2019-05-31 00:25

Every application somehow needs to deal with passwords. Some use external authentication methods such as LDAP, others use the framework the database already provides and create users and roles. But it is also not uncommon that applications implement their own concept for managing users. If an application does this, it should be done the right way and passwords should never be stored in plain text in the database. PostgreSQL comes with a handy extension that supports you with that.

You might already be aware that PostgreSQL comes with a lot of additional modules by default. One of these modules is pgcrypto and it can be used for the use case described above: encryption and decryption of strings so you do not have to implement that on your own. Let's start with a simple table which contains usernames and their passwords:

postgres=# create table app_users ( id int generated always as identity ( cache 10 ) primary key
postgres(#                        , username text not null unique
postgres(#                        , password text not null
postgres(#                        );
CREATE TABLE
postgres=# \d app_users
                         Table "public.app_users"
  Column  |  Type   | Collation | Nullable |           Default            
----------+---------+-----------+----------+------------------------------
 id       | integer |           | not null | generated always as identity
 username | text    |           | not null | 
 password | text    |           | not null | 
Indexes:
    "app_users_pkey" PRIMARY KEY, btree (id)
    "app_users_username_key" UNIQUE CONSTRAINT, btree (username)
postgres=# 

Both the username and the password columns are implemented as plain text. If you keep it like that and just insert data, the password of course will be stored in plain text. So how can we use pgcrypto to improve that? Obviously the first step is to install the extension:

postgres=# create extension pgcrypto;
CREATE EXTENSION
postgres=# \dx
                   List of installed extensions
    Name    | Version |   Schema   |         Description          
------------+---------+------------+------------------------------
 pg_prewarm | 1.2     | public     | prewarm relation data
 pgcrypto   | 1.3     | public     | cryptographic functions
 plpgsql    | 1.0     | pg_catalog | PL/pgSQL procedural language
(3 rows)

Btw: There is a catalog view which you can use to list all available extensions:

postgres=# \d pg_available_extensions;
         View "pg_catalog.pg_available_extensions"
      Column       | Type | Collation | Nullable | Default 
-------------------+------+-----------+----------+---------
 name              | name |           |          | 
 default_version   | text |           |          | 
 installed_version | text | C         |          | 
 comment           | text |           |          | 

postgres=# select * from pg_available_extensions limit 3;
  name   | default_version | installed_version |                comment                 
---------+-----------------+-------------------+----------------------------------------
 plpgsql | 1.0             | 1.0               | PL/pgSQL procedural language
 plperl  | 1.0             |                   | PL/Perl procedural language
 plperlu | 1.0             |                   | PL/PerlU untrusted procedural language
(3 rows)

The function to use (provided by the pgcrypto module) for encrypting strings is crypt(). This function takes two arguments:

  • The actual string to encrypt
  • The salt to use (a random value) for encryption

Adding a user with an encrypted password is as easy as:

postgres=# insert into app_users (username, password) 
postgres-#        values ( 'myuser', crypt('mypassword', gen_salt('bf')) );
INSERT 0 1

In this case we used the Blowfish algorithm to generate the salt. You can also use md5, xdes and des.
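
A few gen_salt() variations as a quick sketch (the optional second argument for bf is the iteration count; higher values make the hashing slower and therefore harder to brute force):

select gen_salt('bf');       -- Blowfish, default iteration count
select gen_salt('bf', 10);   -- Blowfish with a higher iteration count
select gen_salt('md5');
select gen_salt('xdes');
select gen_salt('des');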

When we look at the password for our user we will see that it is not plain text anymore:

postgres=# select password from app_users where username = 'myuser';
                           password                           
--------------------------------------------------------------
 $2a$06$8wu4VWVubv/RBYBSuj.1TOojPm0q7FkRwuDSoW0OTOC6FzBGEslIC
(1 row)

This is for the encryption part. For comparing this encrypted string against the plain text version of the string we use the crypt() function again:

postgres=# select (password = crypt('mypassword', password)) AS pwd_match 
postgres-#   from app_users
postgres-#  where username = 'myuser';
 pwd_match 
-----------
 t
(1 row)

Providing the wrong password of course returns false:

postgres=# select (password = crypt('Xmypassword', password)) AS pwd_match 
  from app_users
 where username = 'myuser';
 pwd_match 
-----------
 f
(1 row)

So finally, there is not much you need to do to store encrypted strings in PostgreSQL. Just use it.
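
If you want to hide the comparison behind a small helper, something like the following sketch would do (the function name is made up, the table is the app_users table from above):

create or replace function check_password(p_username text, p_password text)
  returns boolean
  language sql
  stable
as $$
  select coalesce(
           ( select password = crypt(p_password, password)
               from app_users
              where username = p_username )
         , false);
$$;

-- usage
select check_password('myuser', 'mypassword');   -- returns true
select check_password('myuser', 'wrong');        -- returns false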

Cet article Securely store passwords in PostgreSQL est apparu en premier sur Blog dbi services.

Can you start two (or more) PostgreSQL instances against the same data directory?

Thu, 2019-05-30 06:41

As PostgreSQL does not know the concept of running multiple instances against the same files on disk (like e.g. Oracle RAC does), it should not be possible to start two or more instances against the same data directory. If that worked, the result could only be corruption. In this post we will look at how PostgreSQL detects that and what mechanisms are built in to avoid the situation of having multiple instances working against the same files on disk.

To start with we create a new cluster:

postgres@rhel8pg:/home/postgres/ [PGDEV] mkdir /var/tmp/pgtest
12:16:46 postgres@rhel8pg:/home/postgres/ [PGDEV] initdb -D /var/tmp/pgtest/
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.utf8
  CTYPE:    en_US.utf8
  MESSAGES: en_US.utf8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/tmp/pgtest ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... Europe/Zurich
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /var/tmp/pgtest/ -l logfile start

We use a dedicated port and then start it up:

postgres@rhel8pg:/home/postgres/ [PGDEV] export PGPORT=8888
postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest start
waiting for server to start....2019-05-16 12:17:22.399 CEST [7607] LOG:  starting PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
2019-05-16 12:17:22.403 CEST [7607] LOG:  listening on IPv6 address "::1", port 8888
2019-05-16 12:17:22.403 CEST [7607] LOG:  listening on IPv4 address "127.0.0.1", port 8888
2019-05-16 12:17:22.409 CEST [7607] LOG:  listening on Unix socket "/tmp/.s.PGSQL.8888"
2019-05-16 12:17:22.446 CEST [7608] LOG:  database system was shut down at 2019-05-16 12:16:54 CEST
2019-05-16 12:17:22.455 CEST [7607] LOG:  database system is ready to accept connections
 done
server started

postgres@rhel8pg:/home/postgres/ [PGDEV] psql -p 8888 -c "select version()" postgres
                                                  version                                                  
-----------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
(1 row)

What happens when we want to start another instance against that data directory?

postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest start
pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....2019-05-16 12:18:26.252 CEST [7629] FATAL:  lock file "postmaster.pid" already exists
2019-05-16 12:18:26.252 CEST [7629] HINT:  Is another postmaster (PID 7607) running in data directory "/var/tmp/pgtest"?
 stopped waiting
pg_ctl: could not start server
Examine the log output.

When PostgreSQL is starting up it will look for a file called "postmaster.pid", which exists in the data directory once an instance is started. If that file exists, PostgreSQL will not start up another instance against the same data directory. Once the instance is stopped the file is removed:

postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest/ stop
waiting for server to shut down....2019-05-16 12:48:50.636 CEST [7896] LOG:  received fast shutdown request
2019-05-16 12:48:50.641 CEST [7896] LOG:  aborting any active transactions
2019-05-16 12:48:50.651 CEST [7896] LOG:  background worker "logical replication launcher" (PID 7903) exited with exit code 1
2019-05-16 12:48:50.651 CEST [7898] LOG:  shutting down
2019-05-16 12:48:50.685 CEST [7896] LOG:  database system is shut down
 done
server stopped
postgres@rhel8pg:/home/postgres/ [PGDEV] ls -al /var/tmp/pgtest/postmaster.pid
ls: cannot access '/var/tmp/pgtest/postmaster.pid': No such file or directory

At least by default it is not possible to start two or more instances, as PostgreSQL checks if postmaster.pid already exists. Let's remove that file and try again:

postgres@rhel8pg:/home/postgres/ [PGDEV] rm /var/tmp/pgtest/postmaster.pid
postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest start
waiting for server to start....2019-05-16 12:20:17.754 CEST [7662] LOG:  starting PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
2019-05-16 12:20:17.756 CEST [7662] LOG:  could not bind IPv6 address "::1": Address already in use
2019-05-16 12:20:17.756 CEST [7662] HINT:  Is another postmaster already running on port 8888? If not, wait a few seconds and retry.
2019-05-16 12:20:17.756 CEST [7662] LOG:  could not bind IPv4 address "127.0.0.1": Address already in use
2019-05-16 12:20:17.756 CEST [7662] HINT:  Is another postmaster already running on port 8888? If not, wait a few seconds and retry.
2019-05-16 12:20:17.756 CEST [7662] WARNING:  could not create listen socket for "localhost"
2019-05-16 12:20:17.756 CEST [7662] FATAL:  could not create any TCP/IP sockets
2019-05-16 12:20:17.756 CEST [7662] LOG:  database system is shut down
 stopped waiting
pg_ctl: could not start server
Examine the log output.

Again, this does not work and even the initial instance was shut down because PostgreSQL detected that the lock file is not there anymore:

2019-05-16 12:20:22.540 CEST [7607] LOG:  could not open file "postmaster.pid": No such file or directory
2019-05-16 12:20:22.540 CEST [7607] LOG:  performing immediate shutdown because data directory lock file is invalid
2019-05-16 12:20:22.540 CEST [7607] LOG:  received immediate shutdown request
2019-05-16 12:20:22.540 CEST [7607] LOG:  could not open file "postmaster.pid": No such file or directory
2019-05-16 12:20:22.544 CEST [7612] WARNING:  terminating connection because of crash of another server process
2019-05-16 12:20:22.544 CEST [7612] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2019-05-16 12:20:22.544 CEST [7612] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2019-05-16 12:20:22.549 CEST [7664] WARNING:  terminating connection because of crash of another server process
2019-05-16 12:20:22.549 CEST [7664] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.

Let's start the first instance again:

postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest/ start
waiting for server to start....2019-05-16 12:22:20.136 CEST [7691] LOG:  starting PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
2019-05-16 12:22:20.140 CEST [7691] LOG:  listening on IPv6 address "::1", port 8888
2019-05-16 12:22:20.140 CEST [7691] LOG:  listening on IPv4 address "127.0.0.1", port 8888
2019-05-16 12:22:20.148 CEST [7691] LOG:  listening on Unix socket "/tmp/.s.PGSQL.8888"
2019-05-16 12:22:20.193 CEST [7693] LOG:  database system was interrupted; last known up at 2019-05-16 12:17:22 CEST
.2019-05-16 12:22:21.138 CEST [7693] LOG:  database system was not properly shut down; automatic recovery in progress
2019-05-16 12:22:21.143 CEST [7693] LOG:  redo starts at 0/15D3420
2019-05-16 12:22:21.143 CEST [7693] LOG:  invalid record length at 0/15D3458: wanted 24, got 0
2019-05-16 12:22:21.143 CEST [7693] LOG:  redo done at 0/15D3420
2019-05-16 12:22:21.173 CEST [7691] LOG:  database system is ready to accept connections
 done
server started

postgres@rhel8pg:/home/postgres/ [PGDEV] psql -p 8888 -c "select version()" postgres
                                                  version                                                  
-----------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
(1 row)

Let's change the port for the second instance and then try again to start it against the same data directory:

postgres@rhel8pg:/home/postgres/ [PGDEV] export PGPORT=8889
postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest/ start
pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....2019-05-16 12:24:41.700 CEST [7754] FATAL:  lock file "postmaster.pid" already exists
2019-05-16 12:24:41.700 CEST [7754] HINT:  Is another postmaster (PID 7741) running in data directory "/var/tmp/pgtest"?
 stopped waiting
pg_ctl: could not start server
Examine the log output.

That does not work either, which is good. Let's be a bit more nasty and truncate the postmaster.pid file:

postgres@rhel8pg:/home/postgres/ [PGDEV] cat /var/tmp/pgtest/postmaster.pid 
7790
/var/tmp/pgtest
1558002434
8888
/tmp
localhost
  8888001    819201
ready   
postgres@rhel8pg:/home/postgres/ [PGDEV] cat /dev/null > /var/tmp/pgtest/postmaster.pid
postgres@rhel8pg:/home/postgres/ [PGDEV] cat /var/tmp/pgtest/postmaster.pid

The pid file is now empty and right after emptying that file we can see this in the PostgreSQL log file:

2019-05-16 12:30:14.140 CEST [7790] LOG:  lock file "postmaster.pid" contains wrong PID: 0 instead of 7790
2019-05-16 12:30:14.140 CEST [7790] LOG:  performing immediate shutdown because data directory lock file is invalid
2019-05-16 12:30:14.140 CEST [7790] LOG:  received immediate shutdown request
2019-05-16 12:30:14.149 CEST [7795] WARNING:  terminating connection because of crash of another server process
2019-05-16 12:30:14.149 CEST [7795] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2019-05-16 12:30:14.149 CEST [7795] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2019-05-16 12:30:14.160 CEST [7790] LOG:  database system is shut down

So even that case is detected and PostgreSQL protects you from starting up another instance against the same data directory. Let's try something else and modify PGDATA in the postmaster.pid file:

postgres@rhel8pg:/home/postgres/ [PGDEV] cat /var/tmp/pgtest/postmaster.pid 
7896
/var/tmp/pgtest
1558002751
8888
/tmp
localhost
  8888001    851969
ready   

postgres@rhel8pg:/home/postgres/ [PGDEV] sed -i  's/\/var\/tmp\/pgtest/\/var\/tmp\/pgtest2/g' /var/tmp/pgtest/postmaster.pid 
postgres@rhel8pg:/home/postgres/ [PGDEV] cat /var/tmp/pgtest/postmaster.pid
7896
/var/tmp/pgtest2
1558002751
8888
/tmp
localhost
  8888001    851969
ready   

Although we changed PGDATA PostgreSQL will not start up another instance against this data directory:

postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest/ start
pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....2019-05-16 12:35:28.540 CEST [7973] FATAL:  lock file "postmaster.pid" already exists
2019-05-16 12:35:28.540 CEST [7973] HINT:  Is another postmaster (PID 7896) running in data directory "/var/tmp/pgtest"?
 stopped waiting
pg_ctl: could not start server
Examine the log output.

So by default you cannot get PostgreSQL to start two or even more instances against the same data directory. There is a comment about this behaviour in src/backend/postmaster/postmaster.c in the source code:

                /*
                 * Once a minute, verify that postmaster.pid hasn't been removed or
                 * overwritten.  If it has, we force a shutdown.  This avoids having
                 * postmasters and child processes hanging around after their database
                 * is gone, and maybe causing problems if a new database cluster is
                 * created in the same place.  It also provides some protection
                 * against a DBA foolishly removing postmaster.pid and manually
                 * starting a new postmaster.  Data corruption is likely to ensue from
                 * that anyway, but we can minimize the damage by aborting ASAP.
                 */

“Once a minute” might be critical and we might be able to start a second one if we are fast enough, so let's try again. This time we start the first one, remove the lock file and immediately start another one using another port:

export PGPORT=8888
pg_ctl -D /var/tmp/pgtest start
rm -f /var/tmp/pgtest/postmaster.pid
export PGPORT=8889
pg_ctl -D /var/tmp/pgtest start

And here you have it:

postgres@rhel8pg:/home/postgres/ [pg120] ps -ef | grep postgres
postgres  1445     1  0 May27 ?        00:00:00 /usr/lib/systemd/systemd --user
postgres  1456  1445  0 May27 ?        00:00:00 (sd-pam)
root      9780   786  0 06:09 ?        00:00:00 sshd: postgres [priv]
postgres  9783  9780  0 06:09 ?        00:00:00 sshd: postgres@pts/1
postgres  9784  9783  0 06:09 pts/1    00:00:00 -bash
postgres 10302     1  0 06:19 ?        00:00:00 /u01/app/postgres/product/DEV/db_1/bin/postgres -D /var/tmp/pgtest
postgres 10304 10302  0 06:19 ?        00:00:00 postgres: checkpointer   
postgres 10305 10302  0 06:19 ?        00:00:00 postgres: background writer   
postgres 10306 10302  0 06:19 ?        00:00:00 postgres: walwriter   
postgres 10307 10302  0 06:19 ?        00:00:00 postgres: autovacuum launcher   
postgres 10308 10302  0 06:19 ?        00:00:00 postgres: stats collector   
postgres 10309 10302  0 06:19 ?        00:00:00 postgres: logical replication launcher   
postgres 10313     1  0 06:19 ?        00:00:00 /u01/app/postgres/product/DEV/db_1/bin/postgres -D /var/tmp/pgtest
postgres 10315 10313  0 06:19 ?        00:00:00 postgres: checkpointer   
postgres 10316 10313  0 06:19 ?        00:00:00 postgres: background writer   
postgres 10317 10313  0 06:19 ?        00:00:00 postgres: walwriter   
postgres 10318 10313  0 06:19 ?        00:00:00 postgres: autovacuum launcher   
postgres 10319 10313  0 06:19 ?        00:00:00 postgres: stats collector   
postgres 10320 10313  0 06:19 ?        00:00:00 postgres: logical replication launcher   
postgres 10327  9784  0 06:19 pts/1    00:00:00 ps -ef

Conclusion: PostgreSQL does some basic checks to avoid starting two instances against the same files on disk. But if you really want to (and of course you never should) you can achieve it, with all the consequences! Don't do it!

Cet article Can you start two (or more) PostgreSQL instances against the same data directory? est apparu en premier sur Blog dbi services.

Ton nuage n’attends que toi

Wed, 2019-05-29 05:01

After several trainings delivered by dbi services on different cloud service providers and several real-world experiences, I am enthusiastic about the evolution of digital technologies. Azure, Amazon AWS and Oracle OCI offer excellent infrastructure services whose names change but whose concepts stay the same. There are also a few differences between the cloud infrastructure services that can be interesting.

Maybe you too are interested in the cloud but all of this is not quite clear. To that I answer: see the cloud more like a traditional data center, but instead of provisioning your machines and your connections by hand in your over-air-conditioned room, everything is now done with a few clicks (or command lines).

Let me present some information about Oracle's OCI cloud that clears up the doubts raised by the rumors and allows you to move forward.

Servers

It is possible to have physical machines and/or virtual machines. These physical and virtual machines are then available in different "shapes" (templates):

  • Standard shapes: the standard for most applications
  • DenseIO shapes: made for heavy workloads on data
  • GPU shapes: made to use graphics card processors (Bitcoin mining, algorithms to win at the game of Go, …)
  • High performance computing (HPC) shapes: for physical machines only, made for massive parallel CPU computing needs

Provisioning time: a few minutes

Network

The connection possibilities are unlimited. In particular it is possible to:

  • Connect your cloud infrastructure to the Internet and/or to your data center
  • Connect the Oracle cloud to another cloud (Amazon AWS for example)
  • Connect your cloud tenancy to your local infrastructure with a "FastConnect" link offering a connection with a bandwidth of at least 10Gb

These connection options cover every scenario and avoid being locked in at Oracle, just as it is possible to turn your cloud tenancy into an extension of your data center thanks to "FastConnect". It is easier to understand with a diagram:

The Autonomous Database service

The Autonomous Database service serves two types of workloads:

  • Standard application workloads (short, very frequent queries)
  • Data warehouse workloads (long individual operations)

With an autonomous database you only configure the number of CPUs and the storage capacity, without service interruption or performance degradation. In addition, all of the following operations are handled automatically:

  • Database creation
  • Database backup
  • Database feature updates
  • Database security and bug fixes
  • Database tuning

It is possible to keep your existing licenses for the cloud or to use the licenses included in the service.

Available countries (for now)

The list of countries available for your cloud: https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm

  • United Kingdom
  • Germany
  • Canada
  • United States
  • Japan
  • South Korea
  • Switzerland (September 2019)
Pricing

You can pay as you go or with a subscription. You can compare the prices yourself using the online calculators provided by the vendors:

  • Microsoft Azure: https://azure.microsoft.com/en-in/pricing/calculator
  • Oracle Cloud: https://cloud.oracle.com/en_US/cost-estimator
  • Amazon AWS: https://calculator.s3.amazonaws.com/index.html

For example, for a virtual server with 8 CPUs and around 120GB of memory you will pay monthly:

  • Microsoft Azure (DS13v2): 648.97 $
  • Oracle Cloud (B88514): 401.00 $
  • Amazon AWS (f1.x2large): 1207.00 $
Conclusion

While the other cloud providers were far ahead of Oracle for a few years, Oracle now offers a solid and easy to understand infrastructure service (Oracle OCI), just like the others. The advantage compared to others is that it is possible to have physical machines. And finally, we are not tied to Oracle, thanks to the option of private or public networks between the Oracle cloud and other cloud providers.

Finally, even if the cloud frees you from hardware management and brings your provisioning time down to a few minutes, you will still need system engineers to manage access, resources, operations, backups and your disaster recovery plan.

Did I forget something? Feel free to leave your question below and I will be happy to answer it.

Cet article Ton nuage n’attends que toi est apparu en premier sur Blog dbi services.

SUSE Expert Day Zürich

Tue, 2019-05-28 01:44

On May 16th I visited the SUSE Expert Day in Zürich.
An interesting Agenda was waiting for me, all under the topic: “My kind of open”

After a small welcome coffee, SUSE started with the keynote by Markus Wolf (Country Manager and Sales Director ALPS Region). After a short introduction of the Swiss SUSE team, he talked about the IT transformation and his vision of the next years in IT – nice to hear that IT is getting even more complex than it is now.
One slide that really impressed me:

Amazing, isn’t it?

As a customer story, Nicolas Christener, CEO and CTO of the adfinis sygroup, showed with an impressive example what you can achieve with the SUSE Cloud Application Platform and what matters for the end customer. He also mentioned the great collaboration with SUSE during the project. It is really nice to know that you get the help and support from SUSE that is needed, especially in new pioneering projects.

As the third speaker, Bo Jin (Sales Engineer and Consultant at SUSE) was on stage. Really impressive knowledge! He told us a lot about Cloud Foundry as well as about Kubernetes, the Cloud Application Platform and CaaS. A highlight for me was his really impressive demo about pushing code to Cloud Foundry and how to deploy from GitHub into a container. Everything seems to be really easy to manage.

Last but not least we got some insight into SUSE Manager and how it can help you to centralize system administration and patch handling as well as AutoYaST profiles and kickstart files. This tool is suitable for Ubuntu, CentOS, Red Hat and, of course, SUSE servers. Everything centrally handled for almost all distributions. That makes life much easier.
Bo Jin also showed us kernel live patching in a demo and gave us some background information. Did you for example know that even if you have Kernel Live Patching enabled, you have to reboot at least once every 12 months?

In a nutshell – nice to see how passionate and innovative SUSE is, and they presented great tools. Even though they were only able to show us the tools mostly in scope at the moment – I can't wait to test them!

Cet article SUSE Expert Day Zürich est apparu en premier sur Blog dbi services.

Configuring Oracle DB data source in JBoss EAP 7.1

Sun, 2019-05-26 02:05

Introduction

This blog explains how to install and use an Oracle database JDBC driver in a JBoss EAP 7.1 standalone instance and in a domain deployment.

Oracle JDBC driver installation
The first step is to install the JDBC driver in the JBoss installation. This can be done by copying the files to the right directory or by using the JBoss CLI to do the installation properly.
I will use the JBoss CLI script for this.

Start the JBoss CLI without connecting to any JBoss instance.

/opt/jboss-eap-7.1/bin/jboss-cli.sh

Then use the module add CLI command to install the Oracle JDBC driver to the right place.

module add --name=com.oracle --resources=/home/pascal/jdbc_drivers/ojdbc8.jar --dependencies=javax.api,javax.transaction.api

This will place the driver in the following directory:

$JBOSS_HOME/modules/com/oracle/main

Note: This CLI command has to be run on each host participating in a domain deployment.
Once the module is installed, it can be used to declare the JDBC driver inside the JBoss instance.

Create a Data-Source

For a standalone instance using the default profile:
a. Start jboss-cli.sh to connect to the standalone server and declare the JDBC driver in the JBoss instance

/opt/jboss-eap-7.1/bin/jboss-cli.sh -c --controller=192.168.56.21:9990
[standalone@192.168.56.21:9990 /] /subsystem=datasources/jdbc-driver=oracle:add(driver-name=oracle,driver-module-name=com.oracle,driver-xa-datasource-class-name=oracle.jdbc.driver.OracleDriver)
{"outcome" => "success"}
[standalone@192.168.56.21:9990 /]

b. Confirm the JDBC driver has been declared successfully.

[standalone@192.168.56.21:9990 /] /subsystem=datasources/jdbc-driver=oracle:read-resource
{
    "outcome" => "success",
    "result" => {
        "deployment-name" => undefined,
        "driver-class-name" => undefined,
        "driver-datasource-class-name" => undefined,
        "driver-major-version" => undefined,
        "driver-minor-version" => undefined,
        "driver-module-name" => "com.oracle",
        "driver-name" => "oracle",
        "driver-xa-datasource-class-name" => "oracle.jdbc.driver.OracleDriver",
        "jdbc-compliant" => undefined,
        "module-slot" => undefined,
        "profile" => undefined,
        "xa-datasource-class" => undefined
    }
}

c. Create the data-source pointing to the Oracle Database

[standalone@192.168.56.21:9990 /] data-source add --name=testOracleDS --jndi-name=java:/jdbc/testOracleDS --driver-name=oracle --connection-url=jdbc:oracle:thin:@vm12:1521/orcl --user-name=scott --password=tiger --jta=true --use-ccm=true --use-java-context=true --enabled=true --user-name=scott --password=tiger --max-pool-size=10 --min-pool-size=5 --flush-strategy="FailingConnectionOnly"

d. Confirm the Datasource creation and parameters.

[standalone@192.168.56.21:9990 /] /subsystem=datasources/data-source=testOracleDS:read-resource
{
    "outcome" => "success",
    "result" => {
        "allocation-retry" => undefined,
        "allocation-retry-wait-millis" => undefined,
        "allow-multiple-users" => false,
        "authentication-context" => undefined,
        "background-validation" => undefined,
        "background-validation-millis" => undefined,
        "blocking-timeout-wait-millis" => undefined,
        "capacity-decrementer-class" => undefined,
        "capacity-decrementer-properties" => undefined,
        "capacity-incrementer-class" => undefined,
        "capacity-incrementer-properties" => undefined,
        "check-valid-connection-sql" => undefined,
        "connectable" => false,
        "connection-listener-class" => undefined,
        "connection-listener-property" => undefined,
        "connection-url" => "jdbc:oracle:thin:@vm12:1521/orcl",
        "credential-reference" => undefined,
        "datasource-class" => undefined,
        "driver-class" => undefined,
        "driver-name" => "oracle",
        "elytron-enabled" => false,
        "enabled" => true,
        "enlistment-trace" => false,
        "exception-sorter-class-name" => undefined,
        "exception-sorter-properties" => undefined,
        "flush-strategy" => "FailingConnectionOnly",
        "idle-timeout-minutes" => undefined,
        "initial-pool-size" => undefined,
        "jndi-name" => "java:/jdbc/testOracleDS",
        "jta" => true,
        "max-pool-size" => 10,
        "mcp" => "org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool",
        "min-pool-size" => 5,
        "new-connection-sql" => undefined,
        "password" => "tiger",
        "pool-fair" => undefined,
        "pool-prefill" => undefined,
        "pool-use-strict-min" => undefined,
        "prepared-statements-cache-size" => undefined,
        "query-timeout" => undefined,
        "reauth-plugin-class-name" => undefined,
        "reauth-plugin-properties" => undefined,
        "security-domain" => undefined,
        "set-tx-query-timeout" => false,
        "share-prepared-statements" => false,
        "spy" => false,
        "stale-connection-checker-class-name" => undefined,
        "stale-connection-checker-properties" => undefined,
        "statistics-enabled" => false,
        "track-statements" => "NOWARN",
        "tracking" => false,
        "transaction-isolation" => undefined,
        "url-delimiter" => undefined,
        "url-selector-strategy-class-name" => undefined,
        "use-ccm" => true,
        "use-fast-fail" => false,
        "use-java-context" => true,
        "use-try-lock" => undefined,
        "user-name" => "scott",
        "valid-connection-checker-class-name" => undefined,
        "valid-connection-checker-properties" => undefined,
        "validate-on-match" => undefined,
        "connection-properties" => undefined,
        "statistics" => {
            "jdbc" => undefined,
            "pool" => undefined
        }
    }
}

At this stage, the data-source is available to all applications deployed in the standalone server. I started with the standalone.xml default profile configuration file. To have additional subsystems, another standalone profile configuration file should be used.
For a JBoss domain using the Full-ha profile
The domain I'm using in my tests has a domain controller and two slave hosts running two servers organized in two server groups.
a. Start jboss-cli.sh to connect to the domain master

/opt/jboss-eap-7.1/bin/jboss-cli.sh -c --controller=192.168.56.21:9990

b. Register the Oracle JDBC driver

[domain@192.168.56.21:9990 /] /profile=full-ha/subsystem=datasources/jdbc-driver=oracle:add(driver-name=oracle,driver-module-name=com.oracle,driver-xa-datasource-class-name=oracle.jdbc.driver.OracleDriver)
{
    "outcome" => "success",
    "result" => undefined,
    "server-groups" => {"Group2" => {"host" => {
        "host1" => {"server-two" => {"response" => {
            "outcome" => "success",
            "result" => undefined
        }}},
        "host2" => {"server-four" => {"response" => {
            "outcome" => "success",
            "result" => undefined
        }}}
    }}}
}
[domain@192.168.56.21:9990 /]

In the JBoss domain I used for the testing, the full-ha profile has been used when creating the Group2 server group.
c. Confirm the JDBC driver has been declared successfully.

[domain@192.168.56.21:9990 /] /profile=full-ha/subsystem=datasources/jdbc-driver=oracle:read-resource
{
    "outcome" => "success",
    "result" => {
        "deployment-name" => undefined,
        "driver-class-name" => undefined,
        "driver-datasource-class-name" => undefined,
        "driver-major-version" => undefined,
        "driver-minor-version" => undefined,
        "driver-module-name" => "com.oracle",
        "driver-name" => "oracle",
        "driver-xa-datasource-class-name" => "oracle.jdbc.driver.OracleDriver",
        "jdbc-compliant" => undefined,
        "module-slot" => undefined,
        "profile" => undefined,
        "xa-datasource-class" => undefined
    }
}
[domain@192.168.56.21:9990 /]

d. Create the Data source pointing to the Oracle Database

[domain@192.168.56.21:9990 /] data-source add --profile=full-ha --name=testOracleDS --jndi-name=java:/jdbc/testOracleDS --driver-name=oracle --connection-url=jdbc:oracle:thin:@vm12:1521/orcl --user-name=scott --password=tiger --jta=true --use-ccm=true --use-java-context=true --enabled=true --user-name=scott --password=tiger --max-pool-size=10 --min-pool-size=5 --flush-strategy="FailingConnectionOnly"

e. Confirm the data source has been created correctly

[domain@192.168.56.21:9990 /] /profile=full-ha/subsystem=datasources/data-source=testOracleDS:read-resource
{
    "outcome" => "success",
    "result" => {
        "allocation-retry" => undefined,
        "allocation-retry-wait-millis" => undefined,
        "allow-multiple-users" => false,
        "authentication-context" => undefined,
        "background-validation" => undefined,
        "background-validation-millis" => undefined,
        "blocking-timeout-wait-millis" => undefined,
        "capacity-decrementer-class" => undefined,
        "capacity-decrementer-properties" => undefined,
        "capacity-incrementer-class" => undefined,
        "capacity-incrementer-properties" => undefined,
        "check-valid-connection-sql" => undefined,
        "connectable" => false,
        "connection-listener-class" => undefined,
        "connection-listener-property" => undefined,
        "connection-url" => "jdbc:oracle:thin:@vm12:1521/orcl",
        "credential-reference" => undefined,
        "datasource-class" => undefined,
        "driver-class" => undefined,
        "driver-name" => "oracle",
        "elytron-enabled" => false,
        "enabled" => true,
        "enlistment-trace" => false,
        "exception-sorter-class-name" => undefined,
        "exception-sorter-properties" => undefined,
        "flush-strategy" => "FailingConnectionOnly",
        "idle-timeout-minutes" => undefined,
        "initial-pool-size" => undefined,
        "jndi-name" => "java:/jdbc/testOracleDS",
        "jta" => true,
        "max-pool-size" => 10,
        "mcp" => "org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool",
        "min-pool-size" => 5,
        "new-connection-sql" => undefined,
        "password" => "tiger",
        "pool-fair" => undefined,
        "pool-prefill" => undefined,
        "pool-use-strict-min" => undefined,
        "prepared-statements-cache-size" => undefined,
        "query-timeout" => undefined,
        "reauth-plugin-class-name" => undefined,
        "reauth-plugin-properties" => undefined,
        "security-domain" => undefined,
        "set-tx-query-timeout" => false,
        "share-prepared-statements" => false,
        "spy" => false,
        "stale-connection-checker-class-name" => undefined,
        "stale-connection-checker-properties" => undefined,
        "statistics-enabled" => false,
        "track-statements" => "NOWARN",
        "tracking" => false,
        "transaction-isolation" => undefined,
        "url-delimiter" => undefined,
        "url-selector-strategy-class-name" => undefined,
        "use-ccm" => true,
        "use-fast-fail" => false,
        "use-java-context" => true,
        "use-try-lock" => undefined,
        "user-name" => "scott",
        "valid-connection-checker-class-name" => undefined,
        "valid-connection-checker-properties" => undefined,
        "validate-on-match" => undefined,
        "connection-properties" => undefined
    }
}
[domain@192.168.56.21:9990 /]

At this stage, all servers in the server group Group2 have been targeted with the data-source and all applications deployed to those servers can use it.
As the data-source has been targeted to the profile, all server groups created with this profile will allow their JBoss server instances to use it.

Cet article Configuring Oracle DB data source in JBoss EAP 7.1 est apparu en premier sur Blog dbi services.

APEX Connect 2019 – Day 3

Thu, 2019-05-09 11:20

For the last conference day, after the Keynote about “JavaScript, Why Should I Care?” by Dan McGhan, I decided to attend some JavaScript learning sessions to improve myself and presentations on the following topics:
– How to hack your APEX App… (only for testing)
– What you need to know about APEX validations

I also got the chance to have a 1:1 talk with Anthony Rayner to share some wishes about APEX and talk about an issue with the Interactive Grid search.

JavaScript programming language

Nowadays being a good APEX developer means being a full stack developer who masters different areas:
– Server side (database, data modeling, SQL, PL/SQL)
– Client side (HTML, CSS, JavaScript)
So, even though JavaScript was weird from the beginning, you cannot avoid learning and mastering it. It's simply the number one most used programming language (thanks to the web). Thinking that APEX Dynamic Actions can solve all issues by hiding the complexity of JavaScript just isn't possible anymore. Some statistics show that the APEX team is already putting a lot of effort into JavaScript, as it makes up more than 50% of the APEX code, way ahead of PL/SQL.
A couple of characteristics about JavaScript:
– It has no variable types, meaning that the type is not in the variable but rather in the value assigned to it. This can somehow be seen as polymorphism.
– It's case sensitive
– 0-based array indexes (PL/SQL being 1-based)
– There are no procedures, only functions
– Functions can be given other functions as parameters
– There is one convention: functions starting with an uppercase letter are meant to be used with the new operator
While developing JavaScript your best friends are the web browser developer tools, which allow you to do a lot locally and test it before moving to the server and sharing with other developers and users.
There are a lot of resources on the internet to support the copy/paste way of working of JavaScript developers, so there is a big chance that someone already did what you need. Just take care about licensing.
In APEX, JavaScript can be encapsulated in Dynamic Actions, but try to keep that code as short as possible.
Oracle is also providing a very useful free open source JavaScript development toolkit: JET (JavaScript Extension Toolkit).
It's already integrated in APEX through the charts.

How to hack your APEX App… (only for testing)

As APEX generates web applications, they are exposed to the same dangers as any other web application, like SQL injection, XSS (cross site scripting) and so on.
There is no excuse to ignore security issues because the application is only used on the intranet or because you think no one will ever find the issue…
… Security is part of your job as a developer. Danger can come from the outside but also from the inside with social engineering based hacking.
It's very easy to find hacker tools on the internet like Kali Linux, based on Debian, which provides more than 600 tools for penetration testing, for example BeEF (Browser Exploitation Framework).
In APEX the golden rule says “Don't turn off escaping on your pages”.
Remember “Security is hard. If it's easy you're not doing it right”, so don't forget it in your project planning.
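
If you generate HTML yourself in PL/SQL, the apex_escape package that ships with APEX does the escaping for you. A small sketch (the input string is just an example):

select apex_escape.html('<script>alert(''XSS'');</script>') as escaped
  from dual;
-- the markup characters are replaced by their HTML entities, so the string is rendered as text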

What you need to know about APEX validations

There are 2 types of validations with web applications:
– Client side
– Server side
APEX is making use of both and even sometimes combines them but server side is the most used.
Where possible, client side validation should be used as well, as it's lighter (less network traffic), but be careful as it can be bypassed with the developer tools since it's based on HTML attributes or JavaScript. That's where server side validation will be your second line of defense and the database triggers and constraints your very last line of defense.
Validation can make use of data patterns (regular expressions), as sketched below.
Interactive Grid validation can also be improved significantly with JavaScript and Dynamic Actions fired on value changes and/or on page submission.
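
As a sketch of that very last line of defense (the customers table and its email column are hypothetical), a regular expression can be enforced directly in the database with a check constraint:

alter table customers
  add constraint customers_email_chk
  check ( regexp_like(email, '^[^@]+@[^@]+\.[^@]+$') );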

There is always more to learn and thanks to the community a lot of information is available. So keep sharing.
Enjoy APEX!

Cet article APEX Connect 2019 – Day 3 est apparu en premier sur Blog dbi services.

APEX Connect 2019 – Day 2

Wed, 2019-05-08 18:17

The day first started with a 5K fun run.

After the Keynote about “APEX: Where we have come from and where we’re heading: Personal insights on Application Express from the Development Team” by John Snyders, Anthony Rayner and Carsten Czarski explaining their work on APEX and some upcoming features, I decided to attend presentations on following topics:
– Know your Browser Dev Tools!
– APEX Devops with Database tooling
– Klick, Klick, Test – Automated Testing for APEX
– Sponsored Session Pitss: Migration from Oracle Forms to APEX – Approaches compared
– APEX-Actions

Potential upcoming features in APEX 19.2:
– APEX_EXEC enabled Interactive Grids
– Enhanced LOV
– New popup LOV
– Filter reports

Know your Browser Dev Tools!

Every web browser has its own set of developer tools but all of them mainly provide the following functionalities:
– Manipulate HTML in DOM tree
– Execute JavaScript
– Apply CSS
– Client side debugging and logging
– Analyze network activity
– Simulate screen size
The most popular and most complete set of tools is provided by Google Chrome with:
– Elements
– Console
– Sources
– Network
– Performance
– Memory
– Application
– Security
– Audits
Note that if you want to see console output as well as details of Dynamic Actions from APEX, you need to activate debug mode in your application.

APEX Devops with Database tooling

One of the goals of DevOps is to bring development and operations closer together and to make the deployment of applications smoother.
In order to achieve that, 100% automation of the following tasks helps a lot:
– Build
– Test
– Release
– Deploy
This is mainly supported by RESTful services within Oracle, ORDS being the cornerstone.
Besides that, Jenkins has been replaced by GitLab with better web services support.
Database changes are tracked with Liquibase, which is integrated and enhanced in SQLcl. Vault is also integrated in SQLcl to ease and automate password management.
Another target of DevOps is zero downtime. This can be supported with tools like consul.io and fabiolb, which allow dynamically adding ORDS servers covered by dynamic load balancing.

Klick, Klick, Test – Automated Testing for APEX

There are lots of automated testing tools on the market but they mostly are restricted to specific web browsers.
The target is to have a solution that fits most of them and allows testing of APEX web applications.
It needs a testing framework to abstract the scenario from the underlying tool: codecept.io
As the code generated by the testing framework is standardized, it can be generated based on APEX metadata analysis with the help of a templating tool: handlebars
The process is then supported by an APEX application that can retrieve the applications from the workspace and manage the dedicated test scenarios as well as trigger them on Docker containers.

Sponsored Session Pitss: Migration from Oracle Forms to APEX – Approaches compared

Migrating Forms applications to APEX can be very cumbersome as they can be very large and complex.
The main phases of such a migration are:
– Forms application preparation and analysis
– Migration
– APEX Application fine tuning
– Rollout
The success of such a migration lies in the combination of skilled Forms developers and APEX developers.

APEX-Actions

Besides the well-known Dynamic Actions in APEX, there is a new JavaScript library introduced in APEX 5.0: apex.actions
Documentation for it came with version 5.1 in the APEX JavaScript API documentation.
It's used by the APEX development team in the Page Designer and is now available to all developers.
Actions allow you to centrally encapsulate and define the rendering, associated function and shortcuts of objects on the web pages, all of it dynamically.
It uses an observer, which allows having the same behavior for multiple objects of the same type on the same page.

The day ended with Sponsor Pitches & Lightning Talks:
– APEX Competition Winner Plugin
– 5 things that make your life easier when using Docker
– Verwenden Sie kein PL/SQL!
– Improving Tech with Compassionate Coding

Cet article APEX Connect 2019 – Day 2 est apparu en premier sur Blog dbi services.

APEX Connect 2019 – Day 1

Tue, 2019-05-07 17:33

This year again the APEX connect conference spans over three days with mixed topics around APEX, like JavaScript, PL/SQL and much more.
After the welcome speech and the very funny and interesting Keynote about “PL/SQL: A Career Built On Top Of PL/SQL – 25 years of Coding, Sharing, and Learning” by Martin Widlake, I decided to attend presentations on following topics:
– Oracle Text – A Powerful Tool for Oracle APEX Developers
– Make It Pretty! MIP your APEX application!
– REST Services without APEX – but with APEX_EXEC
– Microservices with APEX
– SQL Magic!
– The UX of forms

PL/SQL: A Career Built On Top Of PL/SQL – 25 years of Coding, Sharing, and Learning:

Martin Widlake shared the story of 25 years of development on Oracle, from version 6 to the newest 19c.
The most important thing to retain from his professional journey is that “Good developers are made by other developers” and “Everything you learn will have some return sometime in the future”. That means sharing is the key: keep yourself curious and never stop learning, even things that are not yet obviously useful.

Oracle Text – A Powerful Tool for Oracle APEX Developers

That feature has been embedded as a standard in Oracle databases since 1997, when it was named ConText. In 1999 it became interMedia Text and finally Oracle Text in 2001. It allows indexing text based fields of the database as well as files in BLOBs, allowing much faster and easier searching for text patterns (words, sentences, …). We went through aspects like syntax, fuzzy search, snippets and the lexer.
As search combinations require the usage of specific operators and delimiters, which are cumbersome for end users, there is a useful package written by Roger Ford that allows converting simple “Google” like requests into the right format for Oracle Text: PARSER download
His blog provides nice information about it and the Oracle documentation provides all the details on the usage of Oracle Text.
You can find further information on following Blog:
Oracle text at a glance
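
As a minimal sketch of what this looks like in practice (table and column names are made up): a CONTEXT index is created on the text column and then queried with CONTAINS:

create index documents_txt_idx on documents ( doc_text )
  indextype is ctxsys.context;

select doc_id
  from documents
 where contains ( doc_text, 'partitioning' ) > 0;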

Make It Pretty! MIP your APEX application!

The business logic is the heart of the application but the UI is its face and what users will judge first.
There are some rules on how to Make It Pretty (MIP).
First of all it needs to fulfill user needs by either:
– stick to company brand rules
– stick to company webpage design
– stick to user wishes (can be drafted with template monster)
Technical and non-technical aspects need to be considered.
The following design rules help to improve the UI:
– Be consistent
– Make it intuitive
– Be responsive (give feedback to users)
– Keep it simple (not crowded)
– Fonts: max 2 per page, 16px rule (verify on fontpair.co)
– Color rules (verify on contrast checker)
– Have imagination
– Know your APEX universal theme

REST Services without APEX – but with APEX_EXEC

APEX is based on metadata that stores the definitions, and PL/SQL packages support the engine.
That means the APEX metadata support can be used outside of an APEX application in any PL/SQL code.
One particular APEX PL/SQL package is APEX_EXEC, introduced in APEX 18.1. It abstracts the data format (XML, JSON, …) of web sources in order to be able to use the data as if it came from any local table. It also takes care of pagination from web services to make the data retrieval transparent. But in order to make use of that package, an APEX session must first be created to initiate the needed metadata. Fortunately this is made easy since APEX 18.1 with the procedure create_session from the apex_session package, as sketched below.
The next version, APEX 19.2, might integrate web source modules with the Interactive Grid.
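
A minimal sketch of such a session setup (application ID, page ID and user name are placeholders):

begin
  -- create an APEX session so that APEX_EXEC has the metadata context it needs
  apex_session.create_session(
      p_app_id   => 100,     -- placeholder application ID
      p_page_id  => 1,       -- placeholder page ID
      p_username => 'ADMIN'  -- placeholder user
  );
end;
/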

Microservices with APEX

APEX can be compared to microservices by looking at the characteristics:
– Scalability
– Fail-safe
– Maintainable
– Technology independent
– Database independent
– Small
And it mostly matches!
But APEX also avoids most of the microservices drawbacks:
– Complex architecture
– Complex testing
– Migration efforts
– Complex development
To have a behavior close to microservices, APEX applications have to make use of web services for the data management and the interfacing with any kind of other services. This allows clearly separating data management and applications. ORDS allows enabling REST at the schema and also at the object level within APEX (see the sketch below). Caching also needs to be considered, based on the data change frequency, to lower the lag time of data handling.
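
A sketch of what that REST enabling looks like on the database side, assuming the ORDS PL/SQL package is installed (schema is the current one, the table name is a placeholder):

begin
  -- REST enable the current schema
  ords.enable_schema(p_enabled => true);

  -- REST enable a single table (AutoREST)
  ords.enable_object(
      p_enabled => true,
      p_object  => 'EMP'    -- placeholder table name
  );

  commit;
end;
/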

SQL Magic!

Since Oracle 12c the database provides the JSON data guide, which allows easy JSON data manipulation like any standard table data. This also comes with new views like user_json_data_guide.
Oracle 12c introduced invisible columns that are hidden from the table description as well as from a plain “select *”, but not from select statements that name the columns explicitly (sketched below). This can be used to deprecate columns or to add new columns without breaking existing applications that rely on “select *”. Even though “select *” should be avoided in applications.
Oracle 18c also introduced polymorphic table functions that can be used with pipelined tables to create views allowing to pivot and transpose tables whatever number of columns and rows they have.
All those features are very useful and should be used further.
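
A quick sketch of the invisible column behaviour mentioned above (the table is hypothetical):

create table demo_invisible ( a number, b number invisible );

-- "describe" and "select *" only show column A
select * from demo_invisible;

-- the invisible column is still fully usable when named explicitly
insert into demo_invisible ( a, b ) values ( 1, 2 );
select a, b from demo_invisible;

-- make it visible again once all applications are ready for it
alter table demo_invisible modify ( b visible );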

The UX of forms

User eXperience (UX) rules to be applied in forms go beyond APEX. The aim is to:
– Reduce cognitive load
– Prevent errors
– Make it user friendly
The rules are following:
– One thing per page
– Put field labels above rather than side to
– Replace small dropdowns by radio buttons
– Use Interactive data lists for long dropdowns
– For login pages, be specific on username type (name, e-mail) and password definition rules
– Avoid * for required fields but rather flag optional fields
– Adapt field size on expected data length
– Use smart default values
– Use entry masks
– Use date picker
– Define a “check before you start” pattern to guide users and reduce form length
All that will improve UX.

Cet article APEX Connect 2019 – Day 1 est apparu en premier sur Blog dbi services.

Oracle 19C : Exporting and Importing Broker Configuration

Sat, 2019-05-04 06:57

Up to Oracle 19c, there was no automatic way to back up the configuration of the broker. One solution was to manually copy all executed instructions into a file.
With Oracle 19c, there is now the possibility to export and to import the configuration of the broker. Indeed, the new EXPORT CONFIGURATION command makes it possible to save the metadata contained in the broker configuration file to a text file. This can be very useful if I have to recreate my configuration.
In this blog I have tested this command with the following configuration

DGMGRL> show configuration

Configuration - db19c

  Protection Mode: MaxAvailability
  Members:
  DB19C_SITE1 - Primary database
    DB19C_SITE2 - Physical standby database
    DB19C_SITE3 - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 47 seconds ago)

DGMGRL>

We can see the syntax of the EXPORT command with the help command

DGMGRL> help export

Export Data Guard Broker configuration to a file.

Syntax:

  EXPORT CONFIGURATION [TO <file name>];

DGMGRL>

Now let’s export the configuration

DGMGRL> EXPORT CONFIGURATION TO db19c_config.txt
Succeeded.
DGMGRL>

The file is generated in the trace files directory.

[oracle@primaserver trace]$ pwd
/u01/app/oracle/diag/rdbms/db19c_site1/DB19C/trace
[oracle@primaserver trace]$ ls -l db19c_config.txt
-rw-r--r--. 1 oracle oinstall 8469 May  4 12:25 db19c_config.txt
[oracle@primaserver trace]$
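
In case you do not know where that trace directory is on your system, you can ask the database itself (a quick sketch, to be run with a privileged user):

-- location of the trace files of the current instance
select value from v$diag_info where name = 'Diag Trace';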

Let’s remove the configuration

DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXPERFORMANCE;
Succeeded.
DGMGRL> remove configuration;
Removed configuration

DGMGRL> show configuration
ORA-16532: Oracle Data Guard broker configuration does not exist

Configuration details cannot be determined by DGMGRL
DGMGRL>

Now let’s use the IMPORT command to rebuild the configuration

DGMGRL> IMPORT CONFIGURATION FROM db19c_config.txt
Succeeded. Run ENABLE CONFIGURATION to enable the imported configuration.
DGMGRL>

As we can see the configuration is disabled after the import

DGMGRL> show configuration

Configuration - db19c

  Protection Mode: MaxAvailability
  Members:
  DB19C_SITE1 - Primary database
    DB19C_SITE2 - Physical standby database
    DB19C_SITE3 - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
DISABLED

DGMGRL>

So let’s enable it

DGMGRL> ENABLE CONFIGURATION
Enabled.
DGMGRL> show configuration

Configuration - db19c

  Protection Mode: MaxAvailability
  Members:
  DB19C_SITE1 - Primary database
    Warning: ORA-16629: database reports a different protection level from the protection mode

    DB19C_SITE2 - Physical standby database
    DB19C_SITE3 - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
WARNING   (status updated 6 seconds ago)

DGMGRL>

The warning is due to the fact that the protection mode was set to MaxPerformance to be able to drop the configuration.

DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY ;
Succeeded.
DGMGRL> show configuration

Configuration - db19c

  Protection Mode: MaxAvailability
  Members:
  DB19C_SITE1 - Primary database
    DB19C_SITE2 - Physical standby database
    DB19C_SITE3 - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 37 seconds ago)

DGMGRL>

Now let’s run the EXPORT command without specifying a file name

DGMGRL> EXPORT CONFIGURATION
Succeeded.
DGMGRL>

We can see in the trace directory that a default name is generated for the file

DB19C_dmon_5912_brkmeta_1.trc

If we run the export command again

DGMGRL> EXPORT CONFIGURATION
Succeeded.
DGMGRL>

A second default file is created

DB19C_dmon_5912_brkmeta_2.trc

The article Oracle 19C : Exporting and Importing Broker Configuration first appeared on Blog dbi services.

Oracle 19C : Dynamic Change of Fast-Start Failover Target

Fri, 2019-05-03 15:04

Oracle 19c is now available on premises and comes with a lot of new features. One of them, for the Data Guard Broker, is that we can now dynamically change the fast-start failover target to a specified member of the configuration without disabling fast-start failover.
I have tested this new feature and describe it in this blog.
I am using 3 servers with Oracle Linux.
The Data Guard environment is already built and the broker is already configured.
To enable fast-start failover there are some requirements; in particular, flashback database must be enabled on the primary and on the standby databases.
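
A quick way to check this, and to enable flashback database where it is missing (a sketch, assuming a fast recovery area is already configured):

SQL> select flashback_on from v$database;
SQL> alter database flashback on;
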
First we set the redo transport to SYNC for the 3 databases

DGMGRL> edit database 'DB19C_SITE1' set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database 'DB19C_SITE2' set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database 'DB19C_SITE3' set property LogXptMode='SYNC';
Property "logxptmode" updated

Then we change the protection mode to MaxAvailability

DGMGRL>  EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
Succeeded.

Then we enable fast-start failover

DGMGRL> enable fast_start failover;
Enabled in Zero Data Loss Mode.

Below is the status of the configuration, and we can see that DB19C_SITE2 (marked with an asterisk) is the target for fast-start failover

DGMGRL> show configuration

Configuration - db19c

  Protection Mode: MaxAvailability
  Members:
  DB19C_SITE1 - Primary database
    DB19C_SITE2 - (*) Physical standby database
    DB19C_SITE3 - Physical standby database

Fast-Start Failover: Enabled in Zero Data Loss Mode

Configuration Status:
SUCCESS   (status updated 55 seconds ago)

DGMGRL>

The observer status also shows DB19C_SITE2 as the active target

DGMGRL> show observer

Configuration - db19c

  Primary:            DB19C_SITE1
  Active Target:      DB19C_SITE2

Observer "standserver2" - Master

  Host Name:                    standserver2
  Last Ping to Primary:         2 seconds ago
  Last Ping to Target:          4 seconds ago

DGMGRL>
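
The same information can be cross-checked from SQL*Plus on the primary (a quick sanity check, not part of the original setup):

SQL> select fs_failover_status, fs_failover_current_target from v$database;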

For example let’s say we want to switchover to DB19C_SITE3

DGMGRL> switchover to 'DB19C_SITE3';
Performing switchover NOW, please wait...
Error: ORA-16655: specified standby database not the current fast-start failover target standby

Failed.
Unable to switchover, primary database is still "DB19C_SITE1"
DGMGRL>

As we can see, the switchover is rejected because the current fast-start failover target is DB19C_SITE2. We have to change the target to DB19C_SITE3 first.
To do this change dynamically, we use the SET FAST_START FAILOVER TARGET command.

DGMGRL> SET FAST_START FAILOVER TARGET TO 'DB19C_SITE3';
Waiting for Fast-Start Failover target to change to "DB19C_SITE3"...
Succeeded.
DGMGRL>

We can query the broker to verify the change

DGMGRL> show configuration

Configuration - db19c

  Protection Mode: MaxAvailability
  Members:
  DB19C_SITE1 - Primary database
    DB19C_SITE3 - (*) Physical standby database
    DB19C_SITE2 - Physical standby database

Fast-Start Failover: Enabled in Zero Data Loss Mode

Configuration Status:
SUCCESS   (status updated 22 seconds ago)

DGMGRL>

And now I can switch over to DB19C_SITE3

DGMGRL> switchover to 'DB19C_SITE3';
Performing switchover NOW, please wait...
New primary database "DB19C_SITE3" is opening...
Operation requires start up of instance "DB19C" on database "DB19C_SITE1"
Starting instance "DB19C"...
Connected to an idle instance.
ORACLE instance started.
Connected to "DB19C_SITE1"
Database mounted.
Database opened.
Connected to "DB19C_SITE1"
Switchover succeeded, new primary is "DB19C_SITE3"
DGMGRL>

And here is the new status of the configuration

DGMGRL> show configuration

Configuration - db19c

  Protection Mode: MaxAvailability
  Members:
  DB19C_SITE3 - Primary database
    DB19C_SITE1 - (*) Physical standby database
    DB19C_SITE2 - Physical standby database

Fast-Start Failover: Enabled in Zero Data Loss Mode

Configuration Status:
SUCCESS   (status updated 51 seconds ago)

DGMGRL>

The article Oracle 19C : Dynamic Change of Fast-Start Failover Target first appeared on Blog dbi services.

Unable to add physical standby database

Tue, 2019-04-30 10:00

Recently I tried to set up some new physical standby databases and got the following strange message:


DGMGRL> ADD DATABASE "XXXX" as connect identifier is "XXXX" maintained as physical;
Error: ORA-16698: LOG_ARCHIVE_DEST_n parameter set for object to be added

Often you can read that you have to check the log_archive_dest_2 parameter on the primary side. But this error message can also occur if log_archive_dest_2 on the standby side points to another standby database.
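
A quick way to see what is currently set there (run on the standby database to be added):

SQL> show parameter log_archive_dest_2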

The solution is to run the following command on the standby database to be added:


alter system set log_archive_dest_2 = '';

The article Unable to add physical standby database first appeared on Blog dbi services.

Oracle 18c Grid Infrastructure on Windows Server

Sat, 2019-04-27 10:05

Oracle Grid Infrastructure can be installed on the Windows platform, and the steps are the same as on other platforms. In this blog we are going to install Oracle GI 18c on Windows Server 2016. I have two disks on my server:
Disk 0: for the system
Disk 1: for the ASM
I am using a VirtualBox virtual machine.
We suppose that the grid infrastructure software is already downloaded and decompressed in the grid home.
Like on other platforms, we have to configure the ASM disk. In the documentation we can read:
The only partitions that OUI displays for Windows systems are logical drives that are on disks and have been marked (or stamped) with asmtoolg or by Oracle Automatic Storage Management (Oracle ASM) Filter Driver.
So Disk1 should not be formatted and should not be assigned to a letter.
Then the first step is to create a logical partition using the Windows diskpart utility.

Microsoft Windows [Version 10.0.14393]
(c) 2016 Microsoft Corporation. All rights reserved.

C:\Users\Administrator>diskpart

Microsoft DiskPart version 10.0.14393.0

Copyright (C) 1999-2013 Microsoft Corporation.
On computer: RACWIN2

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           60 GB      0 B
  Disk 1    Online           20 GB    20 GB

DISKPART> select disk 1

Disk 1 is now the selected disk.

DISKPART> create partition extended

DiskPart succeeded in creating the specified partition.

DISKPART> create partition logical

DiskPart succeeded in creating the specified partition.

DISKPART>

We can then list the existing partitions for Disk 1

DISKPART> list partition

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 0    Extended            19 GB  1024 KB
* Partition 1    Logical             19 GB  2048 KB

DISKPART>

Once the logical partition is created, we can launch the asmtool or asmtoolg utility, which comes with the grid software

c:\app\grid\18000\bin>asmtoolg.exe

The first time we executed the asmtoolg.exe command, we got the following error

According to the Oracle support note Windows: asmtoolg: MSVCR120.dll is missing from your computer (Doc ID 2251869.1), we have to download and install the Visual C++ 2013 Redistributable Package.
Once done, we launch the asmtoolg utility again

Clicking on next, we can choose the disk we want to stamp for ASM

Click on Next

Click on Next

And Click to Finish. We can then list the disks marked for ASM with the asmtool utility.

C:\Users\Administrator>cd c:\app\18000\grid\bin

c:\app\18000\grid\bin>asmtool.exe -list
NTFS                             \Device\Harddisk0\Partition1              500M
NTFS                             \Device\Harddisk0\Partition2            60938M
ORCLDISKDATA0                    \Device\Harddisk1\Partition1            20477M
c:\app\18000\grid\bin>

Now it’s time to launch the gridSetup executable

c:\app\grid\18000>gridSetup.bat

We decide to ignore the warning shown by the installer.

At the end, we get an error from the cluster verification utility. This is expected because we ignored some prerequisites.

We can verify that the installation was fine

c:\app\18000\grid>crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       racwin2                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       racwin2                  STABLE
ora.asm
               ONLINE  ONLINE       racwin2                  Started,STABLE
ora.ons
               OFFLINE OFFLINE      racwin2                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       racwin2                  STABLE
ora.evmd
      1        ONLINE  ONLINE       racwin2                  STABLE
--------------------------------------------------------------------------------

c:\app\18000\grid>

We can connect to the ASM instance

C:\Users\Administrator>set oracle_sid=+ASM

C:\Users\Administrator>sqlplus / as sysasm

SQL*Plus: Release 18.0.0.0.0 - Production on Sat Apr 27 05:49:38 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.


SQL> select name,state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
DATA                           MOUNTED

SQL>

Conclusion
Once the grid infrastructure is configured, we still have to install the Oracle database software.

The article Oracle 18c Grid Infrastructure on Windows Server first appeared on Blog dbi services.

Creating PostgreSQL users with a PL/pgSQL function

Thu, 2019-04-25 04:07

Sometimes you might want to create users in PostgreSQL using a function. One use case for this is that you want to give other users the possibility to create users without granting them the right to do so directly. How is that possible? Much the same as in Oracle, you can create functions in PostgreSQL that execute either with the permissions of the user who created the function or with the permissions of the user who executes it. Let's see how that works.
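
Before looking at the real example, here is a minimal sketch of the difference (the function names are only for illustration): with the default SECURITY INVOKER the function runs as the caller, with SECURITY DEFINER it runs as its owner, which is also what current_user reports inside the function.

-- runs with the privileges of the user calling it (the default)
create or replace function whoami_invoker() returns text
as $$ select current_user::text $$ language sql;

-- runs with the privileges of the user who owns the function
create or replace function whoami_definer() returns text
as $$ select current_user::text $$ language sql security definer;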

Here is a little PL/pgSQL function that creates a user with a given password, does some checks on the input parameters and tests if the user already exists:

create or replace function f_create_user ( pv_username name
                                         , pv_password text
                                         ) returns boolean
as $$
declare
  lb_return boolean := true;
  ln_count integer;
begin
  if ( pv_username is null )
  then
     raise warning 'Username must not be null';
     lb_return := false;
  end if;
  if ( pv_password is null )
  then
     raise warning 'Password must not be null';
     lb_return := false;
  end if;
  -- test if the user already exists
  begin
      select count(*)
        into ln_count
        from pg_user
       where usename = pv_username;
  exception
      when no_data_found then
          -- ok, no user with this name is defined
          null;
      when too_many_rows then
          -- this should really never happen
          raise exception 'You have a huge issue in your catalog';
  end;
  if ( ln_count > 0 )
  then
     raise warning 'The user "%" already exist', pv_username;
     lb_return := false;
  else
      -- build the statement with format() so that the supplied password value
      -- (and not the literal string 'pv_password') is used, properly quoted
      execute format('create user %I with password %L', pv_username, pv_password);
  end if;
  return lb_return;
end;
$$ language plpgsql;

Once that function is created:

postgres=# \df
                                   List of functions
 Schema |     Name      | Result data type |        Argument data types         | Type 
--------+---------------+------------------+------------------------------------+------
 public | f_create_user | boolean          | pv_username name, pv_password text | func
(1 row)

… users can be created by calling this function when connected as a user with permissions to do so:

postgres=# select current_user;
 current_user 
--------------
 postgres
(1 row)

postgres=# select f_create_user('test','test');
 f_create_user 
---------------
 t
(1 row)

postgres=# \du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
 test      |                                                            | {}

Trying to execute this function with a user that does not have permissions to create other users will fail:

postgres=# create user a with password 'a';
CREATE ROLE
postgres=# grant EXECUTE on function f_create_user(name,text) to a;
GRANT
postgres=# \c postgres a
You are now connected to database "postgres" as user "a".
postgres=> select f_create_user('test2','test2');
ERROR:  permission denied to create role
CONTEXT:  SQL statement "create user test2 with password 'pv_password'"
PL/pgSQL function f_create_user(name,text) line 35 at EXECUTE

You can make that work by declaring the function as SECURITY DEFINER, so that it runs with the permissions of the user who created it:

create or replace function f_create_user ( pv_username name
                                         , pv_password text
                                         ) returns boolean
as $$
declare
  lb_return boolean := true;
  ln_count integer;
begin
...
end;
$$ language plpgsql security definer;

From now on our user “a” is allowed to create other users:

postgres=> select current_user;
 current_user 
--------------
 a
(1 row)

postgres=> select f_create_user('test2','test2');
 f_create_user 
---------------
 t
(1 row)

postgres=> \du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 a         |                                                            | {}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
 test      |                                                            | {}
 test2     |                                                            | {}

Before implementing something like this, have a look at the “Writing SECURITY DEFINER Functions Safely” section in the documentation. There are some points to take care of, such as revoking the default execute privilege from public:

postgres=# revoke all on function f_create_user(name,text) from public;
REVOKE

… and correctly setting the search_path.
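
One way to do that is to pin the search_path on the function itself, so it cannot be hijacked by objects in a schema the caller controls (a sketch based on the documentation's recommendation):

alter function f_create_user(name, text) set search_path = pg_catalog, pg_temp;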

The article Creating PostgreSQL users with a PL/pgSQL function first appeared on Blog dbi services.

Direct NFS, ODM 4.0 in 12.2: archiver stuck situation after a shutdown abort and restart

Wed, 2019-04-24 12:23

A customer had an interesting case recently: since Oracle 12.2 he got archiver stuck situations after a shutdown abort and restart. I reproduced the issue; it is caused by Direct NFS since it runs ODM 4.0 (i.e. since 12.2), and it also reproduces on 18.5. When Direct NFS is enabled, the archiver process first writes to a file with a leading dot in its name, e.g.


.arch_1_90_985274359.arc

When the file has been fully copied from the online redo log, it is renamed so that it no longer contains the leading dot. Using the previous example:


arch_1_90_985274359.arc

When I do a “shutdown abort” while the archiver is in the process of writing to the archive file (with the leading dot in its name) and then restart the database, Oracle is not able to cope with that file. In the alert log I get the following errors:


2019-04-17T10:22:33.190330+02:00
ARC0 (PID:12598): Unable to create archive log file '/arch_backup/gen183/archivelog/arch_1_90_985274359.arc'
2019-04-17T10:22:33.253476+02:00
Errors in file /u01/app/oracle/diag/rdbms/gen183/gen183/trace/gen183_arc0_12598.trc:
ORA-19504: failed to create file "/arch_backup/gen183/archivelog/arch_1_90_985274359.arc"
ORA-17502: ksfdcre:8 Failed to create file /arch_backup/gen183/archivelog/arch_1_90_985274359.arc
ORA-17500: ODM err:File exists
2019-04-17T10:22:33.254078+02:00
ARC0 (PID:12598): Error 19504 Creating archive log file to '/arch_backup/gen183/archivelog/arch_1_90_985274359.arc'
ARC0 (PID:12598): Stuck archiver: inactive mandatory LAD:1
ARC0 (PID:12598): Stuck archiver condition declared

The DB continues to operate normally until it has to overwrite the online redo log file which has not been fully archived yet. At that point the archiver becomes stuck and modifications on the DB are no longer possible.
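
You can watch this happening from SQL*Plus as well (a quick check, not part of the original test case):

SQL> select group#, sequence#, status, archived from v$log;
SQL> select dest_id, status, error from v$archive_dest where dest_id = 1;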

When I remove the incomplete archive file, the DB continues to operate normally:


rm .arch_1_90_985274359.arc

Using a 12.1 database with ODM 3.0 I didn’t see that behavior: I could also see an archived redo log file with a leading dot in its name, but after a shutdown abort and restart Oracle removed the file itself and there was no archiver problem.

Testcase:

1.) Make sure you have Direct NFS enabled


cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on
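
Once the instance is up again and has files open over NFS, you can verify that Direct NFS is really in use (a quick check; the alert log also reports the ODM library version at startup, as shown further below):

SQL> select svrname, dirname from v$dnfs_servers;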

2.) Configure a mandatory log archive destination pointing to an NFS-mounted filesystem, e.g.


[root]# mount -t nfs -o rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,proto=tcp,suid,nolock,noac nfs_server:/arch_backup /arch_backup
 
SQL> alter system set log_archive_dest_1='location=/arch_backup/gen183/archivelog mandatory reopen=30';

3.) Produce some DML-load on the DB

I created 2 tables t3 and t4 as a copy of all_objects with approx 600’000 rows:


SQL> create table t3 as select * from all_objects;
SQL> insert into t3 select * from t3;
SQL> -- repeat above insert until you have 600K rows in t3
SQL> commit;
SQL> create table t4 as select * from t3;

Run the following PL/SQL block to produce redo:


begin
for i in 1..20 loop
delete from t3;
commit;
insert into t3 select * from t4;
commit;
end loop;
end;
/

4.) While the PL/SQL block of 3.) is running, check the archive files produced in your log archive destination


ls -ltra /arch_backup/gen183/archivelog

Once you see a file created with a leading dot in its name, shutdown abort the database:


oracle@18cR0:/arch_backup/gen183/archivelog/ [gen183] ls -ltra /arch_backup/gen183/archivelog
total 2308988
drwxr-xr-x. 3 oracle oinstall 23 Apr 17 10:13 ..
-r--r-----. 1 oracle oinstall 2136861184 Apr 24 18:24 arch_1_104_985274359.arc
drwxr-xr-x. 2 oracle oinstall 69 Apr 24 18:59 .
-rw-r-----. 1 oracle oinstall 2090587648 Apr 24 18:59 .arch_1_105_985274359.arc
 
SQL> shutdown abort

5.) If the file with the leading dot is still there after the shutdown, you have reproduced the issue. Just start up the DB and “tail -f” your alert log file.


oracle@18cR0:/arch_backup/gen183/archivelog/ [gen183] cdal
oracle@18cR0:/u01/app/oracle/diag/rdbms/gen183/gen183/trace/ [gen183] tail -f alert_gen183.log
...
2019-04-24T19:01:24.775991+02:00
Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 4.0
...
2019-04-24T19:01:43.770196+02:00
ARC0 (PID:8876): Unable to create archive log file '/arch_backup/gen183/archivelog/arch_1_105_985274359.arc'
2019-04-24T19:01:43.790546+02:00
Errors in file /u01/app/oracle/diag/rdbms/gen183/gen183/trace/gen183_arc0_8876.trc:
ORA-19504: failed to create file "/arch_backup/gen183/archivelog/arch_1_105_985274359.arc"
ORA-17502: ksfdcre:8 Failed to create file /arch_backup/gen183/archivelog/arch_1_105_985274359.arc
ORA-17500: ODM err:File exists
ARC0 (PID:8876): Error 19504 Creating archive log file to '/arch_backup/gen183/archivelog/arch_1_105_985274359.arc'
ARC0 (PID:8876): Stuck archiver: inactive mandatory LAD:1
ARC0 (PID:8876): Stuck archiver condition declared
...

This is a serious problem because it may cause an archiver stuck situation after a crash. I opened a Service Request with Oracle; the SR has been assigned to the ODM team. Once I get a resolution I’ll update this blog.

The article Direct NFS, ODM 4.0 in 12.2: archiver stuck situation after a shutdown abort and restart first appeared on Blog dbi services.
