Yann Neuhaus

dbi services technical blog

DockerCon2018 Barcelona – Day 1

Mon, 2018-12-03 17:00

As a football fan, traveling to Barcelona without watching a football game was inconceivable, so I started my trip by attending the match against Villarreal at Camp Nou  8-)


 

DockerCon Day 1

Today, together with David Barbarin, was our first day at DockerCon2018 in Barcelona. The conference takes place in a huge conference center and is split between several types of sessions, including workshops, keynotes, hands-on labs and the hallway track.

The first day focused only on workshops, hands-on labs and the hallway track, where you can meet Docker experts and discuss multiple topics.

Interesting workshops were offered today, but to attend a workshop you had to register for it in advance. Fortunately, for latecomers, there was a waiting list on a first come, first served basis.

We started by following a very interesting workshop: Migrating .NET application to Docker Containers.


The instructor showed us how to migrate a monolithic application to Docker containers. The starting point was a simple .NET application running in a single container, and step by step the instructor explained to us, through efficient demos, how to split the different services of our application into a microservices architecture. We easily had access to a lab environment hosted in Azure through an RDP connection.

The two-hour workshop was split into the following parts (a small sketch of steps 3 and 4 follows the list):

  1. Build a single .NET application in one Docker container.
  2. Split the home page from the rest of the website by using two containers.
  3. Add an SQL Server database for the data, with persistent storage.
  4. Add an NGINX proxy to redirect requests for the home page and the other pages of the website.
  5. Create a container for the API with a connection to the database.
  6. Add a message queue for the website.
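To make steps 3 and 4 a bit more concrete, here is a minimal, hedged sketch (my own commands, not the workshop material; image names, the password and the nginx.conf are assumptions) of running the database with a named volume and NGINX as a reverse proxy in front:

# create a user-defined network so the containers can reach each other by name
docker network create app-net

# step 3: SQL Server with a named volume so the data survives container restarts
docker run -d --name db --network app-net \
  -e ACCEPT_EULA=Y -e SA_PASSWORD='StrongPwd#2018' \
  -v mssql-data:/var/opt/mssql \
  mcr.microsoft.com/mssql/server:2017-latest

# step 4: NGINX in front, routing requests to the home page and website containers
docker run -d --name proxy --network app-net -p 80:80 \
  -v "$PWD/nginx.conf":/etc/nginx/nginx.conf:ro \
  nginx:alpine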

 

After lunch, we planned to follow another workshop concerning storage in Docker, but the session and the waiting list were full. So we decided to get started with the Hands-On Labs. After signing up for the Hands-On Labs, you get access to your own hosted environment and can explore all the features and capabilities of Docker through different labs:

  • Docker for Beginners – Linux
  • Docker for Beginners – Windows
  • Docker EE – an Introduction
  • Docker Security
  • Modernizing Traditional Java Applications
  • Docker EE with Kubernetes

 

We finally ended the day by attending one of the last workshops of the day: Kubernetes Security, by Dimitris Kapadinis. During this workshop, the instructor showed us different methods to secure a Kubernetes cluster.


The workshop was composed of the following parts (a minimal sketch of step 4 follows the list):

  1. Create a Kubernetes cluster with Play-with-Kubernetes (or minikube locally).
  2. Create and deploy a simple web application using Kubernetes.
  3. Hack the web application by entering inside the web-app pod.
  4. Protect the application by creating a security context for Kubernetes – Securing Kubernetes components.
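For step 4, here is a minimal sketch (my own example, not the workshop code) of such a security context: even if someone gets a shell inside the pod, they end up as an unprivileged user on a read-only root filesystem without any Linux capabilities.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hardened-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      runAsNonRoot: true
      runAsUser: 1000
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
EOF
# a shell inside the pod now runs as uid 1000 and cannot escalate its privileges
kubectl exec -it hardened-demo -- id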

 

It was a very intensive and interesting first day; I learned a lot through the different workshops and labs I did, so see you tomorrow  ;-)

 


Oooooops or how to undelete a file on an ext4 filesystem

Mon, 2018-12-03 07:06

It happens within the blink of an eye.
A delete command was executed and half a second after you hit the enter button you knew it. That was a mistake.
This is the scenario that leads to this blog entry, in which I show you how you can get your files back if you are lucky…

Short summary for the desperate

If you landed here you are probably in the same situation I was in, so here is a short summary.
extundelete did not work for me, but ext4magic did, and I had to compile it from source.

  • Remount the filesystem read-only or umount it as soon as possible after the incident
  • Back up the filesystem journal; you will need it for the restore
    • debugfs -R "dump <8> /tmp/VMSSD01.journal" /dev/mapper/VMSSD01-VMSSD01
  • Check at which point in time your files were still there
    • ext4magic /dev/mapper/VMSSD01-VMSSD01 -H -a $(date -d "-3hours" +%s)
  • List the files as of that point in time
    • ext4magic /dev/mapper/VMSSD01-VMSSD01 -a 1542796423 -f / -l
  • Restore the files to a different disk/mountpoint
    • ext4magic /dev/mapper/VMSSD01-VMSSD01 -a 1542796423 -f / -j /tmp/VMSSD01.journal -r -d /tmp/recover
  • Be happy and promise to never do it again
And now the whole story

So it happened that I deleted two VM images by accident. I was cleaning up my environment and there were two files, centos75_base_clone-1.qcow2 and centos75_base_clone-2.qcow2. As you can see, I was using such a clean and good naming convention that it was directly obvious these were the OS image files for my “nomachine” and my “networkmaster” machines… Especially the second one, with my dhcp, dns, nfs and iscsi configuration, would take some time to configure again.
At first nothing seemed to be wrong; all VMs were running normally until I tried to restart one of them, and I went from :cool: to :shock: and in the end to :oops:

I remembered that it was very important to unmount the filesystem as quickly as possible and to stop changing anything on it:
umount /VMSSD01
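If the filesystem cannot be unmounted because files are still open, remounting it read-only is, to my knowledge, the next best thing to at least stop further changes:
mount -o remount,ro /VMSSD01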
So a solution had to be found. A short Google search brought me to a tool with the promising name “extundelete”, which can be found in the CentOS repository in the current version 0.2.4 from 2012…
So a yum install -y extundelete and a man extundelete later I tried the command
extundelete --restore-all --after $(date -d "-2 hours" +%s) /dev/mapper/VMSSD01-VMSSD01
And… it did not work.
A cryptic core dump and no solution on Google, so I went from :shock: to :cry: .
But it was not the time to give up. With the courage of the desperate, I searched around and found the tool ext4magic. Magic never sounded better than at this very moment. The tool is newer than extundelete, even though it builds on it. So I downloaded and compiled the newest version 0.3.2 (from 2014). Before you can compile the source you need some dependencies:

yum install -y libblkid \
libblkid-devel \
zerofree e2fsp* \
zlib-devel \
libbz2-devel \
bzip2-devel \
file-devel

and to add some more “Magic” you also need yum install -y perl-File-LibMagic

A short ./configure && make later I got a binary and, to say it with Star Wars, “A New Hope” started growing in me.

I listed all the files deleted in the last 3 hours and there they were. At least I thought these had to be my image files:
./src/ext4magic /dev/mapper/VMSSD01-VMSSD01 -H -a $(date -d "-3hours" +%s)

I listed the content at the different timestamps and found at least one of my files. The timestamp 1542797503 showed some files, so I tried to list all files from an earlier timestamp, and one of my missing image files showed up.
./src/ext4magic /dev/mapper/VMSSD01-VMSSD01 -a 1542796423 -f / -l
My mood started getting better and better and switched from :cry: to :???:.
I tried to restore my file
./src/ext4magic /dev/mapper/VMSSD01-VMSSD01 -a 1542796423 -f / -j /tmp/VMSSD01.journal -r -d /VMSSD02/recovery/
My first file was back :grin: . But the tool did not stop; it recovered more and more files, and my hope was growing that I would get both files back. The first file was back with its original name. For the second one, it was not that clear what had happened. The tool was still running, recovering file after file after file and putting everything into the MAGIC-2 subdirectories.

I canceled the recovery job and gave it a shot with the recovered files.
After renaming the *.unknown files I tried to boot up the VM. To my surprise the first try was successful and all my VMs were back online.

Summary
  • Do not delete your files (obviously).
  • Use a clear naming convention for all your files.
  • Running lsof before deleting a supposedly unused file is always a good idea (see the sketch below).
  • ext4magic worked for me and did as promised. My files are back, the VMs are up and running again. I am happy and :cool: .
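As mentioned above, a quick lsof check before deleting would have saved me all of this. A minimal sketch (the paths are the ones from my story; adapt them to your environment):

# is any process still using the image file? (the running VMs were, in my case)
lsof /VMSSD01/centos75_base_clone-2.qcow2
# or check the whole mountpoint before cleaning up
lsof +D /VMSSD01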


OEM 13c Repository view gc$metric_values_daily

Mon, 2018-12-03 03:50

Oracle OEM 13c offers the possibility to generate reports about Oracle Database Tablespace Monthly Space Usage. The report is quite useful and displays a lot of information.


 


 

The report displays the details for each month and for each tablespace, but the GUI is sometimes painful: you have a nice report but many tablespaces and many numbers :=(

At a client’s site, I faced the following situation: a production database had its data files distributed across three file systems, /u01, /u02 and /u03, and I wanted to know the evolution of the available data file space on each of them.

My first idea was to write SQL queries against the tablespace history view dba_hist_tbspc_space_usage, but this view does not contain any information about the file systems, and as a consequence it is not possible to know how much space each file system will need in the future.

OEM 13c offers the capability of viewing graphs for most of the metrics within OEM, but forecasting and trending capabilities are not present, and most metric categories are set up for real-time alerting, not for historical trending.

And I did not find any views like dba_hist_datafiles_space_usage :=)

So I checked in EM13c to verify that the file system space usage is correctly monitored. Naturally each file system is monitored and triggers a warning when the available space is below 20%, or a critical alert when it is below 5%.

As a consequence I had the idea to query the OEM 13c repository views, and this turned out to be the right solution.

First we can query the mgmt$alert_current view to display the alerts generated for the metric_name Filesystems on targets of type host:

SQL> select target_name , target_type, metric_name, metric_column, column_label,alert_state, violation_level
  2  FROM MGMT$ALERT_CURRENT
  3  where metric_name ='Filesystems'
  4  and target_guid in (SELECT target_guid
  5  FROM mgmt$target
  6* WHERE target_type ='host');

em13c.localdomain
host
Filesystems
pctAvailable
Filesystem Space Available (%)
Warning 			20

There are a lot of management repository views providing access to target, metric and monitoring information stored in the Management Repository. These views make it possible to create your own scripts to query historical data.

In my case I used the gc$metric_values_daily view corresponding to the data in the EM_METRIC_VALUES_DAILY table:

SQL> desc gc$metric_values_daily
 Name					   Null?    Type
 ----------------------------------------- -------- ----------------------------
 ENTITY_TYPE				   NOT NULL VARCHAR2(64)
 ENTITY_NAME				   NOT NULL VARCHAR2(256)
 ENTITY_GUID				   NOT NULL RAW(16)
 PARENT_ME_TYPE 				    VARCHAR2(64)
 PARENT_ME_NAME 				    VARCHAR2(256)
 PARENT_ME_GUID 			   NOT NULL RAW(16)
 TYPE_META_VER				   NOT NULL VARCHAR2(8)
 METRIC_GROUP_NAME			   NOT NULL VARCHAR2(64)
 METRIC_COLUMN_NAME			   NOT NULL VARCHAR2(64)
 COLUMN_TYPE				   NOT NULL NUMBER(1)
 COLUMN_INDEX				   NOT NULL NUMBER(3)
 DATA_COLUMN_TYPE			   NOT NULL NUMBER(2)
 METRIC_GROUP_ID			   NOT NULL NUMBER(38)
 METRIC_GROUP_LABEL				    VARCHAR2(64)
 METRIC_GROUP_LABEL_NLSID			    VARCHAR2(64)
 METRIC_COLUMN_ID			   NOT NULL NUMBER(38)
 METRIC_COLUMN_LABEL				    VARCHAR2(64)
 METRIC_COLUMN_LABEL_NLSID			    VARCHAR2(64)
 DESCRIPTION					    VARCHAR2(1024)
 SHORT_NAME					    VARCHAR2(40)
 UNIT						    VARCHAR2(64)
 IS_FOR_SUMMARY 				    NUMBER
 IS_STATEFUL					    NUMBER
 NON_THRESHOLDED_ALERTS 			    NUMBER
 METRIC_KEY_ID				   NOT NULL NUMBER(38)
 KEY_PART_1				   NOT NULL VARCHAR2(256)
 KEY_PART_2				   NOT NULL VARCHAR2(256)
 KEY_PART_3				   NOT NULL VARCHAR2(256)
 KEY_PART_4				   NOT NULL VARCHAR2(256)
 KEY_PART_5				   NOT NULL VARCHAR2(256)
 KEY_PART_6				   NOT NULL VARCHAR2(256)
 KEY_PART_7				   NOT NULL VARCHAR2(256)
 COLLECTION_TIME			   NOT NULL DATE
 COLLECTION_TIME_UTC				    DATE
 COUNT_OF_COLLECTIONS			   NOT NULL NUMBER(38)
 AVG_VALUE					    NUMBER
 MIN_VALUE					    NUMBER
 MAX_VALUE					    NUMBER
 STDDEV_VALUE					    NUMBER

The meaning of the main columns:

  • ENTITY_TYPE: the type of the target, for example host or oracle_database
  • ENTITY_NAME: the target or component name, for example the hostname
  • METRIC_GROUP_NAME: name of the metric group, for example Filesystems
  • METRIC_COLUMN_NAME: name of the metric column, for example available or pct
  • KEY_PART_1: key part 1 of the composite key, for example the name of the file system
  • KEY_PART_2: key part 2 of the composite key
  • COLLECTION_TIME: collection time in the target time zone
  • AVG_VALUE: average value of the metric for the day
  • MIN_VALUE: minimum value of the metric for the day
  • MAX_VALUE: maximum value of the metric for the day

So in my case I only had to execute the following SQL query:

select entity_name, 
key_part_1,
collection_time, 
avg_value as avg, 
avg_value/1000000 as percent  from gc$metric_values_daily
where metric_group_name='Filesystems' 
and metric_column_name= 'available' 
and key_part_1='/u01' 
and entity_name ='oraprod.com'
order by 1,2
/

oraprod.com     /u01          04-OCT-16  169545.44  	.380928435
oraprod.com     /u01          05-OCT-16  169572.44	    .380928435
oraprod.com     /u01          06-OCT-16  169583.44	    .380928435
…
oraprod.com     /u01           17-NOV-18  169538.44 	.172295163
oraprod.com     /u01           18-NOV-18  169538.44 	.172295163
oraprod.com     /u01           19-NOV-18  169525.94 	.172282459
oraprod.com     /u01           20-NOV-18  169498.44 	.172254512

With this result I can easily see how much space each file system consumes over time. We can use Excel’s capabilities to determine the trend of the available file system size.
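If you prefer to stay in SQL instead of Excel, here is a rough sketch (my own query, based on the columns described above; the sysman connection is only an example) that estimates the average daily change of available space per file system over the last 90 days:

sqlplus -s sysman@EMREP <<'EOF'
select entity_name,
       key_part_1,
       round( ( max(avg_value) keep (dense_rank last  order by collection_time)
              - max(avg_value) keep (dense_rank first order by collection_time) )
              / 90, 2) as avg_daily_change
from   gc$metric_values_daily
where  metric_group_name  = 'Filesystems'
and    metric_column_name = 'available'
and    collection_time    > sysdate - 90
group  by entity_name, key_part_1
order  by 1, 2;
EOF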

Conclusion:

We often forget that OEM is not only a graphical administration tool; the most interesting part is that OEM stores data for important metrics at regular intervals in the management repository. If you understand the structure of the management repository views, you have extraordinary possibilities to extract historical data and build graphs in order to analyze your infrastructure more precisely.


AWS re:invent 2018 – Day 5 – Final day

Sat, 2018-12-01 12:34

Friday was the last day of the AWS re:Invent convention in Las Vegas. I didn’t know in advance what to expect because, I have to admit, even though I have started to use AWS there is still a lot to learn. And that’s why I personally think working with technology is interesting: it’s always moving and there is always something new to investigate or learn.

This morning, I attended a very interesting session about the life of a Cloud Ops engineer, with 2 people from AWS speaking about scenarios that may happen (or have already happened) in the daily life of system engineers. I was quite surprised by one scenario where the starting point to analyze a failure was the billing console. But imagine: you get a call from a customer because the application is down and you don’t have a lot of information about it. In AWS, you pay for the services you use, so you can find some resources directly on the bill. Then you can pull the string and follow the different resources to get a better view. CloudTrail can then help to see what was changed on those resources because it keeps track of API calls.

Knowing some key services and enabling them can help operations/DevOps teams a lot to support applications in an AWS environment: CloudTrail, CloudWatch, GuardDuty, to name a few. Using CloudFormation or similar tools to provision the infrastructure can also help to detect drift when there is an issue. It was definitely worth waking up this morning.

I ended the conference with a new-launch session about Amazon RDS on VMware in a local datacenter. It allows customers to offload the management part to AWS while keeping the data on site. AWS will provide a bunch of control machines built by AWS and VMware, which is an AWS partner. These machines connect through a private VPN to control machines in the AWS Cloud infrastructure. It currently supports PostgreSQL / MySQL, and SQL Server and Oracle are planned. It will be a BYOL (Bring Your Own Licence) model, so it will be interesting to see how to license that environment. Aurora is not supported, and with what I learned this week about its storage architecture I think it would be a difficult challenge.

With Amazon RDS on VMware and AWS Outposts, which was announced in the keynote, AWS is providing solutions for customers who want to use the Cloud but keep data in their local datacenter. Oracle proposed Cloud at Customer to achieve a similar goal. Oracle’s proposal is built on hardware (engineered systems) provided and maintained by Oracle, while AWS’ solution is built on VMware. But many companies are already running VMware successfully on their own hardware, so the “fight” will be interesting.


PostgreSQL 12: csv output format for psql

Sat, 2018-12-01 07:25

Getting data out of PostgreSQL in csv format is not a big issue. Using copy you can do that easily. PostgreSQL 12 will enhance psql so that you can directly return csv formatted output from a query.

Using “copy” you would do something like this:

postgres=# copy pg_database to '/var/tmp/aa.txt' with (format csv);
COPY 3
postgres=# \! cat /var/tmp/aa.txt
11788,postgres,10,6,en_US.utf8,en_US.utf8,f,t,-1,11787,598,1,1663,
1,template1,10,6,en_US.utf8,en_US.utf8,t,t,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
11787,template0,10,6,en_US.utf8,en_US.utf8,t,f,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
postgres=# 

What you cannot do directly with copy is dump from a view:

postgres=# copy pg_settings to '/var/tmp/aa.txt' with (format csv);
ERROR:  cannot copy from view "pg_settings"
HINT:  Try the COPY (SELECT ...) TO variant.

Of course you can work around that by doing it like this:

postgres=# copy (select * from pg_settings) to '/var/tmp/aa.txt' with (format csv);
COPY 309
postgres=# \! head -2 /var/tmp/aa.txt
allow_system_table_mods,off,,Developer Options,Allows modifications of the structure of system tables.,,postmaster,bool,default,,,,off,off,,,f
application_name,psql,,Reporting and Logging / What to Log,Sets the application name to be reported in statistics and logs.,,user,string,client,,,,"",psql,,,f

With PostgreSQL 12 there will be an easier way of doing that:

postgres=# \pset format csv
Output format is csv.
postgres=# select * from pg_database;
oid,datname,datdba,encoding,datcollate,datctype,datistemplate,datallowconn,datconnlimit,datlastsysoid,datfrozenxid,datminmxid,dattablespace,datacl
11788,postgres,10,6,en_US.utf8,en_US.utf8,f,t,-1,11787,598,1,1663,
1,template1,10,6,en_US.utf8,en_US.utf8,t,t,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
11787,template0,10,6,en_US.utf8,en_US.utf8,t,f,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"

… or directly when executing psql:

postgres@pgbox:/home/postgres/ [PGDEV] psql -c "select * from pg_database" --csv  postgres
oid,datname,datdba,encoding,datcollate,datctype,datistemplate,datallowconn,datconnlimit,datlastsysoid,datfrozenxid,datminmxid,dattablespace,datacl
11788,postgres,10,6,en_US.utf8,en_US.utf8,f,t,-1,11787,598,1,1663,NULL
1,template1,10,6,en_US.utf8,en_US.utf8,t,t,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
11787,template0,10,6,en_US.utf8,en_US.utf8,t,f,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
Time: 0.794 ms

This makes it very easy to pass the data to other programs for further processing. By switching to unaligned output mode you can even specify the field separator:

postgres@pgbox:/home/postgres/ [PGDEV] psql -c "select * from pg_database" --csv --field-separator='##' -A postgres
oid##datname##datdba##encoding##datcollate##datctype##datistemplate##datallowconn##datconnlimit##datlastsysoid##datfrozenxid##datminmxid##dattablespace##datacl
11788##postgres##10##6##en_US.utf8##en_US.utf8##f##t##-1##11787##598##1##1663##NULL
1##template1##10##6##en_US.utf8##en_US.utf8##t##t##-1##11787##598##1##1663##{=c/postgres,postgres=CTc/postgres}
11787##template0##10##6##en_US.utf8##en_US.utf8##t##f##-1##11787##598##1##1663##{=c/postgres,postgres=CTc/postgres}
(3 rows)
Time: 2.105 ms


PostgreSQL 12: log_statement_sample_rate

Fri, 2018-11-30 12:41

A common way to identify long-running queries in PostgreSQL is to set log_min_duration_statement to a value above which statements are considered problematic. In other words: if you know most of your statements usually execute in under a second, you could set log_min_duration_statement to “2s” (which means two seconds) so that whenever a statement takes longer than two seconds it will be reported in the log file. The issue with this, however, is that it can cause a lot of activity in the server log file, which of course is not good for performance either. PostgreSQL 12 will probably come with a solution to that.

The default value of log_min_duration_statement is “-1”, which means disabled:

postgres=# select version();
                                                  version                                                   
------------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

postgres=# show log_min_duration_statement;
 log_min_duration_statement 
----------------------------
 -1
(1 row)

Let’s set this to 5ms and then generate some queries:

postgres=# alter system set log_min_duration_statement = '5ms';
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

postgres=# show log_min_duration_statement;
 log_min_duration_statement 
----------------------------
 5ms
(1 row)

When we execute some queries that take longer than 5ms they should get reported in the log file:

postgres=# select 'select pg_sleep(0.5)' from generate_series(1,10); \gexec
       ?column?       
----------------------
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
(10 rows)

 pg_sleep 
----------
 
(1 row)

...

Looking at the log file there are exactly 10 entries:

2018-11-28 03:13:32.240 CET - 20 - 28978 - [local] - postgres@postgres LOG:  duration: 529.825 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:32.770 CET - 21 - 28978 - [local] - postgres@postgres LOG:  duration: 529.904 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:33.273 CET - 22 - 28978 - [local] - postgres@postgres LOG:  duration: 501.729 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:33.783 CET - 23 - 28978 - [local] - postgres@postgres LOG:  duration: 509.532 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:34.305 CET - 24 - 28978 - [local] - postgres@postgres LOG:  duration: 520.946 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:34.809 CET - 25 - 28978 - [local] - postgres@postgres LOG:  duration: 502.624 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:35.315 CET - 26 - 28978 - [local] - postgres@postgres LOG:  duration: 505.043 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:35.817 CET - 27 - 28978 - [local] - postgres@postgres LOG:  duration: 502.034 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:36.321 CET - 28 - 28978 - [local] - postgres@postgres LOG:  duration: 503.417 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:36.824 CET - 29 - 28978 - [local] - postgres@postgres LOG:  duration: 501.790 ms  statement: select pg_sleep(0.5)

Imagine there are thousands of statements that exceed log_min_duration_statement: This will make the log file grow quite fast and performance will suffer because writing the log file takes resources as well. PostgreSQL 12 introduces a new parameter “log_statement_sample_rate” which helps with this:

postgres=# show log_statement_sample_rate;
 log_statement_sample_rate 
---------------------------
 1
(1 row)

The default is 1, which means all statements that exceed the log_min_duration_statement threshold are logged. When that is too much we can now go and say: we want only 10% of these statements to be logged:

postgres=# alter system set log_statement_sample_rate=0.1;
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

postgres=# show log_statement_sample_rate;
 log_statement_sample_rate 
---------------------------
 0.1
(1 row)

Doing the same test as above again:

postgres=# select 'select pg_sleep(0.5)' from generate_series(1,10); \gexec
       ?column?       
----------------------
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
(10 rows)

 pg_sleep 
----------
 
(1 row)

...

… we only get one entry in the log file, instead of 10:

2018-11-28 03:20:36.218 CET - 34 - 28978 - [local] - postgres@postgres LOG:  duration: 504.845 ms  statement: select pg_sleep(0.5)

Nice. Btw: The commit is here.


AWS re:invent 2018 – Day 4

Fri, 2018-11-30 11:52

Thursday morning it was time for the keynote from Dr. Werner Vogels.

A big part of the keynote was about databases again. Maybe AWS people know that there are still lots of DBAs out there! There was again a focus on Amazon Aurora (relational), DynamoDB (NoSQL) and AWS Redshift, which are replacing the Oracle databases Amazon was using. There was also a focus on how S3 works to provide the best durability for the data. I’ve seen several sessions this week mentioning S3 as the object storage for building a data lake.

I did not mention it for yesterday’s keynote, but there are some guests coming on stage to speak about how they are using AWS in their businesses. Today it was Ethan Kaplan from Fender. He spoke about the serverless movement that is used to power Fender’s new guitar learning application. This application was born because, even if there are still lots of people buying new guitars, most of them quit playing after 6 months. On the other hand, people who continue playing buy 8-10 guitars, so it was important to develop a solution that helps people keep playing.

The important guest for me was Yuri Misnik, a representative of National Australia Bank, speaking about the Cloud-first strategy within the bank. Banking is traditionally associated with legacy technologies and regulation that works against moving to the Cloud. But they are targeting 1/3 of their applications running in AWS by 2020 and they are moving fast.

There were fewer new launches than in Andy’s keynote, but there are a few to mention:

  • Redshift concurrency scaling: allows Redshift to automatically increase capacity to avoid waits when the number of queries increases. The very nice part is the burst credit: for every 24 hours your cluster is running you get 1 hour of credit for this concurrent usage, making it almost transparent and free for most customers, according to Werner
  • Support for new languages in AWS Lambda and, most importantly, support for custom runtimes to allow customers to run almost any language they like
  • AWS Lambda layers: you can now share a library between several Lambda functions without having to copy the code again (a quick CLI sketch follows this list)
  • AWS Well-Architected Tool: based on the AWS Well-Architected Framework, it allows reviewing an architecture without needing a meeting with AWS or an AWS partner
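As a quick illustration of the Lambda layers announcement (my own sketch; the layer name, function name and ARN are made up), publishing a shared library as a layer and attaching it to a function looks roughly like this with the AWS CLI:

# package the shared library and publish it as a layer
aws lambda publish-layer-version \
  --layer-name shared-libs \
  --zip-file fileb://shared-libs.zip \
  --compatible-runtimes python3.7

# attach the returned layer version ARN to an existing function
aws lambda update-function-configuration \
  --function-name my-function \
  --layers arn:aws:lambda:eu-west-1:123456789012:layer:shared-libs:1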

Again, the keynote consumed a big part of the day, but I was able to attend 2 more sessions. The first was Kyle Hailey presenting Performance Insights. It’s a performance tool for Amazon RDS working with Aurora, PostgreSQL, MySQL and Oracle. SQL Server is planned and will be next on the list. It’s very promising, even if it’s currently missing SQL execution plans. The strength of the tool is that it provides a consistent GUI across different engines.

AWS Performance Insights

As an Oracle DBA, I have the feeling that the tool is inspired a lot by what was done quite some time ago when ASH was implemented directly in the database. The same AAS metric drives the analysis and it is based on a 1-second sampling interval. That is good news: most DBAs who are already working this way with Oracle will be able to quickly understand their load when moving to Amazon RDS. The fun part was a demo from the company Slalom, which developed an Alexa connector for Performance Insights: now you can speak to your database.

I finished the day with a workshop about CI/CD: Continuous Integration / Continuous Deployment or Delivery, depending on the level you reach. Most often we prefer, or we have to (validated environments), have an approval step for moving to production, so it is only continuous delivery. I cannot give all the details, but the workshop was well organized, with people able to answer questions and infrastructure running without issues. It was an occasion to work a little bit with the OpsWorks service for a managed Puppet server and to discover the CodeBuild / CodePipeline services.

On Thursday night it’s the re:Play party, where nobody tells people that they can’t play anymore because they are too old :-) There will be a lot of activities besides 2 different stages: one for live music and one with DJs. But on Friday morning there will be the last sessions and it will still be interesting, so stay tuned.


Extend an Oracle Access Manager 12.2.1.3 with User Messaging Service

Fri, 2018-11-30 07:43

After upgrading an Oracle Access Manager from 11.1.2.3 to 12.2.1.3, one of my customers asked me to configure the Forget Password feature. This feature requires the User Messaging Service, which was part of the SOA software in previous versions. Now it is part of the Oracle Access Manager software, and the WebLogic Domain can be extended with the UMS services.

Extend WebLogic Domain

[oracle@vm03 ~]$ cd /u01/app/fmw_oim_12213/
[oracle@vm03 fmw_oim_12213]$ ls
cfgtoollogs coherence domain-registry.xml em idm inventory OPatch oracle_common oraInst.loc oui user_projects wlserver
[oracle@vm03 fmw_oim_12213]$ cd oracle_common/common/bin/
[oracle@vm03 bin]$ ./config.sh

In the next wizard, we select the Oracle User Messaging Service Basic template, a quick-start template that defines the managed server, ums_server1, and targets all UMS components to that server.
We keep the file-based persistence store, but it should be placed on a shared drive if a UMS WebLogic cluster is planned.
The OPSS was already configured in the OAM WebLogic Domain, so there is nothing to change in the next two wizards.
In the next wizards we get the JDBC connections from the RCU configuration schema.
The UMS repository schema was already installed. The JDBC connections showed up correctly and the JDBC connection test went smoothly.
We had no keystore configured.
We then had to go through the topology to configure the UMS server and the UMS cluster.
In the next wizard, the new ums_server1 appeared, listening on port 7003. This is the default for all new WebLogic Servers in the configuration wizard. It is best to change it to the default SOA port, 8001 (this can help avoid trouble with hard-coded ports in some applications).
As this OAM WebLogic Domain was configured with clusters, I had to create a UMS cluster named ums_cluster.
I used neither server templates nor dynamic servers, so nothing changed in the next two wizards.
I assigned ums_server1 to the ums_cluster by moving it from the right panel to the left panel under the ums_cluster.
The Coherence cluster and the machine already existed in the OAM WebLogic Domain; no changes were made in the next two wizards.
ums_server1 was attached to the already existing machine by moving it from the left panel to the right panel under that machine.
The deployments and the services were not changed, so no action was taken in the next two wizards.
We kept the persistence file store file name as proposed by default.
We checked the summary and clicked the Update button to extend the OAM WebLogic Domain.
All went smoothly without errors.
After the domain had been extended, it was started. The first time, ums_server1 had to be started manually using the startManagedWebLogic.sh script.

Configure the UMS email driver.

We decided to use email to send the One Time Pin token to the users requesting a password reset.

This configuration has to be done through the Fusion Middleware Enterprise Manager Console.
In the Enterprise Manager console, select usermessagingserver (ums_server1).
At the bottom of the page, look for the Email driver and click the Configure button. The next page allows you to create a new Email driver and then to configure it.
Here are the parameters that need to be set for the UMS to be able to send mails:

  • Name: MailServer
  • Capability: Send
  • Sender Address:
    Check ‘Use Default Sender Address’ and set the default email address: EMAIL:admin@dbi-workshop.com
  • Outgoing Mail Server: to be filled with your email server name
Usage

Using the Oracle Access Manager One Time Pin REST API, messages are pushed to the User Messaging Service JMS queue and sent by mail using the email driver.

My next blog will explain how to configure the Oracle Access Manager Forget Password feature using the One Time Pin sent by mail.


AWS re:invent 2018 – Day 3

Thu, 2018-11-29 13:11

Wednesday morning was the big day of the AWS re:Invent conference. The day started with a 3-hour keynote from Andy Jassy, CEO of AWS. The keynote covered a lot of different topics: databases, blockchain, machine learning… As in most keynotes, it started with some numbers: AWS is still a strong leader in the public Cloud if you look at market share, and it was a moment to tackle some well-known competitors a bit.

AWS market share

Then it is also the occasion for lots of announcements and new service launches. If I have to make a selection, here are a few:

  • AWS Timestream: a new time-series database service, designed for IoT data that may come from sensors
  • AWS Quantum Ledger Database (QLDB): a new database service to answer needs where customers were looking at blockchain
  • AWS Glacier Deep Archive: the goal is to forget about backup to tape, with a very competitive price of $1 per TB ($0.00099 per GB)
  • AWS Security Hub: to manage security and compliance across an AWS environment

Andy Jassy also announced a competition for machine learning developers: the AWS DeepRacer League. The goal is to create an autonomous driving model so that the AWS DeepRacer (a small electric car) finishes a lap of a test track as fast as possible. It is reinforcement learning with trial and error, where the model improves itself using the feedback from previous iterations. For this year, developers have one day and the 3 best will compete in a final before Thursday morning’s keynote.

After the keynote, I continued to pick sessions around databases and AWS introductory sessions about the infrastructure.

The one about matching the workload with the right database was very interesting. Basically, Rick Houlihan explained that you have to know your workload before picking the engine you will use. Most often, the engine is picked because it is already licensed or because there is already an operational team that knows how to operate it. It also happens that some people go for the latest trend because of the buzz. According to him, relational databases do not scale very well and are optimized for storage. It is true that if you fully normalize, data is not supposed to be duplicated. On the other hand, NoSQL databases can scale a lot but are not well suited to ad-hoc queries.

Databases workload

On the architecture side, I focused again on VPC, and there are some changes in how you can build your network to work on AWS alongside your on-premises datacenter.

VPC sharing now allows a VPC to span different AWS accounts within the same organization. It means you can have several billing accounts, with the VPC managed by a VPC owner who is responsible for the routing tables and so on. It can simplify maintenance and help network engineers.

The other big point I noted is the new transit gateway (TGW). A transit gateway can be attached to several VPCs and to AWS VPN to link with your datacenter. This is a central hub where you can maintain routing tables for inter-VPC traffic without maintaining a virtual private gateway per VPC.
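For illustration, a hedged sketch of how this looks with the AWS CLI (the IDs are placeholders): you create the transit gateway once and then attach each VPC to it, instead of maintaining a virtual private gateway per VPC.

# create the transit gateway (the central hub)
aws ec2 create-transit-gateway --description "central routing hub"

# attach a VPC to it (one attachment per VPC, with one subnet per availability zone)
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id tgw-0123456789abcdef0 \
  --vpc-id vpc-0aaa1111bbb2222cc \
  --subnet-ids subnet-0123abcd4567efab8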

VPC before TGW vs. VPC with TGW

On Thursday, there will be another keynote from Dr. Werner Vogels, CTO of Amazon.com. Let’s see if there will be other announcements.


AWS re:invent 2018 – Day 2

Wed, 2018-11-28 14:43

Second day in Las Vegas for the AWS re:Invent conference. It was time to travel a little bit around the campus to attend some sessions at the Aria. The planning was still around databases and a bit of AWS knowledge.

The shuttle service worked perfectly in both directions, with a reasonable travel time between the 2 hotels. But with such travel times, you can’t always be in all the sessions you would like to see.

I started with a session about DevOps strategy where the speaker, Ajit Zadgaonkar, explained some rules to succeed with a DevOps strategy. Even if you start small, moving to DevOps is a movement the whole company should be aware of. It is about teaching not only within the DevOps team but also letting other teams and the business know about your work.

Then I saw 2 different interesting sessions about Aurora running on Amazon RDS. Aurora runs on the same platform as the other proposed engines (Oracle, SQL Server, PostgreSQL, MySQL and MariaDB). It means Aurora is fully managed by AWS.

The interesting part is that Aurora supports 2 different engines, MySQL or Postgres, and in both cases AWS claims that the performance is a lot better in Aurora than in the community edition because it was designed for the Cloud. One of the 2 sessions was a deep dive focusing on the Postgres flavor, and the storage layer of Aurora is totally different.

AWS Aurora Postgres storage

AWS uses shared storage across a region (like Frankfurt) and “replicates” pages to 6 different locations. According to them, this provides great resilience/durability/availability. To prevent a write performance bottleneck, a write is considered valid once 4 out of the 6 blocks have been written. In addition, Aurora is kind of redo-log based and does not send full pages/blocks to the storage, which greatly reduces the amount of written data. Below is a slide of a benchmark using pgbench.

Aurora Postgres benchmark

To continue my journey, I also went to basic sessions about the AWS infrastructure itself, and it is interesting to note that they think in advance about how to power their datacenters: 50% of the energy used by AWS datacenters comes from renewable sources like wind or solar. I followed this session remotely thanks to the overflow areas, where you can attend a session currently ongoing in another hotel. You get a video stream of the session with the slides and a headset for the sound.

Invent overflow session

There are also 5 new regions planned in the near future: Milan, Bahrain, Stockholm, Hong Kong and Cape Town, including 2 new locations in Europe (Milan and Stockholm).

Even if there were already some announcements, on Wednesday morning we will have the keynote from Andy Jassy, CEO of AWS. I am looking forward to this keynote.


No more recovery.conf in PostgreSQL 12

Wed, 2018-11-28 13:41

Traditionally, everything related to recovery in PostgreSQL goes into recovery.conf. This is not only true for recovery settings but also for turning an instance into a replica which follows a master. Some days ago this commit landed in the PostgreSQL git repository. What that effectively means is that there will be no more recovery.conf starting with PostgreSQL 12. How does that work then? Let’s do some tests.

Obviously you need the latest development version of PostgreSQL (if you are not sure on how to do that check here and here):

postgres@pgbox:/home/postgres/ [PGDEV] psql -X postgres
psql (12devel)
Type "help" for help.

postgres=# select version();
                                                  version                                                   
------------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

Let’s look at the replica case first. When you do a pg_basebackup you can tell it to write a recovery.conf file (at least you could up to PostgreSQL 11). So what changed here?

postgres@pgbox:/home/postgres/ [PGDEV] pg_basebackup --help | grep -A 1 recovery
  -R, --write-recovery-conf
                         write configuration for replication

When you compare that to a version before 12 you’ll notice the difference in wording:

postgres@pgbox:/home/postgres/ [PGDEV] /u01/app/postgres/product/10/db_4/bin/pg_basebackup --help | grep -A 1 recovery
  -R, --write-recovery-conf
                         write recovery.conf for replication

The word “recovery.conf” is gone and it is a more general statement about the replication configuration now. So what does pg_basebackup do in PostgreSQL 12 when we ask it to write the configuration for recovery?

postgres@pgbox:/home/postgres/ [PGDEV] pg_basebackup -R -D /var/tmp/pg12s/

We do not have a recovery.conf file:

postgres@pgbox:/home/postgres/ [PGDEV] ls -la /var/tmp/pg12s/
total 64
drwxr-xr-x. 20 postgres postgres  4096 Nov 27 20:19 .
drwxrwxrwt.  6 root     root       256 Nov 27 20:19 ..
-rw-------.  1 postgres postgres   224 Nov 27 20:19 backup_label
drwx------.  5 postgres postgres    41 Nov 27 20:19 base
-rw-------.  1 postgres postgres    33 Nov 27 20:19 current_logfiles
drwx------.  2 postgres postgres  4096 Nov 27 20:19 global
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_commit_ts
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_dynshmem
-rw-------.  1 postgres postgres  4513 Nov 27 20:19 pg_hba.conf
-rw-------.  1 postgres postgres  1636 Nov 27 20:19 pg_ident.conf
drwxr-xr-x.  2 postgres postgres    32 Nov 27 20:19 pg_log
drwx------.  4 postgres postgres    68 Nov 27 20:19 pg_logical
drwx------.  4 postgres postgres    36 Nov 27 20:19 pg_multixact
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_notify
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_replslot
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_serial
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_snapshots
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_stat
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_stat_tmp
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_subtrans
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_tblspc
drwx------.  2 postgres postgres     6 Nov 27 20:19 pg_twophase
-rw-------.  1 postgres postgres     3 Nov 27 20:19 PG_VERSION
drwx------.  3 postgres postgres    60 Nov 27 20:19 pg_wal
drwx------.  2 postgres postgres    18 Nov 27 20:19 pg_xact
-rw-------.  1 postgres postgres   390 Nov 27 20:19 postgresql.auto.conf
-rw-------.  1 postgres postgres 26000 Nov 27 20:19 postgresql.conf
-rw-------.  1 postgres postgres     0 Nov 27 20:19 standby.signal

Replica-related configuration is appended to postgresql.auto.conf:

postgres@pgbox:/home/postgres/ [PGDEV] cat /var/tmp/pg12s/postgresql.auto.conf 
# Do not edit this file manually!
# It will be overwritten by the ALTER SYSTEM command.
logging_collector = 'on'
log_truncate_on_rotation = 'on'
log_filename = 'postgresql-%a.log'
log_line_prefix = '%m - %l - %p - %h - %u@%d '
log_directory = 'pg_log'
primary_conninfo = 'user=postgres passfile=''/home/postgres/.pgpass'' port=5433 sslmode=prefer sslcompression=0 target_session_attrs=any'

But what about timeline and all the other settings? All these have been merged into the normal postgresql.[auto.]conf file as well:

postgres=# select name,setting from pg_settings where name like '%recovery%';
           name            | setting 
---------------------------+---------
 recovery_end_command      | 
 recovery_min_apply_delay  | 0
 recovery_target           | 
 recovery_target_action    | pause
 recovery_target_inclusive | on
 recovery_target_lsn       | 
 recovery_target_name      | 
 recovery_target_time      | 
 recovery_target_timeline  | 
 recovery_target_xid       | 
 trace_recovery_messages   | log
(11 rows)

So all the settings can now be set in one file. The remaining question is: how does the instance then know when it needs to go into recovery? Before PostgreSQL 12 the presence of the recovery.conf file told the instance to go into recovery. Now that the file is gone there must be a new mechanism, and that is the “standby.signal” file in the case of a replica:

postgres@pgbox:/home/postgres/ [PGDEV] cat /var/tmp/pg12s/standby.signal 
postgres@pgbox:/home/postgres/ [PGDEV] 

That file is empty and just tells PostgreSQL to go into recovery and then process the recovery-related parameters, which are now in postgresql.[auto.]conf. The same is true when a recovery is requested: the signal file in that case is “recovery.signal”.
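To make that concrete, here is a small sketch (the paths and the target time are just examples) of a point-in-time recovery setup without a recovery.conf: the recovery parameters go into postgresql.auto.conf and an empty recovery.signal file triggers the recovery.

PGDATA=/var/tmp/pg12_restore        # a restored base backup (example path)
cat >> $PGDATA/postgresql.auto.conf <<'EOF'
restore_command = 'cp /var/tmp/archived_wal/%f %p'
recovery_target_time = '2018-11-27 20:00:00'
recovery_target_action = 'promote'
EOF
touch $PGDATA/recovery.signal       # replaces the old "presence of recovery.conf"
pg_ctl -D $PGDATA start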

All in all, that means there is one configuration file less to take care of, and that is good. The question is how fast all the third-party tools will catch up with this change.


No Fear of Container Technology (DOAG 2018)

Wed, 2018-11-28 01:46

I have been working in the IT industry for 30 years and have dealt with Oracle RDBMS systems again and again. For almost 4 years now, as a consultant at dbi services, I have been in close contact with Oracle databases, hence my visit to DOAG 2018.

With great interest I traveled to DOAG in Nuremberg, planning to attend various sessions on the topics of OpenShift and containers.

Why OpenShift? For some time now we have been seeing projects (PoCs) in this area at our customers. Red Hat offers a complete solution that includes all the components, so getting started is considerably faster.

Interestingly, it is not only euphoria that can be felt; there are also critical voices on this topic. But it reminds me of the time when hardware virtualization came up. Back then critical questions were asked as well: Will this work? It is far too complex! This technology is only suitable for service providers, cloud providers, etc.

Reason enough to listen to first experience reports at DOAG and to attend talks on this topic.
 

So what is actually changing here?

After hardware virtualization, the next virtualization step follows: Docker, i.e. the use of containers.
The difference between hardware virtualization and Docker is best shown with a schematic diagram.

Schematic representation of hardware virtualization

Server virtualization
 

The difference in architecture between hardware virtualization and containers


The biggest difference: with hardware virtualization, every virtual server has its own complete operating system. With the container architecture this part largely disappears, which makes the individual container significantly smaller and, above all, more portable. Fewer resources are needed on the infrastructure, or significantly more containers can be run on the same infrastructure.
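A tiny example to illustrate the point (any recent Docker host will do; nginx:alpine is just a conveniently small image): the container ships only the application and its libraries, not a complete operating system, so it starts in seconds and its footprint stays small.

docker run -d --name web -p 8080:80 nginx:alpine   # the image is only a few MB
docker ps --size                                   # shows the small footprint of the running container
docker stop web && docker rm web                   # clean up again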
 

OpenShift, the Red Hat solution for Docker, has the following architecture:

OpenShift architecture overview
 

What can we expect from OpenShift, and what do we definitely have to keep in mind?

– The next step in virtualization -> containers
– A complex infrastructure; with Red Hat everything comes from a single source (incl. Kubernetes)
– The start into the container world must be very well prepared
– The technology is still very young; quite a few things will still change
– If possible, run a PoC, and do not wait too long
– Concepts and processes are absolutely required
 

My conclusion

My first visit to DOAG provided me with very valuable information and insights on the two topics OpenShift and containers, in particular the solution from Red Hat; this is the technology I will be looking into over the coming months. I am sure that we are once again standing at a very interesting technological turning point: the start into container infrastructures with complete solutions such as OpenShift from Red Hat. However, despite all the euphoria, the start into this technology should be planned and controlled. In particular, experience should be gathered in a PoC. The step of running an OpenShift infrastructure in a productive environment must, based on that experience, be well planned and controlled in order not to run into the same problems as back then with hardware virtualization!

Container chaos

Productive operation requires security, stability and continuity, and all components should also be kept up to date. Monitoring and backup/restore are also topics that have to be addressed before going live. This technology certainly enables more speed, but it needs rules and processes so that after a certain time the container world does not suddenly look like the picture above!


AWS re:invent 2018 – Day 1

Tue, 2018-11-27 10:20

Yesterday was my first day at the AWS re:Invent conference. The venue is quite impressive: the conference is split between 6 hotels where you can attend different types of sessions, including chalk talks, keynotes, hands-on labs and workshops. For my first day, I stayed in the same area in The Venetian to make it easy.


The walking distance between some places is quite big, so you have to plan the day carefully to be able to see what you want to see. Fortunately there is a shuttle service, and I’ll move a bit more between hotels tomorrow. You also need to reserve your seat and be there in advance to be sure to get into the room.

In my case, I wanted to attend a chalk talk about Oracle Licensing in the Cloud to start the week. As I was not able to reserve a seat, I had to wait in the walk-up line. The session was full: Oracle still interests lots of people, and licensing is still a concern, besides performance, for lots of customers when they start planning a move to the public cloud.

I have been working with AWS services for a bit more than 1 year at a customer, but there is still a lot to learn and understand about AWS. That’s why I also attended an introductory session about VPC (Virtual Private Cloud) to better understand the network options when going to AWS. To keep it simple, a VPC allows you to have a private network configured as you wish inside AWS. You control the IP range you would like to use and you can configure the routing tables and so on.

I also tried to attend a workshop about running Oracle on Amazon RDS, the AWS managed database service, and especially how to migrate from Oracle to the Amazon Aurora database using its PostgreSQL compatibility. The goal was to use 2 AWS products to run the migration: the AWS Schema Conversion Tool and the AWS Database Migration Service. Unfortunately, some issues with the WiFi constantly changing the IP, and a limitation on my brand new AWS account that required additional checks from Amazon, prevented me from getting to the end of the workshop. But I got some credits to try it by myself a bit later, so I’ll most probably try the Schema Conversion Tool.

Some DBAs may worry about the managed database services or Oracle’s announcements about the autonomous database, but I agree with the slide below from the AWS speaker during the workshop. I personally think that DBAs won’t disappear. Data itself and applications will still be around for quite a long time; the job may evolve and we will spend more time on the application/data side than before.

DBA role in the Cloud

Today is another day, let’s forget a bit about the DBA part and try to see more about DevOps…


SQL Server 2019 CTP 2.1 – A replacement of DBCC PAGE command?

Tue, 2018-11-27 07:08

Did you ever use the famous DBCC PAGE command? Folks who are interested in digging further into SQL Server storage have already been using it for a while. We also use it during our SQL Server performance workshop, by the way. But the usage of such a command may sometimes go beyond that, and it may be used in some troubleshooting scenarios. For instance, last week I had to investigate a locking contention scenario where I had to figure out which objects were involved, with their related pages (resource type) as the only way to identify them. SQL Server 2019 provides the sys.dm_db_page_info system function, which can be useful in this kind of scenario.


To simulate locks, let’s start by updating some rows in the dbo.bigTransactionHistory table as follows:

USE AdventureWorks_dbi;
GO

BEGIN TRAN;

UPDATE TOP (1000) dbo.bigTransactionHistory
SET Quantity = Quantity + 1

 

Now let’s take a look at sys.dm_tran_locks to get a picture of the locks held by the above query:

SELECT 
	resource_type,
	COUNT(*) AS nb_locks
FROM 
	sys.dm_tran_locks AS tl
WHERE 
	tl.request_session_id = 52
GROUP BY
	resource_type

 


Referring to my customer scenario, let’s say I wanted to investigate the locks and the objects involved. For the simplicity of the demo I focused only on the sys.dm_tran_locks DMV, but generally speaking you would probably add other ones such as sys.dm_exec_requests, sys.dm_exec_sessions, etc.

SELECT 
	tl.resource_database_id,
	SUBSTRING(tl.resource_description, 0, CHARINDEX(':', tl.resource_description)) AS file_id,
	SUBSTRING(tl.resource_description, CHARINDEX(':', tl.resource_description) + 1, LEN(tl.resource_description)) AS page_id
FROM 
	sys.dm_tran_locks AS tl
WHERE 
	tl.request_session_id = 52
	AND tl.resource_type = 'PAGE'

 


The sys.dm_tran_locks DMV contains the resource_description column, which provides contextual information about the resource locked by my query. The resource_description column value reports [file_id:page_id] when the resource_type is PAGE.

SQL Server 2019 will probably send the DBCC PAGE command back to the stone age for some tasks, but let’s start with this old command:

DBCC PAGE (5, 1, 403636, 3) WITH TABLERESULTS;

 


DBCC PAGE did the job and provides an output that includes the page header section where the Metadata: ObjectId is stored. We may then use it with the OBJECT_NAME() function to get the corresponding table name.

SELECT OBJECT_NAME(695673526)

 


But using this command may be slightly controversial because it is still an undocumented command, and there is no need to explain here how dangerous it can be to use it in production. Honestly, I never encountered a situation where DBCC PAGE was an issue, but I cannot provide a full guarantee and it is obviously at your own risk. In addition, applying DBCC PAGE to all rows returned by my previous query can be a little bit tricky, and this is where the new sys.dm_db_page_info comes into play.

;WITH tran_locks
AS
(
	SELECT 
		tl.resource_database_id,
		SUBSTRING(tl.resource_description, 0, CHARINDEX(':', tl.resource_description)) AS file_id,
		SUBSTRING(tl.resource_description, CHARINDEX(':', tl.resource_description) + 1, LEN(tl.resource_description)) AS page_id
	FROM 
		sys.dm_tran_locks AS tl
	WHERE 
		tl.request_session_id = 52
		AND tl.resource_type = 'PAGE'
)
SELECT 
	OBJECT_NAME(page_info.object_id) AS table_name,
	page_info.*
FROM 
	tran_locks AS t
CROSS APPLY 
	sys.dm_db_page_info(t.resource_database_id, t.file_id, t.page_id,DEFAULT) AS page_info

 

This system function provides plenty of information, mainly coming from the page header, in tabular format, and makes my previous requirement easier to address, as shown below.

blog 148 - 5 - sys.dm_db_page_info

The good news is that this function is officially documented, but un/fortunately (depending on your point of view) you will still have to rely on DBCC PAGE for deep-dive studies.
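
For a quick one-off check, the documented function can also be called directly with literal values, for example with the file and page used in the DBCC PAGE example above. A small sketch, assuming database id 5 is still AdventureWorks_dbi ('LIMITED' and 'DETAILED' are the two documented modes):

SELECT 
	OBJECT_NAME(page_info.object_id) AS table_name,
	page_info.page_type_desc,
	page_info.slot_count
FROM 
	sys.dm_db_page_info(5, 1, 403636, 'DETAILED') AS page_info;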

Happy troubleshooting!

 

 

Cet article SQL Server 2019 CTP 2.1 – A replacement of DBCC PAGE command? est apparu en premier sur Blog dbi services.

Strange behavior when patching GI/ASM

Mon, 2018-11-26 12:45

I tried to apply a patch to my 18.3.0 GI/ASM two-node cluster on RHEL 7.5.
The first node worked fine, but the second node always ended up with an error…

Environment:
Server Node1: dbserver01
Server Node2: dbserver02
Oracle Version: 18.3.0 with PSU OCT 2018 ==> 28660077
Patch to be installed: 28655784 (RU 18.4.0.0)

First node (dbserver01)
Everything fine:

cd ${ORACLE_HOME}/OPatch
sudo ./opatchauto apply /tmp/28655784/
...
Successful

Secondary node (dbserver02)
Same command but different output:

cd ${ORACLE_HOME}/OPatch
sudo ./opatchauto apply /tmp/28655784/
...
Remote command execution failed due to No ECDSA host key is known for dbserver01 and you have requested strict checking.
Host key verification failed.
Command output:
OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.

After playing around with the keys I found out that the host keys also had to be exchanged for root.
So I connected as root and made an ssh connection from dbserver01 to dbserver02 and from dbserver02 to dbserver01.

After I exchanged the host keys, the error message changed:

Remote command execution failed due to Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Command output:
OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.

So I investigated the log file a little further, and the statement producing the error was:

/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 dbserver01 \
/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 dbserver01 \
/u00/app/oracle/product/18.3.0/dbhome_1//perl/bin/perl \
/u00/app/oracle/product/18.3.0/dbhome_1/OPatch/auto/database/bin/RemoteHostExecutor.pl \
-GRID_HOME=/u00/app/oracle/product/18.3.0/grid_1 \
-OBJECTLOC=/u00/app/oracle/product/18.3.0/dbhome_1//cfgtoollogs/opatchautodb/hostdata.obj \
-CRS_ACTION=get_all_homes -CLUSTERNODES=dbserver01,dbserver02,dbserver02 \
-JVM_HANDLER=oracle/dbsysmodel/driver/sdk/productdriver/remote/RemoteOperationHelper

Soooooo: dbserver02 starts an ssh session to dbserver01 and from there an additional session to dbserver01 (itself).
I don't know why, but it is as it is… after I did a key exchange from dbserver01 (root) to dbserver01 (root), the patching worked fine.
At the moment I cannot remember ever having had to do a key exchange for the root user onto the same host.
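
For reference, here is a minimal sketch of what this manual key exchange boils down to, assuming default key locations and the hostnames used in this example (to be run as root; the ssh-keyscan part is needed on both nodes):

# make sure root has a key pair (skip if one already exists)
[ -f /root/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa

# trust the ECDSA host keys of both nodes, including the node itself,
# so that ssh with StrictHostKeyChecking=yes does not abort
ssh-keyscan -t ecdsa dbserver01 dbserver02 >> /root/.ssh/known_hosts

# allow root to ssh to itself without a password
# (the opatchauto call chain shown above does exactly that)
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys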

Did you get the same problem, or do you know a better way to do this? Write me a comment!

Cet article Strange behavior when patching GI/ASM est apparu en premier sur Blog dbi services.

DOAG 2018: OVM or KVM on ODA?

Mon, 2018-11-26 03:51

DOAG 2018 is over; for me, the most important topics were in the field of licensing. The uncertainty among users is great, so let's take virtualization on the ODA as an example:

The starting point: the customer uses Oracle Enterprise Edition, has 2 CPU licenses and uses Data Guard as disaster protection on 2 ODA X7-2M systems. He wants to virtualize, and he also has 2 application servers that are to be virtualized.

Sure, if I use the HA variant of the ODA or Standard Edition, this does not concern me: there, OVM is used as the hypervisor, which allows hard partitioning. In a Virtualized Deployment the database system (ODA_BASE) automatically gets its own CPU pool; additional VMs can be distributed across the remaining CPUs.

On the small and medium models, only KVM is available as a hypervisor. This has some limitations: on the one hand there is no Virtualized Deployment for the ODA 2S/2M systems, on the other hand the operation of databases as KVM guests is not supported. This means that the ODA must be set up as a bare metal system, while the application servers are virtualized in KVM.

What does that mean for the customer described above? We set up the systems in bare metal mode, activate 2 cores on each system, set up the database and configure Data Guard between primary and standby. This costs the customer 2 EE CPU licenses (about $95k per price list).

Now he wants to virtualize his 2 application servers and notes that 4 cores are needed per application server. Of the 36 cores (per system), only 2 are activated, so he activates 4 more cores (odacli update-cpucore -c 6) on both systems and installs the VMs.
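
As a side note, the core activation itself is a one-liner; a minimal sketch, assuming the describe-cpucore subcommand is available on your odacli release:

# check the currently enabled cores
odacli describe-cpucore

# enable 6 cores in total (2 for the databases + 4 for the application VM)
odacli update-cpucore -c 6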

But: since KVM is not recognized as hard partitioning, all activated cores have to be licensed, even those used only by the application VMs. The customer's Oracle EE license requirement therefore also changes, namely from 1 EE CPU to 3 CPUs per ODA, so overall he now needs 6 CPU licenses (about $285k according to the price list)!

Oracle now propagates KVM as the virtualization technology of choice for the future. However, this will not work without hard partitioning support under KVM or support for running databases in KVM guests.

Tammy Bednar (Oracle's Product Manager for the Oracle Database Appliance) announced in her presentation “KVM or OVM? Use Cases for Solution in a Box” that solutions to this problem are expected by mid-2019:

– Oracle databases and applications should be supported as KVM guests
– Support for hard partitioning
– Windows guests under KVM
– Tooling (odacli / Web Console) should support the deployment of KVM guests
– A “privileged” VM (similar to the ODA_BASE on the HA models) for the databases should be provided
– Automated migration of OVM guests to KVM

All these measures would certainly make the “small” systems much more attractive for consolidation. They would also help to simplify the “license jungle” a bit and give customers a bit more certainty. I am curious to see what will come.

Cet article DOAG 2018: OVM or KVM on ODA? est apparu en premier sur Blog dbi services.

AWS re:invent 2018 warm up

Mon, 2018-11-26 03:07

The cloud is now part of our job, so we have to take a deeper look at the available services in order to understand them and take best advantage of them. The annual AWS conference re:invent started tonight at The Venetian in Las Vegas and will last until Friday.

AWS logo

Today was a bit special because there were no sessions yet, but instead I was able to take part in a ride to Red Rock Canyon on a Harley-Davidson motorbike.

It's a 56-mile ride and you can enjoy beautiful landscapes very different from the city and the lights of the casinos. We were a small group of around 13 bikes, and even if it was a bit cold it was a really nice tour. I really recommend that people in Vegas escape the city for a few hours to discover places like Red Rock or the Valley of Fire.

Harley Davidson ride to Red Rock Canyon

 

Then the conference opened with Midnight Madness and an attempt to beat the world record for ensemble air drumming. I don't know yet if we achieved the goal, but I tried to help and took part in the challenge.

invent Midnight Madness

The first launch of the week also happened this evening: a new service called AWS RoboMaker. You can now use the AWS cloud to develop robotics applications and combine it with other services like Lex or Polly to allow your robot to understand voice commands and answer them, for example.

Tomorrow the real thing begins with hands-on labs and some sessions, so stay tuned.

Cet article AWS re:invent 2018 warm up est apparu en premier sur Blog dbi services.

Flashback to the DOAG conference 2018

Sat, 2018-11-24 14:40

Every year since the company's creation in 2010, dbi services has attended the DOAG conference in Nürnberg. Since 2013 we have even had a booth.

The primary goal of participating in the DOAG Conference is to get an overview of the main trends in the Oracle business. Furthermore, the conference and our booth allow us to welcome our Swiss and German customers and thank them for their trust. They're always pleased to receive some nice Swiss chocolate produced in Delémont (Switzerland), the city of our headquarters.

But those are not the only reasons why we attend this event. The DOAG conference is also a way to promote our expertise through our speakers and to thank our consultants for their work throughout the year. We also consider the conference a way to train people and improve their skills.

Finally, some nice social evenings take place: first of all the Swiss Oracle User Group (SOUG) “Schweizer Abend” on Tuesday evening, and secondly the “DOAG party” on Wednesday evening. As dbi services is active in the Swiss Oracle User Group, we always keep a deep link to the Oracle community.

As Chief Sales Officer, I tried to get an overview of the main technical “Oracle trends” through the success of our sessions (9 in total) over the course of the conference, “success” being measured in terms of the number of participants in those sessions.

At first glance, I observed a kind of “stagnation” of the interest in cloud topics. I can provide several pieces of evidence and explanations for that. First of all, the keynote on the first day, which presented a study of German customers concerning cloud adoption, didn't reveal any useful information in my opinion. Cloud adoption is increasing; however, there are still some limitations in the deployment of cloud solutions because of security issues, and in particular the CLOUD Act.

Another possible reason for the “small” interest in cloud topics during the conference, in my opinion, lies in the fact that the cloud has become a kind of “commodity”. Furthermore, we all have to admit that Oracle definitely does not have a leadership position in this business: Amazon, Azure and Google are the clear leaders, and Oracle remains a “small” challenger.

Our session by Thomas Rein did not have that many attendees, even though we presented a concrete use case of Oracle Cloud usage and adoption. The DOAG conference is a user group conference: techies mostly attend it, and techies have to deal with concrete issues; currently, the Oracle Cloud is not one of them.

So what were the “main topics”, according to what I could observe?

Open Source was a huge success for us: both the MySQL and the two PostgreSQL tracks were very, very successful, thanks to Elisa Usai and Daniel Westermann.

Some general topics like an “Introduction to Blockchain” were also a huge success; thanks to Alain Lacour for this successful session.

Finally, the “classics”, like database tuning on old-fashioned “on-prem” architectures, were also a huge success, thanks to our technology leader Clemens Bleile and to Jérôme Witt, who explained all about I/O internals (which are of course deeply linked with performance issues).

Thanks also to our other speakers, Pascal Brand (Implement SAML 2.0 SSO in WLS and IDM Federation Services) and our CEO David Hueber (ODA HA: What about VMs and backup?), who presented some more “focused” topics.

I also want to use this blog post to thank the Scope Alliance, and in particular Esentri, for the very nice party on Wednesday evening: besides hard work, hard partying is also necessary :-)

Below, Daniel Westermann on stage with our customer “die Mobiliar”, in front of a full room:

IMG_5143

Cet article Flashback to the DOAG conference 2018 est apparu en premier sur Blog dbi services.

My DOAG Debut

Fri, 2018-11-23 08:50

Unbelievable! After more than 10 years of working in the Oracle database environment, this year marked my first participation in the DOAG Conference + Exhibition.

After a relaxed trip to Nürnberg with all the power our small car could provide on the German Autobahn, we arrived at the Messezentrum.
With the combined power of our dbi services team, the booth was ready in no time, so we could switch to the more relaxed part of the day and ended up in our hotel's bar with other DOAG participants.

The next few days were a firework of valuable sessions, stimulating discussions and some after-hours parties, which made me reflect on my life decisions and led me to the question: why did it take me so long to participate in the DOAG Conference + Exhibition?

It would make this post unreadably long and boring if I summed up all the sessions I attended.
So I will just mention a few highlights with links to the presentations:

Boxing-Gloves-Icons

Boxing Gloves Vectors by Creativology.pk

And of course, what must be mentioned is The Battle: Oracle vs. Postgres: Jan Karremans vs. Daniel Westermann

The red boxing glove (for Oracle) was worn by Daniel Westermann, an Oracle expert for many, many years who is now the Open Infrastructure Technology Leader @ dbi services, while Jan Karremans, Senior Sales Engineer at EnterpriseDB, put on the blue glove (for Postgres). The room was fully packed with over 200 people, most of them with more sympathy for Oracle.

 Oracle vs. Postgres

The Battle: Oracle vs. Postgres

Knowing how much Daniel loves the open source database, it was inspiring to see how eloquently he defended the Oracle system and got Jan into trouble multiple times.
It was a good and brave fight between the opponents, in which Daniel had the better arguments and won on points.
Next time I would like to see Daniel on the other side, defending Postgres, because I am sure he could take down almost every opponent.

In the end, this DOAG was a wonderful experience and I am sure it won't take another 10 years before I come back.

PS: I could write about the after-party, but as you know, what happens at the after-party stays at the after-party, except the headache: this little b… stays a little bit longer.

PPS: On the last day I got a nice little present from virtual7 for winning the F1 grand prix challenge. I know exactly at which dbi event we will open this bottle, stay tuned…
IMG_20181122_153112

Cet article My DOAG Debut est apparu en premier sur Blog dbi services.

DOAG 2018: Key word: “Docker”

Fri, 2018-11-23 06:34

Capture

In my blog about DOAG last year, I wrote that I saw a growing interest in automatic deployment tools and Docker containers. This year confirmed that interest: there were a lot of presentations about Docker containers, Kubernetes and OpenShift, not only in the database stream and the DevOps stream but also in the Middleware one. I counted more than 25 sessions where the keyword Docker appeared in the abstract.

Despite my best intentions, I was not able to attend all of them. There were simply too many.

One of the interesting presentations that caught my attention was “Management von Docker Containern mit Openshift & Kubernetes” by Heiko Stein. He gave us a very good overview of the services of Kubernetes and OpenShift and showed us how they can complement each other.

Another one was about monitoring and diagnosing the performance of a Java application (OpenJDK 11) running in a Docker container:
Monitoring of JVM in Docker to Diagnose Performance Issues. This one was interesting on several levels, as it covered Docker containers, OpenJDK 11 and the tools delivered with it. Monitoring applications and diagnosing issues are always interesting subjects to follow in order to pick up some hints from someone else's experience.

The last one I will list, but not the least, was “MS Docker: 42 Tips & Tricks for Working with Containers”. In summary, this one was all you ever wanted to know about Docker, but the 45-minute session was really too short to get everything out of it :-(.

Those presentations only made my interest in these technologies grow faster.

Cet article DOAG 2018: Key word: “Docker” est apparu en premier sur Blog dbi services.
