Feed aggregator

Be Careful When Subscribing To Oracle Learning Subscription

Michael Dinh - Mon, 2019-10-07 18:53

Subscribing to Oracle Learning Subscription seems good in theory but bad in reality.

Oracle Support informed me: “Oracle University’s policy regarding Learning Subscription courseware materials is that they cannot be downloaded by customers.”

How convenient of Oracle, as this information should have been stated at https://education.oracle.com/oracle-learning-subscriptions

I took it for granted that the materials could be downloaded, since they are made available for download in all other training formats.

Not disclosing this information seems deceptive, because by the time one has subscribed and discovered the lack of full disclosure, it may be too late.

Hopefully, this will help others avoid the same mistake.

 

The future of work

RDBMS Insight - Wed, 2019-10-02 10:48

The people behind the American tech industry blog TechDirt have put together an anthology of stories about the future of work called (appropriately) Working Futures, and I’m excited to have a story in it.

My story, “The Auditor and the Exorcist,” was heavily influenced by the years I spent working remotely for Oracle. It tells a near-future story of working in a world with a social credit system that depends on weak AI. Of course, poor security, hacking, and bugs are all present in the future, too…

Quick summary: Pat is stuck in a soul-deadening job as a social credit auditor. Her thoroughly modern home suddenly shows every sign of being haunted. Pat doesn’t believe in ghosts, but the only thing that seems to restore her home to normalcy is the help of an online exorcist. Is Pat’s house really haunted… or is something more sinister going on?

The book is now available in both ebook and paperback format at Amazon, and if you have a Kindle Unlimited subscription, you can read it for free. Plus, you can repost any of the stories in it (non-commercially): The stories in the anthology are all released under CC license. My story is released as CC-BY-NC-ND, meaning anyone’s free to recopy it non-commercially, but not to re-release it commercially or to remix without permission.

Image credit: Remixed from a public domain image found on publicdomainimages.net. Feel free to reuse/remix further.

Categories: DBA Blogs

Java Development with Autonomous Transaction Processing Dedicated (ATP-D)

Kuassi Mensah - Tue, 2019-10-01 18:20
The Oracle Autonomous Transaction Processing Dedicated (ATP-D) is a database cloud service that lets you run a private database cloud service on dedicated Exadata infrastructure within the Oracle Public Cloud. The goal of this blog article is to help you, Java developer or architect, build and deploy fast, scalable, and reliable Java applications with ATP-D, using plain Java, Java Servlets, or Java microservices with WebLogic, Helidon, WebSphere, Liberty, Tomcat, WildFly (JBoss), Spring, and so on.



Please read the full blog post @ https://medium.com/oracledevs/java-development-with-autonomous-transaction-processing-dedicated-atp-d-f0355a2f9abd

Oracle JDBC drivers on Maven Central

Kuassi Mensah - Tue, 2019-10-01 18:04


At last!
Yes, you asked for it, and with some delay (better late than ..), we did it!
Maven Central is now a distribution center for the Oracle JDBC drivers. We started with the latest release, 19.3.0.0, and will soon add earlier supported releases.
Read the full post @ https://medium.com/oracledevs/oracle-jdbc-drivers-on-maven-central-64fcf724d8b

Migrating Oracle database from windows to ODA

Yann Neuhaus - Tue, 2019-10-01 02:47

I have recently been working on an interesting customer project where I had to migrate Oracle Standard Edition databases from Windows to ODA. The ODAs are X7-2M models, running version 18.5. This version comes with Red Hat Enterprise Linux 6.10 (Santiago). Both the Windows databases and the target ODA databases are running PSU 11.2.0.4.190115, but this would definitely also work for Oracle 12c and 18c databases. The databases are licensed with Standard Edition, so migrating through Data Guard was not possible. In this blog I would like to share the experience I gained on this topic, as well as the method and steps I used to successfully migrate those databases.

Limitations

Windows and Linux platforms being of the same endianness, I initially thought it would be no more complicated than simply duplicating the Windows database to an ODA instance using the last backup. ODA databases are OMF databases, so it could not be easier, as no convert parameter is needed.
After creating a single-instance database on the ODA, exporting the current database pfile and adapting it for the ODA, and creating the needed TNS connections, I ran a single RMAN duplicate command:

RMAN> run {
2> set newname for database to new;
3> duplicate target database to 'ODA_DBNAME' backup location '/u99/app/oracle/backup';
4> }

Note: if the database is huge, for example more than a terabyte, and your SGA is small, you might want to increase it. A bigger SGA will lower the restore time; a minimum of 50 GB would be a good compromise. Also, if your ODA is from the ODA X7 family, you will benefit from NVMe technology. In my experience, duplicating a 1.5 TB database with the backup stored locally did not take more than 40 minutes.
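As a minimal sketch, assuming the instance uses an spfile (the 50 GB value is only an illustration and must fit within the server's memory; sga_max_size requires an instance restart to take effect):

SQL> alter system set sga_max_size=50G scope=spfile;

System altered.

SQL> alter system set sga_target=50G scope=spfile;

System altered.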

I was more than happy to see the first duplication step complete successfully:

Finished restore at 17-JUL-2019 16:45:10

And I was expecting the same for the next recovery part.

Unfortunately, this didn’t end as expected, and I quickly got the following restore errors:

Errors in memory script
RMAN-03015: error occurred in stored script Memory Script
RMAN-06136: ORACLE error from auxiliary database: ORA-01507: database not mounted
ORA-06512: at "SYS.X$DBMS_RCVMAN", line 13661
ORA-06512: at line 1
RMAN-03015: error occurred in stored script Memory Script
RMAN-20000: abnormal termination of job step
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_07_17/o1_mf_1_25514_glyf3yd3_.arc'
RMAN-11001: Oracle Error:
ORA-10562: Error occurred while applying redo to data block (file# 91, block# 189)
ORA-10564: tablespace DBVISIT
ORA-01110: data file 91: '/u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/o1_mf_dbvisit_glyczqcj_.dbf'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 501874
ORA-00600: internal error code, arguments: [4502], [0], [], [], [], [], [], [], [], [], [], [] RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 07/17/2019 16:45:32
RMAN-05501: aborting duplication of target database

While troubleshooting the problem, I understood that migrating a database from Windows to Linux might not be so simple. The following Oracle Doc IDs describe the problem:
Restore From Windows To Linux using RMAN Fails (Doc ID 2003327.1)
Cross-Platform Database Migration (across same endian) using RMAN Transportable Database (Doc ID 1401921.1)
RMAN DUPLICATE/RESTORE/RECOVER Mixed Platform Support (Doc ID 1079563.1)

The problem comes from the fact that applying redo across Windows and Linux platforms is not supported unless the database is a standby. For a primary database, the only possibility would be to go through a cold backup, which in my case was impossible given the database size, the time needed to execute a backup, and the short maintenance window.

Looking for another solution and doing further tests, I found one, which I describe in the next steps.

Restoring the database from the last backup

In order to restore the database, I ran the following steps.

  1. Start the ODA instance in nomount:

    SQL> startup nomount

  2. Restore the last available control file from the backup with RMAN:

    RMAN> connect target /
     
    RMAN> restore controlfile from '/mnt/backupNFS/oracle/ODA_DBNAME/20190813_233004_CTL_ODA_DBNAME_1179126808_S2864_P1.BCK';

  3. Mount the database:

    SQL> alter database mount;

  4. Catalog the backup path:

    RMAN> connect target /
     
    RMAN> catalog start with '/mnt/backupNFS/oracle/ODA_DBNAME';

  5. And finally restore the database:

    RMAN> connect target /
     
    RMAN> run {
    2> set newname for database to new;
    3> restore database;
    4> switch datafile all;
    5> }

Convert the primary database to a physical standby database

In order to be able to recover the database, we convert the primary database to a physical standby.

  1. We can check the current status and see that our database is a primary in mounted state:

    SQL> select status,instance_name,database_role,open_mode from v$database,v$Instance;
     
    STATUS INSTANCE_NAME DATABASE_ROLE OPEN_MODE
    ------------ ---------------- ---------------- --------------------
    MOUNTED ODA_DBNAME PRIMARY MOUNTED

  2. We convert the database to a physical standby:

    SQL> alter database convert to physical standby;
     
    Database altered.

  3. We restart the database:

    SQL> shutdown immediate
     
    SQL> startup mount

  4. We can check the new database status:

    SQL> select status,instance_name,database_role,open_mode from v$database,v$Instance;
     
    STATUS INSTANCE_NAME DATABASE_ROLE OPEN_MODE
    ------------ ---------------- ---------------- --------------------
    MOUNTED ODA_DBNAME PHYSICAL STANDBY MOUNTED

Get the current Windows database SCN

We are now ready to recover the database, and the application can be stopped. The next steps are executed during the maintenance window. The Windows database listener can be stopped to make sure there are no new connections.

  1. We make sure there is no existing application session on the database (any remaining session can be terminated; see the sketch after this list):

    SQL> set linesize 300
    SQL> set pagesize 500
    SQL> col machine format a20
    SQL> col service_name format a20
     
    SQL> select SID, serial#, username, machine, process, program, status, service_name, logon_time from v$session where username not in ('SYS', 'PUBLIC') and username is not null order by status, username;

  2. We create a restore point:

    SQL> create restore point for_migration_14082019;
     
    Restore point created.

  3. We archive the last online log transactions:

    SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
     
    System altered.

  4. We retrieve the SCN corresponding to the restore point:

    SQL> col scn format 999999999999999
     
    SQL> select scn from v$restore_point where lower(name)='for_migration_14082019';
     
    SCN
    ----------------
    13069540631

  5. We back up the last archive logs. This is executed on the Windows database using our dbi services internal DMK tool (https://www.dbi-services.com/offering/products/dmk-management-kit/):

    servicedbi@win_srv:E:\app\oracle\local\dmk_custom\bin\ [ODA_DBNAME] ./rman_backup_ODA_DBNAME_arc.bat
     
    E:\app\oracle\local\dmk_custom\bin>powershell.exe -command "E:\app\oracle\local\dmk_ha\bin\check_primary.ps1 ODA_DBNAME 'dmk_rman.ps1 -s ODA_DBNAME -t bck_arc.rcv -c E:\app\oracle\admin\ODA_DBNAME\etc\rman.cfg
     
    [OK]::KSBL::RMAN::dmk_dbbackup::ODA_DBNAME::bck_arc.rcv
     
    Logfile is : E:\app\oracle\admin\ODA_DBNAME\log\ODA_DBNAME_bck_arc_20190814_141754.log
     
    RMAN return Code: 0
    2019-08-14_02:19:01::check_primary.ps1::MainProgram ::INFO ==> Program completed
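Should the session query from step 1 still return application sessions, they could be terminated before creating the restore point. A minimal sketch, where '123,45678' is a placeholder for the sid and serial# taken from the query output:

    SQL> -- sid and serial# are placeholders taken from the session query above
    SQL> alter system kill session '123,45678' immediate;

    System altered.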

Recover the database

The database can now be recovered up to our restore point SCN, 13069540631. Note that RMAN's UNTIL SCN is an exclusive upper bound, which is why 13069540632 is used below.

  1. We first need to catalog the new archive log backups:

    RMAN> connect target /
     
    RMAN> catalog start with '/mnt/backupNFS/oracle/ODA_DBNAME';

  2. And recover the database until SCN 13069540632:

    RMAN> connect target /
     
    RMAN> run {
    2> set until scn 13069540632;
    3> recover database;
    4> }
     
    archived log file name=/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30098_go80084r_.arc RECID=30124 STAMP=1016289320
    archived log file name=/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30099_go80084x_.arc thread=1 sequence=30099
    channel default: deleting archived log(s)
    archived log file name=/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30099_go80084x_.arc RECID=30119 STAMP=1016289320
    archived log file name=/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30100_go8008bg_.arc thread=1 sequence=30100
    channel default: deleting archived log(s)
    archived log file name=/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30100_go8008bg_.arc RECID=30121 STAMP=1016289320
    media recovery complete, elapsed time: 00:00:02
    Finished recover at 14-AUG-2019 14:35:23

  3. We can check the alert log and see that recovery has been performed until SCN 13069540632:

    oracle@ODA02:/u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/ [ODA_DBNAME] taa
    ORA-279 signalled during: alter database recover logfile '/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30098_go80084r_.arc'...
    alter database recover logfile '/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30099_go80084x_.arc'
    Media Recovery Log /u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30099_go80084x_.arc
    ORA-279 signalled during: alter database recover logfile '/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30099_go80084x_.arc'...
    alter database recover logfile '/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30100_go8008bg_.arc'
    Media Recovery Log /u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30100_go8008bg_.arc
    Wed Aug 14 14:35:23 2019
    Incomplete Recovery applied until change 13069540632 time 08/14/2019 14:13:46
    Media Recovery Complete (ODA_DBNAME)
    Completed: alter database recover logfile '/u03/app/oracle/fast_recovery_area/ODA_DBNAME_RZA/archivelog/2019_08_14/o1_mf_1_30100_go8008bg_.arc'

  4. We can check the new ODA database's current SCN:

    SQL> col current_scn format 999999999999999
     
    SQL> select current_scn from v$database;
     
    CURRENT_SCN
    ----------------
    13069540631

Convert database to primary again

The database can now be converted back to a primary.

SQL> alter database activate standby database;
 
Database altered.


SQL> select status,instance_name,database_role,open_mode from v$database,v$Instance;
 
STATUS INSTANCE_NAME DATABASE_ROLE OPEN_MODE
------------ ---------------- ---------------- --------------------
MOUNTED ODA_DBNAME PRIMARY MOUNTED

At this step, if the Windows source database were running version 11.2.0.3, we could upgrade the new ODA database to 11.2.0.4 following the common Oracle database upgrade process.
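As a rough sketch of that additional step, assuming the manual upgrade route (the ODA home is already 11.2.0.4; catupgrd.sql shuts the instance down when it completes, and utlrp.sql recompiles invalid objects afterwards):

SQL> startup upgrade
SQL> @?/rdbms/admin/catupgrd.sql
SQL> startup
SQL> @?/rdbms/admin/utlrp.sql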

And finally we can open our database, now migrated from Windows to Linux.


SQL> alter database open;
 
Database altered.


SQL> select status,instance_name,database_role,open_mode from v$database,v$Instance;
 
STATUS INSTANCE_NAME DATABASE_ROLE OPEN_MODE
------------ ---------------- ---------------- --------------------
OPEN ODA_DBNAME PRIMARY READ WRITE


oracle@ODA02:/u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/ [ODA_DBNAME] ODA_DBNAME
********* dbi services Ltd. *********
STATUS : OPEN
DB_UNIQUE_NAME : ODA_DBNAME_RZA
OPEN_MODE : READ WRITE
LOG_MODE : ARCHIVELOG
DATABASE_ROLE : PRIMARY
FLASHBACK_ON : NO
FORCE_LOGGING : YES
VERSION : 11.2.0.4.0
*************************************

Post-migration steps

A few post-migration steps need to be executed.

Create the redo logs again

The redo logs are still stamped with the Windows paths and have therefore been created in the $ORACLE_HOME/dbs folder. In this step we will create new OMF ones.

  1. Check the current online log members:

    SQL> set linesize 300
    SQL> set pagesize 500
    SQL> col member format a100
     
    SQL> select a.GROUP#, b.member, a.status, a.bytes/1024/1024 MB from v$log a, v$logfile b where a.GROUP#=b.GROUP#;
     
    GROUP# MEMBER STATUS MB
    ---------- ---------------------------------------------------------------------------------------------------- ---------------- ----------
    6 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_6_1.LOG UNUSED 500
    6 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_6_2.LOG UNUSED 500
    5 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_5_2.LOG UNUSED 500
    5 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_5_1.LOG UNUSED 500
    4 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_4_2.LOG UNUSED 500
    4 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_4_1.LOG UNUSED 500
    3 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_3_2.LOG UNUSED 500
    3 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_3_1.LOG UNUSED 500
    2 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_2_2.LOG UNUSED 500
    2 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_2_1.LOG UNUSED 500
    1 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_1_2.LOG CURRENT 500
    1 /u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_1_1.LOG CURRENT 500

  2. Drop the unused redo log groups, keeping only one besides the current group:

    SQL> alter database drop logfile group 6;
     
    Database altered.
     
    SQL> alter database drop logfile group 5;
     
    Database altered.
     
    SQL> alter database drop logfile group 4;
     
    Database altered.
     
    SQL> alter database drop logfile group 3;
     
    Database altered.
     

  3. Create the recently dropped groups again:

    SQL> alter database add logfile group 3 size 500M;
     
    Database altered.
     
    SQL> alter database add logfile group 4 size 500M;
     
    Database altered.
     
    SQL> alter database add logfile group 5 size 500M;
     
    Database altered.
     
    SQL> alter database add logfile group 6 size 500M;
     
    Database altered.

  4. Drop the last unused redo log group and create it again:

    SQL> alter database drop logfile group 2;
     
    Database altered.
     
    SQL> alter database add logfile group 2 size 500M;
     
    Database altered.

  5. Execute a log file switch and a checkpoint so that the current redo group is no longer active and can be dropped:

    SQL> alter system switch logfile;
     
    System altered.
     
    SQL> alter system checkpoint;
     
    System altered.

  6. Drop it and create it again:

    SQL> alter database drop logfile group 1;
     
    Database altered.
     
    SQL> alter database add logfile group 1 size 500M;
     
    Database altered.

  7. Check the redo group members:

    SQL> select a.GROUP#, b.member, a.status, a.bytes/1024/1024 MB from v$log a, v$logfile b where a.GROUP#=b.GROUP#;
     
    GROUP# MEMBER STATUS MB
    ---------- ---------------------------------------------------------------------------------------------------- ---------------- ----------
    3 /u03/app/oracle/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_3_go81rj4t_.log INACTIVE 500
    3 /u02/app/oracle/oradata/ODA_DBNAME_RZA/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_3_go81rjqn_.log INACTIVE 500
    4 /u03/app/oracle/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_4_go81ron1_.log UNUSED 500
    4 /u02/app/oracle/oradata/ODA_DBNAME_RZA/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_4_go81rp6o_.log UNUSED 500
    5 /u03/app/oracle/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_5_go81rwhs_.log UNUSED 500
    5 /u02/app/oracle/oradata/ODA_DBNAME_RZA/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_5_go81rx1g_.log UNUSED 500
    6 /u03/app/oracle/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_6_go81s1rk_.log UNUSED 500
    6 /u02/app/oracle/oradata/ODA_DBNAME_RZA/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_6_go81s2bx_.log UNUSED 500
    2 /u03/app/oracle/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_2_go81sgdf_.log CURRENT 500
    2 /u02/app/oracle/oradata/ODA_DBNAME_RZA/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_2_go81sgxd_.log CURRENT 500
    1 /u03/app/oracle/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_1_go81vpls_.log UNUSED 500
    1 /u02/app/oracle/oradata/ODA_DBNAME_RZA/redo/ODA_DBNAME_RZA/onlinelog/o1_mf_1_go81vq4v_.log UNUSED 500

  8. Delete the old redo log member files, which still carry the Windows-style names:

    oracle@ODA02:/u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/ [ODA_DBNAME] cdh
     
    oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/ [ODA_DBNAME] cd dbs
     
    oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] ls -ltrh *REDO*.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_6_2.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_6_1.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_5_2.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_5_1.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_4_2.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_4_1.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_3_2.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_3_1.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_2_2.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 14:59 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_2_1.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 15:05 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_1_2.LOG
    -rw-r----- 1 oracle asmadmin 501M Aug 14 15:05 I:FAST_RECOVERY_AREAODA_DBNAME_SITE1ONLINELOGREDO_1_1.LOG
     
    oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] rm *REDO*.LOG

Create the temp file again
  1. Checking the current temp file, we can see that the path is still the Windows one:

    SQL> set linesize 300
    SQL> col name format a100
     
    SQL> select b.name, b.status, b.bytes/1024/1024 MB, a.name from v$tablespace a, v$tempfile b where a.TS#=b.TS#;
     
    NAME STATUS MB NAME
    ---------------------------------------------------------------------------------------------------- ------- ---------- -------------------------------------------
    F:\ORADATA\ODA_DBNAME\TEMPORARY_DATA_1.DBF ONLINE 8192 TEMPORARY_DATA

  2. We can check that the default temporary tablespace is TEMPORARY_DATA:

    SQL> col property_value format a50
     
    SQL> select property_name, property_value from database_properties where property_name like '%DEFAULT%TABLESPACE%';
     
    PROPERTY_NAME PROPERTY_VALUE
    ------------------------------ --------------------------------------------------
    DEFAULT_TEMP_TABLESPACE TEMPORARY_DATA
    DEFAULT_PERMANENT_TABLESPACE USER_DATA

  3. Let’s create a new temp tablespace and make it the default one:

    SQL> create temporary tablespace TEMP tempfile size 8G;
     
    Tablespace created.
     
    SQL> alter database default temporary tablespace TEMP;
     
    Database altered.
     
    SQL> select property_name, property_value from database_properties where property_name like '%DEFAULT%TABLESPACE%';
     
    PROPERTY_NAME PROPERTY_VALUE
    ------------------------------ --------------------------------------------------
    DEFAULT_TEMP_TABLESPACE TEMP
    DEFAULT_PERMANENT_TABLESPACE USER_DATA

  4. Drop the previous TEMPORARY_DATA tablespace:

    SQL> drop tablespace TEMPORARY_DATA including contents and datafiles;
     
    Tablespace dropped.
     
    SQL> select b.file#, b.name, b.status, b.bytes/1024/1024 MB, a.name from v$tablespace a, v$tempfile b where a.TS#=b.TS#;
     
    FILE# NAME STATUS MB NAME
    ---------- ---------------------------------------------------------------------------------------------------- ------- ----------
    3 /u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/o1_mf_temp_go83m1tp_.tmp ONLINE 8192 TEMP

  5. Create the TEMPORARY_DATA tablespace again and make it the default one:

    SQL> create temporary tablespace TEMPORARY_DATA tempfile size 8G;
     
    Tablespace created.
     
    SQL> select b.file#, b.name, b.status, b.bytes/1024/1024 MB, a.name from v$tablespace a, v$tempfile b where a.TS#=b.TS#;
     
    FILE# NAME STATUS MB NAME
    ---------- ---------------------------------------------------------------------------------------------------- ------- ----------
    1 /u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/o1_mf_temporar_go83wfd7_.tmp ONLINE 8192 TEMPORARY_DATA
    3 /u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/o1_mf_temp_go83m1tp_.tmp ONLINE 8192 TEMP
     
    SQL> alter database default temporary tablespace TEMPORARY_DATA;
     
    Database altered.
     
    SQL> select property_name, property_value from database_properties where property_name like '%DEFAULT%TABLESPACE%';
     
    PROPERTY_NAME PROPERTY_VALUE
    ------------------------------ --------------------------------------------------
    DEFAULT_TEMP_TABLESPACE TEMPORARY_DATA
    DEFAULT_PERMANENT_TABLESPACE USER_DATA

  6. And finally drop the intermediate temp tablespace:

    SQL> drop tablespace TEMP including contents and datafiles;
     
    Tablespace dropped.
     
    SQL> select b.file#, b.name, b.status, b.bytes/1024/1024 MB, a.name from v$tablespace a, v$tempfile b where a.TS#=b.TS#;
     
    FILE# NAME STATUS MB NAME
    ---------- ---------------------------------------------------------------------------------------------------- ------- ----------
    1 /u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/o1_mf_temporar_go83wfd7_.tmp ONLINE 8192 TEMPORARY_DATA

  7. An appropriate max size can be given to the newly created temp tablespace:

    SQL> alter database tempfile '/u02/app/oracle/oradata/ODA_DBNAME_RZA/ODA_DBNAME_RZA/datafile/o1_mf_temporar_go83wfd7_.tmp' autoextend on maxsize 31G;
     
    Database altered.

  8. Remove the wrong temp file stored in $ORACLE_HOME/dbs:

    oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] ls -ltr
    -rw-r--r-- 1 oracle oinstall 2851 May 15 2009 init.ora
    -rw-r--r-- 1 oracle oinstall 64 Jul 25 08:10 initODA_DBNAME.ora.old
    -rw-r----- 1 oracle oinstall 2048 Jul 25 08:10 orapwODA_DBNAME
    -rw-r--r-- 1 oracle oinstall 67 Jul 25 08:31 initODA_DBNAME.ora
    -rw-r----- 1 oracle asmadmin 8589942784 Aug 14 08:14 F:ORADATAODA_DBNAMETEMPORARY_DATA_1.DBF
    -rw-rw---- 1 oracle asmadmin 1544 Aug 14 14:59 hc_ODA_DBNAME.dat
    -rw-r----- 1 oracle asmadmin 43466752 Aug 14 15:48 snapcf_ODA_DBNAME.f
     
    oracle@RZA-ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] rm F:ORADATAODA_DBNAMETEMPORARY_DATA_1.DBF

Apply specific ODA parameters

The following ODA-specific parameters can be applied to the newly created instance.


SQL> alter system set "_datafile_write_errors_crash_instance"=false scope=spfile;
 
System altered.
 
SQL> alter system set "_db_writer_coalesce_area_size"=16777216 scope=spfile;
 
System altered.
 
SQL> alter system set "_disable_interface_checking"=TRUE scope=spfile;
 
System altered.
 
SQL> alter system set "_ENABLE_NUMA_SUPPORT"=FALSE scope=spfile;
 
System altered.
 
SQL> alter system set "_FILE_SIZE_INCREASE_INCREMENT"=2143289344 scope=spfile;
 
System altered.
 
SQL> alter system set "_gc_policy_time"=0 scope=spfile;
 
System altered.
 
SQL> alter system set "_gc_undo_affinity"=FALSE scope=spfile;
 
System altered.
 
SQL> alter system set db_block_checking='FULL' scope=spfile;
 
System altered.
 
SQL> alter system set db_block_checksum='FULL' scope=spfile;
 
System altered.
 
SQL> alter system set db_lost_write_protect='TYPICAL' scope=spfile;
 
System altered.
 
SQL> alter system set sql92_security=TRUE scope=spfile;
 
System altered.
 
SQL> alter system set use_large_pages='only' scope=spfile;
 
System altered.

The “_fix_control” parameter is specific to Oracle 12c and is not compatible with Oracle 11g. See Doc ID 2145105.1.

Register database in grid

After applying the ODA-specific instance parameters, we can register the database in the grid and start it with the grid.


oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl add database -d ODA_DBNAME_RZA -o /u01/app/oracle/product/11.2.0.4/dbhome_1 -c SINGLE -i ODA_DBNAME -x RZA-ODA02 -m ksbl.local -p /u02/app/oracle/oradata/ODA_DBNAME_RZA/dbs/spfileODA_DBNAME.ora -r PRIMARY -s OPEN -t IMMEDIATE -n ODA_DBNAME -j "/u02/app/oracle/oradata/ODA_DBNAME_RZA,/u03/app/oracle"
 
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl start database -d ODA_DBNAME_RZA
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl status database -d ODA_DBNAME_RZA
Instance ODA_DBNAME is running on node rza-oda02
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] ODA_DBNAME
********* dbi services Ltd. *********
STATUS : OPEN
DB_UNIQUE_NAME : ODA_DBNAME_RZA
OPEN_MODE : READ WRITE
LOG_MODE : ARCHIVELOG
DATABASE_ROLE : PRIMARY
FLASHBACK_ON : NO
FORCE_LOGGING : YES
VERSION : 11.2.0.4.0
*************************************

We can check that everything works correctly:

oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl stop database -d ODA_DBNAME_RZA
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl status database -d ODA_DBNAME_RZA
Instance ODA_DBNAME is not running on node rza-oda02
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] ODA_DBNAME
********* dbi services Ltd. *********
STATUS : STOPPED
*************************************
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl start database -d ODA_DBNAME_RZA
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] srvctl status database -d ODA_DBNAME_RZA
Instance ODA_DBNAME is running on node rza-oda02
 
oracle@ODA02:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [ODA_DBNAME] ODA_DBNAME
********* dbi services Ltd. *********
STATUS : OPEN
DB_UNIQUE_NAME : ODA_DBNAME_RZA
OPEN_MODE : READ WRITE
LOG_MODE : ARCHIVELOG
DATABASE_ROLE : PRIMARY
FLASHBACK_ON : NO
FORCE_LOGGING : YES
VERSION : 11.2.0.4.0
*************************************

Conclusion

By going through a physical standby database, I was able to successfully migrate the Windows databases to Linux on the ODA. I migrated source 11.2.0.4 databases, and also an 11.2.0.3 database by adding an upgrade step to the process.

Cet article Migrating Oracle database from windows to ODA est apparu en premier sur Blog dbi services.

How to enlarge an #Exasol database by adding a node

The Oracle Instructor - Mon, 2019-09-30 08:46

Adding a cluster node will not only increase the available storage capacity but also the total compute power of your cluster. This scale-out is quite a common operation for Exasol customers.

My example shows how to change an existing 2+1 cluster into a 3+0 cluster. Before you can enlarge the database with an active node, this node has to be a reserve node first. See here how to add a reserve to a 2+0 cluster. Of course you can add another reserve node to change from 3+0 to 3+1 afterwards. See here if you wonder why you may want to have a reserve node at all.

Initial state – reserve node is present

I start with a 2+1 cluster – 2 active nodes and 1 reserve node:

For later comparison, let’s look at the distribution of rows of one of my tables:

The rows are roughly evenly distributed across the two active nodes.
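A quick way to check this in SQL (a sketch: IPROC() returns the node a row resides on, and my_table stands for your own table):

SELECT IPROC() AS node, COUNT(*) AS num_rows
FROM my_table
GROUP BY 1
ORDER BY 1;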

Before you continue, it would be a good idea to take a backup on a remote archive volume now – just in case.

Shutdown database before volume modification

A data volume used by a database cannot be modified while that database is up, so shut it down first:

After going to the Storage branch in EXAoperation, click on the data volume:

Then click on Edit:

Decrease volume redundancy to 1

Change the redundancy from 2 to 1, then click Apply:

Why is the redundancy reduced from 2 to 1 here? Let’s try to explain that. Initially, I had 2 active nodes with a volume using redundancy 2:

A and B are master segments while A’ and B’ are mirrored segments. If I could add a node to this volume keeping the existing segments, it would look like this:

Of course this would be a bad idea. The redundancy is reduced to 1 before the new node is added to the volume:

Only distributed master segments with no mirrors at first. Then the redundancy is again increased to 2:

This way, every master segment can be mirrored on a neighbor node. That’s why the redundancy needs to be reduced to 1.

Add new node to volume

After having decreased the volume redundancy to 1, click Edit on the volume detail page again, add n13 as a new master node to the volume, and click Apply:

Increase redundancy to 2

Now click Edit again and increase the redundancy to 2:

The state of the volume now shows as RECOVERING – don’t worry, it just means that mirrored segments are being created.

Enlarge the database

Now click on the database link on the EXASolution screen:

Select the Action Enlarge and click Submit:

Enter 1 and click Apply:

The database detail page looks like this now:

Reorganize

Technically, this is a 3+0 cluster now – but the third node doesn’t contain any data yet. If we look at the same table as before, we see that no rows are on the new node:

To change that, a REORGANIZE needs to be done, either at the database, schema, or table level. The easiest to perform is REORGANIZE DATABASE:

It took me about 10 minutes on my tiny database. That command redistributes every table across all cluster nodes and can be time-consuming with high data volumes. While a table is being reorganized, it is locked against DML. You can monitor the ongoing reorganization by selecting from EXA_DBA_PROFILE_RUNNING in another session.
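In SQL, the whole sequence could look like this (the monitoring query is run from a second session while the reorganization is ongoing):

REORGANIZE DATABASE;

-- from a second session, watch the progress:
SELECT * FROM EXA_DBA_PROFILE_RUNNING;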

Final state

Let’s check the distribution of the previous table again:

As you can see above, now there are rows on the added node. Also EXAoperation confirms that the new node is not empty any more:

On a larger database, you would see that the volume usage per node is less than before and that every node holds roughly the same amount of data. For failover protection, you could add another reserve node now.

Summary of steps
  1. Add a reserve node (if not yet existing)
  2. Take a backup on a remote archive volume
  3. Shutdown database
  4. Decrease volume redundancy to 1
  5. Add former reserve node as new master node to the volume
  6. Increase redundancy to 2
  7. Enlarge database by 1 active node
  8. Reorganize
  9. Add another reserve node (optionally)
Categories: DBA Blogs

Using non-root SQL Server containers on Docker and K8s

Yann Neuhaus - Sun, 2019-09-29 11:54

This is something I have been waiting for a while, in fact since SQL Server 2017, and the news came out on Wednesday, September 9th, 2019. Running non-root SQL Server containers is now possible on the next version of SQL Server (2019), and it has been backported to SQL Server 2017 as well. Non-root SQL Server containers will likely be one of the hidden gems among the new SQL Server features, but this is definitely good news for me because it will facilitate the transition of SQL Server containers to production from a security standpoint.

At this stage, there is no need to explain why it is not a best practice to run SQL Server containers, or more generally speaking applications, with root privileges within a container. For further information, I invite you to take a look at the different threats implied by such a configuration with your google-fu.

Let’s start with Docker environments. Microsoft provides a Dockerfile to build an image for either SQL Server 2017 or SQL Server 2019. Note that this Dockerfile is based on the official SQL Server Docker image and performs some extra configuration for non-root capabilities. Here is the interesting part:

# Example of creating a SQL Server 2019 container image that will run as a user 'mssql' instead of root
# This example is based on the official image from Microsoft and effectively changes the user that SQL Server runs as
# and allows dumps to be generated by a non-root user


FROM mcr.microsoft.com/mssql/server:2019-latest

# Create non-root user and update permissions
#
RUN useradd -M -s /bin/bash -u 10001 -g 0 mssql
RUN mkdir -p -m 770 /var/opt/mssql && chgrp -R 0 /var/opt/mssql

# Grant sql the permissions to connect to ports <1024 as a non-root user
#
RUN setcap 'cap_net_bind_service+ep' /opt/mssql/bin/sqlservr

# Allow dumps from the non-root process
# 
RUN setcap 'cap_sys_ptrace+ep' /opt/mssql/bin/paldumper
RUN setcap 'cap_sys_ptrace+ep' /usr/bin/gdb

# Add an ldconfig file because setcap causes the os to remove LD_LIBRARY_PATH
# and other env variables that control dynamic linking
#
RUN mkdir -p /etc/ld.so.conf.d && touch /etc/ld.so.conf.d/mssql.conf
RUN echo -e "# mssql libs\n/opt/mssql/lib" >> /etc/ld.so.conf.d/mssql.conf
RUN ldconfig

USER mssql
CMD ["/opt/mssql/bin/sqlservr"]

 

Note the different sections where the mssql user is created and is used when running the image. So, the new image specification implies running the sqlservr process using this mssql user as shown below:

$ docker exec -ti sql19 top

 

The user process is well identified by its name because it is already defined in the /etc/passwd file within the container namespace:

$ docker exec -ti sql19 cat /etc/passwd | grep mssql
mssql:x:10001:0::/home/mssql:/bin/bash

 

Let’s go ahead and talk about persisting SQL Server database files on external storage. Here, we need to refer to the Microsoft documentation to configure volumes and underlying storage permissions for the scenario we have to deal with.

If you don’t specify any user (and group) when spinning up the container, the sqlservr process will run with the identity of the mssql user created inside the container, as part of the root group. The underlying host filesystem must be configured accordingly (either a user with the same UID = 10001 or the root group GID = 0). Otherwise, chances are you will experience permission issues with the following error message:

SQL Server 2019 will run as non-root by default.
This container is running as user mssql.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
/opt/mssql/bin/sqlservr: Error: Directory [/var/opt/mssql/system/] could not be created.  Errno [13]

 

If you want to run the container as a custom user and group created on your own, you must be aware of the different database file placement scenarios. The first one consists of using the default configuration, with all the SQL Server log, data and transaction log files in the /var/opt/mssql path. In this case, your custom user's UID and GID can be part of the security context of the folder hierarchy on the host as follows:

$ ls -l | grep sqlserver
drwxrwx---. 6 mssql mssql 59 Sep 27 19:08 sqlserver

$ id mssql
uid=1100(mssql) gid=1100(mssql) groups=1100(mssql),100(users)

 

The docker command below specifies the UID and GID of my custom user through the -u parameter:

docker run -d \
 --name sql19 \
 -u $(id -u mssql):$(id -g mssql) \
 -e "MSSQL_PID=Developer" \
 -e "ACCEPT_EULA=Y" \
 -e "SA_PASSWORD=Password1" \
 -e "MSSQL_AGENT_ENABLED=True" \
 -e "MSSQL_LCID=1033" \
 -e "MSSQL_MEMORY_LIMIT_MB=2048" \
 -v "/u00/sqlserver:/var/opt/mssql" \
 -p 1451:1433 -d 2019-latest-non-root

 

Note the username is missing and replaced by the UID of the mssql user created on my own.

This is normal behavior, because my user is not known within the container namespace: there is no record for my user with UID = 1100. The system only knows the mssql user with UID = 10001, as shown below:

I have no name!@e698c3db2180:/$ whoami
whoami: cannot find name for user ID 1100

$ cat /etc/passwd | grep mssql | cut -d":" -f 1,3
mssql:10001

 

Out of curiosity, we may wonder how SQL Server chooses the correct user for the sqlservr process. Indeed, I created two users with the same name but different UIDs, and after some investigation, I think taking a look at the uid_entrypoint definition in the microsoft/mssql-docker GitHub project helps to understand this behavior:

If we don’t specify the UID / GID during the container’s creation, the whoami command will fail and the mssql user’s UID defined in the Dockerfile (cf. USER mssql) will be used.

The second scenario consists of introducing some SQL Server best practices in terms of database file placement. In a previous blog post, I wrote about a possible implementation based on a flexible architecture for SQL Server on Linux, which may also fit containers. In this case, database files are stored outside of the default /var/opt/mssql path, and the non-root container has the restriction that it must run as part of the root group, as mentioned in the Microsoft documentation:

The non-root container has the restriction that it must run as part of the root group unless a volume is mounted to '/var/opt/mssql' that the non-root user can access. The root group doesn’t grant any extra root permissions to the non-root user.

 

Here is my implementation of the flexible architecture template, with the required Linux permissions in my context:

$ ls -ld /u[0-9]*/sql*2/
drwxrwx---. 2 mssql root    6 Sep 24 22:02 /u00/sqlserver2/
drwxrwx---. 2 mssql root 4096 Sep 27 14:20 /u01/sqlserverdata2/
drwxrwx---. 2 mssql root   25 Sep 27 14:20 /u02/sqlserverlog2/
drwxrwx---. 2 mssql root    6 Sep 24 22:04 /u03/sqlservertempdb2/
drwxrwx---. 2 mssql root    6 Sep 27 10:09 /u98/sqlserver2/

 

… with:

  • /u00/sqlserver2 (binaries structure that will contain remaining files in /var/opt/mssql path)
  • /u01/sqlserverdata2 (data files including user, system and tempdb databases)
  • /u02/sqlserverlog2 (transaction log files)
  • /u98/sqlserver2 (database backups)

And accordingly, here are my docker command and parameters to start a SQL Server container sitting on this flexible architecture:

docker run -d \
 --name sql19 \
 -u $(id -u mssql):0 \
 -e "MSSQL_PID=Developer" \
 -e "ACCEPT_EULA=Y" \
 -e "SA_PASSWORD=Password1" \
 -e "MSSQL_AGENT_ENABLED=True" \
 -e "MSSQL_LCID=1033" \
 -e "MSSQL_MEMORY_LIMIT_MB=2048" \
 -e "MSSQL_MASTER_DATA_FILE=/u01/sqlserverdata/master.mdf" \
 -e "MSSQL_MASTER_LOG_FILE=/u02/sqlserverlog/mastlog.ldf" \
 -e "MSSQL_DATA_DIR=/u01/sqlserverdata" \
 -e "MSSQL_LOG_DIR=/u02/sqlserverlog" \
 -e "MSSQL_BACKUP_DIR=/u98/sqlserver" \
 -v "/u00/sqlserver2:/var/opt/mssql" \
 -v "/u01/sqlserverdata2:/u01/sqlserverdata" \
 -v "/u02/sqlserverlog2:/u02/sqlserverlog" \
 -v "/u98/sqlserver2:/u98/sqlserver" \
 -p 1451:1433 -d 2019-latest-non-root

 

The mssql user created on my own from the host (with UID = 1100) is used by the sqlservr process:

The system and user database files are placed according to my specification:

master> create database test;
Commands completed successfully.
Time: 0.956s
master> \n ldd %%
+--------+----------------+---------------------------------+-----------+
| DB     | logical_name   | physical_name                   | size_MB   |
|--------+----------------+---------------------------------+-----------|
| master | master         | /u01/sqlserverdata/master.mdf   | 71        |
| master | mastlog        | /u02/sqlserverlog/mastlog.ldf   | 32        |
| tempdb | tempdev        | /u01/sqlserverdata/tempdb.mdf   | 128       |
| tempdb | templog        | /u01/sqlserverdata/templog.ldf  | 128       |
| tempdb | tempdev2       | /u01/sqlserverdata/tempdb2.ndf  | 128       |
| tempdb | tempdev3       | /u01/sqlserverdata/tempdb3.ndf  | 128       |
| tempdb | tempdev4       | /u01/sqlserverdata/tempdb4.ndf  | 128       |
| model  | modeldev       | /u01/sqlserverdata/model.mdf    | 128       |
| model  | modellog       | /u01/sqlserverdata/modellog.ldf | 128       |
| msdb   | MSDBData       | /u01/sqlserverdata/MSDBData.mdf | 236       |
| msdb   | MSDBLog        | /u01/sqlserverdata/MSDBLog.ldf  | 12        |
| test   | test           | /u01/sqlserverdata/test.mdf     | 128       |
| test   | test_log       | /u02/sqlserverlog/test_log.ldf  | 128       |
+--------+----------------+---------------------------------+-----------+

 

I may correlate the above output with corresponding files persisted on underlying storage according to my flexible architecture specification:

$ sudo ls -lR /u[0-9]*/sqlserver*2/
/u00/sqlserver2/:
total 4
drwxrwx---. 2 mssql root 4096 Sep 28 17:39 log
drwxr-xr-x. 2 mssql root   25 Sep 28 17:39 secrets

/u00/sqlserver2/log:
total 428
-rw-r-----. 1 mssql root  10855 Sep 28 17:39 errorlog
-rw-r-----. 1 mssql root  10856 Sep 28 17:37 errorlog.1
-rw-r-----. 1 mssql root      0 Sep 28 17:37 errorlog.2
-rw-r-----. 1 mssql root  77824 Sep 28 17:37 HkEngineEventFile_0_132141586653320000.xel
-rw-r-----. 1 mssql root  77824 Sep 28 17:39 HkEngineEventFile_0_132141587692350000.xel
-rw-r-----. 1 mssql root   2560 Sep 28 17:39 log_1.trc
-rw-r-----. 1 mssql root   2560 Sep 28 17:37 log.trc
-rw-r-----. 1 mssql root   6746 Sep 28 17:37 sqlagent.1
-rw-r-----. 1 mssql root   6746 Sep 28 17:39 sqlagent.out
-rw-r-----. 1 mssql root    114 Sep 28 17:39 sqlagentstartup.log
-rw-r-----. 1 mssql root 106496 Sep 28 17:37 system_health_0_132141586661720000.xel
-rw-r-----. 1 mssql root 122880 Sep 28 17:41 system_health_0_132141587698940000.xel

/u00/sqlserver2/secrets:
total 4
-rw-------. 1 mssql root 44 Sep 28 17:39 machine-key

/u01/sqlserverdata2/:
total 105220
-rw-r-----. 1 mssql root      256 Sep 27 14:20 Entropy.bin
-rw-r-----. 1 mssql root  4653056 Sep 28 17:39 master.mdf
-rw-r-----. 1 mssql root  8388608 Sep 28 17:39 modellog.ldf
-rw-r-----. 1 mssql root  8388608 Sep 28 17:39 model.mdf
-rw-r-----. 1 mssql root 14024704 Sep 27 14:20 model_msdbdata.mdf
-rw-r-----. 1 mssql root   524288 Sep 27 14:20 model_msdblog.ldf
-rw-r-----. 1 mssql root   524288 Sep 27 14:20 model_replicatedmaster.ldf
-rw-r-----. 1 mssql root  4653056 Sep 27 14:20 model_replicatedmaster.mdf
-rw-r-----. 1 mssql root 15466496 Sep 28 17:39 msdbdata.mdf
-rw-r-----. 1 mssql root   786432 Sep 28 17:39 msdblog.ldf
-rw-r-----. 1 mssql root  8388608 Sep 28 17:39 tempdb2.ndf
-rw-r-----. 1 mssql root  8388608 Sep 28 17:39 tempdb3.ndf
-rw-r-----. 1 mssql root  8388608 Sep 28 17:39 tempdb4.ndf
-rw-r-----. 1 mssql root  8388608 Sep 28 17:39 tempdb.mdf
-rw-r-----. 1 mssql root  8388608 Sep 28 17:39 templog.ldf
-rw-r-----. 1 mssql root  8388608 Sep 28 17:39 test.mdf

/u02/sqlserverlog2/:
total 10240
-rw-r-----. 1 mssql root 2097152 Sep 28 17:39 mastlog.ldf
-rw-r-----. 1 mssql root 8388608 Sep 28 17:39 test_log.ldf

/u03/sqlservertempdb2/:
total 0

/u98/sqlserver2/:
total 0

 

What's next? Because in production your containers will run on top of an orchestrator like Kubernetes, the question is how to implement such privilege restrictions in that context. Kubernetes provides security contexts at different levels, including the pod and container levels. In this blog post's example, I applied the security context at the pod level within the pod specification.

Let’s set the context. Here the picture of my K8s environment:

$ kubectl get nodes
NAME                     STATUS   ROLES    AGE   VERSION
k8m.dbi-services.test    Ready    master   97d   v1.14.1
k8n1.dbi-services.test   Ready    <none>   97d   v1.14.1
k8n2.dbi-services.test   Ready    <none>   97d   v1.14.1

 

I used the new local-storage Storage class (available with K8s v.1.14+):

$ kubectl get sc
NAME            PROVISIONER                    AGE
local-storage   kubernetes.io/no-provisioner   4d

$ kubectl describe sc local-storage
Name:                  local-storage
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           kubernetes.io/no-provisioner
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>

 

I configured a persistent volume based on this local-storage class, pointing to /mnt/localstorage on my k8n1 node. The access mode and the Retain reclaim policy are configured to meet best practices for databases.

$ cat StoragePV.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-local-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  storageClassName: local-storage
  local:
    path: /mnt/localstorage
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8n1.dbi-services.test

 

For the sake of simplicity, I applied the default configuration, with all SQL Server related files stored in /var/opt/mssql. I configured the underlying storage and folder permissions accordingly, with my custom mssql user (UID = 10001) and group (GID = 10001) created on the k8n1 host. Note that the UID matches that of the mssql user created within the container.

$ hostname
k8n1.dbi-services.test

$ id mssql
uid=10001(mssql) gid=10001(mssql) groups=10001(mssql)

$ ls -ld /mnt/localstorage/
drwxrwx--- 6 mssql mssql 59 Sep 26 20:57 /mnt/localstorage/

 

My deployment file is as follows. It includes the security context that specifies a non-root configuration with my custom user's UID / GID created previously (runAsUser and runAsGroup parameters):

$ cat ReplicaSet.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mssql-deployment-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mssql-2
    spec:
      securityContext:
        runAsUser: 10001
        runAsGroup: 10001
      terminationGracePeriodSeconds: 10
      containers:
      - name: mssql-2
        image: trow.kube-public:31000/2019-latest-non-root
        ports:
        - containerPort: 1433
        env:
        - name: MSSQL_PID
          value: "Developer"
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: sql-secrets
              key: sapassword
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: mssql-data-2

 

Obviously, if you don’t meet the correct security permissions on the underlying persistent volume, you will get an error when provisioning the MSSQL pod because the sqlservr process will not get the privileges to create or to access the SQL Server related files as shown below:

$ kubectl get pod
NAME                                 READY   STATUS   RESTARTS   AGE
mssql-deployment-2-8b4d7f7b7-x4x8w   0/1     Error    2          30s

$ kubectl logs mssql-deployment-2-8b4d7f7b7-x4x8w
SQL Server 2019 will run as non-root by default.
This container is running as user mssql.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
/opt/mssql/bin/sqlservr: Error: Directory [/var/opt/mssql/system/] could not be created.  Errno [13]

 

If everything is configured correctly, your container should run and interact correctly with the corresponding persistent volume, in the security context defined in your YAML specification:

All this applies to SQL Server 2017 as well.

See you!

Cet article Using non-root SQL Server containers on Docker and K8s est apparu en premier sur Blog dbi services.

Red Hat Enterprise Linux 8 – Stratis

Yann Neuhaus - Fri, 2019-09-27 09:46

The Initial Release (8.0.0) of Red Hat Enterprise Linux 8 is available since May 2019.
I’ve already blogged about one of its new features (AppStream) during the Beta. In this post I will present Stratis, a new local storage-management solution available in RHEL 8.

Introduction

LVM, fdisk, ext*, XFS,… there are plenty of terms, tools and technologies available for managing disks and file systems on a Linux server. Generally, setting up the initial storage configuration is not so difficult, but when it comes to managing this storage (which most of the time means extending it), things can get a bit more complicated.
The goal of Stratis is to provide an easy way to work on local storage, from the initial setup to the usage of more advanced features.
Like Btrfs or ZFS, Stratis is a “volume-managing filesystem” (VMF). A VMF’s particularity is that it merges the volume-management and filesystem layers into one, using the concept of a “pool” of storage created from one or more block devices.

Stratis is implemented as a userspace daemon that configures and monitors existing components:
[root@rhel8 ~]# ps -ef | grep stratis
root 591 1 0 15:31 ? 00:00:00 /usr/libexec/stratisd --debug
[root@rhel8 ~]#

To interact with the daemon, a CLI is available (stratis-cli):
[root@rhel8 ~]# stratis --help
usage: stratis [-h] [--version] [--propagate] {pool,blockdev,filesystem,fs,daemon} ...
Stratis Storage Manager
optional arguments:
-h, --help       show this help message and exit
--version        show program's version number and exit
--propagate      Allow exceptions to propagate
subcommands:
{pool,blockdev,filesystem,fs,daemon}
pool             Perform General Pool Actions
blockdev         Commands related to block devices that make up the pool
filesystem (fs)  Commands related to filesystems allocated from a pool
daemon           Stratis daemon information
[root@rhel8 ~]#

Among the Stratis features we can mention :
> Thin provisioning
> Filesystem snapshots
> Data integrity check
> Data caching (cache tier)
> Data redundancy (raid1, raid5, raid6 or raid10)
> Encryption

Stratis is only 2 years old and the current version is 1.0.3. Therefore, certain features are not yet available, such as redundancy:
[root@rhel8 ~]# stratis daemon redundancy
NONE: 0
[root@rhel8 ~]#

Architecture

Stratis architecture is composed of 3 layers:
Block device
A blockdev is the storage used to make up the pool. That could be :
> Hard drives / SSDs
> iSCSI
> mdraid
> Device Mapper Multipath
> …

Pool
A pool is a set of Block devices.

Filesystem
Filesystems are created from the pool. Stratis supports up to 2^24 filesystems per pool. Currently you can only create XFS filesystems on top of a pool.

Let’s try…

I have a new empty 5G disk on my system. This is the blockdev I want to use :
[root@rhel8 ~]# lsblk /dev/sdb
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb    8:16   0   5G  0 disk
[root@rhel8 ~]#

I create a pool composed of this single blockdev…
[root@rhel8 ~]# stratis pool create pool01 /dev/sdb

…and verify :
[root@rhel8 ~]# stratis pool list
Name    Total Physical Size Total Physical Used
pool01                5 GiB              52 MiB
[root@rhel8 ~]#

On top of this pool I create an XFS filesystem called “data”…
[root@rhel8 ~]# stratis fs create pool01 data
[root@rhel8 ~]# stratis fs list
Pool Name   Name        Used       Created             Device                      UUID
pool01      data        546 MiB   Sep 04 2019 16:50   /stratis/pool01/data        dc08f87a2e5a413d843f08728060a890
[root@rhel8 ~]#

…and mount it on /data directory :
[root@rhel8 ~]# mkdir /data
[root@rhel8 ~]# mount /stratis/pool01/data /data
[root@rhel8 ~]# df -h /data
Filesystem                                                                                      Size Used Avail Use% Mounted on
/dev/mapper/stratis-1-8fccad302b854fb7936d996f6fdc298c-thin-fs-f3b16f169e8645f6ac1d121929dbb02e 1.0T 7.2G 1017G 1%   /data
[root@rhel8 ~]#

Here the ‘df’ command reports the used and free sizes as seen and reported by XFS. In fact, this is the thin device:
[root@rhel8 ~]# lsblk /dev/mapper/stratis-1-8fccad302b854fb7936d996f6fdc298c-thin-fs-f3b16f169e8645f6ac1d121929dbb02e
NAME                                                                                           MAJ:MIN  RM  SIZE  RO  TYPE     MOUNTPOINT
/dev/mapper/stratis-1-8fccad302b854fb7936d996f6fdc298c-thin-fs-f3b16f169e8645f6ac1d121929dbb02e  253:7   0    1T   0  stratis  /data
[root@rhel8 ~]#

These figures are not very meaningful: due to thin provisioning, the real storage usage is lower than reported, and Stratis will automatically grow the filesystem when it nears the currently allocated XFS capacity.
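
Also note that the mount performed above is not persistent across reboots. A minimal /etc/fstab sketch, assuming the dashed form of the UUID listed above (check it with blkid on your system) and using the x-systemd.requires option so that stratisd is started before the mount is attempted:

UUID=dc08f87a-2e5a-413d-843f-08728060a890  /data  xfs  defaults,x-systemd.requires=stratisd.service  0 0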

Let’s extend the pool with a new disk of 1G…
[root@rhel8 ~]# lsblk /dev/sdc
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc    8:32   0   1G  0 disk
[root@rhel8 ~]#
[root@rhel8 ~]# stratis pool add-data pool01 /dev/sdc

…and check:
[root@rhel8 ~]# stratis blockdev
Pool Name  Device Node  Physical Size   State  Tier
pool01     /dev/sdb              5 GiB  In-use  Data
pool01     /dev/sdc              1 GiB  In-use  Data
[root@rhel8 pool01]# stratis pool list
Name   Total Physical Size    Total Physical Used
pool01                6 GiB                602 MiB
[root@rhel8 ~]#

A nice feature of Stratis is the possibility to duplicate a filesystem with a snapshot.
For this test, I create a new file on the “data” filesystem:
[root@rhel8 ~]# touch /data/new_file
[root@rhel8 ~]# ls -l /data
total 0
-rw-r--r--. 1 root root 0 Sep 4 20:43 new_file
[root@rhel8 ~]#

The operation is straightforward:
[root@rhel8 ~]# stratis fs snapshot pool01 data data_snap
[root@rhel8 ~]#

You can notice that Stratis doesn’t distinguish between a filesystem and a snapshot filesystem. They are the same kind of “object”:
[root@rhel8 ~]# stratis fs list
Pool Name   Name        Used       Created             Device                      UUID
pool01      data        546 MiB   Sep 04 2019 16:50   /stratis/pool01/data        dc08f87a2e5a413d843f08728060a890
pool01      data_snap   546 MiB   Sep 04 2019 16:57   /stratis/pool01/data_snap   a2c45e9a15e74664bab5de992fa884f7
[root@rhel8 ~]#

I can now mount the new filesystem…
[root@rhel8 ~]# mkdir /data_snap
[root@rhel8 ~]# mount /stratis/pool01/data_snap /data_snap
[root@rhel8 ~]# df -h /data_snap
Filesystem                                                                                       Size  Used  Avail  Use%  Mounted on
/dev/mapper/stratis-1-8fccad302b854fb7936d996f6fdc298c-thin-fs-a2c45e9a15e74664bab5de992fa884f7  1.0T  7.2G  1017G  1%    /data_snap
[root@rhel8 ~]#

…and check that my test file is here:
[root@rhel8 ~]# ls -l /data_snap
total 0
-rw-r--r--. 1 root root 0 Sep 4 20:43 new_file
[root@rhel8 ~]#

Nice! But… can I snapshot a filesystem “online”, meaning while data are being written to it?
Let’s create another snapshot from one session, while a second session is writing on the /data filesystem.
From session 1:
[root@rhel8 ~]# stratis fs snapshot pool01 data data_snap2

And from session 2, at the same time:
[root@rhel8 ~]# dd if=/dev/zero of=/data/bigfile.txt bs=4k iflag=fullblock,count_bytes count=4G

Once done, the new filesystem is present…
[root@rhel8 ~]# stratis fs list
Pool Name   Name         Used       Created             Device                       UUID
pool01      data_snap2   5.11 GiB   Sep 27 2019 11:19   /stratis/pool01/data_snap2   82b649724a0b45a78ef7092762378ad8

…and I can mount it:
[root@rhel8 ~]# mkdir /data_snap2
[root@rhel8 ~]# mount /stratis/pool01/data_snap2 /data_snap2
[root@rhel8 ~]#

But the file inside seems to have changed (corruption):
[root@rhel8 ~]# md5sum /data/bigfile.txt /data_snap2/bigfile.txt
c9a5a6878d97b48cc965c1e41859f034 /data/bigfile.txt
cde91bbaa4b3355bc04f611405ae4430 /data_snap2/bigfile.txt
[root@rhel8 ~]#

So, the answer is no. Stratis is not able to duplicate a filesystem online (at least for the moment). Thus, I would strongly recommend unmounting the filesystem before creating a snapshot.
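
A minimal sketch of such a sequence, reusing the pool and filesystem names from above (data_safe_snap is a hypothetical snapshot name):

[root@rhel8 ~]# umount /data
[root@rhel8 ~]# stratis fs snapshot pool01 data data_safe_snap
[root@rhel8 ~]# mount /stratis/pool01/data /data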

Conclusion

Stratis is an easy-to-use tool for managing local storage on a RHEL 8 server. But due to its immaturity, I would not recommend using it in a production environment yet. Moreover, some interesting features like RAID management or data integrity checking are not available for the moment, but I’m quite sure that the tool will evolve quickly!

If you want to know more, all is here.
Enjoy testing Stratis and stay tuned to discover its evolution…

The article Red Hat Enterprise Linux 8 – Stratis appeared first on Blog dbi services.

Not All Endpoints Registered

Senthil Rajendran - Fri, 2019-09-27 02:31

Not All Endpoints Registered

crsctl stat res -t

--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_LSTNSOLTP.lsnr
     1        ONLINE  INTERMEDIATE orcl1           Not All Endpoints Registered,STABLE
ora.LISTENER_LSTNSOLTP.lsnr
     1        ONLINE  ONLINE       orcl2           STABLE
ora.LISTENER_LSTNSOLTP.lsnr
     1        ONLINE  ONLINE       orcl3           STABLE

"Not All Endpoints Registered"

To fix the problem:
- compare the listener.ora files of all the nodes (a quick sketch follows)
- find the differences and sync them up
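
A quick way to spot the differences could look like this (a sketch; node names and the Grid home are taken from the outputs below and may differ in your environment):

$ for node in orcl1 orcl2 orcl3; do
>   ssh grid@${node} md5sum /u01/app/12.1.0.2/grid/network/admin/listener.ora
> done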

If you find static entries, make sure you stop the running listener with “lsnrctl stop <listener_name>”; otherwise you might end up with duplicate processes.

$  ps -ef|grep -i LSTNSOLTP
grid     312080      1  0 03:17 ?        00:00:00 /u01/app/12.1.0.2/grid/bin/tnslsnr LSTNSOLTP -no_crs_notify -inherit
$

-- found static entry in listener.ora

-- fixed listener.ora by removing the static entry

-- used crsctl to start the listener

$ ps -ef|grep -i LSTNSOLTP
grid     166779      1  0 Sep18 ?        00:14:53 /u01/app/12.1.0.2/grid/bin/tnslsnr LSTNSOLTP -no_crs_notify -inherit
grid     267334      1  0 03:07 ?        00:00:00 /u01/app/12.1.0.2/grid/bin/tnslsnr LSTNSOLTP -no_crs_notify -inherit
$

The correct way to approach this problem:


$  ps -ef|grep -i LSTNSOLTP
grid     312080      1  0 03:17 ?        00:00:00 /u01/app/12.1.0.2/grid/bin/tnslsnr LSTNSOLTP -no_crs_notify -inherit
$

$ lsnrctl stop LSTNSOLTP


$  ps -ef|grep -i LSTNSOLTP
$

Make sure no processes are running

-- fix listener.ora by removing the static entry

-- use crsctl to start the listener
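
As a sketch, the crsctl call could look like this (resource and node names taken from the status output shown earlier):

$ crsctl start resource ora.LISTENER_LSTNSOLTP.lsnr -n orcl1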

$ ps -ef|grep -i LSTNSOLTP
grid     267334      1  0 03:07 ?        00:00:00 /u01/app/12.1.0.2/grid/bin/tnslsnr LSTNSOLTP -no_crs_notify -inherit
$


crsctl stat res -t

--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_LSTNSOLTP.lsnr
     1        ONLINE  ONLINE       orcl1           STABLE
ora.LISTENER_LSTNSOLTP.lsnr
     1        ONLINE  ONLINE       orcl2           STABLE
ora.LISTENER_LSTNSOLTP.lsnr
     1        ONLINE  ONLINE       orcl3           STABLE






Pivot with list of rows

Tom Kyte - Thu, 2019-09-26 06:46
We have a table which contains db_name and usernames. In the output we need the list of users per DB, i.e. the number of columns will be equal to the number of distinct db_name values. Sample output format: DB1 = USER1, USER2, USER3; DB2 = USER4, USER5. Database version:...
Categories: DBA Blogs

confuse at the order of execution plan table

Tom Kyte - Thu, 2019-09-26 06:46
As we were told that "The execution order in EXPLAIN PLAN output begins with the line that is the furthest indented to the right. The next step is the parent of that line. If two lines are indented equally, then the top line is executed first." howe...
Categories: DBA Blogs

Converting data types in where clause

Tom Kyte - Thu, 2019-09-26 06:46
Hi Tom, My question is regarding when a query does not have the right datatype in the where clause. Example: -- Create table: CREATE TABLE mytable ( mynumber varchar2(20), primary key(mynumber)); -- Insert some rows ...
Categories: DBA Blogs

Resetting a live sequence

Tom Kyte - Thu, 2019-09-26 06:46
A sequence was about to finish, so I had to make it bigger. I work on database 11.2 so I had to use a workaround described here (https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1119633817597). The procedure shown in LiveSQL was ru...
Categories: DBA Blogs

Strange behavior in analytic functions with partitions

Tom Kyte - Thu, 2019-09-26 06:46
Hello, I have met strange behavior that I can't understand. I have this table: CREATE TABLE test_table (id NUMBER(10,0) NOT NULL, register_date DATE DEFAULT sysdate NOT NULL, row...
Categories: DBA Blogs

Issue in generating Custom Reference ID and Mapping with Form's field

Tom Kyte - Thu, 2019-09-26 06:46
Building an app for blocking a demo calendar for particular product setup. I wanted to create a custom reference ID ( CURRENT_MONTH-CURRENT_YEAR-SEQUENCE like SEPT-2019-003) which will be the Primary Key (Column BOOKING_REF) for the table (table ...
Categories: DBA Blogs

Inserting without a full list for field names

Tom Kyte - Thu, 2019-09-26 06:46
We have an issue when we perform an insert like this: INSERT INTO STS_RESP_LOG(STS_REQ_LOG_SYSTEM_ID, HSTRY_FLG, SNGL_STS, FRQNCY, CRT_DATE) VALUES (?, ?, ?, ?, sysdate); ?...
Categories: DBA Blogs

RMAN MAXPIECESIZE VS SECTION SIZE

Tom Kyte - Thu, 2019-09-26 06:46
Hello, I've made some tests to try to parallelize as well as possible the backup of a big database, and I wanted to know if it's possible to parallelize the backup of a backup piece (multiple backupsets) across multiple channels with MAXPIECESIZE, because our SBT media is ...
Categories: DBA Blogs

Managing Licenses with AWS License Manager

Yann Neuhaus - Wed, 2019-09-25 06:51
Introduction

Computing environments have become more and more agile over the last years. Companies need solutions that help people quickly set up new resources, start and stop them, scale them according to the need and, finally, remove them. In such environments, it can be tricky to track license compliance when resources change on an hourly basis.

Having a look at the AWS services, I saw that AWS provides a license managing tool named “AWS License Manager”. I took a few minutes in order to:

  • Understand which resources this service is able to monitor
  • How it works
  • Test it with an on-premise Linux server executing an oracle database
License Manager Service

The first step in order to use License Manager is to select it in the list of AWS Services.

AWS Services List

After having clicked on AWS License Manager, the AWS License Manager window will appear.

"<yoastmark

Now, we simply have to create a license configuration with the required license terms according to the software vendor. You can set up different kinds of metrics, such as:

  • vCPUs
  • Cores
  • Sockets
  • Instances

License Manager also provides the possibility to enforce a license limit, meaning that it prevents further license usage once the available licenses are exhausted.
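
The same configuration could presumably also be created with the AWS CLI instead of the console. A minimal sketch (the configuration name is just an example):

$ aws license-manager create-license-configuration \
    --name "oracle-db-vcpu" \
    --license-counting-type vCPU \
    --license-count 1 \
    --license-count-hard-limit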

AWS Create License configuration

In the context of on-premises license monitoring, it is important to notice that the Socket and Core license types are not accepted. Therefore, in this example I used vCPUs.

"<yoastmark

Error while trying to associate Socket License to an on-premise host

AWS System Manager

Once the license configuration is created, we have to use another AWS service, AWS Systems Manager. This service allows you to view and control your infrastructure on AWS: not only your Amazon EC2 instances, but also on-premises servers and virtual machines (including VMs in other cloud environments). Some Systems Manager capabilities are not free; however, in the context of this example everything is free.

 

AWS System Manager Agent (SSM Agent)

In order to benefit from AWS Systems Manager, we need to install the AWS Systems Manager Agent (SSM Agent) on our on-premises host. SSM Agent is Amazon software that can be installed and configured on an Amazon EC2 instance, an on-premises server, or a virtual machine (VM) and provides a solution to update, manage, and configure these resources. SSM Agent is installed by default on instances created from Windows Server 2016 and Windows Server 2019, Amazon Linux, and Ubuntu Server AMIs. However, if you are running an on-premises server, you need to install it yourself. The process is really straightforward, as presented below.

[root@vmrefdba01 ~]# mkdir /tmp/ssm
[root@vmrefdba01 ~]# curl https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm -o /tmp/ssm/amazon-ssm-agent.rpm
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 18.9M  100 18.9M    0     0  3325k      0  0:00:05  0:00:05 --:--:-- 4368k
[root@vmrefdba01 ~]# sudo yum install -y /tmp/ssm/amazon-ssm-agent.rpm
Loaded plugins: refresh-packagekit, ulninfo
Setting up Install Process
Examining /tmp/ssm/amazon-ssm-agent.rpm: amazon-ssm-agent-2.3.707.0-1.x86_64
Marking /tmp/ssm/amazon-ssm-agent.rpm to be installed
public_ol6_UEK_latest                                    | 2.5 kB     00:00
public_ol6_UEK_latest/primary_db                         |  64 MB     00:07
public_ol6_latest                                        | 2.7 kB     00:00
public_ol6_latest/primary_db                             |  18 MB     00:07
Resolving Dependencies
--> Running transaction check
---> Package amazon-ssm-agent.x86_64 0:2.3.707.0-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

======================================================================================================================================
 Package                            Arch                     Version                        Repository                           Size
======================================================================================================================================
Installing:
 amazon-ssm-agent                   x86_64                   2.3.707.0-1                    /amazon-ssm-agent                    61 M

Transaction Summary
======================================================================================================================================
Install       1 Package(s)

Total size: 61 M
Installed size: 61 M
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : amazon-ssm-agent-2.3.707.0-1.x86_64                                                                                1/1
amazon-ssm-agent start/running, process 3896
  Verifying  : amazon-ssm-agent-2.3.707.0-1.x86_64                                                                                1/1

Installed:
  amazon-ssm-agent.x86_64 0:2.3.707.0-1

Complete!
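
Once installed, the agent status can be checked. A sketch for this Oracle Linux 6 host, which uses Upstart (on systemd-based distributions, “systemctl status amazon-ssm-agent” would be the equivalent); it should report start/running, as in the install log above:

[root@vmrefdba01 ~]# status amazon-ssm-agent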

 

Creating an activation

Once the agent is installed, we have to create a new “Activation” in AWS Systems Manager by clicking on “Create activation”. At the end of the creation you will get an Activation Code and an Activation ID (in the green field below). You have to keep this information for the agent configuration.

AWS Systems Manager Activation
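
Alternatively, the activation could presumably be created with the AWS CLI. A sketch in which the IAM service role name is a hypothetical placeholder (the role must exist and be trusted by ssm.amazonaws.com):

$ aws ssm create-activation \
    --default-instance-name "on-prem-dba" \
    --iam-role "service-role/AmazonSSMManagedInstanceRole" \
    --registration-limit 1 \
    --region us-east-2

The command returns the Activation ID and Activation Code needed for the agent configuration below.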

Agent Configuration

In order to register your on-premises instance with AWS, you simply have to execute the following command with the Activation Code and Activation ID provided by AWS Systems Manager:

sudo amazon-ssm-agent -register -code "<cf Activation Code>" -id "<cf Activation ID>" -region "us-east-2"

2019-09-19 13:53:05 INFO Successfully registered the instance with AWS SSM using Managed instance-id: mi-0756a9f0dc25be3cd
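
From the AWS side, the registration can be double-checked with the CLI. A sketch using the managed instance ID returned above:

$ aws ssm describe-instance-information \
    --filters "Key=InstanceIds,Values=mi-0756a9f0dc25be3cd" \
    --region us-east-2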

 

Once registered, the managed instance should appear in AWS Systems Manager as presented below.

AWS Systems Manager – Managed Instances

The platform type is detected, as well as the kernel version, IP address and computer name. AWS Systems Manager also provides a package inventory and many other kinds of inventory, such as network inventory, file inventory, and so on.

AWS Systems Manager – Application Inventory

 

Association between License Configuration and Resource ID

We now have to make the link between the managed instance (resource) and the license configuration. The goal, of course, is to define which license configuration will be applied to which resource. To proceed, we go into AWS License Manager and select “Search Inventory” in the menu. Then we simply have to select the resource and click on “Associate license configuration”.

"<yoastmark

AWS License Manager – Search Inventory

The following window will appear, allowing you to define which license configuration matches which resource.

"<yoastmark

Having a look at the AWS License Manager dashboard, you can see that 1 out of 1 licenses is consumed, since I dedicated one vCPU to my virtual machine and assigned a 1-vCPU license to this instance.

"<yoastmark

AWS License Manager – Dashboard

Core Messages
  • AWS License Manager offers more functionalities for EC2 instances than for on-premises servers.
  • AWS License Manager offers functionalities to monitor sockets, vCPUs, cores and instances.
  • AWS License Manager definitively helps to manage licenses but doesn’t fit all requirements and license models.
  • AWS Systems Manager is a powerful tool providing several functionalities.
Strengths
  • AWS License Manager is free.
  • AWS License Manager offers possibilities to monitor on-premises resources.
  • AWS License Manager provides a solution to prevent an instance from running if license compliance is not met.
  • AWS License Manager and AWS Systems Manager are straightforward to install and configure.
  • AWS License Manager and AWS Systems Manager offer good documentation.
  • AWS Systems Manager offers many free functionalities (Patch Manager, Session Manager, Insights Dashboard, and so on).
  • AWS Systems Manager offers many functionalities and is the basis of several other AWS tools, such as AWS Config, which allows you to monitor instance compliance.
Weaknesses
  • AWS License Manager is not able by default to monitor option usage, such as the options of an Oracle database (Partitioning, Active Data Guard, and so on).
  • AWS License Manager is not able to calculate Oracle Processors, i.e. to take core factors into consideration.
  • AWS Systems Manager is not able to monitor sockets or cores on on-premises resources, only vCPUs.

The article Managing Licenses with AWS License Manager appeared first on Blog dbi services.

MobaXterm 12.2

Tim Hall - Wed, 2019-09-25 02:06

In another “the rest of the world ceases to exist in the lead up to OpenWorld” moment, I missed the release of MobaXterm 12.2.

The downloads and changelog are in the usual places.

For Windows users who, like me, spend most of the day connecting to machines via SSH, this is the best tool I’ve found.

Cheers

Tim…

MobaXterm 12.2 was first posted on September 25, 2019 at 8:06 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

PFCLScan - Version 3.0

Pete Finnigan - Tue, 2019-09-24 09:26
We are very excited to announce that we are currently working on version 3.0 of PFCLScan, our flagship database security scanner for the Oracle database. We will be ready for sale in September and this development is going really....[Read More]

Posted by Pete On 11/07/19 At 03:33 PM

Categories: Security Blogs
