Feed aggregator

CloudBees DevOps Playground – Hands On with JenkinsX

Yann Neuhaus - Mon, 2019-07-29 16:32

Last week, we had the chance to attend the CloudBees DevOps Playground in London. The event was a presentation and a hands-on session on Jenkins X delivered by one of the most popular people at CloudBees, Gareth Evans.

Before taking an interest in Jenkins X, we had focused most of our time on the Docker and Kubernetes side. Over the last months we have greatly enhanced our skills in administering Kubernetes clusters and deploying applications, especially the Documentum stack as well as WebLogic.

Jenkins X is quite a new technology in the landscape of automated deployment, and we had difficulties finding a workshop/training related to Jenkins X administration and usage. So we decided to go to London for this CloudBees DevOps hands-on.

Working on middleware infrastructures, between system engineers and application teams, Jenkins X makes complete sense for us to automate the creation of Kubernetes infrastructure, both in terms of clusters and of application deployment.

What’s Jenkins X?

Basically, Jenkins X automates the whole development process end to end for containerized applications based on Docker and Kubernetes.
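To make that concrete, here is a minimal, hypothetical session with the jx command-line client (the application name is made up and some flags may vary between releases):

# create a new application from a buildpack, with Git repository and pipeline wired up
jx create quickstart

# watch the pipeline activity triggered by the initial commit (hypothetical app name)
jx get activities --filter my-demo-app --watch

# promote a given build from staging to the production environment
jx promote my-demo-app --version 1.0.1 --env production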

Overview of Jenkins X:

  • Jenkins X provides an automated CI/CD solution for Kubernetes
  • Buildpacks to quickly create new applications
  • Uses GitOps to manage promotion between Environments
  • Creates Preview Environments on Pull Requests
  • Provides control via ChatOps and feedback on Pull Requests
  • Improves developers’ productivity
  • It is open source
  • Microservices architecture
  • Designed for extension
  • Relies on k8s CRDs

JX Topologies:

Jenkins X can work in 2 modes: Static and Serverless.

Cloud-Native approach:
Our goal will be to use Jenkins X to automate the deployment of containerized applications on a Kubernetes cluster.
Jenkins X enables real collaboration between system engineers and application teams, with a focus on making development teams productive through automation and DevOps best practices.

We will achieve the automation of CI/CD pipelines using Jenkins X as follows.

This is how Jenkins X works in the big picture; we will see later how to install JX with the different methods, in the cloud or on-premises, and how to build CI/CD pipelines.


Alfresco Clustering – Basis & Architectures

Yann Neuhaus - Mon, 2019-07-29 01:00

This blog will be the first of a series on Alfresco HA/Clustering topics. It's been too long since I last posted anything related to Alfresco, so I thought I would write a few blogs about my experience with setting up more or less complex HA/Clustering infrastructures. So, let's start this first part with an introduction to Alfresco HA/Clustering.

If you want to set up an HA/Cluster environment, you will first have to think about where you want to go exactly. Alfresco is composed of several components, so "what do you want to achieve exactly?" would probably be the first question to ask.

Alfresco offers a lot of possibilities; you can more or less do whatever you want. That's really great, but it also means that you should plan what you want to do first. Do you just want a simple HA architecture for Share+Repository where you can live without Solr for a few minutes/hours (in case of issues), or do you absolutely want all components to be always available? Or maybe you want an HA architecture which is better suited for high throughput? Obviously, there might be some cost considerations to take into account, linked to the resources but also to the licenses: the Alfresco Clustering license itself but also the Index Engine license if you go for separate Solr Servers.

That's what you need to define first, to avoid losing time changing configurations and adding more components into the picture later. Alternatively (and that's something I will try to cover as much as I can), it's also possible to set up an environment which will allow you to add more components (at least some of them…) as needed without having to change your HA/Clustering configuration, if you are doing it right from the start and if you don't change the architecture itself too much.

I mentioned earlier the components of Alfresco (Alfresco Content Services, not the company); these are the ones we usually talk about:

  • *Front-end (Apache HTTPD, Nginx, …)
  • *ActiveMQ
  • Alfresco PDF Renderer
  • Database
  • File System
  • ImageMagick
  • Java
  • LibreOffice
  • *Share (Tomcat)
  • *Repository (Tomcat)
  • *Solr6 (Jetty)

 

In this series of blogs, I won't talk about the Alfresco PDF Renderer, ImageMagick & Java because these are just simple binaries/executables that need to be available from the Repository side. For LibreOffice, it's usually Alfresco that manages it directly (multi-processes, restart on crash, and so on). It wouldn't really make sense to talk about these in blogs related to Clustering. I will also disregard the Database and File System since they are usually out of my scope. The Database is usually installed & managed by my colleagues who are DBAs; they are much better at that than me. That leaves us with all components marked with an asterisk (*). I will update this list with links to the different blogs.

Before jumping into the first component, which will be the subject of the next blog, I wanted to go through some possible architectures for Alfresco. There are a lot of schemas available on the internet but it's often the same architecture that is presented, so I thought I would take some time to represent, in my own way, what Alfresco's architecture could look like.

In the schemas below, I represented the main components: Front-end, Share, Repository, Solr, Database & File System (Data) as little boxes. As mentioned previously, I won't talk about the Database & File System, so I just represented them once to show the communications with them, but what is behind their boxes can be anything (with HA/Clustering or not). The arrows represent the way communications are initiated: an arrow in a single direction "A->B" means that B never initiates a communication with A. Boxes that are glued together represent components installed on the same host (a physical server, a VM, a container or whatever).

 

Architecture N°1: This is the simplest architecture for Alfresco. As you can see, it's not an HA/Clustering architecture, but I decided to start small. I added a Front-end (even if it's not mandatory) because it's a best practice and I would not install Alfresco without it. Nothing specific to say about this architecture, it's just simple.

 

Architecture N°2: The first thing to do if you have the simplest architecture in place (N°1) and you start seeing some resource contention is to split the components, and more specifically to install Solr separately. This should really be the minimal architecture to use, whenever possible.

 

Architecture N°3: This is the first HA/Clustering architecture. It starts small, as you can see, with just two nodes for each Front-end/Share/Repository stack and a Load Balancer to dispatch the load on each side for an Active/Active solution. The dotted grey lines represent the Clustering communications. In this architecture, there is therefore one Cluster for Share and another one for the Repository layer. The Front-end doesn't need Clustering since it just forwards the communications; the session itself lives on the Tomcat (Share/Repository) side. There is only one Solr node and therefore both Repository boxes will communicate with the Solr node (through the Front-end or not). Between the Repository and Solr, there is one bidirectional arrow and one unidirectional arrow. That's because both Repository boxes will initiate searches, but Solr will track and index new content against only one Repository: this isn't optimal.

 

Architecture N°4: To solve this small issue with Solr tracking, we can add a second Load Balancer in between so that the Solr tracking can target any Repository node. The first bottleneck you will encounter in Alfresco is usually the Repository, because a lot of things happen in the background at that layer. Therefore, this architecture is usually the simplest HA/Clustering solution that you will want to set up.
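For the Front-end piece, a minimal sketch of what the load balancing could look like with Apache HTTPD, mod_proxy_balancer and sticky sessions is shown below (the host names are made up, and the route values are assumed to match the jvmRoute of each Tomcat):

<Proxy balancer://sharecluster>
    # one member per Share node, route must match the Tomcat jvmRoute
    BalancerMember http://share_n1:8080 route=share_n1
    BalancerMember http://share_n2:8080 route=share_n2
    # keep a user's session on the node that created it
    ProxySet stickysession=JSESSIONID|jsessionid
</Proxy>
ProxyPass /share balancer://sharecluster/share
ProxyPassReverse /share balancer://sharecluster/share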

 

Architecture N°5: If you are facing some performance issues with Solr or if you want all components to be in HA, then you will have to duplicate Solr as well. Between the two Solr nodes, I put a Clustering link; that's in case you are using Solr Sharding. If you are using the default cores (alfresco and archive), then there is no communication between distinct Solr nodes. If you are using Solr Sharding and you want an HA architecture, then you will have the same shards on both Solr nodes, and in this case there will be communications between the Solr nodes. It's not really Clustering so to speak, it's just how Solr Sharding works, but I still used the same representation.

 

Architecture N°6: As mentioned previously (for N°4), the Repository is usually the bottleneck. To reduce the load on this layer, it is possible to do several things. The first possibility is to install another Repository and dedicate it to the Solr Tracking. As you can see above, the communications aren't bidirectional anymore but only unidirectional. Searches will come from the two Repository nodes that are in the Cluster, and Solr Tracking will use the separate/dedicated Repository. This third Repository can then be set in read-only, the jobs and services can be disabled, and the Clustering can be disabled as well (so it uses the same DB but isn't part of the Clustering communications, because it doesn't have to be), and so on. I put this third Repository as a standalone box, but obviously you can install it alongside one of the two Solr nodes.
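As a sketch, such a dedicated tracking Repository could be configured with properties along these lines in its alfresco-global.properties (double-check the property names against your Alfresco version):

# alfresco-global.properties of the dedicated tracking Repository (sketch)
# make the node read-only
server.transaction.allow-writes=false
# keep it out of the Clustering communications
alfresco.cluster.enabled=false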

 

Architecture N°7: The next step can be to add another read-only Repository and put these two nodes side by side with the Solr nodes. The point is to have only localhost communications for the Solr Tracking, which is therefore a little bit easier to secure.

 

Architecture N°8: The previous architectures (N°6 & N°7) introduced a new single point of failure, so to fix this there is only one way: add a new Load Balancer between Solr and the Repository for the tracking. Behind the Load Balancer, there are two solutions: keep a fourth Repository which is also in read-only, or use a fallback to the Repository node1/node2 in case the read-only Repository (node3) isn't available. For that purpose, the Load Balancer should be, respectively, Active/Active or Active/Passive. As you can see, I chose to represent the first one.

 

These were a few possible architectures. You can obviously add more nodes to handle more load. There are many other solutions, so have fun designing the best one according to your requirements.

 


Oracle 19c Automatic Indexing: Configuration (All I Need)

Richard Foote - Mon, 2019-07-29 00:12
In this post, I’ll discuss how to configure the new Automatic Indexing capability introduced in Oracle Database 19c. The intent of Oracle here is to make the configuration of Automatic Indexing as simplistic as possible, with as few levers for DBAs to potentially stuff up as possible. The ultimate goal would be to have a […]
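For reference, the knobs discussed in the full post live in the documented DBMS_AUTO_INDEX package; a minimal sketch of switching the feature on could look like this (the schema name is made up):

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA','SALES',TRUE);
SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_AUTO','90');
SQL> SELECT parameter_name, parameter_value FROM dba_auto_index_config;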
Categories: DBA Blogs

dbvisit 9: Automatic Failover

Yann Neuhaus - Sat, 2019-07-27 06:43

dbvisit 9 has been released for a few months now, and one new feature I tested is the Automatic Failover. In this blog I assume that dbvisit 9 is already installed and that the standby database is already created. Indeed, I will describe neither the dbvisit installation nor the standby creation, as they are the same as in previous versions.
For more info about the dbvisit installation and/or the dbvisit standby creation, please see these steps in my previous blog or in the dbvisit documentation.
The new Automatic Failover feature requires installing an observer, whose main functions are:
- Provide remote monitoring of existing DDCs, and inform the DBA of problems in close to real time
- Automatically perform a Failover of the DDC if previously specified conditions are met.

We will describe the observer installation and configuration later.

The configuration we are using is described below:
dbvisit1: primary server with Oracle 19c
dbvisit2: standby server with Oracle 19c
dbvisitconsole: host of the dbvisit console (dbvserver) and of the observer

As specified earlier, we need to install an observer. It is very easy to do: just launch the install-dbvisit executable and follow the instructions.

[oracle@dbvisitconsole installer]$ pwd
/home/oracle/dbvisit/installer
[oracle@dbvisitconsole installer]$ ./install-dbvisit

-----------------------------------------------------------
    Welcome to the Dbvisit software installer.
-----------------------------------------------------------

    It is recommended to make a backup of our current Dbvisit software
    location (Dbvisit Base location) for rollback purposes.

    Installer Directory /home/oracle/dbvisit

>>> Please specify the Dbvisit installation directory (Dbvisit Base).

    The various Dbvisit products and components - such as Dbvisit Standby,
    Dbvisit Dbvnet will be installed in the appropriate subdirectories of
    this path.

    Enter a custom value or press ENTER to accept default [/usr/dbvisit]:
     > /u01/app/dbvisit
    DBVISIT_BASE = /u01/app/dbvisit

    -----------------------------------------------------------
    Component      Installer Version   Installed Version
    -----------------------------------------------------------
    standby        9.0.02_0_gbd40c486                                not installed                 
    dbvnet         9.0.02_0_gbd40c486                                not installed                 
    dbvagent       9.0.02_0_gbd40c486                                not installed                 
    dbvserver      9.0.02_0_gbd40c486                                9.0.02_0_gbd40c486            
    observer       1.02                                              not installed                 

    -----------------------------------------------------------

    What action would you like to perform?
       1 - Install component(s)
       2 - Uninstall component(s)
       3 - Exit

    Your choice: 1

    Choose component(s):
       1 - Core Components (Dbvisit Standby Cli, Dbvnet, Dbvagent)
       2 - Dbvisit Standby Core (Command Line Interface)
       3 - Dbvnet (Dbvisit Network Communication)
       4 - Dbvagent (Dbvisit Agent)
       5 - Dbvserver (Dbvisit Central Console) - Not available on Solaris/AIX
       6 - Dbvisit Observer (Automatic Failover Option) - Not available on Solaris/AIX
    Press ENTER to exit Installer

    Your choice: 6

-----------------------------------------------------------
    Summary of the Dbvisit OBSERVER configuration
-----------------------------------------------------------
    DBVISIT_BASE /u01/app/dbvisit

    Press ENTER to continue

-----------------------------------------------------------
    About to install Dbvisit OBSERVER
-----------------------------------------------------------
    Component observer installed.

    -----------------------------------------------------------
    Component      Installer Version   Installed Version
    -----------------------------------------------------------
    standby        9.0.02_0_gbd40c486                                not installed                 
    dbvnet         9.0.02_0_gbd40c486                                not installed                 
    dbvagent       9.0.02_0_gbd40c486                                not installed                 
    dbvserver      9.0.02_0_gbd40c486                                9.0.02_0_gbd40c486            
    observer       1.02                                              1.02                          

    -----------------------------------------------------------

    What action would you like to perform?
       1 - Install component(s)
       2 - Uninstall component(s)
       3 - Exit

    Your choice: 3

>>> Installation completed
    Install log /tmp/dbvisit_install.log.201907231647.
[oracle@dbvisitconsole installer]$

And once the installation is done, we can start it:

[oracle@dbvisitconsole observer]$ ./observersvc start &
[1] 2866

[oracle@dbvisitconsole observer]$ ps -ef | grep obser
oracle    2866  2275  0 14:25 pts/0    00:00:01 ./observersvc start
oracle    2921  2275  0 14:29 pts/0    00:00:00 grep --color=auto obser
[oracle@dbvisitconsole observer]$
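Note that instead of keeping it in the background of an interactive shell, the observer could also be wrapped in a systemd service; a hypothetical unit file (the paths match this setup, everything else is an assumption):

[Unit]
Description=dbvisit observer service

[Service]
Type=simple
User=oracle
ExecStart=/u01/app/dbvisit/observer/observersvc start
Restart=always

[Install]
WantedBy=multi-user.target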

After starting the observer, we have to add the observer server. This is done from the MANAGE CONFIGURATION tab in the MENU.

From the Configuration tab, choose NEW on the left to add a dbvisit observer.

Then fill in the information. Note that the default passphrase for the observer is admin900. Then save.

To have the observer monitor our configuration, let's click on Monitor.

Then specify the poll interval and the number of retries before a failover happens. In our case:

The observer will monitor the configuration every 60 seconds and will retry 5 times if there is any error.
If after 5 minutes (5×60 seconds) the problem is not fixed, then an automatic failover will happen.

The observer logfile is located on the observer server:

[oracle@dbvisitconsole log]$ pwd
/u01/app/dbvisit/observer/log
[oracle@dbvisitconsole log]$ ls -l
total 8
-rw-r--r--. 1 oracle oinstall 1109 Jul 25 15:24 observer.log
-rw-r--r--. 1 oracle oinstall   97 Jul 25 15:24 orcl_1_observer.log
[oracle@dbvisitconsole log]$

[oracle@dbvisitconsole log]$ tail -f orcl_1_observer.log
2019/07/25 13:24:46 DDC: DDC#1(orcl): Started watchdog: Watchdog successfully started monitoring

Now let's break the primary database; normally a failover should happen after 5 minutes.

[oracle@dbvisit1 log]$ ps -ef | grep pmon
oracle    1887     1  0 14:03 ?        00:00:00 ora_pmon_orcl
oracle   18199  1733  0 15:31 pts/0    00:00:00 grep --color=auto pmon
[oracle@dbvisit1 log]$ kill -9 1887
[oracle@dbvisit1 log]$ ps -ef | grep pmon
oracle   18304  1733  0 15:32 pts/0    00:00:00 grep --color=auto pmon
[oracle@dbvisit1 log]$

In the observer logfile we can see that the standby was promoted after 5 retries:

[oracle@dbvisitconsole log]$ tail -f orcl_1_observer.log
2019/07/25 13:24:46 DDC: DDC#1(orcl): Started watchdog: Watchdog successfully started monitoring
2019/07/25 13:33:51 DDC: DDC#1(orcl): rules failing (1/5): primary: error on dbvisit1:7891: unexpected database status for DDC orcl: got: "Database is down", expected: "Regular database open in read write mode"
2019/07/25 13:34:51 DDC: DDC#1(orcl): rules failing (2/5): primary: error on dbvisit1:7891: unexpected database status for DDC orcl: got: "Database is down", expected: "Regular database open in read write mode"
2019/07/25 13:35:51 DDC: DDC#1(orcl): rules failing (3/5): primary: error on dbvisit1:7891: unexpected database status for DDC orcl: got: "Database is down", expected: "Regular database open in read write mode"
2019/07/25 13:36:51 DDC: DDC#1(orcl): rules failing (4/5): primary: error on dbvisit1:7891: unexpected database status for DDC orcl: got: "Database is down", expected: "Regular database open in read write mode"
2019/07/25 13:37:51 DDC: DDC#1(orcl): rules failing (5/5): primary: error on dbvisit1:7891: unexpected database status for DDC orcl: got: "Database is down", expected: "Regular database open in read write mode"
2019/07/25 13:37:51 DDC: DDC#1(orcl): configuration failed after 5 retries: primary: error on dbvisit1:7891: unexpected database status for DDC orcl: got: "Database is down", expected: "Regular database open in read write mode"
2019/07/25 13:37:51 DDC: DDC#1(orcl): watchdog shutting down: activation imminent
2019/07/25 13:37:51 DDC: DDC#1(orcl): ACTIVATION started: conditions for activation satisfied
2019/07/25 13:38:41 DDC: DDC#1(orcl): ACTIVATION successful: ACTIVATION OK: standby activated, activation took: 50.043794192s

And we can verify that the standby is now open in read write mode:

[oracle@dbvisit2 trace]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Jul 25 15:49:38 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select db_unique_name,open_mode from v$database;

DB_UNIQUE_NAME                 OPEN_MODE
------------------------------ --------------------
orcl                           READ WRITE

SQL>

Note that we can use a user-defined script with the observer. For more information, please see the dbvisit documentation.


Spatial Analytics Made Easy: Oracle Spatial Studio

Rittman Mead Consulting - Fri, 2019-07-26 10:31

Let's say we need to understand where our company needs to open a new shop. Most of the time the decision is driven by gut feeling and some knowledge of the market and client base, but what if we could have visual insights about where are the high density zones with customers not covered by a shop nearby like in the map below?


Well... welcome Oracle Spatial Studio!

Spatial Studio is Oracle's new tool for creating spatial analytics with a visual GUI. It uses Oracle Spatial database functions in the backend, exposed with an interface in line with the Oracle Analytics Cloud one. Let's see how it works!

QuickStart Installation

First of all we need to download Spatial Studio from the Oracle web page. For this initial test I downloaded the "Quick Start", a self-contained version pre-deployed in a lightweight application server. For more robust applications you may want to download the EAR file deployable in WebLogic.


Once the file is downloaded and unzipped, we just need to verify we have a Java JDK 8 (update 181 or higher) under the hood, and we can immediately start Oracle Spatial Studio with the ./start.sh command.

The command will start the service on the local machine, which can be accessed at https://localhost:4040/spatialstudio. By default the Oracle Spatial Studio Quick Start uses the HTTPS protocol with self-signed certificates, thus the first time you access the URL you will need to add a security exception in your browser. Configuration settings such as port, JVM parameters, host and HTTP/HTTPS protocol can be changed in the conf/server.json file.

We can then log in with the default credentials admin/welcome1.


The first step in the Spatial Studio setup is the definition of the metadata connection type. This needs to point to an Oracle database with the Spatial option. For my example I initially used an Oracle Autonomous Data Warehouse, for which I had to drop in the wallet and specify the schema details.


Once logged in, the layout and workflows are very similar to Oracle Analytics Cloud, making the transition between the two very easy (more details on this later on). In the left menu we can access, like in OAC, Projects (visualizations), Data, Jobs and the Console.


In order to do Spatial Analysis we need to start from a Dataset; this can be existing tables or views, or we can upload local files. To create a Dataset, click on Create and then Dataset.


We have then three options:

  • Upload a Spreadsheet containing spatial information (e.g. Addresses, Postcodes, Regions, Cities etc)
  • Upload a Shapefile containing geometric locations and associated attributes.
  • Use spatial data from one of the existing connections, this can point to any connection containing spatial information (e.g. a table in a database containing customer addresses)
Sample Dataset with Mockaroo

I used Mockaroo, a realistic data generator service, to create two Excel files: one containing customers with related locations and a second one with shops and related latitude and longitude. All I had to do was select which fields I wanted to include in my file and the related datatype.


For example, the shop dataset contained the following columns:

  • Id: as row number
  • Shop Name: as concatenation of Shop and the Id
  • Lat: Latitude
  • Long: Longitude
  • Dept: the Department (e.g. Grocery, Books, Health&Beauty)

Mockaroo offers a great service and has a free tier for datasets with fewer than 1000 rows, which can be useful for demo purposes. For each column defined, you can select from a good variety of column types. You can also define your own type using regular expressions!

Adding the Datasets to Oracle Spatial Studio

Once we have the two datasources in Excel format, it's time to start playing with Spatial Studio. We first need to upload the datasets; we can do it via Create and Dataset, starting with Customer.xlsx. Once the file to upload is selected, Spatial Studio provides (like OAC) an overview of the dataset together with options to change settings like the dataset name, the target destination (metadata database) and the column names.


Once the table name is changed to TEST_CUSTOMERS and Submit is clicked, Spatial Studio starts inserting all the rows into the SPATIAL_STUDIO connection with a routine that can take seconds or minutes depending on the dataset volume. When the upload routine finishes, the TEST_CUSTOMERS table appears in the list of datasets.


We can immediately see the yellow warning sign next to the dataset name. It's due to the fact that we have a dataset with no geo-coded information; we can solve this problem by clicking on the option button and then Prepare and Geocode Addresses.


Oracle Spatial Studio will suggest, based on the column content, some geo-type matches, e.g. City Name, Country and Postal Code. We can use the defaults or modify them if we feel they are wrong.


Once Apply is clicked, the geocoding job starts.


Once the job ends, we can see the location icon next to our dataset name.


We can do the same for the Shops.xlsx dataset, starting by uploading it and storing it as the TEST_SHOPS dataset.


Once the dataset is uploaded, I can geo-locate the information based on the Latitude and Longitude by clicking on the option button and then selecting Prepare and Create Lon/Lat Index. Then I need to assign the Longitude and Latitude columns correctly and click on Ok.

Spatial Analytics

Now it's time to do some Spatial Analysis, so I can click on Create and then Project, and I'm presented with an empty canvas by default.


The first step is to add a Map; I can do that by selecting the visualizations menu and then dragging the map to the canvas.


The next step is to add some data by clicking on Data Elements and then Add Dataset.


I select the TEST_CUSTOMERS dataset and add it to the project; then I need to drag it on top of the map to visualize my customer data.


Oracle Spatial Studio offers several options to change the data visualization like color, opacity, blur etc.


Now I can add the TEST_SHOPS dataset and visualize it on the map with the same set of steps followed before.


It's finally time for spatial analysis! Let's say, as per initial example, that I want to know which of my customers doesn't have any shops in the nearest 200km. In order to achieve that I need to first create buffer areas of 200km around the shops, by selecting the TEST_SHOPS datasource and then clicking on the Spatial Analysis.


This will open a popup window listing a good number of spatial analyses; by clicking on the Transform tab I can see the Add a buffer of a specified distance option.


Unfortunately the buffer function is not available in ADW at the moment.


I had to rely on an Oracle Database Cloud Service 18c Enterprise Edition - High Performance (which includes the Spatial option) for my metadata storage and processing to continue. A few takeaways:

  • Select 18c (or anything above 12.2): I hit the issue ORA-00972: identifier is too long when importing the data into a 12.1 Database, which (thanks StackOverflow) is fixed as of 12.2.
  • High Performance: This includes the Spatial Option

Once I used the DBCS as metadata store, I could finally use the buffer function and set the parameter of 200 km around the shops.
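Under the covers this kind of transformation maps to the SDO_GEOM.SDO_BUFFER spatial function; a hand-written equivalent of the 200 km buffer could look like the sketch below (the SRID 4326 and the 0.005 tolerance are assumptions):

SELECT s.id,
       -- build a point geometry from the raw coordinates, then buffer it by 200 km
       sdo_geom.sdo_buffer(
           sdo_geometry(2001, 4326, sdo_point_type(s.longitude, s.latitude, NULL), NULL, NULL),
           200, 0.005, 'unit=KM') AS buffer_geom
  FROM test_shops s;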


The TEST_SHOPS_BUFFER is now visible under Analysis and can be added on top of the Map, correctly showing the 200 km buffer zone.


I can find out which customers have a shop within 200 km by creating an analysis and selecting the option "Return shapes within a specified distance of another".


In the parameters I can select TEST_CUSTOMERS as the Layer to be filtered, TEST_SHOPS as the Layer to be used as filter, and 200 km as the distance.


I can then visualize the result by adding the TEST_CUSTOMERS_WITHIN_DISTANCE layer in the map.


TEST_CUSTOMERS_WITHIN_DISTANCE contains the customers already "covered" by a shop in the 200 km range. What I may want to do now is remove them from my list of customers in order to do analysis on the remaining ones; how can I do that? Unfortunately, in this first Spatial Studio version there is no visual way of doing DATASET_A MINUS DATASET_B but, hey, it's just the first incarnation and we can expect that type of function and many others to be available in future releases!

The following paragraph is an in-depth analysis in the database of functions that will probably be exposed in Spatial Studio's future versions, so if you are not interested, skip directly to the section named "Progressing in the Spatial Analysis".

A Look in the Database

Since we still want to achieve our goal of getting the customers not covered by a shop, we need to look a bit deeper at where the data is stored: in the database. This gives us two opportunities: check how Spatial Studio works under the covers, and freely use SQL to achieve our goal (DATASET_A MINUS DATASET_B).

First let's have a look at the tables created by Spatial Studio: we can see some metadata tables used by studio as well as the database representation of our two excel files TEST_CUSTOMERS and TEST_SHOPS.


Looking in depth at the metadata we can also see a table named SGTECH$TABLE followed by an ID. That table collects the information regarding the geo-coding job we executed against our customers dataset, whose rows were located starting from zip codes and addresses. We can associate the table with the TEST_CUSTOMERS dataset with the following query against the SGTECH_OBJECT metadata table.

SELECT NAME, 
  JSON_VALUE(data, '$.gcHelperTableName') DATASET  
FROM SGTECH_OBJECT 
WHERE OBJECTTYPE='dataset'
AND NAME='TEST_CUSTOMERS';

The SGTECH$TABLEA004AA549110B928755FC05F01A3EF89 table contains, as expected, a row for each customer in the dataset, together with the related geometry if the geo-coding was successful, and some metadata flags like GC_ATTEMPTED, GC_STATUS and GC_MATCH_CODE stating the accuracy of the geo-coding match.


What about all the analyses like the buffer and the customers within distance? For each analysis Spatial Studio creates a separate view with the SGTECH$VIEW prefix followed by an ID.


To understand which view refers to which analysis, we need to query the metadata table SGTECH_OBJECT with a query like:

SELECT NAME, 
  JSON_VALUE(data, '$.tableName') DATASET  
FROM SGTECH_OBJECT 
WHERE OBJECTTYPE='dataset'

With the following result:


We then know that TEST_CUSTOMERS_WITHIN_DISTANCE can be accessed via the view SGTECH$VIEW0B2B36785A28843F74B58B3CCF1C51E3, and when checking its SQL we can clearly see that it executes the SDO_WITHIN_DISTANCE function using the TEST_CUSTOMERS.GC_GEOMETRY column, the TEST_SHOPS columns LONGITUDE and LATITUDE, and the distance=200 unit=KILOMETER parameters we set in the front-end.

CREATE OR REPLACE FORCE EDITIONABLE VIEW "SPATIAL_STUDIO"."SGTECH$VIEW0B2B36785A28843F74B58B3CCF1C51E3" AS
SELECT
    ...
FROM
    "TEST_CUSTOMERS"   "t1",
    "TEST_SHOPS"       "t2"
WHERE
    sdo_within_distance("t1"."GC_GEOMETRY",
    spatial_studio.sgtech_ptf("t2"."LONGITUDE", "t2"."LATITUDE"), 
    'distance=200 unit=KILOMETER'
    ) = 'TRUE';

OK, now that we understand which view contains the data, we can create a new view containing only the customers which are not within the 200 km distance:

CREATE VIEW TEST_CUSTOMERS_NOT_WITHIN_DISTANCE AS
SELECT
    t1.id            AS id,
    t1.first_name    AS first_name,
    t1.last_name     AS last_name,
    t1.email         AS email,
    t1.gender        AS gender,
    t1.postal_code   AS postal_code,
    t1.street        AS street,
    t1.country       AS COUNTRY,
    t1.city          AS city,
    t1.studio_id     AS studio_id,
    t1.gc_geometry   AS gc_geometry
FROM
    test_customers t1
WHERE
    id NOT IN (
        SELECT
            id
        FROM
            spatial_studio.sgtech$view0b2b36785a28843f74b58b3ccf1c51e3
    );
Progressing in the Spatial Analysis

In the previous paragraph we created a view in the database named TEST_CUSTOMERS_NOT_WITHIN_DISTANCE containing the customers without a shop in a 200 km radius. We can now import it into Spatial Studio by creating a new dataset, selecting the connection to the database (in our case named SPATIAL_STUDIO) as the source and then the newly created TEST_CUSTOMERS_NOT_WITHIN_DISTANCE view.


The dataset is added, but it has a yellow warning icon next to it.


Spatial Studio requires us to define a primary key; we can do that by accessing the properties of the dataset, selecting the Columns tab, choosing which column acts as the primary key and validating it. After this step I can visualize the customers on a map.


What's next? Well If I want to open a new shop, I may want to do that where there is a concentration of customers, which is easily visualizable with Spatial Studio by changing the Render Style to Heatmap.


With the following output:


We can clearly see some major concentrations around Dallas, Washington and Minneapolis. Focusing more on Dallas, Spatial Studio also offers the option to simulate a new shop on the map and calculate the 200 km buffer around it. I can clearly see that adding a shop halfway between Oklahoma City and Dallas would allow me to cover the clients of both cities within the 200 km radius.


Please remember that this is a purely demonstrative analysis, and some of the choices, like the 200 km buffer, are deliberately simplistic. Other factors could come into play when choosing a shop location, like the revenue generated by some customers. And here comes the second beauty of Oracle Spatial Studio: we can export datasets as GeoJSON or CSV and include them in Data Visualization.


For example, I can export the data of TEST_CUSTOMERS_NOT_WITHIN_DISTANCE from Spatial Studio and include it in a Data Visualization project, blending it with the sales related to the same customers.


I can now focus not only on the customers' positions but also on other metrics, like Profit or Sales Amount, that I may have in other datasets. For another example of Oracle Spatial Studio and Data Visualization interoperability, check out this video from Oracle Analytics Senior Director Philippe Lions.

Conclusions

Spatial analytics made easy: this is the focus of Oracle Spatial Studio. Before, spatial queries were locked down at the database level, with limited access from an analyst's point of view. Now we have a visual tool with a simple GUI (in line with OAC) that easily enables spatial queries for everybody!

But this is only the first part of the story: the combination of capabilities achievable when mixing Oracle Spatial Studio and Oracle Analytics Cloud takes any type of analytics to the next level!

Categories: BI & Warehousing

Alfresco – ActiveMQ basic setup

Yann Neuhaus - Fri, 2019-07-26 08:49

Apache ActiveMQ

ActiveMQ is an open source message broker written in Java from the Apache Software Foundation that implements the Java Message Service (JMS) and supports a lot of protocols. In Alfresco 5, ActiveMQ was introduced as a new, optional component in the stack. At the beginning it was only used for "side" features like Alfresco Analytics or Alfresco Media Management in the early Alfresco 5.0. In Alfresco 6.0, ActiveMQ was still used for Alfresco Media Management but also for the Alfresco Sync Service. It's only starting with Alfresco 6.1, released last February, that ActiveMQ became a required component, used for the same things but now also for transformations.

The Alfresco documentation doesn't really describe how to install or configure ActiveMQ; it just explains how to connect Alfresco to it. Therefore, I thought I would write a small blog about how to do a basic installation of ActiveMQ for use with Alfresco.

Alfresco 6.1 supports ActiveMQ v5.15.6, so that's the one I will be using in this blog as an example.

First, let's start with defining some environment variables that will be used to know where to put the ActiveMQ binaries and data:

[alfresco@mq_n1 ~]$ echo "export ACTIVEMQ_HOME=/opt/activemq" >> ~/.profile
[alfresco@mq_n1 ~]$ echo "export ACTIVEMQ_DATA=\$ACTIVEMQ_HOME/data" >> ~/.profile
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ grep "ACTIVEMQ" ~/.profile
export ACTIVEMQ_HOME=/opt/activemq
export ACTIVEMQ_DATA=$ACTIVEMQ_HOME/data
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ source ~/.profile
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ echo $ACTIVEMQ_DATA
/opt/activemq/data
[alfresco@mq_n1 ~]$

 

I usually use symlinks for all the components so that I can keep a generic path in case of upgrades, and so on. So, let's download the software and put everything where it should be:

[alfresco@mq_n1 ~]$ activemq_version="5.15.6"
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ wget http://archive.apache.org/dist/activemq/${activemq_version}/apache-activemq-${activemq_version}-bin.tar.gz
--2019-07-25 16:55:23--  http://archive.apache.org/dist/activemq/5.15.6/apache-activemq-5.15.6-bin.tar.gz
Resolving archive.apache.org... 163.172.17.199
Connecting to archive.apache.org|163.172.17.199|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 58556801 (56M) [application/x-gzip]
Saving to: ‘apache-activemq-5.15.6-bin.tar.gz’

apache-activemq-5.15.6-bin.tar.gz     100%[=======================================================================>]  55.84M  1.62MB/s    in 35s

2019-07-25 16:55:58 (1.60 MB/s) - ‘apache-activemq-5.15.6-bin.tar.gz’ saved [58556801/58556801]

[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ tar -xzf apache-activemq-${activemq_version}-bin.tar.gz
[alfresco@mq_n1 ~]$ mkdir -p $ACTIVEMQ_HOME-${activemq_version}
[alfresco@mq_n1 ~]$ ln -s $ACTIVEMQ_HOME-${activemq_version} $ACTIVEMQ_HOME
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ ls -l $ACTIVEMQ_HOME/.. | grep -i activemq
lrwxr-xr-x   1 alfresco  alfresco        31 Jul 25 17:04 activemq -> /opt/activemq-5.15.6
drwxr-xr-x   2 alfresco  alfresco        64 Jul 25 17:03 activemq-5.15.6
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ rm -rf ./apache-activemq-${activemq_version}/data
[alfresco@mq_n1 ~]$ mkdir -p $ACTIVEMQ_DATA
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ mv apache-activemq-${activemq_version}/* $ACTIVEMQ_HOME/

 

Once that is done and before starting ActiveMQ for the first time, there are still some configurations to be done. It is technically possible to add specific authentication for communications between Alfresco and ActiveMQ or to set up the communications over SSL, for example. It depends on the usage you will have for ActiveMQ, but as a minimal configuration for use with Alfresco, I believe that the default users ("guest" to access the broker & "user" to access the web console) should at least be removed and the admin password changed:

[alfresco@mq_n1 ~]$ activemq_admin_pwd="Act1v3MQ_pwd"
[alfresco@mq_n1 ~]$ activemq_broker_name="`hostname -s`"
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ # Remove user "user" from the web console
[alfresco@mq_n1 ~]$ sed -i "/^user:[[:space:]]*.*/d" $ACTIVEMQ_HOME/conf/jetty-realm.properties
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ # Remove user "guest" from the broker
[alfresco@mq_n1 ~]$ sed -i "/^guest.*/d" $ACTIVEMQ_HOME/conf/credentials.properties
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ # Change admin password
[alfresco@mq_n1 ~]$ sed -i "s/^admin=.*/admin=${activemq_admin_pwd}\n/" $ACTIVEMQ_HOME/conf/users.properties
[alfresco@mq_n1 ~]$ sed -i "s/^admin.*/admin: ${activemq_admin_pwd}, admin/" $ACTIVEMQ_HOME/conf/jetty-realm.properties
[alfresco@mq_n1 ~]$ sed -i "s/^activemq.username=.*/activemq.username=admin/" $ACTIVEMQ_HOME/conf/credentials.properties
[alfresco@mq_n1 ~]$ sed -i "s/^activemq.password=.*/activemq.password=${activemq_admin_pwd}/" $ACTIVEMQ_HOME/conf/credentials.properties
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ grep -E "brokerName|storeUsage |tempUsage " $ACTIVEMQ_HOME/conf/activemq.xml
    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
                <storeUsage limit="100 gb"/>
                <tempUsage limit="50 gb"/>
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ # Set broker name & allowed usage
[alfresco@mq_n1 ~]$ sed -i "s/brokerName=\"[^"]*\"/brokerName=\"${activemq_broker_name}\"/" $ACTIVEMQ_HOME/conf/activemq.xml
[alfresco@mq_n1 ~]$ sed -i 's,storeUsage limit="[^"]*",storeUsage limit="10 gb",' $ACTIVEMQ_HOME/conf/activemq.xml
[alfresco@mq_n1 ~]$ sed -i 's,tempUsage limit="[^"]*",tempUsage limit="5 gb",' $ACTIVEMQ_HOME/conf/activemq.xml
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ grep -E "brokerName|storeUsage |tempUsage " $ACTIVEMQ_HOME/conf/activemq.xml
    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="mq_n1" dataDirectory="${activemq.data}">
                    <storeUsage limit="10 gb"/>
                    <tempUsage limit="5 gb"/>
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ chmod -R o-rwx $ACTIVEMQ_HOME
[alfresco@mq_n1 ~]$ chmod -R o-rwx $ACTIVEMQ_DATA

 

So above, I set a specific name for the broker; that's mainly to differentiate them in case you expect to have several brokers at some point. I also changed the default storeUsage and tempUsage; that's mainly to show how it's done, because these two parameters define the limit that ActiveMQ will be able to use on the file system. I believe the default is way too much for ActiveMQ's usage in Alfresco, so I always reduce these or use a percentage as the value (percentLimit).
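For example, switching to percentage-based limits is just a matter of using the percentLimit attribute instead of limit in activemq.xml (the values below are only a sketch):

<!-- cap the store/temp usage relative to the partition size instead of an absolute value -->
<storeUsage percentLimit="50"/>
<tempUsage percentLimit="10"/>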

With the default configuration, ActiveMQ uses "${activemq.data}" for the data directory, which actually resolves to the "$ACTIVEMQ_DATA" environment variable, if present (otherwise it is set to $ACTIVEMQ_HOME/data). That's the reason why I set this environment variable: it makes it possible to define a different data folder without having to change the default configuration. This data folder will mainly contain the logs of ActiveMQ, the PID file and the KahaDB for the persistence adapter.

Finally creating a service for ActiveMQ and starting it is pretty easy as well:

[alfresco@mq_n1 ~]$ cat > activemq.service << EOF
[Unit]
Description=ActiveMQ service

[Service]
Type=forking
ExecStart=###ACTIVEMQ_HOME###/bin/activemq start
ExecStop=###ACTIVEMQ_HOME###/bin/activemq stop
Restart=always
User=alfresco
WorkingDirectory=###ACTIVEMQ_DATA###
LimitNOFILE=8192:65536

[Install]
WantedBy=multi-user.target
EOF
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ sed -i "s,###ACTIVEMQ_HOME###,${ACTIVEMQ_HOME}," activemq.service
[alfresco@mq_n1 ~]$ sed -i "s,###ACTIVEMQ_DATA###,${ACTIVEMQ_DATA}," activemq.service
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ sudo cp activemq.service /etc/systemd/system/
[alfresco@mq_n1 ~]$ rm activemq.service
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ sudo systemctl enable activemq.service
[alfresco@mq_n1 ~]$ sudo systemctl daemon-reload
[alfresco@mq_n1 ~]$
[alfresco@mq_n1 ~]$ sudo systemctl start activemq.service

 

Once ActiveMQ is set up the way you want, registering it in Alfresco is very easy:

[alfresco@alf_n1 ~]$ cat $CATALINA_HOME/shared/classes/alfresco-global.properties
...
### ActiveMQ
messaging.broker.url=failover:(tcp://mq_n1.domain:61616?daemon=false&dynamicManagement=false&trace=false)?timeout=3000&randomize=false
#messaging.username=
#messaging.password=
...
[alfresco@alf_n1 ~]$

 

As mentioned at the beginning of this blog, ActiveMQ supports a lot of protocols, so you can use pretty much whatever you want: TCP, NIO, SSL, NIO SSL, Peer (2 Peer), UDP, Multicast, HTTP, HTTPS, and so on. You can find all the details for that here.

To add authentication between Alfresco and ActiveMQ, you will need to enable the two properties I commented out in the alfresco-global.properties above and define the appropriate authentication in the ActiveMQ broker configuration. There is an example in the Alfresco documentation.
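As a sketch, the broker side could rely on the simpleAuthenticationPlugin, declared inside the <broker> element of activemq.xml (the credentials below are made up):

<plugins>
  <simpleAuthenticationPlugin>
    <users>
      <!-- the account Alfresco will use to connect to the broker -->
      <authenticationUser username="alfresco" password="Alfr3sc0_pwd" groups="users"/>
    </users>
  </simpleAuthenticationPlugin>
</plugins>

The same credentials would then go into the messaging.username and messaging.password properties of the alfresco-global.properties.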

 


Python Script To Backup Linux Directory To Windows

Bobby Durrett's DBA Blog - Thu, 2019-07-25 18:32

I found out that my blog backup script was failing so I had to rewrite it to handle dropped connections to my remote sftp server. In the process I broke out as much of the code as I could into a module that I could share. The module is backupremote.py in my miscpython repository. Might be helpful to someone else. It copies the directory tree on a remote Linux server down to a directory on a Windows machine (i.e. a laptop). Uses sftp.

The earlier version of this script was in this blog post: https://www.bobbydurrettdba.com/2018/05/30/python-script-to-backup-remote-directory-using-sftp/

Bobby

Categories: DBA Blogs

Certs and AdminClient … How to login?

DBASolved - Thu, 2019-07-25 15:26

I've been building a test environment using Docker for some time (over and over) to validate some items within Oracle GoldenGate Microservices (current release as of writing – 19.1.0.0.1). Part of setting up Oracle GoldenGate Microservices is making the environment secure by using certificates. Per the Oracle documentation, you can use Self-Signed Certificates for testing purposes (more on that in this post).
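For reference, a self-signed root CA like the one used below can be generated with plain openssl; a minimal sketch (the file names and subject are made up, and loading the signed server certificates into the ServiceManager wallets is out of scope here):

# create a root CA key and a self-signed root certificate (hypothetical names)
openssl genrsa -out Root_CA.key 2048
openssl req -x509 -new -key Root_CA.key -sha256 -days 365 \
    -subj "/CN=OGG-Test-Root-CA" -out Root_CA.pem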

In my testing, I built an Oracle GoldenGate 19c Microservices configuration with two deployments (Atlanta and New York). I can access the ServiceManager and log in to the associated HTML5 pages with no problem. But when I went to run items from the command line (AdminClient), I couldn't log in to the ServiceManager/Deployment due to a Network Error.

[oracle@ogg19c scripts]$ sh ./adminclient.sh
Oracle GoldenGate Administration Client for Oracle
Version 19.1.0.0.1 OGGCORE_19.1.0.0.0_PLATFORMS_190524.2201
Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.

Linux, x64, 64bit (optimized) on May 25 2019 02:00:23
Operating system character set identified as US-ASCII.

OGG (not connected) 1> connect https://ogg19c:16000 deployment NewYork as oggadmin password ********

ERROR: Network error - Certificate validation error

OGG (not connected) 2> exit

This got me thinking, and I started to ask some questions internally, which led me to a new environment parameter. This environment variable is OGG_CLIENT_TLS_CAPATH. The OGG_CLIENT_TLS_CAPATH variable is used to specify the root certificate authority needed to log in to the ServiceManager/Deployment that has been secured using the certificate … in my case, my Self-Signed Certs.

After setting the environment variable OGG_CLIENT_TLS_CAPATH, I can now log in from the AdminClient as expected.

[oracle@ogg19c scripts]$ export OGG_CLIENT_TLS_CAPATH=/home/oracle/wallet/Root_CA.pem
[oracle@ogg19c scripts]$ sh ./adminclient.sh
Oracle GoldenGate Administration Client for Oracle
Version 19.1.0.0.1 OGGCORE_19.1.0.0.0_PLATFORMS_190524.2201


Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.


Linux, x64, 64bit (optimized) on May 25 2019 02:00:23
Operating system character set identified as US-ASCII.


OGG (not connected) 1> connect https://ogg19c:16000 deployment NewYork as oggadmin password ********


OGG (https://ogg19c:16000 NewYork) 2>

 

I found this quite helpful.

Enjoy!!!!

Categories: DBA Blogs

Simple way to get rid of OGG-01525 for AdminClient

DBASolved - Wed, 2019-07-24 22:22

While installing Oracle GoldenGate 19c tonight, I wanted to work from the command line and play around with AdminClient. For those who are not up to speed, AdminClient is the replacement for GGSCI when using Microservices.

AdminClient in Microservices provides the same functionality as GGSCI, but it is a thin, lightweight tool that can be installed on a remote Linux box or a Windows desktop/laptop. It allows you to issue simple GGSCI-style commands and converts them into REST API calls on the backend, all the while providing the same command-line interface that GGSCI provided.

It is great that there is still a command-line option for Oracle GoldenGate within the Microservices Architecture; however, when you start it you are presented with a warning. The warning is an OGG-01525 message stating that there is no place to produce a trace file for the session.

WARNING OGG-01525 Failed to open trace output file, ‘/opt/app/oracle/product/19.1.0/oggcore_1/var/log/adminclient.log’, error 2 (No such file or directory).

So how do you fix this issue?

Note: This is not a bug! AdminClient was designed this way.

In order to fix this issue and get rid of the warning, you need to set a new environment variable. Since Oracle GoldenGate Microservices has been out for a bit over 2 years, I guess the environment variable isn't that new. Anyway, the environment variable that needs to be set is OGG_VAR_HOME.

export OGG_VAR_HOME=/tmp

The OGG_VAR_HOME variable is used to tell AdminClient where to keep the adminclient.log file. In my example above, I'm using the temp directory (/tmp) for the log file.

Now, there is only one problem with the OGG_VAR_HOME environment variable. This environment variable, along with OGG_ETC_HOME, has to be set per Deployment Home environment. Meaning, when you have more than one Deployment Home, these environment variables are specific to each deployment.

The question that is begging to be asked is: how do I assign an environment variable for AdminClient and Deployments at the same time?

To solve this problem, I just wrote a simple shell wrapper and placed it in my home directory. The script looks like this:

[oracle@ogg19c scripts]$ cat adminclient.sh
#!/bin/bash

export OGG_VAR_HOME=/tmp

${OGG_HOME}/bin/adminclient

Now, I can run the shell script and execute the AdminClient without getting the OGG-01525 warning.

[oracle@ogg19c scripts]$ sh ./adminclient.sh
Oracle GoldenGate Administration Client for Oracle
Version 19.1.0.0.1 OGGCORE_19.1.0.0.0_PLATFORMS_190524.2201

Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.

Linux, x64, 64bit (optimized) on May 25 2019 02:00:23
Operating system character set identified as US-ASCII.

 

OGG (not connected) 1>
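
Since OGG_VAR_HOME (along with OGG_ETC_HOME) is deployment-specific, the wrapper could also take the target directory as an argument; a small, hypothetical variation:

#!/bin/bash
# usage: ./adminclient.sh [var_home]   (defaults to /tmp)
export OGG_VAR_HOME=${1:-/tmp}

${OGG_HOME}/bin/adminclient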

For everyone who likes the Oracle GoldenGate command line, you still have access to it!

Enjoy!!!

Categories: DBA Blogs

Deploying SQL Server 2019 container on RHEL 8 with podman

Yann Neuhaus - Wed, 2019-07-24 11:12

Having a fresh install of RHEL 8 in my lab environment, I was curious to take a look at the new containerization stuff from Red Hat in the context of SQL Server 2019. Chances are good that the future version of SQL Server will be available and supported on the latest version of Red Hat, but for now this blog post is purely experimental. This time I wanted to share with you some thoughts about the new Podman command.

First of all, we should be aware that since RHEL 8 Red Hat decided to replace Docker with CRI-O/Podman in order to provide a "daemonless" container world, especially for Kubernetes. In 2016, the Kubernetes project introduced the Container Runtime Interface (CRI). Basically, with CRI, Kubernetes can be container runtime-agnostic. CRI-O is an open source project initiated by Red Hat the same year that gives the ability to run containers directly from Kubernetes without any unnecessary code or tooling, as long as the container remains OCI-compliant. Because Docker is no longer shipped (and officially not supported) by Red Hat since RHEL 8, we need a client tool for working with containers, and this is where Podman steps in. To cut a long story short, Podman implements almost all the Docker CLI commands and more.

So, let's have an overview of the Podman commands through the installation of a SQL Server 2019 based container. It is worth noting that Podman is not intended to be used in the context of a "standalone" container environment and should be used with a container orchestrator like K8s or an orchestration platform like OpenShift. That said, let's first create a host directory to persist the SQL Server database files.

$ sudo mkdir -p  /var/mssql/data
$ sudo chmod 755 -R /var/mssql/data

 

Then let’s download the SQL Server 2019 RHEL image. We will use the following Podman command:

$ sudo podman pull mcr.microsoft.com/mssql/rhel/server:2019-CTP3.1
Trying to pull mcr.microsoft.com/mssql/rhel/server:2019-CTP3.1...Getting image source signatures
Copying blob 079e961eee89: 70.54 MiB / 70.54 MiB [========================] 1m3s
Copying blob 1b493d38a6d3: 1.20 KiB / 1.20 KiB [==========================] 1m3s
Copying blob 89e62e5b4261: 333.24 MiB / 333.24 MiB [======================] 1m3s
Copying blob d39017c722a8: 174.82 MiB / 174.82 MiB [======================] 1m3s
Copying config dbba412361d7: 4.98 KiB / 4.98 KiB [==========================] 0s
Writing manifest to image destination
Storing signatures
dbba412361d7ca4fa426387e1d6fc3ec85e37d630bfe70e6599b5116d392394d

 

Note that if you're already comfortable with the Docker commands, the shift to Podman will be easy thanks to the similarity between the two tools. To get information about the fresh new image, we will use the following Podman commands:

$ sudo podman images
REPOSITORY                            TAG           IMAGE ID       CREATED       SIZE
mcr.microsoft.com/mssql/rhel/server   2019-CTP3.1   dbba412361d7   3 weeks ago   1.79 GB
$ sudo podman inspect dbba
…
"GraphDriver": {
            "Name": "overlay",
            "Data": {
                "LowerDir": "/var/lib/containers/storage/overlay/b2769e971a1bdb62f1c0fd9dcc0e9fe727dca83f52812abd34173b49ae55e37d/diff:/var/lib/containers/storage/overlay/4b0cbf0d9d0ff230916734a790f47ab2adba69db44a79c8eac4c814ff4183c6d/diff:/var/lib/containers/storage/overlay/9197342671da8b555f200e47df101da5b7e38f6d9573b10bd3295ca9e5c0ae28/diff",
                "MergedDir": "/var/lib/containers/storage/overlay/b372c0d6ff718d2d182af4639870dc6e4247f684d81a8b2dc2649f8517b9fc53/merged",
                "UpperDir": "/var/lib/containers/storage/overlay/b372c0d6ff718d2d182af4639870dc6e4247f684d81a8b2dc2649f8517b9fc53/diff",
                "WorkDir": "/var/lib/containers/storage/overlay/b372c0d6ff718d2d182af4639870dc6e4247f684d81a8b2dc2649f8517b9fc53/work"
            }
        },
…

 

As shown above, Podman uses the CRI-O back-end store directory with the /var/lib/containers path, instead of the Docker default storage location (/var/lib/docker).

Let's go ahead and take a look at the Podman info command:

$ podman info
…
OCIRuntime:
    package: runc-1.0.0-54.rc5.dev.git2abd837.module+el8+2769+577ad176.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.0'
…
store:
  ConfigFile: /home/clustadmin/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay

 

The same kind of information is provided by the Docker info command, including the runtime and the graph driver name, which is overlay in my case. Generally speaking, creating and getting information about a container with Podman is pretty similar to what we do with the usual Docker commands. Here is, for instance, the command to spin up a SQL Server container based on the RHEL image:

$ sudo podman run -d -e 'ACCEPT_EULA=Y' -e \
> 'MSSQL_SA_PASSWORD=Password1'  \
> --name 'sqltest' \
> -p 1460:1433 \
> -v /var/mssql/data:/var/opt/mssql/data:Z \
> mcr.microsoft.com/mssql/rhel/server:2019-CTP3.1
4f5128d36e44b1f55d23e38cbf8819041f84592008d0ebb2b24ff59065314aa4
$ sudo podman ps
CONTAINER ID  IMAGE                                            COMMAND               CREATED        STATUS            PORTS                   NAMES
4f5128d36e44  mcr.microsoft.com/mssql/rhel/server:2019-CTP3.1  /opt/mssql/bin/sq...  4 seconds ago  Up 3 seconds ago  0.0.0.0:1460->1433/tcp  sqltest

 

Here comes the interesting part. Looking at the pstree output, we may notice that there is no dependency on any (Docker) daemon with the CRI-O implementation. Usually, with the Docker implementation, we retrieve the containerd daemon and the related shim for the process within the tree.

$ pstree
systemd─┬─NetworkManager───2*[{NetworkManager}]
        ├─…
        ├─conmon─┬─sqlservr─┬─sqlservr───138*[{sqlservr}]
        │        │          └─{sqlservr}

 

By using the runc command below, we may notice that the MSSQL container (identified by its ID here) is actually running through the CRI-O and runc runtime.

$ sudo runc list -q
4f5128d36e44b1f55d23e38cbf8819041f84592008d0ebb2b24ff59065314aa4

 

Let’s have a look at the existing namespaces. PID 9449 corresponds to the SQL Server process running in isolation mode through Linux namespaces.

$ sudo lsns 
…
4026532116 net         2  9449 root   /opt/mssql/bin/sqlservr
4026532187 mnt         2  9449 root   /opt/mssql/bin/sqlservr
4026532188 uts         2  9449 root   /opt/mssql/bin/sqlservr
4026532189 ipc         2  9449 root   /opt/mssql/bin/sqlservr
4026532190 pid         2  9449 root   /opt/mssql/bin/sqlservr

$ ps aux | grep sqlservr
root       9449  0.1  0.6 152072 25336 ?        Ssl  05:08   0:00 /opt/mssql/bin/sqlservr
root       9465  5.9 18.9 9012096 724648 ?      Sl   05:08   0:20 /opt/mssql/bin/sqlservr
clustad+   9712  0.0  0.0  12112  1064 pts/0    S+   05:14   0:00 grep --color=auto sqlservr

 

We can double-check that the process belongs to the SQL Server container by using the nsenter command (the target PID differs here because this output was captured from a later run of the same demo):

sudo nsenter -t 17182 --mount --uts --ipc --net --pid sh
sh-4.2# ps aux
USER        PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root          1  0.0  0.7 152076 28044 ?        Ssl  Jul23   0:00 /opt/mssql/bin/sqlservr
root          9  2.2 19.7 9034224 754820 ?      Sl   Jul23   0:28 /opt/mssql/bin/sqlservr
root        319  0.0  0.0  13908  3400 ?        S    00:01   0:00 sh
root        326  0.0  0.1  53832  3900 ?        R+   00:02   0:00 ps aux

 

Well, we used different Podman commands to spin up a container that meets the OCI specification, like Docker. For the sake of curiosity, let’s build a custom image from a Dockerfile. In fact, this is a custom image we developed for customers to meet our best-practice requirements.

$ ls -l
total 40
drwxrwxr-x. 2 clustadmin clustadmin   70 Jul 24 02:06 BestPractices
drwxrwxr-x. 2 clustadmin clustadmin   80 Jul 24 02:06 DMK
-rw-rw-r--. 1 clustadmin clustadmin  614 Jul 24 02:06 docker-compose.yml
-rw-rw-r--. 1 clustadmin clustadmin 2509 Jul 24 02:06 Dockerfile
-rw-rw-r--. 1 clustadmin clustadmin 3723 Jul 24 02:06 entrypoint.sh
-rw-rw-r--. 1 clustadmin clustadmin 1364 Jul 24 02:06 example.docker-swarm-compose.yml
-rw-rw-r--. 1 clustadmin clustadmin  504 Jul 24 02:06 healthcheck.sh
-rw-rw-r--. 1 clustadmin clustadmin   86 Jul 24 02:06 mssql.conf
-rw-rw-r--. 1 clustadmin clustadmin 4497 Jul 24 02:06 postconfig.sh
-rw-rw-r--. 1 clustadmin clustadmin 2528 Jul 24 02:06 Readme.md
drwxrwxr-x. 2 clustadmin clustadmin   92 Jul 24 02:06 scripts

 

To build an image from a Dockerfile, the corresponding Podman command is as follows:

$ sudo podman build -t dbi_mssql_linux:2019-CTP3.1 .
…
--> 5db120fba51f3adc7482ec7a9fed5cc4194f13e97b855d9439a1386096797c39
STEP 65: FROM 5db120fba51f3adc7482ec7a9fed5cc4194f13e97b855d9439a1386096797c39
STEP 66: EXPOSE ${MSSQL_TCP_PORT}
--> 8b5e8234af47adb26f80d64abe46715637bd48290b4a6d7711ddf55c393cd5a8
STEP 67: FROM 8b5e8234af47adb26f80d64abe46715637bd48290b4a6d7711ddf55c393cd5a8
STEP 68: ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
--> 11045806b8af7cf2f67e5a279692e6c9e25212105bcd104ed17b235cdaea97fe
STEP 69: FROM 11045806b8af7cf2f67e5a279692e6c9e25212105bcd104ed17b235cdaea97fe
STEP 70: CMD ["tail -f /dev/null"]
--> bcb8c26d503010eb3e5d72da4b8065aa76aff5d35fac4d7958324ac3d97d5489
STEP 71: FROM bcb8c26d503010eb3e5d72da4b8065aa76aff5d35fac4d7958324ac3d97d5489
STEP 72: HEALTHCHECK --interval=15s CMD [ "/usr/local/bin/healthcheck.sh" ]
--> e7eedf0576f73c95b19adf51c49459b00449da497cf7ae417e597dd39a9e4c8f
STEP 73: COMMIT dbi_mssql_linux:2019-CTP3.1
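
The HEALTHCHECK instruction in STEP 72 points to the healthcheck.sh script shipped in the build context. Its content is not shown in the build output, but a minimal sketch of such a script could look like this (the query and the use of the MSSQL_SA_PASSWORD environment variable are assumptions):

#!/bin/bash
# Report the container as healthy only if SQL Server answers a trivial query.
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "${MSSQL_SA_PASSWORD}" -Q "SELECT 1" -b -o /dev/null
if [ $? -ne 0 ]; then
    exit 1    # unhealthy
fi
exit 0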

 

The image built is now available in the local repository:

$ sudo podman images
REPOSITORY                            TAG           IMAGE ID       CREATED         SIZE
localhost/dbi_mssql_linux             2019-CTP3.1   e7eedf0576f7   2 minutes ago   1.79 GB
mcr.microsoft.com/mssql/rhel/server   2019-CTP3.1   dbba412361d7   3 weeks ago     1.79 GB

 

The next step consists in spinning up a SQL Server container based on this new image. Note that I used a custom parameter, DMK=Y, to drive the creation of the DMK maintenance tool in our case, which includes the deployment of a custom dbi_tools database and related objects that carry out the database maintenance.

$ sudo podman run -d -e 'ACCEPT_EULA=Y' \
> -e 'MSSQL_SA_PASSWORD=Password1' -e 'DMK=Y'  \
> --name 'sqltest2' \
> -p 1470:1433 \
> localhost/dbi_mssql_linux:2019-CTP3.1
d057e0ca41f08a948de4206e9aa07b53450c2830590f2429e50458681d230f6b

 

Let’s check if the dbi_tools database has been created during the container runtime phase:

$ sudo podman exec -ti d057 /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Password1 -Q"SELECT name from sys.databases"
name
--------------------------------------------------------------------------------------------------------------------------------
master
tempdb
model
msdb
dbi_tools

 

Finally, to make the transition with a future blog post, the Podman tool comes with extra commands (under development) that are not available in the Docker CLI. The following example generates a YAML deployment file and the corresponding service from an existing container. Please note, however, that containers with volumes are not supported yet.

The container definition is as follows:

$ sudo podman run -d -e 'ACCEPT_EULA=Y' -e \
'MSSQL_SA_PASSWORD=Password1'  \
--name 'sqltestwithnovolumes' \
-p 1480:1433 \
mcr.microsoft.com/mssql/rhel/server:2019-CTP3.1
7e99581eaec4c91d7c13af4525bfb3805d5b56e675fdb53d0061c231294cd442

 

And we get the corresponding YAML file generated by the Podman command:

$ sudo podman generate kube -s 7e99
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-1.0.2-dev
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: 2019-07-24T03:52:18Z
  labels:
    app: sqltestwithnovolumes
  name: sqltestwithnovolumes
spec:
  containers:
  - command:
    - /opt/mssql/bin/sqlservr
    env:
    - name: PATH
      value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    - name: TERM
      value: xterm
    - name: HOSTNAME
    - name: container
      value: oci
    - name: ACCEPT_EULA
      value: "Y"
    - name: MSSQL_SA_PASSWORD
      value: Password1
    image: mcr.microsoft.com/mssql/rhel/server:2019-CTP3.1
    name: sqltestwithnovolumes
    ports:
    - containerPort: 1433
      hostPort: 1480
      protocol: TCP
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: false
    workingDir: /
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2019-07-24T03:52:18Z
  labels:
    app: sqltestwithnovolumes
  name: sqltestwithnovolumes
spec:
  ports:
  - name: "1433"
    nodePort: 30309
    port: 1433
    protocol: TCP
    targetPort: 0
  selector:
    app: sqltestwithnovolumes
  type: NodePort
status:
  loadBalancer: {}

 

By default, a service of type NodePort has been created by the command. This latest command needs further testing for sure!
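
As the header of the generated file suggests, the output can be saved and imported into a Kubernetes cluster. A quick sketch (the file name is an assumption):

$ sudo podman generate kube -s 7e99 > sqltestwithnovolumes.yml
$ kubectl create -f sqltestwithnovolumes.yml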

See you

Cet article Deploying SQL Server 2019 container on RHEL 8 with podman est apparu en premier sur Blog dbi services.

Rsync DBFS To ACFS For GoldenGate Trail Migration

Michael Dinh - Wed, 2019-07-24 09:25

Planning to move GoldenGate trail files from DBFS to ACFS.

This is pre-work before actual migration to stress IO for ACFS.

Learned some cron along the way.

# Run every 2 hours at even hours
0 */2 * * * /home/oracle/working/dinh/acfs_ggdata02_rsync.sh > /tmp/rsync_acfs_ggdata_to_ggdata02.log 2>&1

# Run every 2 hours at odd hours
0 1-23/2 * * * /home/oracle/working/dinh/acfs_ggdata02_rsync.sh > /tmp/rsync_acfs_ggdata_to_ggdata02.log 2>&1

Syntax and output.

+ /bin/rsync -vrpogt --delete-after /DBFS/ggdata/ /ACFS/ggdata
building file list ... done

dirchk/E_SOURCE.cpe
dirchk/P_TARGET.cpe

dirdat/
dirdat/aa000307647
dirdat/aa000307648
.....
dirdat/aa000307726
dirdat/aa000307727

deleting dirdat/aa000306741
deleting dirdat/aa000306740
.....
deleting dirdat/aa000306662
deleting dirdat/aa000306661

sent 16,205,328,959 bytes  received 1,743 bytes  140,305,893.52 bytes/sec
total size is 203,021,110,174  speedup is 12.53

real	1m56.671s
user	1m24.643s
sys	0m45.875s

+ '[' 0 '!=' 0 ']'

+ /bin/diff -rq /DBFS/ggdata /ACFS/ggdata

Files /DBFS/ggdata/dirchk/E_SOURCE.cpe and /ACFS/ggdata/dirchk/E_SOURCE.cpe differ
Files /DBFS/ggdata/dirchk/P_TARGET.cpe and /ACFS/ggdata/dirchk/P_TARGET.cpe differ

Only in /ACFS/ggdata/dirdat: aa000306742
Only in /ACFS/ggdata/dirdat: aa000306743
Only in /ACFS/ggdata/dirdat: aa000306744
Only in /ACFS/ggdata/dirdat: aa000306745

Only in /DBFS/ggdata/dirdat: aa000307728
Only in /DBFS/ggdata/dirdat: aa000307729

real	69m15.207s
user	2m9.242s
sys	17m3.846s

+ ls /DBFS/ggdata/dirdat/
+ wc -l
975

+ ls -alrt /DBFS/ggdata/dirdat/
+ head
total 190631492
drwxrwxrwx 24 root    root             0 Feb  9  2018 ..
-rw-r-----  1 ggsuser oinstall 199999285 Mar  8  2018 .fuse_hidden001a3c47000001c5
-rw-r-----  1 ggsuser oinstall 199999896 May 23 00:23 .fuse_hidden000002b500000001
-rw-r-----  1 ggsuser oinstall 199999934 Jul 23 06:11 aa000306798
-rw-r-----  1 ggsuser oinstall 199999194 Jul 23 06:13 aa000306799
-rw-r-----  1 ggsuser oinstall 199999387 Jul 23 06:14 aa000306800
-rw-r-----  1 ggsuser oinstall 199999122 Jul 23 06:16 aa000306801
-rw-r-----  1 ggsuser oinstall 199999172 Jul 23 06:19 aa000306802
-rw-r-----  1 ggsuser oinstall 199999288 Jul 23 06:19 aa000306803

+ ls -alrt /DBFS/ggdata/dirdat/
+ tail
-rw-r-----  1 ggsuser oinstall 199999671 Jul 24 07:59 aa000307764
-rw-r-----  1 ggsuser oinstall 199999645 Jul 24 08:01 aa000307765
-rw-r-----  1 ggsuser oinstall 199998829 Jul 24 08:02 aa000307766
-rw-r-----  1 ggsuser oinstall 199998895 Jul 24 08:04 aa000307767
-rw-r-----  1 ggsuser oinstall 199999655 Jul 24 08:05 aa000307768
-rw-r-----  1 ggsuser oinstall 199999930 Jul 24 08:07 aa000307769
-rw-r-----  1 ggsuser oinstall 199999761 Jul 24 08:09 aa000307770
-rw-r-----  1 ggsuser oinstall 199999421 Jul 24 08:11 aa000307771
-rw-r-----  1 ggsuser oinstall   7109055 Jul 24 08:11 aa000307772

+ ls /ACFS/ggdata/dirdat/
+ wc -l
986

+ ls -alrt /ACFS/ggdata/dirdat/
+ head
total 194779104
drwxrwxrwx 24 root    root          8192 Feb  9  2018 ..
-rw-r-----  1 ggsuser oinstall 199999285 Mar  8  2018 .fuse_hidden001a3c47000001c5
-rw-r-----  1 ggsuser oinstall 199999896 May 23 00:23 .fuse_hidden000002b500000001
-rw-r-----  1 ggsuser oinstall 199998453 Jul 23 04:55 aa000306742
-rw-r-----  1 ggsuser oinstall 199999657 Jul 23 04:56 aa000306743
-rw-r-----  1 ggsuser oinstall 199999227 Jul 23 04:57 aa000306744
-rw-r-----  1 ggsuser oinstall 199999389 Jul 23 04:59 aa000306745
-rw-r-----  1 ggsuser oinstall 199999392 Jul 23 05:00 aa000306746
-rw-r-----  1 ggsuser oinstall 199999116 Jul 23 05:01 aa000306747

+ ls -alrt /ACFS/ggdata/dirdat/
+ tail
-rw-r-----  1 ggsuser oinstall 199999876 Jul 24 06:48 aa000307719
-rw-r-----  1 ggsuser oinstall 199999751 Jul 24 06:50 aa000307720
-rw-r-----  1 ggsuser oinstall 199999918 Jul 24 06:51 aa000307721
-rw-r-----  1 ggsuser oinstall 199999404 Jul 24 06:52 aa000307722
-rw-r-----  1 ggsuser oinstall 199999964 Jul 24 06:54 aa000307723
-rw-r-----  1 ggsuser oinstall 199999384 Jul 24 06:56 aa000307724
-rw-r-----  1 ggsuser oinstall 199999283 Jul 24 06:57 aa000307725
-rw-r-----  1 ggsuser oinstall 199998033 Jul 24 06:59 aa000307726
-rw-r-----  1 ggsuser oinstall 199999199 Jul 24 07:00 aa000307727
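
Putting the traced commands together, a minimal sketch of what acfs_ggdata02_rsync.sh may look like (the script itself is not shown, so the shebang, the error check and the exact ordering are reconstructed from the xtrace output above):

#!/bin/bash
set -x
# Mirror GoldenGate trail files from DBFS to ACFS; delete extraneous target files after the transfer.
time /bin/rsync -vrpogt --delete-after /DBFS/ggdata/ /ACFS/ggdata
if [ $? != 0 ]; then
    exit 1
fi
# Sanity check: report remaining differences (trails still being written will show up here).
time /bin/diff -rq /DBFS/ggdata /ACFS/ggdata
# Quick eyeball of both sides.
ls /DBFS/ggdata/dirdat/ | wc -l
ls -alrt /DBFS/ggdata/dirdat/ | head
ls -alrt /DBFS/ggdata/dirdat/ | tail
ls /ACFS/ggdata/dirdat/ | wc -l
ls -alrt /ACFS/ggdata/dirdat/ | head
ls -alrt /ACFS/ggdata/dirdat/ | tail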

Oracle EBS 12.2 - ADOP ad_zd_prep.create_patch_service exact fetch returns more than requested number of rows

Senthil Rajendran - Wed, 2019-07-24 02:29
Oracle EBS 12.2 - ADOP ad_zd_prep.create_patch_service exact fetch returns more than requested number of rows

Please note, if you get ADOP issues on PROD, please read the logs and understand the problem before executing any commands.

SQL> exec ad_zd_prep.create_patch_service;
BEGIN ad_zd_prep.create_patch_service; END;

*
ERROR at line 1:
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at "APPS.AD_ZD_PREP", line 342
ORA-06512: at "APPS.AD_ZD_PREP", line 378
ORA-06512: at line 1
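
The ORA-01422 here typically means that a SELECT ... INTO inside AD_ZD_PREP fetched duplicate rows from FND_OAM_CONTEXT_FILES, which is why the quick fix below backs up and truncates that table. A hedged way to confirm the duplicates first (the grouping columns are an assumption based on the standard table definition):

SQL> select name, node_name, count(*)
     from fnd_oam_context_files
     group by name, node_name
     having count(*) > 1;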


A Quick fix without autoconfig


  1. SQL> create table fnd_oam_bkup1 as select * from fnd_oam_context_files;
  2. SQL> truncate table fnd_oam_context_files;
  3. move the DB context file to the MT
  4. Load the Database Context File
    • $ADJVAPRG oracle.apps.ad.autoconfig.oam.CtxSynchronizer action=upload contextfile=/tmp/test_dbserver.xml
  5. Load the Run File System Context File
    • $ADJVAPRG oracle.apps.ad.autoconfig.oam.CtxSynchronizer action=upload contextfile=/u01/test/appl/fs1/inst/apps/test_mtserver/appl/admin/test_mtserver.xml
  6. Load the Patch File System Context File
    • $ADJVAPRG oracle.apps.ad.autoconfig.oam.CtxSynchronizer action=upload contextfile=/u01/test/appl/fs2/inst/apps/test_mtserver/appl/admin/test_mtserver.xml
At the next maintenance window, run Autoconfig on the database and middle tier.

SQL> exec ad_zd_prep.create_patch_service; -- this will succeed

or, if you are running ADOP, the prepare phase will succeed.


Oracle EBS 12.2 - ADOP ad_zd_prep.create_patch_service exceptions

Senthil Rajendran - Wed, 2019-07-24 02:23

Oracle EBS 12.2 - ADOP ad_zd_prep.create_patch_service exceptions

Please note, if you get ADOP issues on PROD, please read the logs and understand the problem before executing any commands.



There are cases where you might have to create a patch service manually. Usually after cloning, a fresh install, adding a node, mass patching or an upgrade, this crazy error will pop up.

SQL> exec ad_zd_prep.create_patch_service;
BEGIN ad_zd_prep.create_patch_service; END;

*
ERROR at line 1:
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at "APPS.AD_ZD_PREP", line 342
ORA-06512: at "APPS.AD_ZD_PREP", line 378
ORA-06512: at line 1

Check the state of the package

SQL> select owner, object_name,status,object_type from dba_objects where object_name like upper ('AD_ZD_PREP');

OWNER                                    OBJECT_NAME                              STATUS  OBJECT_TYPE
---------------------------------------- ---------------------------------------- ------- -----------------------
APPS                                     AD_ZD_PREP                               VALID   PACKAGE
APPS                                     AD_ZD_PREP                               INVALID PACKAGE BODY


SQL> alter package apps.ad_zd_prep compile body;

Warning: Package Body altered with compilation errors.

SQL> show error
Errors for PACKAGE BODY APPS.AD_ZD_PREP:

LINE/COL ERROR
-------- -----------------------------------------------------------------
463/3    PL/SQL: SQL Statement ignored
463/19   PL/SQL: ORA-00942: table or view does not exist
467/5    PL/SQL: SQL Statement ignored
467/21   PL/SQL: ORA-00942: table or view does not exist
501/5    PL/SQL: SQL Statement ignored
501/21   PL/SQL: ORA-00942: table or view does not exist


Fix: follow this note:
12.2 E-Business Suite DBA Upgrade Script ADZDEXRPT.sql Fails On Invalid Object 'AD_ZD_PREP' Or Patch 13543062 Fails With Error: ORA-06508: PL/SQL: could not find APPS.AD_ZD_PREPAD_ZD_PREP Due To Missing Package 'xdb_migrateschema' (Doc ID 2066607.1)

1. Confirm if the package 'xdb_migrateschema' exists using the following select statement:

select owner, object_name from dba_objects where object_name like upper ('xdb_migrateschema');

If it does NOT exist, then use the following scripts to create it and confirm results again using the above query:

conn / as sysdba

@?/rdbms/admin/dbmsxdbschmig.sql

@?/rdbms/admin/prvtxdbschmig.plb

2. Run adgrants.sql.

3. Recompile the package AD_ZD_PREP and ensure it compiles successfully.

4. Continue with the upgrade and confirm online patching is successfully enabled.


Move the adgrants.sql from MT to DB before running it.



exec ad_zd_prep.create_patch_service  -- this should succeed.


Oracle EBS 12.2 - ADOP patching on a downtime mode with [ERROR] Patch service is not exist or running

Senthil Rajendran - Wed, 2019-07-24 02:12
ADOP patching on a downtime mode with [ERROR]     Patch service  is not exist or running

Please note, if you get ADOP issues on PROD, please read the logs and understand the problem before executing any commands.

When applying adop patches in downtime mode, you might see an error that the patch service does not exist or is not running. Please note that NOT all patches are meant to be applied in downtime mode, so read the patch readme to understand the patching process. As an exception breaking the rule book, such patches can be applied to a development environment, but you have to know how to deal with post-adop failure cases.

In this case it is clear that the patch service does not exist.

adop phase=apply patches=1234567 apply_mode=downtime

Enter the APPS password:
Enter the SYSTEM password:
Enter the WLSADMIN password:

Validating credentials.

Initializing.
   Run Edition context  : /p01/ebs/appl/fs1/inst/apps/ebs_app01/appl/admin/ebs_app01.xml
   Patch edition context: /p01/ebs/appl/fs2/inst/apps/ebs_app01/appl/admin/ebs_app01.xml
   Patch file system free space: 77.95 GB

Validating system setup.
   Node registry is valid.
   [ERROR]     Patch service  is not exist or running
   [WARNING]   ETCC: The following required database fixes have not been applied to node app01:
bla bla bla
Encountered the above errors when performing database validations.
   Resolve the above errors and restart adop.

Fix: connect as the APPS user and execute
SQL>  exec ad_zd_prep.create_patch_service;

Run ADOP now and it should proceed further.



Oracle 19c Automatic Indexing: Methodology Introduction (After Today)

Richard Foote - Tue, 2019-07-23 23:27
For the past month or so I’ve been playing around extensively with the new Oracle 19c “Automatic Indexing” feature, so I thought it was time to start blogging about it. Considering it’s only in “Version 1” status, my initial impression is very positive in that it works extremely well at what it’s initially designed […]
Categories: DBA Blogs

Customizing DML in an APEX Interactive Grid

The Anti-Kyte - Tue, 2019-07-23 16:06

It should have been quite a relaxing Cricket World Cup final. After all, it was England v New Zealand. I was guaranteed to be on the winning side.
After several hours of nerve-shredding tension had failed to separate the teams, England were awarded the trophy on the basis of dumb luck (hitting more boundaries). The result was borne with stoicism by the Black Caps, whose philosophy would, in other countries, be known as “Elite Niceness”. By a cruel twist of fate, Ben Stokes – England’s star all-rounder and Man of the Match – was actually born in Christchurch.
Oracle APEX has its own star all-rounder in the shape of the Editable Interactive Grid (see what I did there?)
As well as presenting information in the same way as an Interactive Report, it allows users to perform DML operations on the records it displays – provided it’s based on a single table.
What we’re going to look at here is how to base an Interactive Grid (IG) on a Query rather than a table whilst retaining the ability to perform DML operations on the displayed records. To achieve this, we’ll be customizing the PL/SQL that is executed when a DML operation is invoked in the IG.

The Application

For what follows, I’ll be using APEX 18.2 running against an Oracle 18cXE database.

We have two related tables which hold information about Men’s 50-over Cricket World Cup Finals :

The tables were created as follows :

create table teams(
    cid varchar2(3) primary key,
    team_name varchar2(100) not null)
/

create table finals(
    tournament_year number(4) primary key,
    date_played date,
    venue varchar2(100),
    winning_tcid varchar2(3) references teams(cid),
    losing_tcid varchar2(3) references teams(cid),
    winning_margin varchar2(100))
/    

… and have been populated with some data. The TEAMS table first…

insert into teams( cid, team_name)
values('AUS', 'AUSTRALIA');

insert into teams( cid, team_name)
values('ENG', 'ENGLAND');

insert into teams( cid, team_name)
values('RSA', 'SOUTH AFRICA');

insert into teams( cid, team_name)
values('WI', 'WEST INDIES');

insert into teams( cid, team_name)
values('IND', 'INDIA');

insert into teams( cid, team_name)
values('NZL', 'NEW ZEALAND');

insert into teams( cid, team_name)
values('PAK', 'PAKISTAN');

insert into teams( cid, team_name)
values('SL', 'SRI LANKA');

insert into teams( cid, team_name)
values('ZIM', 'ZIMBABWE');

insert into teams( cid, team_name)
values('BAN', 'BANGLADESH');

insert into teams( cid, team_name)
values('AFG', 'AFGHANISTAN');

insert into teams( cid, team_name)
values('IRL', 'IRELAND');

commit;

…and then FINALS…

insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
values(1975, to_date('21-JUN-1975', 'DD-MON-YYYY'), 'LORDS', 'WI', 'AUS', '17 runs');

insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
values(1979, to_date('23-JUN-1979' , 'DD-MON-YYYY'), 'LORDS', 'WI', 'ENG', '92 runs');

insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
values(1983, to_date('25-JUN-1983' , 'DD-MON-YYYY'), 'LORDS', 'IND', 'WI', '43 runs');

insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
values(1987, to_date('08-NOV-1987' , 'DD-MON-YYYY'), 'EDEN GARDENS', 'AUS', 'ENG', '7 runs');

insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
values(1992, to_date('25-MAR-1992' , 'DD-MON-YYYY'), null, 'PAK', 'ENG', '22 runs');

-- deliberate mistake to be corrected later
insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
values(1997, to_date('17-MAR-1996' , 'DD-MON-YYYY'), 'QADDAFI STADIUM', 'SL', 'AUS', '8 wickets');

commit;

The data that we wish to present to application users can be retrieved with the following query :

select f.tournament_year, f.date_played, 
    initcap(f.venue) as venue,
    initcap(win.team_name) as winners,
    initcap(ru.team_name) as runners_up,
    f.winning_margin
from finals f
inner join teams win on f.winning_tcid = win.cid
inner join teams ru on f.losing_tcid = ru.cid
/

There are a couple of issues with the data as it stands so we want users to be able to edit the existing application records and add new ones.
As we’re using APEX, it would be good if we could use an Editable Interactive Grid as this would mean only needing to write a single page to handle all of these actions.
Of course, we could simply create a view using this query and then knock-up an Instead Of trigger to handle any DML. Alternatively…

Creating the Interactive Grid

The first step is to create a Region…

…and define it as an IG…

…using the above query as the Source SQL Query

When we run this (after saving our changes), we can see that the ROW_SELECTOR and ROW_ACTION widgets are missing :

Making the IG Editable

In order to persuade APEX to add these widgets, we need to make the IG Editable. We can do this by going to the Region’s Attributes and setting the Edit Enabled property to Yes

Among other things, this automatically creates a Save Interactive Grid Data process :

However, if we attempt to run the page now (after saving these changes), we’ll hit an error…

…so we need to select a Primary Key.
TOURNAMENT_YEAR will fulfil this purpose in our IG, so we just need to adjust the properties of the column :

When we save the change and run the page we can see that the ROW_SELECTOR and ROW_ACTION are now present :

The IG is not based on a table or view, remember, so we still need to tell APEX what to do when any DML actions are initiated by the user.

Customising the Save Interactive Grid Data process

Returning to the Processing Tab in the Page Designer we need to change the Type of this process to PL/SQL Code :

In the Source PL/SQL Code box, we need to enter a PL/SQL block which will be run whenever the Page processes a DML action.
In order to tell which specific DML action a row is subject to, we can look at the value of the built-in APEX$ROW_STATUS variable.
The possible values are :

  • C – for Create
  • U – Update
  • D – Delete

Therefore, one approach for our PL/SQL block would be simply to include the actual DML statements we want to execute in-line like this :

declare 
    l_win_tcid finals.winning_tcid%type;
    l_ru_tcid finals.losing_tcid%type;
    
    cursor c_team_cid( i_name teams.team_name%type)
    is
        select cid
        from teams
        where team_name = upper(i_name);
begin
    -- reset the variables for each pass through this process
    l_win_tcid := null;
    l_ru_tcid := null;
    
    if :APEX$ROW_STATUS = 'D' then
        -- DELETE the record
        delete from finals
        where tournament_year = :TOURNAMENT_YEAR;
        
    else

        -- As we're either doing an UPDATE or an INSERT, we need to find the 
        -- CID value for each of the team names ( if specified)

        if :WINNERS is not null then
            open c_team_cid(:WINNERS);
            fetch c_team_cid into l_win_tcid;
            close c_team_cid;
        end if;

        if :RUNNERS_UP is not null then
            open c_team_cid(:RUNNERS_UP);
            fetch c_team_cid into l_ru_tcid;
            close c_team_cid;
        end if;
        
        if :APEX$ROW_STATUS = 'U' then
            -- UPDATE the record
            -- Note that, although DATE_PLAYED is a DATE field, the bind variable
            -- contains a string so we need to handle the conversion to a date here
            update finals
            set date_played = nvl(to_date(:DATE_PLAYED, sys_context('userenv', 'nls_date_format')), date_played),
                venue = nvl(:VENUE, venue),
                winning_tcid = nvl(l_win_tcid, winning_tcid),
                losing_tcid = nvl(l_ru_tcid, losing_tcid),
                winning_margin = nvl(:WINNING_MARGIN, winning_margin)
            where tournament_year = :TOURNAMENT_YEAR;
        
        elsif :APEX$ROW_STATUS = 'C' then
            -- CREATE (INSERT) as new record
            -- We need to return the Primary Key of the new record as APEX will
            -- use it to refresh the IG display and show the newly created row
            insert into finals( tournament_year, date_played, venue, 
                winning_tcid, losing_tcid, winning_margin)
            values( :TOURNAMENT_YEAR, to_date(:DATE_PLAYED, sys_context('userenv', 'nls_date_format')), :VENUE,
                l_win_tcid, l_ru_tcid, :WINNING_MARGIN)
            returning tournament_year into :TOURNAMENT_YEAR;
        end if;
    end if;
end;

The main points to note are :

  • The possible values of APEX$ROW_STATUS
  • the bind variables for the IG column values return a string, hence the explicit date conversion of :DATE_PLAYED
  • when the insert code generates a new primary key value (e.g. an ID taken from a sequence), APEX needs to know what it is so that it can display the new record once processing is completed.

On that last point, I believe that we may not need to return the PK value in this case because we’re already providing it explicitly in the UI. I’ve left it here as an illustration that this may be the case in other circumstances.

Anyhow, let’s give this a test.
First of all, we’re going to update the 1992 record with the venue :

The record for 1997 is wrong. The Tournament actually took place in 1996, so we’ll create a corrected record and delete the incorrect one :

In both cases, we get a success message :

…and if we check in the database, we can see that the DML has worked as expected :

NOTE : for any cricketing pedants reading – yes, I know that Sri Lanka won by 7 wickets, not 8. I’ll be correcting this when I “notice it” in a few paragraphs time.

At this point you may be less than thrilled at the prospect of having to maintain large chunks of PL/SQL in your APEX application.
Luckily for you, as the process code is a PL/SQL block, you can do pretty much anything you like…

Calling Stored Program Units from our APEX process

I’ve created a database package to handle DML operations on the FINALS table :

create or replace package edit_finals as

    procedure save_final( 
        i_year in finals.tournament_year%type, 
        i_date in finals.date_played%type,
        i_venue in finals.venue%type default null,
        i_winners in teams.team_name%type default null,
        i_losers in teams.team_name%type default null,
        i_margin in finals.winning_margin%type default null,
        o_year out finals.tournament_year%type);

    procedure delete_final( i_year in finals.tournament_year%type);
end edit_finals;
/

create or replace package body edit_finals as

    function get_team_cid( i_team in teams.team_name%type)
        return teams.cid%type 
    is 
        rtn_cid teams.cid%type;
    begin
        select cid 
        into rtn_cid
        from teams 
        where team_name = upper( i_team);

        return rtn_cid;
    end get_team_cid;

    procedure save_final( 
        i_year in finals.tournament_year%type, 
        i_date in finals.date_played%type,
        i_venue in finals.venue%type default null,
        i_winners in teams.team_name%type default null,
        i_losers in teams.team_name%type default null,
        i_margin in finals.winning_margin%type default null,
        o_year out finals.tournament_year%type)
    is 
        win_tcid teams.cid%type := null;
        ru_tcid teams.cid%type := null;
    begin
        if i_winners is not null then 
            win_tcid := get_team_cid( i_winners);
        end if;

        if i_losers is not null then 
            ru_tcid := get_team_cid( i_losers);
        end if;

        merge into finals 
            using dual 
            on ( tournament_year = i_year)
        when matched then update
            set date_played = nvl(i_date, date_played),
                venue = nvl(i_venue, venue),
                winning_tcid = nvl(win_tcid, winning_tcid),
                losing_tcid = nvl(ru_tcid, losing_tcid),
                winning_margin = nvl(i_margin, winning_margin)
            where tournament_year = i_year
        when not matched then 
            insert( tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
            values( i_year, i_date, i_venue, win_tcid, ru_tcid, i_margin);

        o_year := i_year;
    end save_final;

    procedure delete_final( i_year in finals.tournament_year%type) is 
    begin
        delete from finals
        where tournament_year = i_year;
    end delete_final;
end edit_finals;
/

This means that the PL/SQL code for the Save IG Process is a bit more compact :

begin
    if :APEX$ROW_STATUS = 'D' then
        edit_finals.delete_final(i_year => :TOURNAMENT_YEAR);
    elsif :APEX$ROW_STATUS in ('C', 'U') then
        edit_finals.save_final( 
           i_year => :TOURNAMENT_YEAR, 
           i_date => to_date(:DATE_PLAYED, sys_context('userenv', 'nls_date_format')),
           i_venue => :VENUE,
           i_winners => :WINNERS,
           i_losers => :RUNNERS_UP,
           i_margin => :WINNING_MARGIN,
           o_year => :TOURNAMENT_YEAR);
   end if;
end;

The behaviour is the same…

…which we can confirm in the database…

The benefit of this approach is that, if you decide to migrate from APEX to another front-end technology, the Package is there in the database and does not need to change.

References

I’ve not managed to find too much else out there on this topic, hence this post.
There is a very good forum post by Patrick Wolf.
This Oracle Tips and Tricks article may be worth a read.
Finally, there’s this rather comprehensive look at Interactive Grids by John Snyders.

SQL Server 2019 availability group R/W connection redirection, routing mesh and load balancing

Yann Neuhaus - Tue, 2019-07-23 10:59

The SQL Server 2019 availability group feature will provide secondary-to-primary replica read/write connection redirection. I wrote about it in a previous blog post here. It consists in redirecting client application connections to the primary replica regardless of the target server specified in the connection string. That’s pretty interesting in some scenarios, such as read scale-out or specific multi-subnet configurations where creating the traditional AG listener is not a viable option.

The new R/W connection redirection capability does the job, but the one-million-dollar question here is: what happens if one of the replicas specified in my connection string becomes suddenly unavailable? Referring to the BOL, the connection will fail regardless of the role the replica on the target server plays, but we can mitigate the issue by introducing the failover partner parameter in the connection string. As a reminder, the Failover Partner keyword in the connection string works in a database mirroring setup and prevents prolonged application downtime. But from my point of view, we could likely go another way and benefit from all the power of this new availability group feature by introducing a load balancer on top of this topology, as we could do with Docker Swarm or K8s architectures. Indeed, if we take a closer look, this new mechanism provided by SQL Server 2019 is pretty similar to the routing mesh capabilities of container orchestrators, with the same advantages and weaknesses as well. I wrote a blog post about Docker Swarm architectures where we need to implement a proxy to load balance the traffic to avoid getting stuck with the routing mesh capability when a node gets unhealthy.
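
For reference, a hedged sketch of what such a connection string could look like with the failover partner parameter, reusing the replica names of this demo (the Failover Partner keyword is handled by the client library, not by the availability group itself):

$connectionString = "Server=WIN20191;Failover Partner=WIN20192;uid=sa; pwd=xxxx;Integrated Security=False;Initial Catalog=dummy2;pooling=false"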

I just applied the same kind of configuration by using an HAProxy (but you can use your own, obviously) with my availability group topology, and the behavior was basically the same. Here is the intended behavior:

 

Here is the configuration of my HAProxy, including my 3 AG replicas (WIN20191, WIN20192, WIN20193) and a round-robin algorithm at the bottom:

…
backend rserve_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server WIN20191 192.168.40.205:1433 check
    server WIN20192 192.168.40.206:1433 check
    server WIN20193 192.168.40.207:1433 check
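
Not shown above is the matching frontend section that listens on port 81 and forwards the TCP traffic to this backend. A minimal sketch of what it may look like (the section name is an assumption; the port comes from the test below):

frontend mssql_frontend
    bind *:81
    mode tcp
    option tcplog
    default_backend rserve_backend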

 

Let’s give it a try with connections directly to my HAProxy, which listens on port 81 in my test scenario. Note that for this first test I will connect to the master database to force the local connection to stick to each replica rather than going through the R/W redirection process. The goal is to check if the round-robin algorithm comes into play…

$connectionString = "Server=192.168.40.14,81;uid=sa; pwd=xxxx;Integrated Security=False;Initial Catalog=master;pooling=false”

$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString
$connection.Open()

$sqlCommandText="SELECT 'Current server : ' + @@SERVERNAME AS server_name"
$sqlCommand = New-Object system.Data.sqlclient.SqlCommand($sqlCommandText,$connection)
$sqlCommand.ExecuteScalar()

$connection.Close()
$connection.Dispose()

 

… and that’s the case, as shown below:

Test connexion initial server nb : 0 - 192.168.40.14,81 - Current server : WIN20191
Test connexion initial server nb : 1 - 192.168.40.14,81 - Current server : WIN20192
Test connexion initial server nb : 2 - 192.168.40.14,81 - Current server : WIN20193
Test connexion initial server nb : 3 - 192.168.40.14,81 - Current server : WIN20191
Test connexion initial server nb : 4 - 192.168.40.14,81 - Current server : WIN20192
Test connexion initial server nb : 5 - 192.168.40.14,81 - Current server : WIN20193

 

Let’s now try forcing the R/W redirection. This time I set the correct target database, named dummy2, for my availability group AG2019.

$connectionString = "Server=192.168.40.14,81;uid=sa; pwd=xxxx;Integrated Security=False;Initial Catalog=dummy2;pooling=false”

$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString
$connection.Open()

$sqlCommandText="SELECT 'Current server : ' + @@SERVERNAME AS server_name"
$sqlCommand = New-Object system.Data.sqlclient.SqlCommand($sqlCommandText,$connection)
$sqlCommand.ExecuteScalar()

$connection.Close()
$connection.Dispose()

Test connexion initial server nb : 0 - 192.168.40.14,81 - Current server : WIN20191
Test connexion initial server nb : 1 - 192.168.40.14,81 - Current server : WIN20191
Test connexion initial server nb : 2 - 192.168.40.14,81 - Current server : WIN20191
Test connexion initial server nb : 3 - 192.168.40.14,81 - Current server : WIN20191
Test connexion initial server nb : 4 - 192.168.40.14,81 - Current server : WIN20191
Test connexion initial server nb : 5 - 192.168.40.14,81 - Current server : WIN20191

 

This time the R/W redirection takes effect and each established connection is redirected to my primary replica, WIN20191.

Finally, let’s simulate an outage of one of my replicas, say the WIN20193 replica, by turning it off, and let’s see what happens below:

Test connexion initial server nb : 32 - 192.168.40.14,81 - Current server : WIN20191
Test connexion initial server nb : 33 - 192.168.40.14,81 - Current server : WIN20191
Test connexion initial server nb : 34 - 192.168.40.14,81 - Current server : WIN20191
Test connexion initial server nb : 35 - 192.168.40.14,81 - Current server : WIN20191
Test connexion initial server nb : 36 - 192.168.40.14,81 - Current server : WIN20191
Test connexion initial server nb : 37 - 192.168.40.14,81 - Current server : WIN20191
Test connexion initial server nb : 38 - 192.168.40.14,81 - Current server : WIN20191
Test connexion initial server nb : 39 - 192.168.40.14,81 - Current server : WIN20191
Test connexion initial server nb : 40 - 192.168.40.14,81 - Current server : WIN20191

 

Well, from a connection perspective nothing has changed: the HAProxy continues to load balance connections between the remaining healthy replicas, and the R/W connection redirection mechanism still continues to come into play as well. A quick look at the HAProxy log indicates that the WIN20193 replica got unhealthy and the HAProxy has evicted this replica from the game.

[WARNING] 203/063813 (1772) : Health check for server rserve_backend/WIN20193 failed, reason: Layer4 timeout, check durati               on: 2001ms, status: 2/3 UP.
[WARNING] 203/063818 (1772) : Health check for server rserve_backend/WIN20193 failed, reason: Layer4 timeout, check durati               on: 2002ms, status: 1/3 UP.
[WARNING] 203/063822 (1772) : Health check for server rserve_backend/WIN20193 failed, reason: Layer4 timeout, check durati               on: 2001ms, status: 0/2 DOWN.
[WARNING] 203/063822 (1772) : Server rserve_backend/WIN20193 is DOWN. 2 active and 0 backup servers left. 2 sessions activ               e, 0 requeued, 0 remaining in queue.
[WARNING] 203/063848 (1772) : Health check for server rserve_backend/WIN20193 failed, reason: Layer4 connection problem, i               nfo: "No route to host", check duration: 4ms, status: 0/2 DOWN.

 

The new R/W redirection capability provided by Microsoft will extend the possible scenarios with availability groups for sure. With previous versions of SQL Server, using a load balancer was limited to R/O workloads, but SQL Server 2019 will probably change the game on this topic. Let’s see what happens in the future!

Cet article SQL Server 2019 availability group R/W connection redirection, routing mesh and load balancing est apparu en premier sur Blog dbi services.

Check Cluster Resources Where Target != State

Michael Dinh - Tue, 2019-07-23 10:32

Current version.

[oracle@racnode-dc2-1 patch]$ cat /etc/oratab
#Backup file is  /u01/app/12.2.0.1/grid/srvm/admin/oratab.bak.racnode-dc2-1 line added by Agent
-MGMTDB:/u01/app/12.2.0.1/grid:N
hawk1:/u01/app/oracle/12.2.0.1/db1:N
+ASM1:/u01/app/12.2.0.1/grid:N          # line added by Agent
[oracle@racnode-dc2-1 patch]$

Kill database instance process.

[oracle@racnode-dc2-1 patch]$ ps -ef|grep pmon
oracle   13542     1  0 16:09 ?        00:00:00 asm_pmon_+ASM1
oracle   27663     1  0 16:39 ?        00:00:00 ora_pmon_hawk1
oracle   29401 18930  0 16:40 pts/0    00:00:00 grep --color=auto pmon
[oracle@racnode-dc2-1 patch]$
[oracle@racnode-dc2-1 patch]$ kill -9 27663
[oracle@racnode-dc2-1 patch]$

Check cluster resource – close but no cigar (false positive: the OR predicate also matches resources that are intentionally offline, i.e. TARGET=OFFLINE, like ora.proxy_advm below).

[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w '(TARGET != ONLINE) or (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.proxy_advm
               OFFLINE OFFLINE      racnode-dc2-1            STABLE
               OFFLINE OFFLINE      racnode-dc2-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      3        OFFLINE OFFLINE                               STABLE
ora.hawk.db
      1        ONLINE  OFFLINE      racnode-dc2-1            Instance Shutdown,ST
                                                             ARTING
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 patch]$

Check cluster resource – BINGO!

[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w '(TARGET = ONLINE) and (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE      racnode-dc2-1            Instance Shutdown,ST
                                                             ARTING
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 patch]$

Another example:

[oracle@racnode-dc2-1 ~]$ crsctl stat res -t -w '(TARGET = ONLINE) and (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  INTERMEDIATE racnode-dc2-2            STABLE
ora.DATA.dg
               ONLINE  INTERMEDIATE racnode-dc2-2            STABLE
ora.FRA.dg
               ONLINE  INTERMEDIATE racnode-dc2-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE      racnode-dc2-1            Instance Shutdown,ST
                                                             ARTING
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 ~]$

Learned something here.

[oracle@racnode-dc2-1 ~]$ crsctl stat res -v -w 'TYPE = ora.database.type'
NAME=ora.hawk.db
TYPE=ora.database.type
LAST_SERVER=racnode-dc2-1
STATE=ONLINE on racnode-dc2-1
TARGET=ONLINE
CARDINALITY_ID=1
OXR_SECTION=0
RESTART_COUNT=0
***** FAILURE_COUNT=1 
***** FAILURE_HISTORY=1564015051:racnode-dc2-1
ID=ora.hawk.db 1 1
INCARNATION=4
***** LAST_RESTART=07/25/2019 02:39:38
***** LAST_STATE_CHANGE=07/25/2019 02:39:51
STATE_DETAILS=Open,HOME=/u01/app/oracle/12.2.0.1/db1
INTERNAL_STATE=STABLE
TARGET_SERVER=racnode-dc2-1
RESOURCE_GROUP=
INSTANCE_COUNT=2

LAST_SERVER=racnode-dc2-2
STATE=ONLINE on racnode-dc2-2
TARGET=ONLINE
CARDINALITY_ID=2
OXR_SECTION=0
RESTART_COUNT=0
FAILURE_COUNT=0
FAILURE_HISTORY=
ID=ora.hawk.db 2 1
INCARNATION=1
LAST_RESTART=07/25/2019 02:21:45
LAST_STATE_CHANGE=07/25/2019 02:21:45
STATE_DETAILS=Open,HOME=/u01/app/oracle/12.2.0.1/db1
INTERNAL_STATE=STABLE
TARGET_SERVER=racnode-dc2-2
RESOURCE_GROUP=
INSTANCE_COUNT=2

[oracle@racnode-dc2-1 ~]$

Check cluster resource – sanity check.

[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w '(TARGET = ONLINE) and (STATE != ONLINE)'
[oracle@racnode-dc2-1 patch]$
[oracle@racnode-dc2-1 patch]$ crsctl stat res -t -w 'TYPE = ora.database.type'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  ONLINE       racnode-dc2-1            Open,HOME=/u01/app/o
                                                             racle/12.2.0.1/db1,S
                                                             TABLE
      2        ONLINE  ONLINE       racnode-dc2-2            Open,HOME=/u01/app/o
                                                             racle/12.2.0.1/db1,S
                                                             TABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc2-1 patch]$
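
Since the refined filter returns nothing when everything is healthy, it lends itself to a simple periodic check. A minimal sketch (the mail recipient is an assumption; the Grid home comes from the oratab above):

#!/bin/bash
# Alert when any cluster resource has TARGET=ONLINE but STATE != ONLINE.
export ORACLE_HOME=/u01/app/12.2.0.1/grid
OUT=$(${ORACLE_HOME}/bin/crsctl stat res -t -w '(TARGET = ONLINE) and (STATE != ONLINE)')
if [ -n "${OUT}" ]; then
  echo "${OUT}" | mailx -s "Cluster resources not ONLINE on $(hostname)" dba@example.com
fi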

What is a Blog and What does it Stand for?

VitalSoftTech - Tue, 2019-07-23 09:49

All of us who use the internet daily are familiar with what a blog is. A blog is a web page or a website that one person or a small group regularly updates, written in a simple conversational style. But how many of us know what the word blog means or what it stands for […]

The post What is a Blog and What does it Stand for? appeared first on VitalSoftTech.

Categories: DBA Blogs

Sometimes the Simplest Things Are the Most Important

Cary Millsap - Tue, 2019-07-23 09:44
I didn’t ride in First Class a lot during my career, but I can remember one trip when I was sitting in seat 1B. Aisle seat, very front row. Right behind the lavatory bulkhead. The lady next to me in 1A was very nice, and we traded some stories.

I can remember telling her during dinner, wow, just look at us. Sitting in an aluminum tube going 500 miles per hour, 40,000 feet off the ground. It’s 50º below zero out there. Thousands of gallons of kerosene are burning in huge cans bolted to our wings, making it all go. Yet here we sit in complete comfort, enjoying a glass of wine and a steak dinner. And just three feet away from us in that lavatory right there, a grown man is evacuating his bowels.

I said, you know, out of all the inventions that have brought us to where we are here today, the very most important one is probably that wall.
