Yann Neuhaus

dbi services technical blog

OpenText Enterprise World Europe 2019 – Day 2

Wed, 2019-03-13 16:23

On Day 2 of OTEW we followed the global stream in the morning, which revisited most of yesterday's points. But we had the pleasure of attending a session by Dr. Michio Kaku, theoretical physicist, futurist and popularizer of science. He has written several books about physics and how he sees the future.


He sees us ultra-connected within the next 20 years through internet lenses. Moore's law will collapse around 2025, when basic transistors will probably be replaced by graphene technology, which will in turn, at some unknown point, be replaced by quantum computing machines (qubits instead of bits). The main issue with quantum computing is that qubits are easily disrupted by noise and electromagnetic waves (decoherence). According to him, the internet will be replaced by a brain net, thanks to new biological technologies focusing on sensations instead of visualization.

What’s new and what’s next for OpenText Documentum

We were really looking forward to this session as we, Documentum experts, were excited to see the future of this widespread technology. Micah Byrd, Director Product Management at OpenText, started by presenting the generic integration roadmap with "Content in Context", "Cloud", "LoB and industry" and "Intelligent automation", and how Documentum interprets these guidelines.

Documentum will be more and more integrated with Office 365 thanks to the new Smart View UI: a coherent solution across all platforms which allows easy and seamless integration into leading applications like Word and SAP. This is content in context.

OpenText has been aggressively pushing Documentum to the cloud for several years with different deployment options: private, managed or public cloud. With Private you keep your data in your own data center (2014-2016). With Managed your data goes to the OpenText cloud (2017-2018). With Public your data goes wherever you want, on cloud providers like AWS, Azure, Google and so on (2019). OpenText also invests in containerization with Docker and Kubernetes for "Documentum from Everywhere".

Documentum future innovations

Among the main new features is the continuous integration of Documentum into Office 365, which already supports Word and SAP and will soon (EP7 in October) support Excel, PowerPoint and Outlook. It means that you'll be able to access Documentum data from the Office applications. In addition, OpenText wants to enable bi-directional synchronization between Documentum and Core, opening up possibilities of interacting with content outside of the corporate network. Hence, the content will be synced no matter where, no matter when, in a secure and controlled way.


Next to come is also an improved content creation experience in D2, thanks to deeper integration of Brava! for annotation sharing, as well as more collaborative capabilities with SharePoint (improvement of DC4SP).


OpenText also presented a new vision of security.

D2 on mobile will come soon on iOS and Android, developed in AppWorks.

We are particularly excited about a prototype which was presented today: the Documentum Security Dashboard. It gives a quick and easy view of user activities, tracks content usage like views and downloads, and can show trends in content evolution. We hope it will be released one day.


Many more topics around Documentum components were presented, but we will not detail them here; we focused only on the main features.

Documentum D2 Demo

We had a chance to get our hands on the new D2 Smart View, which brings reactivity and modernity. Our feeling about it is: SMOOTH.


Conclusion

Another amazing day at OTEW, where we met a lot of experts and attended interesting sessions about the huge OpenText world.


Upgrading SQL Server pods on K8s and helm charts

Wed, 2019-03-13 02:08

It has been a while since my last blog post. Today it is about continuing with helm charts and how to upgrade / downgrade SQL Server containers to a specific cumulative update. It was the first write-up on my to-do list.


Last year, I wrote an introduction to SQL Server containers on K8s. I remember facing some issues when testing upgrade scenarios (probably a lack of knowledge). Since then, I have discovered helm charts; I use them intensively in my environments and they also provide upgrade / rollback capabilities.

So, the question is how to upgrade an existing SQL Server container to a new cumulative update with a helm chart?

First of all, during deployment you need to specify a strategy type. There are several strategy types and most of them address upgrade scenarios for stateless applications (ramped, blue/green, canary and a/b testing). Unfortunately, with stateful applications like RDBMSs the story is not the same, because the persistent storage cannot be accessed by several pods at the same time. In this case K8s must first stop and remove the current pod and then spin up a new pod with the new version. The "Recreate" strategy type is designed to carry out this task and addresses SQL Server pod upgrade scenarios.

My deployment file is as follows:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "mssql.fullname" . }}
  labels:
    app: {{ template "mssql.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
{{- if .Values.deployment.annotations }}
  annotations:
{{ toYaml .Values.deployment.annotations | indent 4 }}
{{- end }}
spec:
  replicas: {{ .Values.replicaCount }}
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: {{ template "mssql.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "mssql.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: ACCEPT_EULA
              value: "{{ .Values.acceptEula.value | upper }}"
            - name: MSSQL_PID
              value: "{{ .Values.edition.value }}"
            - name: MSSQL_SA_PASSWORD
              valueFrom:
               secretKeyRef:
                 name: {{ template "mssql.fullname" . }}-sa-secret
                 key: sapassword
            - name: MSSQL_TCP_PORT
              value: "{{ .Values.service.port.value }}"
            - name: MSSQL_LCID
              value: "{{ .Values.lcid.value }}"
            - name: MSSQL_COLLATION
              value: "{{ .Values.collation.value }}"
            - name: MSSQL_ENABLE_HADR
              value: "{{ .Values.hadr.value }}"
            {{ if .Values.resources.limits.memory }}
            - name: MSSQL_MEMORY_LIMIT_MB
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            {{ end }}
          ports:
            - name: mssql
              containerPort: {{ .Values.service.port.value }}
          volumeMounts:
            - name: data
              mountPath: /var/opt/mssql/data
          livenessProbe:
            tcpSocket:
               port: mssql
            initialDelaySeconds: {{ .Values.livenessprobe.initialDelaySeconds }}
            periodSeconds: {{ .Values.livenessprobe.periodSeconds }}
          readinessProbe:
            tcpSocket:
               port: mssql
            initialDelaySeconds: {{ .Values.readinessprobe.initialDelaySeconds }}
            periodSeconds: {{ .Values.readinessprobe.periodSeconds }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
    {{- end }}
      volumes:
      - name: data
      {{- if .Values.persistence.enabled }}
        persistentVolumeClaim:
          {{- if .Values.persistence.existingDataClaim }}
          claimName: {{ .Values.persistence.existingDataClaim }}
          {{- else }}
          claimName: {{ template "mssql.fullname" . }}-data
          {{- end -}}
      {{- else }}
        emptyDir: {}
      {{- end }}

 

My default values (in values.yaml) are the following:

# General parameters
acceptEula: 
  value: "Y"
edition: 
  value: "Developer"
collation: 
  value: SQL_Latin1_General_CP1_CI_AS
lcid: 
  value: 1033
hadr: 
    value: 0
# User parameters
sapassword: 
  value: Password1
# Image parameters
image:
  repository: mcr.microsoft.com/mssql/server
  tag: 2017-CU12-ubuntu
  pullPolicy: IfNotPresent
# Service parameters
service:
  type: 
    value: LoadBalancer
  port: 
    value: 1460
  annotations: {}
deployment:
  annotations: {}
# Volumes & persistence parameters
persistence:
  enabled: true
  storageClass: ""
  dataAccessMode: ReadWriteOnce
  dataSize: 5Gi
# Probe parameters
livenessprobe:
  initialDelaySeconds: 20
  periodSeconds: 15
readinessprobe:
  initialDelaySeconds: 20
  periodSeconds: 15
# Resource parameters
resources:
  limits:
  #  cpu: 100m
    memory: 3Gi
  # requests:
  #  cpu: 100m
  #  memory: 2Gi
nodeSelector: {}

You may notice I will pull a SQL Server image from the MCR with the 2017-CU12-ubuntu tag.

Let's now install the sql2017container release:

$ helm install --name sql2017container .

 

This command will install a helm release which includes, among others, a deployment, a replicaset with one pod (my SQL Server pod), a secret that contains the sa password, a persistent volume claim to persist my database files (mapped to the /var/opt/mssql/data path inside the pod) and the service to expose the pod on TCP port 1460.

$ helm status sql2017container
LAST DEPLOYED: Tue Mar 12 20:36:12 2019
NAMESPACE: ci
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                                        TYPE    DATA  AGE
sql2017container-dbi-mssql-linux-sa-secret  Opaque  1     7m7s

==> v1/PersistentVolumeClaim
NAME                                   STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
sql2017container-dbi-mssql-linux-data  Bound   pvc-18304483-44fe-11e9-a668-ca78ebdc2a19  5Gi       RWO           default       7m7s

==> v1/Service
NAME                              TYPE          CLUSTER-IP    EXTERNAL-IP     PORT(S)         AGE
sql2017container-dbi-mssql-linux  LoadBalancer  10.0.104.244  xx.xx.xx.xx  1460:31502/TCP  7m6s

==> v1beta2/Deployment
NAME                              DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
sql2017container-dbi-mssql-linux  1        1        1           1          7m6s

==> v1/Pod(related)
NAME                                               READY  STATUS   RESTARTS  AGE
sql2017container-dbi-mssql-linux-76b4f7c8f5-mmhqt  1/1    Running  0         7m6s

 

My SQL Server pod is running with the expected version and CU:

master> select @@version AS [version];
+-----------+
| version   |
|-----------|
| Microsoft SQL Server 2017 (RTM-CU12) (KB4464082) - 14.0.3045.24 (X64)
        Oct 18 2018 23:11:05
        Copyright (C) 2017 Microsoft Corporation
        Developer Edition (64-bit) on Linux (Ubuntu 16.04.5 LTS)           |
+-----------+
(1 row affected)
Time: 0.354s

 

It's now time to upgrade my pod to CU13 (the latest CU at the time of this write-up). With helm charts this task is pretty simple: I just upgrade my release with the new desired tag as follows:

$ helm upgrade sql2017container . --set=image.tag=2017-CU13-ubuntu
Release "sql2017container" has been upgraded. Happy Helming!

 

Let’s dig further into deployment stuff:

$ kubectl describe deployment sql2017container-dbi-mssql-linux

 

The interesting part is below:

Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  18m   deployment-controller  Scaled up replica set sql2017container-dbi-mssql-linux-76b4f7c8f5 to 1
  Normal  ScalingReplicaSet  1m    deployment-controller  Scaled down replica set sql2017container-dbi-mssql-linux-76b4f7c8f5 to 0
  Normal  ScalingReplicaSet  1m    deployment-controller  Scaled up replica set sql2017container-dbi-mssql-linux-799ff7979b to 1

 

In line with the deployment strategy, the deployment controller has created a new ReplicaSet (and a new SQL Server pod) accordingly. A quick check from a client tool confirms the instance has been upgraded correctly:

master> select @@version AS [version];
+-----------+
| version   |
|-----------|
| Microsoft SQL Server 2017 (RTM-CU13) (KB4466404) - 14.0.3048.4 (X64)
        Nov 30 2018 12:57:58
        Copyright (C) 2017 Microsoft Corporation
        Developer Edition (64-bit) on Linux (Ubuntu 16.04.5 LTS)           |
+-----------+
(1 row affected)
Time: 0.716s

 

Another interesting part is how SQL Server detects the new binaries and starts the upgrade process. Let's dump the SQL Server pod log. I just kept a sample of messages from the pod log to get a picture of the scripts used during the upgrade.

$ kubectl logs sql2017container-dbi-mssql-linux-799ff7979b-knqrm
2019-03-12 19:54:59.11 spid22s     Service Broker manager has started.
2019-03-12 19:54:59.44 spid6s      Database 'master' is upgrading script 'ProvisionAgentIdentity.sql' from level 234884069 to level 234884072.
2019-03-12 19:54:59.45 spid6s      Database 'master' is upgrading script 'no_op.sql' from level 234884069 to level 234884072.
2019-03-12 19:54:59.70 spid6s      Database 'master' is upgrading script 'no_op.sql' from level 234884069 to level 234884072.
….
2019-03-12 19:54:59.70 spid6s      -----------------------------------------
2019-03-12 19:54:59.70 spid6s      Starting execution of dummy.sql
2019-03-12 19:54:59.70 spid6s      -----------------------------------------
…
2019-03-12 19:55:00.24 spid6s      Starting execution of PRE_MSDB.SQL
2019-03-12 19:55:00.24 spid6s      ----------------------------------
2019-03-12 19:55:00.70 spid6s      Setting database option COMPATIBILITY_LEVEL to 100 for database 'msdb'.
2019-03-12 19:55:00.90 spid6s      -----------------------------------------
2019-03-12 19:55:00.90 spid6s      Starting execution of PRE_SQLAGENT100.SQL
2019-03-12 19:55:00.90 spid6s      -----------------------------------------
…
2019-03-12 19:55:12.09 spid6s      ----------------------------------
2019-03-12 19:55:12.09 spid6s      Starting execution of MSDB.SQL
2019-03-12 19:55:12.09 spid6s      ----------------------------------
…
2019-03-12 19:55:12.86 spid6s      -----------------------------------------
2019-03-12 19:55:12.86 spid6s      Starting execution of MSDB_VERSIONING.SQL
2019-03-12 19:55:12.86 spid6s      -----------------------------------------
…
2019-03-12 19:55:51.68 spid6s      -----------------------------------------
2019-03-12 19:55:51.68 spid6s      Starting execution of EXTENSIBILITY.SQL
2019-03-12 19:55:51.68 spid6s      -----------------------------------------
…
2019-03-12 19:56:01.51 spid6s      --------------------------------
2019-03-12 19:56:01.51 spid6s      Starting execution of Alwayson.SQL
2019-03-12 19:56:01.51 spid6s      --------------------------------
…
2019-03-12 19:56:29.17 spid6s      ------------------------------------
2019-03-12 19:56:29.17 spid6s      Moving 2005 SSIS Data to 2008 tables
2019-03-12 19:56:29.17 spid6s      ------------------------------------
…
2019-03-12 19:56:32.52 spid6s      ------------------------------------------------------
2019-03-12 19:56:32.52 spid6s      Starting execution of UPGRADE_UCP_CMDW_DISCOVERY.SQL
2019-03-12 19:56:32.52 spid6s      ------------------------------------------------------
…
2019-03-12 19:56:32.66 spid6s      ------------------------------------------------------
2019-03-12 19:56:32.66 spid6s      Starting execution of SSIS_DISCOVERY.SQL
2019-03-12 19:56:32.66 spid6s      ------------------------------------------------------
…
2019-03-12 19:56:32.83 spid6s      ------------------------------------------------------
2019-03-12 19:56:32.83 spid6s      Start provisioning of CEIPService Login
2019-03-12 19:56:32.83 spid6s      ------------------------------------------------------
…

 

A set of scripts developed by the SQL Server team runs during the SQL Server pod startup and updates different parts of the SQL Server instance.
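These messages can also be read from inside the instance, since they end up in the SQL Server error log. A minimal T-SQL sketch, using the well-known sp_readerrorlog procedure to filter the current log (parameters: log number, log type, search string):

-- search the current SQL Server error log for the upgrade script messages
EXEC sp_readerrorlog 0, 1, N'upgrading script';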

Helm provides a command to view release history …

$ helm history sql2017container
REVISION        UPDATED                         STATUS          CHART                   DESCRIPTION
1               Tue Mar 12 20:36:12 2019        SUPERSEDED      dbi-mssql-linux-1.0.0   Install complete
2               Tue Mar 12 20:53:26 2019        DEPLOYED        dbi-mssql-linux-1.0.0   Upgrade complete

 

… and to rollback to previous release revision if anything goes wrong:

$ helm rollback sql2017container 1

 

The same process applies here: the deployment controller will recreate a ReplicaSet and spin up a SQL Server pod downgraded to the previous version.

$ kubectl describe deployment sql2017container-dbi-mssql-linux
Events:
  Type    Reason             Age               From                   Message
  ----    ------             ----              ----                   -------
  Normal  ScalingReplicaSet  31m               deployment-controller  Scaled down replica set sql2017container-dbi-mssql-linux-76b4f7c8f5 to 0
  Normal  ScalingReplicaSet  31m               deployment-controller  Scaled up replica set sql2017container-dbi-mssql-linux-799ff7979b to 1
  Normal  ScalingReplicaSet  6m                deployment-controller  Scaled down replica set sql2017container-dbi-mssql-linux-799ff7979b to 0
  Normal  ScalingReplicaSet  6m (x2 over 49m)  deployment-controller  Scaled up replica set sql2017container-dbi-mssql-linux-76b4f7c8f5 to 1

 

The same set of T-SQL scripts seems to be executed again during the SQL Server pod startup, this time for downgrade purposes.
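A quick way to confirm which image the deployment now references is to query the deployment spec (a small sketch reusing the deployment name from this environment; after the rollback it should point to the CU12 tag again):

$ kubectl get deployment sql2017container-dbi-mssql-linux -o jsonpath='{.spec.template.spec.containers[0].image}'
mcr.microsoft.com/mssql/server:2017-CU12-ubuntu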

The release rollback is logged in the release history:

$ helm history sql2017container
REVISION        UPDATED                         STATUS          CHART                   DESCRIPTION
1               Tue Mar 12 20:36:12 2019        SUPERSEDED      dbi-mssql-linux-1.0.0   Install complete
2               Tue Mar 12 20:53:26 2019        SUPERSEDED      dbi-mssql-linux-1.0.0   Upgrade complete
3               Tue Mar 12 21:18:57 2019        DEPLOYED        dbi-mssql-linux-1.0.0   Rollback to 1

 

Rollback capabilities of helm charts (and implicitly of K8s) may be attractive, but for database applications they will likely not fit all upgrade scenarios. To be used sparingly… What's next? Taking a look at upgrade scenarios with availability groups on K8s for sure… see you in a next write-up!

 

 


SQL Tuning – Mix NULL / NOT NULL Values

Tue, 2019-03-12 18:41

One of the difficulties when writing a SQL query (static SQL) is to handle, in the same WHERE clause, conditions dealing with both NULL and NOT NULL values for a predicate.

Let me explain with an example:

Users can enter different values for a user field in an OBI report:
– If no value is entered, all rows must be returned.
– If one value is entered, only the row(s) matching the filter must be returned.
– If a list of values is entered, only the row(s) matching the filter must be returned.

The SQL we want to write must take all possible conditions into account (the 3 listed above).

Here is the first version of the SQL query written by the customer:

select * 
from my_table a
WHERE a.pt_name LIKE decode(:PT_PARAM, NULL, '%', '')
OR a.pt_name IN (:PT_PARAM);

:PT_PARAM is the user variable.

The problem with this query is that both conditions:
– a.pt_name LIKE decode(:PT_PARAM, NULL, '%', '')
– a.pt_name IN (:PT_PARAM)
are always evaluated, so unnecessary work will be done by the Oracle optimizer.

We can prove that by checking the execution plan.

If :PT_PARAM is equal to 'Value1':

EXPLAIN PLAN FOR
select * 
from my_table a  
WHERE a.pt_name LIKE decode('Value1', NULL, '%', '')
OR a.pt_name IN ('Value1');

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Plan hash value: 1606647163
 
--------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                  |     5 |  1140 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| MY_TABLE         |     5 |  1140 |     3   (0)| 00:00:01 |
|   2 |   BITMAP CONVERSION TO ROWIDS       |                  |       |       |            |          |
|   3 |    BITMAP OR                        |                  |       |       |            |          |
|*  4 |     BITMAP INDEX SINGLE VALUE       | BIX_DMED_TERM_01 |       |       |            |          |
|   5 |     BITMAP MERGE                    |                  |       |       |            |          |
|*  6 |      BITMAP INDEX RANGE SCAN        | BIX_DMED_TERM_01 |       |       |            |          |
--------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   4 - access("A"."PT_NAME"='Value1')
   6 - access("A"."PT_NAME" LIKE NULL)
       filter("A"."PT_NAME" LIKE NULL AND "A"."PT_NAME" LIKE NULL)

The Oracle optimizer does 2 accesses:
– 1 access for the NULL value
– 1 access for the value 'Value1'

The first access is not necessary since the user has selected a NOT NULL value ('Value1'). Indeed, if the user selects a NOT NULL value ('Value1'), we don't want Oracle to execute the condition for the NULL value.

To avoid this double access, the SQL statement needs to be rewritten like this:

select * 
from my_table a
where (:PT_PARAM is null AND a.pt_name like '%')
OR (:PT_PARAM IS NOT NULL AND a.pt_name in (:PT_PARAM));

We just add predicates ensuring that if the first condition is TRUE, the second condition is FALSE and vice versa:
if (:PT_PARAM is null AND a.pt_name like '%') is TRUE then (:PT_PARAM IS NOT NULL AND a.pt_name in (:PT_PARAM)) is FALSE
if (:PT_PARAM IS NOT NULL AND a.pt_name in (:PT_PARAM)) is TRUE then (:PT_PARAM is null AND a.pt_name like '%') is FALSE

Checking the execution plan of the new SQL statement:

EXPLAIN PLAN FOR
select * 
from my_table a
where ('Value1' is null AND a.pt_name like '%')
OR ( 'Value1' IS NOT NULL AND a.pt_name in ('Value1'));

Plan hash value: 2444798625
 
--------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                  |     5 |  1140 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| MY_TABLE         |     5 |  1140 |     2   (0)| 00:00:01 |
|   2 |   BITMAP CONVERSION TO ROWIDS       |                  |       |       |            |          |
|*  3 |    BITMAP INDEX SINGLE VALUE        | BIX_DMED_TERM_01 |       |       |            |          |
--------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - access("A"."PT_NAME"='Value1')

Now only one access is done, the one related to the value 'Value1' selected by the user.
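Note that EXPLAIN PLAN with literals does not always reflect what happens with a real bind variable at run time. To double-check, a minimal SQL*Plus sketch (assuming the same :PT_PARAM bind variable) can display the plan actually used by the last execution:

set serveroutput off
var PT_PARAM varchar2(100)
exec :PT_PARAM := 'Value1'

select *
from my_table a
where (:PT_PARAM is null AND a.pt_name like '%')
OR (:PT_PARAM IS NOT NULL AND a.pt_name in (:PT_PARAM));

-- plan of the last statement executed by this session
select * from table(dbms_xplan.display_cursor);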

Conclusion:

Tuning a SQL query can be done in different ways: modifying the physical design of a table (indexes, partitioning), influencing the optimizer (hints) to force an execution plan, or modifying Oracle optimizer database parameters.

But very often, SQL tuning can be done "simply" by rewriting the SQL query. Most of the time, a performance problem is due to a badly written SQL statement.

The first pieces of advice before writing a SQL query are:
– always understand the business needs in order to avoid misinterpretations.
– avoid unnecessary steps for the Oracle optimizer by checking the execution plan in detail, to control the path Oracle chooses to access the data.
– avoid writing complex SQL – SQL is a very simple language, don't forget it.


OpenText Enterprise World Europe 2019 – Partner Day

Tue, 2019-03-12 16:23

First day of #OTEW here at the Austria International Center in Vienna; Guillaume Fuchs and I were invited to attend the Partner Global sessions.

Welcome to OTEW Vienna 2019

Mark J. Barrenechea, OpenText's CEO & CTO, started the day with some general topics concerning global trends and achievements:

  • More and more partners and sponsors
  • Cloud integration direction
  • Strong security brought to customers
  • AI & machine learning as the new trend
  • A new customer wave made of Gen Z and millennials to consider
  • OpenText #1 in Content Services in 2018
  • Turned to the future with exabyte goals (high-volume transfers and storage)
  • Pushing to upgrade to version 16, the most complete content platform ever for security and integration
  • A real trend towards SaaS with the new OT2 solutions

OpenText Cloud and OT2 are the future


Today the big concern is the sprawl of data. OpenText is addressing this point by centralizing data and flows to create an information advantage. Using the Cloud and OT2 SaaS/PaaS will open the business to everything.

OT2 is EIM as a service: a hybrid cloud platform that brings security and scalability to customer solutions, which you can integrate with leading applications like O365, Microsoft Teams, Documentum and more; it provides SaaS as well. One place for your data and many connectors to it. More info on it to come, stay tuned.

Smart View is the default

Smart View is the new default OpenText UI for every component, such as D2 for Documentum, SAP integration, Extended ECM, SuccessFactors and so on.


Documentum and D2

New features:

  • Add documents to subfolders without opening the folder first
  • Multi-item download -> Zip and download
  • Download phases displayed in a progress bar
  • Pages editable inline in Smart View
  • Possibility to add widgets in Smart View
  • Improved workspace look in Smart View
  • Improved image/media display: Gallery View with sorting and filters by name
  • Threaded discussions with the Smart View look and feel
  • New visual representation of permission management
  • Mobile capabilities
  • Integrated in other leading applications (Teams, SAP, SharePoint and so on…)


OpenText Roadmap

OpenText trends are the following:

  • New UI for products: Smart View: All devices, well integrated to OT2
  • Content In Context
    • Embrace Office 365, with Documentum integration
    • Integration of documentum in SAP
  • Push to Cloud
    • More cloud based product: Docker, Kubernetes
    • Run applications anywhere with OpenText Cloud, Azure, AWS, Google
    • SaaS Applications & Services on OT2
  • Line Of Business
    • SAP applications
    • LoB solutions like SuccessFactors
    • Platform for industry solutions like Life Science, Engineering and Government
  • Intelligent Automation
    • Information extraction with machine learning (Capture)
    • Cloud capture apps for SAP, Salesforce, etc
    • Drive automation with Document Generation
    • Automatic sharing with OT Core
    • Leverage Magellan and AI
    • Personal Assistant / Bots
  • Governance:
    • Smart Compliance
    • GDPR and DPA ready
    • Archiving and Application decommissioning

Conclusion

After this first day at OTEW we can see that OpenText is really pushing its new UI, Smart View, as well as centralized services and storage with the OT2 and OpenText Cloud solutions. Content Services will become the cornerstone of all content storage, with pluggable interfaces and components provided by the OT2 platform.


SQL Server Temporal Table – How to store a history table in another file?

Tue, 2019-03-12 04:32

A few days ago, a customer asked me if it is possible to move the history table to slower but cheaper storage.
The question behind this is whether it is possible to create a history table on a separate filegroup and file.
A few years ago, I wrote a series of blog posts about temporal tables here.

I will take the same example to set up a filegroup dedicated to a history table.
In my sample, I create a 'Zoo' database with a table 'Animals' which inventories animals.

First, I create the filegroup HISTORY and add a file Zoo_History.ndf:

USE [master]
GO
ALTER DATABASE [Zoo] ADD FILEGROUP [HISTORY]
GO
ALTER DATABASE [Zoo] ADD FILE ( NAME = N'Zoo_History', FILENAME = N'D:\DATA\Zoo_History.ndf' , SIZE = 131072KB , FILEGROWTH = 131072KB )
 TO FILEGROUP [HISTORY]
GO

Before creating the table Animals, I create the history table [AnimalsHistory] on this filegroup [HISTORY], also using a separate schema [History] (which is a good practice):

USE [Zoo]
GO
CREATE SCHEMA [History] AUTHORIZATION [dbo]
GO
CREATE TABLE [History].[AnimalsHistory]
(
 [AnimalId] [int]  NOT NULL,
 [Name] [varchar](200) NOT NULL,
 [Genus Species] [varchar](200) NOT NULL,
 [Number]  [int] NOT NULL,
 [StartDate] [datetime2]  NOT NULL,
 [EndDate]  [datetime2] NOT NULL,

) ON [HISTORY]


At this point, the table is not yet a history table. It will become one after the creation of the main table:

CREATE TABLE [dbo].[Animals]
(
 [AnimalId] [int]  NOT NULL,
 [Name] [varchar](200) NOT NULL,
 [Genus Species] [varchar](200) NOT NULL,
 [Number]  [int] NOT NULL,
  CONSTRAINT [PK_Animals] PRIMARY KEY CLUSTERED ([AnimalId] ASC),
  /*Temporal: Define the Period*/
  [StartDate] [datetime2](7) GENERATED ALWAYS AS ROW START NOT NULL,
  [EndDate]  [datetime2](7) GENERATED ALWAYS AS ROW END NOT NULL,
 PERIOD FOR SYSTEM_TIME([StartDate],[EndDate])
) 
 WITH (SYSTEM_VERSIONING=ON (HISTORY_TABLE = [History].[AnimalsHistory]))


Now the history table is linked to my system-versioned table, with a separate file.
I run a few queries to generate data and, as you can see, every previous version of each updated row is inserted into the history table in the HISTORY filegroup.

INSERT INTO [Zoo].[dbo].[Animals]([AnimalId],[Name],[Genus Species],[Number])
     VALUES(1,'African wild cat','Felis silvestris lybica',10)
GO

UPDATE [Zoo].[dbo].[Animals] SET Number = 21 WHERE Name = 'African wild cat' AND  [Genus Species]= 'Felis silvestris lybica';
GO
UPDATE [Zoo].[dbo].[Animals] SET Number = 5 WHERE Name = 'African wild cat' AND  [Genus Species]= 'Felis silvestris lybica';
GO
UPDATE [Zoo].[dbo].[Animals] SET Number = 12 WHERE Name = 'African wild cat' AND  [Genus Species]= 'Felis silvestris lybica';
GO
UPDATE [Zoo].[dbo].[Animals] SET Number = 20 WHERE Name = 'African wild cat' AND  [Genus Species]= 'Felis silvestris lybica';
GO
...

As you can see below, all changes are in the filegroup HISTORY:

[Screenshot: the previous row versions are stored in the HISTORY filegroup]
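You can also verify this with a query against the catalog views instead of the table properties dialog. A minimal sketch, using only the objects created above (the heap or clustered index of a table tells you on which filegroup its rows are stored):

SELECT t.name AS table_name,
       t.temporal_type_desc,
       ds.name AS filegroup_name
FROM sys.tables t
JOIN sys.indexes i ON i.object_id = t.object_id AND i.index_id IN (0, 1) -- heap or clustered index
JOIN sys.data_spaces ds ON ds.data_space_id = i.data_space_id
WHERE t.name IN (N'Animals', N'AnimalsHistory');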

I recommend creating a separate filegroup for the history table when you use temporal tables.
It is easier to manage (table growth), it will not be placed in your "principal" data file and it can be placed on different storage if needed.
I hope this will help you to design your databases.


How to patch your ODA lite to 18.3.0.0.0

Fri, 2019-03-08 11:00

Even if you don't need 18c, or you're not ready for this release, this patch will also update your 11gR2, 12cR1 and 12cR2 databases to the latest PSU available on ODA, which means for example the patchset from last July if you're using 12cR1. Here is how to apply this latest patch on your ODA lite. In this example, the patch was applied on an X6-2S ODA running the previous release: 12.2.1.4.0.

1) Download the patch

The patch number is 28864490. As usual, this patch will update the following components: dcs (odacli), operating system, BIOS/firmware, storage (on lite models this means the data disk firmware), ILOM, GI, dbhomes and databases.
Download and copy the patch to a temporary folder on the server, for example /opt/patch. You'll have to be root to apply the patch.

2) Check the actual versions and free space on disk

First check the current version:

odacli describe-component

System Version
---------------
12.2.1.4.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.2.1.4.0            up-to-date
GI                                        12.2.0.1.180417       up-to-date
DB                                        12.1.0.2.180417       up-to-date
DCSAGENT                                  18.2.1.0.0            up-to-date
ILOM                                      3.2.9.23.r116695      up-to-date
BIOS                                      38070200              up-to-date
OS                                        6.9                   up-to-date
FIRMWARECONTROLLER {
[ c2 ]                                    4.650.00-7176         up-to-date
[ c0,c1 ]                                 KPYAGR3Q              up-to-date
}
FIRMWAREDISK                              0R3Q                  up-to-date

For the moment, the “available version” column doesn’t know that a newer patch has been released.

Check that the /, /u01 and /opt filesystems have enough free space to proceed (>=20GB). This 18c patch is quite big, so don't forget that you can extend the /u01 and /opt logical volumes online quite easily, for example if you need to increase /opt:

lvextend -L +30G /dev/VolGroupSys/LogVolOpt
resize2fs /dev/VolGroupSys/LogVolOpt

If you never changed the logical volume configuration, about 200GB are available on local disks on this kind of ODA.
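To check the available space before starting, the standard tools are enough (a minimal sketch based on the default ODA volume group and mount points mentioned above):

df -h / /u01 /opt      # free space on the filesystems involved in the patching
vgs VolGroupSys        # remaining free space in the system volume group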

3) Prepare the patch files

You need to unzip the 3 downloaded files and register the 3 unzipped files (they are zip files, too):

cd /opt/patch
unzip p28864490_183000_Linux-x86-64_1of3.zip
unzip p28864490_183000_Linux-x86-64_2of3.zip
unzip p28864490_183000_Linux-x86-64_3of3.zip
odacli update-repository -f /opt/patch/oda-sm-18.3.0.0.0-181205-server1of3.zip
odacli update-repository -f /opt/patch/oda-sm-18.3.0.0.0-181205-server2of3.zip
odacli update-repository -f /opt/patch/oda-sm-18.3.0.0.0-181205-server3of3.zip

Updating the repository, like other tasks done through odacli, will generate a job. Check that the 3 latest jobs are OK:

odacli list-jobs | head -n 3;  odacli list-jobs | tail -n 4
ID                                       Description               Created                             Status
---------------------------------------- ------------------------- ----------------------------------- ----------
add82ae5-3295-49ad-811d-c8c57ebb0cb1     Repository Update         March 7, 2019 1:11:13 PM CET        Success
a057e961-2584-467f-9fc3-d8951dcae213     Repository Update         March 7, 2019 1:11:49 PM CET        Success
0483dbf8-7562-424e-b7bb-3786558d62b1     Repository Update         March 7, 2019 1:15:31 PM CET        Success

4) Run the prepatch report

It’s strongly advised to run the prepatch report before patching:

odacli create-prepatchreport -s -v 18.3.0.0.0
odacli describe-prepatchreport -i 9112e726-62f1-4e85-9d9c-aec46e8e8210
Patch pre-check report
------------------------------------------------------------------------
                 Job ID:  9112e726-62f1-4e85-9d9c-aec46e8e8210
            Description:  Patch pre-checks for [OS, ILOM, GI]
                 Status:  SUCCESS
                Created:  March 7, 2019 1:41:23 PM CET
                 Result:  All pre-checks succeeded

Node Name
---------------
dbi02

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__OS__
Validate supported versions     Success   Validated minimum supported versions
Validate patching tag           Success   Validated patching tag: 18.3.0.0.0
Is patch location available     Success   Patch location is available
Verify OS patch                 Success   Verified OS patch

__ILOM__
Validate supported versions     Success   Validated minimum supported versions
Validate patching tag           Success   Validated patching tag: 18.3.0.0.0
Is patch location available     Success   Patch location is available
Checking Ilom patch Version     Success   Successfully verified the versions
Patch location validation       Success   Successfully validated location

__GI__
Validate supported GI versions  Success   Validated minimum supported versions
Validate available space        Success   Validated free space under /u01
Verify DB Home versions         Success   Verified DB Home versions
Validate patching locks         Success   Validated patching locks

Success on all the pre-checked elements does not guarantee that the patching will succeed: as for every patch, you'll need to manually remove extra rpms and unsupported configurations, like a public yum repository (all package updates have to be done through ODA patches only).
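A couple of quick checks can help to spot such leftovers before launching the patch (a minimal sketch; adapt it to whatever you may have added manually):

yum repolist enabled         # only the ODA-managed repositories should show up here
rpm -qa --last | head -20    # recently installed rpms, to spot manually added packages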

5) Update the dcs-agent

It seems it's no longer mandatory to update the dcs-agent before patching the server (it will probably be updated at the same time), but if you want to do it first, it still works:

/opt/oracle/dcs/bin/odacli update-dcsagent -v 18.3.0.0.0

odacli update-dcsagent -v 18.3.0.0.0
{
  "jobId" : "5cd58876-3db5-48d5-880e-2ce934545d2f",
  "status" : "Created",
  "message" : "Dcs agent will be restarted after the update. Please wait for 2-3 mins before executing the other commands",
  "reports" : [ ],
  "createTimestamp" : "March 07, 2019 13:35:12 PM CET",
  "resourceList" : [ ],
  "description" : "DcsAgent patching",
  "updatedTime" : "March 07, 2019 13:35:12 PM CET"
}
odacli list-jobs | head -n 3;  odacli list-jobs | tail -n 2

ID                                       Description                         Created                             Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
5cd58876-3db5-48d5-880e-2ce934545d2f     DcsAgent patching                   March 7, 2019 1:35:12 PM CET        Success

6) Update the server

Updating the server is the biggest and longest part of the patch, so make sure that everything is OK before launching it.

odacli update-server -v 18.3.0.0.0
odacli describe-job -i "3a568fb1-8517-405f-9a60-0a1ee285e1ff"

Job details
----------------------------------------------------------------
                     ID:  3a568fb1-8517-405f-9a60-0a1ee285e1ff
            Description:  Server Patching
                 Status:  Running
                Created:  March 7, 2019 4:48:32 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                March 7, 2019 4:48:32 PM CET        March 7, 2019 4:48:32 PM CET        Success
dcs-controller upgrade                   March 7, 2019 4:48:32 PM CET        March 7, 2019 4:48:32 PM CET        Success
Patch location validation                March 7, 2019 4:48:32 PM CET        March 7, 2019 4:48:32 PM CET        Success
dcs-cli upgrade                          March 7, 2019 4:48:33 PM CET        March 7, 2019 4:48:33 PM CET        Success
Creating repositories using yum          March 7, 2019 4:48:33 PM CET        March 7, 2019 4:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 4:48:38 PM CET        March 7, 2019 4:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 4:48:38 PM CET        March 7, 2019 4:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 4:48:38 PM CET        March 7, 2019 4:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 4:48:38 PM CET        March 7, 2019 4:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 4:48:38 PM CET        March 7, 2019 4:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 4:48:38 PM CET        March 7, 2019 4:48:38 PM CET        Success
Updating YumPluginVersionLock rpm        March 7, 2019 4:48:38 PM CET        March 7, 2019 4:48:39 PM CET        Success
Applying OS Patches                      March 7, 2019 4:48:39 PM CET        March 7, 2019 4:51:31 PM CET        Success
Creating repositories using yum          March 7, 2019 4:51:31 PM CET        March 7, 2019 4:51:32 PM CET        Success
Applying HMP Patches                     March 7, 2019 4:51:32 PM CET        March 7, 2019 4:51:32 PM CET        Success
Patch location validation                March 7, 2019 4:51:32 PM CET        March 7, 2019 4:51:32 PM CET        Success
oda-hw-mgmt upgrade                      March 7, 2019 4:51:33 PM CET        March 7, 2019 4:51:33 PM CET        Success
Applying Firmware Disk Patches           March 7, 2019 4:51:33 PM CET        March 7, 2019 4:51:51 PM CET        Success
Applying Firmware Expander Patches       March 7, 2019 4:51:51 PM CET        March 7, 2019 4:52:03 PM CET        Success
Applying Firmware Controller Patches     March 7, 2019 4:52:03 PM CET        March 7, 2019 4:52:16 PM CET        Success
Checking Ilom patch Version              March 7, 2019 4:52:17 PM CET        March 7, 2019 4:52:19 PM CET        Success
Patch location validation                March 7, 2019 4:52:19 PM CET        March 7, 2019 4:52:20 PM CET        Success
Save password in Wallet                  March 7, 2019 4:52:21 PM CET        March 7, 2019 4:52:21 PM CET        Success
Apply Ilom patch                         March 7, 2019 4:52:21 PM CET        March 7, 2019 4:52:22 PM CET        Success
Copying Flash Bios to Temp location      March 7, 2019 4:52:22 PM CET        March 7, 2019 4:52:22 PM CET        Success
Starting the clusterware                 March 7, 2019 4:52:22 PM CET        March 7, 2019 4:52:22 PM CET        Success
Creating GI home directories             March 7, 2019 4:52:22 PM CET        March 7, 2019 4:52:22 PM CET        Success
Cloning Gi home                          March 7, 2019 4:52:22 PM CET        March 7, 2019 4:54:55 PM CET        Success
Configuring GI                           March 7, 2019 4:54:55 PM CET        March 7, 2019 4:55:54 PM CET        Success
Running GI upgrade root scripts          March 7, 2019 4:55:54 PM CET        March 7, 2019 5:08:19 PM CET        Failure

Bad news: the GI upgrade failed in my case. Fortunately, it's possible to relaunch the patching and it will skip the already patched components. But another attempt failed 2 steps earlier (Cloning Gi home: obviously the GI home was already deployed on disk).

Actually, if you already patched your ODA with the previous release, there is a bug in that previous patch:

ODA GI Patching from 12.2 to 18.3 Failed, CLSRSC-697: Failed to get the value of environment variable ‘TZ’ from the environment file (Doc ID 2502972.1)

An XML file incorrectly describing the previous Grid Infrastructure home needs to be deleted before applying this newest patch:

cat '/u01/app/grid/crsdata/@global/crsconfig/ckptGridHA_global.xml' | grep 12
         <PROPERTY NAME="VERSION" TYPE="STRING" VAL="12.2.0.1.0"/>
         <PROPERTY NAME="OLD_CRS_HOME" TYPE="STRING" VAL="/u01/app/12.1.0.2/grid"/>
         <PROPERTY NAME="OLD_CRS_VERSION" TYPE="STRING" VAL="12.1.0.2.0"/>
         <PROPERTY NAME="MANUAL_BACKUP_FILE_NAME" TYPE="STRING" VAL="+DATA:/dbi02-c/OCRBACKUP/dbi02-c_backup12.1.0.2.0.ocr.261.987245221"/>
rm '/u01/app/grid/crsdata/@global/crsconfig/ckptGridHA_global.xml'

Before running the patch again, the cloned GI home needs to be deleted and its reference in the oraInventory needs to be removed:

vi /u01/app/oraInventory/ContentsXML/inventory.xml
Delete this line ==>  <HOME NAME="OraGrid180" LOC="/u01/app/18.0.0.0/grid" TYPE="O" IDX="7" CRS="true"/> 

rm -rf /u01/app/18.0.0.0

Now the patching will work:

odacli update-server -v 18.3.0.0.0
odacli describe-job -i "3a568fb1-8517-405f-9a60-0a1ee285e1ff"

Job details
----------------------------------------------------------------
                     ID:  3a568fb1-8517-405f-9a60-0a1ee285e1ff
            Description:  Server Patching
                 Status:  Success
                Created:  March 7, 2019 5:48:32 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                March 7, 2019 5:48:32 PM CET        March 7, 2019 5:48:32 PM CET        Success
dcs-controller upgrade                   March 7, 2019 5:48:32 PM CET        March 7, 2019 5:48:32 PM CET        Success
Patch location validation                March 7, 2019 5:48:32 PM CET        March 7, 2019 5:48:32 PM CET        Success
dcs-cli upgrade                          March 7, 2019 5:48:33 PM CET        March 7, 2019 5:48:33 PM CET        Success
Creating repositories using yum          March 7, 2019 5:48:33 PM CET        March 7, 2019 5:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 5:48:38 PM CET        March 7, 2019 5:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 5:48:38 PM CET        March 7, 2019 5:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 5:48:38 PM CET        March 7, 2019 5:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 5:48:38 PM CET        March 7, 2019 5:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 5:48:38 PM CET        March 7, 2019 5:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 5:48:38 PM CET        March 7, 2019 5:48:38 PM CET        Success
Updating YumPluginVersionLock rpm        March 7, 2019 5:48:38 PM CET        March 7, 2019 5:48:39 PM CET        Success
Applying OS Patches                      March 7, 2019 5:48:39 PM CET        March 7, 2019 5:51:31 PM CET        Success
Creating repositories using yum          March 7, 2019 5:51:31 PM CET        March 7, 2019 5:51:32 PM CET        Success
Applying HMP Patches                     March 7, 2019 5:51:32 PM CET        March 7, 2019 5:51:32 PM CET        Success
Patch location validation                March 7, 2019 5:51:32 PM CET        March 7, 2019 5:51:32 PM CET        Success
oda-hw-mgmt upgrade                      March 7, 2019 5:51:33 PM CET        March 7, 2019 5:51:33 PM CET        Success
Applying Firmware Disk Patches           March 7, 2019 5:51:33 PM CET        March 7, 2019 5:51:51 PM CET        Success
Applying Firmware Expander Patches       March 7, 2019 5:51:51 PM CET        March 7, 2019 5:52:03 PM CET        Success
Applying Firmware Controller Patches     March 7, 2019 5:52:03 PM CET        March 7, 2019 5:52:16 PM CET        Success
Checking Ilom patch Version              March 7, 2019 5:52:17 PM CET        March 7, 2019 5:52:19 PM CET        Success
Patch location validation                March 7, 2019 5:52:19 PM CET        March 7, 2019 5:52:20 PM CET        Success
Save password in Wallet                  March 7, 2019 5:52:21 PM CET        March 7, 2019 5:52:21 PM CET        Success
Apply Ilom patch                         March 7, 2019 5:52:21 PM CET        March 7, 2019 5:52:22 PM CET        Success
Copying Flash Bios to Temp location      March 7, 2019 5:52:22 PM CET        March 7, 2019 5:52:22 PM CET        Success
Starting the clusterware                 March 7, 2019 5:52:22 PM CET        March 7, 2019 5:52:22 PM CET        Success
Creating GI home directories             March 7, 2019 5:52:22 PM CET        March 7, 2019 5:52:22 PM CET        Success
Cloning Gi home                          March 7, 2019 5:52:22 PM CET        March 7, 2019 5:54:55 PM CET        Success
Configuring GI                           March 7, 2019 5:54:55 PM CET        March 7, 2019 5:55:54 PM CET        Success
Running GI upgrade root scripts          March 7, 2019 5:55:54 PM CET        March 7, 2019 6:08:19 PM CET        Success
Resetting DG compatibility               March 7, 2019 6:08:19 PM CET        March 7, 2019 6:08:29 PM CET        Success
Running GI config assistants             March 7, 2019 6:08:29 PM CET        March 7, 2019 6:11:50 PM CET        Success
restart oakd                             March 7, 2019 6:11:53 PM CET        March 7, 2019 6:12:03 PM CET        Success
Updating GiHome version                  March 7, 2019 6:12:03 PM CET        March 7, 2019 6:12:09 PM CET        Success
Setting AUDIT SYSLOG LEVEL               March 7, 2019 6:12:30 PM CET        March 7, 2019 6:14:55 PM CET        Success
Update System version                    March 7, 2019 6:14:55 PM CET        March 7, 2019 6:14:55 PM CET        Success
preRebootNode Actions                    March 7, 2019 6:14:55 PM CET        March 7, 2019 6:15:37 PM CET        Success
Reboot Ilom                              March 7, 2019 6:15:37 PM CET        March 7, 2019 6:15:37 PM CET        Success

Once this part of the patch is successfully applied, let the server reboot automatically after about 5 minutes and then check the components again:

odacli describe-component

System Version
---------------
18.3.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       18.3.0.0.0            up-to-date
GI                                        18.3.0.0.180717       up-to-date
DB                                        12.1.0.2.180417       12.1.0.2.180717
DCSAGENT                                  18.3.0.0.0            up-to-date
ILOM                                      4.0.4.22.r126940      up-to-date
BIOS                                      38110100              up-to-date
OS                                        6.10                  up-to-date
FIRMWARECONTROLLER {
[ c2 ]                                    4.650.00-7176         up-to-date
[ c0,c1 ]                                 KPYAGR3Q              kpyair3q
}
FIRMWAREDISK                              0R3Q                  up-to-date

7) Patch the storage

You now need to patch the storage, meaning the firmware of the data disks:

odacli update-storage -v 18.3.0.0.0
{
  "jobId" : "38a779cf-ac6c-4514-a838-f5cd1bec637c",
  "status" : "Created",
  "message" : "Success of Storage Update may trigger reboot of node after 4-5 minutes. Please wait till node restart",
  "reports" : [ ],
  "createTimestamp" : "March 07, 2019 17:28:34 PM CET",
  "resourceList" : [ ],
  "description" : "Storage Firmware Patching",
  "updatedTime" : "March 07, 2019 17:28:35 PM CET"
}

odacli describe-job -i "38a779cf-ac6c-4514-a838-f5cd1bec637c"
Job details
----------------------------------------------------------------
                     ID:  38a779cf-ac6c-4514-a838-f5cd1bec637c
            Description:  Storage Firmware Patching
                 Status:  Success
                Created:  March 7, 2019 5:28:34 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Applying Firmware Disk Patches           March 7, 2019 5:28:35 PM CET        March 7, 2019 5:28:49 PM CET        Success
Applying Firmware Controller Patches     March 7, 2019 5:28:49 PM CET        March 7, 2019 5:33:02 PM CET        Success
preRebootNode Actions                    March 7, 2019 5:33:02 PM CET        March 7, 2019 5:33:02 PM CET        Success
Reboot Ilom                              March 7, 2019 5:33:02 PM CET        March 7, 2019 5:33:02 PM CET        Success

odacli describe-component
System Version
---------------
18.3.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       18.3.0.0.0            up-to-date
GI                                        18.3.0.0.180717       up-to-date
DB                                        12.1.0.2.180417       12.1.0.2.180717
DCSAGENT                                  18.3.0.0.0            up-to-date
ILOM                                      4.0.4.22.r126940      up-to-date
BIOS                                      38110100              up-to-date
OS                                        6.10                  up-to-date
FIRMWARECONTROLLER {
[ c2 ]                                    4.650.00-7176         up-to-date
[ c0,c1 ]                                 KPYAIR3Q              up-to-date
}
FIRMWAREDISK                              0R3Q                  up-to-date

The server will also reboot after this update.

8) Patch the dbhomes

You now need to patch the dbhomes separately. First of all, list them:

odacli list-dbhomes

ID                                       Name                 DB Version        Home Location                                 Status
---------------------------------------- -------------------- ----------------- ------------------------------------------- ----------
2c28acde-f041-4283-b984-fe6b73dd724d     OraDB12102_home2     12.1.0.2.180417   /u01/app/oracle/product/12.1.0.2/dbhome_2     Configured
ebac9543-337b-4edd-8e00-d593abd52ca6     OraDB12102_home10    12.1.0.2.180417   /u01/app/oracle/product/12.1.0.2/dbhome_10    Configured
0ab15f33-e1b7-4193-a110-fa4aee01cc21     OraDB12102_home21    12.1.0.2.180417   /u01/app/oracle/product/12.1.0.2/dbhome_21    Configured

Patch the dbhomes one by one. The target version is actually the version of the patch. Remember that updating the ODA through a patch will never upgrade your databases to a newer release: only the PSU level will change (the fifth number, which is actually a date).

odacli update-dbhome -i 2c28acde-f041-4283-b984-fe6b73dd724d -v 18.3.0.0.0
{
  "jobId" : "bc3e5564-7283-4497-ac70-9e41c834183d",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "March 07, 2019 17:45:46 PM CET",
  "resourceList" : [ ],
  "description" : "DB Home Patching: Home Id is 2c28acde-f041-4283-b984-fe6b73dd724d",
  "updatedTime" : "March 07, 2019 17:45:46 PM CET"
}
odacli describe-job -i "bc3e5564-7283-4497-ac70-9e41c834183d"

Job details
----------------------------------------------------------------
                     ID:  bc3e5564-7283-4497-ac70-9e41c834183d
            Description:  DB Home Patching: Home Id is 2c28acde-f041-4283-b984-fe6b73dd724d
                 Status:  Success
                Created:  March 7, 2019 5:45:46 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
clusterware patch verification           March 7, 2019 5:45:47 PM CET        March 7, 2019 5:45:52 PM CET        Success
Patch location validation                March 7, 2019 5:45:52 PM CET        March 7, 2019 5:46:03 PM CET        Success
Opatch updation                          March 7, 2019 5:48:05 PM CET        March 7, 2019 5:48:09 PM CET        Success
Patch conflict check                     March 7, 2019 5:48:09 PM CET        March 7, 2019 5:49:45 PM CET        Success
db upgrade                               March 7, 2019 5:49:45 PM CET        March 7, 2019 5:55:04 PM CET        Success
SqlPatch upgrade                         March 7, 2019 5:55:04 PM CET        March 7, 2019 5:55:48 PM CET        Success
Update System version                    March 7, 2019 5:55:48 PM CET        March 7, 2019 5:55:48 PM CET        Success
updating the Database version            March 7, 2019 5:55:51 PM CET        March 7, 2019 5:55:54 PM CET        Success

...

odacli list-dbhomes

ID                                       Name                 DB Version        Home Location                                 Status
---------------------------------------- -------------------- ----------------- ------------------------------------------ ----------
2c28acde-f041-4283-b984-fe6b73dd724d     OraDB12102_home2     12.1.0.2.180717   /u01/app/oracle/product/12.1.0.2/dbhome_2     Configured
ebac9543-337b-4edd-8e00-d593abd52ca6     OraDB12102_home10    12.1.0.2.180717   /u01/app/oracle/product/12.1.0.2/dbhome_10    Configured
0ab15f33-e1b7-4193-a110-fa4aee01cc21     OraDB12102_home21    12.1.0.2.180717   /u01/app/oracle/product/12.1.0.2/dbhome_21    Configured

9) Optional: deploy the latest db clone files

If you’ll never deploy a new dbhome, this is not necessary. If you will, or if you simply don’t know whether you’ll have to do that later, download and register the new db clone files in the repository to be able to create a new dbhome at the same patch level as the one already deployed. With this release, you can choose dbhomes from various versions: 11gR2, 12cR1, 12cR2 and 18c.
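
As a sketch, registering the clone files works the same way as registering the other patch files with odacli update-repository (the clone file name below is only an example and depends on what you download from My Oracle Support):

odacli update-repository -f /tmp/odacli-dcs-18.3.0.0.0-XXXXXX-DB-12.1.0.2.zip
odacli describe-job -i <job_id_returned_by_the_previous_command>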

10) Control the final version of the components

The patching is now done. Roughly 2.5 hours were needed, provided everything ran fine.

odacli describe-component

System Version
---------------
18.3.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       18.3.0.0.0            up-to-date
GI                                        18.3.0.0.180717       up-to-date
DB                                        12.1.0.2.180717       up-to-date
DCSAGENT                                  18.3.0.0.0            up-to-date
ILOM                                      4.0.4.22.r126940      up-to-date
BIOS                                      38110100              up-to-date
OS                                        6.10                  up-to-date
FIRMWARECONTROLLER {
[ c2 ]                                    4.650.00-7176         up-to-date
[ c0,c1 ]                                 KPYAIR3Q              up-to-date
}
FIRMWAREDISK                              0R3Q                  up-to-date

11) Last steps

Don’t forget to remove the patch files and the unzipped files, and check the remaining space on your local disks. Keep your ODA clean and up to date.
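
A minimal cleanup sketch (the file and directory names below are assumptions, adjust them to wherever you staged the patch):

rm -f /tmp/oda-sm-18.3.0.0.0-*.zip
rm -rf /tmp/oda-patch-unzipped
df -h / /u01 /opt /tmp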

Cet article How to patch your ODA lite to 18.3.0.0.0 est apparu en premier sur Blog dbi services.

SQL Konferenz 2019 – Day 1

Wed, 2019-03-06 11:15

For my first “dbi visit”, I went to Darmstadt, in Germany, to attend SQL Konferenz 2019.

There were more than 50 sessions over two days (20 & 21.02.2019) in several areas as SQL Administration, BI, DevOps, Azure & BigData.

I decided to write a short glimpse of the most appreciated sessions.

Keynote

To start the event, the well-known faces of the German SQL Server community (PASS.de) were on stage to introduce the community and its upcoming meetings this year. This year also marked the 15th anniversary of the conference.

The next dates

Afterward, the organization team performed a small skit based on “Back to the SQL Future”, about the evolution of SQL Server over the years:

Dr Emmett Brown & Marty McFly are back

Next, there were a lot of talks about the Azure cloud and Power BI, including a demo of Azure Data Factory.

And to finish the keynote, Kusto Explorer was presented through a funny demo. Kusto Explorer is a tool that lets you explore your data with a lightweight SQL-style query syntax and quickly and easily build results with graphs, with a focus on Big Data.

See details on Microsoft website

Explore your data

Azure Managed Instance

This session was presented by Björn Peters, who listed the main traditional recurring tasks of a DBA: backups, ETL, monitoring, patching, reorgs/rebuilds, DBCC CHECKDB. These tasks are greatly simplified in Azure today, and he showed us the features available in Azure Managed Instance:

  •  Data migration: Native backup/restore
  • Security: Azure AD, TDE, SQL Audit, Dynamic Data Masking
  • Programmability: Global temp tables, Linked servers, CLR module
  • Operational: DMVs & XEvents, Query Store, DB Mail
  • Scenario enablers: Service Broker, Transactional Replication
Azure Data Studio – The new Kid in Town

Presented by Frank Geisler, a member of the PASS.de community

This is a new tool to manage our databases; it is not intended to replace SSMS, but is rather oriented towards Dev and DevOps.

  • Multi-platform: Windows, Mac & Linux
  • Based on a Visual Studio Code fork
  • A modern architecture
  • For developer and DBA
  • Extensions
    • SQL Server Agent
    • SQL Server Import
    • Server Reports
    • WhoIsActive
ADS will kill Microsoft SSMS?

I invite you to test Azure Data Studio => Download here

Frank’s podcasts => https://www.pleasetalkdatatome.de/

SQL Agent in the cloud

This session was presented by Sam Cogan. As you know, the SQL Agent doesn’t exist in Azure SQL, so we have 3 alternatives:

  • Azure SQL Elastics Jobs
  • Azure Automation
  • Azure Functions
You have the choice

Trends in databases

It was a presentation about database popularity statistics, based on the following website => https://db-engines.com/en/ranking

SQL Server VS Oracle?

After: 15th anniversary

An evening was organized with a meal, entertainment and games with prizes to win.

We played “Bingo Monsters”

Finally, it was a good opportunity to meet some people who already know dbi services, but from the DOAG context ;)

Cet article SQL Konferenz 2019 – Day 1 est apparu en premier sur Blog dbi services.

How to extract sql statements with awk

Wed, 2019-03-06 09:04

A common problem is to extract SQL statements out of a SQL file, e.g. one produced by a full metadata impdp. Often these files are too big to be edited with vi.

The following command extracts all SQL statements beginning with “CREATE MATERIALIZED VIEW” using awk:


cat impdp_full_metadata.sql | awk 'BEGIN{ FLAG = 0 } { if( $0 ~ /CREATE MATERIALIZED VIEW/ ) FLAG=1; if(FLAG != 0) print $0; if ($0 ~ /;/ ) FLAG=0; }'

Explanation: at the beginning, the awk variable FLAG is set to 0, and it is reset to 0 whenever a semicolon appears in the line. When the pattern is found, FLAG is set to 1, and as long as FLAG is not 0 the current line is printed. So the line containing the pattern and all subsequent lines are printed, up to and including the next line containing a semicolon. This also works for statements spanning more than one line.
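
The same idea can be reused for other statement types. A minimal sketch, assuming the same impdp file, that passes the pattern to awk with -v and writes the result to a new file:

PATTERN="CREATE INDEX"
awk -v pat="$PATTERN" 'BEGIN{ FLAG = 0 } { if( $0 ~ pat ) FLAG=1; if(FLAG != 0) print $0; if ($0 ~ /;/ ) FLAG=0; }' impdp_full_metadata.sql > create_index_statements.sql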

Cet article How to extract sql statements with awk est apparu en premier sur Blog dbi services.

How To Create A Recovery Services Vault In Azure

Tue, 2019-03-05 02:54
Creating A Recovery Services Vault In Azure

To create a recovery services vault, we first need to create a resource group by using either the CLI or the Azure interface.

Creating a resource group using the command line interface

$resourceGroup = "recovvault"
$location = "westeurope"
az group create -n $resourceGroup -l $location

The resource group recovvault has been created successfully.

Now we need to set up the Recovery Services vault.

From the Azure portal, select the resource group you created, then click on Add, search for Backup and Site Recovery (OMS), click on it and hit Create.
Specify the following information as described in the image below:

Recovery Services Backup

Hit create.

The recovery services vault has been created successfully.
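
For reference, the vault could presumably also be created from the CLI instead of the portal. A minimal sketch reusing the variables above (the vault name is an assumption):

$vaultName = "recovservicesvault"
az backup vault create -n $vaultName -g $resourceGroup -l $location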

Set the vault storage redundancy to Locally-redundant
  • Select the recovery vault created
  • From the options column, select Properties, then Backup configuration
  • Choose Locally-redundant (more cost-effective than Geo-redundant); see the CLI sketch after this list
  • Hit save
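
A hedged CLI equivalent for the storage redundancy setting, reusing the variables from the earlier sketch:

az backup vault backup-properties set -n $vaultName -g $resourceGroup --backup-storage-redundancy "LocallyRedundant"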
Configuring the Azure Backup Agent.

We need to create a virtual machine in order to simulate an on-premises server, so click on Backup from the left menu.
For the purpose of this example, we choose On-premises as the workload location and Files and folders as what to back up.
Hit Prepare Infrastructure.

Then download the agent for windows server and the vault credentials by clicking on the link provided (see below):

Recovery_Services_Agent

Run the Azure Recovery Services Agent installer, leave the Installation Settings and Proxy Configuration sections at their defaults, then click on Install.

When the installation is done, click on Proceed to Registration.

Let’s associate the vault credential with the agent.

Browse to select the vault credentials file in order to associate the server with the backup vault in Azure.
Enter and confirm the passphrase, choose a location to save it, then hit Finish.

The server is now registered with the Azure backup and the agent properly installed.

Schedule backup for Files and Folders.
  • Click on close to launch Microsoft Azure Recovery Services Agent.
  • Click on Schedule Backup from the right column
  • Select items to backup (my docs folder)
  • Specify backup schedule (outside normal working hours)
  • Leave the retention policy at its default
  • In our example, we choose Over the network because of our small data volume.
  • Hit Finish.

If you have a small amount of data, choose the option Over the network; for larger amounts of data, choose Offline.

The backup schedule is now successful.

Executing the backup.

Click on Back up Now from the right column
Hit back up and close (see below):

Backup_R

Recovering Data
  • Click on Recover data from the right column of the window backup
  • Choose This server
  • Choose Volume
  • Select the volume and date
  • Choose another recovery destination
  • Hit recover

The recovery is now successful, the folder has been properly recovered.

Information related to the cost of resources used during the implementation of the recovery vault:

Use the below PowerShell command to get an overview of the cost of the resources used:

Get-AzConsumptionUsageDetail -BillingPeriodName 201903 -ResourceGroup recovvault

If you are in a test environment, make sure to delete the resource group once it is no longer used (command line below) to avoid paying extra costs.

  • az group delete -n $resourceGroup -y

If you need further details about managing and monitoring a Recovery Services vault, click here.
If you need further details about deleting a Recovery Services vault, click here.

Cet article How To Create A Recovery Services Vault In Azure est apparu en premier sur Blog dbi services.

PostgreSQL 12: New option --exclude-database for pg_dumpall

Sat, 2019-03-02 16:12

Up to PostgreSQL 11, pg_dumpall is used to dump all PostgreSQL databases of a cluster into one script file. The script file contains SQL commands that can be used as input to psql to restore the databases. It does this by calling pg_dump for each database in the cluster.
PostgreSQL 12 is still under development, but we can already test the new features which will be implemented. With PostgreSQL 12 there is a new option that allows to exclude specific databases with pg_dumpall. This option is called --exclude-database.
It can be useful where a database name is visible but the database is not dumpable by the user.
We can see this new option in the help output:

[postgres@dbi-pg-essentials log]$ /u01/app/postgres/product/12dev/bin/pg_dumpall --help | grep exclude
--exclude-database=PATTERN   exclude databases whose name matches PATTERN
[postgres@dbi-pg-essentials log]$

In this blog I am going to test this new option.

Let’s first list the databases in my cluster

postgres=# \l
                                     List of databases
      Name       |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------------+----------+----------+-------------+-------------+-----------------------
 db1             | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 db2             | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 mydb            | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres        | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres_sample | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0       | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                 |          |          |             |             | postgres=CTc/postgres
 template1       | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                 |          |          |             |             | postgres=CTc/postgres
(7 rows)

postgres=#

First let’s dump all user databases

[postgres@dbi-pg-essentials ~]$  /u01/app/postgres/product/12dev/bin/pg_dumpall   > with_alldb.sql   

If we do a grep in the output file, we can see that the dump contains all databases

[postgres@dbi-pg-essentials ~]$ grep "CREATE DATABASE" with_alldb.sql
CREATE DATABASE db1 WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
CREATE DATABASE db2 WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
CREATE DATABASE mydb WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
CREATE DATABASE postgres_sample WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
[postgres@dbi-pg-essentials ~]$

Now let’s dump all the user databases except db1 and db2.

[postgres@dbi-pg-essentials ~]$ /u01/app/postgres/product/12dev/bin/pg_dumpall  --exclude-database=db2 --exclude-database=db1 > without_db1_db2.sql

We can verify that the dump file now only contains instructions to create the databases mydb and postgres_sample. That’s the expected result.

[postgres@dbi-pg-essentials ~]$ grep "CREATE DATABASE" without_db1_db2.sql
CREATE DATABASE mydb WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
CREATE DATABASE postgres_sample WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
[postgres@dbi-pg-essentials ~]$

What is interesting is that we can use wildcard characters. The result above can be obtained as follows:

[postgres@dbi-pg-essentials ~]$  /u01/app/postgres/product/12dev/bin/pg_dumpall  --exclude-database=db* > without_db_star.sql
[postgres@dbi-pg-essentials ~]$ grep "CREATE DATABASE" without_db_star.sql
CREATE DATABASE mydb WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
CREATE DATABASE postgres_sample WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
[postgres@dbi-pg-essentials ~]$

Let’s say we want to dump all user databases except those whose names contain ‘db’

[postgres@dbi-pg-essentials ~]$  /u01/app/postgres/product/12dev/bin/pg_dumpall  --exclude-database=*db* > without_star_db_star.sql
[postgres@dbi-pg-essentials ~]$ grep "CREATE DATABASE" without_star_db_star.sql                    
 CREATE DATABASE postgres_sample WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
[postgres@dbi-pg-essentials ~]$
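
Since the dump produced by pg_dumpall is plain SQL, restoring it is simply a matter of feeding it to psql. A minimal sketch, assuming the target cluster runs on the same host:

[postgres@dbi-pg-essentials ~]$ /u01/app/postgres/product/12dev/bin/psql -U postgres -f without_db1_db2.sql postgres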
Conclusion:

In this blog we looked at the new --exclude-database option of pg_dumpall in PostgreSQL 12.

Cet article PostgreSQL 12: New option –exclude-database for pg_dumpall est apparu en premier sur Blog dbi services.

How to disable ACLs in Oracle 12c

Wed, 2019-02-27 06:47

A common requirement in Oracle 12c is that ACLs should not limit connections from the database to other hosts. ACLs cannot simply be switched off, so the easiest approach is to allow connections and name resolution to all hosts for all users, as in the following example:


BEGIN
  DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
    host => '*',
    ace  => xs$ace_type(privilege_list => xs$name_list('resolve'),
                        principal_name => 'PUBLIC',
                        principal_type => xs_acl.ptype_db));
  DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
    host => '*',
    ace  => xs$ace_type(privilege_list => xs$name_list('connect'),
                        principal_name => 'PUBLIC',
                        principal_type => xs_acl.ptype_db));
END;
/
COMMIT;

The “*” value in the host parameter allows connections to all hosts, and the principal_name “PUBLIC” allows all users of the database to initiate network connections.
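
To verify the result, the granted host ACEs can be queried afterwards. A quick sketch using the DBA_HOST_ACES view:

SELECT host, principal, privilege
  FROM dba_host_aces
 ORDER BY host, principal, privilege;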

Cet article How to disable ACLs in Oracle 12c est apparu en premier sur Blog dbi services.

How workload policies is making your life easy

Tue, 2019-02-26 17:22

Working as a Control-M administrator, we are often asked to hold jobs or stop jobs during a period of time. Holding jobs and updating the job configuration might not be the most efficient way to do this. Workload Policies make your life really easier: they give you the possibility to configure such a modification in advance and let the system act on its own, whether or not the NDP (New Day Processing) falls in the middle of the period range.

Job configuration and execution

In my lab, I have a cyclic job called OS_Job#test4. Every 2 minutes, counted from its end, this job executes the following command: hostname && cd $HOME && ls && cat installed-versions.txt && sleep 30.

The output of the job would be the following.
Output of the job

The job is executing every 2 minutes.
Cyclic execution

The job is waiting for the next execution time window.
Waiting next time execution window

Workload policies Manager

Let’s configure a workload policy. In Monitoring, under Tools, click on the Workload Policy Monitor button.
Workload policy button

Open Workload Policies Manager in order to create a policy.
Workload Policy Manager

Workload policy-2

Let’s configure the policy not to have that specific job running from 23:15 to 23:30.
workload configuration

It is recommended to specify the application, sub-application and folder in order to limit the impact in case two jobs have the same name.
Using the * character as the job name would include all jobs from the specified folder.

Set the number of running jobs to 0 so that no job is expected to run.

When setting the time period, you need to take into account when the NDP happens. For example, if the NDP happens every day at 12pm, you would set up:

  • Time from 08.02.2019 9am to 10.02.2019 9am to stop job as calendar date/time from 09.02.2019 9am until 11.02.2019 9am
  • Time from 08.02.2019 9am to 09.02.2019 5pm to stop job as calendar date/time from 09.02.2019 9am until 09.02.2019 5pm

The workload policy needs to be activated.
Activate workload policy

Monitor Workload Policy

The Workload Policies monitor dashboard will show one job in wait status. Using a viewpoint, we can monitor the jobs concerned by the newly created policy.
Monitoring
monitoring viewpoints

For the run between 23:15 and 23:30, the job will not be executed because it belongs to an active workload policy.
waiting information

The logs show the same.
Logs

Be aware that:

  • The workload policy will only act on a job once its current execution is over.
  • You don’t have to take care of the NDP. If the NDP falls within the period range, the current job will be removed and a new one will be ordered, which will only start once the workload policy time range is over.

This workload policy feature is quite interesting and helpful. Before implementing it and using it for the first time in production, I still recommend testing it in a lab and getting used to it.

Cet article How workload policies is making your life easy est apparu en premier sur Blog dbi services.

Documentum – Documents not transferred to WebConsumer

Fri, 2019-02-22 10:33

Receiving an incident is never a pleasure, but sharing the solution always is!
A few days ago, I received an incident regarding WebConsumer on a production environment, saying that documents were not transferred to WebConsumer as expected.

The issue didn’t happen for all documents, which is why I immediately suspected the High Availability configuration of this environment. Moreover, I know that the IDS is installed only on CS1 (as designed). So I checked the JMS logs on:
CS1 : No errors found there.

CS2 : Errors found :

2019-02-11 04:05:39,097 UTC INFO  [stdout] (default task-60) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'
2019-02-11 04:05:39,141 UTC INFO  [stdout] (default task-60) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod before session apply 'WCPublishDocumentMethod' time: 0.044s
2019-02-11 04:05:39,773 UTC INFO  [stdout] (default task-89) 2019-02-11 04:05:39,773 UTC ERROR [com.domain.repository1.dctm.methods.WCPublishDoc] (default task-89) DfException:: THREAD: default task-89; 
MSG: [DM_METHOD_E_JMS_APP_SERVER_NAME_NOTFOUND]error:  "The app_server_name/servlet_name 'WebCache' is not specified in dm_server_config/dm_jms_config."; ERRORCODE: 100; NEXT: null

To cross check:

On CS1:

[dmadmin@CONTENT_SERVER1 ~]$ cd $DOCUMENTUM/shared/wildfly9.0.1/server/DctmServer_MethodServer/log
[dmadmin@CONTENT_SERVER1 log]$ grep DM_METHOD_E_JMS_APP_SERVER_NAME_NOTFOUND server.log | wc -l
0

On CS2:

[dmadmin@CONTENT_SERVER2 ~]$ cd $DOCUMENTUM/shared/wildfly9.0.1/server/DctmServer_MethodServer/log
[dmadmin@CONTENT_SERVER2 log]$ grep DM_METHOD_E_JMS_APP_SERVER_NAME_NOTFOUND server.log | wc -l
60

So I checked the list of app servers configured in the dm_server_config object:

On CS1:

API> retrieve,c,dm_server_config
...
3d01e24080000102
API> dump,c,3d01e24080000102
...
USER ATTRIBUTES

  object_name                     : repository1
...
  app_server_name              [0]: do_method
                               [1]: do_mail
                               [2]: FULLTEXT_SERVER1_PORT_IndexAgent
                               [3]: WebCache
                               [4]: FULLTEXT_SERVER2_PORT_IndexAgent
  app_server_uri               [0]: https://CONTENT_SERVER1:9082/DmMethods/servlet/DoMethod
                               [1]: https://CONTENT_SERVER1:9082/DmMail/servlet/DoMail
                               [2]: https://FULLTEXT_SERVER1:PORT/IndexAgent/servlet/IndexAgent
                               [3]: https://CONTENT_SERVER1:6679/services/scs/publish
                               [4]: https://FULLTEXT_SERVER2:PORT/IndexAgent/servlet/IndexAgent
...

Good, WebCache is configured here.

On CS2:

API> retrieve,c,dm_server_config
...
3d01e24080000255
API> dump,c,3d01e24080000255
...
USER ATTRIBUTES

  object_name                     : repository1
...
  app_server_name              [0]: do_method
                               [1]: do_mail
                               [2]: FULLTEXT_SERVER1_PORT_IndexAgent
                               [3]: FULLTEXT_SERVER2_PORT_IndexAgent
  app_server_uri               [0]: https://CONTENT_SERVER1:9082/DmMethods/servlet/DoMethod
                               [1]: https://CONTENT_SERVER1:9082/DmMail/servlet/DoMail
                               [2]: https://FULLTEXT_SERVER1:PORT/IndexAgent/servlet/IndexAgent
                               [3]: https://FULLTEXT_SERVER2:PORT/IndexAgent/servlet/IndexAgent
...

Ok! The root cause of this error is clear now.

The method concerned is WCPublishDocumentMethod, but when is it applied, and by whom?

I noticed that in the log above:

D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'

So, WCPublishDocumentMethod is applied by the D2LifecycleConfig, which in turn is applied when, and by whom?
I searched in the server.log file and found:

2019-02-11 04:05:04,490 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : User  : repository1
2019-02-11 04:05:04,490 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : New session manager creation.
2019-02-11 04:05:04,491 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Session manager set identity.
2019-02-11 04:05:04,491 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Session manager get session.
2019-02-11 04:05:06,006 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Workitem ID: 4a01e2408002bd3d
2019-02-11 04:05:06,023 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching workflow tracker...
2019-02-11 04:05:06,031 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching workflow config...
2019-02-11 04:05:06,032 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Get packaged documents...
2019-02-11 04:05:06,067 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Apply on masters...
2019-02-11 04:05:06,068 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Workitem acquire...
2019-02-11 04:05:06,098 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Applying lifecycle (Target state : On Approved / Transition :promote
2019-02-11 04:05:06,098 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : No workflow properties
2019-02-11 04:05:06,098 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching target state name and/or transition type.
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Target state name :On Approved
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Target transition type :promote
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Performing D2 lifecycle on :FRM-8003970 (0901e240800311cd)
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching associated D2 lifecycle...
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::getInstancesForObject start time 0.000s
...
2019-02-11 04:05:39,097 UTC INFO  [stdout] (default task-60) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'
...

Hummmm, the D2WFLifeCycleMethod is applied by the job D2JobLifecycleBatch. I checked the target server of this job:

1> SELECT target_server FROM dm_job WHERE object_name='D2JobLifecycleBatch';
2> go
target_server                                                                                                                                                                               
-------------
 
(1 row affected)

As I suspected, no target server is defined! That means the job can be executed on “Any Running Server”, which is why this method was executed on CS2… while CS2 is not configured for it.

Now, two solutions are possible:
1. Change the target_server to use only CS1 (idql):

UPDATE dm_job OBJECTS SET target_server='repository1.repository1@CONTENT_SERVER1' WHERE object_name='D2JobLifecycleBatch';

2. Add the app server WebCache to CS2, pointing to CS1 (iapi):

API>fetch,c,dm_server_config
API>append,c,l,app_server_name
WebCache
API>append,c,l,app_server_uri

https://CONTENT_SERVER1:6679/services/scs/publish

API>save,c,l

Check after update:
API> retrieve,c,dm_server_config
...
3d01e24080000255
API> dump,c,3d01e24080000255
...
USER ATTRIBUTES

  object_name                     : repository1
...
  app_server_name              [0]: do_method
                               [1]: do_mail
                               [2]: FULLTEXT_SERVER1_PORT_IndexAgent
                               [3]: FULLTEXT_SERVER2_PORT_IndexAgent
                               [4]: WebCache
  app_server_uri               [0]: https://CONTENT_SERVER1:9082/DmMethods/servlet/DoMethod
                               [1]: https://CONTENT_SERVER1:9082/DmMail/servlet/DoMail
                               [2]: https://FULLTEXT_SERVER1:PORT/IndexAgent/servlet/IndexAgent
                               [3]: https://FULLTEXT_SERVER2:PORT/IndexAgent/servlet/IndexAgent
                               [4]: https://CONTENT_SERVER1:6679/services/scs/publish
...

We chose the second option, because:
– the job is handled by the application team;
– modifying the job to run only on CS1 would resolve this case, but if the method is applied by another job or manually on CS2, we would get the same error again.
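
Incidentally, if you go for the first option instead, it can be worth checking whether other jobs are in the same situation. A quick hedged sketch in idql (the exact predicate may need adjusting to how empty target_server values are stored in your repository):

1> SELECT object_name, target_server FROM dm_job WHERE target_server IS NULLSTRING OR target_server = ' ';
2> go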

After this update, no errors have been recorded in the log file:

...
2019-02-12 04:06:10,948 UTC INFO  [stdout] (default task-81) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'
2019-02-12 04:06:10,955 UTC INFO  [stdout] (default task-81) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod before session apply 'WCPublishDocumentMethod' time: 0.007s
2019-02-12 04:06:10,955 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : No ARG_RETURN_ID in mapArguments
2019-02-12 04:06:10,956 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : newObject created, user session used: 0801e2408023f714
2019-02-12 04:06:10,956 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.D2SysObject                    : getFolderIdFromCache: got folder: /System/D2/Data/c6_method_return, object id: 0b01e2408000256b, docbase: repository1
2019-02-12 04:06:11,016 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : mapArguments: {-method_return_id=0801e2408023f714}
2019-02-12 04:06:11,016 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : origArguments: {-id=0901e24080122a59}
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : methodName: WCPublishDocumentMethod
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : methodParams: -id 0901e24080122a59 -user_name dmadmin -docbase_name repository1
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : Start WCPublishDocumentMethod method with JMS (Java Method Services) runLocally hint set is false
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : key: -method_return_id, and value: 0801e2408023f714
...

I hope this blog will help you to quickly resolve this kind of incident.

Cet article Documentum – Documents not transferred to WebConsumer est apparu en premier sur Blog dbi services.

MySQL 8 and Security – Encryption of binary logs

Fri, 2019-02-22 10:13

As I discussed in some of my recent talks at conferences (at the DOAG for example), MySQL 8 came out with new features which bring lots of improvements in terms of security.

“At-Rest” encryption has been available for several releases now:
– InnoDB Tablespace Encryption: since 5.7.11
– Redo and Undo Log Data Encryption: since 8.0.1
Starting from version 8.0.14, you can now also encrypt binary and relay log files. In this blog post we will see how to configure that and we will do some tests.

Case 1: Binary log files are not encrypted

Binary log file encryption is disabled by default:

mysql> show variables like 'binlog_encryption';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| binlog_encryption | OFF   |
+-------------------+-------+
1 row in set (0.02 sec)

With this configuration, we could extract some sensitive information with simple OS commands, without even connecting to the database. This means that if an OS user account is compromised, we could have some serious security issues.
First of all, I create a database and a table and insert some sensitive information into it:

mysql> create database cards;
Query OK, 1 row affected (0.01 sec)
mysql> use cards;
Database changed
mysql> CREATE TABLE cards.banking_card (id int (128), day int(2), month int(2), year int(4), type varchar(128), code varchar(128));
Query OK, 0 rows affected (0.04 sec)
mysql> INSERT INTO cards.banking_card VALUES (1, 8, 3, 1984, 'secret code', '01-234-5678');
Query OK, 1 row affected (0.01 sec)

I check which binary log file is currently in use:

mysql> SHOW BINARY LOGS;
+--------------------+-----------+-----------+
| Log_name           | File_size | Encrypted |
+--------------------+-----------+-----------+
| mysqld8-bin.000001 |      1384 | No        |
| mysqld8-bin.000002 |       178 | No        |
| mysqld8-bin.000003 |       974 | No        |
+--------------------+-----------+-----------+
mysql> SHOW BINLOG EVENTS IN 'mysqld8-bin.000003';
+--------------------+-----+----------------+-----------+-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Log_name           | Pos | Event_type     | Server_id | End_log_pos | Info                                                                                                                                                    |
+--------------------+-----+----------------+-----------+-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| mysqld8-bin.000003 |   4 | Format_desc    |         8 |         124 | Server ver: 8.0.15, Binlog ver: 4                                                                                                                       |
| mysqld8-bin.000003 | 124 | Previous_gtids |         8 |         155 |                                                                                                                                                         |
| mysqld8-bin.000003 | 155 | Anonymous_Gtid |         8 |         232 | SET @@SESSION.GTID_NEXT= 'ANONYMOUS'                                                                                                                    |
| mysqld8-bin.000003 | 232 | Query          |         8 |         341 | create database cards /* xid=17 */                                                                                                                      |
| mysqld8-bin.000003 | 341 | Anonymous_Gtid |         8 |         420 | SET @@SESSION.GTID_NEXT= 'ANONYMOUS'                                                                                                                    |
| mysqld8-bin.000003 | 420 | Query          |         8 |         635 | use `cards`; CREATE TABLE cards.banking_card (id int (128), day int(2), month int(2), year int(4), type varchar(128), code varchar(128)) /* xid=23 */ |
| mysqld8-bin.000003 | 635 | Anonymous_Gtid |         8 |         714 | SET @@SESSION.GTID_NEXT= 'ANONYMOUS'                                                                                                                    |
| mysqld8-bin.000003 | 714 | Query          |         8 |         790 | BEGIN                                                                                                                                                   |
| mysqld8-bin.000003 | 790 | Table_map      |         8 |         865 | table_id: 66 (cards.banking_card)                                                                                                                     |
| mysqld8-bin.000003 | 865 | Write_rows     |         8 |         943 | table_id: 66 flags: STMT_END_F                                                                                                                          |
| mysqld8-bin.000003 | 943 | Xid            |         8 |         974 | COMMIT /* xid=24 */                                                                                                                                     |
+--------------------+-----+----------------+-----------+-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
11 rows in set (0.00 sec)

With the two following commands, for example (od to dump files in octal and other formats, and xxd to make a hexdump), I can easily read the content of my binary log file:

# od -c mysqld8-bin.000003
0001440         a 003           B           001
0001460   005   c   a   r   d   s   016   b   a   n   k   i   n   g
0001500   _   c   a   r   d   ?     006 003 003 003 003 017 017 004
0001520   002   002   ? 001 001   002 003 374 377   005 224 022
0001540 332 375   ~   n   \ 036  \b         N       257 003
0001560           B           001   002   006 377
0001600   001        \b       003       300  \a  
0001620    \v     s   e   c   r   e   t       c   o   d   e  \v  
0001640   0   1   -   2   3   4   -   5   6   7   8   9   N   _ 312 375

# xxd mysqld8-bin.000003
00001f0: ff00 1300 6361 7264 7300 4352 4541 5445  ....cards.CREATE
0000200: 2054 4142 4c45 2063 6172 6473 2e63 6172   TABLE cards.ban
0000210: 645f 656e 6372 7970 7465 6420 2869 6420  king_card    (id
0000220: 696e 7420 2831 3238 292c 2064 6179 2069  int (128), day i
0000230: 6e74 2832 292c 206d 6f6e 7468 2069 6e74  nt(2), month int
0000240: 2832 292c 2079 6561 7220 696e 7428 3429  (2), year int(4)
0000250: 2c20 7479 7065 2076 6172 6368 6172 2831  , type varchar(1
0000260: 3238 292c 2063 6f64 6520 7661 7263 6861  28), code varcha
0000270: 7228 3132 3829 29f5 771f aafd 7e6e 5c22  r(128)).w...~n\"
0000280: 0800 0000 4f00 0000 ca02 0000 0000 0000  ....O...........
0000290: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00002a0: 0000 0000 0000 0002 0200 0000 0000 0000  ................
00002b0: 0300 0000 0000 0000 e692 3509 6582 05fc  ..........5.e...
00002c0: 5301 8f38 0100 9993 56a3 fd7e 6e5c 0208  S..8....V..~n\..
00002d0: 0000 004c 0000 0016 0300 0008 000b 0000  ...L............
00002e0: 0000 0000 0005 0000 1d00 0000 0000 0001  ................
00002f0: 2000 a045 0000 0000 0603 7374 6404 ff00   ..E......std...
0000300: ff00 ff00 12ff 0063 6172 6473 0042 4547  .......cards.BEG
0000310: 494e 0567 d2c2 fd7e 6e5c 1308 0000 004b  IN.g...~n\.....K
0000320: 0000 0061 0300 0000 0042 0000 0000 0001  ...a.....B......
0000330: 0005 6361 7264 7300 0e63 6172 645f 656e  ..cards..banking
0000340: 6372 7970 7465 6400 0603 0303 030f 0f04  _card...........
0000350: 0002 0002 3f01 0100 0203 fcff 0005 9412  ....?...........
0000360: dafd 7e6e 5c1e 0800 0000 4e00 0000 af03  ..~n\.....N.....
0000370: 0000 0000 4200 0000 0000 0100 0200 06ff  ....B...........
0000380: 0001 0000 0008 0000 0003 0000 00c0 0700  ................
0000390: 000b 0073 6563 7265 7420 636f 6465 0b00  ...secret code..
00003a0: 3031 2d32 3334 2d35 3637 3839 4e5f cafd  01-234-56789N_..
00003b0: 7e6e 5c10 0800 0000 1f00 0000 ce03 0000  ~n\.............
00003c0: 0000 1800 0000 0000 0000 f9d0 d057       .............W

Yes, that’s not good news.

Case 2: Binary log files are encrypted

Now let’s try to activate encryption through the following steps:
1) Activate a keyring plugin for the master key management (for the Community Edition, it’s called keyring_file and it stores keyring data in a local file):

# mysqld_multi stop 8
mysqld_multi log file version 2.16; run: Thu Feb 21 10:54:19 2019
Stopping MySQL servers
# mysqld_multi --defaults-file=/u01/app/mysql/admin/mysqld8/etc/my.cnf start 8 > /dev/null 2>&1
# ps -eaf|grep mysqld
mysql     3362     1 13 10:58 pts/0    00:00:00 /u01/app/mysql/product/mysql-8.0.15/bin/mysqld --port=33008 --socket=/u01/app/mysql/admin/mysqld8/socket/mysqld8.sock --pid-file=/u01/app/mysql/admin/mysqld8/socket/mysqld8.pid --log-error=/u01/app/mysql/admin/mysqld8/log/mysqld8.err --datadir=/u02/mysqldata/mysqld8/ --basedir=/u01/app/mysql/product/mysql-8.0.15/ --slow_query_log=0 --slow_query_log_file=/u01/app/mysql/admin/mysqld8/log/mysqld8-slow-query.log --log-bin=/u01/app/mysql/admin/mysqld8/binlog/mysqld8-bin --innodb_flush_log_at_trx_commit=1 --sync_binlog=1 --local-infile=0 --general_log=0 --general_log_file=/u01/app/mysql/admin/mysqld8/log/mysqld8.log --lc_messages_dir=/u01/app/mysql/product/mysql-8.0.15/share/ --lc_messages=en_US --server-id=8 --log_timestamps=SYSTEM --early-plugin-load=keyring_file.so

The /u01/app/mysql/admin/mysqld8/etc/my.cnf file is defined as follows:

[mysqld8]
port                           = 33008
mysqladmin                     = /u01/app/mysql/product/mysql-8.0.15/bin/mysqladmin
...
server-id                      = 8
early-plugin-load              = keyring_file.so

2) Turn on the binlog_encryption variable:

mysql> SET PERSIST binlog_encryption=ON;
Query OK, 0 rows affected (0.03 sec)

At this point, encryption is enabled:

mysql> flush logs;
Query OK, 0 rows affected (0.05 sec)
mysql> SHOW BINARY LOGS;
+--------------------+-----------+-----------+
| Log_name           | File_size | Encrypted |
+--------------------+-----------+-----------+
| mysqld8-bin.000001 |      1384 | No        |
| mysqld8-bin.000002 |       178 | No        |
| mysqld8-bin.000003 |      1023 | No        |
| mysqld8-bin.000004 |       716 | Yes       |
| mysqld8-bin.000005 |       667 | Yes       |
+--------------------+-----------+-----------+

I insert again some data into my table:

mysql> use cards;
Database changed
mysql> INSERT INTO cards.banking_card VALUES (2, 5, 9, 1986, 'secret code', '01-234-5678');
Query OK, 1 row affected (0.02 sec)

I can try to extract some information from the binary log files on disk:

# xxd mysqld8-bin.000005
...
0000250: 2796 8d0c 9171 7109 df65 2434 9d0e 4f40  '....qq..e$4..O@
0000260: e024 07e8 9db7 ae84 f0d5 5728 90d4 905f  .$........W(..._
0000270: 9cc4 6c33 d4e1 5839 aa1f 97bb af04 b24d  ..l3..X9.......M
0000280: e36d dd05 3d0c f9d8 fbee 2379 2b85 2744  .m..=.....#y+.'D
0000290: efe4 29cb 3eff 03b8 b934 ec6b 4e9c 9189  ..).>....4.kN...
00002a0: d14b 402c 3d80 effe c34d 0d27 3be7 b427  .K@,=....M.';..'
00002b0: 5389 3208 b199 7da6 acf6 d98a 7ac3 299c  S.2...}.....z.).
00002c0: 3de0 5e12 3ed6 5849 f907 3d2c da66 f1a1  =.^.>.XI..=,.f..
00002d0: 7556 c62b b88f a3da 1a47 230b aae8 c63c  uV.+.....G#....<
00002e0: 6751 4f31 2d14 66e9 5a17 a980 4d37 2067  gQO1-.f.Z...M7 g
00002f0: 034c e0d7 b8ad 8cb4 b6d0 16e9 f6a5 3f90  .L............?.
0000300: 95aa 008e 79e1 7fda d74e ada2 f602 cc3b  ....y....N.....;
0000310: 1b61 c657 b656 3840 712d 2bb3 61b9 3c44  .a.W.V8@q-+.a..
0000390: 2a6b e68f e14c 6b3d b6ac e4cf 4f75 a828  *k...Lk=....Ou.(
00003a0: 0e21 24ad 27c7 e970 37a2 c883 46b0 ff26  .!$.'..p7...F..&
00003b0: 7c2a cf9f 9845 e4ca c067 f763 cd80 b1b3  |*...E...g.c....
00003c0: 74b8 6066 b1c0 634e fabc 9312 d0c4 ed8d  t.`f..cN........
00003d0: 880d 41b7 a1d4 3c59 bea3 63e7 ab61 11b7  ..A...<Y..c..a..
00003e0: 9f40 4555 f469 38b8 1add 1336 f03d       .@EU.i8....6.=

Well done, no clear-text data is readable anymore now!
Just a little note: when encryption is turned on, to display the content of the binary log files with mysqlbinlog we have to use the --read-from-remote-server (-R) option, otherwise mysqlbinlog has no access to them:

# cd /u01/app/mysql/admin/mysqld8/binlog
# mysqlbinlog mysqld8-bin.000005
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
ERROR: Reading encrypted log files directly is not supported.
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;
DELIMITER ;
#End of log file
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;

# mysqlbinlog -R --protocol TCP --host 192.168.25.2 --port 33008 --user root -p mysqld8-bin.000005
Enter password:
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
#at 4
#190221 12:01:55 server id 8  end_log_pos 124 CRC32 0x92413637  Start: binlog v 4, server v 8.0.15 created 190221 12:01:55
BINLOG '
I4VuXA8IAAAAeAAAAHwAAAAAAAQAOC4wLjE1AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAEwANAAgAAAAABAAEAAAAYAAEGggAAAAICAgCAAAACgoKKioAEjQA
CgE3NkGS
'/*!*/;
#at 124
#190221 12:01:55 server id 8  end_log_pos 155 CRC32 0x7d47d67e  Previous-GTIDs
#[empty]
#at 155
...
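
One more point worth mentioning: since binary log encryption relies on a binlog master key, that key can also be rotated on demand so that the existing encrypted log files get their file passwords re-encrypted with the new key. A quick sketch (this assumes the account has the required privilege, BINLOG_ENCRYPTION_ADMIN if I am not mistaken):

mysql> ALTER INSTANCE ROTATE BINLOG MASTER KEY;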

Stay tuned with MySQL 8! ;)

Cet article MySQL 8 and Security – Encryption of binary logs est apparu en premier sur Blog dbi services.

Working with files on the filesystem in PostgreSQL

Fri, 2019-02-22 04:18

PostgreSQL comes with various helper functions that support you when working with files on the filesystem of the host PostgreSQL is running on. You might ask yourself why that is important, but there are use cases for it. Maybe you want to list the contents of a directory because new files that showed up since the last check should trigger something. Maybe you want to load a file into the database (which you can, and even should, do using copy if it is text based and reasonably well formatted, but that is not the scope of this post).

For listing files in a directory there is this one:

postgres=# select * from pg_ls_dir('.');
      pg_ls_dir       
----------------------
 pg_wal
 global
 pg_commit_ts
 pg_dynshmem
 pg_notify
 pg_serial
 pg_snapshots
 pg_subtrans
 pg_twophase
 pg_multixact
 base
 pg_replslot
 pg_tblspc
 pg_stat
 pg_stat_tmp
 pg_xact
 pg_logical
 PG_VERSION
 postgresql.conf
 postgresql.auto.conf
 pg_hba.conf
 pg_ident.conf
 pg_log
 postmaster.opts
 autoprewarm.blocks
 postmaster.pid
 current_logfiles
(27 rows)

By default the ‘.’ and ‘..’ entries are omitted, but you can control this:

postgres=# select * from pg_ls_dir('.',true,true);
      pg_ls_dir       
----------------------
 .
 ..
 pg_wal
 global
 pg_commit_ts
 pg_dynshmem
 pg_notify
 pg_serial
 pg_snapshots
 pg_subtrans
 pg_twophase
 pg_multixact
 base
 pg_replslot
 pg_tblspc
 pg_stat
 pg_stat_tmp
 pg_xact
 pg_logical
 PG_VERSION
 postgresql.conf
 postgresql.auto.conf
 pg_hba.conf
 pg_ident.conf
 pg_log
 postmaster.opts
 autoprewarm.blocks
 postmaster.pid
 current_logfiles
(29 rows)

There is no option to control sorting, but of course you can add an ORDER BY to do this:

postgres=# select * from pg_ls_dir('.',true,true) order by 1;
      pg_ls_dir       
----------------------
 .
 ..
 autoprewarm.blocks
 base
 current_logfiles
 global
 pg_commit_ts
 pg_dynshmem
 pg_hba.conf
 pg_ident.conf
 pg_log
 pg_logical
 pg_multixact
 pg_notify
 pg_replslot
 pg_serial
 pg_snapshots
 pg_stat
 pg_stat_tmp
 pg_subtrans
 pg_tblspc
 pg_twophase
 PG_VERSION
 pg_wal
 pg_xact
 postgresql.auto.conf
 postgresql.conf
 postmaster.opts
 postmaster.pid
(29 rows)

You could load that into an array and then do whatever you want to do with it for further processing:

postgres=# \x
Expanded display is on.
postgres=# with dirs as (select pg_ls_dir('.'::text,true,true) dir order by 1)
                select array_agg(dir) from dirs;
-[ RECORD 1 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
array_agg | {.,..,autoprewarm.blocks,base,current_logfiles,global,pg_commit_ts,pg_dynshmem,pg_hba.conf,pg_ident.conf,pg_log,pg_logical,pg_multixact,pg_notify,pg_replslot,pg_serial,pg_snapshots,pg_stat,pg_stat_tmp,pg_subtrans,pg_tblspc,pg_twophase,PG_VERSION,pg_wal,pg_xact,postgresql.auto.conf,postgresql.conf,postmaster.opts,postmaster.pid}

When you try to list the files of a directory you do not have permissions on, that of course fails:

postgres=# select pg_ls_dir('/root');
ERROR:  could not open directory "/root": Permission denied

All other directories the PostgreSQL operating system user has access to can be listed:

postgres=# \x
Expanded display is off.
postgres=# select pg_ls_dir('/var/tmp');
                                pg_ls_dir                                
-------------------------------------------------------------------------
 yum-postgres-uSpYMT
 systemd-private-f706224b798a404a8b1b7efbbb7137c9-chronyd.service-saK1Py
 systemd-private-bcd40d1946c94f1fbcb73d1047ee2fc2-chronyd.service-Fr7WgV
 systemd-private-798725e073664df6bbc5c6041151ef61-chronyd.service-kRvvJa
(4 rows)

When you need to get some statistics about a file there is pg_stat_file:

postgres=# select pg_stat_file('postgresql.conf');
                                     pg_stat_file                                      
---------------------------------------------------------------------------------------
 (26343,"2019-02-21 17:35:22+01","2019-02-05 15:41:11+01","2019-02-05 15:41:11+01",,f)
(1 row)
postgres=# select pg_size_pretty((pg_stat_file('postgresql.conf')).size);
 pg_size_pretty 
----------------
 26 kB
(1 row)
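
The two functions can of course be combined, for example to list the files of a directory together with their sizes. A quick sketch, assuming the pg_log directory shown in the listing above:

postgres=# select name, (pg_stat_file('pg_log/' || name)).size
             from pg_ls_dir('pg_log') as name
            order by 2 desc;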

Loading a file into the database is possible as well:

postgres=# create table t1 ( a text );
CREATE TABLE
postgres=# insert into t1 select pg_read_file('postgresql.conf');
INSERT 0 1
postgres=# select * from t1;
                                                        a                                                        
-----------------------------------------------------------------------------------------------------------------
 # -----------------------------                                                                                +
 # PostgreSQL configuration file                                                                                +
 # -----------------------------                                                                                +
 #                                                                                                              +
 # This file consists of lines of the form:                                                                     +
 #                                                                                                              +
 #   name = value                                                                                               +
...

This works even with binary files (but do you really want to have binary files in the database?):

postgres=# create table t2 ( a bytea );
CREATE TABLE
postgres=# insert into t2 select pg_read_binary_file('/bin/cp');
INSERT 0 1
postgres=# select * from t2;
                                                                                                                                                                                                                   
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 \x7f454c4602010100000000000000000002003e0001000000293e4000000000004000000000000000c0560200000000000000000040003800090040001f001e000600000005000000400000000000000040004000000000004000400000000000f801000000000000
(1 row)
postgres=# drop table t1,t2;
DROP TABLE

As usual this is all very well documented in the PostgreSQL documentation.

Cet article Working with files on the filesystem in PostgreSQL est apparu en premier sur Blog dbi services.

Create a primary database using the backup of a standby database on 12cR2

Thu, 2019-02-21 02:05

The scope of this blog will be to show how to create a primary role database based on a backup of a standby database on 12cR2.

Step 1: We assume that an auxiliary instance has been created and started in nomount mode.

rman target /
restore primary controlfile from 'backup_location_directory/control_.bkp';
exit;

Specifying “restore primary” will modify the flag in the controlfile, so the instance will be mounted with the primary role instead of the standby one.

Step 2: Once the instance is mounted, we restore the backup of the standby database.

run
{
catalog start with 'backup_location_directory';
restore database;
alter database flashback off;
recover database;
}

If you specified the recovery destination and size parameters in the pfile used to start the instance, it will try to enable flashback.
Enabling flashback before or during the recovery is not allowed, so we deactivate it for the moment.

Step 3: The restore/recover completed successfully. We try to open the database, but get some errors:

alter database open :

ORA-03113: end-of-file on communication channel
Process ID: 2588
Session ID: 1705 Serial number: 5

Step 4: Fix the errors and try to open the database:

--normal redo log groups
alter database clear unarchived logfile group YYY;

--standby redo log groups
alter database clear unarchived logfile group ZZZ;
alter database drop logfile group ZZZ;
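
To know which groups need to be cleared or dropped, the online and standby redo log groups can be listed first. A short sketch:

SQL> select group#, thread#, status from v$log;
SQL> select group#, thread#, status from v$standby_log;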

This is still not enough, though. Looking at the database alert log file, we can see:

LGWR: Primary database is in MAXIMUM AVAILABILITY mode 
LGWR: Destination LOG_ARCHIVE_DEST_2 is not serviced by LGWR 
LGWR: Destination LOG_ARCHIVE_DEST_1 is not serviced by LGWR 

Errors in file /<TRACE_DESTINATION>_lgwr_1827.trc: 
ORA-16072: a minimum of one standby database destination is required 
LGWR: terminating instance due to error 16072 
Instance terminated by LGWR, pid = 1827

Step 5: Complete the opening procedure:

alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';
alter database set standby database to maximize performance;

SQL> select name,open_mode,protection_mode from v$database;

NAME      OPEN_MODE            PROTECTION_MODE
--------- -------------------- --------------------
NAME      MOUNTED              MAXIMUM PERFORMANCE

SQL> alter database flashback on;

Database altered.

SQL> alter database open;

Database altered.

SQL> select name,db_unique_name,database_role from v$database;

NAME      DB_UNIQUE_NAME                 DATABASE_ROLE
--------- ------------------------------ ----------------
NAME      NAME_UNIQUE                    PRIMARY

Cet article Create a primary database using the backup of a standby database on 12cR2 est apparu en premier sur Blog dbi services.

Documentum – MigrationUtil – 3 – Change Server Config Name

Thu, 2019-02-21 02:00

In the previous blog, I changed the Docbase Name from RepoTemplate to repository1 using MigrationUtil; in this blog, it is the Server Config Name’s turn to be changed.

In general, the repository name and the server config name are the same, except in a High Availability setup.
You can find the Server Config Name in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ cat $DOCUMENTUM/dba/config/repository1/server.ini
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = DCTMDB
...
1. Migration preparation

To change the server config name to repository1, you first need to update the MigrationUtil configuration file, as below:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/config.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">repository1</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>

...

<entry key="ChangeServerName">yes</entry>
<entry key="NewServerName.1">repository1</entry>

Set all other entries to no.
The tool will use the above information and load more from the server.ini file.

2. Execute the migration

Use the script below to execute the migration:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh
#!/bin/sh
CLASSPATH=${CLASSPATH}:MigrationUtil.jar
export CLASSPATH
java -cp "${CLASSPATH}" MigrationUtil

Update it if you need to override the CLASSPATH only during the migration.

2.a Stop the Docbase and the DocBroker

$DOCUMENTUM/dba/dm_shutdown_repository1
$DOCUMENTUM/dba/dm_stop_DocBroker

2.b Update the database name in the server.ini file
As during the Docbase Name change, this is a workaround to avoid the error below:

...
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/DCTMDB
ERROR...Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor

Check the tnsnames.ora and note the service name; in my case it is dctmdb.local.

[dmadmin@vmtestdctm01 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora 
DCTMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmtestdctm01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dctmdb.local)
    )
  )

Make the change in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = dctmdb.local
...

2.c Execute the migration script

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Skipping Docbase ID Change...
Skipping Host Name Change...
Skipping Install Owner Change...

Created log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange.log
Changing Server Name...
Database owner password is read from config.xml
Finished changing Server Name...

Skipping Docbase Name Change...
Skipping Docker Seamless Upgrade scenario...

Migration Utility completed.

All changes have been recorded in the log file:

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange.log
Start: 2019-02-02 19:55:52.531
Changing Server Name
=====================

DocbaseName: repository1
Retrieving server.ini path for docbase: repository1
Found path: /app/dctm/product/16.4/dba/config/repository1/server.ini
ServerName: RepoTemplate
New ServerName: repository1

Database Details:
Database Vendor:oracle
Database Name:dctmdb.local
Databse User:RepoTemplate
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/dctmdb.local
Successfully connected to database....

Validating Server name with existing servers...
select object_name from dm_sysobject_s where r_object_type = 'dm_server_config'

Processing Database Changes...
Created database backup File '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange_DatabaseRestore.sql'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_server_config' and object_name = 'RepoTemplate'
update dm_sysobject_s set object_name = 'repository1' where r_object_id = '3d0f449880000102'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_jms_config' and object_name like '%repository1.RepoTemplate%'
update dm_sysobject_s set object_name = 'JMS vmtestdctm01:9080 for repository1.repository1' where r_object_id = '080f4498800010a9'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_cont_transfer_config' and object_name like '%repository1.RepoTemplate%'
update dm_sysobject_s set object_name = 'ContTransferConfig_repository1.repository1' where r_object_id = '080f4498800004ba'
select r_object_id,target_server from dm_job_s where target_server like '%repository1.RepoTemplate%'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800010d3'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000035e'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000035f'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000360'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000361'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000362'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000363'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000364'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000365'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000366'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000367'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000372'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000373'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000374'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000375'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000376'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000377'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000378'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000379'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000037a'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000037b'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000386'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000387'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000388'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000389'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000e42'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000cb1'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000d02'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000d04'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000d05'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003db'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003dc'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003dd'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003de'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003df'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003e0'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003e1'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003e2'
Successfully updated database values...

Processing File changes...
Backed up '/app/dctm/product/16.4/dba/config/repository1/server.ini' to '/app/dctm/product/16.4/dba/config/repository1/server.ini_server_RepoTemplate.backup'
Updated server.ini file:/app/dctm/product/16.4/dba/config/repository1/server.ini
Backed up '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties' to '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties_server_RepoTemplate.backup'
Updated acs.properties: /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties
Finished processing File changes...
Finished changing server name 'repository1'

Processing startup and shutdown scripts...
Backed up '/app/dctm/product/16.4/dba/dm_start_repository1' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_start_repository1_server_RepoTemplate.backup'
Updated dm_startup script.
Backed up '/app/dctm/product/16.4/dba/dm_shutdown_repository1' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_shutdown_repository1_server_RepoTemplate.backup'
Updated dm_shutdown script.

Finished changing server name....
End: 2019-02-02 19:55:54.687

2.d Reset the value of database_conn in the server.ini file

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = repository1
database_conn = DCTMDB
...
3. Check after update

Start the Docbroker and the Docbase:

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repository1

Check the log to be sure that the repository has been started correctly. Notice that the log name has been changed from RepoTemplate.log to repository1.log:

[dmadmin@vmtestdctm01 ~]$ tail -5 $DOCUMENTUM/dba/log/repository1.log
...
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:00:09.807613	29293[29293]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 29345, session 010f44988000000b) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:00:10.809686	29293[29293]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 29362, session 010f44988000000c) is started sucessfully."
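
To double-check the change inside the repository itself, a DQL query against dm_server_config can be run with idql (credentials below are placeholders); it should return repository1 for the object updated in the log (r_object_id 3d0f449880000102):

[dmadmin@vmtestdctm01 ~]$ idql repository1 -Udmadmin -Pchangeme
1> select r_object_id, object_name from dm_server_config;
2> go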
4. Is a manual rollback possible?

In fact, in the MigrationUtilLogs folder, you can find the logs, the backups of the start/stop scripts, and also the SQL file for a manual rollback:

[dmadmin@vmtestdctm01 ~]$ ls -rtl $DM_HOME/install/external_apps/MigrationUtil/MigrationUtilLogs
total 980
-rw-rw-r-- 1 dmadmin dmadmin   4323 Feb  2 19:55 ServerNameChange_DatabaseRestore.sql
-rwxrw-r-- 1 dmadmin dmadmin   2687 Feb  2 19:55 dm_start_repository1_server_RepoTemplate.backup
-rwxrw-r-- 1 dmadmin dmadmin   3623 Feb  2 19:55 dm_shutdown_repository1_server_RepoTemplate.backup
-rw-rw-r-- 1 dmadmin dmadmin   6901 Feb  2 19:55 ServerNameChange.log

Let's see the content of the SQL file:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange_DatabaseRestore.sql
update dm_sysobject_s set object_name = 'RepoTemplate' where r_object_id = '3d0f449880000102';
update dm_sysobject_s set object_name = 'JMS vmtestdctm01:9080 for repository1.RepoTemplate' where r_object_id = '080f4498800010a9';
update dm_sysobject_s set object_name = 'ContTransferConfig_repository1.RepoTemplate' where r_object_id = '080f4498800004ba';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f4498800010d3';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f44988000035e';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f44988000035f';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000360';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000361';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000362';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000363';
...

I already noticed that a manual rollback is possible after the Docbase ID and Docbase Name changes, but I didn't test it… so I would like to try it this time.
To roll back:
Stop the Docbase and the DocBroker

$DOCUMENTUM/dba/dm_shutdown_repository1
$DOCUMENTUM/dba/dm_stop_DocBroker

Execute the SQL script:

[dmadmin@vmtestdctm01 ~]$ cd $DM_HOME/install/external_apps/MigrationUtil/MigrationUtilLogs
[dmadmin@vmtestdctm01 MigrationUtilLogs]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 17 19:53:12 2019
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> conn RepoTemplate@DCTMDB
Enter password: 
Connected.
SQL> @ServerNameChange_DatabaseRestore.sql
1 row updated.
1 row updated.
1 row updated.
...

The DB user is still RepoTemplate; it wasn't changed when I changed the Docbase Name.

Copy back the saved files; you can find the list of updated and backed-up files in the log:

cp /app/dctm/product/16.4/dba/config/repository1/server.ini_server_RepoTemplate.backup /app/dctm/product/16.4/dba/config/repository1/server.ini
cp /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties_server_RepoTemplate.backup /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties
cp /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_start_repository1_server_RepoTemplate.backup /app/dctm/product/16.4/dba/dm_start_repository1
cp /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_shutdown_repository1_server_RepoTemplate.backup /app/dctm/product/16.4/dba/dm_shutdown_repository1

Think about changing back the database connection in /app/dctm/product/16.4/dba/config/repository1/server.ini (see step 2.d).

Then start the DocBroker and the Docbase:

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repository1

Check the repository log:

[dmadmin@vmtestdctm01 ~]$ tail -5 $DOCUMENTUM/dba/log/RepoTemplate.log
...
2019-02-02T20:15:59.677595	19200[19200]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 19232, session 010f44988000000a) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:16:00.679566	19200[19200]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 19243, session 010f44988000000b) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:16:01.680888	19200[19200]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 19255, session 010f44988000000c) is started sucessfully."

Yes, the rollback works correctly! :D That said, I hope you will never have to do it on a production environment. ;)

The post Documentum – MigrationUtil – 3 – Change Server Config Name appeared first on Blog dbi services.

DOAG day “database migration” in Mannheim on 19.02.2019

Wed, 2019-02-20 02:12

Yesterday I attended a DOAG conference day in Mannheim about migrating Oracle databases.

The first presentation was about the challenges of migrating to multitenant databases. With Oracle 20 it will probably no longer be possible to create a non-CDB database or to upgrade from one, so in the coming years all databases will have to be migrated to the multitenant architecture. Problems with licensing and different character sets were covered, and some migration methods to PDBs were shown.
The second lecture showed causes of failed database migrations that were due not to the database technology itself but to surrounding technologies and human errors.
The third talk presented a new migration method combining RMAN incremental backups with transportable tablespaces, which is very interesting for the migration of big databases.
A migration method based on the RMAN duplicate command with the noopen option was also presented, as sketched below.
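
As an illustration of that last method (a minimal sketch, not taken from the conference material; connection strings and database names are placeholders, and the auxiliary instance preparation is omitted), the NOOPEN clause leaves the duplicated database mounted instead of opening it with RESETLOGS, so it can for instance be upgraded before being opened:

RMAN> connect target sys@SRCDB
RMAN> connect auxiliary sys@DUPDB
RMAN> duplicate target database to DUPDB from active database noopen;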

Last but not least, an entertaining talk about migration projects was held; the lessons learned were presented as Haiku (a Japanese poem form).

The post DOAG day “database migration” in Mannheim on 19.02.2019 appeared first on Blog dbi services.

PostgreSQL : barman rsync method vs streaming method

Sat, 2019-02-16 06:16

Barman is a tool to perform backup and recovery for PostgreSQL databases. It can do backups using two methods:
-rsync/ssh
-streaming
In this blog I am going to explain how to use these two methods to back up a PostgreSQL database. Generally it is good practice to dedicate a server to barman instead of installing it on the database server. My environment is described below:
PostgreSQL server: dbi-pg-essentials 192.168.22.101 CentOS 7
barman server: pgservertools 192.168.22.104 Oracle Linux 7
PostgreSQL version: 11.1
barman version: 2.6
The first step is to install barman on the barman server pgservertools:

[root@pgservertools network-scripts]# yum install  barman.noarch barman-cli.noarch
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package barman.noarch 0:2.6-1.rhel7 will be installed
---> Package barman-cli.noarch 0:1.3-1.rhel7.1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package        Arch       Version             Repository                  Size
================================================================================
Installing:
 barman         noarch     2.6-1.rhel7         pgdg10-updates-testing     300 k
 barman-cli     noarch     1.3-1.rhel7.1       pgdg10-updates-testing      14 k

Transaction Summary
================================================================================
...
...
Installed:
  barman.noarch 0:2.6-1.rhel7         barman-cli.noarch 0:1.3-1.rhel7.1

Complete!
[root@pgservertools network-scripts]#

The installation creates a Linux user named barman.
As the rsync method needs passwordless connections between the two servers for the barman user, we have to configure SSH keys.
On the barman server pgservertools, let's create the keys with the user barman and then copy the public key to the database server dbi-pg-essentials for the user postgres:

-bash-4.2$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/barman/.ssh/id_rsa):
Created directory '/var/lib/barman/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/barman/.ssh/id_rsa.
Your public key has been saved in /var/lib/barman/.ssh/id_rsa.pub.
The key fingerprint is:
f4:b7:6b:6e:38:25:ae:be:7f:9a:34:03:a1:1c:a0:ac barman@pgservertools
The key's randomart image is:
+--[ RSA 2048]----+
|    .            |
| . . .           |
|  o   . o        |
| .   . + o       |
|E     o S . .    |
|         .....   |
|         .++.    |
|         .+++.   |
|       .+++Bo    |
+-----------------+
-bash-4.2$ 


-bash-4.2$ cd .ssh/
-bash-4.2$ ls
id_rsa  id_rsa.pub


-bash-4.2$ ssh-copy-id postgres@dbi-pg-essentials
The authenticity of host 'dbi-pg-essentials (192.168.22.101)' can't be established.
ECDSA key fingerprint is 33:65:38:f4:eb:5b:f4:10:d3:36:7b:ea:5a:70:33:18.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
postgres@dbi-pg-essentials's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'postgres@dbi-pg-essentials'"
and check to make sure that only the key(s) you wanted were added.

If everything is fine, barman should be able to connect to the database server as the postgres Linux user without a password:

-bash-4.2$ hostname
pgservertools.localdomain
-bash-4.2$ id
uid=994(barman) gid=992(barman) groups=992(barman) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
-bash-4.2$ ssh postgres@dbi-pg-essentials hostname
dbi-pg-essentials
-bash-4.2$

On the database server, I also installed the package barman-cli.noarch, which provides the command barman-wal-archive. We will talk about this later.

[root@dbi-pg-essentials ~]# yum install barman-cli.noarch
Loaded plugins: fastestmirror
dbipgessentials                                          | 3.6 kB     00:00
edb-repos                                                | 2.4 kB     00:00
…
…
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : barman-cli-1.3-1.rhel7.1.noarch                              1/1
  Verifying  : barman-cli-1.3-1.rhel7.1.noarch                              1/1

Installed:
  barman-cli.noarch 0:1.3-1.rhel7.1

Complete!
[root@dbi-pg-essentials ~]# 
rsync backup

As specified earlier, in this case the backup is done using rsync. But there are several ways to ship the WAL files to the barman server, so before talking about the barman configuration let's take a moment to look at WAL shipping.
1- With WAL archiving
To better understand, here is a picture taken from the barman documentation. As we can see, the backup is done via rsync and the WAL files are sent via barman-wal-archive, a utility that comes with barman 2.6.
Before barman 2.6, the rsync command was used to send the WAL files to barman.
According to the documentation, using barman-wal-archive instead of rsync/SSH reduces the risk of corruption of the shipped WAL files on the barman server.
[Figure rsync_1: rsync backup with WAL archiving via barman-wal-archive, from the barman documentation]
barman-wal-archive is configured via the postgresql.conf file on the database server: it appears as the value of the archive_command parameter. Below are the relevant values in my postgresql.conf file:

[postgres@dbi-pg-essentials PG1]$ grep -E  "archive_mode|wal_level|archive_command" postgresql.conf
wal_level = replica                    # minimal, replica, or logical
archive_mode = on               # enables archiving; off, on, or always
archive_command = 'barman-wal-archive 192.168.22.104 pgserver11 %p'             # command to use to archive a logfile segment

Before barman 2.6, we would have used the following archive_command to send the WAL files to the barman server:

archive_command = 'rsync -a  %p  barman@pgservertools:/var/lib/barman/pgserver11/incoming/%f' # command to use to archive a logfile segment
[postgres@dbi-pg-essentials PG1]$

2- With WAL archiving and WAL streaming
This picture from the barman documentation will help to better understand:
[Figure rsync_2: rsync backup with both WAL archiving and WAL streaming, from the barman documentation]
To use WAL streaming to the barman server, pg_receivewal (pg_receivexlog up to PostgreSQL 10) needs to be installed on the barman server. Be careful with the version of pg_receivewal: in my case I installed version 11.1, as my PostgreSQL is 11.1.

[postgres@pgservertools bin]$ /usr/pgsql-11/bin/pg_receivewal -V
pg_receivewal (PostgreSQL) 11.1
[postgres@pgservertools bin]$

A streaming connection should also be configured, and the parameter streaming_archiver should be set to on.
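
As a reminder of what such a streaming connection usually requires on the PostgreSQL side (indicative values, not taken from the original setup, to be adapted to your environment):

# postgresql.conf on dbi-pg-essentials
max_wal_senders = 10           # enough WAL sender slots for pg_receivewal and pg_basebackup
max_replication_slots = 10     # only needed if a replication slot is used

# pg_hba.conf on dbi-pg-essentials (use a stronger auth method than trust outside a lab)
host    replication    postgres    192.168.22.104/32    trust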
To sum up, let's say that I want to configure barman with:
-the rsync method
-barman-wal-archive and WAL streaming
The barman configuration file /etc/barman.conf should then look like this:

[postgres@pgservertools bin]$ cat /etc/barman.conf | grep -v  ^\;

[barman]
barman_user = barman
configuration_files_directory = /etc/barman.d
barman_home = /var/lib/barman
log_file = /var/log/barman/barman.log
log_level = INFO
compression = gzip
retention_policy = REDUNDANCY 2

[pgserver11]
description = "Main DB Server"
ssh_command = ssh postgres@dbi-pg-essentials
streaming_conninfo=host=192.168.22.101 user=postgres
conninfo=host=192.168.22.101 user=postgres
backup_method = rsync
streaming_archiver = on
archiver = on
path_prefix=/usr/pgsql-11/bin/
[postgres@pgservertools bin]$

And the postgresql.conf should contain the following entries:

[postgres@dbi-pg-essentials PG1]$ grep -E  "archive_mode|wal_level|archive_command" postgresql.conf
wal_level = replica                    # minimal, replica, or logical
archive_mode = on               # enables archiving; off, on, or always
archive_command = 'barman-wal-archive 192.168.22.104 pgserver11 %p'             # command to use to archive a logfile segment

The first thing is to test that the barman configuration is fine for the PostgreSQL database: the check command should not return any errors. On the barman server, with the user barman:

-bash-4.2$ barman check pgserver11
Server pgserver11:
        PostgreSQL: OK
        is_superuser: OK
        PostgreSQL streaming: OK
        wal_level: OK
        directories: OK
        retention policy settings: OK
        backup maximum age: OK (no last_backup_maximum_age provided)
        compression settings: OK
        failed backups: OK (there are 0 failed backups)
        minimum redundancy requirements: OK (have 0 backups, expected at least 0)
        ssh: OK (PostgreSQL server)
        not in recovery: OK
        archive_mode: OK
        archive_command: OK
        continuous archiving: OK
        pg_receivexlog: OK
        pg_receivexlog compatible: OK
        receive-wal running: OK
        archiver errors: OK
-bash-4.2$
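
Note that the "receive-wal running" check stays green because the barman RPM usually installs a cron entry (typically under /etc/cron.d/barman) that runs barman cron every minute, which in turn keeps the pg_receivewal process alive. It can also be run manually:

-bash-4.2$ barman cron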

Now we can launch a backup using the backup command, on the barman server with the user barman:

-bash-4.2$ barman backup pgserver11
Starting backup using rsync-exclusive method for server pgserver11 in /var/lib/barman/pgserver11/base/20190215T153350
Backup start at LSN: 0/2E000060 (00000005000000000000002E, 00000060)
This is the first backup for server pgserver11
Starting backup copy via rsync/SSH for 20190215T153350
Copy done (time: 12 seconds)
This is the first backup for server pgserver11
WAL segments preceding the current backup have been found:
        00000005000000000000002D from server pgserver11 has been removed
Asking PostgreSQL server to finalize the backup.
Backup size: 74.1 MiB
Backup end at LSN: 0/2E000168 (00000005000000000000002E, 00000168)
Backup completed (start time: 2019-02-15 15:33:52.392144, elapsed time: 15 seconds)
Processing xlog segments from file archival for pgserver11
        00000005000000000000002E
        00000005000000000000002E.00000060.backup
-bash-4.2$

We can list the existing backups, on the barman server with the barman user:

-bash-4.2$ barman list-backup pgserver11
pgserver11 20190215T153350 - Fri Feb 15 15:34:08 2019 - Size: 74.1 MiB - WAL Size: 0 B
-bash-4.2$
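
For the details of a given backup (begin/end WAL positions, size, timeline), barman show-backup can be used; output not shown here:

-bash-4.2$ barman show-backup pgserver11 20190215T153350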
Streaming backup

Since version 2.0, barman supports streaming replication for backups. This method uses the native pg_basebackup.
1- Streaming-only backup
This picture from the barman documentation may help:
[Figure stream_1: streaming-only backup, from the barman documentation]
As we can see, in this case the backups are done via streaming, and the WAL files also use the streaming protocol.
2- WAL archiving and WAL streaming
Once again, the following picture may help:
[Figure stream_2: streaming backup with WAL archiving and WAL streaming, from the barman documentation]
In this case we configure standard archiving as well, to implement a more robust architecture.

For example, to implement a barman configuration with streaming backups, WAL streaming and WAL archiving, /etc/barman.conf should look like this:

[postgres@pgservertools bin]$ cat /etc/barman.conf | grep -v  ^\;

[barman]
barman_user = barman
configuration_files_directory = /etc/barman.d
barman_home = /var/lib/barman
log_file = /var/log/barman/barman.log
log_level = INFO
compression = gzip
retention_policy = REDUNDANCY 2

[pgserver11]
description = "Main DB Server"
ssh_command = ssh postgres@dbi-pg-essentials
streaming_conninfo=host=192.168.22.101 user=postgres
conninfo=host=192.168.22.101 user=postgres
backup_method = postgres
streaming_archiver = on
archiver = on
slot_name=barman
path_prefix=/usr/pgsql-11/bin/

and the postgresql.conf:

[postgres@dbi-pg-essentials PG1]$ grep -E  "archive_mode|wal_level|archive_command" postgresql.conf
wal_level = replica                    # minimal, replica, or logical
archive_mode = on               # enables archiving; off, on, or always
archive_command = 'barman-wal-archive 192.168.22.104 pgserver11 %p'             # command to use to archive a logfile segment
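
Because this configuration references slot_name=barman, the physical replication slot has to exist on the PostgreSQL server before WAL streaming can use it; barman can create it itself (an extra step not shown in the original run):

-bash-4.2$ barman receive-wal --create-slot pgserver11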

So the check should not return any errors:

-bash-4.2$ barman check pgserver11
Server pgserver11:
        PostgreSQL: OK
        is_superuser: OK
        PostgreSQL streaming: OK
        wal_level: OK
        replication slot: OK
        directories: OK
        retention policy settings: OK
        backup maximum age: OK (no last_backup_maximum_age provided)
        compression settings: OK
        failed backups: OK (there are 0 failed backups)
        minimum redundancy requirements: OK (have 1 backups, expected at least 0)
        pg_basebackup: OK
        pg_basebackup compatible: OK
        pg_basebackup supports tablespaces mapping: OK
        archive_mode: OK
        archive_command: OK
        continuous archiving: OK
        pg_receivexlog: OK
        pg_receivexlog compatible: OK
        receive-wal running: OK
        archiver errors: OK
-bash-4.2$

And when we launch a backup, we can see that barman uses pg_basebackup:

-bash-4.2$ barman backup pgserver11
Starting backup using postgres method for server pgserver11 in /var/lib/barman/pgserver11/base/20190215T160757
Backup start at LSN: 0/2F0001A8 (00000005000000000000002F, 000001A8)
Starting backup copy via pg_basebackup for 20190215T160757
Copy done (time: 11 seconds)
Finalising the backup.
Backup size: 73.9 MiB
Backup end at LSN: 0/31000060 (000000050000000000000031, 00000060)
Backup completed (start time: 2019-02-15 16:07:57.919595, elapsed time: 11 seconds)
Processing xlog segments from file archival for pgserver11
        000000050000000000000030
        000000050000000000000031
Processing xlog segments from streaming for pgserver11
        000000050000000000000030
-bash-4.2$

The available backups are now:

-bash-4.2$ barman list-backup pgserver11
pgserver11 20190215T160757 - Fri Feb 15 16:08:09 2019 - Size: 73.9 MiB - WAL Size: 0 B
pgserver11 20190215T153350 - Fri Feb 15 15:34:08 2019 - Size: 74.1 MiB - WAL Size: 48.3 KiB

To restore with barman, we use the recover command. For example, the following command will restore the backup 20190215T160757 on the server dbi-pg-essentials_3, in the directory /u02/pgdata/PGRESTORE:

-bash-4.2$ barman recover --remote-ssh-command "ssh postgres@dbi-pg-essentials_3" pgserver11 20190215T160757 /u02/pgdata/PGRESTORE
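
barman recover also accepts point-in-time options; for example (the timestamp is only illustrative), to stop the recovery at a given time instead of the end of the backup:

-bash-4.2$ barman recover --remote-ssh-command "ssh postgres@dbi-pg-essentials_3" \
           --target-time "2019-02-15 16:30:00" \
           pgserver11 20190215T160757 /u02/pgdata/PGRESTORE

In both cases barman only copies the files; the PostgreSQL instance still has to be started manually on the target server.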
Conclusion

In this blog I have tried to explain different scenarios for using barman: the rsync method and the streaming method for backups. A choice between them must be made before starting the setup. You can check the documentation for more information.

Ref: http://docs.pgbarman.org/

The post PostgreSQL : barman rsync method vs streaming method appeared first on Blog dbi services.

New search function for Oracle database features

Fri, 2019-02-15 10:25

Oracle released a new way to search for database features, and it is …

… really great.
Try out the cool new search application for Oracle database features.

What is special about it? The new site gives you a very smart overview of the database features and the versions in which they are available (at least from 11 to 19).

As an example, when you search for “compression” you get a list of all the compression-related features and the versions in which they are available.

With a click on the feature you are interested in, you can see a short description and jump directly into the documentation.

Oh, and if somebody from Oracle is reading this:
please also add the license information for all the features. Thank you :-)

Playing around with the new site, I already found some features I did not know existed.
Or did you know that there is an ASM cloning feature starting with Oracle 18c and Flex Disk Groups?

Have a nice weekend.

P.S.
Thanks to Thomas Teske from Oracle, who showed me this feature!

The post New search function for Oracle database features appeared first on Blog dbi services.
