Feed aggregator

[Video] 7 Things Every Oracle Apps DBA or Architect Must know for Cloud

Online Apps DBA - Tue, 2019-09-10 06:39

7 Things Every Oracle Apps DBA or Architect Must know in order to Build, Manage & Migrate EBS (R12) on Oracle’s Gen 2 Cloud, that is, Oracle Cloud Infrastructure (OCI). These 7 things include: ✔ Deployment Options On Oracle Cloud ✔ The architecture of EBS (R12) on OCI ✔ Cloud Tools i.e. EBS Cloud Manager, Cloud […]

The post [Video] 7 Things Every Oracle Apps DBA or Architect Must know for Cloud appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Introducing Accelerated Database Recovery with SQL Server 2019

Yann Neuhaus - Tue, 2019-09-10 06:01

SQL Server 2019 RC1 was released a few weeks ago, and it is time to start blogging about my favorite core engine features that will ship with the next version of SQL Server. Things should not change much with the RTM, so let’s introduce accelerated database recovery (aka ADR), which is mainly designed to solve an annoying issue that most SQL Server DBAs have probably faced at least once: long running transactions that impact the overall recovery time. As a reminder, with current versions of SQL Server, database recovery time is tied to the largest transaction at the moment of the crash. This is even more true in highly critical environments, where it may have a huge impact on service or application availability, and ADR is definitely a feature that may help.

Image from Microsoft documentation

In order to allow a very fast rollback and recovery process, the SQL Server team completely redesigned the database engine recovery process, and the interesting point is that they introduced row-versioning to achieve it. Row-versioning has existed since SQL Server 2005 through the RCSI and SI isolation levels, and in my opinion it is good news that such capabilities are finally being extended to address long recovery times.

Anyway, I performed some testing to get an idea of what the benefit of ADR could be, as well as its impact on the workload. Firstly, I performed a recovery test without ADR: after initiating a long running transaction, I simply crashed my SQL Server instance. I used an AdventureWorks database with the dbo.bigTransactionHistory table, which is big enough (I think) to get a relevant result.

ADR is activated per database, meaning that row-versioning is also managed locally per database. This allows better workload isolation compared to using the global tempdb version store as in previous SQL Server versions.

USE AdventureWorks_dbi;

-- First test: ADR disabled, database compatibility level set to 150 (SQL Server 2019)
ALTER DATABASE AdventureWorks_dbi SET
    ACCELERATED_DATABASE_RECOVERY = OFF;

ALTER DATABASE AdventureWorks_dbi SET
    COMPATIBILITY_LEVEL = 150;
GO

 

The dbo.bigTransactionHistory table has only one clustered primary key …

EXEC sp_helpindex 'dbo.bigTransactionHistory';
GO

 

… with 158,272,243 rows and about 2GB of data.


 

I simulated a long running transaction with the following update query, which touches every row of the dbo.bigTransactionHistory table, in order to get a relevant impact on the recovery time.

BEGIN TRAN;

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO

 

The related transaction wrote a lot of records into the transaction log, as shown below:

SELECT 
	DB_NAME(database_id) AS [db_name],
	total_log_size_in_bytes / 1024 / 1024 AS size_MB,
	used_log_space_in_percent AS [used_%]
FROM sys.dm_db_log_space_usage;
GO

 

The sys.dm_tran_* and sys.dm_exec_* DMVs may be helpful to dig into the transaction details, including the transaction start time and the log space used in the transaction log:

SELECT 
   GETDATE() AS [Current Time],
   [des].[login_name] AS [Login Name],
   DB_NAME ([dtdt].database_id) AS [Database Name],
   [dtdt].[database_transaction_begin_time] AS [Transaction Begin Time],
   [dtdt].[database_transaction_log_bytes_used] / 1024 / 1024 AS [Log Used MB],
   [dtdt].[database_transaction_log_bytes_reserved] / 1024 / 1024 AS [Log Reserved MB],
   SUBSTRING([dest].text, [der].statement_start_offset/2 + 1,(CASE WHEN [der].statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max),[dest].text)) * 2 ELSE [der].statement_end_offset END - [der].statement_start_offset)/2) as [Query Text]
FROM 
   sys.dm_tran_database_transactions [dtdt]
   INNER JOIN sys.dm_tran_session_transactions [dtst] ON  [dtst].[transaction_id] = [dtdt].[transaction_id]
   INNER JOIN sys.dm_exec_sessions [des] ON  [des].[session_id] = [dtst].[session_id]
   INNER JOIN sys.dm_exec_connections [dec] ON   [dec].[session_id] = [dtst].[session_id]
   LEFT OUTER JOIN sys.dm_exec_requests [der] ON [der].[session_id] = [dtst].[session_id]
   OUTER APPLY sys.dm_exec_sql_text ([der].[sql_handle]) AS [dest]
GO

 

The restart of my SQL Server instance kicked in the AdventureWorks_dbi database recovery process. It took about 6 minutes in my case:

EXEC sp_readerrorlog 0, 1, N'AdventureWorks_dbi'

 

Digging further into the SQL Server error log, I noticed that phase 2 (redo) and phase 3 (undo) of the recovery process took most of the time (as expected).

However, I then performed the same test with ADR enabled for the AdventureWorks_dbi database …

USE AdventureWorks_dbi;

ALTER DATABASE AdventureWorks_dbi SET
    ACCELERATED_DATABASE_RECOVERY = ON;

 

… and dug again into the SQL Server error log:

Well, the output above is pretty different, but clear and irrevocable: there is a tremendous improvement in the recovery time here. The SQL Server error log indicates the redo phase took 0ms and the undo phase 119ms. I also tested different variations in terms of long running transactions and log generated in the transaction log (4.5GB, 9.1GB and 21GB), without and with ADR. With the latter, database recovery remained fast irrespective of the transaction log usage, as shown below:

But there is no free lunch when enabling ADR, because it is a row-versioning based process which may have an impact on the workload. I was curious to compare the performance of my update queries between scenarios: no row-versioning (default), row-versioning with RCSI only, ADR only, and finally both RCSI and ADR enabled. I performed all my tests on a quad-core virtual machine (Intel® Core™ i7-6600U CPU @ 2.6GHz) with 8GB of RAM. SQL Server memory was capped at 6GB. The underlying storage for the SQL Server data files is a Samsung 850 EVO 1TB SSD.

Here is the first test I performed. It is the same update as before, which touches every row of the dbo.bigTransactionHistory table:

BEGIN TRAN;

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO

 

And here are the results for the different scenarios:

Please don’t focus too much on the values here, because they will depend on your context, but the results answer the following question: does the activation of ADR have an impact on the workload and, if so, is it of the same order of magnitude as RCSI / SI? The results are self-explanatory.

Then I decided to continue my tests by increasing the impact of the long running transaction with additional updates on the same data, in order to stress the version store a little bit.

BEGIN TRAN;

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO

 

Here are the new results:

This time ADR seems to have a bigger impact than RCSI in my case. Regardless of the exact values of this test, the key point is that we have to be aware that enabling ADR will have an impact on the workload.

After performing this bunch of tests, it’s time to get the big picture of the ADR design, which includes several components per database: a persisted version store (PVS), a logical revert, an sLog and a cleaner process. In this blog post I would like to focus on the PVS component, which acts as the persistent version store for the database concerned. In other words, with ADR, tempdb is no longer used to store row versions. The interesting point is that, according to my tests, RCSI / SI row-versioning continues to be handled through the PVS if ADR is enabled.

A new column named is_accelerated_database_recovery_on has been added to the sys.databases system view. In my case both RCSI and ADR are enabled in the AdventureWorks_dbi database.

SELECT 
	name AS [database_name],
	is_read_committed_snapshot_on,
	is_accelerated_database_recovery_on
FROM sys.databases
WHERE database_id = DB_ID()

 

The sys.dm_tran_version_store_space_usage DMV displays the total space in tempdb used by the version store for each database, whereas the new sys.dm_tran_persistent_version_store_stats DMV provides information related to the new PVS created when ADR is activated.

BEGIN TRAN;

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO

SELECT 
	DB_NAME(database_id) AS [db_name],
	oldest_active_transaction_id,
	persistent_version_store_size_kb / 1024 AS pvs_MB
FROM sys.dm_tran_persistent_version_store_stats;
GO

SELECT 
	database_id,
	reserved_page_count / 128 AS reserved_MB
FROM sys.dm_tran_version_store_space_usage;
GO

 

After running my update query, I noticed that the PVS in the AdventureWorks_dbi database was used rather than the version store in tempdb.

So, getting rid of the version store in tempdb seems to be a good idea, and probably more scalable per database, but according to my tests (and without drawing any conclusions yet) it may lead to performance considerations … let’s see what happens in the future …

In addition, from a storage perspective, because SQL Server no longer uses tempdb as a version store, my curiosity led me to see what happens behind the scenes and how the PVS interacts with the data pages where row-versioning comes into play. Let’s do some experiments.

Let’s create the dbo.bigTransactionHistory_row_version table from the dbo.bigTransactionHistory table, with less data:

USE AdventureWorks_dbi;
GO

DROP TABLE IF EXISTS [dbo].[bigTransactionHistory_row_version];

SELECT TOP 1 *
INTO [dbo].[bigTransactionHistory_row_version]
FROM [dbo].[bigTransactionHistory]

 

Now, let’s have a look at the data page that belongs to my dbo.bigTransactionHistory_row_version table, with the page ID 499960 in my case:

DBCC TRACEON (3604, -1);
DBCC PAGE (AdventureWorks_dbi, 1, 499960, 3);

 

Versioning info exists in the header, but obviously the version pointer is set to NULL because there is no additional row version to maintain in this case; I have only just inserted one row.

Let’s update the only row that exists in the table as follows:

BEGIN TRAN;
UPDATE [dbo].[bigTransactionHistory_row_version]
SET Quantity = Quantity + 1

 

The version pointer has been updated (though I’m not sure the information is consistent here, or at least the values displayed are weird). Another interesting point is that there is more information than the initial 14 bytes we would expect to keep track of the pointer: there are also 21 extra bytes at the end of the row, as shown above. On the other hand, the sys.dm_db_index_physical_stats() DMF has been updated to reflect the PVS information with new inrow_*, total_inrow_* and offrow_* columns, which may help to understand some of the PVS internals.

SELECT 
	index_id,
	index_level,
	page_count,
	record_count,
	version_record_count,
	inrow_version_record_count,
	inrow_diff_version_record_count,
	total_inrow_version_payload_size_in_bytes,
	offrow_regular_version_record_count,
	offrow_long_term_version_record_count
FROM sys.dm_db_index_physical_stats(
	DB_ID(), OBJECT_ID('dbo.bigTransactionHistory_row_version'), 
	NULL, 
	NULL, 
	'DETAILED'
)

 

Indeed, referring to the above output and correlating it with the results I found inside the data page, I would assume the 21 extra bytes stored in the row reflect some kind of diff of the previous row version (something I still need to dig into); look at the inrow_diff_version_record_count and total_inrow_version_payload_size_in_bytes columns.

Furthermore, if I perform the update operation again on the same data, the storage strategy seems to switch to an off-row mode, if I refer again to the sys.dm_db_index_physical_stats() DMF output:

Let’s go back to the DBCC PAGE output to confirm this assumption:

Indeed, the extra payload has disappeared, and only the 14-byte pointer remains, updated accordingly.

Finally, if I perform multiple updates of the same row, SQL Server should keep the off-row storage and should create inside it a chain of version pointers and their corresponding values.

BEGIN TRAN;

UPDATE [dbo].[bigTransactionHistory_row_version]
SET Quantity = Quantity + 1
GO 100000

 

My assumption is verified by taking a look at the previous DMVs. The persistent version store size has increased from ~16MB to ~32MB and we still have 1 version record in off-row mode meaning there is still one version pointer that references the off-row mode structure for my record.

Finally, let’s introduce the cleaner component. As with the tempdb version store, cleanup of old row versions is achieved by an asynchronous process that removes page versions that are no longer needed. It wakes up periodically, but we can force it by executing the sp_persistent_version_cleanup stored procedure.

Referring to one of my first tests, the PVS size is about 8GB.

BEGIN TRAN;

UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO


UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
UPDATE dbo.bigTransactionHistory
SET Quantity = Quantity + 1;
GO
SELECT 
	DB_NAME(database_id) AS [db_name],
	oldest_active_transaction_id,
	persistent_version_store_size_kb / 1024 AS pvs_MB
FROM sys.dm_tran_persistent_version_store_stats;
GO
-- Running PVS cleanup process
EXEC sp_persistent_version_cleanup

 

According to my tests, the cleanup task took around 6 minutes for the entire PVS, but it was not a blocking process at all, as you may see below. As an ultimate test, I executed in parallel an update query that touched every row of the same table, and it was not blocked by the cleaner, as shown below:

This is a process I need to investigate further. Other posts are coming as well, covering the other ADR components.

See you!

The post Introducing Accelerated Database Recovery with SQL Server 2019 appeared first on Blog dbi services.

Azure Advisor And Fixing Errors

Jeff Moss - Mon, 2019-09-09 17:23

Azure can be configured to send you advisor reports detailing things that are not quite right in your environment. The advisor is not necessarily always right but it’s sensible to review the outputs periodically, even if they relate to non production environments.

A few issues popped up on an advisor report during my recent travels, and although you can just use the entries in the report on the portal to target the offending resources, I thought it might be helpful to write some PowerShell to identify the offending resources as an alternative.

Secure transfer to storage accounts should be enabled

This error shows up similar to this on the report:

Fairly obvious what this means really – the storage account has a setting which is currently set to allow insecure transfers (via http rather than https) – an example looks like this under the Configuration blade of the Storage Account:

The advisor highlights this, and the solution is simply to set the "Secure transfer required" toggle to Enabled and press Save; a PowerShell alternative is sketched after the output below.

To identify all the storage accounts which have this issue use the following:

Get-AzStorageAccount | where {$_.EnableHttpsTrafficOnly -eq $False}

This gives output similar to the following (redacted):

PS Azure:> Get-AzStorageAccount | where {$_.EnableHttpsTrafficOnly -eq $False}

StorageAccountName ResourceGroupName Location    SkuName      Kind    AccessTier CreationTime       ProvisioningState EnableHttpsTrafficOnly
------------------ ----------------- --------    -------      ----    ---------- ------------       ----------------- ----------------------
XXXXXXXXXXXXXXXXXX AAAAAAAAAAAAAAA   northeurope Standard_LRS Storage            9/6/19 9:51:53 PM  Succeeded         False
YYYYYYYYYYYYYYYYYY AAAAAAAAAAAAAAA   northeurope Standard_LRS Storage            6/26/19 3:29:38 PM Succeeded         False
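
If you would rather script the fix than click through the portal, the same filter can drive the change. The following is only a minimal sketch, assuming the Az.Storage module is loaded and that every account returned by the filter really should enforce HTTPS:

# Assumption: every account returned by the filter should require HTTPS-only traffic
$insecureAccounts = Get-AzStorageAccount | Where-Object {$_.EnableHttpsTrafficOnly -eq $False}
foreach ($account in $insecureAccounts)
{
    # Equivalent of setting the "Secure transfer required" toggle to Enabled in the portal
    Set-AzStorageAccount -ResourceGroupName $account.ResourceGroupName `
                         -Name $account.StorageAccountName `
                         -EnableHttpsTrafficOnly $true
}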
An Azure Active Directory administrator should be provisioned for SQL servers

This one appears like the following in the advisor output:

As a long term Oracle guy I’m no SQL Server expert so I can’t quite see why this is an issue if you have a SQL Server authenticated administrative user active – no doubt a friendly SQL DBA will chime in and explain.

To fix this, navigate to the SQL Server in question, open the Active Directory admin blade, select "Set admin", choose a user from the Active Directory and press Save; a PowerShell alternative is sketched after the output below.

To find all SQL Servers affected by this I wrote the following Powershell:

$sqlservers = Get-AzResource -ResourceType Microsoft.Sql/servers
foreach ($sqlserver in $sqlservers)
{
    $sqlserver.Name
    $ADAdmin = Get-AzureRmSqlServerActiveDirectoryAdministrator -ServerName $sqlserver.Name -ResourceGroupName $sqlserver.ResourceGroupName
    "AD Administrator:" + $ADAdmin.DisplayName + "/" + $ADAdmin.ObjectId
}

This returns output similar to the following (redacted):

mysqlserver1
AD Administrator:clark.kent@dailyplanet.com/fe93c742-d83c-2b4c-bc38-83bc34def38c
mysqlserver2
AD Administrator:/
mysqlserver3
AD Administrator:clark.kent@dailyplanet.com/fe93c742-d83c-2b4c-bc38-83bc34def38c
mysqlserver4
AD Administrator:clark.kent@dailyplanet.com/fe93c742-d83c-2b4c-bc38-83bc34def38c

From the above you can see that mysqlserver2 has no AD Administrator and will be showing up on the advisor report.
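
If you prefer to set the administrator from PowerShell rather than through the blade, something along the following lines should work. It is only a sketch: the server name, resource group and user are placeholders, and it assumes the Az.Sql module is available:

# Assumption: "mysqlserver2", "myResourceGroup" and the user are placeholders for your own values
Set-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName "myResourceGroup" `
                                            -ServerName "mysqlserver2" `
                                            -DisplayName "clark.kent@dailyplanet.com"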

Enable virtual machine backup to protect your data from corruption and accidental deletion

This one appears like the following in the advisor output:


To fix this, navigate to the Backup blade on the VM resource in question and set the appropriate settings to enable the backup; a PowerShell alternative is sketched after the output below.

To identify VMs where this issue is evident use the following Powershell:

$VMs = Get-AzVM
foreach ($VM in $VMs)
{
    "VM: " + $VM.Name
    $RecoveryServicesVaults = Get-AzRecoveryServicesVault
    foreach ($RecoveryServicesVault in $RecoveryServicesVaults)
    {
        Get-AzRecoveryServicesBackupContainer -VaultID $RecoveryServicesVault.ID -ContainerType "AzureVM" -Status "Registered" -FriendlyName $VM.Name
    }
}

This gives results similar to the following, allowing you to see VMs where no backup is enabled:

VM: myVM1

FriendlyName                   ResourceGroupName    Status               ContainerType
------------                   -----------------    ------               -------------
myVM1                          myResourceGroup      Registered           AzureVM
myVM1                          myResourceGroup      Registered           AzureVM
myVM1                          myResourceGroup      Registered           AzureVM
VM: myVM2
VM: myVM3
myVM3                          myResourceGroup      Registered           AzureVM
myVM3                          myResourceGroup      Registered           AzureVM
myVM3                          myResourceGroup      Registered           AzureVM

What you can see from the above is that myVM1 and myVM3 both have registered backups, unlike myVM2, which has none and therefore needs backup enabling.
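
To enable the backup from PowerShell instead of the portal, a sketch like the following should do it. It assumes a Recovery Services vault and a suitable policy (here the vault's DefaultPolicy) already exist; the vault, resource group and VM names are placeholders:

# Assumption: the vault "myVault" in "myResourceGroup" already exists and its DefaultPolicy is acceptable
$vault  = Get-AzRecoveryServicesVault -ResourceGroupName "myResourceGroup" -Name "myVault"
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy" -VaultId $vault.ID
# Register myVM2 (the unprotected VM from the output above) for backup
Enable-AzRecoveryServicesBackupProtection -Policy $policy `
                                          -Name "myVM2" `
                                          -ResourceGroupName "myResourceGroup" `
                                          -VaultId $vault.ID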

POUG Conference 2019

Yann Neuhaus - Mon, 2019-09-09 03:47

POUG (Pint with Oracle users group) organized its annual conference on 6-7 September in Wroclaw, at the New Horizons Cinema.

My abstract about “MySQL 8.0 Community: Ready for GDPR?” was accepted, so I had the opportunity to be there.

Sessions

My talk was planned for the first day. The new MySQL 8.0 version introduces several security improvements, and these are the main points I discussed:
– Encryption of Redo/Undo and Binary/Relay log files, which complements the existing datafile encryption
– Some password features such as:
* Password Reuse Policy, to prevent a user from always reusing the same passwords
* Password Verification Policy, to require the current password before changing it
* The validate_password component (which replaces the old validate_password plugin), to define a secure password policy through some system variables and 3 different levels
– The new caching_sha2_password plugin, which lets you manage authentication in a faster and more secure way
– SQL Roles, to simplify user access rights management

Here are some interesting sessions that I attended.

Keep them out of the database!

How do we prevent unwanted connections from accessing our database? Flora Barrièle and Martin Berger explained some possibilities.
The following methods have limitations:
– Filtering through a firewall, because we have to involve the network team
– Using a dedicated listener for each instance, because it’s difficult to manage with a large number of databases and environments
To solve these issues we can use instead:
– Connection Manager (a sort of listener with, in addition, a set of rules to define the source, service, activity and destination)
– Access Control Lists (ACL, a new functionality of Oracle 12.2 which is used to protect PDBs and associated services)
– Logon triggers
– Audit and reports
In conclusion, different solutions exist. First of all we have to know our ecosystem and our environments before deciding to put something in place. Then we should make it as simple as possible, test it and check what is best for our specific situation.

The MacGyver approach

Lothar Flatz explained an approach to analyze what’s wrong with a query and how to fix it when we don’t have a lot of time.
The first step is to optimize, and for that we have to know how the optimizer works. Then we can enforce new plans (inserting hints, changing the statement text, …) and look for the outline.
Sometimes it’s not easy. Lothar’s session ended with this quote: “Performance optimization is not magic: it’s based on knowledge and facts”.

From transportable tablespaces to pluggable databases

Franck Pachot showed different ways to transport data in different Oracle versions:
– Simple logical move through export/import -> slow
– Logical move including direct-path with Data Pump export/import -> flexible, but slow
– Physical transport with RMAN duplicate -> fast, but not cross-versions
– Transportable Tablespaces which provides a mix between logical move (for metadata) and physical transport (for application/user data) -> fast and flexible (cross-versions)
– Physical transport through PDB clone -> fast, efficient, ideal in a multi-tenant environment
– Full Transportable Tablespaces to move user tablespaces and other objects such as roles, users, … -> flexible, ideal to move from 11gR2 to 12c and from non-CDB to multi-tenant, no need to run scripts on the dictionary

Data Guard new features

The Oracle MAA (Maximum Availability Architecture) describes 4 HA reference architectures in order to align Oracle capabilities with customer Service Level requirements. Oracle Data Guard can match the Silver, Gold and Platinum reference architectures.
Pieter Van Puymbroeck (Oracle Product Manager for Data Guard) talked about the following new 19c features:
– Flashback operations are propagated automatically to the standby (requirements: configure standby for flashback database and in MOUNT state first, set DB_FLASHBACK_RETENTION_TARGET)
– Restore points are automatically propagated from the primary to the standby
– On the Active Data Guard standby, the database buffer cache state is preserved during a role change
– Multi-Instance Redo Apply (parallel redo log apply in RAC environments)
– Observe-Only mode to test fast-start failover without having any impact on the production database
– New commands such as “show configuration lag;” to check all members, and to export/import the Broker configuration

Discussion Panel

In the form of a discussion animated by Kamil Stawiarski, and with funny but serious exchanges with the audience, some Oracle Product Managers and other Oracle specialists talked about one of the most topical subjects today: Cloud vs on-prem. Automation, Exadata Cloud at Customer, Oracle documentation and log files, and much more…

Networking moments

Lots of networking moments during this conference: a game in the city center, a speakers’ dinner, lunchtime at the conference, and the party in the Grey Music Club.

As usual it was a real pleasure to share knowledge and meet old friends and new faces.
Thanks to Luiza, Kamil and the ORA-600 Database Whisperers for their warm welcome and for the perfect organization of the event.

A suggestion? Don’t miss it next year!

The post POUG Conference 2019 appeared first on Blog dbi services.

Oracle GoldenGate Microservices Upgrade – 12.3.0.x/18.1.0.x to 19.1.0.0.x

DBASolved - Sun, 2019-09-08 16:45

Oracle GoldenGate Microservices has been out for a few years now. Many customers across many different industries have adopted the architecture and use it in many different use cases and architectures. But what do you do when you want to upgrade your Oracle GoldenGate Microservices Architecture?

In a previous post, Upgrading GoldenGate Microservices Architecture – GUI Based (January 2018), I wrote about how to upgrade Oracle GoldenGate Microservices using the GUI, or HTML5, approach. Today, many of the steps are exactly the same as they were a year ago. The good news is that Oracle has documented the process a bit more clearly in the latest upgrade document (here).

So why a new post on upgrading the architecture? Over the last few days, I’ve been looking into a problem that has been reported by customers. This problem affects the upgrade process, not so much in how to do the upgrade but in what happens when the upgrade is done.

In a nutshell, the upgrade process for Oracle GoldenGate Microservices is done in these few steps:

1. Download the latest version of Oracle GoldenGate Microservices -> In this case: 19.1.0.0.1 (here); however, this approach will work with 19.1.0.0.2 as well.
2. Upload the software, if needed, to a staging area on the server where Oracle GoldenGate Microservices is running. Ideally, you should be upgrading from OGG 12c (12.3.x) or 18c (18.1.x).
3. Unzip the downloaded zip file to a temporary folder in the staging area
4. Execute runInstaller from the directory in the staging area. This will start the Oracle Universal Installer for Oracle GoldenGate.
5. Within the installation process, provide the Oracle GoldenGate Home for the Software Location.
6. Click Install to begin the installation into a New Oracle GoldenGate Home.

Note: At this point, you should have two Oracle GoldenGate Microservices Homes. One for the older version and one for the 19c version.

7. Login to the ServiceManager
8. Under Deployments -> select ServiceManager
9. Under Deployment Details -> select the pencil icon. This will open the edit field for the GoldenGate Home.
10. Edit the GoldenGate Home -> change to the new Oracle GoldenGate Microservices Home then click Apply.
This will force the ServiceManager to reboot.

At this point, you may be asking yourself, I’ve done everything but the ServiceManager has not come back up. What is going on?

If you have configured the ServiceManager as a daemon, you can try to start the ServiceManager by using the systemctl commands.

systemctl start OracleGoldenGate

 

This command just returns without any useful output. In order to find out whether it started successfully or not, check the status of the service.

systemctl status OracleGoldenGate
OracleGoldenGate.service - Oracle GoldenGate Service Manager
   Loaded: loaded (/etc/systemd/system/OracleGoldenGate.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Sun 2019-09-08 21:27:59 UTC; 2s ago
  Process: 3430 ExecStart=/opt/app/oracle/product/12.3.0/oggcore_1/bin/ServiceManager (code=killed, signal=SEGV)
 Main PID: 3430 (code=killed, signal=SEGV)


Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: Unit OracleGoldenGate.service entered failed state.
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: OracleGoldenGate.service failed.
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: OracleGoldenGate.service holdoff time over, scheduling restart.
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: Stopped Oracle GoldenGate Service Manager.
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: start request repeated too quickly for OracleGoldenGate.service
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: Failed to start Oracle GoldenGate Service Manager.
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: Unit OracleGoldenGate.service entered failed state.
Sep 08 21:27:59 OGG12c219cUpgrade systemd[1]: OracleGoldenGate.service failed.

 

As you can tell the ServiceManager has failed to start. Why is this?

If you look at the output of the last systemctl status command, you see that the service is still referencing the old Oracle GoldenGate Microservices home.

Now the question becomes, how do I fix this?

The solution here is simple. Go to the deployment home for the ServiceManager and look under the bin directory. You will see the registerServiceManager.sh script. Edit this script and change the variable OGG_HOME to match the new Oracle GoldenGate Home for 19c.

$ cd /opt/app/oracle/gg_deployments/ServiceManager/bin
$ ls
registerServiceManager.sh
$ vi registerServiceManager.sh


#!/bin/bash

# Check if this script is being run as root user
if [[ $EUID -ne 0 ]]; then
  echo "Error: This script must be run as root."
  exit
fi


# OGG Software Home location
OGG_HOME="/opt/app/oracle/product/12.3.0/oggcore_1"   # <-- Change to reflect the new OGG_HOME

With the registerServiceManager.sh file edited, go back and re-run the file as the root user.

# cd /opt/app/oracle/gg_deployments/ServiceManager/bin
# ./registerServiceManager.sh
Copyright (c) 2017, Oracle and/or its affiliates. All rights reserved.
----------------------------------------------------
     Oracle GoldenGate Install As Service Script
----------------------------------------------------
OGG_HOME=/opt/app/oracle/product/19.1.0/oggcore_1
OGG_CONF_HOME=/opt/app/oracle/gg_deployments/ServiceManager/etc/conf
OGG_VAR_HOME=/opt/app/oracle/gg_deployments/ServiceManager/var
OGG_USER=oracle
Running OracleGoldenGateInstall.sh…

With the service now updated, you can start and check the service.

# systemctl start OracleGoldenGate
# systemctl status OracleGoldenGate
OracleGoldenGate.service - Oracle GoldenGate Service Manager
   Loaded: loaded (/etc/systemd/system/OracleGoldenGate.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-09-08 21:39:58 UTC; 2s ago
 Main PID: 21946 (ServiceManager)
    Tasks: 13
   CGroup: /system.slice/OracleGoldenGate.service
           └─21946 /opt/app/oracle/product/19.1.0/oggcore_1/bin/ServiceManager

Sep 08 21:39:58 OGG12c219cUpgrade systemd[1]: Started Oracle GoldenGate Service Manager.
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: 2019-09-08T21:39:58.509+0000 INFO | Configuring user authorization secure store path as '/opt/app/oracle/gg_deployments/Serv...ureStore/'.
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: 2019-09-08T21:39:58.510+0000 INFO | Configuring user authorization as ENABLED.
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: Oracle GoldenGate Service Manager for Oracle
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: Version 19.1.0.0.0 OGGCORE_19.1.0.0.0_PLATFORMS_190508.1447
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: Linux, x64, 64bit (optimized) on May  8 2019 18:17:50
Sep 08 21:39:58 OGG12c219cUpgrade ServiceManager[21946]: Operating system character set identified as UTF-8.
Hint: Some lines were ellipsized, use -l to show in full.


At this point, you can now log back into the ServiceManager and confirm that the upgrade was done successfully.

Note: If you have your ServiceManager configured to be manually started and stopped, then you will need to edit the startSM.sh and stopSM.sh files. The OGG_HOME has to be changed in these files as well.

Enjoy!!!

Categories: DBA Blogs

Finding databases on each SQL Server using Powershell

Jeff Moss - Sat, 2019-09-07 06:07

This week a client had a requirement to list out the SQL Servers in their Azure cloud environment and the databases installed on those SQL Servers. The reason for the requirement was to find SQL Servers that no longer had any databases on them, so they could be considered for removal.

I wrote the following PowerShell. Essentially, it gathers a list of SQL Server resources, loops through them, and counts and itemises their databases, not including the master database since that’s not relevant to the requirement:

$sqlservers = Get-AzResource -ResourceType Microsoft.Sql/servers
foreach ($sqlserver in $sqlservers)
 {
     $sqlserver.Name
     $databases = Get-AzResource -ResourceType Microsoft.Sql/servers/databases|Where-Object {$_.Name -notlike "master"}|Where-Object {$_.Name -like $sqlserver.Name + "/*"}
     "Database Count:" + $databases.Count
     "Databases…"
     ">>>" + $databases.Name
 }


Which returns the following type of output (amended for privacy):

mytestsqlserver1
Database Count:0
Databases...
>>>
mytestsqlserver2
Database Count:1
Databases...
>>>mytestsqlserver2/mydatabase1
mytestsqlserver3
Database Count:1
Databases...
>>>mytestsqlserver3/mydatabase2
mytestsqlserver4
Database Count:3
Databases...
>>>mytestsqlserver4/mydatabase3 mytestsqlserver4/mydatabase4 mytestsqlserver4/mydatabase5

Oracle Cloud: Sign up failed... [2]

Dietrich Schroff - Fri, 2019-09-06 14:36
After my failed registration for Oracle Cloud, I very quickly got an email from Oracle support with the following requirements:
So I tried once again with a Firefox "private" window - but this failed again.
The next idea was to use a completely freshly installed browser: so I tried with a fresh Google Chrome.
But the error still remained:
Let's hope Oracle support has another idea which will get me onto Oracle Cloud.

UPDATE:


There is a tiny "click here" link just above the blue button. I have to use this link with the verification code provided by Oracle support.
But then the error is:
I checked this with a VISA and a MASTERCARD. Neither of them worked...

UPDATE 2: see here how the problem was solved.

Oracle OpenWorld and Code One 2019

Tim Hall - Fri, 2019-09-06 02:40

It’s nearly time for the madness to start again. This will be my 14th trip to San Francisco for OpenWorld, and however many it is since Java One and Code One got wrapped up into this…

  • Flights booked : ✔
  • Hotel booked : ✔
  • ESTA approved : ✔
  • Irrational fear of flying and general anxiety : ✔
  • 80 lbs weight loss : ❌
  • Talk complete : ❌
  • Denial : ✔

At the moment the scheduled stuff looks like this.

Friday :

  • 03:00 UK time : Start the trip over to SF. I know I said I would never do this again, and I know what the consequences will be…
  • Evening SF time : Groundbreaker Ambassador Dinner

Saturday : Day : ACE Director Briefing

Sunday :

  • Day : Groundbreaker Ambassador Briefing
  • Evening : Oracle ACE Dinner

Tuesday :

Session ID: DEV1314
The Seven Deadly Sins of SQL
Date: 17th Sept 2019
Time: 11:30 – 12:15

Wednesday :

Session ID: DEV6013
Embracing Constant Technical Innovation in Our Daily Life
Date: 18th Sept 2019
Time: 16:00 – 16:45
Panel: Gustavo Gonzalez, Sven Bernhardt, Debra Lilley, Francisco Munoz Alvarez, Me

Thursday : Fly home.

Friday : Arrive home, have a post-conference breakdown and promise myself I’ll never do it again…

In addition to those I have to schedule in the following:

  • A shift on the Groundbreakers Hub, but I’m not sure what day or what demo yet. I’ll probably hang around there a lot anyway.
  • Meet a photographer to get some photos done. I’ve told them they’ve got to be tasteful and “only above the waist”.
  • Spend some time annoying everyone on the demo grounds. I know Kris and Jeff are desperate to see me. It’s the highlight of their year!
  • Stalk Wim Coekaerts, whilst maintaining an air of ambivalence, so as not to give the game away. Can anyone else hear Bette Midler singing “Wind Beneath My Wings”? No? Just me?

There’s a whole bunch of other stuff too, but I’ve not got through all my emails yet. Just looking at this is giving me the fear. So much for my year off conferences…

See you there!

Cheers

Tim…

Oracle OpenWorld and Code One 2019 was first posted on September 6, 2019 at 8:40 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Multi-Nodes Redis Cluster With Docker

Tugdual Grall - Thu, 2019-09-05 09:10
Read this article on my new blog. As part of my onboarding/training at Redis Labs I continue to play with the product, and I have decided today to install a local 3-node cluster of Redis Enterprise Server (RS) and show how easy it is to move from a single node/shard database to a multi-node, highly available one. Once your cluster is up & running, you will kill some containers to see how […]

September 27 Arizona Oracle User Group Meeting

Bobby Durrett's DBA Blog - Wed, 2019-09-04 10:30

The Arizona Oracle User Group (AZORA) is cranking up its meeting schedule again now that the blazing hot summer is starting to come to an end. Our next meeting is Friday, September 27, 2019 from 12:00 PM to 4:00 PM MST.

Here is the Meetup link: Meetup

Thank you to Republic Services for allowing us to meet in their fantastic training rooms.

Thanks also to OneNeck IT Solutions for sponsoring our lunch.

OneNeck’s Biju Thomas will speak about three highly relevant topics:

  • Oracle’s Autonomous Database — “What’s the Admin Role?”
  • Oracle Open World #OOW 19 Recap
  • Let’s Talk AI, ML, and DL

I am looking forward to learning something new about these areas of technology. We work in a constantly evolving IT landscape so learning about the latest trends can only help us in our careers. Plus, it should be interesting and fun.

I hope to see you there.

Bobby

Categories: DBA Blogs

Azure Active Directory (AAD)

Jeff Moss - Tue, 2019-09-03 16:57
Find the ID of an existing user:
PS Azure:> $azureaduser=$(az ad user list --filter "userPrincipalName eq 'Fred.Smith@acme.com'" --query [].objectId --output tsv)

Azure:/

PS Azure:> $azureaduser

a5cga394-f1da-4f2c-bc4b-931c10bad9e1

Show all users in AAD:

az ad user list --query [].userPrincipalName

London March 2020: “Oracle Indexing Internals and Best Practices” and “Oracle Performance Diagnostics and Tuning” Seminars !!

Richard Foote - Tue, 2019-09-03 06:44
It’s with great excitement that I announce I’ll finally be returning to London, UK in March 2020 to run both of my highly acclaimed seminars. The dates and registration links are as follows: 23-24 March 2020: “Oracle Indexing Internals and Best Practices” seminar – Tickets and Registration Link 25-26 March 2020: “Oracle Performance Diagnostics and […]
Categories: DBA Blogs

[Video] Oracle Cloud Infrastructure Architect Certification 1Z0-932 Complete Guide

Online Apps DBA - Tue, 2019-09-03 04:53

8 Week Roadmap & Exam Dumps for Oracle Cloud Infra Architect Certification 1Z0-932 In Part 1 of the OCI Architecture Certification 1Z0-932 series, We saw all the exam details. Now, in part 2, let’s take a look at how to prepare for this exam and ACE it at once. ✔What is the weightage of each […]

The post [Video] Oracle Cloud Infrastructure Architect Certification 1Z0-932 Complete Guide appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Getting Started With Redis Streams & Java

Tugdual Grall - Tue, 2019-09-03 04:12
Read this article on my new blog. As you may have seen, I joined Redis Labs a month ago; one of the first tasks as a new hire is to learn more about Redis. So I learned, and I am still learning. This is when I discovered Redis Streams. I am a big fan of streaming-based applications, so it is natural that I started with a small blog post explaining how to use Redis Streams and Java. Go to […]

Oracle Database 19c Automatic Indexing: Default Index Column Order Part I (Anyway Anyhow Anywhere)

Richard Foote - Mon, 2019-09-02 08:06
The next thing I was curious about regarding Automatic Indexing was in which order would Oracle by default order the columns within an index. This can be a crucial decision with respect to the effectiveness of the index (but then again, may not be so crucial as well). Certainly one would expect the index column […]
Categories: DBA Blogs

Tableau | Dashboard Design ::Revoke A50 Petition Data::

Rittman Mead Consulting - Mon, 2019-09-02 03:00

Dashboards are most powerful through visual simplicity. They’re designed to automatically keep track of a specific set of metrics and keep human beings updated. Visual overload is like a binary demon in analytics that many developers seem possessed by; but less is more.

For example, many qualified drivers know very little about their dashboard besides speed, revs, temperature and fuel gauge. When an additional dash warning light comes on, even if it is just the tyre pressure icon let alone engine diagnostics light, most people will just take their car to the garage. The most obvious metrics in a car are in regard to its operation; if you didn't know your speed while driving you'd feel pretty blind. The additional and not so obvious metrics (i.e. dash warning lights) are more likely to be picked up by the second type of person who will spend the most time with that car: its mechanic. It would be pointless to overload a regular driver with all the data the car can possibly output in one go; that would just intimidate them. That's not what you want a car to do to the driver and that's certainly not what any organisation would want their operatives to feel like while their “car” is moving.

In light of recent political events, the exact same can metaphorically be applied to the big red Brexit bus. Making sense of it all might be a stretch too far for this article. Still, with appropriate use of Tableau dashboard design it is possible to answer seemingly critical questions on the topic with publicly available data.



There's An Ongoing Question That Needs Answering?
Where did 6 million+ signatures really come from?


Back in the UK, the Brexit fiasco is definitely still ongoing. Just before the recent A50 extensions took place, a petition to revoke article 50 and remain in the EU attracted more than 6 million signatures, becoming the biggest and fastest growing ever in history and sparking right wing criticism over the origin of thousands of signatures, claiming that most came from overseas and discrediting its legitimacy. Government responded by rejecting the petition.

Thankfully the data is publicly available (https://petition.parliament.uk/petitions/241584.json) for us to use as an example of how a dashboard can be designed to settle such a question (potentially in real time too as more signatures come in).

Tableau can handle JSON data quite well and, to nobody’s surprise, we quickly discover that over 95% of signatures are coming from the UK.
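
If you want to sanity-check that figure against the raw feed outside Tableau, a few lines of script will do it. This is only a sketch, and it assumes the petition JSON exposes a data.attributes.signatures_by_country array with code and signature_count fields (and that the UK entry uses the country code GB):

# Assumption: the petition JSON structure is data.attributes.signatures_by_country
$petition  = Invoke-RestMethod -Uri "https://petition.parliament.uk/petitions/241584.json"
$byCountry = $petition.data.attributes.signatures_by_country
$total     = ($byCountry | Measure-Object -Property signature_count -Sum).Sum
$uk        = ($byCountry | Where-Object { $_.code -eq "GB" }).signature_count
# Share of country-attributed signatures originating from the UK
"UK share: {0:P2}" -f ($uk / $total)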

Now that we know what we’re dealing with, let’s focus the map on Britain and provide the additional countries’ data in a format that is easier to digest visually. As cool as it is to hover over the world map, there are simpler ways to take this in.

Because in this case we know more than 95% of signatures originate from the UK, the heatmap above is far more useful, showing us the signature count for each constituency at a glance. The hotter the shading, the higher the count.


Scales Might Need Calibration
Bar Chart All The Way


Humans of all levels read a bar chart well, and it’s perfect for what we need to know: how many signatures are coming from abroad altogether, and from which countries, in descending order.

With a margin so tiny, it’s trickier to get a visual that makes sense. A pie chart, for example, would hardly display the smaller slice containing all of the non-UK origin signatures. Even with a bar chart we struggle to see anything outside of the UK on a linear scale; but it is perfect when using logarithmic scales, which are definitely a must in this scenario.

And voila! The logarithmic scale allows the remaining counts to appear alongside the UK, even though France, the next country after the UK with most signatures, has a count below 50k. This means we can keep an eye on the outliers in more detail quite effortlessly. Not much looks out of place right now considering the number of expats Britain produces to the countries on the list. Now we know, as long as none of the other countries turn red, we have nothing to worry about!


Innovate When Needed

The logarithmic scale in Tableau isn't as useful for these %, so hacking the visualised values in order to amplify the data sections of interest is a perfectly valid way of thinking outside the box. In this example, half the graph is dedicated to 90-100% and the other half 0-90%. The blue chunk is the percentage of signatures coming from the UK, while every other country colour chunk is still so small. Since the totals from other countries are about the same as each mainland constituency, it's more useful to see it as one chunk. Lastly, adding the heat colour coding keeps the visual integrity.

Interactivity

Now that we have the count, percentage and location breakdown into 3 simple graphs we feel much wiser. So it's time to make them interact with each other.

The constituency heatmap doesn't need to interact with the bar charts. The correlation between the hottest bars and the heatmap is obvious from the get go, but if we were to filter the bars using the map, the percentages would be so tiny you wouldn't see much on the % graph. The same occurs for the Country bar chart, meaning that only the percentage chart can be usefully used as a filter. Selecting the yellow chunk will show the count of signatures for every country within it only.

Another way in which interactivity can be introduced is through adding further visualisations to the tooltip. The petition data contains the MP responsible for each constituency, so we can effectively put a count of signatures to each name. It's nice to be able to see what their parliamentary voting record has been throughout this Brexit deadlock, which was obtained publicly from the House of Commons portal https://commonsvotes.digiminster.com and blended in; as more votes come in, the list will automatically increase.

Keep It Simple

As you can see, 3 is a magic number here. The trio of visuals working together makes a dashing delivery of intel to the brain. With very little effort, we can see how many signatures come from the UK compared to rest of the world, how many thousands are coming from each country, how many from each constituency, who the MP you should be writing to is and how they voted in the indicative votes. Furthermore, this dashboard can keep track of all of that in real time, flagging any incoming surge of signatures from abroad, continuously counting the additional signatures until August 2019 and providing a transparent record of parliamentary votes in a format that is very easy to visually digest.

Categories: BI & Warehousing

Virtual Machines (VMs)

Jeff Moss - Mon, 2019-09-02 01:59
Getting an Image
$Location="northeurope"
Get-AzureRmVMImagePublisher -Location $Location
Get-AzureRmVMImageOffer -Location $Location -PublisherName "MicrosoftSQLServer"
Get-AzureRmVMImageSku -Location $Location -PublisherName "MicrosoftSQLServer" -Offer "SQL2019-WS2016"
Get-AzureRmVMImage -Location $Location -PublisherName "MicrosoftSQLServer" -Offer "SQL2019-WS2016" -Skus "SQLDEV"

Announcement: New “Oracle Indexing Internals and Best Practices” Webinar – 19-23 November 2019 in USA Friendly Time Zone

Richard Foote - Sun, 2019-09-01 23:54
I’m very excited to announce a new Webinar series for my highly acclaimed “Oracle Indexing Internals and Best Practices” training event, running between 19-23 November 2019 !! Indexes are fundamental to every Oracle database and are crucial for optimal performance. However, there’s an incredible amount of misconception, misunderstanding and pure myth regarding how Oracle indexes function […]
Categories: DBA Blogs

Oracle Cloud: Sign up failed...

Dietrich Schroff - Sun, 2019-09-01 08:38
Yesterday I tried to sign up for Oracle Cloud:

 So let's start the registration process:


The mobile number verification is done via SMS, and after entering the 7-digit PIN you are allowed to enter a password:

For payment, only credit cards are accepted:
  • VISA
  • Mastercard
  • Amex


Even though my credit card was accepted:



"Your credit card has been successfully validated. Please proceed to complete the Sign up."
I got the following error:

"We're unable to process your transaction. Please contact Oracle Customer Service."
The "Oracle Customer Service" link did not work, so I used the chat support. But no agent was available in the chat and only "Send E-Mail" worked. Let's see what kind of response I will get...

EDIT: Some further attempts...

EDIT 2: see here how the problem was solved.  

Ubuntu Linux: Change from Legacy boot to UEFI boot after installation

Dietrich Schroff - Sat, 2019-08-31 10:01
This weekend I installed Linux on a laptop on which Windows 10 was already installed.
Because the laptop did not recognize my Linux boot USB stick, I changed from UEFI to legacy mode and the installation went through without any problem.

At the end GRUB was in place, but the Windows installation was not listed. This is because Windows does not support booting in legacy mode.

The problem: if I switched back to UEFI, the Linux installation did not start anymore.

My solution:
  • Change to UEFI and boot with a live Linux
  • Install boot-repair into the live Linux
    (https://help.ubuntu.com/community/Boot-Repair)
    sudo add-apt-repository ppa:yannubuntu/boot-repair
    sudo apt-get update
    sudo apt-get install -y boot-repair
  • Then run boot-repair:
    boot-repair
  • Follow the instructions on the Boot-Repair homepage (see above)
  • Enter the commands shown in the following popups:


And after removing the live CD I got a GRUB boot menu where Windows was in place and working (and the Ubuntu Linux worked too ;-)
