Feed aggregator

“No Primary Key item has been defined for form region”

Jeff Kemp - Fri, 2019-08-16 02:30

Otherwise reported as “WWV_FLOW_FORM_REGION.NO_PRIMARY_KEY_ITEM” or merely “Internal error”, this bug caused me to waste hours of investigation, twice, because a simple Google search was not returning any results. This post is merely to remedy that situation.

On an APEX 19.1 page with a new Form region combined with an Interactive Grid region, when the user makes a change to a record and Submits the page, they get this error.

This is caused by bug 29612553 – FORM REGION DML PROCESS THROWS INTERNAL ERROR WHEN INTERACTIVE GRID DML PROCESS IS ON THE SAME PAGE and fixed by APEX patch 29369869.

Basic Replication -- 1 : Introduction

Hemant K Chitale - Thu, 2019-08-15 23:24
Basic Replication, starting with Read Only Snapshots, has been available in Oracle since V7. This was done with the "CREATE SNAPSHOT" command.

In 8i, the term was changed from "Snapshot" to "Materialized View"  and the "CREATE MATERIALIZED VIEW" command was introduced, while "CREATE SNAPSHOT" was still supported.

Just as CREATE SNAPSHOT is still available in 19c,  DBMS_SNAPSHOT.REFRESH is also available.

Not that I recommend you use CREATE SNAPSHOT and DBMS_SNAPSHOT anymore. DBAs and Developers should have been using CREATE MATERIALIZED VIEW and DBMS_REFRESH since 8i.
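For reference, this is roughly what the modern equivalents look like. It is only a sketch: the schema, table, database link, refresh group name and interval are made-up examples, and a materialized view log on the master table is assumed for FAST refresh.

sqlplus hemant/oracle@ORCLPDB1 <<'EOF'
-- Modern equivalent of CREATE SNAPSHOT (assumes CREATE MATERIALIZED VIEW LOG
-- has already been created on SOURCE_TABLE at the master site)
CREATE MATERIALIZED VIEW my_mv
  REFRESH FAST
  AS SELECT * FROM source_table@remote_db;

-- Modern equivalent of DBMS_SNAPSHOT.REFRESH: a refresh group via DBMS_REFRESH
BEGIN
  DBMS_REFRESH.MAKE(name => 'MY_GRP', list => 'MY_MV',
                    next_date => SYSDATE, interval => 'SYSDATE + 1/24');
  DBMS_REFRESH.REFRESH('MY_GRP');
END;
/
EOF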

In the next few blog posts (this will be a very short series) I will explore Basic Replication.  Let me know if you want to see it in 11.2 and 12c as well.



Categories: DBA Blogs

Find Docker Container IP Address?

DBASolved - Wed, 2019-08-14 11:38

This is just a simple post for later reference, in case I need it …

In setting up some docker containers for testing Oracle GoldenGate, I needed to find the IP address of the container where my database was running (I keep my database in a separate container in order not to rebuild it every time).

To find the address of my database container, I had to use the docker "inspect" command. This command returns low-level information on Docker objects.

The syntax is as follows:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_name_or_id>
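For example, assuming the database container is named oracle-db (the name here is hypothetical; docker ps lists the real ones):

# List running containers to find the right name or ID
docker ps --format '{{.Names}}'

# Print the IP address of the container named "oracle-db"
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' oracle-db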


Enjoy!!!!

Categories: DBA Blogs

Identifying the TNS_ADMIN for a deployment in GoldenGate Microservices

DBASolved - Wed, 2019-08-14 10:30

Setting up network routing/access with Oracle GoldenGate Microservices (12.3 and later) can be an interesting experience. As many in the Oracle space know, you simply have to set TNS_ADMIN to point to the location where your sqlnet.ora and tnsnames.ora files are located. This would normally look like this:


export TNS_ADMIN=${ORACLE_HOME}/network/admin


or


export TNS_ADMIN=${ORA_CLIENT_HOME}/network/admin

 

These examples still work for Oracle GoldenGate Classic; however, when we start looking at this through the lens of Microservices, it changes a bit. Within the Oracle GoldenGate Microservices architecture, the TNS_ADMIN environment variable has to be set "per deployment". Depending on the number of deployments that are deployed within the architecture, it is possible to have 1+N TNS_ADMIN variables.

As an illustration, it would look something like this:

As you can see, this is specific to the Microservices architecture and to how network routing is set up for individual deployments.

Setting TNS_ADMIN

How do you set the TNS_ADMIN environment variable for each deployment? It is quite simple when you are building a deployment using the Oracle GoldenGate Configuration Assistant (OGGCA). Prior to running OGGCA, you can set the TNS_ADMIN variable at the OS level, and OGGCA will pick it up for that run and the configuration of that specific deployment.
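A minimal sketch of that flow (the TNS_ADMIN path below is just an example):

# Point TNS_ADMIN at the directory holding this deployment's sqlnet.ora/tnsnames.ora
export TNS_ADMIN=/u01/app/oracle/network/admin_alpha

# Start the Configuration Assistant; it picks the variable up for the deployment being built
$OGG_HOME/bin/oggca.sh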

Alternatively, you don't have to set it at the OS level. During the OGGCA walkthrough, you will be able to set the variable manually. OGGCA will not move past the environment variables step until it is provided.

Changing TNS_ADMIN

After building a deployment, you may want to change the location of your network-related files. This can be done either from the HTML5 web pages for the deployment or from the REST API.

To change TNS_ADMIN from the HTML5 pages within Oracle GoldenGate Microservices, you need to start at the ServiceManager Overview page. At the bottom of this page, there is a section called "Deployments".

Then select the deployment you want to work with. After clicking on the deployment name, you should be on the "Deployment Information" page. This page has two tabs at the top. The first tab is related to details of the deployment. The second tab is related to the configuration of the deployment.

The second tab, Configuration, is where you can set/change the environment variables for the deployment. In this case, we want to modify the TNS_ADMIN environment variable.

 

To the right of the variable, in the "Actions" column, click on the pencil icon. This will allow you to edit the environment variable. Change it to the new location and save it. You may need to restart the deployment (hint: that step is on the ServiceManager Overview page).

At this point, you should be able to change the location of your TNS_ADMIN variable. This is also handy for Oracle GoldenGate Microservices on Marketplace … just saying.

Using REST API

The same process can be done quickly using the REST API. The sample code below is only a sample and has not been tested. Use at your own risk!

curl -X PATCH \
  https://<ip_address>/services/v2/deployments/alpha \
  -H 'cache-control: no-cache' \
  -d '{
    "oggHome":"/opt/app/oracle/product/19.1.0/oggcore_1",
    "oggEtcHome":"/opt/app/oracle/gg_deployments/Atlanta/etc",
    "oggVarHome":"/opt/app/oracle/gg_deployments/Atlanta/var",
    "environment"{
    	"tns_admin":"/opt/app/oracle/product/18.1.0/network/admin"
    }
    "status":"restart"
}'
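To verify the change, the same endpoint should also answer a GET with the deployment's current configuration. This is equally untested, and you may need to pass credentials (for example with curl's -u option) depending on how your Service Manager security is set up:

curl -s -X GET https://<ip_address>/services/v2/deployments/alpha \
  -H 'cache-control: no-cache'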

Enjoy!!!

Categories: DBA Blogs

Menu Popup with Declarative List

Jeff Kemp - Tue, 2019-08-13 23:04

In the past when I’ve needed to add half a dozen or more buttons to a page, I’ve sometimes encased them in a Collapsible region so that the user can slide them out of the way instead of clogging up the screen. Recently however I’ve started (sparingly) using a Menu Popup, as per this tutorial. The issue I have with this method, however, is that the menu items are defined in a shared component (a List) which means it’s not defined on a per-page basis.

Some of the actions simply need to do a Submit on the page, which is simple enough: set the URL Target to something like:

In other cases, the action needs to do something more specific to the page, e.g. show a region:

apex.theme.openRegion("popupQuestion")

Or the action might need to navigate to another page, passing parameters based on specific items on the page. This means the list, defined in Shared Components, now has hardcoded elements that are only useful for that one page; more to the point, they are defined outside of the page – I’d rather that everything specific to a page is defined within that page’s definition.

The approach I’m using now is to use a custom trigger. Each list item has its URL Target set to something like:

The third parameter is set to a unique code that the page can use to identify which menu option was chosen. This parameter will be passed to this.data in the custom trigger’s event handler.

On the page, I have a Dynamic Action with the following attributes:

  • Event: Custom
  • Custom Event: menuAction
  • Selection Type: JavaScript Expression
  • JavaScript Expression: document
  • True Action: Execute JavaScript Code, e.g.:
switch(this.data) {
  case 'OPEN_POPUP':
    apex.theme.openRegion("popupQuestion");
    break;

  default:
    apex.page.submit({request:this.data,showWait:true});
}

Note that to do a simple Submit on the page, all I need to do is set the request on the third parameter of the menu item’s URL. If I want to do something slightly different for a particular request, I can put an extra “case” in the JavaScript code to handle it.

The benefit of this approach is that this trigger becomes the jumping-off point for all such menu actions for this page. In theory I could re-use the same List on multiple pages (if the items in the list are generic enough) but have different behaviour occur for each menu item specific to each page. The only challenge with this approach might be if you needed some conditions on each menu item, e.g. so they are shown or hidden in specific circumstances. If the condition for a menu item references a particular page item the List will no longer be generic and re-usable. For this reason, I usually still use a separate List for each menu for each page.

Perhaps in a future release of APEX we will gain the ability to define a List on a Page instead of in Shared Components. In the meantime, if you are interested in all the details on this method (including a solution for implementing a redirect to another page, or to open a modal page), refer to this tip.

Oracle JET - How To Reference JavaScript Module in HTML

Andrejus Baranovski - Sun, 2019-08-11 07:06
I will explain how to reference a JavaScript module function in Oracle JET HTML. In previous Oracle JET versions, we were using $root to access the parent appController module. The $root syntax looks a bit like magic; it is better to reference the module through a predefined variable. I will show you how.

The sample app comes with a number converter, which helps to format numeric values. For convenience and better reuse, the number formatting function is moved to a separate module. Our goal is to call the function self.convertNumericValue from the converter module inside our module HTML:


Import the converter module into your target module. Make sure to define a variable for the import. Then define a local variable and assign it the value pointing to the imported module. This will allow calling functions from the imported module anywhere within our target module:


To demonstrate how it works, I will include a call to format the number for a table column. I have defined a table column template for that reason:


Within the template, I'm calling the converter (the function is called by referencing the local variable) to format the number:


It works well; the column with salary values is formatted by the function from the imported module:


A sample application is available on my GitHub repo.

Rolling out patched 19c home using gold image

Bobby Durrett's DBA Blog - Thu, 2019-08-08 18:49

For Oracle versions 11.2 through 18 I have been applying quarterly patches to a test database and then gathering the Oracle home into a tar file. When we want to roll out a new database server with the latest patches we untar the file and clone the Oracle home with a command like this:

$ORACLE_HOME/oui/bin/runInstaller -clone -silent ...

This command no longer works for 19c so now I am using a new command like this:

$ORACLE_HOME/runInstaller -silent -responseFile ...

Before running this command I had to unzip my gold image zip file into $ORACLE_HOME. I created the gold image zip file using a command like this:

./runInstaller -createGoldImage -destinationLocation ...

I ran this through MobaXterm to use their X server. I created the response file when I initially installed 19c on this test server. Then I patched the Oracle home with the July 2019 PSU and finally ran the above command to create the gold image.
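Putting the pieces together, the end-to-end flow looks roughly like this. The staging directory and file names below are made-up examples, not the exact ones from my environment:

# 1. On the patched test server: create the gold image zip from the patched home
cd $ORACLE_HOME
./runInstaller -createGoldImage -destinationLocation /u01/stage

# 2. On the new server: unzip the gold image into the new (empty) Oracle home
unzip -q /u01/stage/db_home_2019_07.zip -d $ORACLE_HOME

# 3. Run the installer from the unzipped home silently with the saved response file
$ORACLE_HOME/runInstaller -silent -responseFile /u01/stage/db_install.rsp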

Some useful links that I ran into:

Franck Pachot’s post about doing a silent 18c install using the new runInstaller

Oracle support document that says the old Oracle home cloning does not work in 19c:

19.x:Clone.pl script is deprecated and how to clone using gold-image (Doc ID 2565006.1)

Oracle documentation about installing silent with response file

DBA Stackexchange post about how you have to use a response file because the command line options don’t work

This is kind of terse but it has some of the key information. I may update it later if I can.

Bobby

Categories: DBA Blogs

2million PageViews

Hemant K Chitale - Wed, 2019-08-07 23:53

This blog has now achieved 2 million PageViews:




(The "drop" at the end is the count for only the first week of August 2019).

Although this blog began in December 2006, the PageViews counts start with 8,176 in July 2010. So, effectively, this blog has had 2 million PageViews in 9 years.

The first 1million PageViews were achieved in March 2015.

Unfortunately, the rate at which I have been publishing has declined since 2017 (36 posts in 2017, 30 in 2018 and only 8 so far this year).  I apologise for this.  Hopefully, I should be able to add more posts in the coming months.




Categories: DBA Blogs

Feedback on the EMEA Red Hat Partner Conference 2019 in Prague

Yann Neuhaus - Wed, 2019-08-07 04:06

A few weeks ago I attended the Red Hat EMEA Partner Conference for the first time. The Red Hat Conference took place in Prague from June 25th to 27th. If you are interested in Open Source technologies and in Red Hat, feel free to read this personal feedback on trends at Red Hat and in the IT sector.

Stronger Together!

Representing dbi services at the Partner Conference in Prague was a great opportunity for us as a Red Hat Advanced Partner.

About 850 people attended this amusing event! Interactions with the Red Hat Community were very interesting and relaxed. Is it because of the Open Source atmosphere? The organization, catering, and location were also great! Many thanks to the organizers!

Also a sincere thank you to all Swiss Red Hat and Tech Data contacts at the event for welcoming and assisting Swiss Partners during the 3 days. Everything was handled extremely professionally thanks to Leonard Bodmer (Country Manager Red Hat Switzerland), Richard Zobrist (Head of Partner & Alliances Red Hat), Maja Zurovec (Account Manager Partner & Alliances Red Hat), Sandra Maria Sigrist (Tech Data), and Daria Stempkowski (Tech Data). Many thanks to all of you, also for the warm and relaxed evening at Villa Richter at the foot of Prague's castle!

We are Stronger Together!

All about automation, integration, hybrid cloud, and multi-cloud

With this 3-day Partner Conference, Red Hat proposed a broad agenda of Breakouts, Exams, a Hackathon, Keynotes, Labs, and an Open Innovation Lab. I mostly attended sessions where Red Hat partners and customers had the opportunity to give feedback on their experience with Red Hat products. Some of the sessions and keynotes were remarkable.

Red Hat Middleware Roadmap

The "Red Hat Middleware Roadmap" sessions (Part 1 & 2) with Rich Sharpels were a good opportunity to learn more about productivity (automation, integration, runtimes), reliability (security, reduced complexity), and flexibility (containers for scaling, cloud, hybrid-cloud, multi-cloud) with OpenShift. These two presentations also covered iPaaS, a new private integration Platform-as-a-Service offering that provides cloud-based services for application integration and messaging. The goal here is to strengthen collaboration within the business teams (DevOps) thanks to Managed Integration + OpenShift Dedicated. Rich Sharpels summarized the benefits of the iPaaS with: "cloud services and packages where customers don't have to administrate anything!"

Ansible Partner Enablement Offerings

Günter Herold from Red Hat and Daniel Knözinger from Open Networks Austria held the session "Ansible Partner Enablement Offerings". This session focused on the advantages of automating tasks with Ansible to reduce mistakes, errors, and complexity, because "Ansible is the universal language for the whole IT team". With Ansible, "start small and develop".

Best Practices for Working with Red Hat Support

Those who wanted to learn about "Best Practices for Working with Red Hat Support" attended the session with Caroline Baillargeon, Leona Meeks, and Peter Jakobs from Red Hat. This presentation gave the opportunity to learn about and discuss:

  • The Customer Portal which is said to be “full of information and best practices that could be useful before opening and escalating issues”. For example, should you search for information on Ansible, have a look at this page
  • The TSAnet Connect where answers for multi IT solutions are centralized
  • The Case Management Tool for sharing the open source spirit and to be part of the community (example)
  • Tips to work efficiently with the Red Hat support:
    1. “Make sure the customer is registered on the correct time zone”
    2. “To get 7×24 support, a Premium Support subscription is needed”
    3. “in case the answer on an issue is not appropriate, use the escalation button”
    4. “contact Red Hat to get access to the trainings that also the Red Hat engineers follow for technical problem solving”

While keeping the end customer's satisfaction in mind, this session could probably best be summarized with: why not contribute and share knowledge within the Red Hat community?

Keynotes

Keynotes mainly concentrated on "marketing topics" that aim at boosting partner engagement, but they were still interesting, in particular:

  • “The Killer App in digital transformation is human connection” with Margaret Dawson on collaborating within the community
  • “Open Hybrid Cloud Ecosystems: Bold Goals for Tomorrow” with Lars Herrmann on innovations that have been considered impossible. In short, “if you think it is impossible, just do it”, so develop cloud, hybrid cloud and multi-cloud opportunities…

 

On Red Hat’s and Partner’s booths at the Congress Center

Besides sessions and presentations, lots of interesting technical booths at the Congress Center Prague promoted the work of the Red Hat engineers within the Open Source community. In particular, I spent some time with Romain Pelisse (Red Hat Berlin) and Robert Zahradnícek (Red Hat Brno), who explained to me how they work and what the trends are in their areas. Of course we spoke about automation and integration, and about findings that are developed within the open source community first, before being implemented in Red Hat solutions.

Last but not least, some Red Hat partners were present with a booth to promote their activities and products during the conference, among them Tech Data and Arrows ECS, which are well known at dbi services.

What to take from the Red Hat EMEA Conference?

On Red Hat and the Conference

At the end of the day, the keywords from the Red Hat EMEA Conference were probably not that far from the keywords you would get from other technology conferences. Concepts and products like "automation", "integration", "Ansible", or "OpenShift" are means to get companies into the cloud. But why not? The trend of moving into the cloud is getting clearer and clearer, as it now makes sense for lots of projects, at least for Disaster Recovery, Test and Development in the cloud.

Whether it is private cloud, hybrid cloud, or multi-cloud is not the topic at Red Hat: their solutions are agnostic. And Red Hat's strategy is clearly based on a strong commitment to open source. It's all about "products" (not projects), "collaboration", "community", and "customer success".

On Open Source trends and strategy

Now you may ask why you should subscribe to Red Hat's products and support. Sure, with Ansible and other Open Source products you can easily "start small and develop". Therefore the community version may fit. But what if you go into production? The more numerous and the bigger the projects become, the more you will need support. And subscribing will probably make sense.

Then don't forget that Open Source is not free. Whether you go for community or enterprise Open Source makes no difference; in the end you will need to invest at least in time and knowledge. And, depending on the situation, you may subscribe to products and support. If you don't know where to start, ask dbi services for Open Source expertise.

Looking forward to reading your comments.

The article Feedback on the EMEA Red Hat Partner Conference 2019 in Prague appeared first on the dbi services blog.

19c Grid Dry-Run Upgrade

Michael Dinh - Tue, 2019-08-06 07:42

First test using GUI.

[oracle@racnode-dc2-1 grid]$ /u01/app/19.3.0.0/grid/gridSetup.sh -dryRunForUpgrade
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_00-20-31AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-08-06_00-20-31AM/gridSetupActions2019-08-06_00-20-31AM.log
[oracle@racnode-dc2-1 grid]$

Create dryRunForUpgradegrid.rsp from grid_2019-08-06_00-20-31AM.rsp (above GUI test)

[oracle@racnode-dc2-1 grid]$ grep -v "^#" /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_00-20-31AM.rsp | grep -v "=$" | awk 'NF' > /home/oracle/dryRunForUpgradegrid.rsp

[oracle@racnode-dc2-1 ~]$ cat /home/oracle/dryRunForUpgradegrid.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=vbox-rac-dc2
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.config.clusterNodes=racnode-dc2-1:,racnode-dc2-2:
oracle.install.crs.configureGIMR=true
oracle.install.asm.configureGIMRDataDG=false
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE
oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=
oracle.install.crs.config.useIPMI=false
oracle.install.asm.diskGroup.name=CRS
oracle.install.asm.diskGroup.AUSize=0
oracle.install.asm.gimrDG.AUSize=1
oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.config.omsPort=0
oracle.install.crs.rootconfig.executeRootScript=false
[oracle@racnode-dc2-1 ~]$

Create directory grid home for all nodes:

[root@racnode-dc2-1 ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54318(asmdba),54322(dba),54323(backupdba),54324(oper),54325(dgdba),54326(kmdba)

[root@racnode-dc2-1 ~]# mkdir -p /u01/app/19.3.0.0/grid
[root@racnode-dc2-1 ~]# chown oracle:oinstall /u01/app/19.3.0.0/grid
[root@racnode-dc2-1 ~]# chmod 775 /u01/app/19.3.0.0/grid

[root@racnode-dc2-1 ~]# ll /u01/app/19.3.0.0/
total 4
drwxrwxr-x 2 oracle oinstall 4096 Aug  6 02:07 grid
[root@racnode-dc2-1 ~]#

Extract grid software for node1 ONLY:

[oracle@racnode-dc2-1 ~]$ unzip -qo /media/swrepo/LINUX.X64_193000_grid_home.zip -d /u01/app/19.3.0.0/grid/

[oracle@racnode-dc2-1 ~]$ ls /u01/app/19.3.0.0/grid/
addnode     clone  dbjava     diagnostics  gpnp          install        jdbc  lib      OPatch   ords  perl     qos       rhp            rootupgrade.sh  sqlpatch  tomcat  welcome.html  xdk
assistants  crs    dbs        dmu          gridSetup.sh  instantclient  jdk   md       opmn     oss   plsql    racg      root.sh        runcluvfy.sh    sqlplus   ucp     wlm
bin         css    deinstall  env.ora      has           inventory      jlib  network  oracore  oui   precomp  rdbms     root.sh.old    sdk             srvm      usm     wwg
cha         cv     demo       evm          hs            javavm         ldap  nls      ord      owm   QOpatch  relnotes  root.sh.old.1  slax            suptools  utl     xag

[oracle@racnode-dc2-1 ~]$ du -sh /u01/app/19.3.0.0/grid/
6.0G    /u01/app/19.3.0.0/grid/
[oracle@racnode-dc2-1 ~]$

Run gridSetup.sh -silent -dryRunForUpgrade:

[oracle@racnode-dc2-1 ~]$ env|grep -i ora
USER=oracle
MAIL=/var/spool/mail/oracle
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/oracle/.local/bin:/home/oracle/bin
PWD=/home/oracle
HOME=/home/oracle
LOGNAME=oracle

[oracle@racnode-dc2-1 ~]$ date
Tue Aug  6 02:35:47 CEST 2019

[oracle@racnode-dc2-1 ~]$ /u01/app/19.3.0.0/grid/gridSetup.sh -silent -dryRunForUpgrade -responseFile /home/oracle/dryRunForUpgradegrid.rsp
Launching Oracle Grid Infrastructure Setup Wizard...

[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
 /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_02-35-52AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log


As a root user, execute the following script(s):
        1. /u01/app/19.3.0.0/grid/rootupgrade.sh

Execute /u01/app/19.3.0.0/grid/rootupgrade.sh on the following nodes:
[racnode-dc2-1]

Run the script on the local node.

Successfully Setup Software with warning(s).
[oracle@racnode-dc2-1 ~]$

Run rootupgrade.sh for node1 ONLY and review log:

[root@racnode-dc2-1 ~]# /u01/app/19.3.0.0/grid/rootupgrade.sh
Check /u01/app/19.3.0.0/grid/install/root_racnode-dc2-1_2019-08-06_02-44-59-241151038.log for the output of root script

[root@racnode-dc2-1 ~]# cat /u01/app/19.3.0.0/grid/install/root_racnode-dc2-1_2019-08-06_02-44-59-241151038.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19.3.0.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Performing Dry run of the Grid Infrastructure upgrade.
Using configuration parameter file: /u01/app/19.3.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/racnode-dc2-1/crsconfig/rootcrs_racnode-dc2-1_2019-08-06_02-45-31AM.log
2019/08/06 02:45:44 CLSRSC-464: Starting retrieval of the cluster configuration data
2019/08/06 02:45:52 CLSRSC-729: Checking whether CRS entities are ready for upgrade, cluster upgrade will not be attempted now. This operation may take a few minutes.
2019/08/06 02:47:56 CLSRSC-693: CRS entities validation completed successfully.
[root@racnode-dc2-1 ~]#

Check grid home for node2:

[oracle@racnode-dc2-2 ~]$ du -sh /u01/app/19.3.0.0/grid/
6.6G    /u01/app/19.3.0.0/grid/
[oracle@racnode-dc2-2 ~]$

Check oraInventory for ALL nodes:

[oracle@racnode-dc2-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2019, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.7.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.2.0.1/grid" TYPE="O" IDX="1" CRS="true"/>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.2.0.1/db1" TYPE="O" IDX="2"/>
==========================================================================================
<HOME NAME="OraGI19Home1" LOC="/u01/app/19.3.0.0/grid" TYPE="O" IDX="3"/>
==========================================================================================
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc2-2 ~]$

Check crs activeversion: 12.2.0.1.0

[oracle@racnode-dc2-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.2.0.1/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
Oracle Instance alive for sid "+ASM1"

[oracle@racnode-dc2-1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.2.0.1.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [927320293].
[oracle@racnode-dc2-1 ~]$

Check log location:

[oracle@racnode-dc2-1 ~]$ cd /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/

[oracle@racnode-dc2-1 GridSetupActions2019-08-06_02-35-52AM]$ ls -alrt
total 17420
-rw-r-----  1 oracle oinstall     129 Aug  6 02:35 installerPatchActions_2019-08-06_02-35-52AM.log
-rw-r-----  1 oracle oinstall       0 Aug  6 02:35 gridSetupActions2019-08-06_02-35-52AM.err
drwxrwx---  3 oracle oinstall    4096 Aug  6 02:35 temp_ob
-rw-r-----  1 oracle oinstall       0 Aug  6 02:39 oraInstall2019-08-06_02-35-52AM.err
drwxrwx--- 17 oracle oinstall    4096 Aug  6 02:39 ..
-rw-r-----  1 oracle oinstall     157 Aug  6 02:39 oraInstall2019-08-06_02-35-52AM.out
-rw-r-----  1 oracle oinstall       0 Aug  6 02:43 oraInstall2019-08-06_02-35-52AM.err.racnode-dc2-2
-rw-r-----  1 oracle oinstall     142 Aug  6 02:43 oraInstall2019-08-06_02-35-52AM.out.racnode-dc2-2
-rw-r-----  1 oracle oinstall 9341920 Aug  6 02:43 gridSetupActions2019-08-06_02-35-52AM.out
-rw-r-----  1 oracle oinstall   13419 Aug  6 02:43 time2019-08-06_02-35-52AM.log
-rw-r-----  1 oracle oinstall 8443087 Aug  6 02:43 gridSetupActions2019-08-06_02-35-52AM.log
drwxrwx---  3 oracle oinstall    4096 Aug  6 02:56 .
[oracle@racnode-dc2-1 GridSetupActions2019-08-06_02-35-52AM]$

Since I have not performed the actual upgrade, I don't know if having the 19.3.0.0 grid home in oraInventory will be problematic.

It was problematic when performing the test in silent mode after the initial test with the GUI.

To resolve the issue, detach the 19.3.0.0 grid home:

export ORACLE_HOME=/u01/app/19.3.0.0/grid
$ORACLE_HOME/oui/bin/runInstaller -detachHome -silent ORACLE_HOME=$ORACLE_HOME
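Afterwards, you can confirm the detach worked by checking the inventory file shown earlier; the OraGI19Home1 entry should be gone:

grep "19.3.0.0" /u01/app/oraInventory/ContentsXML/inventory.xml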

Cool Roblox Usernames

VitalSoftTech - Tue, 2019-08-06 02:02

Are you about to make a new Roblox account to play with your friends and are in a bind for a cool username? We’ll help you there. Roblox is an interactive, imaginative, online game that you can play with your friends. It is one of the largest online gaming platforms, with over 15 million games […]

The post Cool Roblox Usernames appeared first on VitalSoftTech.

Categories: DBA Blogs

Oracle JET Model ID Attribute

Andrejus Baranovski - Tue, 2019-08-06 01:20
When building an Oracle JET Model/Collection structure to render a table, you should be careful when defining the ID attribute. This attribute must have unique values only; if there are duplicates in the ID, then table rows will be rendered incorrectly.

Take as an example the structure below for the Employees REST endpoint. The EmployeeId attribute provides unique values and it should be defined as the ID for the JET model:


Let's try and see what happens when the non-unique DepartmentId attribute is set as the key in the JET model:


In this case, during data fetch and UI table rendering (specifically when navigating to the next page of the table), the JET runtime for the table will get confused and will fetch all of the records from the backend (definitely not what we want):


OK, change it to the proper one - use EmployeeId for the ID in the JET model:


The table works as expected:


Lesson learned - make sure to use an attribute with unique values for the JET model ID. If there is no single attribute with unique values, then concatenate multiple attributes on the backend to construct a single attribute with unique values.

Source code is available from my GitHub repo.

Announcing Fluid 3!

Jim Marion - Mon, 2019-08-05 12:43

At jsmpros, we teach Fluid training courses several times a month. PeopleTools Fluid is by far our most popular course. Through our Fluid 1 and 2 courses, we teach an incredible amount of material including:

  • Fluid navigation,
  • Fluid page development strategies,
  • Oracle-delivered style classes and layout
  • External CSS libraries,
  • Fluid grid layouts
  • Fluid Group Boxes
  • Dynamic Tiles
  • Responsive and adaptive mobile design concepts, etc.

The feedback from our Fluid 1 and 2 series is overwhelmingly positive. What we are announcing today is our next level Fluid course: Fluid 3. Through this course you will learn how to:

  • Use Master/Detail to build business process based solutions,
  • Build effective secondary headers similar to PeopleSoft's self-service headers (including Related Actions),
  • Use Scroll Areas and Fluid alternatives,
  • Extend Fluid with JavaScript libraries such as d3 and Oracle JET,
  • Leverage the DataGrid to create compelling solutions,
  • Add Fluid Related Content,
  • Convert Classic components to Fluid,
  • Extend Fluid Pages with 8.57+ Drop Zones,
  • Construct robust, business-centric dynamic tiles and Fluid navigation, and
  • Learn additional PeopleSoft CSS classes not covered in the Fluid 1 and 2 courses.

To register for our upcoming August Fluid 3 session or any of our live virtual training courses, please visit our Live Virtual Training online catalog.

Has it been a while since your last Fluid training course? Are your Fluid skills a little rusty? Use our Fluid 3 course as a refresher to get you back into shape.

Note: If you have taken Fluid from other trainers, feel free to start with our Fluid 2 course. Even though we do not monitor prerequisites, we do encourage attendees with Fluid experience to attend our Fluid 2 training before continuing to Fluid 3.

Sparse OVM virtual disks on appliances

Yann Neuhaus - Mon, 2019-08-05 00:23

For some reason, you may need to sparse OVM virtual disks on an Oracle appliance. Even though that feature is present through the OVM Manager, most Oracle appliances don't have any OVM Manager deployed on them. Therefore, if you un-sparse your virtual disk by mistake, you are on your own.

This is a note on how to sparse virtual disks which have been un-sparsed.

Stop I/Os on the virtual disk

First, ensure the VM using the disk is stopped:
xm shutdown {VM_NAME}

For instance:
xm shutdown exac01db01.domain.local
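Before copying anything, you can double-check that the domain is really down (xm list shows only running domains; the VM name is the same example as above):

# The VM should no longer appear in the list of running domains
xm list | grep exac01db01.domain.local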

Sparse disk


dd if={PATH_TO_DISK_TO_BE_SPARSED} of={PATH_TO_NEW_SPARSED_DISK} conv=sparse

For instance:

dd if=/EXAVMIMAGES/GuestImages/exac01db01.domain.local/vdisk_root_01.img \
of=/staging/vdisk_root_01.img \
conv=sparse
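To confirm the copy really is sparse, compare the apparent file size with the space actually allocated on disk (same example path as above):

# A sparse copy shows a large apparent size (ls) but much less allocated space (du)
ls -lh /staging/vdisk_root_01.img
du -sh /staging/vdisk_root_01.img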

Move disk to former location

After the sparsing operation has finished, move each disk back to its former location:

# Retrieve the disks path:
cat /EXAVMIMAGES/GuestImages/{VM_NAME}/vm.cfg | grep disk
# Copy each disk back to its location:
mv /staging/{DISK_NAME}.img /EXAVMIMAGES/GuestImages/{VM_NAME}/{DISK_NAME}.img

For instance:

mv /staging/vdisk_root_01.img /EXAVMIMAGES/GuestImages/exac01db01.domain.local/vdisk_root_01.img

Start back the VM

Then you can start the VM back up; it will use the new disk:
xm create /EXAVMIMAGES/GuestImages/{VM_NAME}/vm.cfg

I hope this helps and please contact us or comment below should you need more details.

The article Sparse OVM virtual disks on appliances appeared first on the dbi services blog.

Alfresco Clustering – Solr6

Yann Neuhaus - Sat, 2019-08-03 01:00

In previous blogs, I talked about some basics and presented some possible architectures for Alfresco, I talked about the Clustering setup for the Alfresco Repository, the Alfresco Share and for ActiveMQ. I also set up an Apache HTTPD as a Load Balancer. In this one, I will talk about the last layer that I wanted to present, which is Solr and more particularly Solr6 (Alfresco Search Services) Sharding. I planned on writing a blog related to Solr Sharding Concepts & Methods to explain what it brings concretely but unfortunately, it's not ready yet. I will try to post it in the next few weeks, if I find the time.

 

I. Solr configuration modes

So, Solr supports/provides three configuration modes:

  • Master-Slave
  • SolrCloud
  • Standalone


Master-Slave: It's a first specific configuration mode which is pretty old. In this one, the Master node is the only one to index the content and all the Slave nodes will replicate the Master's index. This is a first step to provide a Clustering solution with Solr, and Alfresco supports it, but this solution has some important drawbacks. For example, and contrary to an ActiveMQ Master-Slave solution, Solr cannot change the Master. Therefore, if you lose your Master, there is no indexing happening anymore and you need to manually change the configuration file on each of the remaining nodes to specify a new Master and target all the remaining Slave nodes to use the new Master. This isn't what I will be talking about in this blog.

SolrCloud: It's another specific configuration mode which is a little bit more recent, introduced in Solr4 I believe. SolrCloud is a true Clustering solution using a ZooKeeper Server. It adds an additional layer on top of a Standalone Solr which slows it down a little bit, especially on infrastructures with a huge demand on indexing. But at some point, when you start having dozens of Solr nodes, you need a central place to organize and configure them and that's what SolrCloud is very good at. This solution provides Fault Tolerance as well as High Availability. I'm not sure if SolrCloud could be used by Alfresco because, sure, SolrCloud also has Shards and its behaviour is pretty similar to a Standalone Solr, but it doesn't work entirely in the same way. Maybe it's possible, however I have never seen it so far. Might be the subject of some testing later… In any case, using SolrCloud for Alfresco might not be that useful because it's really easier to set up a Master-Master Solr mixed with Solr Sharding for pretty much the same benefits. So, I won't talk about SolrCloud here either.

You guessed it: in this blog, I will only talk about Standalone Solr nodes and only using Shards. Alfresco has supported Solr Shards only since version 5.1. Before that, it wasn't possible to use this feature, even though Solr4 already provided it. When using the two default cores (the famous "alfresco" & "archive" cores), with all Alfresco versions (all supporting Solr… so since Alfresco 4), it is possible to have a Highly Available Solr installation by setting up two Standalone Solr nodes and putting a Load Balancer in front of them, but in this case there is no communication between the Solr nodes, so it's only a HA solution, nothing more.

 

In the architectures that I presented in the first blog of this series, if you remember the schema N°5 (you probably don’t but no worry, I didn’t either), I put a link between the two Solr nodes and I mentioned the following related to this architecture:
“N°5: […]. Between the two Solr nodes, I put a Clustering link, that’s in case you are using Solr Sharding. If you are using the default cores (alfresco and archive), then there is no communication between distinct Solr nodes. If you are using Solr Sharding and if you want a HA architecture, then you will have the same Shards on both Solr nodes and in this case, there will be communications between the Solr nodes, it’s not really a Clustering so to speak, that’s how Solr Sharding is working but I still used the same representation.”

 

II. Solr Shards creation

As mentioned earlier in this blog, there are real Cluster solutions with Solr but in the case of Alfresco, because of the features that Alfresco adds like the Shard Registration, there is no real need to set up complex things like that. Having just a simple Master-Master installation of Solr6 with Sharding is already a very good and strong solution to provide Fault Tolerance, High Availability, Automatic Failover, Performance improvements, and so on… So how can that be set up?

First, you will need to install at least two Solr Standalone nodes. You can use exactly the same setup for all nodes and it’s also exactly the same setup to use the default cores or Solr Sharding so just do what you are always doing. For the Tracking, you will need to use the Load Balancer URL so it can target all Repository nodes, if there are several.

If you created the default cores, you can remove them easily:

[alfresco@solr_n1 ~]$ curl -v "http://localhost:8983/solr/admin/cores?action=removeCore&storeRef=workspace://SpacesStore&coreName=alfresco"
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8983 (#0)
> GET /solr/admin/cores?action=removeCore&storeRef=workspace://SpacesStore&coreName=alfresco HTTP/1.1
> Host: localhost:8983
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/xml; charset=UTF-8
< Content-Length: 150
<
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">524</int></lst>
</response>
* Connection #0 to host localhost left intact
[alfresco@solr_n1 ~]$
[alfresco@solr_n1 ~]$ curl -v "http://localhost:8983/solr/admin/cores?action=removeCore&storeRef=archive://SpacesStore&coreName=archive"
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8983 (#0)
> GET /solr/admin/cores?action=removeCore&storeRef=archive://SpacesStore&coreName=archive HTTP/1.1
> Host: localhost:8983
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/xml; charset=UTF-8
< Content-Length: 150
<
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">485</int></lst>
</response>
* Connection #0 to host localhost left intact
[alfresco@solr_n1 ~]$

 

A status of “0” means that it’s successful.

Once that’s done, you can then simply create the Shards. In this example, I will:

  • use the DB_ID_RANGE method
  • use two Solr nodes
  • for workspace://SpacesStore: create 2 Shards out of a maximum of 10 with a range of 20M
  • for archive://SpacesStore: create 1 Shard out of a maximum of 5 with a range of 50M

Since I will use only two Solr nodes and since I want High Availability for each of the Shards, I will need to have them all on both nodes. With a simple loop, it's pretty easy to create all the Shards:

[alfresco@solr_n1 ~]$ solr_host=localhost
[alfresco@solr_n1 ~]$ solr_node_id=1
[alfresco@solr_n1 ~]$ begin_range=0
[alfresco@solr_n1 ~]$ range=19999999
[alfresco@solr_n1 ~]$ total_shards=10
[alfresco@solr_n1 ~]$
[alfresco@solr_n1 ~]$ for shard_id in `seq 0 1`; do
>   end_range=$((${begin_range} + ${range}))
>   curl -v "http://${solr_host}:8983/solr/admin/cores?action=newCore&storeRef=workspace://SpacesStore&numShards=${total_shards}&numNodes=${total_shards}&nodeInstance=${solr_node_id}&template=rerank&coreName=alfresco&shardIds=${shard_id}&property.shard.method=DB_ID_RANGE&property.shard.range=${begin_range}-${end_range}&property.shard.instance=${shard_id}"
>   echo ""
>   echo "  -->  Range N°${shard_id} created with: ${begin_range}-${end_range}"
>   echo ""
>   sleep 2
>   begin_range=$((${end_range} + 1))
> done

*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8983 (#0)
> GET /solr/admin/cores?action=newCore&storeRef=workspace://SpacesStore&numShards=10&numNodes=10&nodeInstance=1&template=rerank&coreName=alfresco&shardIds=0&property.shard.method=DB_ID_RANGE&property.shard.range=0-19999999&property.shard.instance=0 HTTP/1.1
> Host: localhost:8983
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/xml; charset=UTF-8
< Content-Length: 182
<
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">254</int></lst><str name="core">alfresco-0</str>
</response>
* Connection #0 to host localhost left intact

  -->  Range N°0 created with: 0-19999999


*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8983 (#0)
> GET /solr/admin/cores?action=newCore&storeRef=workspace://SpacesStore&numShards=10&numNodes=10&nodeInstance=1&template=rerank&coreName=alfresco&shardIds=1&property.shard.method=DB_ID_RANGE&property.shard.range=20000000-39999999&property.shard.instance=1 HTTP/1.1
> Host: localhost:8983
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/xml; charset=UTF-8
< Content-Length: 182
<
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">228</int></lst><str name="core">alfresco-1</str>
</response>
* Connection #0 to host localhost left intact

  -->  Range N°1 created with: 20000000-39999999

[alfresco@solr_n1 ~]$
[alfresco@solr_n1 ~]$ begin_range=0
[alfresco@solr_n1 ~]$ range=49999999
[alfresco@solr_n1 ~]$ total_shards=4
[alfresco@solr_n1 ~]$ for shard_id in `seq 0 0`; do
>   end_range=$((${begin_range} + ${range}))
>   curl -v "http://${solr_host}:8983/solr/admin/cores?action=newCore&storeRef=archive://SpacesStore&numShards=${total_shards}&numNodes=${total_shards}&nodeInstance=${solr_node_id}&template=rerank&coreName=archive&shardIds=${shard_id}&property.shard.method=DB_ID_RANGE&property.shard.range=${begin_range}-${end_range}&property.shard.instance=${shard_id}"
>   echo ""
>   echo "  -->  Range N°${shard_id} created with: ${begin_range}-${end_range}"
>   echo ""
>   sleep 2
>   begin_range=$((${end_range} + 1))
> done

*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8983 (#0)
> GET /solr/admin/cores?action=newCore&storeRef=archive://SpacesStore&numShards=4&numNodes=4&nodeInstance=1&template=rerank&coreName=archive&shardIds=0&property.shard.method=DB_ID_RANGE&property.shard.range=0-49999999&property.shard.instance=0 HTTP/1.1
> Host: localhost:8983
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/xml; charset=UTF-8
< Content-Length: 181
<
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">231</int></lst><str name="core">archive-0</str>
</response>
* Connection #0 to host localhost left intact

-->  Range N°0 created with: 0-49999999

[alfresco@solr_n1 ~]$

 

On Solr node2, to create the same Shards (another Instance of each Shard) and therefore provide the expected setup, just re-execute the same commands but replace solr_node_id=1 with solr_node_id=2. That's all there is to do on the Solr side; just creating the Shards is sufficient. On the Alfresco side, configure the Shard registration to use the Dynamic mode:

[alfresco@alf_n1 ~]$ cat $CATALINA_HOME/shared/classes/alfresco-global.properties
...
# Solr Sharding
solr.useDynamicShardRegistration=true
search.solrShardRegistry.purgeOnInit=true
search.solrShardRegistry.shardInstanceTimeoutInSeconds=60
search.solrShardRegistry.maxAllowedReplicaTxCountDifference=500
...
[alfresco@alf_n1 ~]$

 

After a quick restart, all the Shard’s Instances will register themselves to Alfresco and you should see that each Shard has its two Shard’s Instances. Thanks to the constant Tracking, Alfresco knows which Shard’s Instances are healthy (up-to-date) and which ones aren’t (either lagging behind or completely silent). When performing searches, Alfresco will make a request to any of the healthy Shard’s Instances. Solr will be aware of the healthy Shard’s Instances as well and it will start the distribution of the search request to all the Shards for the parallel query. This is the communication between the Solr nodes that I mentioned earlier: it’s not really Clustering but rather query distribution between all the healthy Shard’s Instances.
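As a final check, the standard Solr CoreAdmin STATUS action can be used on each Solr node to list the Shard cores that were created (localhost:8983 matches the setup above):

# Lists all cores on this node; alfresco-0, alfresco-1 and archive-0 should appear
curl -s "http://localhost:8983/solr/admin/cores?action=STATUS&wt=json"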

Other posts of this series on Alfresco HA/Clustering:

The article Alfresco Clustering – Solr6 appeared first on the dbi services blog.

AdminClient and Set Commands

DBASolved - Fri, 2019-08-02 13:32

AdminClient is the "new" command line utility that is used with Oracle GoldenGate Microservices. Initially, AdminClient was released with Oracle GoldenGate 12c (12.3.0.0.1) and enhanced in each release thereafter. With this new command line tool, there are a few things you can do that make it a powerful tool for administering Oracle GoldenGate.

Reminder: This is only available in Oracle GoldenGate Microservices Editions.

Features that make this tool so nice:

  • Default command line tool for Microservices
  • Can be installed on a remote Linux machine or on Windows workstations/laptops
  • Can "set" advanced settings that provide a few nice features

The third bullet is what will be the focus of this post.

The "Set" command within AdminClient provides you with options that allow you to extend the command line for Oracle GoldenGate.

After starting the AdminClient, it is possible to see the current settings of these values by using the SHOW command:

Oracle GoldenGate Administration Client for Oracle
Version 19.1.0.0.1 OGGCORE_19.1.0.0.0_PLATFORMS_190524.2201


Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.


Linux, x64, 64bit (optimized) on May 25 2019 02:00:23
Operating system character set identified as US-ASCII.


OGG (not connected) 1> show


Current directory: /home/oracle/software/scripts
COLOR            : OFF
DEBUG            : OFF
EDITOR           : vi
PAGER            : more
VERBOSE          : OFF


OGG (not connected) 2>

 

If you want to change any of these settings, you can simply run "set <option> <value>" at the command prompt. For example, I want to turn on the color option.

OGG (not connected) 2> set color on


OGG (not connected) 3> show


Current directory: /home/oracle/software/scripts
COLOR            : ON
DEBUG            : OFF
EDITOR           : vi
PAGER            : more
VERBOSE          : OFF


OGG (not connected) 4>

 

Now that we can set these values and change how AdminClient responds, how can these settings be automated (to a degree)? To do this, you can write a wrapper around the execution of the AdminClient executable (similar to my post on resolving the OGG-01525 error). Within this wrapper, the setting you want to change has to be prefixed with ADMINCLIENT_. This would look like this:

export ADMINCLIENT_COLOR=<value>

Note: The <value> is case sensitive.

My shell script for AdminClient with the settings I like to have turned on is set up as follows:

#!/bin/bash


export OGG_VAR_HOME=/tmp
export ADMINCLIENT_COLOR=ON
export ADMINCLIENT_DEBUG=OFF


${OGG_HOME}/bin/adminclient

 

Now, when I start AdminClient, I have all the settings I want for my environment. Plus, the ones I do not set will take the default settings.

[oracle@ogg19c scripts]$ sh ./adminclient.sh
Oracle GoldenGate Administration Client for Oracle
Version 19.1.0.0.1 OGGCORE_19.1.0.0.0_PLATFORMS_190524.2201


Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.


Linux, x64, 64bit (optimized) on May 25 2019 02:00:23
Operating system character set identified as US-ASCII.


OGG (not connected) 1> show


Current directory: /home/oracle/software/scripts
COLOR            : ON
DEBUG            : OFF
EDITOR           : vi
PAGER            : more
VERBOSE          : OFF


OGG (not connected) 2>

 

Enjoy!!!

Categories: DBA Blogs

Taking Pivotal Build Service (PBS) for a test drive

Pas Apicella - Fri, 2019-08-02 05:13
Pivotal Build Service ALPHA was just released, and in this blog post let's take it for a test drive to see how it works. The Pivotal blog post about this release is below. In short, it assembles and updates containers in Kubernetes.

https://content.pivotal.io/blog/pivotal-build-service-now-alpha-assembles-and-updates-containers-in-kubernetes

Steps:

1. Once you have deployed Pivotal Build Service, the pb CLI can be used to target it with the following command.

Note: Use the --skip-ssl-validation flag if the Pivotal Build Service targets a UAA that has a self-signed CA cert

$ pb api set https://pbs.picorivera.cf-app.com --skip-ssl-validation
Successfully set 'https://pbs.picorivera.cf-app.com' as the Build Service

2. Login using "pb login" as shown below

$ pb login
Target Build Server at: https://pbs.picorivera.cf-app.com

Username: papicella@gmail.com
Password: ******
Login successful

Using the Pivotal Build Service (PBS) requires us to create a TEAM and IMAGE. Both are explained below.

TEAM: A team is an entity on Pivotal Build Service that is used to manage authentication for the images built by Pivotal Build Service and to manage registry and git credentials for the images managed by the team.

3. Create a TEAM yaml as per below and then apply that config using the pb cli

example-team.yaml

name: example-team-name
registries:
- registry: index.docker.io
  username: pasapples
  password: *****
repositories:
- domain: github.com
  username: papicella
  password: *****

$ pb team apply -f example-team.yaml
Successfully applied team 'example-team-name'

IMAGE: An image defines the specification that Pivotal Build Service uses to create images for a user.

4. Create an IMAGE yaml as per below and then apply that config using the pb cli. PBS will automatically kick off a build.

example-image.yaml

team: example-team-name
source:
  git:
    url: https://github.com/papicella/pbs-demo
    revision: master
image:
  tag: pasapples/pbs-demo-image

$ pb image apply -f example-image.yaml
Successfully applied image configuration 'pasapples/pbs-demo-image'

$ pb image builds pasapples/pbs-demo-image
Build    Status      Image    Started Time           Finished Time    Reason
-----    ------      -----    ------------           -------------    ------
    1    BUILDING    --       2019-08-02 09:43:06    --               CONFIG

5. You can view the logs of the build using its ID as shown below, driving off the Build ID.

$ pb image logs pasapples/pbs-demo-image -b 1

papicella@papicella:~$ pb image logs pasapples/pbs-demo-image -b 1
[build-step-credential-initializer] {"level":"info","ts":1564739008.561973,"logger":"fallback-logger","caller":"creds-init/main.go:40","msg":"Credentials initialized.","commit":"002a41a"}
[build-step-credential-initializer]
[build-step-git-source-0] git-init:main.go:81: Successfully cloned "https://github.com/papicella/pbs-demo" @ "c1aae50feaffcd61c521796cd675e6576e58bc64" in path "/workspace"
[build-step-git-source-0]
[build-step-prepare]
[build-step-detect] Trying group 1 out of 3 with 27 buildpacks...
[build-step-detect] ======== Results ========
[build-step-detect] skip: Cloud Foundry Archive Expanding Buildpack
[build-step-detect] pass: Pivotal OpenJDK Buildpack
[build-step-detect] pass: Pivotal Build System Buildpack
[build-step-detect] pass: Cloud Foundry JVM Application Buildpack
[build-step-detect] pass: Cloud Foundry Spring Boot Buildpack
[build-step-detect] pass: Cloud Foundry Apache Tomcat Buildpack
[build-step-detect] pass: Cloud Foundry DistZip Buildpack
[build-step-detect] skip: Cloud Foundry Procfile Buildpack
[build-step-detect] skip: Pivotal AppDynamics Buildpack
[build-step-detect] skip: Pivotal AspectJ Buildpack
[build-step-detect] skip: Pivotal CA Introscope Buildpack
[build-step-detect] pass: Pivotal Client Certificate Mapper Buildpack

....


6. After a few minutes or so, we will see that we have built our initial image from the GitHub repo and that the OCI-compliant image built using Cloud Native Buildpacks has been created on our DockerHub account.

$ pb image builds pasapples/pbs-demo-image
Build    Status     Image       Started Time           Finished Time          Reason
-----    ------     -----       ------------           -------------          ------
    1    SUCCESS    98239112    2019-08-02 09:43:06    2019-08-02 09:44:34    CONFIG

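Because the result is a plain OCI image on DockerHub, it can be pulled and run anywhere. As a quick sanity check (this assumes the Spring Boot app in the repo listens on the default port 8080):

docker pull pasapples/pbs-demo-image
docker run -p 8080:8080 pasapples/pbs-demo-image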


One of the PBS jobs is to keep this image updated as new successful commits occur on the master branch. Let's show how this works below.

7. Let's make a change to the code in our GitHub repo; here I do this in IntelliJ IDEA.



8. Commit the changes as shown below



9. Let's see if PBS actually started a new build for us; indeed, we should see that it is doing so.

$ pb image builds pasapples/pbs-demo-image
Build    Status      Image       Started Time           Finished Time          Reason
-----    ------      -----       ------------           -------------          ------
    1    SUCCESS     98239112    2019-08-02 09:43:06    2019-08-02 09:44:34    CONFIG
    2    BUILDING    --          2019-08-02 09:57:11    --                     COMMIT

10. We can tail the logs as shown below and actually tail the build logs live using "-f"

papicella@papicella:~$ pb image logs pasapples/pbs-demo-image -b 2 -f
[build-step-credential-initializer] {"level":"info","ts":1564739850.5331886,"logger":"fallback-logger","caller":"creds-init/main.go:40","msg":"Credentials initialized.","commit":"002a41a"}
[build-step-credential-initializer]
[build-step-git-source-0] git-init:main.go:81: Successfully cloned "https://github.com/papicella/pbs-demo" @ "0bb81c7523be7ada3ed956569d0241cda6b410d2" in path "/workspace"
[build-step-git-source-0]
[build-step-prepare]
[build-step-detect] Trying group 1 out of 3 with 27 buildpacks...
[build-step-detect] ======== Results ========
[build-step-detect] skip: Cloud Foundry Archive Expanding Buildpack
[build-step-detect] pass: Pivotal OpenJDK Buildpack
[build-step-detect] pass: Pivotal Build System Buildpack
[build-step-detect] pass: Cloud Foundry JVM Application Buildpack
[build-step-detect] pass: Cloud Foundry Spring Boot Buildpack
[build-step-detect] pass: Cloud Foundry Apache Tomcat Buildpack
[build-step-detect] pass: Cloud Foundry DistZip Buildpack
[build-step-detect] skip: Cloud Foundry Procfile Buildpack
[build-step-detect] skip: Pivotal AppDynamics Buildpack
[build-step-detect] skip: Pivotal AspectJ Buildpack
[build-step-detect] skip: Pivotal CA Introscope Buildpack
[build-step-detect] pass: Pivotal Client Certificate Mapper Buildpack
[build-step-detect] skip: Pivotal Elastic APM Buildpack
[build-step-detect] skip: Pivotal JaCoCo Buildpack
[build-step-detect] skip: Pivotal JProfiler Buildpack
[build-step-detect] skip: Pivotal JRebel Buildpack
[build-step-detect] skip: Pivotal New Relic Buildpack
[build-step-detect] skip: Pivotal OverOps Buildpack
[build-step-detect] skip: Pivotal Riverbed AppInternals Buildpack
[build-step-detect] skip: Pivotal SkyWalking Buildpack
[build-step-detect] skip: Pivotal YourKit Buildpack
[build-step-detect] skip: Cloud Foundry Azure Application Insights Buildpack
[build-step-detect] skip: Cloud Foundry Debug Buildpack
[build-step-detect] skip: Cloud Foundry Google Stackdriver Buildpack
[build-step-detect] skip: Cloud Foundry JDBC Buildpack
[build-step-detect] skip: Cloud Foundry JMX Buildpack
[build-step-detect] pass: Cloud Foundry Spring Auto-reconfiguration Buildpack
[build-step-detect]
[build-step-restore] Restoring cached layer 'io.pivotal.openjdk:openjdk-jdk'
[build-step-restore] Restoring cached layer 'io.pivotal.buildsystem:build-system-application'
[build-step-restore] Restoring cached layer 'io.pivotal.buildsystem:build-system-cache'
[build-step-restore] Restoring cached layer 'org.cloudfoundry.jvmapplication:executable-jar'
[build-step-restore] Restoring cached layer 'org.cloudfoundry.springboot:spring-boot'
[build-step-restore]
[build-step-analyze] Analyzing image 'index.docker.io/pasapples/pbs-demo-image@sha256:982391123b47cdbac534aaeed78c5e121d89d2064b53897c23f2248a7658fa50'
[build-step-analyze] Using cached layer 'io.pivotal.openjdk:openjdk-jdk'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:java-security-properties'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:jvmkill'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:link-local-dns'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:memory-calculator'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:openjdk-jre'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:security-provider-configurer'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:class-counter'
[build-step-analyze] Using cached layer 'io.pivotal.buildsystem:build-system-application'
[build-step-analyze] Using cached layer 'io.pivotal.buildsystem:build-system-cache'
[build-step-analyze] Using cached launch layer 'org.cloudfoundry.jvmapplication:executable-jar'
[build-step-analyze] Rewriting metadata for layer 'org.cloudfoundry.jvmapplication:executable-jar'
[build-step-analyze] Using cached launch layer 'org.cloudfoundry.springboot:spring-boot'
[build-step-analyze] Rewriting metadata for layer 'org.cloudfoundry.springboot:spring-boot'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.clientcertificatemapper:client-certificate-mapper'
[build-step-analyze] Writing metadata for uncached layer 'org.cloudfoundry.springautoreconfiguration:auto-reconfiguration'
[build-step-analyze]
[build-step-build]
[build-step-build] Pivotal OpenJDK Buildpack 1.0.0-M9
[build-step-build]   OpenJDK JDK 11.0.3: Reusing cached layer
[build-step-build]   OpenJDK JRE 11.0.3: Reusing cached layer
[build-step-build]   Java Security Properties 1.0.0-M9: Reusing cached layer
[build-step-build]   Security Provider Configurer 1.0.0-M9: Reusing cached layer
[build-step-build]   Link-Local DNS 1.0.0-M9: Reusing cached layer
[build-step-build]   JVMKill Agent 1.16.0: Reusing cached layer
[build-step-build]   Class Counter 1.0.0-M9: Reusing cached layer
[build-step-build]   Memory Calculator 4.0.0: Reusing cached layer
[build-step-build]
[build-step-build] Pivotal Build System Buildpack 1.0.0-M9
[build-step-build]     Using wrapper
[build-step-build]     Linking Cache to /home/vcap/.m2
[build-step-build]   Compiled Application (141 files): Contributing to layer
[build-step-build] [INFO] Scanning for projects...
[build-step-build] [INFO]
[build-step-build] [INFO] ------------------------< com.example:pbs-demo >------------------------
[build-step-build] [INFO] Building pbs-demo 0.0.1-SNAPSHOT
[build-step-build] [INFO] --------------------------------[ jar ]---------------------------------
[build-step-build] [INFO]
[build-step-build] [INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ pbs-demo ---
[build-step-build] [INFO] Using 'UTF-8' encoding to copy filtered resources.
[build-step-build] [INFO] Copying 1 resource
[build-step-build] [INFO] Copying 0 resource
[build-step-build] [INFO]
[build-step-build] [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ pbs-demo ---
[build-step-build] [INFO] Changes detected - recompiling the module!
[build-step-build] [INFO] Compiling 9 source files to /workspace/target/classes
[build-step-build] [INFO]
[build-step-build] [INFO] --- maven-resources-plugin:3.1.0:testResources (default-testResources) @ pbs-demo ---
[build-step-build] [INFO] Not copying test resources
[build-step-build] [INFO]
[build-step-build] [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ pbs-demo ---
[build-step-build] [INFO] Not compiling test sources
[build-step-build] [INFO]
[build-step-build] [INFO] --- maven-surefire-plugin:2.22.2:test (default-test) @ pbs-demo ---
[build-step-build] [INFO] Tests are skipped.
[build-step-build] [INFO]
[build-step-build] [INFO] --- maven-jar-plugin:3.1.2:jar (default-jar) @ pbs-demo ---
[build-step-build] [INFO] Building jar: /workspace/target/pbs-demo-0.0.1-SNAPSHOT.jar
[build-step-build] [INFO]
[build-step-build] [INFO] --- spring-boot-maven-plugin:2.1.6.RELEASE:repackage (repackage) @ pbs-demo ---
[build-step-build] [INFO] Replacing main artifact with repackaged archive
[build-step-build] [INFO] ------------------------------------------------------------------------
[build-step-build] [INFO] BUILD SUCCESS
[build-step-build] [INFO] ------------------------------------------------------------------------
[build-step-build] [INFO] Total time:  7.214 s
[build-step-build] [INFO] Finished at: 2019-08-02T09:57:52Z
[build-step-build] [INFO] ------------------------------------------------------------------------
[build-step-build]   Removing source code
[build-step-build]
[build-step-build] Cloud Foundry JVM Application Buildpack 1.0.0-M9
[build-step-build]   Executable JAR: Reusing cached layer
[build-step-build]   Process types:
[build-step-build]     executable-jar: java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[build-step-build]     task:           java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[build-step-build]     web:            java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[build-step-build]
[build-step-build] Cloud Foundry Spring Boot Buildpack 1.0.0-M9
[build-step-build]   Spring Boot 2.1.6.RELEASE: Reusing cached layer
[build-step-build]   Process types:
[build-step-build]     spring-boot: java -cp $CLASSPATH $JAVA_OPTS com.example.pbsdemo.PbsDemoApplication
[build-step-build]     task:        java -cp $CLASSPATH $JAVA_OPTS com.example.pbsdemo.PbsDemoApplication
[build-step-build]     web:         java -cp $CLASSPATH $JAVA_OPTS com.example.pbsdemo.PbsDemoApplication
[build-step-build]
[build-step-build] Pivotal Client Certificate Mapper Buildpack 1.0.0-M9
[build-step-build]   Cloud Foundry Client Certificate Mapper 1.8.0: Reusing cached layer
[build-step-build]
[build-step-build] Cloud Foundry Spring Auto-reconfiguration Buildpack 1.0.0-M9
[build-step-build]   Spring Auto-reconfiguration 2.7.0: Reusing cached layer
[build-step-build] 

...


11. This time the build is faster: because we are using Cloud Native Buildpacks (a CNCF project), only the layers that changed are rebuilt rather than the whole image. You can see this from the time taken by build "2"

$ pb image builds pasapples/pbs-demo-image
Build    Status     Image       Started Time           Finished Time          Reason
-----    ------     -----       ------------           -------------          ------
    1    SUCCESS    98239112    2019-08-02 09:43:06    2019-08-02 09:44:34    CONFIG
    2    SUCCESS    1e4b63b1    2019-08-02 09:57:11    2019-08-02 09:58:15    COMMIT

Hopefully this demo shows what PBS is all about and how it simplifies creating and keeping your OCI-compliant images up to date.
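
For quick reference, the whole workflow shown in this demo boils down to a handful of pb commands (repeated here as a recap, using the image name from the demo):

# create or update the image configuration from its YAML definition
$ pb image apply -f example-image.yaml

# list the builds triggered for that image (CONFIG- or COMMIT-driven)
$ pb image builds pasapples/pbs-demo-image

# follow the logs of a specific build live
$ pb image logs pasapples/pbs-demo-image -b 2 -f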

More Information:

1. Get started with Pivotal Build Service.
https://github.com/pivotal-cf/docs-build-service/blob/master/using.md

2. Request alpha access to Build Service via this form, or by reaching out to your account team. Once you’ve gained access, you’ll see the bits up on PivNet.

3. Cloud Native buildpacks
https://buildpacks.io/


Categories: Fusion Middleware

Alfresco Clustering – Apache HTTPD as Load Balancer

Yann Neuhaus - Fri, 2019-08-02 01:00

In previous blogs, I talked about some basics and presented some possible architectures for Alfresco, and I covered the Clustering setup for the Alfresco Repository, Alfresco Share and ActiveMQ. In this one, I will talk about the front-end layer, but in a very particular setup because it will also act as a Load Balancer. For an Alfresco solution, you can choose the front-end that you prefer: it can simply act as a front-end to protect your Alfresco back-end components, to add SSL, or whatever. There is no real preference, but you will obviously need to know how to configure it. I posted a blog some years ago about Apache HTTPD as a simple front-end (here), or you can check the Alfresco documentation, which now includes a section for that as well, but there is no official documentation for a Load Balancer setup.

In an Alfresco architecture that includes HA/Clustering you will, at some point, need a Load Balancer. From time to time, you will come across companies that do not already have a Load Balancer available and you might therefore have to provide something to fill this gap. Since you will most probably (should?) already have a front-end to protect Alfresco, why not use it as a Load Balancer as well? In this blog, I chose Apache HTTPD because that’s the front-end I usually use and I know it works fine as a LB as well.

In the architectures that I described in the first blog of this series, there was always a front-end installed on each node with Alfresco Share and there was a LB above that. Here, these two boxes are actually together. There are multiple ways to set that up, but I didn’t want to talk about it in my first blog because it’s not really related to Alfresco; it sits above it, so it would just have multiplied the possible architectures that I wanted to present and my blog would have been way too long. There are also no communications between the different front-end nodes because, technically speaking, we aren’t going to set up Apache HTTPD as a Cluster, we only need to provide a High Availability solution.

Alright so let’s say that you don’t have a Load Balancer available and you want to use Apache HTTPD as a front-end+LB for a two-node Cluster. There are several solutions so here are two possible ways to do that from an inbound communication point of view that will still provide redundancy:

  • Set up a Round Robin DNS that points to both Apache HTTPD node1 and node2. The DNS will redirect connections to either of the two Apache HTTPD nodes (Active/Active); see the sketch after this list
  • Set up a Failover DNS with a pretty low TimeToLive (TTL) which will point to a single Apache HTTPD node and redirect all traffic there. If this one isn’t available, it will fail over to the second one (Active/Passive)
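
For illustration, the Round Robin option can be as simple as publishing two A records for the same name. The sketch below is hypothetical (the zone, host names and IP addresses are assumptions, not values from this setup):

; Round Robin DNS: both Apache HTTPD nodes behind a single name
alfresco  300  IN  A  10.10.10.11  ; httpd_n1
alfresco  300  IN  A  10.10.10.12  ; httpd_n2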

 

In both cases above, the Apache HTTPD configuration can be exactly the same, it will work. From an outbound communication point of view, Apache HTTPD will talk directly with all the Share nodes behind it. To avoid disconnections and loss of sessions in case an Apache HTTPD goes down, the solution needs to support session stickiness across all Apache HTTPD nodes. With that, all communications coming from a single browser will always be redirected to the same back-end server, which ensures that the sessions are still intact even if you lose an Apache HTTPD. I mentioned previously that there won’t be any communications between the different front-ends, so this session stickiness must be based on something present inside the session (header or cookie) or inside the URL.

With Apache HTTPD, you can use the Proxy modules to provide both a front-end configuration as well as a Load Balancer but, in this blog, I will use the JK module. The JK module is provided by Apache for communications between Apache HTTPD and Apache Tomcat. It has been designed and optimized for this purpose and it also provides/supports a Load Balancer configuration.
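
On Ubuntu, the libapache2-mod-jk package normally loads the module and points it to its workers.properties file for you. If you build Apache HTTPD and mod_jk from source instead, you would have to declare the global JK directives yourself; a minimal sketch (the paths are assumptions) could look like this:

# Global mod_jk configuration (e.g. in jk.conf or the main httpd.conf)
JkWorkersFile /etc/libapache2-mod-jk/workers.properties
JkLogFile     /var/log/apache2/mod_jk.log
JkLogLevel    info
JkShmFile     /var/run/apache2/jk-runtime-status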

 

I. Apache HTTPD setup for a single back-end node

For this example, I will use the package provided by Ubuntu for a simple installation. You can obviously build it from source to customize it, add your best practices, and so on. This has nothing to do with the Clustering setup; it’s a simple front-end configuration for any installation. So let’s install a basic Apache HTTPD:

[alfresco@httpd_n1 ~]$ sudo apt-get install apache2 libapache2-mod-jk
[alfresco@httpd_n1 ~]$ sudo systemctl enable apache2.service
[alfresco@httpd_n1 ~]$ sudo systemctl daemon-reload
[alfresco@httpd_n1 ~]$ sudo a2enmod rewrite
[alfresco@httpd_n1 ~]$ sudo a2enmod ssl

 

Then to configure it for a single back-end Alfresco node (I’m just showing a minimal configuration again, there is much more to do to add security & restrictions around Alfresco and mod_jk):

[alfresco@httpd_n1 ~]$ cat /etc/apache2/sites-available/alfresco-ssl.conf
...
<VirtualHost *:80>
    RewriteRule ^/?(.*) https://%{HTTP_HOST}/$1 [R,L]
</VirtualHost>

<VirtualHost *:443>
    ServerName            dns.domain
    ServerAlias           dns.domain dns
    ServerAdmin           email@domain
    SSLEngine             on
    SSLProtocol           -all +TLSv1.2
    SSLCipherSuite        EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:AES2
    SSLHonorCipherOrder   on
    SSLVerifyClient       none
    SSLCertificateFile    /etc/pki/tls/certs/dns.domain.crt
    SSLCertificateKeyFile /etc/pki/tls/private/dns.domain.key

    RewriteRule ^/$ https://%{HTTP_HOST}/share [R,L]

    JkMount /* alfworker
</VirtualHost>
...
[alfresco@httpd_n1 ~]$
[alfresco@httpd_n1 ~]$ cat /etc/libapache2-mod-jk/workers.properties
worker.list=alfworker
worker.alfworker.type=ajp13
worker.alfworker.port=8009
worker.alfworker.host=share_n1.domain
worker.alfworker.lbfactor=1
[alfresco@httpd_n1 ~]$
[alfresco@httpd_n1 ~]$ sudo a2ensite alfresco-ssl
[alfresco@httpd_n1 ~]$ sudo a2dissite 000-default
[alfresco@httpd_n1 ~]$ sudo rm /etc/apache2/sites-enabled/000-default.conf
[alfresco@httpd_n1 ~]$
[alfresco@httpd_n1 ~]$ sudo service apache2 restart

 

That should do it for a single back-end Alfresco node. Again, this was just an example; I wouldn’t recommend using the configuration (inside the alfresco-ssl.conf file) as is, there is much more to do for security reasons.

 

II. Adaptation for a Load Balancer configuration

If you want to configure your Apache HTTPD as a Load Balancer, then on top of the standard setup shown above, you just have to modify two things:

  • Modify the JK module configuration to use a Load Balancer
  • Modify the Apache Tomcat configuration to add an identifier so that Apache HTTPD is able to redirect the communication to the correct back-end node (session stickiness). This ID, put in the Apache Tomcat configuration, will extend the Session ID like this: <session_id>.<tomcat_id>

 

So on all the nodes hosting the Apache HTTPD, you should put the exact same configuration:

[alfresco@httpd_n1 ~]$ cat /etc/libapache2-mod-jk/workers.properties
worker.list=alfworker

worker.alfworker.type=lb
worker.alfworker.balance_workers=node1,node2
worker.alfworker.sticky_session=true
worker.alfworker.method=B

worker.node1.type=ajp13
worker.node1.port=8009
worker.node1.host=share_n1.domain
worker.node1.lbfactor=1

worker.node2.type=ajp13
worker.node2.port=8009
worker.node2.host=share_n2.domain
worker.node2.lbfactor=1
[alfresco@httpd_n1 ~]$
[alfresco@httpd_n1 ~]$ sudo service apache2 reload

 

With the above configuration, we keep the same JK Worker (alfworker) but instead of using an ajp13 type, we use a lb type, which is an encapsulation. The alfworker will use 2 sub-workers named node1 and node2; these are just generic names. The alfworker will also enable stickiness and use the method B (Busyness), which means that for new sessions, Apache HTTPD will choose the worker with the fewest requests being served, divided by the lbfactor value.

Each sub-worker (node1 and node2) defines its type, which is ajp13 this time, the port and host it should target (where the Share nodes are located) and the lbfactor. As mentioned above, increasing the lbfactor means that more requests are going to be sent to that worker:

  • For the node2 to serve 100% more requests than the node1 (x2), then set worker.node1.lbfactor=1 and worker.node2.lbfactor=2
  • For the node2 to serve 50% more requests than the node1 (x1.5), then set worker.node1.lbfactor=2 and worker.node2.lbfactor=3
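
As a concrete sketch, the second bullet above simply translates into the following lbfactor values in the workers.properties file shown earlier (everything else stays the same):

worker.node1.lbfactor=2
worker.node2.lbfactor=3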

 

The second thing to do is to modify the Apache Tomcat configuration to add a specific ID. On the Share node1:

[alfresco@share_n1 ~]$ grep "<Engine" $CATALINA_HOME/conf/server.xml
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="share_n1">
[alfresco@share_n1 ~]$

 

On the Share node2:

[alfresco@share_n2 ~]$ grep "<Engine" $CATALINA_HOME/conf/server.xml
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="share_n2">
[alfresco@share_n2 ~]$

 

The value put in the jvmRoute parameter is just a string, so it can be anything, but it must be unique across all Share nodes so that the Apache HTTPD JK module can find the correct back-end node to which it should transfer the requests.

It’s that simple to configure Apache HTTPD as a Load Balancer in front of Alfresco. To check which back-end server you are currently using, you can use the browser’s developer tools, in particular the network recording: in the headers/cookies section it will display the Session ID, which ends with the value that you put in jvmRoute.
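
If you prefer the command line, you can also check the JSESSIONID cookie directly. This is only a quick hedged sketch using the front-end name from this example; the cookie should be set on the first hit to Share:

[alfresco@httpd_n1 ~]$ curl -k -s -D - -o /dev/null https://dns.domain/share/page | grep -i 'Set-Cookie'

The session ID in the cookie should end with ".share_n1" or ".share_n2", i.e. the jvmRoute of the Share node that served the request.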

 

 

Other posts of this series on Alfresco HA/Clustering:

The post Alfresco Clustering – Apache HTTPD as Load Balancer appeared first on Blog dbi services.

Too Old To Remember

Michael Dinh - Thu, 2019-08-01 12:01

Is it required to run datapatch after creating a database?

Why bother trying to remember when you can run datapatch -prereq to find out?

Test case for 12.2.

Database July 2019 Release Update 12.2 applied:

[oracle@racnode-dc2-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.2.0.1/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
Oracle Instance alive for sid "+ASM1"
+ /u01/app/12.2.0.1/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/12.2.0.1/grid/OPatch/opatch lspatches
29770090;ACFS JUL 2019 RELEASE UPDATE 12.2.0.1.190716 (29770090)
29770040;OCW JUL 2019 RELEASE UPDATE 12.2.0.1.190716 (29770040)
29757449;Database Jul 2019 Release Update : 12.2.0.1.190716 (29757449)
28566910;TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:180802.1448.S) (28566910)
26839277;DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)

OPatch succeeded.
+ exit
[oracle@racnode-dc2-1 ~]$

Create 12.2 RAC database:

[oracle@racnode-dc2-1 ~]$ dbca -silent -createDatabase -characterSet AL32UTF8 \
> -createAsContainerDatabase true \
> -templateName General_Purpose.dbc \
> -gdbname hawkcdb -sid hawkcdb -responseFile NO_VALUE \
> -sysPassword Oracle_4U! -systemPassword Oracle_4U! \
> -numberOfPDBs 1 -pdbName pdb01 -pdbAdminPassword Oracle_4U! \
> -databaseType MULTIPURPOSE \
> -automaticMemoryManagement false -totalMemory 3072 \
> -storageType ASM -diskGroupName DATA -recoveryGroupName FRA \
> -redoLogFileSize 100 \
> -emConfiguration NONE \
> -nodeinfo racnode-dc2-1,racnode-dc2-2 \
> -listeners LISTENER \
> -ignorePreReqs

Copying database files
21% complete
Creating and starting Oracle instance
35% complete
Creating cluster database views
50% complete
Completing Database Creation
57% complete
Creating Pluggable Databases
78% complete
Executing Post Configuration Actions
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/hawkcdb/hawkcdb.log" for further details.
[oracle@racnode-dc2-1 ~]$

Run datapatch -prereq for 12.2

[oracle@racnode-dc2-1 ~]$ $ORACLE_HOME/OPatch/datapatch -prereq
SQL Patching tool version 12.2.0.1.0 Production on Thu Aug  1 17:45:13 2019
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Determining current state...done
Adding patches to installation queue and performing prereq checks...done

**********************************************************************
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB01
    Nothing to roll back
    Nothing to apply
**********************************************************************

SQL Patching tool complete on Thu Aug  1 17:46:39 2019
[oracle@racnode-dc2-1 ~]$

Test case for 12.1.

Database July 2019 Bundle Patch 12.1 applied:

[oracle@racnode-dc1-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.2/grid
ORACLE_HOME=/u01/app/12.1.0.2/grid
Oracle Instance alive for sid "+ASM1"
+ /u01/app/12.1.0.2/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/12.1.0.2/grid/OPatch/opatch lspatches
29509318;OCW PATCH SET UPDATE 12.1.0.2.190716 (29509318)
29496791;Database Bundle Patch : 12.1.0.2.190716 (29496791)
29423125;ACFS PATCH SET UPDATE 12.1.0.2.190716 (29423125)
26983807;WLM Patch Set Update: 12.1.0.2.180116 (26983807)

OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 ~]$

Create 12.1 RAC database:

[oracle@racnode-dc1-1 ~]$ dbca -silent -createDatabase -characterSet AL32UTF8 \
> -createAsContainerDatabase true \
> -templateName General_Purpose.dbc \
> -gdbname cdbhawk -sid cdbhawk -responseFile NO_VALUE \
> -sysPassword Oracle_4U! -systemPassword Oracle_4U! \
> -numberOfPDBs 1 -pdbName pdb01 -pdbAdminPassword Oracle_4U! \
> -databaseType MULTIPURPOSE \
> -automaticMemoryManagement false -totalMemory 3072 \
> -storageType ASM -diskGroupName DATA -recoveryGroupName FRA \
> -redoLogFileSize 100 \
> -emConfiguration NONE \
> -nodeinfo racnode-dc1-1,racnode-dc1-2 \
> -listeners LISTENER \
> -ignorePreReqs

Copying database files
23% complete
Creating and starting Oracle instance
38% complete
Creating cluster database views
54% complete
Completing Database Creation
77% complete
Creating Pluggable Databases
81% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/cdbhawk/cdbhawk.log" for further details.
[oracle@racnode-dc1-1 ~]$

Run datapatch -prereq for 12.1

[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/datapatch -prereq
SQL Patching tool version 12.1.0.2.0 Production on Thu Aug  1 18:24:53 2019
Copyright (c) 2012, 2017, Oracle.  All rights reserved.

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done
Adding patches to installation queue and performing prereq checks...done

**********************************************************************
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB01
    Nothing to roll back
    The following patches will be applied:
      29496791 (DATABASE BUNDLE PATCH 12.1.0.2.190716)
**********************************************************************

SQL Patching tool complete on Thu Aug  1 18:26:26 2019
[oracle@racnode-dc1-1 ~]$

For 12.1, running datapatch after database creation is required; for 12.2, it is not.
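
If the prereq check reports pending SQL patches, as in the 12.1 case above, the follow-up is simply to run datapatch for real. A sketch of the standard invocation (output omitted, and it will vary with the patches queued):

[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/datapatch -verbose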

Recursive Subquery

Bar Solutions - Thu, 2019-08-01 03:01

At my current assignment we are processing files coming from an online system to be inserted into our database (kind of a data warehouse). This is done using external tables and a scheduled job. The job just checks if there is a file available and processes it. The trouble is that the files might not make it to our database, for various reasons.
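
As a side note, such an external table can be a very small definition. Here is a minimal sketch for illustration only (the directory object, file name and column layout are assumptions, not the actual production code):

-- hypothetical external table reading one raw line per record
create table incoming_files_ext
( record_line varchar2(4000)
)
organization external
( type oracle_loader
  default directory incoming_dir
  access parameters
  ( records delimited by newline
    fields
    ( record_line char(4000)
    )
  )
  location ('201907101200.tst')
)
reject limit unlimited;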

I want to be able to check if all the files have been processed and no file has gone missing. After processing a file, its name gets logged into a table so I can check this at a later time. Of course, I don’t want to eyeball the list to see if all the files have been processed.

I have been playing around with some queries to find out the gaps. A couple of things to know in advance:

  • The files are named using a timestamp (‘YYYYMMDDHH24MI’)
  • The files are always 5 minutes apart from each other.

Let’s first create a test table:

rem create the testdata table
create table testdata
( filename            varchar2(4000)
)
/

Then I create some testdata. Using the DBMS_RANDOM.VALUE function I determine which records should be inserted.

rem fill the testdata table
begin
  for d in 10 .. 15 loop -- create 6 days of test data
    for hr in 0 .. 23 loop -- for every hour of the day
      for mn in 0 .. 59 loop -- for every minute of the hour
        if mod(mn, 5) = 0 then -- only if it's a 5 minute value
          if trunc(dbms_random.value * 10) <> 4 then -- and our randomizer doesn't end up being 4 (create random gaps)
            insert into testdata
              (filename)
            values
              ('201907' || trim(to_char(d, '09')) || trim(to_char(hr, '09')) || trim(to_char(mn, '09')) || '.tst');
          end if;
        end if;
      end loop;
    end loop;
  end loop;
  commit;
end;
/

My first attempt at this query was to find all the gaps between two filenames which weren’t 5 minutes apart. So I started creating the query. Using subquery factoring I can show the steps I took.
Since the table consists of filenames and not dates (or timestamps), I needed to get the date portion of the filename first:

select to_timestamp(substr(filename, 1, 12), 'YYYYMMDDHH24MISS') filedate
    from testdata
   order by 1

Using this set of data I can create the next step in the query: determining the preceding and following date for each date and, while we’re at it, the gap before and after:

with filenames as
 (select to_timestamp(substr(filename, 1, 12), 'YYYYMMDDHH24MISS') filedate
    from testdata
   order by 1)
select lag(filedate) over(order by filedate) previousfiledate
      ,filedate
      ,lead(filedate) over(order by filedate) nextfiledate
      ,(filedate - lag(filedate) over(order by filedate)) gapbefore
      ,(lead(filedate) over(order by filedate) - filedate) gapafter
  from filenames

Wrapping this set of data into yet another factored subquery, I can remove all the rows that have a gap of exactly 5 minutes.

with filenames as
 (select to_timestamp(substr(filename, 1, 12), 'YYYYMMDDHH24MISS') filedate
    from testdata
   order by 1),
gaps as
 (select lag(filedate) over(order by filedate) previousfiledate
        ,filedate
        ,lead(filedate) over(order by filedate) nextfiledate
        ,(filedate - lag(filedate) over(order by filedate)) gapbefore
        ,(lead(filedate) over(order by filedate) - filedate) gapafter
    from filenames)
select *
  from gaps
 where 1 = 1
   and (gapafter <> to_dsinterval('0 00:05:00'))
    or (gapbefore <> to_dsinterval('0 00:05:00'))

After I got the results of this query, it made me wonder: it shows me where the gaps in the data are, but it doesn’t tell me exactly which file or files are missing. I still have to figure that out myself. It also shows every gap twice, once after one file and once before the next file. There has got to be a better way to find and fill up the gaps. What if I could just generate all the filenames that should be there and then subtract the filenames that have been recorded?
First I need to create a list of all the possible filenames that exist between the first and last recorded filename. I know about a feature called recursive subquery factoring, but I had never used it before. Luckily Tim Hall has written a nice post on this subject.

with
/* 
 * First determine all the possible dates between the first and the last recorded file
 * Using recursive subquery
 * Thanks to Tim Hall for https://oracle-base.com/articles/11g/recursive-subquery-factoring-11gr2
 */
possibledates(thedate) as
 ( -- Anchor member
  select min(to_timestamp(substr(filename, 1, 12), 'YYYYMMDDHH24MI')) thedate
    from testdata
  union all
  -- Recursive member
  select thedate + to_dsinterval('0 00:05:00') thedate
    from possibledates
   where thedate <
         (select max(to_timestamp(substr(filename, 1, 12), 'YYYYMMDDHH24MI'))
            from testdata)),
/*
 * Then determine the filenames from these dates
 */
possiblefilenames as
 (select to_char(thedate, 'YYYYMMDDHH24MI') || '.tst' filename
    from possibledates)
/*
 * Using a simple minus operation, determine which filenames are missing
 */
select filename
  from possiblefilenames
minus
select filename
  from testdata

Not only is this query very fast, it also removes the need for me to eyeball the data to find the missing files. It simply shows which files are actually missing.
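
As an aside, the same list of candidate filenames could also be generated with the classic CONNECT BY LEVEL row-generator trick instead of recursive subquery factoring. This is just a hedged alternative sketch, not the query used above:

with bounds as
 (select min(to_timestamp(substr(filename, 1, 12), 'YYYYMMDDHH24MI')) mindate
        ,max(to_timestamp(substr(filename, 1, 12), 'YYYYMMDDHH24MI')) maxdate
    from testdata),
possiblefilenames as
 (select to_char(mindate + (level - 1) * to_dsinterval('0 00:05:00'), 'YYYYMMDDHH24MI') || '.tst' filename
    from bounds
  connect by mindate + (level - 1) * to_dsinterval('0 00:05:00') <= maxdate)
select filename
  from possiblefilenames
minus
select filename
  from testdata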
