Feed aggregator

Installing Nginx

DBASolved - Sat, 2019-12-28 22:03

With Oracle GoldenGate Microservices, you have the option of using a reverse proxy or not.  In reality, it is a best practice to install the recommended reverse proxy for the architecture.  The main benefit here is the security aspect of using it.  In Oracle GoldenGate Microservices, depending on the number of deployments you have per […]

The post Installing Nginx appeared first on DBASolved.

Categories: DBA Blogs

[Troubleshoot] instance ocid1.instance.oc1.iad.XX Not Found While Deploying EBS Cloud Manager: config.pl

Online Apps DBA - Sat, 2019-12-28 07:39

[Troubleshoot] instance ocid1.instance.oc1.iad.XX Not Found While Deploying EBS Cloud Manager: config.pl While running the “config.pl” to configure the EBS Cloud Manager did you encounter the ⚠”instance ocid1.instance.oc1.iad.XX Not Found” issue? If yes, check the blog post at https://k21academy.com/ebscloud35 that covers the root cause & fixes of the issue encountered while running the “config.pl” and things […]

The post [Troubleshoot] instance ocid1.instance.oc1.iad.XX Not Found While Deploying EBS Cloud Manager: config.pl appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[New Feature] Share Block Volume with Multiple Instances In Oracle Cloud (OCI)

Online Apps DBA - Sat, 2019-12-28 04:10

[New Feature] Share Block Volume with Multiple Instances In Oracle Cloud (OCI) Oracle released Block Volume sharing in Reading/Write Mode that will definitely ease design and lower the storage cost too. To read more about it, check out our blog post at https://k21academy.com/1z0107213 that discusses: ✦ Storage options in ☁Oracle Cloud ✦New Feature Updated By […]

The post [New Feature] Share Block Volume with Multiple Instances In Oracle Cloud (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Use Conda to Generate Requirements.txt for Docker Containers

Pakistan's First Oracle Blog - Fri, 2019-12-27 00:01
pip is the standard Python package manager. A requirements.txt file can be generated in one environment and installed by pip in a new environment. The difference between the tools: conda replicates its own installation, while pip produces a list of the packages that were installed on top of the standard library to make the package you wrote work.

Following are the steps to generate a requirements.txt file to be used inside a Dockerfile for Docker containers:



• conda activate <env-name> switches to your project environment

• conda list shows the packages installed in that environment

• conda list -e > requirements.txt saves the package information to a file in your project folder

• conda env export > <env-name>.yml exports the full environment definition as YAML

• pip freeze prints pip-style requirements for the active environment
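
Note that pip can install the output of pip freeze directly with pip install -r, while the conda list -e format is meant for conda create --file. As a minimal sketch (my addition, not from the original post), a Dockerfile consuming the generated requirements.txt might look like the following; the base image tag and the app.py entry point are assumptions:

# Hypothetical Dockerfile sketch: copy the pinned requirements first so the
# dependency layer is cached, then add the application code.
FROM python:3.8-slim

WORKDIR /app

# requirements.txt generated with "pip freeze" inside the conda environment
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python", "app.py"]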

Hope that helps.
Categories: DBA Blogs

[AZ-103] Microsoft Azure Administrator Certification Exam: Everything You Need To Know

Online Apps DBA - Thu, 2019-12-26 04:33

AZ-103 | Microsoft Azure Administrator Associate The AZ-100 and AZ-101 certifications are being replaced by the new AZ-103 Microsoft Azure Administrator certification exam. Check out our blog post at https://k21academy.com/az10311 which covers: ▪ What is the AZ-103 Certification? ▪ Who This Certification Is For? ▪ Why Should You Go For It? ▪ Exam Details ▪ […]

The post [AZ-103] Microsoft Azure Administrator Certification Exam: Everything You Need To Know appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

1Z0-932 V/S 1Z0-1072: Oracle Cloud Infra Architect Associate Certification

Online Apps DBA - Thu, 2019-12-26 01:56

1Z0-932 V/S 1Z0-1072: Oracle Cloud Infra Architect Associate Certification Oracle has recently introduced a new certification for Oracle Cloud Infrastructure Architect Associate i.e. 1Z0-1072 In our FREE Masterclass, https://k21academy.com/1z0107202, we got a lot of questions regarding what is 1Z0-1072 & how is it different from 1Z0-932? Check at https://k21academy.com/1z0107212 which covers: 1. What is 1Z0-1072 […]

The post 1Z0-932 V/S 1Z0-1072: Oracle Cloud Infra Architect Associate Certification appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Flashback Archive

Jonathan Lewis - Tue, 2019-12-24 14:33

A classic example of Oracle’s “mix and match” problem showed up on the Oracle Developer Forum a few days ago. Sometimes you see two features that are going to be really helpful in your application – and when you combine them something breaks. In this case it was the combination of Virtual Private Database (VPD/FGAC/RLS) and Flashback Data Archive (FDA/FBA) that resulted in the security predicate not being applied the way you would expect, hence allowing users to see data they were not supposed to see.

The OP supplied us with a model (based in part on Tim Hall’s FDA article) to demonstrate the issue on 11.2.0.4, and I’ve hacked it about a bit to explain it here, and to test it on 12.2.0.1 and 19.3.0.0 where the same failure occurs.

I’m going to start with just the VPD part of the setup before adding in the FDA. Most of the code has been written to run as the SYS user and it creates a new tablespace and a couple of users so you may want to do some editing before you try any tests. There’s also a short script at the end of the blog to remove the flashback data archive, tablespace, and users – again, something to be run by SYS.

You’ll note that this script assumes you already have a tablespace called test_8k_assm, and a temporary tablespace called temp. The latter may well be a valid assumption, the former probably isn’t.
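
If you need to create a similar tablespace first, here is a minimal sketch (my addition), assuming OMF is enabled via db_create_file_dest so no datafile name is needed; "segment space management auto" is what gives you ASSM:

create tablespace test_8k_assm
        datafile size 100m autoextend on next 10m
        extent management local
        segment space management auto
;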

rem
rem     Script:         vpd_fda_bug.sql
rem     Author:         Duncs (ODC)
rem     Dated:          Dec 2019
rem
rem     Last tested
rem             19.3.0.0
rem             12.2.0.1
rem             11.2.0.4
rem
rem     Notes
rem     Has to be run as SYS
rem
rem     See also
rem     https://community.oracle.com/thread/4307453
rem


create user vpd_test_data_owner identified by Password_1234 
        default tablespace test_8k_assm
        temporary tablespace temp 
        quota unlimited on test_8k_assm
;
 
grant 
        connect,
        resource, 
        create any context
to
        vpd_test_data_owner
;

grant
        execute on dbms_rls
to
        vpd_test_data_owner
;
 
 
create table vpd_test_data_owner.person (
        person_id       number, 
        surname         varchar2(30), 
        unit_id         number
);

insert into  vpd_test_data_owner.person values (-1, 'One',  -1);
insert into  vpd_test_data_owner.person values (-2, 'Two',  -2);
insert into  vpd_test_data_owner.person values (-3, 'Three',-3);
insert into  vpd_test_data_owner.person values (-4, 'Four', -4);
insert into  vpd_test_data_owner.person values (-5, 'Five', -5);

commit;

create user vpd_test_function_owner identified by Password_1234
        default tablespace test_8k_assm 
        temporary tablespace temp 
        quota unlimited on test_8k_assm
;
 
grant 
        connect, 
        resource
to 
        vpd_test_function_owner
;
 
prompt  ============================================
prompt  Create a packaged function to set a context
prompt  that we will use in a VPD security predicate
prompt  ============================================

create or replace package vpd_test_function_owner.context_api_pkg AS

procedure set_parameter(
        p_name  in  varchar2,
        p_value in  varchar2
);

end context_api_pkg;
/
 
create or replace package body vpd_test_function_owner.context_api_pkg IS
 
procedure set_parameter (
        p_name  in  varchar2,
        p_value in  varchar2
) is
begin
        dbms_session.set_context('my_vpd_context', p_name, p_value);
end set_parameter;

end context_api_pkg;
/

prompt  ======================================================
prompt  Allow public to set the context value.  (Not sensible)
prompt  ======================================================

grant execute on vpd_test_function_owner.context_api_pkg to public;

prompt  ===============================================================
prompt  Create a context that can only be set by our packaged procedure
prompt  ===============================================================

create or replace context my_vpd_context 
        using vpd_test_function_owner.context_api_pkg
;

prompt  =====================================================
prompt  Create a security function that generates a predicate
prompt  based on our context, then create a policy to connect
prompt  the function to the test table for select statements.
prompt  =====================================================
 
create or replace function vpd_test_function_owner.test_vpd_function (
    p_schema  in varchar2 default null
  , p_object  in varchar2 default null
)
return varchar2
as
    lv_unit_id number := nvl(sys_context('my_vpd_context','unit_id'), -1);
begin
    return 'unit_id = ' || lv_unit_id;
end test_vpd_function;
/

begin
      dbms_rls.add_policy (
               object_schema    => 'vpd_test_data_owner'
             , object_name      => 'person'
             , policy_name      => 'test_vpd_policy'
             , function_schema  => 'vpd_test_function_owner'
             , policy_function  => 'test_vpd_function'
             , statement_types  => 'select'
      );
end;
/


There are several quick and dirty bits to the script – you shouldn’t be using the connect and resource roles, for example; they exist only for backwards compatibility and don’t even manage that very well any more. Any grants made should be carefully chosen to be the minimum necessary to achieve the required functionality, and you should be defining roles of your own rather than using pre-existing ones.

Generally you don’t expect to set up a security policy that stops the owner of the data from seeing all the data – and I’ve left the policy to default to dynamic, which means the function will execute on every parse and execute of a statement accessing the table (and that’s something to avoid if you can). For convenience I’ve also allowed the owner of the data to execute the function that changes the context that is used by the predicate function – and you don’t really want to allow anyone who is constrained by a security policy to be able to modify their own access rights like this.

Since the code allows a deliberately lax setup on VPD you could at this point do something like the following to check that VPD is actually working before moving on to test the effect of FDA:

connect vpd_test_data_owner/Password_1234
select * from person;

execute vpd_test_function_owner.context_api_pkg.set_parameter('unit_id',-2)
select * from person;

The first execution of the query should show you only the row where unit_id = -1, as “unit_id = -1” is the default return value from the security function. The second execution should return only the row where unit_id = -2, as the call to set_parameter() changes the context value so that when the security function re-executes it generates a new security predicate “unit_id = -2”. (It’s worth noting that one of the options for security policies is to make them context-dependent so that they re-execute only when the relevant context is changed – but in this case the policy defaults to “re-execute the function on every parse and execute”.)  [NOTE: for some clues on the possible performance impact of badly defined VPD, check the comments made in response to this blog note]
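
For comparison (an illustration of mine, not part of the test script), a context-sensitive version of the policy would pass a policy_type to dbms_rls.add_policy, naming the context and attribute it depends on:

begin
      dbms_rls.add_policy (
               object_schema    => 'vpd_test_data_owner'
             , object_name      => 'person'
             , policy_name      => 'test_vpd_policy'
             , function_schema  => 'vpd_test_function_owner'
             , policy_function  => 'test_vpd_function'
             , statement_types  => 'select'
--
--           re-execute the predicate function only when the named
--           context attribute changes, not on every parse and execute
--
             , policy_type      => dbms_rls.context_sensitive
             , namespace        => 'my_vpd_context'
             , attribute        => 'unit_id'
      );
end;
/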

Once you’re satisfied that the security policy is working correctly you can move on to the second feature – flashback data archive. Logging on as SYS once again, execute the following code – which, amongst other things, creates a new tablespace. You’ll notice that I’ve got three lines in the “create tablespace” statement naming a datafile (though one of them doesn’t actually supply a name). The names (or absence thereof) correspond to the default naming conventions I have for my sandbox 11g, 12c, and 19c instances in that order. You will want to adjust according to your file-naming conventions.


prompt  ============================
prompt  Setting up Flashback Archive
prompt  ============================

create tablespace fda_ts 
        datafile        
--              no name needed if OMF
--              '/u01/app/oracle/oradata/orcl12c/orcl/fda_ts.dbf'
--              '/u02/data/OR19/orclpdb/fda_ts.dbf'
        size 1m autoextend on next 1m
;

alter user vpd_test_data_owner quota unlimited on fda_ts;

create flashback archive default fda_1year tablespace fda_ts
quota 1g retention 1 year;
 
grant flashback archive on fda_1year to vpd_test_data_owner;
grant flashback archive administer to vpd_test_data_owner;
grant execute on dbms_flashback_archive to vpd_test_data_owner;
 
prompt  Sleeping for 1 minute before adding table to flashback archive
execute dbms_lock.sleep(60);
alter table vpd_test_data_owner.person flashback archive fda_1year;

prompt  Sleeping for 1 minute before updating the date
execute dbms_lock.sleep(60);
update vpd_test_data_owner.person set surname = upper(surname);

commit;

prompt  Sleeping for 5 minutes to give FDA a chance to do its thing.
execute dbms_lock.sleep(300);
alter system flush shared_pool;

prompt  ==================================================
prompt  Now connect to the data owner schema and run the 
prompt  original query then a couple of flashback queries, 
prompt  pulling their plans from memory
prompt  ==================================================

connect vpd_test_data_owner/Password_1234

set linesize 120
set pagesize 50
set trimspool on
set serveroutput off

spool vpd_fda_bug.lst
-- set autotrace on explain

select * from vpd_test_data_owner.person;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-1/1440;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-2/1440;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-3/1440;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-4/1440;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-5/1440;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-6/1440;
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-7/1440;  
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-8/1440;  
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-9/1440;  
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-10/1440; 
select * from vpd_test_data_owner.person AS OF TIMESTAMP SYSDATE-15/1440; 
set autotrace off
spool off 

I’ve created a tablespace that I’m going to reserve for the flashback archive and given my data owner a quota on that tablespace; then I’ve created a flashback archive in that tablespace and granted various privileges relating to flashback archive to my data owner.

The next few lines of code include a couple of calls to dbms_lock.sleep() because I want to avoid the risk of getting Oracle error “ORA-01466: unable to read data – table definition has changed”; but all I’ve done otherwise is alter the person table to start archiving, and then made a little change that will eventually be recorded as part of the archive.

I’ve then introduced a 5 minute wait, as it seems to take about 5 minutes before the flashback process takes any action to capture the original table data and copy any related undo; but after that 5 minutes is up I’ve queried the person table directly (which should show you the one row where unit_id = -1), then gradually gone backwards in time, re-querying the data.

You should see the same result being produced for a few minutes, then a version of the “pre-update” data (upper case ‘ONE’ changing to mixed case ‘One’), and then you will (I hope) see the entire original data set appearing and finally you should see Oracle raising error “ORA-01466: unable to read data – table definition has changed” when your “as of timestamp” goes back beyond the moment you created the archive. (Except that that doesn’t happen with 11.2.0.4, which manages to report the data as if it had existed long before you created it).

I’ve commented out the “set autotrace on explain” in the above, but if you leave it in, or introduce it for one of the queries, you’ll see what’s going on that allows flashback data archive to show you data that should have been hidden by the security predicate. Here’s the execution plan for one run:

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name                | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                     |     2 |    86 |    17  (12)| 00:00:01 |       |       |
|   1 |  VIEW                     |                     |     2 |    86 |    17  (12)| 00:00:01 |       |       |
|   2 |   UNION-ALL               |                     |       |       |            |          |       |       |
|*  3 |    FILTER                 |                     |       |       |            |          |       |       |
|   4 |     PARTITION RANGE SINGLE|                     |     1 |    71 |     7   (0)| 00:00:01 |   KEY |     1 |
|*  5 |      TABLE ACCESS FULL    | SYS_FBA_HIST_353151 |     1 |    71 |     7   (0)| 00:00:01 |   KEY |     1 |
|*  6 |    FILTER                 |                     |       |       |            |          |       |       |
|   7 |     MERGE JOIN OUTER      |                     |     1 |  2083 |    10  (20)| 00:00:01 |       |       |
|   8 |      SORT JOIN            |                     |     1 |    55 |     7  (15)| 00:00:01 |       |       |
|*  9 |       TABLE ACCESS FULL   | PERSON              |     1 |    55 |     6   (0)| 00:00:01 |       |       |
|* 10 |      SORT JOIN            |                     |     5 | 10140 |     3  (34)| 00:00:01 |       |       |
|* 11 |       TABLE ACCESS FULL   | SYS_FBA_TCRV_353151 |     5 | 10140 |     2   (0)| 00:00:01 |       |       |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("TIMESTAMP_TO_SCN"(SYSDATE@!-.004861111111111111111111111111111111111111)<12670390363943)
   5 - filter("ENDSCN"<=12670390363943 AND ("OPERATION" IS NULL OR "OPERATION"<>'D') AND
              "ENDSCN">"TIMESTAMP_TO_SCN"(SYSDATE@!-.004861111111111111111111111111111111111111) AND ("STARTSCN" IS
              NULL OR "STARTSCN"<="TIMESTAMP_TO_SCN"(SYSDATE@!-.004861111111111111111111111111111111111111)))
   6 - filter("STARTSCN"<="TIMESTAMP_TO_SCN"(SYSDATE@!-.004861111111111111111111111111111111111111) OR
              "STARTSCN" IS NULL)
   9 - filter("UNIT_ID"=(-1) AND ("VERSIONS_OPERATION" IS NULL OR "VERSIONS_OPERATION"<>'D')
             AND ("VERSIONS_STARTSCN" IS NULL OR "VERSIONS_STARTSCN"<="TIMESTAMP_TO_SCN"(SYSDATE@!-.004861111111111111111111111111111111111111))
             AND ("VERSIONS_ENDSCN" IS NULL OR "VERSIONS_ENDSCN">"TIMESTAMP_TO_SCN"(SYSDATE@!-.004861111111111111111111111111111111111111)))
  10 - access("RID"(+)=ROWIDTOCHAR("T".ROWID))
       filter("RID"(+)=ROWIDTOCHAR("T".ROWID))
  11 - filter(("ENDSCN"(+) IS NULL OR "ENDSCN"(+)>12670390363943) AND ("STARTSCN"(+) IS NULL OR
              "STARTSCN"(+)<12670390363943))

Note
-----
   - dynamic sampling used for this statement (level=2)

Notice that the predicate “unit_id = -1″ appears on the full table scan of person at operation 9 – that’s Oracle applying the security predicate to the person table. But the flashback code has replaced the person table with a union all of (some partitions of) the SYS_FBA_HIST_353151 table and a join between the person table and the SYS_FBA_TCRV_353151 table. And the code path that attaches the security predicate fails to attach it to the history table.

tl;dr

VPD (virtual private database) does not seem to be aware of the query rewrite that takes place if a table has an associated FDA (flashback data archive), so a flashback query may report rows from the “history” table that should have been blocked by the VPD security policy.

Lagniappe

There is another little problem with FDA that might affect you if you try to optimize flashback queries by creating SQL Plan Baselines. If you create a baseline on a test system (that isn’t a backup copy of the production system) and use the export/import facility to move the baseline to production, then the baseline won’t work because the sys_fba_hist_nnnnn and sys_fba_tcrv_nnnnn table names are constructed from the object_id of the base table – which means the archive tables (and the associated baseline hints) in the test system are probably going to have different names from those in the production system.
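
As a quick sanity check before transporting a baseline, the dictionary view dba_flashback_archive_tables records which archive table belongs to which base table, so you can compare the names on the two systems:

select
        owner_name, table_name,
        flashback_archive_name, archive_table_name
from
        dba_flashback_archive_tables
order by
        owner_name, table_name
;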

Housekeeping

To clean up the database after you’ve done all this testing, run the following script (modified to match any changes you’ve made in the test) after logging on as SYS:


alter table vpd_test_data_owner.person no flashback archive;

drop flashback archive fda_1year;

drop USER VPD_TEST_FUNCTION_OWNER cascade;
drop USER VPD_TEST_DATA_OWNER cascade;

drop tablespace fda_ts including contents and datafiles;

OT Footnote

I’ve decided this year to donate to a charity that works to reduce infant mortality rates in Nepal with a two-pronged attack on malnutrition: feeding starving children, then educating their parents on how to make the best use of local resources to grow the most appropriate crops and use the best preparation methods to produce nourishing meals in the future. (They also run other projects to improve the lives of the young people in Nepal – here’s a link to their home page, and a direct link to a 4 minute video that gives you a quick insight into what they do and how they do it.)

If you’re thinking of making any small donations to charity over the next couple of weeks, please think of this one. To make your donation more valuable I’ve set up a justgiving page and will match any donations made there before 5th Jan 2020, up to a total of £1,000.

Identity wallets used by Oracle GoldenGate Microservices

DBASolved - Tue, 2019-12-24 13:54

Wallets, Wallets, and more wallets! … Wallets are used for a few different things within Oracle GoldenGate Microservices; identifying what wallet is being used by a service is simple if you know where to look.   The biggest usage of wallets within Oracle GoldenGate Microservices is to help secure the communication between the Distribution Service […]

The post Identity wallets used by Oracle GoldenGate Microservices appeared first on DBASolved.

Categories: DBA Blogs

Publishing Keras Model API with TensorFlow Serving

Andrejus Baranovski - Tue, 2019-12-24 11:40
Building an ML model is a crucial task. Running an ML model in production is no less complex and important a task. I had a post in the past about serving an ML model through a Flask REST API — Publishing Machine Learning API with Python Flask. While that approach works, it lacks some important capabilities:

  • Model versioning 
  • Request batching 
  • Multithreading 

TensorFlow comes with a set of tools to help you run an ML model in production. One of these tools is TensorFlow Serving. There is an excellent tutorial that describes how to configure and run it — TensorFlow Serving with Docker. I will follow the same steps in my example.
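
For flavour, here is a sketch of the Docker route from that tutorial; the model name and the host path below are placeholders:

# pull the published TensorFlow Serving image
docker pull tensorflow/serving

# serve a SavedModel from a local directory; port 8501 is the REST API
# (8500 would be gRPC); "my_model" and the source path are placeholders
docker run -p 8501:8501 \
    --mount type=bind,source=/tmp/my_model,target=/models/my_model \
    -e MODEL_NAME=my_model -t tensorflow/serving

# check the model status over the REST API
curl http://localhost:8501/v1/models/my_model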

Read more in my Towards Data Science post.

One Host, multiple PostgreSQL Clusters – Monitoring using EDB PEM

Yann Neuhaus - Tue, 2019-12-24 07:21

At a customer we implemented the EDB Postgres Enterprise Manager (PEM). As they run multiple PostgreSQL clusters on one server, they wanted to monitor all of the clusters using PEM. This blog should give you short guidance on how to add databases to PEM.

Starting position

– PEM Server installed, up and running
– PEM Agent installed on the hosts that should be monitored
– Hosts already registered in PEM
– pg_hba.conf of all clusters prepared to accept connections from PEM, e.g.:

host    postgres        enterprisedb    192.168.22.53/32           trust

First impression

As you can see, the hosts where the agent is installed and registered with PEM are visible in PEM, but the clusters are not automatically detected.

Add a cluster

Let’s add a PostgreSQL cluster.


First of all, make sure to choose a self-documenting name. By default this name is not unique, so you can add many servers with the same name, which is very confusing.

In the connection tab, enter the hostname or IP address and the other connection settings for the PostgreSQL cluster.

In case you run EFM for high availability, you can configure this on the “Advanced” tab.


Add the second cluster

Follow the same steps as in “Add a cluster”; just choose another name and enter the correct port.

That’s it: really simple and straightforward. All clusters and their databases are shown in a nice manner.

The article One Host, multiple PostgreSQL Clusters – Monitoring using EDB PEM appeared first on Blog dbi services.

Merry Christmas and Happy New Year !! (“Heroes”)

Richard Foote - Mon, 2019-12-23 16:01
I would like to take this opportunity to wish all my readers a very Merry Christmas and a most happy, peaceful and prosperous New Year. My gift this year is not David Bowie and Bing Crosby doing their famous Christmas duet but a performance by Bowie of his classic “Heroes” as featured on the same […]
Categories: DBA Blogs

Oracle VM Server: Pool is locked forever

Dietrich Schroff - Mon, 2019-12-23 15:27
If you are using Oracle VM Server it is possible that you are hitting the following problem:

After changing something on a server pool, the pool gets locked and the lock stays there. Even a reboot of your OVMM (Oracle VM Manager) does not remove the lock.

The solution can be very easy:
Just edit the pool and add (or change) the description.

This edit triggers an update and removes the lock. Any other change would probably do the job as well, but this is a minimal change that solves the problem.

Hopefully there will be an update so that this problem disappears in future versions.


I See Systems

Floyd Teter - Mon, 2019-12-23 14:46
Working in enterprise software has changed my perspective on the world around me... I see everything through more of a systemic lens.

My wife recently had knee replacement surgery.  World-renowned surgeon performing the surgery.  But being done at his 2nd choice of hospitals due to limitations of our health insurance policy.  Good hospital, but not his hospital of choice.  Like everything else involving health care in the U.S., it's a money thing.

The surgery was originally scheduled for Friday, December 13th but postponed until Monday, December 16th.  On the 16th, we were requested to arrive at the hospital at 11 am for a planned surgery of 1 pm.  But the surgery did not begin until 6 pm.  At this point, I'm thinking that these guys have no idea about linear programming and managing their supply chain.

While my wife was in surgery, I was directed to a "Surgery Waiting Room".  The room allegedly had an attendant and a television screen for monitoring patient status during surgery.  When I entered the waiting room, I immediately noticed the attendant's desk was unoccupied.  Seems that the attendant's shift ends at 5 pm - those waiting on surgeries beyond 5 pm are asked to answer the phones on a volunteer basis.  The idea being that surgical staff will call the waiting room to update family and friends when surgery is completed, and to alert those waiting that the surgeon is on the way up with a more detailed debrief.  So, being the volunteering type, I sit at the attendant's desk and start answering the phone.  After about three incoming calls, I notice a trend - no friends or family in the room to receive the updates.  And I'm also noticing surgeons coming into the room looking for friends and family who are not there.  I put the phone to use and make a few inquiries within the hospital - and discover that there are three surgery waiting rooms in this tower alone (the hospital in question has seven towers).  Neither the surgical staff nor the surgeons have any idea which waiting room contains friends and family for any particular patient, as they have no waiting room check-in function.  So they're literally hunting for the pea under any one of three shells as they go from waiting room to waiting room searching for friends and family of a particular patient.  Low-key chaos.

Another observation from the surgical waiting room:  the status screen showing patient status is listed by case number rather than patient name.  I did not have a case number for my wife's surgery.  And a quick survey of the folks in the waiting room indicated that none of them had a case number either.  So the status screen was useless to those of us waiting on surgical outcomes.  During the next few incoming calls from surgical staff, I asked for case numbers.  But the surgical staff had no idea of the case numbers either.  What in the world?

Fortunately, the surgery went well, the outcome was positive, and Marlene is now home working through the long recovery process.

After rolling the surgical experience over in my head, I decided to view this experience as a systemic failure for this particular hospital.  Lots of good people with lots of energy all trying to do the right thing, but nobody has tied it all together.  So how might I suggest improving things from a system point of view?

First, I'd establish personas:  the patient, the family member/friend of the patient, the surgical staff member, the attendant, the pre-op nurse, the surgeon, and so on.  Then I'd walk through the entire process, from the time a patient walks in the door until they leave post-op recovery.  And I'd do it for each persona.  Essentially build a systemic "As Is" flow.  Then design how we want things to work, review the changes with the people doing the work, then implement.

Sounds a bit like an enterprise applications implementation project, doesn't it?  Well, enterprise applications are all about implementing systems.  Like I said, working in enterprise software has changed my perspective of the world around me...

How about you?  Ever have a similar experience?  Healthcare?  The DMV? Retail returns?  Sound off in the comments about your experience and how you'd approach a fix.


To compare two same tables from different schema without primary key and not same number of columns

Tom Kyte - Fri, 2019-12-20 08:55
We have the table 'CUSTOMER' in two different schemas. Neither has a primary key, and the number of columns does not match (i.e. the schema1 table can have 97 columns while the other schema's table has 101). The column names are the same which are present in ...
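
One common approach (a sketch of mine, not part of the question) is to derive the shared column list from the dictionary and then compare the two tables with a symmetric MINUS; col1 and col2 are placeholders for that list:

-- find the columns the two tables have in common
select column_name
from   all_tab_columns
where  owner = 'SCHEMA1' and table_name = 'CUSTOMER'
intersect
select column_name
from   all_tab_columns
where  owner = 'SCHEMA2' and table_name = 'CUSTOMER';

-- rows that differ in either direction, using only the shared columns
(select col1, col2 from schema1.customer
 minus
 select col1, col2 from schema2.customer)
union all
(select col1, col2 from schema2.customer
 minus
 select col1, col2 from schema1.customer);
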
Categories: DBA Blogs

PL/SQL code in packages on DB vs on APEX sites - how it affect performance?

Tom Kyte - Fri, 2019-12-20 08:55
Dear Tom, as you develop in APEX you can move all your PL/SQL code into packages, or you can put it all in APEX. How does this affect performance? I know that it is better to move code to packages to make quick changes and have more control over the code. ...
Categories: DBA Blogs

PLSQL

Tom Kyte - Fri, 2019-12-20 08:55
Requirement: if the table does not have records, they need to be inserted; if records already exist, only update the END DATE column with a number of years based on the term (for example, if the term is 10 years, then 10*12 = 120 months, meaning 10 years needs to be added to the END DATE col...
Categories: DBA Blogs

Merge Delete

Tom Kyte - Fri, 2019-12-20 08:55
How do I delete with a merge statement? I want to update MERGE_DELETE_TEST2 to match MERGE_DELETE_TEST1 (think ETL). I cannot get merge delete to remove the row that exists in MERGE_DELETE_TEST2 that does not exist in MERGE_DELETE_TEST1.
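
For reference, a sketch (mine, not the eventual answer) of Oracle's MERGE ... DELETE syntax; the id and val columns are placeholders. The DELETE clause applies only to rows that matched the ON clause and were just updated, which is why a row that exists only in the target is never removed by MERGE alone:

merge into merge_delete_test2 t
using merge_delete_test1 s
on (t.id = s.id)
when matched then
        update set t.val = s.val
        -- deletes only rows that matched, tested against post-update values
        delete where (t.val is null)
when not matched then
        insert (id, val) values (s.id, s.val);

-- rows that exist only in MERGE_DELETE_TEST2 never match the ON clause,
-- so they need a separate statement:
delete from merge_delete_test2 t
where not exists (
        select null from merge_delete_test1 s where s.id = t.id
);
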
Categories: DBA Blogs

Import Production Dump to new schema

Tom Kyte - Fri, 2019-12-20 08:55
Hi, we have a live project which has only 1 schema. We are upgrading some features and so we are going to release it as version 2.0. My question is that the production dump, which is only one schema, has to be imported to 4 different schema o...
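
A likely direction (a sketch of mine, with placeholder names): Data Pump's remap_schema parameter imports one schema's dump into a different schema, and the import can be repeated once per target schema:

impdp system directory=DATA_PUMP_DIR dumpfile=prod.dmp \
      remap_schema=prod_schema:schema1

# repeat with remap_schema=prod_schema:schema2 (and so on)
# for the remaining target schemas
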
Categories: DBA Blogs

Check Constraints and Explain Plan Filter Predicates

Tom Kyte - Fri, 2019-12-20 08:55
Why does the Oracle SQL Optimizer include a filter predicate for a Check Constraint when generating an execution plan for a SELECT statement? If the constraint is valid (according to DBA_CONSTRAINTS), then the table rows are all compliant with the co...
Categories: DBA Blogs
