Feed aggregator

Creating a DIRECTORY - Forensics Example in 23c

Pete Finnigan - Fri, 2023-05-26 12:06
I want to talk about the foibles of DIRECTORY creation in the Oracle database. This is not a 23c specific issue but one that goes back years. I want to understand what rights and objects are created when we make....[Read More]

Posted by Pete On 26/05/23 At 01:29 PM

Categories: Security Blogs

Data load wizard page support for Excel files?

Tom Kyte - Thu, 2023-05-25 18:26
Does the Oracle APEX 23.1 data load wizard support Excel files, like the data loader in SQL Workshop does?
Categories: DBA Blogs

create an account equivalent to APPS but with read only access

Tom Kyte - Wed, 2023-05-24 05:46
How can I create a user in an eBusiness Suite database that is equivalent to the APPS user, but with only select and execute privileges on FND, INV and APPLSYS objects?
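There is no single command that clones a user with reduced privileges, but one common approach, sketched below, is to create a new account and generate the object grants from the data dictionary. The user name, password and owner list are placeholders only; in a real E-Business Suite database you may need to filter on object name prefixes instead, since most application objects are owned by APPS/APPLSYS.

create user apps_ro identified by "ChangeMe#123";
grant create session to apps_ro;

-- Generate the grant statements, then review and run the output
select 'grant select on ' || owner || '.' || object_name || ' to apps_ro;' as ddl
from   dba_objects
where  owner in ('FND', 'INV', 'APPLSYS')
and    object_type in ('TABLE', 'VIEW')
union all
select 'grant execute on ' || owner || '.' || object_name || ' to apps_ro;'
from   dba_objects
where  owner in ('FND', 'INV', 'APPLSYS')
and    object_type in ('PACKAGE', 'PROCEDURE', 'FUNCTION')
/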
Categories: DBA Blogs

Calculate more than attendance period per day with some conditions

Tom Kyte - Wed, 2023-05-24 05:46
I have the following query:

select operation_code,
       to_char(operation_date, 'dd/mm/yyyy hh24:mi:ss') as operation_date,
       emp_code
from   hr_original_attendance
where  emp_code = 4415
and    to_char(operation_date, 'yyyymmdd') = 20230517
order  by operation_date;

...which gives the following output:

OPERATION_CODE OPERATION_DATE      EMP_CODE
-------------- ------------------- --------
             1 17/05/2023 07:08:03     4415
             1 17/05/2023 07:55:15     4415
             2 17/05/2023 08:00:00     4415
             1 17/05/2023 15:07:01     4415
             2 17/05/2023 16:00:00     4415
             2 17/05/2023 16:58:27     4415
             2 17/05/2023 17:26:05     4415
             1 17/05/2023 20:00:00     4415

The report I want is as follows:

Entrance            Leave
------------------- -------------------
17/05/2023 07:08:03 17/05/2023 08:00:00
17/05/2023 15:07:01 17/05/2023 17:26:05
17/05/2023 20:00:00

In other words, each attendance period starts at the earliest entrance and ends at the latest leave before the next entrance period begins. The leave can be overridden by the user, but if the user enters more than once without leaving, I want the earliest entrance. I hope that is clear. Thank you.

The above was my question in an old thread, and Chris answered me with the following:

select *
from   hr_original_attendance
match_recognize (
    order by operation_date
    measures first ( op1.operation_date ) st_dt,
             last  ( op2.operation_date ) en_dt
    pattern  ( op1+ op2+ )
    define   op1 as operation_code =  first ( operation_code ),
             op2 as operation_code <> first ( operation_code )
);

The result was:

ST_DT    EN_DT
-------- --------
17/05/23 17/05/23
17/05/23 17/05/23

Connor then followed up with this:

with prep (operation_code, operation_date, emp_code) as (
    select operation_code, operation_date, emp_code
    from   hr_original_attendance
    where  emp_code = 4415
    and    operation_date >= date '2023-05-17'
    and    operation_date <  date '2023-05-18'
)
select emp_code, entrance, leave
from   prep
match_recognize (
    order by operation_date
    measures first ( emp_code )           as emp_code,
             first ( one.operation_date ) as entrance,
             last  ( two.operation_date ) as leave
    pattern  ( one* two* )
    define   one as operation_code = 1,
             two as operation_code = 2
);

The result was:

  EMP_CODE ENTRANCE            LEAVE
---------- ------------------- -------------------
      4415 17/05/2023 07:08:03 17/05/2023 08:00:00
      4415 17/05/2023 15:07:01 17/05/2023 17:26:05
      4415 17/05/2023 20:00:00

...which is exactly what I need. Thank you very much Chris and Connor.

The new question is: when I am on Oracle 11gR2, how can I rewrite the query? Thank you.
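MATCH_RECOGNIZE needs 12c or later, so on 11gR2 you have to fall back on analytic functions. A rough sketch, not from the original thread, assuming the same table and data: start a new period whenever an entrance row follows a leave row (or is the first row), number the periods with a running sum, then aggregate each period.

with flagged as (
    select operation_code,
           operation_date,
           emp_code,
           -- 1 marks the start of a new attendance period
           case
               when operation_code = 1
                and nvl(lag(operation_code) over (order by operation_date), 2) = 2
               then 1
               else 0
           end as new_period
    from   hr_original_attendance
    where  emp_code = 4415
    and    operation_date >= date '2023-05-17'
    and    operation_date <  date '2023-05-18'
),
grouped as (
    select flagged.*,
           sum(new_period) over (order by operation_date) as period_no
    from   flagged
)
select emp_code,
       min(case when operation_code = 1 then operation_date end) as entrance,
       max(case when operation_code = 2 then operation_date end) as leave
from   grouped
group  by emp_code, period_no
order  by entrance
/

Against the sample data this should return the same three rows as Connor's MATCH_RECOGNIZE version.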
Categories: DBA Blogs

Sending Email with Apex 4.2.6.00.03

Tom Kyte - Tue, 2023-05-23 11:26
We have been using an older version of APEX for about 10 years and now want to set up email capability for sending reports as attachments. I have scoured the internet for the requirements but am getting confused by all the different answers. We need to connect to an external SMTP server. What exactly do I need to put in place to get this working on this older version of APEX? Please let me know if you need more info to better answer this question. Thank you.
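Broadly, three pieces are usually needed on APEX 4.2: point the instance at the SMTP server, let the database reach that server over the network, and then use APEX_MAIL to send and attach the report. The following is a rough sketch only; host names, ports, credentials and the APEX_040200 principal are placeholders to adapt, and the exact instance parameter names should be checked against the 4.2 documentation.

-- 1) Point the APEX instance at the external SMTP server (as an instance administrator)
begin
    apex_instance_admin.set_parameter('SMTP_HOST_ADDRESS', 'smtp.example.com');
    apex_instance_admin.set_parameter('SMTP_HOST_PORT',    '25');
end;
/

-- 2) On 11g/12c, allow the APEX schema to reach that host through a network ACL
begin
    dbms_network_acl_admin.create_acl(
        acl         => 'apex_mail.xml',
        description => 'Allow APEX to reach the SMTP server',
        principal   => 'APEX_040200',
        is_grant    => true,
        privilege   => 'connect');
    dbms_network_acl_admin.assign_acl(
        acl        => 'apex_mail.xml',
        host       => 'smtp.example.com',
        lower_port => 25,
        upper_port => 25);
    commit;
end;
/

-- 3) Send the report from a page process or APEX job
declare
    l_id   number;
    l_blob blob;   -- the report file, generated elsewhere
begin
    l_id := apex_mail.send(
                p_to   => 'someone@example.com',
                p_from => 'apex@example.com',
                p_subj => 'Weekly report',
                p_body => 'Report attached.');
    apex_mail.add_attachment(
        p_mail_id    => l_id,
        p_attachment => l_blob,
        p_filename   => 'report.csv',
        p_mime_type  => 'text/csv');
    apex_mail.push_queue;
end;
/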
Categories: DBA Blogs

SQL help

Tom Kyte - Tue, 2023-05-23 11:26
Hi, I have a column containing data like the following: 101010, 10203040, 2030405060, etc. From each row above I have to take successive prefixes and check whether each one already exists in a table: if it does, skip it; if it does not, insert it. E.g. for the first row, 101010: first I take the first two digits, i.e. 10, and check whether they exist in the table, inserting them if not. Next I take the first four digits, i.e. 1010, and repeat the check. Finally, for the first row, I check 101010 itself. Can you please help me achieve this? Thanks.
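One possible approach, sketched with made-up names (SOURCE_VALUES.VAL holds the digit strings, TARGET_CODES.CODE is the lookup table), is to expand each value into its 2, 4, 6, ... digit prefixes with a row generator and let MERGE do the exists-check and the insert in one pass:

merge into target_codes t
using (
    select distinct substr(s.val, 1, 2 * n.pos) as code
    from   source_values s
           join ( select level as pos
                  from   dual
                  connect by level <= 10 ) n   -- enough for values up to 20 digits
             on n.pos <= length(s.val) / 2
) src
on ( t.code = src.code )
when not matched then
    insert ( code ) values ( src.code )
/

The DISTINCT keeps the merge source free of duplicate prefixes across rows, which MERGE requires; prefixes that already exist in the target are simply left alone.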
Categories: DBA Blogs

Instance Caging

Tom Kyte - Tue, 2023-05-23 11:26
What do we mean by instance caging in the context of the Oracle database? How does it work? Kindly elaborate with examples.
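In short, instance caging caps the number of CPUs an Oracle instance may use, which is mainly useful when several instances share one server. It works by combining two things: setting CPU_COUNT to the number of CPUs the instance is allowed, and enabling a Resource Manager plan so the limit is actually enforced. A minimal illustration (the plan name and CPU value are just placeholders):

-- Enable a Resource Manager plan (DEFAULT_PLAN ships with the database)
alter system set resource_manager_plan = 'DEFAULT_PLAN' scope = both;

-- Cage this instance to 4 CPUs; both parameters are dynamic
alter system set cpu_count = 4 scope = both;

-- Check the settings
show parameter resource_manager_plan
show parameter cpu_count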
Categories: DBA Blogs

Are Oracle 23c Shipped Profiles Weak

Pete Finnigan - Tue, 2023-05-23 10:46
Whilst the 23c version shipped by Oracle is a free developer release, we should not complain, as it's free, and we should also recognise that this is not production and this 23c version is not intended to be used in....[Read More]

Posted by Pete On 23/05/23 At 02:09 PM

Categories: Security Blogs

Unified audit with schema filtering

Tom Kyte - Mon, 2023-05-22 17:06
Dear Sirs, I'm using database version 19c and trying to set a filter at schema level before anything is written to the UNIFIED_AUDIT_TRAIL view. I tried these example policies, but without success:

CREATE AUDIT POLICY test1
    ACTIONS UPDATE ON <schemaname>.*
    WHEN 'INSTR(UPPER(SYS_CONTEXT(''USERENV'',''CLIENT_PROGRAM_NAME'')), ''FRMWEB'') = 0'
    EVALUATE PER SESSION
    ONLY TOPLEVEL;

CREATE AUDIT POLICY test2
    ACTIONS UPDATE ON <schemaname>
    WHEN 'INSTR(UPPER(SYS_CONTEXT(''USERENV'',''CLIENT_PROGRAM_NAME'')), ''FRMWEB'') = 0'
    EVALUATE PER SESSION
    ONLY TOPLEVEL;

Also, as far as I can see there is no parameter like obj_owner in the USERENV namespace for SYS_CONTEXT. Do you have a hint, please, or is auditing at schema level not possible? Many thanks, Juergen
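For what it's worth, the ACTIONS ... ON clause of CREATE AUDIT POLICY expects specific objects rather than a schema wildcard, so one workaround is to list the schema's objects explicitly. A sketch only (the object names are placeholders, and it is worth testing how the WHEN clause combines with object-level actions in your release):

CREATE AUDIT POLICY test3
    ACTIONS UPDATE ON <schemaname>.table1,
            UPDATE ON <schemaname>.table2
    WHEN 'INSTR(UPPER(SYS_CONTEXT(''USERENV'',''CLIENT_PROGRAM_NAME'')), ''FRMWEB'') = 0'
    EVALUATE PER SESSION
    ONLY TOPLEVEL;

AUDIT POLICY test3;

If the schema has many tables, the ACTIONS list could be generated from DBA_TABLES.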
Categories: DBA Blogs

PeopleTools Learning Subscriptions

Jim Marion - Sat, 2023-05-20 11:41

Are you looking to take your PeopleSoft Development skills to the next level? Look no further than our All-access Learning Membership.

With this membership, you'll gain access to a wealth of resources to help you improve your PeopleSoft Development knowledge and skills. From video tutorials to code samples and webinars, you'll have everything you need to become a PeopleSoft Development pro.

The membership also includes access to PeopleSoft experts, where you can ask questions and get feedback.

But what sets the All-access Learning Membership apart from other online learning platforms is its focus on real-world applications. You'll learn how to use PeopleTools to build practical, functional applications that you can use in your own projects.

And with new content added regularly, you'll always have something new to learn and explore.

So if you're ready to take your PeopleTools skills to the next level, sign up for the All-access Learning Membership at https://www.jsmpros.com/groups/all-access/. Your future self will thank you!

Manually purging trail files from OCI GoldenGate Service

DBASolved - Fri, 2023-05-19 21:09

Oracle GoldenGate Service is Oracle’s cloud offering  to quickly use GoldenGate to move data within OCI as well as other […]

The post Manually purging trail files from OCI GoldenGate Service appeared first on DBASolved.

Categories: DBA Blogs

Let’s talk about ORDS VII

Mathias Magnusson - Wed, 2023-05-17 08:00

This post is part of a series that starts with this post.

Having gone through much of what can be done with a basic REST service earlier in this series, it is time to look at securing the service. When you can access a service with nothing more than the URL, then so can anyone else who can send that URL to the ORDS server. Not only can they read, but if the view allows writing then they can do that too, as an autoREST view also gets DML capability in the generated API.

In the example used for this series writing will not work anyway, as our service is built on a view Oracle has defined that cannot be updated by you as a user. However, you will typically want to protect reads as well as writes for your services. That is what we achieve by securing the service. In this post we’ll lock it down, and in the next we’ll look at accessing the secured service.

To secure a service we create a role and a privilege, and then define what the privilege protects; in our case we’re protecting a path in the URL.

Let’s start with creating a role. It is done by just giving it a name, nothing else is needed.

Remember, we protect the service in the source database/schema. That is rest_demo with the alias rest_demo_schema in the setup for this series.

exec ords.create_role(p_role_name => 'gnome_role');

Why gnome you ask? Well, there is a certain theme to the images in this series.

Now, the next step is to set up a privilege that is used to protect the service.

begin
  ords.create_privilege(
      p_name        => 'gnome_priv',
      p_role_name   => 'gnome_role',
      p_label       => 'Gnome Data',
      p_description => 'Give access to demo data.');
end;
/

With that done, all that is left is to define what it should protect. With an AutoREST-enabled view our only option is to define a path for the URL to protect.

begin
  ords.create_privilege_mapping(
      p_privilege_name => 'gnome_priv',
      p_pattern        => '/vw_rest_svc/*');
end;
/

With that, the service on that path is now protected. Note that the pattern is relative to the schema alias: it starts from that point in the URL, so it does not work to include /ords/… That is good, as it allows the alias for the schema to be changed without the security being sidestepped.

All that is left now is to verify that the service is indeed not available anymore.

curl https://.../ords/rest_demo_schema/vw_rest_svc/ | python3 -m json.tool 
{
    "code": "Unauthorized",
    "message": "Unauthorized",
    "type": "tag:oracle.com,2020:error/Unauthorized",
    "instance": "tag:oracle.com,2020:ecid/039ed419abad226de418d37c6f146756"
}

Great, the service is now protected. In fact, it is so well protected that there is no way to access it. Setting up to allow access is where we pick up in the next post.

HIUG Interact 2023

Jim Marion - Tue, 2023-05-16 16:36

User group conferences are a fantastic opportunity to network with and learn from peers. And we are just a few weeks away from the Healthcare PeopleSoft Industry User Group meeting Interact 2023! I am presenting my signature session, Getting the Most out of PeopleSoft PeopleTools: Tips and Techniques on Tuesday, June 13 from 9:30 AM - 10:30 AM in room Condesa 3.

See you there!

i am not able to login to sqlplus / as sysdba

Tom Kyte - Tue, 2023-05-16 14:46
Hello Team, I am new to Oracle and I have recently installed Oracle 21c on CentOS. The installation was successful, but during installation a password was automatically generated. Now when I try to run sqlplus / as sysdba, it repeatedly asks me for the password, which I don't have. How do I fix this?

[tony@instance-7 ~]$ sqlplus / as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Wed May 10 17:51:55 2023
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

ERROR:
ORA-12162: TNS:net service name is incorrectly specified

Enter user-name:
Categories: DBA Blogs

Apex 21.2 Automations - Send Email with Attachment

Tom Kyte - Tue, 2023-05-16 14:46
Hello, I am trying to use Automations to set up a weekly e-mail with a csv attachment that is a simple select from a table. I am getting an "ORA-20987: ... has a wrong data type in SQL statement. Column #1 is VARCHAR2 but BLOB expected" error. It seems that I need to modify my query to return a BLOB. Can you provide an example of how this should look? Under Automations, in "Actions", I select Type = "Send E-mail". Under "Attachment SQL" I entered: select column1, column2 from data_table
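The error suggests the attachment query has to return the file content as a BLOB. A rough sketch of one way to do that follows; the table, columns and the helper function name are made up here, and the full column list the Attachment SQL expects should be checked in the APEX documentation. The idea is to build the CSV in a CLOB, convert it to a BLOB, and select that.

create or replace function data_table_csv return blob
is
    l_csv          clob;
    l_blob         blob;
    l_dest_offset  integer := 1;
    l_src_offset   integer := 1;
    l_lang_context integer := dbms_lob.default_lang_ctx;
    l_warning      integer;
begin
    -- Build the CSV content as a CLOB
    l_csv := 'COLUMN1,COLUMN2' || chr(10);
    for r in ( select column1, column2 from data_table ) loop
        l_csv := l_csv || r.column1 || ',' || r.column2 || chr(10);
    end loop;

    -- Convert the CLOB to a BLOB for use as attachment content
    dbms_lob.createtemporary(l_blob, true);
    dbms_lob.converttoblob(
        dest_lob     => l_blob,
        src_clob     => l_csv,
        amount       => dbms_lob.lobmaxsize,
        dest_offset  => l_dest_offset,
        src_offset   => l_src_offset,
        blob_csid    => dbms_lob.default_csid,
        lang_context => l_lang_context,
        warning      => l_warning);
    return l_blob;
end data_table_csv;
/

The attachment query then becomes something along the lines of: select data_table_csv as file_content from dual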
Categories: DBA Blogs

JSON Relational Duality Views

Tom Kyte - Tue, 2023-05-16 14:46
JSON Relational Duality views rely on a normalised, uniquely constrained schema to store the data in the 'current view' aka 'the last known state' aka 'snapshot'. In my experience, the vast majority of schemas designed by developers do not conform to this pattern. For practical reasons (because the application needs more than just the last known state), schemas always include audit tables/history tables/timestamps/additional intersections between 1:m tables etc etc, all of which mean that unique keys (and therefore foreign keys) are NOT enforced in the database. So, how can JSON Relational Duality views be of any practical use?
Categories: DBA Blogs

How to move tables from one schema to another ?

Tom Kyte - Tue, 2023-05-16 14:46
Tom, My basic requirement is like this : I need to take a dump of a schema. But I don't want to export certain tables which are really huge and I don't want them in the dump. Since there is no way to exclude tables in the export command, I created a new schema and moved these not-necessary tables into this new schema. Then I took the export of the original schema. It seems to work fine, except for the time. Problem : Each of these tables that are not required are really huge with 50-80 million records. So, in order to move the table from one schema to another, I have (as far as I know) just two options. 1. Create table schema2.t1 as select * from schema1.t1; 2. Export these tables from schema 1 and use import with from_user=schema1 and to_user=schema2. Currently I have used method 2. But because of these huge tables, it is taking really a long time and also huge disk space to store these dump files temporarily (though I am not too worried about the diskspace). So, what I would like to know is that, is there way in Oracle 8i and above, wherein we can just move the table from one schema to another. I am looking for some command like "Rename t1 to t2" but in a different schema. Please advise. Thanks Prakash.
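For what it's worth, there is still no cross-schema rename, but one near-instant trick is a partition exchange, which swaps segments instead of copying rows. A sketch only; the column list is illustrative and the target schema needs a partitioned "shell" table of the same structure:

-- Shell table in the target schema, with a single catch-all partition
create table schema2.t1_stage
(
    id   number,
    data varchar2(100)
)
partition by range (id)
(
    partition p_all values less than (maxvalue)
);

-- Swap segments: schema1.t1's rows now live in schema2.t1_stage,
-- and schema1.t1 is left with the empty segment
alter table schema2.t1_stage
    exchange partition p_all with table schema1.t1
    without validation;

On current releases the original problem largely goes away anyway, because Data Pump's EXCLUDE parameter lets you leave the huge tables out of the export without moving them at all.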
Categories: DBA Blogs

User Defined Snippets in SQL Developer

Tom Kyte - Mon, 2023-05-15 20:26
Has the use of user defined snippets been eliminated from the most recent release? Thanks
Categories: DBA Blogs

Are we Securing Oracle or are we Securing Data in Oracle?

Pete Finnigan - Mon, 2023-05-15 19:26
I have spoken about this before in this blog and I have advised and taught people for years about the same idea. My focus is Oracle Security but what does that mean? I always tell people we are NOT securing....[Read More]

Posted by Pete On 15/05/23 At 01:59 PM

Categories: Security Blogs

The Ultimate Question of Life, the Universe, and… how big is my Oracle Table ?

The Anti-Kyte - Mon, 2023-05-15 01:30

At a time when the debate rages about how much you should trust what AI tells you, it’s probably worth recalling Deep Thought’s wildly incorrect assertion that the answer to the Ultimate Question of Life, the Universe, and Everything is forty-two.
As any Database specialist will know, the answer is the same as it is to the question “How big is my Oracle Table ?” which is, of course, “It depends”.

What it depends on is whether you want to know the volume of data held in the table, or the amount of space the database is using to store the table and any associated segments (e.g. indexes).

Connecting to my trusty Free Tier OCI Oracle Instance ( running 19c Enterprise Edition), I’ve set out on my journey through (disk) space to see if I can find some more definitive answers…

How big is my table in terms of the raw data stored in it ?

Before going any further, I should be clear on the database language settings and character set that’s being used in the examples that follow. Note particularly that I’m not using a multi-byte character set :

select parameter, value    
from gv$nls_parameters
order by parameter
/


PARAMETER                      VALUE                         
------------------------------ ------------------------------
NLS_CALENDAR                   GREGORIAN                     
NLS_CHARACTERSET               AL32UTF8                      
NLS_COMP                       BINARY                        
NLS_CURRENCY                   £                             
NLS_DATE_FORMAT                DD-MON-YYYY                   
NLS_DATE_LANGUAGE              ENGLISH                       
NLS_DUAL_CURRENCY              €                             
NLS_ISO_CURRENCY               UNITED KINGDOM                
NLS_LANGUAGE                   ENGLISH                       
NLS_LENGTH_SEMANTICS           BYTE                          
NLS_NCHAR_CHARACTERSET         AL16UTF16                     
NLS_NCHAR_CONV_EXCP            FALSE                         
NLS_NUMERIC_CHARACTERS         .,                            
NLS_SORT                       BINARY                        
NLS_TERRITORY                  UNITED KINGDOM                
NLS_TIMESTAMP_FORMAT           DD-MON-RR HH24.MI.SSXFF       
NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH24.MI.SSXFF TZR   
NLS_TIME_FORMAT                HH24.MI.SSXFF                 
NLS_TIME_TZ_FORMAT             HH24.MI.SSXFF TZR             

19 rows selected. 

Now, let’s see if we can work out how much raw data is held in a table.
We’ll start with a very simple example :

create table marvin as
    select 1000 + rownum as id
    from dual
    connect by rownum <= 1024;

Marvin may have a brain the size of a planet but his tabular namesake has more modest space requirements.
Its 1024 records are all 4 digits long.
Therefore, the size of the table data should be 4096 bytes, right ?

“Hang on”, you’re thinking, “why not just look up the size in USER_SEGMENTS and make this a really short post ?”

Well :

select bytes
from user_segments
where segment_name = 'MARVIN'
and segment_type = 'TABLE'
/

     BYTES
----------
     65536

USER_SEGMENTS will give you a size in bytes, but it’s not the same as the amount of raw data.
We’ll come back to this in a bit.

For now though, we can cross-check the size from elsewhere in the data dictionary, provided the stats on the table are up-to-date.

To ensure that this is so, I can run :

exec dbms_stats.gather_table_stats('MIKE', 'MARVIN');

This will ensure that statistics data is populated in the USER_TABLES view. This means that we can estimate the data volume by running the following query :

select num_rows, avg_row_len,
    num_rows * avg_row_len as data_in_bytes
from user_tables
where table_name = 'MARVIN'
/

…which returns…

  NUM_ROWS AVG_ROW_LEN DATA_IN_BYTES
---------- ----------- -------------
      1024           4          4096

That looks promising.

We can further verify this by running :

select sum(length(id)) as data_in_bytes
from marvin
/

DATA_IN_BYTES
-------------
         4096

OK, now let’s see what happens with a slightly more complex data set, and an index as well …

create table hitchikers
(
    id number generated always as identity,
    character_name varchar2(250),
    quote varchar2(4000),
    constraint hitchikers_pk primary key (id)
)
/

declare
    procedure ins( i_character in varchar2, i_quote in varchar2) 
    is
    begin
        insert into hitchikers( character_name, quote)
        values(i_character, i_quote);
    end;
begin
    for i in 1..1024 loop
        ins('Deep Thought', 'Forty-Two');
        ins('Trillian', q'[we have normality... anything you can't cope with is, therefore, your own problem]');
        ins('Ford Prefect', 'Time is an illusion. Lunchtime doubly so.');
        ins('Zaphod Beeblebrox', q'[If there's anything more important than my ego around, I want it caught and shot right now!]');
        ins(null, 'Anyone who is capable of getting themselves made President should on no account be allowed to do the job');
        ins('Marvin', q'[Life! Loathe it or hate it, you can't ignore it.]');
        ins('Arthur Dent', 'This must be Thursday. I never could get the hang of Thursdays');
        ins('Slartibartfast', q'[I'd rather be happy than right any day]');
    end loop;
    commit;
end;
/
commit;

Once stats are present on the table, we can check the expected data size as before :

select num_rows, avg_row_len,
    num_rows * avg_row_len as data_in_bytes
from user_tables
where table_name = 'HITCHIKERS'
/

  NUM_ROWS AVG_ROW_LEN DATA_IN_BYTES
---------- ----------- -------------
      8192          75        614400

This time, the size in bytes figure we get back is not exact, as we can confirm with :

select 
    sum( length(id) + 
    nvl(length(character_name),0) +
    nvl(length(quote), 0)) as data_in_bytes
from hitchikers
/

DATA_IN_BYTES
-------------
       598957

To verify the actual size in bytes, we can dump the contents of a table into a csv file. In this case, I’m using SQLDeveloper :

The resulting file is a different size again :

ls -l hitchikers.csv
-rw-rw-r-- 1 mike mike 656331 May 13 11:50 hitchikers.csv

This can be accounted for by the characters added as part of the csv formatting.

First, the csv file includes a header row :

head -1 hitchikers.csv
"ID","CHARACTER_NAME","QUOTE"

Including the line terminator this is 30 bytes :

head -1 hitchikers.csv |wc -c
30

The format in each of the 8192 data rows includes :

  • a comma after all but the last attribute on a row
  • a line terminator after the last attribute
  • double quotes enclosing each of the two VARCHAR attributes.

For example :

grep ^42, hitchikers.csv

42,"Trillian","we have normality... anything you can't cope with is, therefore, your own problem"

That’s a total of 7 extra bytes per data row.
Add all that up and it comes to 57374 bytes which are a consequence of csv formatting.

Subtract that from the file size and we get back to the calculated data size we started with :

656331 - 57374 = 598957
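As a quick arithmetic sanity check (a throwaway query using the figures above):

select 30 + (8192 * 7)            as csv_overhead,
       656331 - (30 + (8192 * 7)) as raw_data_bytes
from dual;

CSV_OVERHEAD RAW_DATA_BYTES
------------ --------------
       57374         598957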

This confirms that the figures in USER_TABLES are approximate and you’ll need to bear this in mind if you’re relying on them to calculate the size of the data in a table.

Whilst we’re here, let’s see what effect compression might have on our ability to determine the raw data size.
We can do this by creating a table that has the same structure as HITCHIKERS and contains the same data, but which is compressed :

create table magrathea
(
    id number,
    character_name varchar2(250),
    quote varchar2(4000),
    constraint magrathea_pk primary key (id)
)
    row store compress advanced
/

insert into magrathea( id, character_name, quote)
    select id, character_name, quote
    from hitchikers
/

commit;

exec dbms_stats.gather_table_stats(user, 'MAGRATHEA');

It turns out that, for the purposes of our raw data calculation, the effect of table compression is…none at all :

select num_rows, avg_row_len,
    num_rows * avg_row_len
from user_tables
where table_name = 'MAGRATHEA'
/


  NUM_ROWS AVG_ROW_LEN NUM_ROWS*AVG_ROW_LEN
---------- ----------- --------------------
      8192          75               614400

However, if you look at the number of blocks used to store the table, the effects of compression are more evident :

select table_name, blocks
from user_tables
where table_name in ('HITCHIKERS', 'MAGRATHEA')
order by 2
/

TABLE_NAME                         BLOCKS
------------------------------ ----------
MAGRATHEA                              20 
HITCHIKERS                             95

Incidentally, it’s worth noting that, as with the data size, the number of blocks reported in USER_TABLES is somewhat approximate.
USER_SEGMENTS reports the number of blocks for each table as :

select segment_name, blocks
from user_segments
where segment_name in ('HITCHIKERS', 'MAGRATHEA')
order by 2
/

SEGMENT_NAME                       BLOCKS
------------------------------ ----------
MAGRATHEA                              24
HITCHIKERS                            104

So it looks like compression will affect the amount of database space required to store an object but not the size of the actual data. This brings us nicely on to…

How big is my table in terms of the amount of space it’s taking up in the database ?

Let’s go back to MARVIN. Remember, this table contains 4K of raw data, but USER_SEGMENTS claims that it’s quite a bit larger :

select bytes
from user_segments
where segment_name = 'MARVIN'
and segment_type = 'TABLE'
/

     BYTES
----------
     65536

To understand how Oracle has come up with this figure, you need to consider that :

  • the smallest unit of space that Oracle addresses is measured in blocks
  • the size of these blocks is defined at tablespace level.
  • any object that uses space is allocated that space in units of an extent – which is a number of contiguous blocks.

If we take a look at MARVIN, we can see that the table resides in the DATA tablespace and has been allocated a single extent of 8 blocks :

select tablespace_name, bytes, blocks, extents
from user_segments
where segment_name = 'MARVIN';

TABLESPACE_NAME                     BYTES     BLOCKS    EXTENTS
------------------------------ ---------- ---------- ----------
DATA                                65536          8          1

The block size is defined at the tablespace level and is held in USER_TABLESPACES in bytes :

select block_size
from user_tablespaces
where tablespace_name = 'DATA';

BLOCK_SIZE
----------
      8192

If we now multiply the number of blocks in the table by the size of those blocks, we get back to the size that USER_SEGMENTS is reporting :

select seg.blocks * tsp.block_size
from user_segments seg
inner join user_tablespaces tsp
    on seg.tablespace_name = tsp.tablespace_name
where seg.segment_name = 'MARVIN';

SEG.BLOCKS*TSP.BLOCK_SIZE
-------------------------
                    65536

MARVIN is a table with no ancillary segments, such as indexes.
To find the total space being used for the HITCHIKERS table, we’ll also need to consider the space being taken up by its index, HITCHIKERS_PK :

select seg.segment_name, seg.segment_type, seg.blocks, ts.block_size,
    seg.bytes
from user_segments seg
inner join user_tablespaces ts
    on ts.tablespace_name = seg.tablespace_name
where seg.segment_name in ('HITCHIKERS', 'HITCHIKERS_PK')
/

SEGMENT_NAME         SEGMENT_TYPE        BLOCKS BLOCK_SIZE      BYTES
-------------------- --------------- ---------- ---------- ----------
HITCHIKERS           TABLE                  104       8192     851968
HITCHIKERS_PK        INDEX                   24       8192     196608

…in other words…

select sum(seg.bytes)
from user_segments seg
where seg.segment_name in ('HITCHIKERS', 'HITCHIKERS_PK')
/

SUM(SEG.BYTES)
--------------
       1048576

On the subject of ancillary segments, what about LOBS ?

create table the_guide(
    id number generated always as identity,
    message clob);
    
declare
    v_msg clob;
begin
    for i in 1..1000 loop
        v_msg := v_msg||q'[Don't Panic!]';
    end loop;
    insert into the_guide(message) values( v_msg);
end;
/

commit;

Unlike other segment types, LOBSEGMENT and LOBINDEX segments do not have their parent tables listed as the SEGMENT_NAME in USER_SEGMENTS.

Therefore, we need to look in USER_LOBS to identify the parent table of a LOBSEGMENT, and in USER_INDEXES for a LOBINDEX :

select segment_name, segment_type, bytes, blocks
from user_segments 
where(
    segment_name = 'THE_GUIDE'
    or
    segment_name in ( 
        select segment_name 
        from user_lobs 
        where table_name = 'THE_GUIDE'
        )
    or 
    segment_name in ( 
        select index_name 
        from user_indexes 
        where table_name = 'THE_GUIDE'
        )
    )
/   

SEGMENT_NAME                   SEGMENT_TYPE         BYTES     BLOCKS
------------------------------ --------------- ---------- ----------
THE_GUIDE                      TABLE                65536          8
SYS_IL0000145509C00002$$       LOBINDEX             65536          8
SYS_LOB0000145509C00002$$      LOBSEGMENT         1245184        152

In this instance, although the table segment itself is only taking up 65536 bytes, when you add in all of the supporting objects, the total space requirement increases to 1376256 bytes.

If you’ve managed to make it this far then meet me at the Big Bang Burger Bar for a Pan Galactic Gargle Blaster. I need a drink after that.
