Feed aggregator

Gather Stats while doing a CTAS

Tom Kyte - 3 hours 53 min ago
Can you please provide your opinion on the point below. This is what I have noticed: when we create a table using a CTAS and then check user_tables, the last_analyzed and num_rows columns are already populated with accurate data. If that is so, ...
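
A quick sketch of the behaviour being described – from 12c onwards a CTAS gathers basic statistics online, so these dictionary columns are populated as soon as the table is created (the table name below is illustrative):

create table t_ctas as select * from all_objects;

select num_rows, last_analyzed
from   user_tables
where  table_name = 'T_CTAS';
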
Categories: DBA Blogs

How can an application explicitly call OCIStmtPrepare2 rather than OCIStmtPrepare when using Pro*C

Tom Kyte - 3 hours 53 min ago
Our application got an ORA-25412: transaction replay disabled by call to OCIStmtPrepare. Oracle version: 12.2, running in RAC mode. After searching on the internet, we found the explanation below: This call (OCIStmtPrepare) does no...
Categories: DBA Blogs

'BEFORE CREATE ON SCHEMA' trigger apparently not firing before Create Table

Tom Kyte - 3 hours 53 min ago
In an Oracle 8.1.7 instance set up with character set US7ASCII: Connected to: Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production With the Partitioning option JServer Release 8.1.7.4.0 - Production SQL> create table t1 (c1 varchar2...
Categories: DBA Blogs

Left Joining Four Tables without duplicates from right tables or Cartesian product!

Tom Kyte - 3 hours 53 min ago
I am running the query below to get data from 4 tables, but the problem is that the data is fetched as a Cartesian product; in other words, items from tblEdu are duplicated with items from tblTrain: SELECT tblpersonal.*, tbltrain.*, tbledu.*,...
Categories: DBA Blogs

Temp space

Jonathan Lewis - 15 hours 27 min ago

A question about hunting down the source of the error “ORA-01652 unable to extend temp segment by NNN in tablespace XXX” shows up on the Oracle-L mailing list or the Oracle developer community forum from time to time. In most cases the tablespace referenced is the temporary tablespace, which means the session reporting the error was probably trying to allocate some space for sorting, or doing a hash join, or instantiating a GTT (global temporary table) or a CTE (common table expression / “with” subquery). The difficulty in cases like this is that the session reporting the error might be the victim of some other session’s greed – so looking at what the session was doing won’t necessarily point you to the real problem.

Of course you then run into the further difficulty of tracking down the source of the problem. By the time you hear the complaint (even if it’s only seconds after the error appeared) the session that had been hogging the temporary tablespace may have finished what it was doing, leaving a huge amount of free space in the temporary tablespace and suggesting (to the less experienced and cynical user) that there’s something fundamentally wrong with the way Oracle has been accounting for space usage.
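
A traditional “live” check is a query against v$tempseg_usage (the modern synonym for the older v$sort_usage) joined to v$session – a sketch along these lines – but it can only show you the greedy session while the space is still held:

select
        se.sid, se.serial#,
        su.sql_id, su.tablespace,
        su.segtype, su.blocks
from
        v$tempseg_usage su,
        v$session       se
where
        se.saddr = su.session_addr
order by
        su.blocks desc
/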

If you find yourself in this situation remember that (if you’re licensed to take advantage of it) the active session history may be able to help.  One of the columns in v$active_session_history is called temp_space_allocated with the slightly misleading description: “Amount of TEMP memory (in bytes) consumed by this session at the time this sample was taken”. A simple query against v$active_session_history may be enough to identify the session and SQL  statement that had been holding the temporary space when the error was raised, for example:


column pga_allocated        format 999,999,999,999
column temp_space_allocated format 999,999,999,999

break on session_id skip 1 on session_serial#

select
        session_id, session_serial#, 
        sample_id, 
        sql_id, 
        pga_allocated,
        temp_space_allocated
from
        v$active_session_history
where
        sample_time between sysdate - 5/1440 and sysdate
and     nvl(temp_space_allocated,0) != 0
order by
        session_id, sample_id
/

All I’ve done for this example is query v$active_session_history for the last 5 minutes reporting a minimum of information from any rows that show temp space usage. As a minor variation on the theme you can obviously change the time range, and you might want to limit the output to rows reporting more than 1MB (say) of temp space usage.
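
Here’s a sketch of that variation – the 15 minute window and the 1MB threshold are arbitrary choices:

select
        session_id, session_serial#,
        sample_id,
        sql_id,
        pga_allocated,
        temp_space_allocated
from
        v$active_session_history
where
        sample_time between sysdate - 15/1440 and sysdate
and     nvl(temp_space_allocated,0) > 1048576
order by
        session_id, sample_id
/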

You’ll notice that I’ve also reported the pga_allocated (Description: Amount of PGA memory (in bytes) consumed by this session at the time this sample was taken) in this query; this is just a little convenience – a query that’s taking a lot of temp space will probably start by acquiring a lot of memory so it’s nice to be able to see the two figures together.

There are plenty of limitations and flaws in the usefulness of this report and I’ll say something about that after showing an example of usage. Let’s start with a script to build some data before running a space-consuming query:


rem
rem     Script:         allocate_tempspace.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table t1 as 
select * from all_objects
;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

insert into t1 select * from t1;
commit;

execute dbms_stats.gather_table_stats(null,'t1')

execute dbms_lock.sleep(20)

set pagesize  60
set linesize 255
set trimspool on
set serveroutput off
alter session set statistics_level = all;

with ttemp as (
        select /*+ materialize */ * from t1
)
select 
        /*+ no_partial_join(@sel$2 t1b) no_place_group_by(@sel$2) */ 
        t1a.object_type,
        max(t1a.object_name)
from
        ttemp t1a, ttemp t1b
where
        t1a.object_id = t1b.object_id
group by
        t1a.object_type
order by
        t1a.object_type
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

My working table t1 consists of 16 copies of the view all_objects – so close to 1 million rows in my case – and the query is hinted to avoid any of the clever transformations that the optimizer could use to reduce the workload so it’s going to do a massive hash join and aggregation to report a summary of a couple of dozen rows. Here’s the execution plan (in this case from 12.2.0.1, though the plan is the same for 19.3 with some variations in the numbers).


SQL_ID  1cwabt12zq6zb, child number 0
-------------------------------------

Plan hash value: 1682228242

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name                       | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                            |      1 |        |     29 |00:00:10.03 |   47413 |  21345 |  12127 |       |       |          |         |
|   1 |  TEMP TABLE TRANSFORMATION               |                            |      1 |        |     29 |00:00:10.03 |   47413 |  21345 |  12127 |       |       |          |         |
|   2 |   LOAD AS SELECT (CURSOR DURATION MEMORY)| SYS_TEMP_0FD9D665B_E2772D3 |      1 |        |      0 |00:00:01.51 |   28915 |      0 |   9217 |  2068K|  2068K|          |         |
|   3 |    TABLE ACCESS FULL                     | T1                         |      1 |    989K|    989K|00:00:00.24 |   19551 |      0 |      0 |       |       |          |         |
|   4 |   SORT GROUP BY                          |                            |      1 |     29 |     29 |00:00:08.51 |   18493 |  21345 |   2910 |  6144 |  6144 | 6144  (0)|         |
|*  5 |    HASH JOIN                             |                            |      1 |     15M|     15M|00:00:03.93 |   18493 |  21345 |   2910 |    48M|  6400K|   65M (1)|   25600 |
|   6 |     VIEW                                 |                            |      1 |    989K|    989K|00:00:00.36 |    9233 |   9218 |      0 |       |       |          |         |
|   7 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D665B_E2772D3 |      1 |    989K|    989K|00:00:00.35 |    9233 |   9218 |      0 |       |       |          |         |
|   8 |     VIEW                                 |                            |      1 |    989K|    989K|00:00:00.40 |    9257 |   9217 |      0 |       |       |          |         |
|   9 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D665B_E2772D3 |      1 |    989K|    989K|00:00:00.39 |    9257 |   9217 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("T1A"."OBJECT_ID"="T1B"."OBJECT_ID")

Critically this plan shows us two uses of the temp space but only reports one of them as Used-Tmp. The “hash join” at operation 5 tells us that it reached 65MB of (tunable PGA) memory before going “1-pass”, eventually dumping 25,600 KB to disc. This space usage is corroborated by the 2,910 writes (which, at an 8KB block size, would be 23,280 KB). The missing Used-Tmp, however, is the space taken up by the materialized CTE. We can see that operation 2 is a “load as select” that writes 9,217 blocks to disc (subsequently read back twice – the tablescans shown in operations 7 and 9). At an 8KB block size that’s roughly 74,000 KB of temp space that doesn’t get reported as Used-Tmp.

If we take a look at the plan from 19.3 we see different numbers, but the same “error of omission”:

SQL_ID  1cwabt12zq6zb, child number 0
-------------------------------------

Plan hash value: 1682228242

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name                       | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                            |      1 |        |     25 |00:00:08.15 |   34905 |  13843 |   8248 |       |       |          |         |
|   1 |  TEMP TABLE TRANSFORMATION               |                            |      1 |        |     25 |00:00:08.15 |   34905 |  13843 |   8248 |       |       |          |         |
|   2 |   LOAD AS SELECT (CURSOR DURATION MEMORY)| SYS_TEMP_0FD9D6624_E259E68 |      1 |        |      0 |00:00:01.26 |   23706 |      0 |   5593 |  2070K|  2070K|          |         |
|   3 |    TABLE ACCESS FULL                     | T1                         |      1 |    907K|    907K|00:00:00.21 |   18024 |      0 |      0 |       |       |          |         |
|   4 |   SORT GROUP BY                          |                            |      1 |     25 |     25 |00:00:06.89 |   11193 |  13843 |   2655 |  6144 |  6144 | 6144  (0)|         |
|*  5 |    HASH JOIN                             |                            |      1 |     14M|     14M|00:00:03.55 |   11193 |  13843 |   2655 |    44M|  6400K|   64M (1)|      23M|
|   6 |     VIEW                                 |                            |      1 |    907K|    907K|00:00:00.26 |    5598 |   5594 |      0 |       |       |          |         |
|   7 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D6624_E259E68 |      1 |    907K|    907K|00:00:00.25 |    5598 |   5594 |      0 |       |       |          |         |
|   8 |     VIEW                                 |                            |      1 |    907K|    907K|00:00:00.34 |    5595 |   5594 |      0 |       |       |          |         |
|   9 |      TABLE ACCESS FULL                   | SYS_TEMP_0FD9D6624_E259E68 |      1 |    907K|    907K|00:00:00.33 |    5595 |   5594 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("T1A"."OBJECT_ID"="T1B"."OBJECT_ID")


With slightly fewer rows in t1 (907K vs. 989K) we write 5,593 blocks for the materialized CTE  (instead of 9,217) and spill 2,655 blocks during the hash join (instead of 2,910). But again it’s only the hash join spill that is reported under Used-Tmp. Note, by the way, that the Used-Tmp in 12.2 was reported in KB when it’s reported in MB in 19.3.0.0.

Side note: comparing the number of rows created and blocks written for the CTE, it looks as if 19.3 is using the data blocks much more efficiently than 12.2. There’s no obvious reason for this (though a first guess would be that the older mechanism writes a GTT with pctfree=10 while the newer one avoids any free space and transactional details) so, as ever, I now have another draft blog note reminding me to investigate (eventually) what differences there are in CTE storage on the upgrade. It’s something that might make a difference in a few special cases.

With the figures from the execution plans in mind we can now look at the results of the query against v$active_session_history. Conveniently the queries took a few seconds to complete, so we’re going to see several rows for each execution.

First the results from 12.2.0.1

SESSION_ID SESSION_SERIAL#  SAMPLE_ID SQL_ID           PGA_ALLOCATED TEMP_SPACE_ALLOCATED
---------- --------------- ---------- ------------- ---------------- --------------------
        14           22234   15306218 1cwabt12zq6zb       95,962,112            1,048,576
                             15306219 1cwabt12zq6zb       97,731,584           37,748,736
                             15306220 1cwabt12zq6zb      148,194,304           77,594,624
                             15306221 1cwabt12zq6zb      168,117,248           85,983,232
                             15306222 1cwabt12zq6zb      168,117,248           90,177,536
                             15306223 1cwabt12zq6zb      168,117,248           95,420,416
                             15306224 1cwabt12zq6zb      168,117,248           98,566,144
                             15306225 1cwabt12zq6zb      168,117,248          102,760,448
                             15306226 1cwabt12zq6zb      116,933,632          103,809,024
                             15306227 1cwabt12zq6zb      116,933,632          103,809,024
                             15306228 b66ycurnwpgud        8,602,624            1,048,576

I pointed out that we had 25,600 KB reported as Used-Tmp and roughly 74,000 KB unreported – a total of nearly 100,000 KB that is reasonably close to the 103,800,000 bytes reported by ASH. Moreover the timing of the plan (loading the CTE in the first 2 seconds) seems to agree with the growth to 77,590,000 bytes of temp_space_allocated by the time we get to sample_id 15306220 in ASH. Then we have several seconds of slow growth as the hash join takes place and feeds its results up to the sort group by. At the end of the query we happen to have been lucky enough to catch one last sample just before the session had released all its temp space and ceased to be active. (Note, however, that the sql_id at that sample point was not the sql_id of our big query – and that’s a clue about one of the limitations of using ASH to find the greedy SQL.)

We see the same pattern of behaviour in 19.3.0.0:


SESSION_ID SESSION_SERIAL#  SAMPLE_ID SQL_ID           PGA_ALLOCATED TEMP_SPACE_ALLOCATED
---------- --------------- ---------- ------------- ---------------- --------------------
       136           42767    2217500 1cwabt12zq6zb      143,982,592           46,137,344
                              2217501 1cwabt12zq6zb      193,527,808           54,525,952
                              2217502 1cwabt12zq6zb      193,527,808           57,671,680
                              2217503 1cwabt12zq6zb      193,527,808           61,865,984
                              2217504 1cwabt12zq6zb      197,722,112           67,108,864
                              2217505 1cwabt12zq6zb      150,601,728           70,254,592
                              2217506 1cwabt12zq6zb      150,601,728           70,254,592

We start with an almost instantaneous jump to 46MB of temp_space_allocated in the first second of the query – that’s the 5,593 blocks of the CTE being materialized, then the slow growth of temp space as the hash join runs, spills to disc, and passes its data up to the sort group by. Again we can see that the peak usage was the CTE (46MB) plus the reported spill of 23MB (plus rounding errors and odd bits).

Preliminary Observations

Queries against ASH (v$active_session_history) can show us sessions that were holding space in the temporary tablespace at the moment a sample of active sessions was taken. This may allow us to identify greedy sessions that were causing other sessions to fail with ORA-01652 (unable to extend temp segment).

We have seen that there is at least one case where we get better information about temp space allocation from ASH than we do from the variants on v$sql_plan that include the SQL Workarea information (v$sql_workarea, v$sql_workarea_active) because the space acquired during materialization of CTEs is not reported as a “tunable SQL workarea” but does appear in the ASH temp_space_allocated.
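
To see the contrast for yourself you could poll the workarea view while the query is running – a sketch of the sort of query I mean; the hash join spill shows up with a tempseg_size, but the CTE load never appears here:

select
        sid,
        sql_id,
        operation_type,
        operation_id,
        actual_mem_used,
        tempseg_size
from
        v$sql_workarea_active
order by
        sid, operation_id
/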

At first sight it looks as if we may be able to use the query against ASH to identify the statement (by sql_id) that the greedy session was running when it consumed all the space. As we shall see in a further article, there are various reasons why this may be over-optimistic; however, in many cases there’s a fair chance that when you see the same sql_id appearing in a number of consecutive rows of the report then that statement may be the thing that is responsible for the recent growth in temp space usage – and you can query v$sql to find the text and call dbms_xplan.display_cursor() to get as much execution plan information as possible.
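
As a sketch of that follow-up, using the sql_id from the example above (substitute whatever sql_id your ASH report flags):

select sql_text from v$sql where sql_id = '1cwabt12zq6zb' and rownum = 1;

select * from table(dbms_xplan.display_cursor('1cwabt12zq6zb', null, 'allstats last'));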

Further questions
  • When does a session release the temp_space_allocated? Will the space be held (locked) as long as the cursor is open, or can it be released when it is no longer needed? Will it be held, but releasable, even after the cursor has (from the client program’s perspective) been closed?
  • Could we be fooled by a report that said a session was holding a lot of space when it didn’t need it and would have released it if the demand had appeared?
  • Under what conditions might the temp_space_allocated in an ASH sample have nothing to do with the sql_id reported in the same sample?
  • Are there any other reasons why ASH might report temp_space_allocated when an execution plan doesn’t?
  • Is temp_space_allocated only about the temporary tablespace, or could it include information about other (“permanent”) tablespaces?

Stay tuned for further analysis of the limitations of using v$active_session_history.temp_space_allocated to help identify the source of a space management ORA-01652 issue.

Machine Learning and Spatial for FREE in the Oracle Database

Rittman Mead Consulting - 17 hours 11 min ago

Last week at UKOUG Techfest19 I spoke a lot about Machine Learning, both with Oracle Analytics Cloud and, more in depth, in the database with Oracle Machine Learning, together with Charlie Berger, Oracle Senior Director of Product Management.

As mentioned several times in my previous blog posts, Oracle Analytics Cloud provides a set of tools helping Data Analysts start their path to Data Science. If, on the other hand, we're dealing with experienced Data Scientists and huge datasets, Oracle's proposal is to move Machine Learning to where the data resides with Oracle Machine Learning. OML is an ecosystem of various options to perform ML with dedicated integration with Oracle Databases or Big Data appliances.

One of the best-known branches is OML4SQL, which provides the ability to do proper data science directly in the database with PL/SQL calls! During the UKOUG TechFest19 talk Charlie Berger demoed it using a collaborative Notebook on top of an Autonomous Data Warehouse Cloud.
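
To give a flavour of the API, here's a minimal, hypothetical sketch (the model, table and column names are made up) that builds and scores a classification model with OML4SQL:

begin
        dbms_data_mining.create_model(
                model_name          => 'CHURN_MODEL',          -- hypothetical
                mining_function     => dbms_data_mining.classification,
                data_table_name     => 'CUSTOMERS',            -- hypothetical
                case_id_column_name => 'CUST_ID',
                target_column_name  => 'HAS_CHURNED'
        );
end;
/

-- score new rows with the PREDICTION() SQL function
select cust_id, prediction(churn_model using *) predicted_churn
from   customers;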

Both Oracle ADW and ATP include OML by default at no extra cost. This wasn't true for the other database offerings, in the cloud or on-premises, which required an additional option to be purchased (the Advanced Analytics option for on-premises deals). The separate license requirement was obviously something that limited the spread of this functionality, but I'm happy to say that it's going away!

Oracle's blog post yesterday announced that:

As of December 5, 2019, the Machine Learning (formerly known as Advanced Analytics), Spatial and Graph features of Oracle Database may be used for development and deployment purposes with all on-prem editions and Oracle Cloud Database Services. See the Oracle Database Licensing Information Manual (pdf) for more details.

What this means is that these features are now included for FREE within the Oracle Database license! Great news for Machine Learning and Graph Database fans alike! The following tweet from Dominic Giles (Master Product Manager for the Oracle DB) provides a nice summary of the licenses, including the two options, for Oracle DB 19c.

The #Oracle Database now has some previously charged options added to the core functionality of both Enterprise Edition and Standard Edition 2. Details in the 19c licensing guide with more information to follow. pic.twitter.com/dqkRRQvWq2

— dominic_giles (@dominic_giles) December 5, 2019

But hey, this license change also affects older versions starting from 12.2, the oldest release still in general support! So, no more excuses: perform Machine Learning where your data is, in the database with Oracle Machine Learning!

Categories: BI & Warehousing

Oracle Ranks First in all Four Use Cases for Oracle Database in Gartner’s Critical Capabilities for Operational Database Management Systems Report

Oracle Press Releases - Thu, 2019-12-05 07:00
Press Release
Oracle Ranks First in all Four Use Cases for Oracle Database in Gartner’s Critical Capabilities for Operational Database Management Systems Report
Oracle also named a Leader in 2019 Gartner Magic Quadrant for Operational Database Management Systems, recognized in every report published since 2013

Redwood Shores, Calif.—Dec 5, 2019

Oracle today announced that it has been recognized in two newly released Gartner database reports. Oracle was ranked first in all four use cases of the 2019 Gartner “Critical Capabilities for Operational Database Management Systems” report [1] and was named a Leader in Gartner’s 2019 “Magic Quadrant for Operational Database Management Systems” report [2].

The self-driving Oracle Autonomous Database eliminates complexity, human error, and manual management to enable the highest reliability, performance, and security at low cost.

“We believe Oracle’s placement in Gartner’s reports demonstrates our continued leadership in the database market and our commitment to innovation across our data management portfolio,” said Andrew Mendelsohn, Executive Vice President Database Server Technologies, Oracle. “Oracle continues to deliver unprecedented performance, reliability, security, and new cutting-edge technology via our cloud and on-premises offerings.”

Oracle believes it was positioned as a Leader in the Gartner Magic Quadrant for Operational Database Management Systems for its continued innovation across its database management portfolio. The Oracle Autonomous Database is available in the cloud and will be available for on-premises deployment soon through its Oracle Generation 2 Cloud at Customer offering. Oracle Database 19c includes all the latest database innovations, and is the long term support release for Oracle Database 12c Release 2. Oracle also recently shipped the Oracle Exadata Database Machine X8M, which employs Intel® Optane DC persistent memory and innovative database RDMA technologies to deliver up to 20x better latency than All Flash storage arrays.

For the Gartner Operational Database Management Systems Critical Capabilities report, Oracle Database once again ranked No. 1 in all four core operational database use cases: traditional transactions, distributed variable data, event processing/data in motion, and augmented transactions.

Oracle further demonstrates its commitment to delivering a converged database that makes it easy for developers to build multi-model, data-driven applications. Oracle Database now includes several different sharding capabilities, enhancing automated data distribution, which is especially important for hybrid cloud or hyperscale requirements.

Oracle Autonomous Database builds on 40 years of experience supporting the world’s most demanding applications. The first-of-its-kind, Oracle Autonomous Database uses groundbreaking machine learning to enable self-driving, self-repairing, and self-securing capabilities with cloud economies of scale and elasticity. The complete automation of database and infrastructure operations like patching, tuning and upgrading, cuts administrative costs, and allows developers, business analysts, and data scientists to focus on getting more value from data and building new innovations.

Download a complimentary copy of Gartner’s 2019 Critical Capabilities for Operational Database Management Systems here.

Download a complimentary copy of Gartner’s 2019 Magic Quadrant for Operational Database Management Systems here.

[1] Source: Gartner, Critical Capabilities for Operational Database Management Systems, Donald Feinberg, Merv Adrian, Nick Heudecker, 25 November 2019.
[2] Source: Gartner, Magic Quadrant for Operational Database Management Systems, Merv Adrian, Donald Feinberg, Nick Heudecker, 25 November 2019.

Gartner Disclaimer

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Contact Info
Victoria Brown
Oracle
+1.650.850.2009
victoria.brown@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle's products may change and remains at the sole discretion of Oracle Corporation.

E-rows / A-rows

Jonathan Lewis - Wed, 2019-12-04 07:17

This note was prompted by an error I made at the UKOUG TechFest19 yesterday. It’s fairly well-known that when you read an execution plan that includes the rowsource execution stats – so you get the E-rows (estimated) and A-rows (Actual) reported – then a sensible check of the quality of the optimizer’s calculations is to compare the estimates and actuals allowing for the fact that the E-rows is “per start” and the A-rows is “cumulative”, so A-rows = E-rows * Starts.

The error I made yesterday was to forget that this relationship isn’t always true. In particular, partitioning and parallel query introduced the need for a little flexibility in reading the numbers – which I’ll demonstrate with a couple of simple examples running under 12.2.0.1


rem
rem     Script:         estimate_actual.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem

create table pt_composite_1 (
        id,
        grp,
        small_vc,
        padding
)
nologging
partition by range(id) 
subpartition by hash (grp)
subpartitions 4
(
        partition p1 values less than (  4000),
        partition p2 values less than (  8000),
        partition p3 values less than ( 16000),
        partition p4 values less than ( 32000),
        partition p5 values less than ( 64000),
        partition p6 values less than (128000)
)
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                            id,
        trunc(rownum/100)                                 grp,
        cast(to_char(trunc(rownum/20)) as varchar2(10))   small_vc,
        cast(rpad('x',100) as varchar2(100))              padding
from
        generator       g1,
        generator       g2
where 
        rownum <= 1e5 -- > comment to avoid WordPress format issue
/

create table t3 
nologging pctfree 80
storage (initial 1M next 1M)
as
select * from pt_composite_1
/

All I’ve done is create a couple of tables with 100,000 rows each – and now I’m going to count the rows and see what I get from the execution plans with rowsource execution stats enabled:


set serveroutput off
alter session set statistics_level = all;

prompt  =================
prompt  Partition effects
prompt  =================

select count(id) from pt_composite_1;
select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

prompt  ================
prompt  Parallel effects
prompt  ================

select /*+ parallel (t3 4) */ count(id) from t3;
select * from table(dbms_xplan.display_cursor(null,null,'allstats'));

With a little cosmetic tidying, here are the two execution plans (note that I haven’t used the “last” format option when reporting the parallel plan):


=================
Partition effects
=================

select count(id) from pt_composite_1

-------------------------------------------------------------------------------------------------
| Id  | Operation            | Name           | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |                |      1 |        |      1 |00:00:00.04 |    1866 |
|   1 |  SORT AGGREGATE      |                |      1 |      1 |      1 |00:00:00.04 |    1866 |
|   2 |   PARTITION RANGE ALL|                |      1 |    100K|    100K|00:00:00.04 |    1866 |
|   3 |    PARTITION HASH ALL|                |      6 |    100K|    100K|00:00:00.04 |    1866 |
|   4 |     TABLE ACCESS FULL| PT_COMPOSITE_1 |     24 |    100K|    100K|00:00:00.04 |    1866 |
-------------------------------------------------------------------------------------------------


================
Parallel effects
================

select /*+ parallel (t3 4) */ count(id) from t3

------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |      1 |        |      1 |00:00:00.04 |      20 |      0 |
|   1 |  SORT AGGREGATE        |          |      1 |      1 |      1 |00:00:00.04 |      20 |      0 |
|   2 |   PX COORDINATOR       |          |      1 |        |      4 |00:00:00.04 |      20 |      0 |
|   3 |    PX SEND QC (RANDOM) | :TQ10000 |      0 |      1 |      0 |00:00:00.01 |       0 |      0 |
|   4 |     SORT AGGREGATE     |          |      4 |      1 |      4 |00:00:00.11 |    8424 |   7692 |
|   5 |      PX BLOCK ITERATOR |          |      4 |    100K|    100K|00:00:00.11 |    8424 |   7692 |
|*  6 |       TABLE ACCESS FULL| T3       |     61 |    100K|    100K|00:00:00.06 |    8424 |   7692 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access(:Z>=:Z AND :Z<=:Z)

As you can see, the lines specifying partition selection report E-Rows for the whole table, not for any partition-level approximation, so for operations 3 and 4 we shouldn’t multiply Starts by E-rows to compare with A-rows. (Starts = 6 for operation 3 because we have 6 range partitions, and Starts = 24 for operation 4 because at the lowest level we have a total of 6 x 4 = 24 data segments.)

For the parallel query we see the same pattern – every parallel slave reports the expected total number of rows, and every “block iterator” (rowid range) reports the expected total number of rows. Again we see that using multiplication to compare E-rows and A-rows would not be valid.

In fact it’s not just partitioning and parallelism that can cause confusion. Even something as simple as a serial nested loop join has a couple of surprises (largely thanks to the evolution of the mechanics – without a matching adjustment to the execution plans – over time). Here’s a script to generate a couple of tables, which we will then join – hinting various mechanisms for the nested loop.


create table t1
as
select
        rownum           id,
        mod(rownum,100)  n1,
        cast(lpad(rownum,20) as varchar2(20)) v1 
from
        dual
connect by
        level <= 1000 -- > comment to avoid WordPress format issue
;

create table t2
as
select  * from t1
union all
select  * from t1
union all
select  * from t1
;

create index t2_i1 on t2(id);

It’s not a subtle test – as you can see we have 3 rows in table t2 for every row in t1. So let’s pick some t1 rows and join to t2 on id. Again it’s 12.2.0.1:

set serveroutput off
alter session set statistics_level = all;

prompt  ==============================
prompt  Nested loop join (traditional)
prompt  ==============================

select
        /*+ 
                leading(t1 t2) use_nl_with_index(t2 t2_i1) 
                opt_param('_nlj_batching_enabled', 0)
                no_nlj_prefetch(t2)
        */
        t1.v1, t2.v1
from
        t1, t2
where
        t1.n1 = 4
and
        t2.id = t1.id
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

prompt  ==============================
prompt  Nested loop join with prefetch
prompt  ==============================

select
        /*+ 
                leading(t1 t2) use_nl_with_index(t2 t2_i1) 
                nlj_prefetch(t2) 
        */
        t1.v1, t2.v1
from
        t1, t2
where
        t1.n1 = 4
and
        t2.id = t1.id
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

prompt  ==============================
prompt  Nested loop join with batching
prompt  ==============================

select
        /*+ 
                leading(t1 t2) use_nl_with_index(t2 t2_i1) 
                nlj_batching(t2) 
        */
        t1.v1, t2.v1
from
        t1, t2
where
        t1.n1 = 4
and
        t2.id = t1.id
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

Here are the three plans, with a couple of comments after each – all three queries returned the same 30 (= 10 * 3) rows.


==============================
Nested loop join (traditional)
==============================

--------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |      1 |        |     30 |00:00:00.01 |      60 |
|   1 |  NESTED LOOPS                        |       |      1 |     30 |     30 |00:00:00.01 |      60 |
|*  2 |   TABLE ACCESS FULL                  | T1    |      1 |     10 |     10 |00:00:00.01 |      15 |
|   3 |   TABLE ACCESS BY INDEX ROWID BATCHED| T2    |     10 |      3 |     30 |00:00:00.01 |      45 |
|*  4 |    INDEX RANGE SCAN                  | T2_I1 |     10 |      3 |     30 |00:00:00.01 |      15 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T1"."N1"=4)
   4 - access("T2"."ID"="T1"."ID")

This is the original nested loop structure (apart from the “batched” option that appeared in 12c) and follows the rule/guideline:

  • Operation 2 operates once and returns the 10 rows predicted.
  • Operation 3 is started 10 times by operation 1, with a prediction of 3 rows per start – and the actual comes out at 30 rows.
  • Operation 4 is started 10 times (once for each start of operation 3), with a prediction of 3 rowids per start – and the actual comes out at 30 rows.
==============================
Nested loop join with prefetch
==============================

-------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |      0 |        |      0 |00:00:00.01 |       0 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T2    |      1 |      3 |     30 |00:00:00.01 |      60 |
|   2 |   NESTED LOOPS                      |       |      1 |     30 |     30 |00:00:00.01 |      30 |
|*  3 |    TABLE ACCESS FULL                | T1    |      1 |     10 |     10 |00:00:00.01 |      15 |
|*  4 |    INDEX RANGE SCAN                 | T2_I1 |     10 |      3 |     30 |00:00:00.01 |      15 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T1"."N1"=4)
   4 - access("T2"."ID"="T1"."ID")
 

Again in the order of rowsource generation

  • Operation 3 starts once with a prediction of 10 rows and the rule works.
  • Operation 4 is started 10 times by operation 2, with a prediction of 3 rows (rowids) per start, and the rule works.
  • Operation 2 was started once by operation 1, with a prediction of 30 rows (rowids), and the rule works.
  • Operation 1 starts once, but the prediction is reported as the value you would have got from the original NLJ shape – and breaks the rule.
==============================
Nested loop join with batching
==============================

------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |      1 |        |     30 |00:00:00.01 |      60 |
|   1 |  NESTED LOOPS                |       |      1 |     30 |     30 |00:00:00.01 |      60 |
|   2 |   NESTED LOOPS               |       |      1 |     30 |     30 |00:00:00.01 |      30 |
|*  3 |    TABLE ACCESS FULL         | T1    |      1 |     10 |     10 |00:00:00.01 |      15 |
|*  4 |    INDEX RANGE SCAN          | T2_I1 |     10 |      3 |     30 |00:00:00.01 |      15 |
|   5 |   TABLE ACCESS BY INDEX ROWID| T2    |     30 |      3 |     30 |00:00:00.01 |      30 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("T1"."N1"=4)
   4 - access("T2"."ID"="T1"."ID")


In the order in which rowsources are created

  • Operation 3 starts once with a prediction of 10 rows – and the A-rows matches the rule
  • Operation 4 is started 10 times by operation 2, with a prediction of 3 rows per start – and the A-rows matches the rule.
  • Operation 5 is started 30 times by operation 1, with a prediction of 3 rows per start – again reporting the value that you would have seen from the original representation of the NLJ; the prediction obviously should be 1, so the rule is broken again.
tl;dr

It is important to remember that the basic rule of “A-rows = starts * E-rows” does not hold for the partition-related lines or the PX related lines of partitioned and parallel execution plans.

You may also find a few other cases where you need to be a little cautious about trusting the rule without first thinking carefully about the mechanics of what the shape of the plan is telling you.

Autonomous Database Choices to Fit Your Needs

Oracle Press Releases - Tue, 2019-12-03 12:09
Blog
Autonomous Database Choices to Fit Your Needs

By Juan Loaiza, Executive Vice President, Mission-Critical Database Technologies—Dec 3, 2019

The role of the enterprise database has evolved, and its importance has increased tremendously since Oracle pioneered the technology more than four decades ago. Today's databases perform near real-time processing for the most important enterprise workloads in the world, such as at telecoms and financial institutions. They handle petabytes of data, and that data comes in many forms.

At Oracle, we are bringing enterprise-grade database technology to the cloud and making it dramatically simpler and more cost effective. Now every user in enterprises of every size can get the benefits of database technology that was previously restricted to mission-critical systems. The world’s most sophisticated database is now also the world’s simplest.

Automation, Security, and Dynamic Elasticity

Oracle Autonomous Database supports all kinds of workloads—transaction processing, data warehousing, and mixed workloads. And with Autonomous Database, most of the management work can be automated.

Databases—especially mission-critical enterprise databases—have traditionally required a lot of oversight and manual management. With Autonomous Database, we’ve automated most of these operational tasks. The Autonomous Database uses machine learning and automation to help eliminate complexity and human error, enabling higher reliability and more efficiency.

This automation also makes Autonomous Database much more secure than other databases because it constantly updates and patches itself in response to the latest threats. Additionally, Autonomous Database can automatically scale up or down, and you pay only for the compute and storage you use. It's a dramatically better database experience.

Bringing the Cloud to You

Despite the many benefits of the public cloud, we recognize that customers — particularly large enterprises — can’t move everything to the cloud. There are many reasons for that, including regulatory compliance concerns, latency issues, and application integration complexity. That's why at OpenWorld 2019 we announced Oracle Gen 2 Exadata Cloud at Customer.

The idea of Exadata Cloud at Customer is, if you can't come to the public cloud, we'll bring the cloud to you. We take the same hardware, software, and APIs that we have in our Exadata Cloud Service in the public cloud, and we deploy them in the customer's data center. The customer stays in control of all their data, and all of their security policies can remain the same, which makes it a lot easier to adopt. Customers get all the benefits of cloud technology, but they’re also able to control the compliance and policies that had previously prevented them from going to the cloud.

Today we have Oracle Database available as a cloud service on Exadata inside customers’ data centers, and in 2020 we also plan to bring Autonomous Database to customers’ data centers.

Get the Best for Free

Another Autonomous Database option available to customers is Oracle Cloud Free Tier. The free tier can be used by anyone, but it is particularly interesting for developers, students, and small businesses. The free tier makes it super easy to learn Autonomous Database, or to use it to develop new applications.

A lot of clouds have a free tier, but you normally don’t get their best database. You normally get a lower-end commodity database. In the Oracle Cloud Free Tier, Oracle is providing the full Autonomous Database running on the Exadata platform—the same database and platform that is used by some of the biggest enterprises in the world.

With support for online transaction processing or data warehousing, public cloud or Cloud at Customer deployment, and a free tier for students and developers, Oracle Autonomous Database supports any kind of workload, deployment, or user with the simplest ease of use and the highest elasticity and security.

All the previously existing barriers to running databases in the cloud have now been eliminated.

k8s info: VMware Tanzu Octant - A web-based, highly extensible platform for developers to better understand the complexity of Kubernetes clusters

Pas Apicella - Tue, 2019-12-03 10:33
Octant is a tool for developers to understand how applications run on a Kubernetes cluster. It aims to be part of the developer's toolkit for gaining insight and approaching the complexity found in Kubernetes. Octant offers a combination of introspective tooling, cluster navigation, and object management, along with a plugin system to further extend its capabilities.

So how would I install this?

1. First, on my k8s cluster let's create a deployment and a service. You can skip this step if you already have workloads on your cluster. These commands will work on any cluster, as the image exists on DockerHub itself, so as long as you can get to DockerHub these kubectl commands will work.

$ kubectl run pbs-demo --image=pasapples/pbs-demo-image --replicas=2 --port=8080
$ kubectl expose deploy pbs-demo --type=LoadBalancer --port=80 --target-port=8080
$ http http://101.195.48.144/customers/1

HTTP/1.1 200
Content-Type: application/hal+json;charset=UTF-8
Date: Tue, 03 Dec 2019 16:11:54 GMT
Transfer-Encoding: chunked

{
    "_links": {
        "customer": {
            "href": "http://101.195.48.144/customers/1"
        },
        "self": {
            "href": "http://101.195.48.144/customers/1"
        }
    },
    "name": "pas",
    "status": "active"
}

2. To install Octant you can view the instructions on the GitHub page as follows:

https://github.com/vmware-tanzu/octant

Given I am on a Mac, it's installed using brew as shown below. For other OSes refer to the link above.

$ brew install octant

3. That's it. You can now launch the UI as shown below.

$  octant

2019-12-03T21:47:56.271+0530 INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "deployment/configuration", "module-name": "overview"}
2019-12-03T21:47:56.271+0530 INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "overview/containerEditor", "module-name": "overview"}
2019-12-03T21:47:56.271+0530 INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "overview/serviceEditor", "module-name": "overview"}
2019-12-03T21:47:56.271+0530 INFO module/manager.go:79 registering action {"component": "module-manager", "actionPath": "octant/deleteObject", "module-name": "configuration"}
2019-12-03T21:47:56.272+0530 INFO dash/dash.go:370 Using embedded Octant frontend
2019-12-03T21:47:56.277+0530 INFO dash/dash.go:349 Dashboard is available at http://127.0.0.1:7777

Octant should immediately launch your default web browser on 127.0.0.1:7777

And to view our deployed application!!!!

[screenshot: the pbs-demo deployment displayed in the Octant dashboard]

It's a nice UI and it even has the ability to switch to a different k8s context from the menu bar itself.

[screenshot: Octant's context switcher in the menu bar]

More Information

1. Seeing is Believing: Octant Reveals the Objects Running in Kubernetes Clusters
https://blogs.vmware.com/cloudnative/2019/08/12/octant-reveals-objects-running-in-kubernetes-clusters/

2. GitHub project page
https://github.com/vmware-tanzu/octant

Categories: Fusion Middleware

k8s info: kubectx and kubens to the rescue

Pas Apicella - Tue, 2019-12-03 05:54
kubectx is a utility to manage and switch between kubectl(1) contexts. To me this is so handy I can't live without it. I am constantly using k8s everywhere from PKS (Pivotal Container Service) clusters, GKE clusters, minikube and wherever I can get my hands on a cluster.

So when I heard about kubectx I knew I couldn't live without it, and it makes my life so much easier. Here's how.

Where is my current k8s context, and what other contexts could I potentially switch to?

[screenshot: kubectx output listing the available contexts, highlighting the current one]

Ok so I am in the k8s cluster with the context of "apples". Let's switch to "lemons" then:

[screenshot: kubectx lemons switching the current context]

It's really as simple as that. In my world every k8s cluster is named after a FRUIT.

Finally, if you wish to set the current context's namespace you can use "kubens" to do that just as easily, as shown below.

[screenshot: kubens listing namespaces and switching the active one]

More Information

https://github.com/ahmetb/kubectx

https://formulae.brew.sh/formula/kubectx
Categories: Fusion Middleware

Joined the ranks of the 100+ CKA/CKAD certified Pivotal Platform Architects

Pas Apicella - Tue, 2019-12-03 05:22
I am now officially CKAD certified; in fact, I am Cloud Foundry certified as well. Great to be certified with the leaders in container technology, both with PaaS and CaaS.

Categories: Fusion Middleware

Observation regarding Interval partitioning

Tom Kyte - Mon, 2019-12-02 17:52
Hi, I had some observations regarding Interval partitioning when I was looking into one issue. Below is the use case. For example, we have two tables: CREATE TABLE TEST_GURU_1 ( ENAME VARCHAR2(500), EMPDATE DATE ) ...
Categories: DBA Blogs

Parallel execution of procedures like multithreading in Java

Tom Kyte - Mon, 2019-12-02 17:52
Hi All, Consider the sample data model below: the application maintains information for different countries - states - cities (each of these is an individual table). At the end of each quarter we do an assessment and calculate different metrics at countr...
Categories: DBA Blogs

A stored procedure taking 98% CPU time as a percentage of Elapsed Time

Tom Kyte - Mon, 2019-12-02 17:52
A stored procedure is taking 98% CPU time as a percentage of Elapsed Time. Please provide us with the fix and recommendations on what to check.
Categories: DBA Blogs

Password change of remote database user by dblink but showing error as "ORA-06550: line 1, column 8: PLS-00352: Unable to access another database 'SYS_DBALINK"

Tom Kyte - Mon, 2019-12-02 17:52
Hi, I created a form in Oracle APEX, and when I click the submit button a stored procedure runs that changes the remote database user's password over a dblink, but I am getting the error "ORA-06550: line 1, column 8: PLS-00352: Unable to access another...
Categories: DBA Blogs

insert records into multiple tables

Tom Kyte - Mon, 2019-12-02 17:52
Hi, how can we insert records into multiple tables without using INSERT ALL, with a single INSERT statement only? Is that possible?
Categories: DBA Blogs

DDL related parse, execute, fetch

Tom Kyte - Mon, 2019-12-02 17:52
Dear Mr. Tom, When we issue a DML statement it undergoes the PARSE, EXECUTE, FETCH process and the work happens in the Database Buffer Cache. Kindly tell me: when we issue a DDL, PARSE and EXECUTE are understandable, but what about FETCH? What it will...
Categories: DBA Blogs

Plan changes when json_arrayagg function got added

Tom Kyte - Mon, 2019-12-02 17:52
Team, could you please help us understand why the plan changes when the json_arrayagg function is added? demo@PDB1> create table t as select * from all_objects; Table created. demo@PDB1> alter table t add constraint t_pk primary ...
Categories: DBA Blogs

transportable DBF Import in 12c

Tom Kyte - Mon, 2019-12-02 17:52
I'm trying to import transportable data files into Oracle DB 12.2. These files were exported as transportable from Oracle DB 11.1. I receive the following error: ORA-39123: Data Pump transportable tablespace job aborted ORA-19721: Cannot find dat...
Categories: DBA Blogs
