Speeding up gathering statistics a bit in 11g.

A short note about gathering statistics. We have quite a large database of a few TB (a data warehouse), and I was curious how long the statistics run takes, and whether I could speed it up a little. Database: 11g Release 1.

First, the standard way of gathering statistics for a test user:
exec dbms_stats.gather_schema_stats('DWH_USER_TEST', DBMS_STATS.AUTO_SAMPLE_SIZE);

This lasted about 8 hours.

The machine has two quad-core CPUs at its disposal, so this must and can be used. It must be said, by the way, that the 'degree' of the tables is 1 (tables migrated from 8i). So that's why it took that long.

I put the degree preference on the schema (in 11g you can set this preference at several levels nowadays):
– exec dbms_stats.set_schema_prefs('DWH_USER_TEST', 'DEGREE', '6');
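To verify that the preference actually took effect you can query it back with DBMS_STATS.GET_PREFS, which falls back from table to schema to global level. A quick sketch (the table name SOME_TABLE is made up; use one of your own):

```sql
-- Returns the effective DEGREE preference for this table,
-- i.e. the schema-level value just set, unless a table-level
-- preference overrides it.
select dbms_stats.get_prefs('DEGREE', 'DWH_USER_TEST', 'SOME_TABLE') as degree
from   dual;
```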

Ran the statistics:

– exec dbms_stats.gather_schema_stats('DWH_USER_TEST', DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'for all columns size auto', degree => DBMS_STATS.DEFAULT_DEGREE);

This lasted 1 hour!

Is there a penalty? Yes, a bit: the machine had an average load of 14, which is reasonably busy, so we definitely have to run this during the night and beware of the users' scheduled batches. Or set the degree lower…. Anyway, it's important to keep control over when it runs.
The number of processes I counted while it ran was 14, which also depends on the parameter 'parallel_threads_per_cpu', currently 2 on our database. So: apparently 6 x 2 = 12 slaves plus its own processes.
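You can count the parallel slaves yourself while the gather is running. A small sketch using the standard PX views (v$px_process lists the active slave processes, v$px_session ties them to their query coordinator):

```sql
-- Total number of active parallel execution slaves.
select count(*) as px_slaves
from   v$px_process;

-- Slaves grouped per query coordinator session.
select qcsid, count(*) as slaves
from   v$px_session
where  sid <> qcsid
group  by qcsid;
```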

Now this was only a test user. The production user is more than 10 times as big. Gathering the statistics of this user the old-fashioned way took 2 days and 15 hours. I put a degree of 5 on the schema and ran it again: 10 hours and 10 minutes… Still quite a lot, but now I only need one night to accomplish the task.

Nice readable resources regarding this subject:


By | November 23rd, 2009 | Categories: Database

Using http://localhost:8080 with APEX (11gR1): horrible performance

I had to install an 11g Release 1 on Windows for an APEX demonstration.
APEX comes standard with 11g, so that's nice. When using the Embedded PL/SQL Gateway you don't even need Apache…

But when I'm using http://localhost:8080/apex/apex_admin, the login page only pops up after a minute or two. And every page I use has the same horrible performance. Using the loopback address on the server, by the way, gives the same result. Interesting too: http://localhost:1158/em does not perform as I would expect either.

From outside the server though, using the full computer name, such as http://<host>.<domain>:8080/apex/apex_admin, it runs like a charm. So it seemed the cause was not in the Oracle corner. Metalink gave no solution whatsoever, another clue that this was not Oracle.

I was not able to install Mozilla Firefox (firewalls etc.) or another browser on that server to verify in a simple way whether the cause was indeed Internet Explorer (7.0.5730.11). But after a while, deep down in a forum, I found out that it was a setting in Internet Explorer on the server:
Tools –> Internet Options –> Advanced tab –> HTTP 1.1 settings. Uncheck the item 'Use HTTP 1.1 through proxy connections'.

Looks like this (screenshot of the Internet Explorer HTTP 1.1 settings):


And it's working normally again…

Seems like a riddle: it begins with an M, ends with icrosoft…

By | November 21st, 2009 | Categories: Database

Determining the compression factor in 11g Release 1

Working on 11g Release 1, managers wanted to know what kind of compression factor we achieved in the data warehouse.

Amount of (compressed) data: just about 1.5 TB, excluding indexes.

In all my naivety I thought Oracle delivered the package DBMS_COMPRESSION, which would show me the achieved compression rate per table. But in 11gR1 there is no such package; DBMS_COMPRESSION is only delivered with Release 2.
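A quick way to check whether the package exists in your release (run as a privileged user):

```sql
-- On 11gR1 this returns no rows; on 11gR2 it lists the
-- package specification and body owned by SYS.
select owner, object_name, object_type
from   dba_objects
where  object_name = 'DBMS_COMPRESSION';
```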

So I downloaded the DBMS_COMP_ADVISOR package from:


The second disappointment appeared after reading the readme file:


This package can be used on Oracle Database 9i Release 2 through 11g Release 1 to estimate the compression ratio for a table and to determine the space saving that can be potentially realized using the Oracle Database 11g OLTP Table Compression feature (part of the Advanced Compression option). Compression ratio is defined as the ratio of table size before and after compression.

–> Discovered that's not what I want: not the estimate before the compression, but the ratio actually achieved afterwards!

The following example made it clear. This is the ratio Oracle expects to achieve, run against an uncompressed table:
SQL> set serveroutput on
SQL> exec dbms_compression.getratio('SH','SALES','OLTP',10)
Sampling table: SH.SALES
Sampling percentage: 10%
Compression Type: OLTP
Expected Compression ratio with Advanced Compression Option: 2.96
PL/SQL procedure successfully completed.

O.k., then get it the old-fashioned way: compare a compressed and an uncompressed copy of the table.

I used an uncompressed tablespace, created an uncompressed table, and imported some data.
Then determined how much space it used:

select sum(bytes)/1024/1024/1024 "Size in GB"
from dba_segments
where owner = 'JOB_DWH'
and segment_name = 'DWH_TABLE';
Size in GB

Just to check how many rows:
SQL> select count(*) from job_dwh.dwh_table;

Then compress the table (note: a MOVE invalidates the table's indexes, so rebuild them afterwards):
SQL> alter table job_dwh.dwh_table move compress;

select sum(bytes)/1024/1024/1024 "Size in GB"
from dba_segments
where owner = 'JOB_DWH'
and segment_name = 'DWH_TABLE';
Size in GB

Compression factor of 2.9.
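The whole measurement can also be sketched in one go, without compressing the original in place: make a compressed copy next to the uncompressed table and divide the segment sizes. A hedged example (JOB_DWH.DWH_TABLE is the uncompressed table from above; the copy name DWH_TABLE_COMP is made up):

```sql
-- Compressed copy of the uncompressed source table.
create table job_dwh.dwh_table_comp compress
  as select * from job_dwh.dwh_table;

-- Compression factor = uncompressed size / compressed size.
select max(decode(segment_name, 'DWH_TABLE',      bytes)) /
       max(decode(segment_name, 'DWH_TABLE_COMP', bytes)) as compression_factor
from   dba_segments
where  owner = 'JOB_DWH'
and    segment_name in ('DWH_TABLE', 'DWH_TABLE_COMP');

-- Clean up the helper table afterwards.
drop table job_dwh.dwh_table_comp purge;
```

This keeps the original table (and its indexes) untouched, at the cost of temporarily needing space for the copy.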

By | November 13th, 2009 | Categories: Database

PL/SQL Best Practices with Steven Feuerstein

An old seminar from Steven Feuerstein:


By | November 5th, 2009 | Categories: Oratube

Common Mistakes in Oracle PL/SQL Programming

An old little seminar from Steven Feuerstein:


By | November 5th, 2009 | Categories: Oratube