Ask The Real Tom

July 28, 2010



Enable and Validate Constraints in Parallel (Oracle)

Enabling and validating constraints can take a huge amount of time in large Oracle databases.
To speed up the validation, it can be done in parallel.
This can save you valuable time in release windows.

  1. Enable the constraint with the NOVALIDATE option
  2. Set a parallel degree on the table, or force parallel DDL on the session
  3. Enable the constraint with validation

Example:

ALTER TABLE T_1 MODIFY (colname CONSTRAINT NC_1 NOT NULL DISABLE);
ALTER TABLE T_1 ENABLE NOVALIDATE CONSTRAINT NC_1;
ALTER SESSION FORCE PARALLEL DDL PARALLEL 8;
ALTER TABLE T_1 PARALLEL 8;
ALTER TABLE T_1 ENABLE VALIDATE CONSTRAINT NC_1;
ALTER SESSION DISABLE PARALLEL DDL;
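
Afterwards you may want to reset the parallel degree of the table to its previous value (e.g. ALTER TABLE T_1 NOPARALLEL;) and verify the constraint state. A minimal check, assuming the table and constraint names from the example above:

SELECT constraint_name, status, validated
FROM   dba_constraints
WHERE  table_name = 'T_1'
AND    constraint_name = 'NC_1';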



ORA-27102: out of memory Linux-x86_64 Error: 28: No space left on device

Problem:


SQL> startup nomount
ORA-27102: out of memory
Linux-x86_64 Error: 28: No space left on device

There are 64G of physical memory available.
shmmax is set to 52G.
But it is still not possible to start the Oracle instance with more than a 30G SGA.

Solution:

kernel.shmall is set too small.


cat /proc/sys/kernel/shmall
8248733
getconf PAGE_SIZE
4096

This means that only 8248733 memory pages can be allocated.
8248733 x 4096 = ~32G

To allocate a 50G SGA, shmall needs to be set to at least 13107200 (13107200 x 4096 = 50G).
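
A sketch of how the change can be applied (the value 13107200 matches the 50G calculation above; adjust it to your SGA size):

# set the new value online
sysctl -w kernel.shmall=13107200

# make it persistent across reboots
echo "kernel.shmall = 13107200" >> /etc/sysctl.conf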

July 15, 2010



ORA-56708: Could not find any datafiles with asynchronous i/o capability


I tried to run I/O calibration to check what the system is able to handle,
but I ran into the following error (using ASM / ASMLib):


ORA-56708: Could not find any datafiles with asynchronous i/o capability
ORA-06512: at "SYS.DBMS_RMIN", line 456
ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 1285
ORA-06512: at line 6
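
For reference, the calibration can be started with a PL/SQL block along these lines (the number of disks and the latency target are illustrative values):

SET SERVEROUTPUT ON
DECLARE
  l_latency INTEGER;
  l_iops    INTEGER;
  l_mbps    INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 4,    -- illustrative value
    max_latency        => 10,   -- illustrative value, in ms
    max_iops           => l_iops,
    max_mbps           => l_mbps,
    actual_latency     => l_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops = ' || l_iops);
  DBMS_OUTPUT.PUT_LINE('max_mbps = ' || l_mbps);
  DBMS_OUTPUT.PUT_LINE('latency  = ' || l_latency);
END;
/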

The ASM datafiles show ASYNC_ON:

SELECT name, asynch_io
FROM v$datafile f,v$iostat_file i
WHERE f.file# = i.file_no
AND filetype_name = 'Data File'
/
+DG1/xxxx/datafile/system.260.721649043
ASYNC_ON
....

Setting filesystemio_options did not help either:

alter system set filesystemio_options=setall scope=spfile;
shutdown immediate
startup

I guess that the problem is that most datafiles (smallfiles) are full (they have reached their maximum number of blocks), so the I/O calibration tool cannot write to these files.
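
A quick way to check that theory is to compare the allocated and maximum blocks per datafile, for example:

SELECT file_name, blocks, maxblocks, autoextensible
FROM   dba_data_files
ORDER  BY file_name;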

System AIO slots:

To find out the maximum available asynchronous I/O slots:
$ cat /proc/sys/fs/aio-max-nr
65536

To find out how many are currently in use:
$ cat /proc/sys/fs/aio-nr
65536

Increase fs.aio-max-nr if necessary (or restart some databases).

July 14, 2010



commit rate sampler for oracle


Today I wrote a simple commit sampler for Oracle databases. With this simple tool it is possible to see the commit rate on the database.

It can be useful for comparing application performance on batch processing systems and for analyzing commit-related waits.
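
For the wait side, a query along these lines can be put next to the commit rate (a minimal sketch):

SELECT event, total_waits, ROUND(time_waited_micro / 1e6) AS seconds_waited
FROM   v$system_event
WHERE  event IN ('log file sync', 'log file parallel write');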

Usage:
@snapcommit <interval in seconds> <number of snapshots>

Example:
@snapcommit 1 20

snapcommit.sql

set tab off verify off linesize 299
set termout on serveroutput on size 1000000 format wrapped

define snap_interval="&1"
define snap_limit="&2"

declare
  cmt_start number;
  cmt_last  number;
  cmt_curr  number;
  cmt_diff  number;

  -- small print helper
  procedure oprint(p_var varchar2) as
    pragma autonomous_transaction;
  begin
    dbms_output.put_line(p_var);
  end oprint;
begin
  for x in 1 .. &snap_limit + 1
  loop
    cmt_last := cmt_curr;

    -- current value of the 'user commits' statistic
    select value into cmt_curr from v$sysstat where name = 'user commits';

    if (x = 1) then
      cmt_start := cmt_curr;
    else
      cmt_diff := cmt_curr - cmt_last;
      oprint('commits: ' || cmt_diff || ' in ' || &snap_interval || 's');
    end if;

    dbms_lock.sleep(&snap_interval);
  end loop;

  cmt_diff := cmt_curr - cmt_start;
  dbms_output.put_line('total commits: ' || cmt_diff);
end;
/

July 9, 2010



access to anydata columns (in DBA_HIST_SQLBIND)


Just a short one today:
How do you access the value_anydata column of dba_hist_sqlbind?

Solution:
Use the static accessor functions of the ANYDATA object type.

Example

select SQL_ID, NAME, DATATYPE_STRING, VALUE_STRING,
       anydata.AccessTimestamp(VALUE_ANYDATA)
from   DBA_HIST_SQLBIND
where  sql_id = 'f17fd6xuvxc7x'
and    name = ':1';
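
The same pattern works for the other bind datatypes; a sketch that picks the matching accessor based on DATATYPE_STRING (the sql_id is the one from the example above):

select sql_id, name, datatype_string,
       case
         when datatype_string = 'NUMBER'       then to_char(anydata.accessnumber(value_anydata))
         when datatype_string like 'VARCHAR2%' then anydata.accessvarchar2(value_anydata)
         when datatype_string = 'DATE'         then to_char(anydata.accessdate(value_anydata))
       end as bind_value
from   dba_hist_sqlbind
where  sql_id = 'f17fd6xuvxc7x';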

July 8, 2010



Oracle Web Application Trace – Tuning Approach


Situation:

A user claims that an Oracle web application is slow. The web application uses in-database Java.

Test 1: Determine the slow part

The web application can be run in SQL*Plus with the following PL/SQL:

set serveroutput on
set timing on
variable doc_info varchar2(4000)
declare
name owa.vc_arr;
val owa.vc_arr;
res owa.vc_arr;
simple_list__ owa_util.vc_arr;
complex_list__ owa_util.vc_arr;
c clob;
begin
name(1) := '1';
val(1) := '1';
OWA.init_cgi_env (0, name, val);
HTP.init;
-- if ((owa_match.match_pattern('jdb', simple_list__, complex_list__, true))) then
jdb(1, name, val, res);
-- end if;
wpg_docload.get_download_file(:doc_info);
OWA_UTIL.showpage;
end;
/

The result showed that the application uses 4 frames. This means that 1 web request is followed by 4 subsequent requests.
We also see that the OWA functions are not the problem.

Test 2: Trace the application

To trace just the relevant part, I traced by service and module name:


select username,client_identifier,process,module from v$session where module like 'httpd%';

exec DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE('SYS$USERS','httpd@svzpca2 (TNS V1-V3)', DBMS_MONITOR.ALL_ACTIONS,TRUE,TRUE,NULL);

exec DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE ('SYS$USERS','httpd@svzpca2 (TNS V1-V3)', DBMS_MONITOR.ALL_ACTIONS,NULL);

cd udump
tkprof
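
The tkprof call looked roughly like this (the trace file name is a placeholder; sorting by elapsed execution and fetch time puts the expensive statements on top):

tkprof <sid>_ora_<pid>.trc trace_report.txt sys=no sort=exeela,fchela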

The trace showed one query taking about 10 to 20 seconds per execution:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       15      0.00       0.00          0          0          0           0
Execute     15    155.99     152.95          1       4147       8870           0
Fetch       24      0.35       0.38          1       3632          0         134
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       54    156.34     153.34          2       7779       8870         134

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 192 (recursive depth: 1)

Now it's a simple SQL tuning task to solve the problem.

July 2, 2010



Partition on an Index Organized Table (IOT)


Today an application developer asked me how he should partition his index organized table.

All queries include the primary key; there are no range scans and no full scans.

So I told him it doesn't make sense to partition this table for performance reasons.
To prove my statement I did a small test case.


create table part_range_iot (c1 number, c2 varchar2(20), c3 char(1),
constraint part_range_iot_pk primary key (c1))
partition by HASH (c1)
PARTITIONS 100
/
begin
for i in 1..999
loop
insert into part_range_iot values (i,'abcd','y');
end loop;
commit;
end;
/

set autotrace on
set timing on

select * from part_range_iot where c1 = 99;

The output from the query was

Execution Plan
----------------------------------------------------------
Plan hash value: 3787470184

------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 28 | 1 (0)| 00:00:01 | | |
| 1 | TABLE ACCESS BY GLOBAL INDEX ROWID| PART_RANGE_IOT | 1 | 28 | 1 (0)| 00:00:01 | 58 | 58 |
|* 2 | INDEX UNIQUE SCAN | PART_RANGE_IOT_PK | 1 | | 1 (0)| 00:00:01 | | |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("C1"=99)

Statistics
----------------------------------------------------------
104 recursive calls
0 db block gets
14 consistent gets
0 physical reads

So 14 consistent gets were used.

The same case without partitioning

create table part_range_iot (c1 number, c2 varchar2(20), c3 char(1),
constraint part_range_iot_pk primary key (c1))
/

begin
for i in 1..1000
loop
insert into part_range_iot values (i,'abcd','y');
end loop;
commit;
end;
/

select * from part_range_iot where c1 = 99;

The results on the IOT table without partitions

Execution Plan
----------------------------------------------------------
Plan hash value: 2398071293

-------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 28 | 1 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| PART_RANGE_IOT | 1 | 28 | 1 (0)| 00:00:01 |
|* 2 | INDEX UNIQUE SCAN | PART_RANGE_IOT_PK | 1 | | 1 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("C1"=99)

Statistics
----------------------------------------------------------
46 recursive calls
0 db block gets
8 consistent gets
0 physical reads

Just 8 consistent gets.

Conclusion:
Partitioning an IOT on its primary key has only a negative performance impact.
The difference here is TABLE ACCESS BY INDEX ROWID versus TABLE ACCESS BY GLOBAL INDEX ROWID.
On the non-partitioned table we only have to do a unique index scan. On the partitioned table we do the same, but we have to find the right partition first.

June 22, 2010



ORA-31623: a job is not attached to this session via the specified handle


We recently faced a strange error when trying to import a schema through a database link on 11gR2.

shell> impdp system/*** SCHEMAS=LFXQAA02 directory=DATA_PUMP_DIR LOGFILE=liflexdp1.log NETWORK_LINK=tliflex01.world PARALLEL=2

Import: Release 11.2.0.1.0 - Production on Tue Jun 22 11:07:58 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning and Automatic Storage Management options

UDI-31623: operation generated ORACLE error 31623
ORA-31623: a job is not attached to this session via the specified handle
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3263
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4488
ORA-06512: at line 1

After several attempts we found out that streams_pool_size=0 was set.

alter system set streams_pool_size=40M scope=spfile;

Restart the database and run the import again.
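
Data Pump uses queue-based communication between its control and worker processes, which allocates from the streams pool, so a hard streams_pool_size=0 can break the job attach. To verify the value after the restart:

SQL> show parameter streams_pool_size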

There is also a Metalink note on this topic: 1080775.1

April 14, 2010



How do I find the device for an ASMLIB disk


Just look up the major and minor device IDs.


oracle@server1:/u00/app/oracle/ [grid11201] ll /dev/oracleasm/disks/
total 0
drwxr-xr-x 1 root root 0 2009-12-18 14:31 .
drwxr-xr-x 4 root root 0 2009-12-18 14:31 ..
brw-rw---- 1 oracle dba 253, 19 2009-12-18 14:31 ORAASM1
brw-rw---- 1 oracle dba 253, 20 2009-12-18 14:31 ORAASM2

oracle@server1:/u00/app/oracle/ [grid11201] ll /dev/dm-*|grep '253, 19'
brw-r----- 1 root disk 253, 19 2009-12-18 14:31 /dev/dm-19

oracle@server1:/u00/app/oracle/ [grid11201] ll /dev/disk/by-id/ |grep dm-19
lrwxrwxrwx 1 root root 11 2009-12-18 14:31 scsi-sanvg-oraasm01 -> ../../dm-19
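
The lookup can also be scripted; a minimal sketch, assuming the disk label ORAASM1 from above and GNU stat:

LABEL=ORAASM1
# major/minor (hex) of the ASMLIB disk
MAJMIN=$(stat -c '%t %T' /dev/oracleasm/disks/$LABEL)
# find the device-mapper node with the same major/minor
for d in /dev/dm-*; do
  [ "$(stat -c '%t %T' "$d")" = "$MAJMIN" ] && echo "$LABEL -> $d"
done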

March 12, 2010



Quick Help: TNS – 12519 TNS: no appropriate service handler found


If you get a "TNS-12519: no appropriate service handler found", the very first thing to check is the sessions/processes limits.

select * from v$resource_limit

RESOURCE_NAME                  CURRENT_UTILIZATION MAX_UTILIZATION INITIAL_AL LIMIT_VALU
------------------------------ ------------------- --------------- ---------- ----------
processes                                       47             300        300        300
sessions                                        52             309        335        335
enqueue_locks                                   34              91       5008       5008
enqueue_resources                               53             260       2514  UNLIMITED
ges_procs                                        0               0          0          0
ges_ress                                         0               0          0  UNLIMITED
ges_locks                                        0               0          0  UNLIMITED
ges_cache_ress                                   0               0          0  UNLIMITED
ges_reg_msgs                                     0               0          0  UNLIMITED
ges_big_msgs                                     0               0          0  UNLIMITED
ges_rsv_msgs                                     0               0          0          0
gcs_resources                                    0               0          0          0
gcs_shadows                                      0               0          0          0
dml_locks                                       59             454       1472  UNLIMITED
temporary_table_locks                            0              23  UNLIMITED  UNLIMITED
transactions                                  1409            1442        368  UNLIMITED
branches                                         0              16        368  UNLIMITED
cmtcallbk                                        0               3        368  UNLIMITED
sort_segment_locks                               0              40  UNLIMITED  UNLIMITED
max_rollback_segments                           29              58        368      65535
max_shared_servers                               1               1  UNLIMITED  UNLIMITED
parallel_max_servers                             0               8          8       3600

As visible in MAX_UTILIZATION for processes, the maximum number of processes was reached at some point.

Just increase it and restart your database.

alter system set processes=600 scope=spfile;

