Ask The Real Tom

May 14, 2016



The blog is dead – long live the blog

Filed under: Uncategorized — admin @ 6:02 pm

After quite some time, I have started a new blog about data analytics and software development: http://www.datageek.ch/

March 21, 2011



enq:CR – block range reuse ckpt

Filed under: Uncategorized — admin @ 1:29 pm

“enq:CR – block range reuse ckpt”

http://forums.oracle.com/forums/thread.jspa?messageID=9170907

I think the problem is this: when doing parallel DML, Oracle uses direct path reads to read the data. To make sure the data being read is consistent, though, it needs to checkpoint the blocks it is going to read. The CR enqueue seems to be used to synchronize CKPT with the PX slaves reading the data. It goes like this: a PX slave requests CKPT to checkpoint a range of blocks it is about to read (and possibly enqueues on CR in order to be able to do so); CKPT in turn orders DBWR to write all dirty blocks in the requested range, waits for this write to complete, and signals the PX slave that the blocks are consistent and can be read. Repeat as necessary.

Now, with a really large buffer cache, a good portion of it dirty due to recent DML on the source tables, busy CPUs, and relatively slow writes, this checkpointing can build up into a major performance inhibitor when these recently touched tables are read in direct path mode: you have 8 cores and 8 parallel slaves, so CKPT, DBWR and pretty much all other background processes are competing with them for CPU time and I/O bandwidth. (They all have the same priority, so your PX slaves can easily evict CKPT or DBWR from the CPU only to immediately go back to sleep on an enqueue held by the process they just preempted.)
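To confirm that this wait really is the bottleneck, a diagnostic sketch like the following can be run while the load is active (the exact spelling of the event name may vary slightly between versions):

```sql
-- Cumulative time spent on this wait since instance startup.
select event, total_waits, time_waited
  from v$system_event
 where event = 'enq: CR - block range reuse ckpt';

-- Sessions currently stuck on it.
select sid, event, seconds_in_wait
  from v$session_wait
 where event = 'enq: CR - block range reuse ckpt';
```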

Things to try here:

1. Do a checkpoint before running the insert and see if this makes any difference.
2. Lower the degree of parallelism to 5-6.
3. If you have more than one DBWR configured, set db_writer_processes back to 1.
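Suggestions 1 and 2 can be sketched like this (the table names and the exact hint placement are illustrative, not taken from the original thread):

```sql
-- 1. Flush dirty buffers before the parallel insert, so the PX slaves
--    find fewer dirty blocks in the ranges they need to checkpoint.
alter system checkpoint;

-- 2. Run the DML with a degree of parallelism below the core count,
--    leaving CPU headroom for CKPT and DBWR (here: 6 on an 8-core box).
insert /*+ append parallel(t, 6) */ into target_table t
select /*+ parallel(s, 6) */ * from source_table s;
commit;
```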

July 30, 2010



MySQL solution: Speed up pattern searches starting with a wildcard (like ‘%abc%’)

Filed under: Uncategorized — Tags: , , , , , , , , — admin @ 10:04 am

Searches in MySQL with an expression like ” … like ‘%abc%'” can be incredibly slow on large tables.
The problem is that MySQL cannot use indexes for expressions starting with the wildcard “%”.
Nor does MySQL support function-based indexes.

I will explain a simple solution. But first let’s have a look at our problem.
Table size: 2G
Row count: 33’000
Query response time: ~ 25s

mysql> desc images;
+-------------+-----------------------------------------------------------------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+-----------------------------------------------------------------------------+------+-----+---------+----------------+
| imageid | int(11) | NO | PRI | NULL | auto_increment |
| filename | varchar(200) | YES | MUL | NULL | |
| description | varchar(200) | YES | | NULL | |
| data | longblob | YES | | NULL | |
| type | enum('coversmall','cover','coverback','screenshot','specialcover','banner') | YES | MUL | NULL | |
+-------------+-----------------------------------------------------------------------------+------+-----+---------+----------------+
5 rows in set (0.00 sec)

The index cannot be used:


mysql> explain select imageid,filename,description,type from images where filename like '%call%' order by filename;
+----+-------------+--------+------+---------------+------+---------+------+--------+-----------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------+------+---------------+------+---------+------+--------+-----------------------------+
| 1 | SIMPLE | images | ALL | NULL | NULL | NULL | NULL | 371398 | Using where; Using filesort |
+----+-------------+--------+------+---------------+------+---------+------+--------+-----------------------------+
1 row in set (0.00 sec)

The index can be used if the expression does not start with a wildcard:


mysql> explain select imageid,filename,description,type from images where filename like 'call%' order by filename;
+----+-------------+--------+-------+---------------------+---------------------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------+-------+---------------------+---------------------+---------+------+------+-------------+
| 1 | SIMPLE | images | range | idx_images_filename | idx_images_filename | 603 | NULL | 66 | Using where |
+----+-------------+--------+-------+---------------------+---------------------+---------+------+------+-------------+
1 row in set (0.00 sec)

mysql> explain select imageid,filename,description,type from images where filename like 'call%abc' order by filename;
+----+-------------+--------+-------+---------------------+---------------------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------+-------+---------------------+---------------------+---------+------+------+-------------+
| 1 | SIMPLE | images | range | idx_images_filename | idx_images_filename | 603 | NULL | 66 | Using where |
+----+-------------+--------+-------+---------------------+---------------------+---------+------+------+-------------+

Since we are searching on the filename, we add a new column for it: prepend a fixed character to the string and index the new column.


mysql> alter table images add vfilename varchar(201);

Query OK, 33064 rows affected (3 min 4.73 sec)
Records: 33064 Duplicates: 0 Warnings: 0

update images set vfilename=concat('a',filename);

Query OK, 33064 rows affected (52.40 sec)
Rows matched: 33064 Changed: 33064 Warnings: 0

create index idx_images_vfilename on images (vfilename);

Now we change our application so that it always prepends the same character “a” to the pattern.
This way the index can be used:


mysql> explain select imageid,filename,description,type from images where vfilename like 'a%abc%' order by filename;
+----+-------------+--------+-------+----------------------+----------------------+---------+------+-------+-----------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------+-------+----------------------+----------------------+---------+------+-------+-----------------------------+
| 1 | SIMPLE | images | range | idx_images_vfilename | idx_images_vfilename | 603 | NULL | 75840 | Using where; Using filesort |
+----+-------------+--------+-------+----------------------+----------------------+---------+------+-------+-----------------------------+
1 row in set (0.00 sec)

The performance benefit is that we can now do an index scan (full/range scan), which reads much less data than a full table scan.
We reduced I/O, spared our cache, and sped up the query by orders of magnitude!
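One caveat with this approach: vfilename must stay in sync with filename on every write. A sketch of triggers that maintain it automatically (the trigger names are made up):

```sql
-- Keep vfilename = concat('a', filename) in sync on insert and update.
delimiter //
create trigger trg_images_vfilename_ins
before insert on images
for each row
set new.vfilename = concat('a', new.filename);
//
create trigger trg_images_vfilename_upd
before update on images
for each row
set new.vfilename = concat('a', new.filename);
//
delimiter ;
```

Alternatively, if the table changes rarely, the update statement above can simply be re-run periodically.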

This is a real ask the real tom solution!

July 29, 2010



Set lock_sga to true on Linux (SLES)

Filed under: oracle,Uncategorized — Tags: , , — admin @ 1:01 pm

To lock the SGA into physical memory you have to set the lock_sga parameter to true.

Usually you get the following error message:

SQL> startup
SQL> ORA-27102: out of memory
Linux-x86_64 Error: 12: Cannot allocate memory

The problem is the ulimit -l setting (max locked memory).
You have to raise it in /etc/security/limits.conf, log on to the system again, and restart the database.

cat /etc/security/limits.conf
...
oracle soft memlock unlimited
oracle hard memlock unlimited


alter system set lock_sga=true scope=spfile;
shutdown immediate
startup

And your SGA stays in physical memory.

If your SGA gets paged out, you will encounter waits like the following.
Wait Event: latch: shared pool
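After the restart, the setting can be verified from SQL*Plus (a quick sanity check, assuming a sysdba session):

```sql
-- Confirm the parameter took effect after the restart.
select name, value from v$parameter where name = 'lock_sga';
```

On Linux you can additionally check the VmLck line in /proc/&lt;pid&gt;/status for an Oracle background process to see how much memory is actually locked.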

July 14, 2010



commit rate sampler for oracle

Filed under: Uncategorized — Tags: , , , , , — admin @ 11:26 am

Today I wrote a simple commit rate sampler for Oracle databases. With this simple tool it is possible to see the commit rate on the database.

It can be useful to compare the application performance on batch processing systems and to analyze commit-related waits.

Usage: @snapcommit <interval in seconds> <number of samples>

Example:
@snapcommit 1 20

snapcommit.sql

set tab off verify off linesize 299
set termout on serverout on size 1000000 format wrapped

define snap_interval="&1"
define snap_limit="&2"

declare
  cmt_start number;
  cmt_last  number;
  cmt_curr  number;
  cmt_diff  number;

  procedure oprint(p_var varchar2) as
    PRAGMA AUTONOMOUS_TRANSACTION;
  begin
    dbms_output.put_line(p_var);
  end oprint;
begin
  for x in 1..&snap_limit+1
  loop
    cmt_last := cmt_curr;

    select value into cmt_curr from v$sysstat where name = 'user commits';
    if (x = 1) then
      cmt_start := cmt_curr;
    else
      cmt_diff := cmt_curr - cmt_last;
      oprint('commits: '||cmt_diff||' in '||&snap_interval||'s');
    end if;

    dbms_lock.sleep(&snap_interval);
  end loop;
  cmt_diff := cmt_curr - cmt_start;
  dbms_output.put_line('total commits: '||cmt_diff);
end;
/

July 9, 2010



access to anydata columns (in DBA_HIST_SQLBIND)

Filed under: Uncategorized — Tags: , , — admin @ 1:10 pm

Just a shorty today:
How do you access the anydata_value from dba_hist_sqlbind?

Solution:
use the static accessor functions of the ANYDATA object type

Example

select sql_id, name, datatype_string, value_string,
       anydata.accesstimestamp(value_anydata)
  from dba_hist_sqlbind
 where sql_id = 'f17fd6xuvxc7x'
   and name = ':1';
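ANYDATA provides one static accessor per base type, so the function has to match DATATYPE_STRING (ACCESSTIMESTAMP is shown above). A sketch for the most common bind types; the CASE branches are assumptions about how DATATYPE_STRING is spelled:

```sql
select sql_id, name, datatype_string,
       case
         when datatype_string = 'NUMBER'       then to_char(anydata.accessnumber(value_anydata))
         when datatype_string like 'VARCHAR2%' then anydata.accessvarchar2(value_anydata)
         when datatype_string = 'DATE'         then to_char(anydata.accessdate(value_anydata))
       end as bind_value
  from dba_hist_sqlbind
 where sql_id = 'f17fd6xuvxc7x';
```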

July 8, 2010



Oracle Web Application Trace – Tuning Approach

Filed under: Uncategorized — Tags: , , , , , , — admin @ 10:07 am

Situation:

The user claims that an Oracle web application is slow. The web application is using in-database Java.

Test 1: Determine the slow part

The web application can be run in SQL*Plus with the following PL/SQL:

set serveroutput on
set timing on
declare
  name owa.vc_arr;
  val  owa.vc_arr;
  res  owa.vc_arr;
  simple_list__  owa_util.vc_arr;
  complex_list__ owa_util.vc_arr;
  c clob;
begin
  name(1) := '1';
  val(1)  := '1';
  OWA.init_cgi_env(0, name, val);
  HTP.init;
  -- if ((owa_match.match_pattern('jdb', simple_list__, complex_list__, true))) then
  jdb(1, name, val, res);
  -- end if;
  wpg_docload.get_download_file(:doc_info);
  OWA_UTIL.showpage;
end;
/

The result showed that the application uses 4 frames. This means that 1 web request is followed by 4 subsequent requests.
We also see that the OWA functions are not the problem.

Test 2: Trace the application
To trace just the relevant part, I traced by service and module name:


select username,client_identifier,process,module from v$session where module like 'httpd%';

exec DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE('SYS$USERS','httpd@svzpca2 (TNS V1-V3)', DBMS_MONITOR.ALL_ACTIONS,TRUE,TRUE,NULL);

exec DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE ('SYS$USERS','httpd@svzpca2 (TNS V1-V3)', DBMS_MONITOR.ALL_ACTIONS,NULL);

cd udump
tkprof <tracefile>.trc <output>.txt sort=exeela

The trace showed one query taking about 10 to 20 seconds per execution:

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 15 0.00 0.00 0 0 0 0
Execute 15 155.99 152.95 1 4147 8870 0
Fetch 24 0.35 0.38 1 3632 0 134
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 54 156.34 153.34 2 7779 8870 134

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 192 (recursive depth: 1)

Now it is a simple SQL tuning task to solve the problem.

July 2, 2010



Partition on an Index Organized Table (IOT)

Filed under: Uncategorized — Tags: , , , , , — admin @ 12:21 pm

Today an application developer asked me how he should partition his index organized table.

All queries include the primary key; there are no range scans and no full scans.

So I told him it doesn’t make sense to partition this table for performance reasons.
To prove my statement I did a small test case.


create table part_range_iot (c1 number, c2 varchar2(20), c3 char(1),
constraint part_range_iot_pk primary key (c1))
partition by HASH (c1)
PARTITIONS 100
/
begin
for i in 1..999
loop
insert into part_range_iot values (i,'abcd','y');
end loop;
commit;
end;
/

set autotrace on
set timing on

select * from part_range_iot where c1 = 99;

The output of the query was:

Execution Plan
----------------------------------------------------------
Plan hash value: 3787470184

------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 28 | 1 (0)| 00:00:01 | | |
| 1 | TABLE ACCESS BY GLOBAL INDEX ROWID| PART_RANGE_IOT | 1 | 28 | 1 (0)| 00:00:01 | 58 | 58 |
|* 2 | INDEX UNIQUE SCAN | PART_RANGE_IOT_PK | 1 | | 1 (0)| 00:00:01 | | |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("C1"=99)

Statistics
----------------------------------------------------------
104 recursive calls
0 db block gets
14 consistent gets
0 physical reads

So 14 consistent gets were used.

The same case without partitioning:

create table part_range_iot (c1 number, c2 varchar2(20), c3 char(1),
constraint part_range_iot_pk primary key (c1))
/

begin
for i in 1..1000
loop
insert into part_range_iot values (i,'abcd','y');
end loop;
commit;
end;
/

select * from part_range_iot where c1 = 99;

The results on the IOT table without partitions

Execution Plan
----------------------------------------------------------
Plan hash value: 2398071293

-------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 28 | 1 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| PART_RANGE_IOT | 1 | 28 | 1 (0)| 00:00:01 |
|* 2 | INDEX UNIQUE SCAN | PART_RANGE_IOT_PK | 1 | | 1 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("C1"=99)

Statistics
----------------------------------------------------------
46 recursive calls
0 db block gets
8 consistent gets
0 physical reads

Just 8 consistent gets.

Conclusion:
Partitioning an IOT on its primary key has only a negative performance impact.
The difference is here:
TABLE ACCESS BY INDEX ROWID versus TABLE ACCESS BY GLOBAL INDEX ROWID.
On the non-partitioned table we just have to do a unique index scan. On the partitioned table we do the same, but we have to find the right partition first.

February 24, 2010



Extensive PL/SQL Exception Handling

Filed under: Uncategorized — Tags: , , , , — admin @ 12:09 pm

If a PL/SQL exception gets caught, it can be difficult to find the cause of the error.

To work around this, the exception handler should log what went wrong.

Suggestion

DBMS_OUTPUT.PUT_LINE ( '----------' );
DBMS_OUTPUT.PUT_LINE ( 'FORMAT_CALL_STACK:' );
DBMS_OUTPUT.PUT_LINE( DBMS_UTILITY.FORMAT_CALL_STACK() );
--
DBMS_OUTPUT.PUT_LINE ( '----------' );
DBMS_OUTPUT.PUT_LINE ( 'FORMAT_ERROR_STACK:' );
DBMS_OUTPUT.PUT_LINE( DBMS_UTILITY.FORMAT_ERROR_STACK() );
--
DBMS_OUTPUT.PUT_LINE ( '----------' );
DBMS_OUTPUT.PUT_LINE ( 'FORMAT_ERROR_BACKTRACE:' );
DBMS_OUTPUT.PUT_LINE( DBMS_UTILITY.FORMAT_ERROR_BACKTRACE() );
--
DBMS_OUTPUT.PUT_LINE ( '----------' );
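A minimal sketch of a complete handler built from the calls above (the procedure and the forced error are made up for illustration; in production code you would typically log to a table instead of DBMS_OUTPUT):

```sql
create or replace procedure demo_error_logging as
begin
  -- Force an error so the handler has something to report.
  raise_application_error(-20001, 'something went wrong');
exception
  when others then
    dbms_output.put_line('----------');
    dbms_output.put_line('FORMAT_CALL_STACK:');
    dbms_output.put_line(dbms_utility.format_call_stack());
    dbms_output.put_line('FORMAT_ERROR_STACK:');
    dbms_output.put_line(dbms_utility.format_error_stack());
    dbms_output.put_line('FORMAT_ERROR_BACKTRACE:');
    dbms_output.put_line(dbms_utility.format_error_backtrace());
    raise;  -- re-raise so the caller still sees the original error
end;
/
```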

February 12, 2010



Find out if your Java application was compiled with Java 1.6 on Solaris

Filed under: Uncategorized — Tags: , , , , , — admin @ 3:17 pm

For class file version 50.0 (Java 1.6), the opening bytes are:

CA FE BA BE 00 00 00 32

Check whether your class file was compiled with Java 1.6:

bash-3.00$ od -x JDBCVersion.class | head
0000000 cafe babe 0000 0032 009c 0a00 2f00 4007

Tip:
EAR and JAR files are just ZIP files and can be extracted with unzip.

