Ask The Real Tom

May 4, 2008



Tune an Oracle database for performance (II)

Filed under: oracle — admin @ 11:26 pm

It’s about time to give you an update on my tuning challenge.

A few statements first:

  • Bad performance is better than unstable or changing performance.
  • If you gather system stats into a stats table, the interval option does not work (I used a cron job to start and stop the gathering).
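The cron approach boils down to two DBMS_STATS calls, one at the start and one at the end of the observation window. A minimal sketch; the stats table name and statid are placeholders, not the ones I actually used:

```sql
-- cron job at window start: begin gathering workload system statistics
-- into a user stats table (created beforehand with DBMS_STATS.CREATE_STAT_TABLE)
BEGIN
  DBMS_STATS.GATHER_SYSTEM_STATS(
      gathering_mode => 'START',
      stattab        => 'MY_STATS_TAB',
      statid         => 'PEAK_LOAD');
END;
/

-- cron job at window end: stop gathering; results land in MY_STATS_TAB
BEGIN
  DBMS_STATS.GATHER_SYSTEM_STATS(
      gathering_mode => 'STOP',
      stattab        => 'MY_STATS_TAB',
      statid         => 'PEAK_LOAD');
END;
/
```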

My old system stats looked like this:

col PVAL2 for a50
select * from sys.aux_stats$;
SNAME                          PNAME                               PVAL1 PVAL2
------------------------------ ------------------------------ ---------- -------------
SYSSTATS_INFO                  STATUS                                    COMPLETED
SYSSTATS_INFO                  DSTART                                    10-13-2006 05:41
SYSSTATS_INFO                  DSTOP                                     10-13-2006 05:41
SYSSTATS_INFO                  FLAGS                                   1
SYSSTATS_MAIN                  SREADTIM                              1.8
SYSSTATS_MAIN                  MREADTIM                              1.3
SYSSTATS_MAIN                  CPUSPEED                              590
SYSSTATS_MAIN                  MBRC                                    8
SYSSTATS_MAIN                  MAXTHR                          377000000
SYSSTATS_MAIN                  SLAVETHR                           131000

The new ones were gathered from production and adjusted a bit, like this:

exec DBMS_STATS.SET_SYSTEM_STATS('mbrc',12);
exec DBMS_STATS.SET_SYSTEM_STATS('sreadtim',1.8);
exec DBMS_STATS.SET_SYSTEM_STATS('mreadtim',2.2);
exec DBMS_STATS.SET_SYSTEM_STATS('cpuspeednw',500);
exec DBMS_STATS.SET_SYSTEM_STATS('ioseektim',12);
exec DBMS_STATS.SET_SYSTEM_STATS('iotfrspeed',19000);

The gathered mbrc was just 2. But that value was gathered over 7 hours and may not represent the peak load. I adjusted it to 12 just because I feel good with 12. (42 might be the best value, but let’s stick with 12.)

The mreadtim was lower than the sreadtim because of our multi-I/O-optimized SAN. But the workload statistics don’t get used if mreadtim < sreadtim. I read somewhere (don’t remember where) that sreadtim*1.2 might be a reasonable value.

The CPU speed is set lower now because the CPU workload on the system has increased a lot.

Whether my new system stats do any good… we will see after the first integration test.

The second tuning step will be SQL profiles. It’s too dangerous to implement them directly on production, so I decided to do remote tuning:

  • First, get a SQL Tuning Set (STS) on production
  • Load it into a staging table and export it
  • Import the staging table on the integration environment and unpack the STS
  • Let dbms_sqltune do some analysis on it
  • Implement the SQL profiles and test their effect
  • If they are good (one is for sure: 16 s -> 0.01 s), load them into a staging table
  • Import the SQL profiles on production and unpack them
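The analysis step on the integration environment can be sketched with a tuning task over the imported STS. The task, STS, and profile names below are placeholders; time_limit is in seconds:

```sql
-- create and run a tuning task over the whole SQL Tuning Set
DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sqlset_name => 'my_sts',
              task_name   => 'my_sts_task',
              time_limit  => 3600);
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'my_sts_task');
END;
/

-- read the recommendations (including any suggested SQL profiles)
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('my_sts_task') FROM dual;

-- accept a recommended profile for one of the statements
BEGIN
  DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(
      task_name => 'my_sts_task',
      name      => 'my_profile');
END;
/
```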

Some SQL used to perform these steps:

-- create the STS staging table
BEGIN
  DBMS_SQLTUNE.CREATE_STGTAB_SQLSET( table_name => 'staging_table' );
END;
/ 

-- put my STS into the staging table
BEGIN
  DBMS_SQLTUNE.PACK_STGTAB_SQLSET(
      sqlset_name  => 'my_sts',
      staging_table_name => 'staging_table');
END;
/ 
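Moving the packed staging table between the environments is an ordinary table export/import; afterwards the STS gets unpacked on integration. A sketch, with Data Pump directory, connect strings, and the dump file name as placeholders:

```sql
-- on production (shell):  expdp system@PROD tables=STAGING_TABLE directory=DP_DIR dumpfile=sts.dmp
-- on integration (shell): impdp system@INTG directory=DP_DIR dumpfile=sts.dmp

-- unpack the STS from the imported staging table
BEGIN
  DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET(
      sqlset_name        => 'my_sts',
      replace            => TRUE,
      staging_table_name => 'staging_table');
END;
/
```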

-- create the SQL profile staging table
exec DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF ( table_name => 'SQLPROF_STAGING', schema_name => 'SYSTEM' );

-- move all my SQL profiles to the staging table
begin
DBMS_SQLTUNE.PACK_STGTAB_SQLPROF (
   profile_name          => '%',
   profile_category      => 'DEFAULT',
   staging_table_name    => 'SQLPROF_STAGING',
   staging_schema_owner  => 'SYSTEM');
end;
/ 

-- unpack all (%) my SQL profiles on production
begin
DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF (
   profile_name          => '%',
   profile_category      => 'DEFAULT',
   replace               => true,
   staging_table_name    => 'SQLPROF_STAGING',
   staging_schema_owner  => 'SYSTEM');
end;
/
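After unpacking on production, the imported profiles can be checked via DBA_SQL_PROFILES, and a misbehaving one can be disabled instead of dropped (the profile name below is a placeholder):

```sql
-- verify the imported profiles are present and enabled
SELECT name, category, status, created
  FROM dba_sql_profiles
 ORDER BY created DESC;

-- disable a single profile without losing it
BEGIN
  DBMS_SQLTUNE.ALTER_SQL_PROFILE(
      name           => 'my_profile',
      attribute_name => 'STATUS',
      value          => 'DISABLED');
END;
/
```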

Please post a comment if you think there is a better way.
