Joseph Herlant
version 1.0.0, 2013-11-05 : Initial version
Warning
This document describes the installation of Statspack on Oracle instances version 8.1.7 and later. To get the script names for versions between 8.0 and 8.1.7, see the following website: http://www.dba-oracle.com/t_statspack_install_scripts.htm

1. Installation

1.1. Prerequisites

The first thing to do is to isolate the Statspack data in a dedicated tablespace. The Statspack installation script will ask you for one, so let’s create it! (I usually name it "PERFSTAT", but you can choose whatever name you want.)

The best thing to do is to create it directly at its final size if you know it. I prefer to set it to its target size and let it autoextend a little if needed.

For a database with low activity, taking Statspack snapshots every hour and keeping 8 days of history, I usually end up with around 1 GB of Statspack data.

create tablespace PERFSTAT
  datafile '/u01/app/oracle/oradata/MYINSTANCE/perfstat_01.dbf'
  size 1G autoextend on next 128M maxsize 2G;
Important
You should monitor this tablespace’s size to ensure there will always be enough space for Statspack to run correctly.
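
For example, a quick way to keep an eye on it is to compare the allocated, maximum and free space of the tablespace (a minimal check; adapt the tablespace name if you chose another one):

select 'PERFSTAT' as tablespace_name,
       (select round(sum(bytes)    / 1024 / 1024) from dba_data_files where tablespace_name = 'PERFSTAT') as allocated_mb,
       (select round(sum(maxbytes) / 1024 / 1024) from dba_data_files where tablespace_name = 'PERFSTAT') as max_mb,
       (select round(sum(bytes)    / 1024 / 1024) from dba_free_space where tablespace_name = 'PERFSTAT') as free_mb
  from dual;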

1.2. User and tables creation

The spcreate.sql script creates the PERFSTAT user, packages and tables. This script calls the following subscripts:

  • spcusr.sql which creates the user and assigns it the required rights

  • spctab.sql which creates the Statspack tables (around 146 tables prefixed with "STATS$"). It is executed as the PERFSTAT user.

  • spcpkg.sql which creates the STATSPACK package and its package body. It is executed as the PERFSTAT user.

You can install Statspack in interactive mode using the command line below, which will prompt for:

  • PERFSTAT user password

  • The tablespace in which to store Statspack’s data

  • The temporary tablespace to use

sqlplus "/ as sysdba" @?/rdbms/admin/spcreate.sql

To install Statspack in batch mode, just assign the variables in SQL*Plus as shown in the example below. Adapt the variable values to fit your environment.

CONNECT / AS SYSDBA
define perfstat_password='A_Pr3tty_Pwd'
define default_tablespace='PERFSTAT'
define temporary_tablespace='TEMP'
@?/rdbms/admin/spcreate
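
Optionally, you can do a quick sanity check of the installation afterwards, for example by counting the STATS$ tables owned by the new user (this is just a quick check, not part of the official installation procedure):

select count(*) as statspack_tables
  from dba_tables
 where owner = 'PERFSTAT'
   and table_name like 'STATS$%';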

2. Configuration

2.1. Automatic statistics gathering

To schedule the automatic collection of Statspack statistics, you can use the spauto.sql script. This script creates a job (using the DBMS_JOB package), scheduled to run once an hour, on the hour.

sqlplus "/ as sysdba" @?/rdbms/admin/spauto.sql

As this script is quite static and uses Oracle’s old job scheduler, you can instead create a job manually with the DBMS_SCHEDULER package and customize the snapshot capture frequency. In the example below, we create a job as SYS to schedule the Statspack statistics collection every 15 minutes. Be careful: taking frequent snapshots generates a large amount of data; in that case, you should reduce the history retention.

CONNECT / AS SYSDBA
BEGIN
  -- Fires at minutes 00, 15, 30 and 45 of every hour
  DBMS_SCHEDULER.CREATE_SCHEDULE(
   schedule_name => 'PERFSTAT.statspack_snap_15min',
   repeat_interval => 'FREQ=MINUTELY;BYMINUTE=00,15,30,45'
  );

  -- The job calls the PERFSTAT.STATSPACK.SNAP procedure on the schedule above
  DBMS_SCHEDULER.CREATE_JOB(
   job_name => 'PERFSTAT.sp_snapshot',
   job_type => 'STORED_PROCEDURE',
   job_action => 'PERFSTAT.statspack.snap',
   schedule_name => 'PERFSTAT.statspack_snap_15min',
   auto_drop => FALSE,
   comments => 'Statspack collection'
  );

  DBMS_SCHEDULER.ENABLE('perfstat.sp_snapshot');
END;
/
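
As this job was created from the SYS session, you can check it from there using DBA_SCHEDULER_JOBS (a simple verification query, similar to the one shown later for the PERFSTAT user):

select owner, job_name, enabled, state,
       to_char(next_run_date, 'YYYY-MM-DD HH24:MI') as next_run
  from dba_scheduler_jobs
 where owner = 'PERFSTAT';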

If you want to create this kind of job directly as the PERFSTAT user (which is, in my opinion, a cleaner way to work), you will need to grant it the CREATE JOB system privilege. The example below creates a job with the same settings as the previous one, but with the correct privileges:

CONNECT / AS SYSDBA
grant create job to perfstat;
DISCONNECT;
CONNECT PERFSTAT/A_Pr3tty_Pwd
BEGIN
  DBMS_SCHEDULER.CREATE_SCHEDULE(
   schedule_name => 'statspack_snap_15min',
   repeat_interval => 'FREQ=MINUTELY;BYMINUTE=00,15,30,45'
  );

  DBMS_SCHEDULER.CREATE_JOB (
    job_name => 'sp_snapshot',
    job_type => 'STORED_PROCEDURE',
    job_action => 'statspack.snap',
    schedule_name => 'statspack_snap_15min',
    auto_drop => FALSE,
    comments => 'Statspack collection'
  );

  DBMS_SCHEDULER.ENABLE('sp_snapshot');
END;
/
/* Then check the job creation: */
set lines 200 pages 1024;
col JOB_NAME for a20;
col JOB_ACTION for a20;
col SCHEDULE_NAME for a20;
select JOB_NAME
  , JOB_TYPE, JOB_ACTION
  , SCHEDULE_NAME, ENABLED
    , AUTO_DROP, STATE
  , TO_CHAR(NEXT_RUN_DATE,'YYYY-MM-DD HH24:MI') as NEXT_RUN
from USER_SCHEDULER_JOBS;

The last query should produce an output like this:

JOB_NAME        JOB_TYPE         JOB_ACTION           SCHEDULE_NAME        ENABL AUTO_ STATE           NEXT_RUN
--------------- ---------------- -------------------- -------------------- ----- ----- --------------- ----------------
SP_SNAPSHOT     STORED_PROCEDURE statspack.snap       STATSPACK_SNAP_15MIN TRUE  FALSE SCHEDULED       2013-11-06 10:00

1 row selected.

2.2. Automatic history purge

There is no script like spauto.sql to automatically purge Statspack snapshots.

The best thing to do is to use the PURGE procedure of the STATSPACK package. This procedure can be called with various parameters, but the one that interests us here is I_NUM_DAYS, which specifies how many days of history you want to keep (setting it to 0 will only raise an error and do nothing; see the manual purge paragraph for more information about truncating tables).

In the example below, we schedule a job that will run every day at 1:56 PM to purge all data older than 8 days.

Warning
If you already have a huge amount of snapshot history, first refer to the paragraph explaining how to purge snapshots manually.
CONNECT PERFSTAT/A_Pr3tty_Pwd
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name            => 'sp_purge_snapshots',
    job_type             => 'PLSQL_BLOCK',
    job_action           => 'STATSPACK.PURGE(I_NUM_DAYS => 8);',
    repeat_interval     => 'FREQ=DAILY; BYHOUR=13; BYMINUTE=56',
    auto_drop           => FALSE,
    comments            => 'Statspack snapshots purge'
  );
  DBMS_SCHEDULER.ENABLE('sp_purge_snapshots');
END;
/
Note
I know it is ugly to hard-code parameter values in jobs, but this is the quickest way to work around DBMS_SCHEDULER’s lack of support for named arguments.

To list currently scheduled jobs for the given user, use the following query as in the previous paragraph:

col JOB_NAME for a20;
col JOB_ACTION for a20;
col SCHEDULE_NAME for a20;
select JOB_NAME
  , JOB_TYPE, JOB_ACTION
  , SCHEDULE_NAME, ENABLED
    , AUTO_DROP, STATE
  , TO_CHAR(NEXT_RUN_DATE,'YYYY-MM-DD HH24:MI') as NEXT_RUN
from USER_SCHEDULER_JOBS;

To change the retention, you will need to change the job_action attribute like this (here you set it to 10 days):

CONNECT PERFSTAT/A_Pr3tty_Pwd
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE (
    name        => 'sp_purge_snapshots',
    attribute   => 'job_action',
    value       => 'STATSPACK.PURGE(I_NUM_DAYS => 10);'
  );
END;
/
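
You can then confirm that the attribute was updated:

col JOB_ACTION for a60;
select job_name, job_action
  from user_scheduler_jobs
 where job_name = 'SP_PURGE_SNAPSHOTS';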

2.3. Changing default parameters

You can modify the Statspack parameters using the MODIFY_STATSPACK_PARAMETER procedure of the STATSPACK package. This procedure will be explained in another post.

The most commonly modified parameter is the snapshot detail level, which defaults to level 5. Setting a higher level is useful when debugging, but it is not really advisable to keep, for example, level 10 on a production server.

BEGIN
  statspack.modify_statspack_parameter(i_snap_level=>7, i_modify_parameter=>'true');
END;
/
Warning
Keep in mind that higher snapshot levels require more time and resources to execute than lower ones.
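
To check which level is currently recorded as the default for the instance, you can query the STATS$STATSPACK_PARAMETER table (part of the standard Statspack schema):

select snap_level from PERFSTAT.STATS$STATSPACK_PARAMETER;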

To have the detail of what the levels correspond to, use the following query:

select * from stats$level_description;

Which returns:

SNAP_LEVEL  DESCRIPTION
----------  --------------------------------------------------------------------
         0  This level captures general statistics, including rollback segment,
            row cache, SGA, system events, background events, session events,
            system statistics, wait statistics, lock statistics, and Latch
            information

         5  This level includes capturing high resource usage SQL Statements,
            along with all data captured by lower levels

         6  This level includes capturing SQL plan and SQL plan usage
            information for high resource usage SQL Statements, along with all
            data captured by lower levels

         7  This level captures segment level statistics, including logical and
            physical reads, row lock, itl and buffer busy waits, along with all
            data captured by lower levels

        10  This level includes capturing Child Latch statistics, along with
            all data captured by lower levels

3. Manual operations

3.1. Generating report

This part can be executed either by the PERFSTAT user or by any DBA user.

To launch a report on the instance you are currently working on, use the following SQL script. It will ask for the snapshot number you want to start your report at, the one you want to end it at, and the name of your report.

@?/rdbms/admin/spreport.sql

If you want to automate this, you just have to set the begin_snap, end_snap and report_name SQL*Plus substitution variables. To list the available snapshots, use the STATS$SNAPSHOT table: the SQL query below retrieves all the available snapshots and their corresponding dates. You can then define the three variables using the DEFINE command (see the example right after the query), or just go to the next example for a much more automated method.

select SNAP_ID, TO_CHAR(SNAP_TIME, 'YYYY-MM-DD HH24:MI'), UCOMMENT
  from PERFSTAT.STATS$SNAPSHOT
order by SNAP_ID;
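
For instance, once you have picked two snapshot IDs from that list, defining the three variables beforehand should make spreport.sql run without prompting (the IDs and file name below are only placeholders):

define begin_snap=100
define end_snap=112
define report_name=my_report.sprpt
@?/rdbms/admin/spreport.sql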

The example below shows how to launch a Statspack report over the last 12 hours. It won’t prompt you for any value. The report name will be something like 201311062300-201311071000_MYINSTANCE_hostname.sprpt (201311062300 is the date of the first snapshot of the report in YYYYMMDDHH24MI format, 201311071000 the same for the last snapshot) and the report will be written to the current directory.

To adapt this to your needs, you only have to change the WHERE clause (an example is given after the Caution below)… easy! :)

set lines 200;
column begin_snap heading "Begin snap" new_value begin_snap format 999999999;
column end_snap heading "End snap" new_value end_snap format 999999999;
column report_name heading "Report name" new_value report_name format a60;
select min(s.SNAP_ID) as begin_snap, max(s.SNAP_ID) as end_snap,
  TO_CHAR(MIN(s.SNAP_TIME), 'YYYYMMDDHH24MI') ||'-'||
  TO_CHAR(MAX(s.SNAP_TIME), 'YYYYMMDDHH24MI') ||'_'||
  i.INSTANCE_NAME ||'_'|| i.HOST_NAME ||
  '.sprpt' as report_name
  from PERFSTAT.STATS$SNAPSHOT s INNER JOIN V$INSTANCE i
    ON s.INSTANCE_NUMBER = i.INSTANCE_NUMBER
  where s.SNAP_TIME > SYSDATE - NUMTODSINTERVAL(12, 'HOUR')
group by i.INSTANCE_NAME, i.HOST_NAME;
@?/rdbms/admin/spreport.sql
Caution
This executes a report on the CURRENT instance, so if you are working on a RAC database, the spreport.sql script will only produce a report for the instance you are currently connected to!
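
For example, to report on a fixed time window instead of the last 12 hours, you could replace the WHERE clause of the query above with something like this (the dates are placeholders):

  where s.SNAP_TIME between to_date('2013-11-06 09:00', 'YYYY-MM-DD HH24:MI')
                        and to_date('2013-11-06 18:00', 'YYYY-MM-DD HH24:MI')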

3.2. Creating snapshot manually

To generate a Statspack snapshot, use the SNAP procedure of the STATSPACK package.

exec PERFSTAT.statspack.snap;

If, for debugging purposes, you need to temporarily take a snapshot with a more detailed level of information, you can do so using the I_SNAP_LEVEL parameter of STATSPACK.SNAP. It is advisable to add a comment to this snapshot for later reference (use the I_UCOMMENT parameter of this procedure to do so).

Note
The values of the parameters given to the STATSPACK.SNAP procedure are used only for the current snapshot; they are not saved as the new defaults.
BEGIN
  PERFSTAT.statspack.snap(
    i_snap_level => 10,
    i_ucomment => 'Level 10 snap for debugging at particular time'
  );
END;
/
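
You can later find this snapshot back thanks to its comment, for example:

select SNAP_ID, TO_CHAR(SNAP_TIME, 'YYYY-MM-DD HH24:MI') as SNAP_TIME, UCOMMENT
  from PERFSTAT.STATS$SNAPSHOT
 where UCOMMENT like 'Level 10 snap%';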

For further information about the options available in the SNAP procedure, see the post about the STATSPACK package (not written at the time I’m writing these lines).

3.3. Purging history manually

If you want to do some manual cleanup of the snapshots, use the sppurge.sql script, which will ask for the first and last snapshot IDs of the range to delete.

CONNECT PERFSTAT
@?/rdbms/admin/sppurge.sql

If you have a huge amount of snapshots to delete, you should set the transaction to use a large rollback segment (called big_rbs in the example below):

CONNECT PERFSTAT
set transaction use rollback segment big_rbs;
@?/rdbms/admin/sppurge.sql

To automate the purge of the snapshots, you can either use the following kind of script, which, in this case, deletes all Statspack data associated with snapshots older than 10 days:

set lines 200;
column losnapid heading "First snap" new_value losnapid  format 999999999;
column hisnapid heading "Last snap" new_value hisnapid format 999999999;
select min(s.SNAP_ID) as losnapid, max(s.SNAP_ID) as hisnapid
  from STATS$SNAPSHOT s
  where s.SNAP_TIME < SYSDATE - NUMTODSINTERVAL(10, 'DAY');
@?/rdbms/admin/sppurge.sql

Or rather use the PURGE procedure of the STATSPACK package. The example below also deletes all Statspack data associated with snapshots older than 10 days:

BEGIN
  STATSPACK.PURGE(I_NUM_DAYS => 10);
END;
/

To truncate all statspack data, use the sptrunc.sql script.

@?/rdbms/admin/sptrunc.sql

3.4. Exporting PERFSTAT user

You can export the PERFSTAT schema using the spuexp.par parameter file like this:

exp userid=perfstat/A_Pr3tty_Pwd parfile=spuexp.par
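
If you prefer not to rely on the parameter file, a plain export of the PERFSTAT schema can also be done with an explicit command along these lines (file names are placeholders, and the options may differ from those set in spuexp.par):

exp userid=perfstat/A_Pr3tty_Pwd owner=perfstat consistent=y file=perfstat_export.dmp log=perfstat_export.log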

4. Uninstallation

The spdrop.sql script drops the PERFSTAT user after dropping its tables. This script calls the following subscripts:

  • spdtab.sql which drops the PERFSTAT tables (executed as sysdba)

  • spdusr.sql which drops the PERFSTAT user (executed as sysdba)

sqlplus "/ as sysdba" @?/rdbms/admin/spdrop.sql

5. Known issues

Keep in mind the following issues when using Statspack:

  • Some statistics may only be reported on COMPLETION of a query. For example, if a query runs for 12 hours, its processing won’t be reported during any of the snapshots taken while the query was busy executing.

  • If queries are aged out of the shared pool, the stats from V$SQL are reset. This can throw off the delta calculations and even make them negative. For example, query A has 10,000 buffer_gets at snapshot 1, but by snapshot 2 it has been aged out of the pool and reloaded, and now shows only 1,000 buffer_gets. So, when you run spreport.sql from snapshot 1 to 2, you will get 1,000 - 10,000 = -9,000 for this query.

  • This post has to be adapted to RAC environments.