Oracle Performance Tuning & Troubleshooting

The aim of this document is to provide hints and tips on identifying and dealing with Oracle performance and tuning issues. As a personal go-to guide, it pays attention to issues which I have come across, and contains summarized information which I find useful. Oracle is huge and the possible tuning and diagnostic approaches are overwhelming; a quick look at Oracle’s Tuning Guide will demonstrate this. This guide tackles a small set of the problems which I have encountered, but is in no way an extensive guide. I have tried to keep this guide straight-forward and practical.

Note that any information you read here came from somewhere, and as this was originally an unpublished personal guide, I didn’t worry about citation or copyright. It’s very possible that some text has been copied and pasted (but I’ve since forgotten where from). Also, the scripts, which are available at the bottom, were not originally written by me, but were altered.

Note that the guidelines and scripts assume a default Oracle 10g Release 2 (10.2) installation with default initialization parameters, etc.

1. Introduction

Oracle 10g+ uses something called the CBO (Cost-Based Optimizer) which determines the most efficient way to execute a SQL statement after considering many factors related to the objects referenced and the conditions specified in the query. This determination is an important step in the processing of any SQL statement and can greatly affect execution time.

A SQL statement can be executed in many different ways, including the following:

  • Full table scans
  • Index scans
  • Nested loops
  • Hash joins

The output from the optimizer is a plan that describes an optimum method of execution. The CBO formulates this plan based on statistics. A combination of correct index usage and decent statistics is key for well-performing queries. Sometimes the CBO can make decisions that seem incorrect; this is often the result of stale or inaccurate statistics. See below for some examples.

2. Locating Slow Queries

Personally, the most common performance problem I have had to deal with is the slow execution of a (usually) simple SQL query. Finding the guilty SQL is the first step in solving the issue. There are many approaches to this, and chances are that if you have a performance problem you already know the offending SQL. The following are some simple ways of finding resource-intensive SQL.

2.1. Enterprise Manager

Click on the “Performance” tab at the top, then “Top Activity” near the bottom. You can see a time-line of recently executed SQL, ordered by the most intensive. This may be the easiest way to determine what SQL is causing you problems. You can use Enterprise Manager to carry out a range of tuning and performance tasks. You may be able to solve your problems solely using these features, but note that the majority of the tricky problems I’ve encountered required a histogram (described later). See section 5. Tuning with Enterprise Manager for more.

2.2. AWR Views

Basically, the AWR (Automatic Workload Repository) is an Oracle built-in tool that collects performance-related statistics and derives performance metrics from them to help track down potential problems.

The script plan.sql accesses the AWR and provides a detailed hourly breakdown of table and index access over the last number of days. For example, this script will summarize:

  • table access by ROWID and counts
  • index full scans and counts
  • index range scans and counts
  • index unique scans and counts

While full-table scans are not “bad”, per se, they are an indicator that you may have a missing index, and excessive full-table scans on large tables should be investigated.

To run, do:

@plan.sql mySchema

…where mySchema is the name of the schema you want to examine. The output will be saved in a file called plan.lst.

Alternatively, you can manually query views in the AWR, such as the dba_hist_* views. For example, the dba_hist_sql_plan view contains time-series data about each object (table, index, view) involved in queries. The important columns include the cost, cardinality, cpu_cost, io_cost, and temp_space required for the object. By default, historical data is stored for the previous 7 days. Most of the scripts included in this document utilise the AWR, as does much of Enterprise Manager’s tuning functionality.
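As a sketch of the kind of manual query you can run against these views (the schema name MYSCHEMA is a placeholder), the following counts how often each object appears in the captured plans, grouped by the operation performed on it. Note that it shows how often objects appear in stored plans, not how often those plans were actually executed:

SELECT object_owner, object_name, operation, options, COUNT(*) AS plan_lines
FROM dba_hist_sql_plan
WHERE object_owner = 'MYSCHEMA'
GROUP BY object_owner, object_name, operation, options
ORDER BY plan_lines DESC;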

2.3. Obtaining the Execution Plan

Once you have found the offending SQL, you should find its execution plan, which will explain to you the path of execution decided on by the CBO for that exact query. This provides a starting point for fixing the problem.

To see the execution plan for SQL, do the following. Log into SQLPlus and execute:

set pagesize 25
set linesize 121
set long 1000000
col xplan format a100

(you only need to do this once per SQLPlus session)

Then you can do:

explain plan for select * from abc; -- or whatever query you want

…and to see the actual plan, type:

select * from table(dbms_xplan.display);

For more on understanding “explain plans”, see this part of the Oracle Documentation, or see this resource, which I prefer.

3. Indexes

There are several types of indexes. The oldest and most popular type of Oracle indexing is a standard b-tree index, which excels at servicing simple queries. Another useful type is the function-based index. In all likelihood, the indexes you are dealing with will be b-tree indexes, or you might have a few function-based indexes. Indexes are great when they work as you’d expect them to, but things can go wrong.

Note: indexes cause a performance hit in DML operations, and should only exist when they’re actually useful!

3.1. Common Problems with Indexes

The following describes a few scenarios where Oracle may not behave as you would expect.

3.1.1. Case A

You’re executing a query, but Oracle seems to be choosing some odd execution path and you’re sure there’s a better way to do it (i.e., use your index). Your statistics may be out-of-date or not accurate enough. That is, Oracle thinks using the index is the wrong way to go. See below section on generating statistics.

3.1.2. Case B

You’re doing SELECT COUNT(*) FROM T, and you have an index on T, but you notice that Oracle is doing a full table scan rather than using the index. This is most likely because the indexed column is nullable, meaning the index entry count could be less than the true row count, so Oracle has to go to the table to get the true count. If the column were non-nullable then Oracle could have used the index instead; this is one of the reasons why you should only allow null values in a column when there is a good case for it. See below for a way to take advantage of nullable columns.
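A minimal sketch of this case (table and column names are hypothetical):

CREATE TABLE t (x NUMBER);          -- x is nullable
CREATE INDEX t_x_idx ON t (x);
SELECT COUNT(*) FROM t;             -- full table scan: rows where x is null are absent from the index
ALTER TABLE t MODIFY (x NOT NULL);  -- if nulls are genuinely not needed...
SELECT COUNT(*) FROM t;             -- ...the index alone can now answer the count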

3.1.3. Case C

You have an index on X and you’re doing SELECT * FROM T WHERE f(X) = VALUE. You find the index is not being used; this is because of the function applied to the indexed column. This may be a candidate for a function-based index instead.
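For example, assuming a hypothetical table T with an indexed VARCHAR2 column X:

SELECT * FROM t WHERE UPPER(x) = 'SMITH';    -- an ordinary index on x cannot service this predicate
CREATE INDEX t_upper_x_idx ON t (UPPER(x));  -- a function-based index on the same expression can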

3.1.4. Case D

You have a table T with a compound index (i.e., an index based on more than one column) on columns (X,Y). You do SELECT * FROM T WHERE Y = 5. This index will not be used since the query predicate did not involve column X. However, the index would be used for a query like SELECT X, Y FROM T WHERE Y = 10, as Oracle realises it does not need to go to the table to get X and Y as the values are in the index itself.
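A quick sketch of both situations (names are hypothetical):

CREATE INDEX t_xy_idx ON t (x, y);
SELECT * FROM t WHERE y = 5;      -- predicate skips leading column x, so the index is typically ignored
SELECT x, y FROM t WHERE y = 10;  -- every referenced column is in the index, so Oracle can scan the index alone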

3.1.5. Case E

Suppose you have a table with primary key X and nullable column Y. Suppose there are 100 rows in it, where X ranges from 1 to 100. Doing SELECT COUNT(Y) FROM T WHERE X < 50 will result in a full table scan rather than an index access. Why? Because Oracle realised it would be retrieving roughly 50% of the rows. If it used the index it would have to read an index block and process each of the rows on it. Then for every row it would have to do a database block get to fetch the row data. So it is more efficient to just read in the database blocks and find the 50 rows we’re interested in.

Consider the same query, and the same table, but with many more rows, and X ranging from 0 to 5000, say. In this case the index would be used, as it would be faster than a full table scan.

3.1.6. Case F

You have a column which has highly skewed data (i.e., 98% of the values in this column are X, while the other 2% are Y and Z). This type of column will gain little benefit from an index when querying for X, and Oracle will not use it. Histograms, described in a later section, are extra column statistics which greatly aid Oracle in making decisions on columns like these.

3.2. Nullable Columns & Indexes

B-tree indexes do not store completely null entries, so, typically, nullable columns will make any indexes on the column(s) slightly less effective; however, this can also be used to your advantage.

Say you have a table with a column that takes exactly two values, and the values are skewed: 90% of rows take one value, the other 10% take another. This can be indexed efficiently to gain access to the minority rows. The solution is to use null for the majority rows and whatever other value you want for the minority rows. Say you have a JOB table with a nullable column called TASK_PENDING (which is a timestamp). When a row is inserted, this column is given a timestamp. Now whatever processes the JOB table can just do a SELECT * FROM JOB WHERE TASK_PENDING IS NOT NULL. When it processes a task it can set the column to null, effectively removing that row from the index. Therefore the index remains very small and efficient.
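A rough sketch of such a JOB table (the PAYLOAD column is just illustrative):

CREATE TABLE job (
  id           NUMBER PRIMARY KEY,
  payload      VARCHAR2(100),
  task_pending TIMESTAMP           -- set on insert, nulled once processed
);
CREATE INDEX job_pending_idx ON job (task_pending);

-- the index only contains unprocessed rows, so this stays small and fast:
SELECT id FROM job WHERE task_pending IS NOT NULL;

-- processing a task removes its entry from the index:
UPDATE job SET task_pending = NULL WHERE id = :id;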

3.3. Identifying Unused Indexes

3.3.1 Monitoring Index Usage

Oracle Database provides a means of monitoring indexes to determine whether they are being used. If an index is not being used, then it can be dropped, eliminating the unnecessary updating overhead.

To start monitoring the usage of an index, issue this statement (the index name is a placeholder):

ALTER INDEX myschema.my_index MONITORING USAGE;

Later, issue the following statement to stop the monitoring:

ALTER INDEX myschema.my_index NOMONITORING USAGE;

You should monitor during a reasonably long period of time, e.g., for 3 days during a busy period. Apparently there is no overhead (or at least it’s insignificant) when monitoring. You can then query the V$OBJECT_USAGE view to see whether the index was used or not. This view does not contain an OWNER column, so you must log on as the object owner to see the usage data.
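For example, logged on as the object owner, you can check the results like this (the USED column is simply YES or NO):

SELECT index_name, table_name, monitoring, used, start_monitoring, end_monitoring
FROM v$object_usage;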

Doing this for each index is rather impractical, so here is a much better way. Use index_monitoring_on.sql to turn on monitoring for all indexes in mySchema:

@index_monitoring_on.sql mySchema all

You can then run index_monitoring_status.sql to verify that they’re being monitored:

@index_monitoring_status.sql mySchema all

After a period of time, use index_monitoring_off.sql to turn off index monitoring:

@index_monitoring_off.sql mySchema all

Finally, use index_usage.sql to see which indexes were or were not used:

@index_usage.sql mySchema all

3.3.2 Other

Although index monitoring is the best way to identify if an index is used or not, you can run plan.sql to see an hourly breakdown of index usage (and so much more) over the last number of days. This can help identify indexes that have been used, but not recently, or not frequently.

To run, do:

@plan.sql mySchema

…where mySchema is the name of the schema you want to examine. The output will be saved in a file called plan.lst.

The quick and small index_count.sql script is another script that will show index usage.

To run, do:

@index_count.sql mySchema

…where mySchema is the name of the schema you want to examine.

The detect_duplicate_index.sql script can find duplicate indexes (i.e., indexes that index the same column).

To run, do:

@detect_duplicate_index.sql mySchema

…where mySchema is the name of the schema you want to examine.

Note that these scripts rely on information in the AWR, and if your index is not listed in the results, it may be because it isn’t being used (and hence has never ended up in the AWR). However, this is not 100% reliable, as, by default, the AWR does not retain statistics on low-intensity SQL (this can be changed, but doing so degrades performance). So think hard before simply removing an index just because it isn’t showing up in the AWR (or these scripts). To the best of my knowledge, the only guaranteed way of knowing an index is unused is by using index monitoring. I started a discussion on this topic here, with some feedback from Jonathan Lewis, which may help you to find unused indexes.

3.4. Finding Index Candidates

The script plan.sql provides a wealth of information which helps identify columns that would benefit from an index. For example, if you see many full table scans on large tables then you can be pretty sure that there is some poorly performing SQL that might benefit from an index.

To run, do:

@plan.sql mySchema

…where mySchema is the name of the schema you want to examine. The output will be saved in a file called plan.lst.

Alternatively, you can use Enterprise Manager’s advisors to suggest new indexes or index drops. See section 5. Tuning with Enterprise Manager.

4. CBO Statistics

Optimizer statistics are a collection of data that describe the database and the objects in it. These statistics are used by the query optimizer to choose the best execution plan for each SQL statement. Note that indexes are not statistics. Oracle gathers statistics on all database objects automatically and maintains those statistics in a regularly scheduled maintenance job. Automated statistics collection eliminates many of the manual tasks associated with managing the query optimizer, and significantly reduces the chances of getting poor execution plans because of missing or stale statistics. However, Oracle re-generates these statistics on the objects it believes need it the most, and it only has a certain ‘maintenance window’ in which to do this. Also, it may not refresh stale statistics on an object (i.e., a table or an index) which hasn’t been modified significantly (by default, when not more than 10% of the rows have changed).

So sometimes (rarely) the statistics Oracle has are insufficient for it to make optimal execution plans. For these reasons, Oracle can sometimes make what seem like odd decisions, and it may be time to generate statistics manually.
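To check when a table’s statistics were last gathered, and whether Oracle currently considers them stale, you can query the USER_TAB_STATISTICS view (the table name is a placeholder):

SELECT table_name, num_rows, last_analyzed, stale_stats
FROM user_tab_statistics
WHERE table_name = 'X';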

The statistics used by the CBO are stored in the data dictionary. You can collect exact or estimated statistics about physical storage characteristics and data distribution in these schema objects by using the DBMS_STATS package.

4.1. Managing Statistics

This section describes the steps in generating statistics.

When you generate statistics for a table, column, or index, if the data dictionary already contains statistics for the object, then Oracle updates the existing statistics. When statistics are updated for a database object, Oracle invalidates any currently parsed SQL statements that access the object. The next time such a statement executes, the statement is re-parsed and the optimizer automatically chooses a new execution plan based on the new statistics.

The following is the most basic way to delete and re-generate statistics on a table X of schema MYSCHEMA:

EXEC DBMS_STATS.DELETE_TABLE_STATS('MYSCHEMA', 'X');
EXEC DBMS_STATS.GATHER_TABLE_STATS('MYSCHEMA', 'X');

However, I prefer to be more specific, as the above (I believe) might not generate a) statistics for the indexes or b) histograms. Also, the ‘sample size’ will not be 100%, so it could miss something.

I would execute the following on table X:

EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'X', estimate_percent => 100, method_opt => 'FOR ALL COLUMNS SIZE SKEWONLY', cascade => TRUE);

This is an exhaustive gather, which basically means:

Analyze table X, even if it was recently analysed or hasn’t been modified much recently. Examine 100% of rows, examine the data distribution of all columns, and generate histograms as necessary. Also, generate statistics for each index on the table.

…it might take a while to complete on large tables, but is worth it. Generally, other versions of this call might be faster, but less exhaustive. One thing to note is that this call will generate histograms on columns with skewed data, even if that column is never referenced in SQL. This may create an unnecessary overhead; it might be better to identify such columns and create individual histograms for them.

There are other useful DBMS_STATS procedures (e.g., to generate statistics for the whole schema, etc). See the Oracle Database PL/SQL Packages and Types Reference for syntax and examples of all DBMS_STATS procedures, including other parameters and what they mean, but the above should suit in many cases. Also, please see the Managing Optimizer Statistics section of the Oracle 10g Documentation.

4.2. Histograms

The majority of performance problems which I have come across have been down to unevenly distributed data. For example, consider a STATE column in table X, with three possible values: CONFIRMED, DECLINED, UNPROCESSED. Say that we have 5,000,000 rows, where 95% contain the value CONFIRMED, 4% contain DECLINED and 1% contain UNPROCESSED. In general, Oracle assumes an even distribution of data and will not know about this skewed distribution. It will believe it’s 33.3%, 33.3%, 33.3%.

The optimal approach to take for a query like SELECT * FROM X WHERE STATE = 'CONFIRMED' would be a full table scan, rather than using the index for 95% of the rows and then accessing the table by ROWID. Similarly, the optimal approach for the statement SELECT * FROM X WHERE STATE = 'UNPROCESSED' would be to access the index, and then access the table via ROWID, rather than doing a full table scan only to find the 1% of UNPROCESSED rows scattered throughout the table.

So there are two very different optimal execution plans for very similar queries. Without knowing about the skewed data, Oracle will not choose an optimal plan (which will lead to long execution times).

4.2.1. Why doesn’t Oracle know about skewed data?

When gathering statistics on a table, DBMS_STATS gathers information about the data distribution of the columns within the table. The most basic information about the data distribution is the maximum value and minimum value of the column. For skewed data distributions, histograms can also be created as part of the column statistics to describe the data distribution of a given column. However, the basic statistics collection, as far as I know, will not generate histograms. Or, maybe the statistics become stale or wrong. Or the sample size of the scheduled statistics collection may be too small. These are some of the reasons why you might need to create your own histograms.

4.2.2. Creating Histograms

Histograms are created using the METHOD_OPT argument of the DBMS_STATS gathering procedures. The following call will cause Oracle to analyze the STATE column, and generate a histogram if necessary:

EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'X', estimate_percent => 100, method_opt => 'FOR COLUMNS SIZE 3 STATE', cascade => TRUE);

Setting estimate_percent to 100 says “analyse 100% of rows, not just a sample”. The SIZE 3 parameter is the maximum number of histogram buckets; set it to at least the number of distinct values you expect the STATE column to contain. Alternatively, you can use the SIZE AUTO option, which lets Oracle decide how many buckets to assign. Note: although time consuming, I recommend deleting all statistics on the table, dropping the index on the column in question, re-creating the index, re-generating default statistics on the table, and then attempting to create the histogram.

Oracle will create one of two types of histograms for column statistics: a height-balanced histogram or a frequency histogram. The type of histogram is stored in the HISTOGRAM column of the *TAB_COL_STATISTICS views (USER and DBA). This column can have values of HEIGHT BALANCED, FREQUENCY, or NONE.

In a height-balanced histogram, the column values are divided into bands so that each band contains approximately the same number of rows. The useful information that the histogram provides is where in the range of values the endpoints fall.

In a frequency histogram, each value of the column corresponds to a single bucket of the histogram. Each bucket contains the number of occurrences of that single value. Frequency histograms are automatically created instead of height-balanced histograms when the number of distinct values is less than or equal to the number of histogram buckets specified. This is the type of histogram we’d expected to be created for column STATE of table X.

To view the histogram details, you can do:

SELECT column_name, num_distinct, num_buckets, histogram
FROM user_tab_col_statistics
WHERE table_name = 'X' AND column_name = 'STATE';

Does NUM_DISTINCT correspond to the number of distinct state values? It should.

To see Oracle’s estimate of the data distribution, do:

SELECT endpoint_number, endpoint_value
FROM user_tab_histograms
WHERE table_name = 'X' AND column_name = 'STATE'
ORDER BY endpoint_number;

Note that you should then see different execution plans for the same queries when you specify different states in the where clause. You might need to flush the cached execution plans first (a heavy-handed approach, so avoid it on a busy production system):

ALTER SYSTEM FLUSH SHARED_POOL;

4.2.3. Histograms and Bind Variables (a caveat)

Unfortunately, while histograms solve these performance issues, they are problematic when used with bind variables. This is best described with an example.

If you don’t use bind variables then queries like:

SELECT * FROM X WHERE STATE = 'CONFIRMED';
SELECT * FROM X WHERE STATE = 'UNPROCESSED';

…will both be hard parsed and have different execution plans appropriate to the values used in the where clause. Note that these execution plans are then cached, and re-used when the same queries are executed again, to avoid expensive hard parsing.

Now suppose you rewrite the query replacing the constants with a bind variable, e.g.:

SELECT * FROM X WHERE STATE = :state;

The first time the query is parsed, the optimizer will use the value of the bind variable (i.e., 'CONFIRMED', 'DECLINED' or 'UNPROCESSED') and the histograms to work out the best execution plan. The next time it will reuse the same plan, and the next, and so forth.

So, say the first time through the value is 'CONFIRMED' (95% of rows): the optimizer will peek at the bind variable and probably go for a full table scan, as that is the most efficient way to pull back that proportion of the rows. The second time through the value is 'UNPROCESSED' (very few rows), but because there is already an execution plan cached, the optimizer won’t do a hard parse and so won’t peek at the bind variable; it will just use the existing plan (a full table scan). A full table scan is a really inefficient way to read a small proportion of the rows in a table, as explained earlier.

Reverse the order of the queries ('UNPROCESSED' first, then 'CONFIRMED') and you have the database doing an index read, then access by ROWID, to retrieve 95% of the rows in the table.

Note that bind variables are pretty much necessary for performance, and avoiding them completely isn’t really an option. In a transaction system, you might execute hundreds of queries per second. Therefore, in general you MUST use bind variables and reduce the number of hard parses as much as possible.

As a code-level solution to this histogram/bind-variable problem, it might be a good idea to avoid bind variables for such columns. For example, rather than doing:

String query = "SELECT * FROM X WHERE STATE = :state";
HashMap params = new HashMap();
params.put("state", state);
doQuery(query, params);

…you might do this:

String query = "SELECT * FROM X WHERE STATE = '" + state + "'";
doQuery(query, null);

So here you have a literal in the query, and this will use the histograms once again upon the first hard parse to find the best plan for that exact value, even if the previous (or first) query was, say, SELECT * FROM X WHERE STATE = 'DECLINED'.

Note that Oracle 11g mitigates the histogram/bind-variable issue by allowing the server to compare the effectiveness of execution plans between executions with different bind variable values (known as adaptive cursor sharing).

5. Tuning with Enterprise Manager

So far this document has outlined procedures for quickly solving major performance issues which may periodically occur. However, Enterprise Manager contains a few great tools which can accurately locate bottlenecks and suggest (and implement) solutions. Everything Enterprise Manager does boils down to PL/SQL and can thus be carried out manually, but obviously using the user interface is much easier.

Something called the Automatic Workload Repository (AWR) is used to automatically collect a wide variety of performance statistics, which are available through the dba_hist_* and v$ views of the SYS schema. By default, a ‘snapshot’ is taken every hour, which contains statistics for that period, and each snapshot is retained for 7 days. The tools about to be introduced make use of this information. The following tools will be introduced: ADDM, SQL Tuning Advisor, and SQL Access Advisor.

5.1. ADDM

After the AWR Snapshot is executed, the Automatic Database Diagnostic Monitor is triggered to analyze the information contained in the AWR for the period pertaining to the last two Snapshots. ADDM’s output includes a plethora of reports, charts, graphs, heartbeats and related visual aids. In general, ADDM provides users with a top-down analysis of performance bottlenecks and their associated resolutions.

The “Performance Analysis” section on the “Home” page is a list of the top five findings from the last ADDM analysis task.

Specific reports can be produced by clicking on the “Advisor Central” link, then the “ADDM” link. The resulting page allows you to select a start and end snapshot, create an ADDM task, and display the resulting report by clicking on a few links. You can also see the top issues found and recommended solutions. Many of these solutions might recommend running an advisor. Others might suggest a configuration change, etc.

5.2. The Advisors

The advisors are probably the two most useful tuning features, with several ways to invoke them. You can run the advisors against snapshots, Period SQL, cached SQL, or from the Top Activity page, etc. Or, you can run the advisors against tuning sets. From the Oracle documentation, a “tuning set is a database object that includes one or more SQL statements, their execution statistics and execution context, and can be used as an input source for the SQL Tuning Advisor and SQL Access Advisor. You can load SQL statements into a SQL tuning set from different SQL sources, such as the Automatic Workload Repository (AWR), the cursor cache, or high-load SQL statements that you identified.” So, for example, you might create a tuning set from the earliest snapshot to the most recent snapshot, as this would be a fairly realistic representation of all the SQL the application in question uses. See here for more on tuning sets.

5.2.1. SQL Tuning Advisor

This advisor provides SQL tuning recommendations and includes information justifying why it generated those recommendations. The recommendations may include collecting statistics on objects, creating new indexes, restructuring the SQL statement or the creation of a SQL profile to create a more optimal access path.

There are several places to invoke it within Enterprise Manager. One way is to click on “SQL Tuning Advisor” on the Advisor Central page, where you can pick from a number of data sources to tune from (i.e., Top Activity, snapshots, Period SQL, etc). Most likely you will choose “Top Activity”, where you can then select one or more SQL statements for tuning. Alternatively, you may create a SQL Tuning Set. Or you may invoke the advisor via a link from the ADDM’s findings.

Either way, you will eventually get to the Schedule Advisor page, which is straightforward. Choose comprehensive mode to include SQL profile recommendations (see below); however, this will take longer. You can then run the advisor immediately, or schedule it. Either way, the result will end up in the table on the Advisor Central page, which you can click into and select “View Recommendations”.

One of the recommendations may be to implement a SQL profile (a type of extra statistic). This is simple, but to keep this document short, I will not explain SQL profiles here; see here for more on them. In the case of a SQL profile, if you hit “Implement”, it will be implemented immediately without any additional prompts.

5.2.2. SQL Access Advisor

The SQL Access Advisor uses the AWR to provide recommendations on creating/dropping Oracle objects (additional indexes, materialized views, etc.) to increase the performance of poorly performing SQL statements. It can be accessed via the “SQL Access Advisor” link on the Advisor Central page. Leave the default select option on the Initial Options page and click “Continue”. Then on the “Workload Source” you can, like the SQL Tuning Advisor, select a source of data. I recommend creating and using a SQL Tuning Set made from the widest range of snapshots possible. Also, select “Include only SQL statements executed by these users” in the “Filter Options” section, and choose only the schema you’re interested in. Then click “Next”. On the Recommendation Options page you can choose whichever “Recommendation Types” you’d be interested in seeing — typically this will be “Indexes”. I would recommend selecting the “Advisor Mode” as “Comprehensive Mode”, and choosing “Complete Workload” for “Workload Scope”, as this will also recommend index drops. Click “Next” twice, and finally “Submit” to begin the advisor task. Again, the results can be found on the Advisor Central page, and implementing and viewing the recommendations is intuitive and so will not be explained here.

For more information on the access advisor, see here. Note that there are several other advisors in Advisor Central that may be of interest.

5.3. AWR Reports

TODO: How to generate AWR reports. And explain the difference between ASH, AWR and ADDM reports.

5.4. ASH Reports

You can run Active Session History (ASH) reports to analyze transient performance problems with the database that only occur during specific times; select “Run ASH Report” on the main “Performance” page, or the “Top Activity” page.

From the Oracle Documentation: “Transient performance problems are short-lived and do not appear in the Automatic Database Diagnostics Monitor (ADDM) analysis. ADDM tries to report the most significant performance problems during an analysis period in terms of their impact on DB time. If a particular problem lasts for a very short duration, its severity might be averaged out or minimized by other performance problems in the entire analysis period; therefore, the problem may not appear in the ADDM findings. Whether or not a performance problem is captured by ADDM depends on its duration compared to the interval between the Automatic Workload Repository (AWR) snapshots.

If a performance problem lasts for a significant portion of the time between snapshots, it will be captured by ADDM. For example, if the snapshot interval is set to one hour, a performance problem that lasts for 30 minutes should not be considered as a transient performance problem because its duration represents a significant portion of the snapshot interval and will likely be captured by ADDM.

On the other hand, a performance problem that lasts for only 2 minutes could be a transient performance problem because its duration represents a small portion of the snapshot interval and will likely not show up in the ADDM findings. For example, if the user notifies you that the system was slow between 10:00 p.m. and 10:10 p.m., but the ADDM analysis for the time period between 10:00 p.m. and 11:00 p.m. does not show a performance problem, it is likely that a transient performance problem occurred that lasted for only a few minutes of the 10-minute interval reported by the user. This chapter describes how to resolve these types of transient performance problems with Oracle Database.”

For more on generating and reading the ASH report, see here.

6. Oracle SQL Developer

You should download and install Oracle SQL Developer; a very handy tool that allows you to easily see a statement’s execution plan, the statistics on a table (including when it was last analysed), and so on, without writing a line of code. It can also be used with non-Oracle databases, such as MySQL.

7. Schema Design

  • If possible, do not allow nullable columns unless there is a good case for them
  • Try to avoid the need for columns which will have a low number of distinct values (i.e., something like the STATE column)

8. Scripts

Here are the links for all scripts mentioned in this document:

You should save these files in the directory which you run SQLPlus from. Make sure to run “SET SQLBLANKLINES ON;” in SQLPlus before running any of the scripts. Do NOT copy and paste these into SQLPlus. You must run them as stated, or they will fail. I have gathered these scripts from various places, and altered them.