Many SQL tuning tasks should be performed on a regular basis. You may see a way to rewrite a WHERE clause, but it may depend on a new index being built. The following list gives you a background of some important tasks that must be performed, and an idea of the dependencies you may encounter as you tune SQL:
1. Identifying high-load SQL statements
Identifying high-load SQL statements is one of the most important tasks you should perform. ADDM is the ideal tool for this particular task.
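High-load statements can also be spotted directly in the cursor cache. A minimal sketch, assuming access to the V$SQL dynamic performance view and using elapsed time as the load metric:

    SELECT *
    FROM  (SELECT sql_id, executions, elapsed_time, buffer_gets
           FROM   v$sql
           ORDER  BY elapsed_time DESC)
    WHERE ROWNUM <= 10;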
2. Scheduling a job to gather optimizer statistics
By default, Oracle Database gathers optimizer statistics automatically. For this, a job is scheduled to run in the maintenance windows.
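If statistics must be gathered manually for a particular schema, the DBMS_STATS package is the supported interface. A hedged sketch, with HR as a hypothetical schema name:

    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => 'HR',   -- hypothetical schema
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);  -- gather index statistics as well
    END;
    /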
3. Identifying usage and performance of hardware components
OS statistics provide information about the usage and performance of the main hardware components, as well as the performance of the OS itself.
4. Rebuilding indexes
Rebuilding indexes often has a beneficial impact on performance; for example, removing nonselective indexes to speed up DML, or adding columns to an index to improve selectivity.
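As a sketch (the index names are hypothetical), a rebuild and a removal might look like this:

    ALTER INDEX hr.emp_name_ix REBUILD ONLINE;  -- rebuild without blocking DML
    DROP INDEX hr.emp_status_ix;                -- remove a nonselective index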
5. Using stored statistics, outlines, or SQL plan baselines
You can maintain the existing execution plan of SQL statements over time by using stored statistics, outlines, or SQL plan baselines.
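For example, an accepted plan can be captured from the cursor cache into a SQL plan baseline with DBMS_SPM. A minimal sketch, where the SQL_ID is hypothetical:

    DECLARE
      n PLS_INTEGER;
    BEGIN
      n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
             sql_id => 'abcd1234wxyz9');  -- hypothetical SQL_ID
      DBMS_OUTPUT.PUT_LINE(n || ' plan(s) loaded');
    END;
    /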
When you tune your system, it is important that you compare the CPU time with the wait time of your system. By comparing CPU time with wait time, you can determine how much of the response time is spent on useful work and how much on waiting for resources potentially held by other processes.
As a general rule, systems where CPU time is dominant usually need less tuning than those where wait time is dominant. On the other hand, high CPU usage can be caused by badly written SQL statements.
High CPU time = possibly needs SQL tuning
High wait time = needs instance/RAC tuning
Although the proportion of CPU time to wait time always tends to decrease as load on the system increases, steep increases in wait time are a sign of contention and must be addressed for good scalability. Scalability is a system's ability to process more workload with a proportional increase in system resource use.
Adding more CPUs to a node, or nodes to a cluster, would provide very limited benefit under contention. Conversely, a system where the proportion of CPU time does not decrease significantly as load increases can scale better, and would most likely benefit from adding CPUs or RAC instances if needed.
AWR reports display CPU time together with wait time in the Top Timed Events section, if the CPU time portion is among the top five events.
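The same comparison can be made directly from the time model statistics. A sketch using the V$SYS_TIME_MODEL view; wait time is approximately DB time minus DB CPU:

    SELECT stat_name, ROUND(value / 1e6) AS seconds
    FROM   v$sys_time_model
    WHERE  stat_name IN ('DB time', 'DB CPU');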
Poor application design, implementation, and configuration have a significant impact on scalability. They result in poor SQL and index design, and in a higher number of logical I/Os for the same number of rows returned. They also reduce availability, because database objects take longer to maintain.
However, design is not the only problem. The physical implementation of the application can be the weak link, as in the following examples:
- Systems can move to production environments with poorly written SQL that causes high I/O.
- Infrequent transaction COMMITs or ROLLBACKs can cause long locks on resources.
- The production environment can use different execution plans than those generated in testing.
- Memory-intensive applications that allocate a large amount of memory without much thought for freeing it can cause excessive memory fragmentation.
- Inefficient memory usage places high stress on the operating system's virtual memory subsystem, which affects performance and availability.
There are some common mistakes made on customer systems. Although these mistakes are not directly detected by ADDM, ADDM highlights the resulting high-load SQL. These mistakes are:
1. Bad Connection Management
The application connects and disconnects for each database interaction. This problem is common with stateless middleware in application servers. It has over two orders of magnitude impact on performance and is totally unscalable.
2. Bad use of cursors and the shared pool
Not using cursors results in repeated parses. If bind variables are not used, there may be hard parsing of all similar SQL statements. This has an order of magnitude impact on performance, and it is totally unscalable. Use cursors with bind variables that open the cursor and execute it many times. Be suspicious of applications generating dynamic SQL.
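As a sketch of the open-once, execute-many pattern (the tables are from the hypothetical HR sample schema):

    DECLARE
      CURSOR emp_cur (p_dept_id NUMBER) IS
        SELECT employee_id, salary
        FROM   employees
        WHERE  department_id = p_dept_id;  -- p_dept_id is a bind variable
    BEGIN
      FOR dept IN (SELECT department_id FROM departments) LOOP
        FOR emp IN emp_cur (dept.department_id) LOOP
          NULL;  -- process each row; the statement is parsed once, executed per department
        END LOOP;
      END LOOP;
    END;
    /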
3. Bad SQL
Bad SQL is SQL that uses more resources than appropriate for the application. This can be a DSS query that runs for more than 24 hours, or a query from an online application that takes more than a minute.
SQL that consumes significant system resources should be investigated for potential improvement. ADDM identifies high-load SQL and the SQL Tuning Advisor can be used to provide recommendations for improvement.
4. Use of nonstandard initialization parameters
The use of nonstandard initialization parameters might have been implemented based on poor advice or incorrect assumptions. Most systems give acceptable performance using only the set of basic parameters. In particular, undocumented optimizer features can cause a great number of problems that may require considerable investigation.
Likewise, optimizer parameters set in the initialization parameter file can override proven optimal execution plans. For these reasons, schemas, schema statistics, and optimizer settings should be managed together as a group to ensure consistency of performance.
5. Getting the database I/O wrong
Many sites lay out their databases poorly over the available disks. Other sites specify the number of disks incorrectly because they configure disks by disk space and not by I/O bandwidth.
6. Redo log setup problems
Many sites run with redo log files that are too small. Small redo logs cause system checkpoints to continuously put a high load on the buffer cache and the I/O system. If there are very few redo logs, the archiver cannot keep up, and the database waits for the archive process to catch up.
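Larger or additional redo log groups relieve this pressure. A sketch, with hypothetical file paths and sizes:

    ALTER DATABASE ADD LOGFILE GROUP 4
      ('/u01/oradata/orcl/redo04.log') SIZE 1G;
    ALTER DATABASE ADD LOGFILE GROUP 5
      ('/u01/oradata/orcl/redo05.log') SIZE 1G;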
7. Excessive serialization
Serialization of data blocks in the buffer cache due to a shortage of undo segments is particularly common in applications with a large number of active users and few undo segments.
Use Automatic Segment Space Management (ASSM) and automatic undo management to solve this problem.
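For example (file name and size hypothetical), a tablespace using ASSM and the automatic undo setting might be created as follows:

    CREATE TABLESPACE app_data
      DATAFILE '/u01/oradata/orcl/app_data01.dbf' SIZE 500M
      EXTENT MANAGEMENT LOCAL
      SEGMENT SPACE MANAGEMENT AUTO;

    ALTER SYSTEM SET UNDO_MANAGEMENT = AUTO SCOPE = SPFILE;  -- static parameter; takes effect at restart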
8. Long full table scans
Long full table scans for high-volume or interactive online operations could indicate poor transaction design, missing indexes, or poor SQL optimization. Long table scans, by nature, are I/O intensive and unscalable.
9. High amount of recursive (SYS) SQL
Large amounts of recursive SQL executed by SYS could indicate that space management activities, such as extent allocations, take place. This is unscalable and impacts user response time. Use locally managed tablespaces to reduce recursive SQL due to extent allocation. Recursive SQL executed under another user ID is probably SQL and PL/SQL, so this is not a problem.
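To gauge how much work is recursive, the cursor cache can be filtered on the parsing schema. A sketch, again assuming access to V$SQL:

    SELECT COUNT(*) AS sys_cursors,
           ROUND(SUM(elapsed_time) / 1e6) AS sys_elapsed_secs
    FROM   v$sql
    WHERE  parsing_schema_name = 'SYS';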
10. Deployment and migration errors
In many cases, an application uses too many resources because the schema owning the tables has not been successfully migrated from the development environment or from an older implementation. Examples of this are missing indexes or incorrect statistics. These errors can lead to suboptimal execution plans and poor interactive user performance. When migrating applications of known performance, export the schema statistics to maintain plan stability using the DBMS_STATS package.
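A sketch of such an export, with hypothetical schema and staging-table names:

    BEGIN
      DBMS_STATS.CREATE_STAT_TABLE(ownname => 'HR', stattab => 'STATS_SNAPSHOT');
      DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'HR', stattab => 'STATS_SNAPSHOT');
    END;
    /

The staging table can then be moved to the target database (for example, with Data Pump) and loaded with DBMS_STATS.IMPORT_SCHEMA_STATS.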
Proactive Tuning
Tuning usually implies fixing a performance problem. However, tuning should be part of the life cycle of an application, through the analysis, design, coding, production, and maintenance stages. The tuning phase is often left until the system is in production. At that time, tuning becomes a reactive exercise, where the most important bottleneck is identified and fixed.
Applications are no different from any other designed and engineered product. If the design looks right, it probably is right. This principle should always be kept in mind when building applications. Consider some of the following design issues that should be tuned proactively instead of reactively:
1. Simple Design
If the table design is so complicated that nobody can fully understand it, the table is probably designed badly.
If SQL statements are so long and involved that it would be impossible for any optimizer to effectively optimize them in real time, there is probably a bad statement, underlying transaction, or table design.
If there are many indexes on a table and the same columns are repeatedly indexed, there is probably a bad index design.
If queries are submitted without suitable qualification (the WHERE clause) for rapid response for online users, there is probably a bad user interface or transaction design.
2. Data Modeling
Data modeling is important in successful relational application design. It should be done in a way that quickly and accurately represents the business practices. Apply your greatest modeling efforts to those entities affected by the most frequent business transactions. Modeling tools can then rapidly generate schema definitions and can be useful when a fast prototype is required.
Normalizing data prevents duplication. When data is normalized, you have a clear picture of the keys and relationships. It is then easier to perform the next step of creating tables, constraints, and indexes. A good data model ultimately means that your queries are written more efficiently.
3. Tables and Indexes
Table design is largely a compromise between flexibility and performance of core transactions. To keep the database flexible and able to accommodate unforeseen workloads, the table design should be very similar to the data model, and it should be normalized to at least third normal form. However, certain core transactions can require selective denormalization for performance purposes.
Use the features supplied with Oracle Database to simplify table design for performance, such as storing tables prejoined in clusters, adding derived columns and aggregate values, and using materialized views or partitioned tables. Additionally, create check constraints and columns with default values to prevent bad data from getting into the tables.
Design should be focused on business-critical tables so that good performance can be achieved in the areas that are used the most. For noncritical tables, shortcuts in design can be adopted to enable more rapid application development. If, however, a noncore table becomes a performance problem during prototyping and testing, remedial design efforts should be applied immediately.
Index design is also largely an iterative process, based on the SQL that is generated by application designers. However, it is possible to make a sensible start by building indexes that enforce foreign key constraints (to reduce response time on joins between primary key tables and foreign key tables) and creating indexes on frequently accessed data, such as a person's name.
Primary keys and unique keys are automatically indexed, except for DISABLE VALIDATE and DISABLE NOVALIDATE RELY constraints. As the application evolves and testing is performed on realistic volumes of data, certain queries need performance improvements, for which building a better index is a good solution.
When building a new index, one of the easiest ways to speed up a query is to reduce the number of logical I/Os by eliminating a table scan from the execution plan. This can be done by appending to the index all the columns of the table referenced by the query. These columns are the select list columns and any required join or sort columns.
This technique is particularly useful in speeding up an online application's response time when time-consuming I/Os are reduced. It is best applied when testing the application with properly sized data for the first time. The most aggressive form of this technique is to build an index-organized table (IOT), as shown in the sketch below.
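Both forms, shown here as a hedged sketch with hypothetical table and column names:

    -- Covering index: carries every column the query references,
    -- so the table itself is never visited
    CREATE INDEX orders_cover_ix
      ON orders (customer_id, order_date, status, order_total);

    -- Index-organized table: the whole table is stored in the
    -- primary key index structure
    CREATE TABLE order_lines (
      order_id   NUMBER,
      line_no    NUMBER,
      product_id NUMBER,
      quantity   NUMBER,
      CONSTRAINT order_lines_pk PRIMARY KEY (order_id, line_no))
      ORGANIZATION INDEX;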
4. Views
Views can speed up and simplify application design. A simple view definition can mask data model complexity from the programmers whose priorities are to retrieve, display, collect, and store data.
However, though views provide clean programming interfaces, they can cause suboptimal, resource-intensive queries when nested too deeply. The worst type of view use is creating joins on views that reference other views, which in turn reference other views. In many cases, developers can satisfy the query directly from the table without using a view. Because of their inherent properties, views usually make it difficult for the optimizer to generate the optimal execution plan.
5. Writing Efficient SQL
An application that is designed to execute SQL efficiently must support the following characteristics:
5.1. Good database connectivity
Connecting to the database is an expensive operation that is highly unscalable. Therefore, the number of concurrent connections to the database should be minimized as much as possible. A simple system, where a user connects at application initialization, is ideal.
However, in a web-based or multi-tiered application, where application servers are used to multiplex database connections to users, this can be difficult. With these types of applications, design efforts should ensure that database connections are pooled and not re-established for each user request.
5.2. Good cursor usage and management
Maintaining user connections is equally important for minimizing the parsing activity on the system. Parsing is the process of interpreting a SQL statement and creating an execution plan for it. This process has many phases, including syntax checking, security checking, execution plan generation, and loading shared structures into the shared pool.
5.3. Minimal parsing
There are two types of parse operations: hard and soft. In hard parsing, a SQL statement is submitted for the first time, and no match is found in the shared pool. Hard parses are the most resource-intensive and unscalable, because they perform all the operations involved in a parse.
In soft parsing, a SQL statement is submitted for the first time, and a match is found in the shared pool. The match can be the result of previous execution by another user. The SQL statement is shared, which is good for performance. However, soft parses are not ideal because they still require syntax and security checking, which consume system resources.
Because parsing should be minimized as much as possible, application developers should design their application to parse SQL statements once and execute them many times. This is done through cursors. Experienced SQL programmers should be familiar with the concept of opening and re-executing cursors.
5.4. Usage of bind variables
Application developers must also ensure that SQL statements are shared within the shared pool. To do this, use bind variables to represent the parts of the query that change from execution to execution. If this is not done, the SQL statement is likely to be parsed once and never reused by other users. To ensure that SQL is shared, use bind variables and do not use string literals with SQL statements.
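The contrast is visible in dynamic SQL. A minimal sketch with a hypothetical orders table:

    DECLARE
      v_id NUMBER := 42;  -- hypothetical customer ID
    BEGIN
      -- Literal: every distinct value produces a new SQL text, so each hard parses
      EXECUTE IMMEDIATE 'DELETE FROM orders WHERE customer_id = ' || v_id;

      -- Bind variable: one shared cursor, whatever the value
      EXECUTE IMMEDIATE 'DELETE FROM orders WHERE customer_id = :id' USING v_id;
    END;
    /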
6. Cursor Sharing
For Oracle to share cursors, the code must be written identically, character for character, unless you use special initialization parameters, such as CURSOR_SHARING. This allows the system to recognize that two statements are the same and thus can be shared. You should, therefore, develop coding conventions for SQL statements in ad hoc queries, SQL scripts, and Oracle Call Interface (OCI) calls.
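As a sketch, the parameter can be changed at the session level; FORCE replaces literals with system-generated bind variables, and EXACT is the default:

    ALTER SESSION SET CURSOR_SHARING = FORCE;
    -- ... run the literal-heavy workload ...
    ALTER SESSION SET CURSOR_SHARING = EXACT;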
7. Bind Variables
Use bind variables.
SQL code readability can be improved by adhering to these guidelines:
1. Developing format standards for all statements, including those in PL/SQL code.
2. Developing rules for the use of uppercase and lowercase characters.
3. Developing rules for the use of whitespace (spaces, tabs, returns).
4. Developing rules for the use of comments (preferably keeping them out of the SQL statements themselves).
5. Using the same names to refer to identical database objects. If possible, prefix each object with a schema name.
You can also create the following performance checklist:
Set minimal initialization parameters
Set the minimal number of initialization parameters. Ideally, most initialization parameters should be left at their default values. If there is more tuning to perform, this shows up when the system is under load. Set storage options for tables and indexes in appropriate tablespaces.
Verify SQL statements
Verify that all SQL statements are optimal and understand their usage.
Validate middleware and program efficiency
Validate that middleware and programs that connect to the database are efficient in their connection management and do not log on and log off repeatedly.
Validate the use of cursors
Validate that the SQL statements use cursors efficiently. Each SQL statement should be parsed once and then executed multiple times. Mostly, this does not happen because bind variables are not used properly and the WHERE clause predicates are sent as string literals.
Validate migration of schema objects
Validate that all schema objects are correctly migrated from the development environment to the production database. This includes tables, indexes, sequences, triggers, packages, procedures, functions, Java objects, synonyms, grants, and views. Ensure that any modifications made in testing are made to the production system.
Establish a baseline set of statistics
As soon as the system is rolled out, establish a baseline set of statistics from the database and operating system. This first set of statistics validates or corrects any assumptions made in the design and rollout process.
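On the database side, an AWR baseline is one way to preserve that first set. A sketch with hypothetical snapshot IDs:

    BEGIN
      DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(
        start_snap_id => 100,   -- hypothetical snapshot range
        end_snap_id   => 110,
        baseline_name => 'post_rollout_baseline');
    END;
    /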