Table 5. Naming Convention. Supplied with this information, the DBA can construct a query that returns the trace file name for a given session or for all sessions attached to the database. The query below shows the trace file name for each process. Notice that trace files are created per session, not per named user: if the SYS user has two connections to the database, the commands for each session are written to separate trace files. The query can also be modified to return the file name for the currently connected session.
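A sketch of such a query joins V$SESSION to V$PROCESS; the file-name concatenation below follows the Windows naming convention from Table 5 and assumes an illustrative instance name of ORCL:

```sql
-- Illustrative only: the exact file-name format depends on the platform
-- and the instance name (assumed here to be ORCL on Windows).
SELECT s.username,
       p.spid,
       'ORCL_ora_' || p.spid || '.trc' AS trace_file_name
  FROM v$session s,
       v$process p
 WHERE s.paddr = p.addr
   AND s.username IS NOT NULL;
```

To restrict the output to the currently connected session, add a predicate such as `s.audsid = USERENV('SESSIONID')`.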
Both queries above generate the trace file names, as produced by Oracle9i on Windows XP, that would exist if the session were traced. A DBA should not trace a session indefinitely, because tracing continues to consume both performance resources and file system resources.
When the DBA determines that enough data has been gathered, the next step is to disable tracing. The same options used to enable tracing are used to disable it. This process is a perfect candidate for automation, and a script can also serve as a wrapper for the standard methods of enabling tracing.
It requires that a time interval, in seconds, be set for the trace so that it does not run perpetually and bog down the session. When the time has elapsed, the script disables tracing for the session and reports the relevant trace information: user, time, and trace file name.
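A minimal sketch of such a wrapper might look like the following. The procedure name is hypothetical, and it assumes access to SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION (for tracing another session by SID and serial number) and DBMS_LOCK.SLEEP:

```sql
CREATE OR REPLACE PROCEDURE trace_session_for (
  p_sid     IN NUMBER,
  p_serial  IN NUMBER,
  p_seconds IN NUMBER
) AS
BEGIN
  DBMS_OUTPUT.PUT_LINE('Tracing Start Time: ' ||
                       TO_CHAR(SYSDATE, 'HH24:MI:SS'));
  -- Enable tracing in the target session.
  SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(p_sid, p_serial, TRUE);
  -- Let the target session run for the requested interval.
  DBMS_LOCK.SLEEP(p_seconds);
  -- Disable tracing when the interval has elapsed.
  SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(p_sid, p_serial, FALSE);
  DBMS_OUTPUT.PUT_LINE('Tracing Stop Time: ' ||
                       TO_CHAR(SYSDATE, 'HH24:MI:SS'));
END;
/
```

A session tracing itself could instead call DBMS_SESSION.SET_SQL_TRACE(TRUE/FALSE) around its own workload, with no sleep needed.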
The script accepts the time interval as a parameter. After tracing is turned on, it waits for the interval to elapse; when complete, it stops the trace. Here the time interval specified was 30 seconds, and the elapsed time of the trace can be seen in the timestamps below.
Tracing Start Time:
Tracing Stop Time:
The next step is to run tkprof against the trace file.
Step 4: Locate Trace File and Execute tkprof. Locating the file is easy because the script above gives us the file name. With minimal effort, a programmer could create a trace file parser and formatter similar to tkprof that provides the trace data in a format even more suitable for analysis.
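As a sketch of that idea, the fragment below aggregates call counts and elapsed time per call type from raw trace lines. The sample lines follow the documented raw-trace layout (`EXEC #n:c=...,e=...`), but both the sample and the field handling should be treated as illustrative, not as a full tkprof replacement:

```python
import re
from collections import defaultdict

def summarize_trace(text):
    """Aggregate call counts and elapsed time (the e= field) per call type."""
    stats = defaultdict(lambda: {"count": 0, "elapsed": 0})
    for line in text.splitlines():
        m = re.match(r"(PARSE|EXEC|FETCH) #\d+:(.*)", line)
        if m:
            call, fields = m.groups()
            stats[call]["count"] += 1
            e = re.search(r"\be=(\d+)", fields)  # elapsed time field
            if e:
                stats[call]["elapsed"] += int(e.group(1))
    return dict(stats)

# Illustrative trace lines, not output from a real session:
sample = """\
PARSE #1:c=0,e=134,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=4
EXEC #1:c=0,e=54,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=4
FETCH #1:c=0,e=120,p=2,cr=3,cu=0,mis=0,r=10,dep=0,og=4
FETCH #1:c=0,e=80,p=0,cr=2,cu=0,mis=0,r=5,dep=0,og=4
"""
print(summarize_trace(sample))
```

A real parser would also track cursor numbers, recursion depth (`dep=`), and the `PARSING IN CURSOR` sections that carry the SQL text.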
The tkprof command can now be executed from the operating system prompt. Two other files were also created by tkprof. Step 5: Analyze tkprof Output. This is the most difficult step in the process.
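A representative invocation is shown below; the file names are illustrative, sys=no suppresses recursive SYS statements (discussed later), and sort=fchela orders statements by fetch elapsed time:

```
tkprof ORCL_ora_2444.trc ORCL_ora_2444.txt explain=scott/tiger sys=no sort=fchela
```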
Each tkprof output file contains a header, body, and summary section. The header simply displays the trace file name, definitions, and sort options selected. The body contains the performance metrics for SQL statements.
The summary section contains an aggregate of performance statistics for all SQL statements in the file. [tkprof statistics table elided: call counts of Parse 10, Execute 10, and Fetch 20, followed by a Rows / Row Source Operation listing.] The output displays a table of performance metrics after each unique SQL statement. Each row in the table corresponds to one of the three steps required in SQL processing. Parse - the translation of the SQL into an execution plan.
Since an insert does not need to acquire locks on the rows selected, this can happen several times for a single insert statement. Am I right? We are not modifying existing rows when we insert; we create new ones. So there is no read-consistent version to compare to a current mode version.
I know that is the case with Statspack reports. And the outer measurement does not suffer from that, which is how CPU time can appear greater than elapsed time.
Tony, May 13, - am UTC. Tom, I have tkprof output for a query. I see CPU time greater than the elapsed time. How is this possible?

Each row corresponds to one of the three steps of SQL statement processing. Statistics are identified by the value of the CALL column:

PARSE - Translates the SQL statement into an execution plan, including checks for proper security authorization and checks for the existence of tables, columns, and other referenced objects.

EXECUTE - Actual execution of the statement by Oracle.

FETCH - Retrieves rows returned by a query.

The other columns of the SQL Trace facility output are combined statistics for all parses, all executes, and all fetches of a statement.
CPU - Total CPU time in seconds for all parse, execute, or fetch calls for the statement.

ELAPSED - Total elapsed time in seconds for all parse, execute, or fetch calls for the statement.

DISK - Total number of data blocks physically read from the datafiles on disk for all parse, execute, or fetch calls.

QUERY - Total number of buffers retrieved in consistent mode for all parse, execute, or fetch calls. Usually, buffers are retrieved in consistent mode for queries.

CURRENT - Total number of buffers retrieved in current mode.

Statistics about the processed rows appear in the ROWS column.

ROWS - Total number of rows processed by the SQL statement. This total does not include rows processed by subqueries of the SQL statement.

The row source counts are displayed when a cursor is closed. Exiting or reconnecting causes the counts to be displayed. Timing statistics have a resolution of one hundredth of a second; therefore, any operation on a cursor that takes a hundredth of a second or less might not be timed accurately.
Keep this in mind when interpreting statistics. In particular, be careful when interpreting the results from simple queries that execute very quickly. Sometimes, in order to execute a SQL statement issued by a user, Oracle must issue additional statements. Such statements are called recursive calls or recursive SQL statements.
For example, if you insert a row into a table that does not have enough space to hold that row, then Oracle makes recursive calls to allocate the space dynamically. Recursive calls are also generated when data dictionary information is not available in the data dictionary cache and must be retrieved from disk. You can suppress the listing of Oracle internal recursive calls (for example, space management) in the output file by setting the SYS command-line parameter to NO.
The statistics for a recursive SQL statement are included in the listing for that statement, not in the listing for the SQL statement that caused the recursive call. So, when you are calculating the total resources required to process a SQL statement, consider the statistics for that statement as well as those for recursive calls caused by that statement.
For more information, see "Avoiding the Trigger Trap". These statistics appear on separate lines following the tabular statistics. Trace files generated immediately after instance startup contain data that reflects the activity of the startup process.
For the purposes of tuning, ignore such trace files. The key is the number of block visits, both query (that is, subject to read consistency) and current (that is, not subject to read consistency). Segment headers and blocks that are going to be updated are acquired in current mode, but all query and subquery processing requests the data in query mode.
You can find high disk activity in the disk column. If it is acceptable to have 7. You can also see that 10 unnecessary parse calls were made (there were 11 parse calls for this one statement) and that array fetch operations were performed.
You know this because more rows were fetched than there were fetches performed. You might want to keep a history of the statistics generated by the SQL Trace facility for an application, and compare them over time.
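tkprof's INSERT parameter generates a SQL script for exactly this purpose. A representative invocation (file names are illustrative):

```
tkprof ORCL_ora_2444.trc ORCL_ora_2444.txt insert=trace_history.sql
```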
This script creates the output table if necessary and contains an INSERT statement for each row of statistics; if the table already exists, the script simply inserts the new rows into it. Most output table columns correspond directly to the statistics that appear in the formatted output file. The columns in Table help you identify a row of statistics.

DATE_OF_INSERT - The date and time when the row was inserted into the table.
This value is not exactly the same as the time the statistics were collected by the SQL Trace facility.

DEPTH - Indicates the level of recursion at which the SQL statement was issued. For example, a value of 0 indicates that a user issued the statement. A value of 1 indicates that Oracle generated the statement as a recursive call to process a statement with a value of 0 (a statement issued by a user). A value of n indicates that Oracle generated the statement as a recursive call to process a statement with a value of n-1.
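Once the statistics are loaded, the history can be queried like any other table. A sketch, assuming the default TKPROF_TABLE name and its DEPTH column:

```sql
-- How many traced statements were issued at each recursion level?
SELECT depth,
       COUNT(*) AS statements
  FROM tkprof_table
 GROUP BY depth
 ORDER BY depth;
```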