oracle internal: VIEW: X$KCBKPFS - PreFetch Statistics - (9.0)



Article-ID:         <Note:159898.1>
Alias:              VIEW:X$KCBKPFS
Circulation:        PUBLISHED (INTERNAL) ***Oracle Confidential - Internal Use Only***
Folder:             server.Internals.General
Topic:              ** X$ Table Definitions
Title:              VIEW: X$KCBKPFS - PreFetch Statistics - (9.0)
Document-Type:      REFERENCE
Impact:             LOW
Skill-Level:        NOVICE
Server-Version:     09.00
Updated-Date:       05-OCT-2001 05:27:02
References:         
Shared-Refs:        
Authors:            MEPRESTO.US
Attachments:        NONE
Content-Type:       TEXT/PLAIN
Products:           5/RDBMS (9.0);  

View:   X$KCBKPFS
         [K]ernel [C]ache [B]uffer chec[K]point management
           [P]re[F]etch [S]tats


 Column            Type               Description
 --------          ----               --------
 ADDR              RAW(4|8)           address of this row/entry in the array or SGA
 INDX              NUMBER             index number of this row in the fixed table array
 INST_ID           NUMBER             oracle instance number
 BUFFER_POOL_ID    NUMBER             buffer pool id
 TIMESTAMP         NUMBER             timestamp at which this entry was created
 PREFETCH_OPS      NUMBER             number of prefetch operations
 PREFETCH_BLOCKS   NUMBER             number of blocks prefetched
 WASTED_BLOCKS     NUMBER             number of prefetched blocks wasted
 CLIENTS_PREFETCH  NUMBER             number of clients actually prefetching buffers
 PREFETCH_LIMIT    NUMBER             limit to be used for each prefetch operation



Notes: 

This table is maintained by the CKPT process, which fills in an entry for each timeout period.

The prefetch history is limited to 50 entries per buffer pool.


 This section deals with controlling prefetching through the cache. It
 is used to determine whether prefetching is wasteful, i.e. are prefetched
 blocks being aged out of the cache before they can be used? This can
 happen, for example, if there is a lot of recycling activity in the
 cache and each client of the cache prefetches a large number of blocks
 into the cache.

 To prevent this the following algorithm is used to limit the amount
 of prefetching done in each buffer pool:

    A history buffer is maintained per buffer pool. This buffer maintains
    a history of prefetching performance. This is populated through a
    timeout action. The history buffer contains the following information
    in each entry (struct kcbkpfs):
      - Timestamp at which entry was created.
      - Number of prefetch operations since the last timeout action.
      - Number of blocks prefetched in this period.
      - Number of prefetched blocks that were wasted - this refers to
        blocks that were prefetched but had to be aged out before
        they could be pinned.
      - Prefetch limit (this is the value computed by this function)
      - Number of prefetching clients (as a snapshot at the time this
        function is executed)
      - Number of buffers being read at this time.
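
    As a rough illustration only, an entry of this history buffer might be
    sketched in C as below. The real kcbkpfs declaration is not reproduced
    in this note, so all names and types in the sketch are assumptions; only
    the field list above and the 50-entries-per-pool limit come from the
    note itself.

        /* Sketch only - names and types are assumptions, not the actual
         * kcbkpfs declaration. CKPT fills in one entry per timeout, in a
         * history of up to 50 entries per buffer pool. */
        #define KCBK_PF_HIST_SIZE 50       /* assumed name for the 50-entry limit */

        typedef struct kcbkpfs_sketch {
          unsigned int timestamp;          /* TIMESTAMP: when the entry was created      */
          unsigned int prefetch_ops;       /* PREFETCH_OPS since the last timeout        */
          unsigned int prefetch_blocks;    /* PREFETCH_BLOCKS read in this period        */
          unsigned int wasted_blocks;      /* WASTED_BLOCKS aged out before being pinned */
          unsigned int prefetch_limit;     /* PREFETCH_LIMIT computed for this period    */
          unsigned int clients_prefetch;   /* CLIENTS_PREFETCH snapshot                  */
          unsigned int buffers_being_read; /* buffers being read at this time            */
        } kcbkpfs_sketch;

        /* one such history per buffer pool; X$KCBKPFS exposes each entry as
         * a row, with BUFFER_POOL_ID identifying the pool */
        typedef struct kcbkpf_hist_sketch {
          kcbkpfs_sketch entries[KCBK_PF_HIST_SIZE];
        } kcbkpf_hist_sketch;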

    The algorithm uses the history to determine the prefetch limit. It
    examines the last few history entries (defined by KCBK_HIST_WINDOW) and then
    computes the cumulative number of prefetch operations, prefetched
    blocks and wasted prefetch blocks over this window. It then applies
    the following rules to adjust the limit:

      - If there are no prefetched blocks, then set the limit to
        Q/C where Q is the prefetch quota (in number of buffers) and
        C is the number of clients performing prefetching

      - If there are prefetched blocks and some of them were wasted,
        then the limit is reduced by the fraction (P - W)/P
        where P is the number of blocks prefetched over the history
        window and W is the number of prefetched blocks that were wasted.
        If W happens to be greater than P, this implies that the buffers
        that were prefetched before the history window were wasted in
        this time interval. In this case, we reduce the limit to half
        its value.

      - If there are no wasted prefetch blocks, then there are 3 cases:

        (a) The number of clients has gone down - in this case we
            double the prefetch limit. If this causes any wasted prefetches,
            it will be reduced by that fraction in the next timeout. Note
            that the increase is limited by the ratio (Q/C') where Q is
            the prefetch quota and C' is the new number of clients.

        (b) The number of clients has increased - retain the limit as long
            as it is less than (Q/C') (same ratio as above). If not,
            set it to Q/C'.

        (c) The number of clients remains the same - the limit is doubled
            if the number of prefetched buffers has decreased by at least
            25% (again subject to the Q/C limit), otherwise the limit is
            retained. The rationale is that if the number of prefetched
            buffers has gone down, the clients have reduced their
            prefetching, so increasing the limit may not lead to wasted
            prefetches.

 Note that if the increase in the limit leads to wasted prefetching,
 then at the next timeout, the limit will be reduced by the fraction
 of the prefetched blocks that were wasted.
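
 As a minimal sketch of the rules above - not the actual kernel routine;
 the function name, the Pprev parameter (what the 25% decrease in case (c)
 is measured against) and the exact arithmetic are assumptions - the
 adjustment could look roughly like this in C. Q is the prefetch quota,
 Cold/Cnew are the previous/current number of prefetching clients, and
 P, W are the blocks prefetched and wasted over the history window:

    static unsigned int
    kcbk_adjust_limit_sketch(unsigned int limit,  /* current prefetch limit      */
                             unsigned int Q,      /* prefetch quota (buffers)    */
                             unsigned int Cold,   /* prefetching clients before  */
                             unsigned int Cnew,   /* prefetching clients now     */
                             unsigned int P,      /* blocks prefetched in window */
                             unsigned int Pprev,  /* same, previous window       */
                             unsigned int W)      /* wasted blocks in window     */
    {
      unsigned int cap = Q / (Cnew ? Cnew : 1);   /* per-client share Q/C'       */

      if (P == 0)                    /* no prefetched blocks: set limit to Q/C   */
        return cap;

      if (W > 0) {                   /* some prefetched blocks were wasted       */
        if (W > P)                   /* waste carried over from before window    */
          return limit / 2;          /*   -> reduce the limit to half its value  */
        /* reduce by the wasted fraction: new limit = limit * (P - W) / P        */
        return (unsigned int)((unsigned long long)limit * (P - W) / P);
      }

      /* no wasted prefetch blocks over the window                               */
      if (Cnew < Cold)               /* (a) fewer clients: double, cap at Q/C'   */
        return (2 * limit < cap) ? 2 * limit : cap;

      if (Cnew > Cold)               /* (b) more clients: retain while < Q/C'    */
        return (limit < cap) ? limit : cap;

      /* (c) same number of clients: double only if prefetched blocks dropped    */
      /*     by at least 25%, still subject to the Q/C limit                     */
      if (4 * P <= 3 * Pprev)
        return (2 * limit < cap) ? 2 * limit : cap;

      return limit;                  /* otherwise retain the current limit       */
    }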
