RH442 - CHAPTER 5


Section 5 - Queuing Theory ( pg 99 )

Introduction to queuing theory ( 5 - 3 )

The study of waiting lines, or queues
Permits the derivation and calculation of several performance measures
average waiting time in the queue or in the system
expected number of units waiting or receiving service
probability of encountering the system in certain states, such as empty or full
having an available server, or having to wait a certain time to be served

Queuing Disciplines ( 5 - 4 )

First In, First Out (FIFO) - a customer that finds the server busy goes to the end of the queue, and waits the entire length of the queue to be served. Usually the default
Last In, First Out (LIFO) - a customer that arrives at a queue goes immediately to the front of the queue and will be served next unless another customer arrives before they are served
Random Service - wherever they are placed in the queue, customers are served in a random order
Priority Disciplines - Every customer has a priority and the server will always select the customers from the queue with the highest priority ( i.e. hospital waiting rooms )

Little's Law ( 5 - 5 )

L = λW

 

Where:
    L = the average number of units in the queuing system ( queue length )
    λ = the average number of units arriving per unit of time ( arrival rate )
    W = the average waiting time of a unit in the system

 

This formula is only valid when the system is stable ( arrivals do not outpace completions over time ), but within that constraint it can be used to quickly approximate queue times.
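
As a quick worked example with made-up numbers: if requests arrive at λ = 50 per second and each one spends W = 0.2 seconds in the system on average, then L = λW = 50 × 0.2 = 10 requests in the system on average. Rearranged as W = L / λ, the same law estimates wait time from an observed queue length and arrival rate.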

Queue Length ( 5 - 6 )

In Linux, a queue length can be either a tuneable or a measurement
Wide range of queues to be considered
processor load ( queue length )
disk request queue
network read / write queue
application queues

 

Most queues have a prioritization algorithm
class-based queuing ( see the tc sketch after this list )
commit reads before writes
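
One place class-based queuing shows up in Linux is the network traffic control layer. A minimal sketch using the tc HTB qdisc ( eth0 and the rates are assumptions, purely for illustration ):

# tc qdisc add dev eth0 root handle 1: htb default 20
# tc class add dev eth0 parent 1: classid 1:10 htb rate 512kbit ceil 1mbit
# tc class add dev eth0 parent 1: classid 1:20 htb rate 256kbit
# tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 22 0xffff flowid 1:10

Traffic with destination port 22 is steered into the higher-rate class 1:10; everything else falls into the default class 1:20.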

 

Longer queue lengths will require more memory
Linux tuneables are located in /proc and /sys
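
A minimal sketch of where some of these tuneables live ( sda is an assumed device name; the values shown are typical defaults, not guarantees ):

# cat /sys/block/sda/queue/nr_requests
128
# cat /proc/sys/net/core/netdev_max_backlog
1000

The corresponding measurements come from the usual tools: the r column of vmstat is the run queue length, and the avgqu-sz column of iostat -x is the average disk request queue length.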

Wait Time ( 5 - 9 )

Includes two key components
queue time - time spent in the queue waiting on a resource
service time - time spent processing the request

A closer look at wait time ( 5 - 10 )

Service time includes both time spent in kernel mode ( system time ) and time spent processing the request in user mode ( user time )

 

Profiling time with time
Two commands to time the execution of a process
    time and /usr/bin/time ( time is usually a shell built-in, while /usr/bin/time is a separate binary that reports additional detail )

 

# time updatedb

 

real 0m6.317s
user 0m0.028s
sys 0m0.084s

 

# /usr/bin/time updatedb
0.01user 0.09system 0:00.11elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+269minor)pagefaults 0swaps

Finding hotspots in code

strace - In the simplest case strace runs the specified command until it exits. It intercepts and records the system calls which are called by a process and the signals which are received by a process.
    -c Count time, calls, and errors for each system call and report a summary
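
For example ( ls is just a stand-in for whatever command is being profiled; strace writes to stderr, so redirecting stdout only hides the command's own output ):

# strace -c ls > /dev/null

The resulting table lists, per system call, the percentage of time, total seconds, number of calls, and errors.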

 

ltrace - runs the specified command until it exits. It intercepts and records dynamic library calls which are called by the executed process and the signals which are received by that process.
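
ltrace accepts the same -c summary option, so a comparable example ( ls again as a stand-in ) would be:

# ltrace -c ls > /dev/null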

Completion rate ( 5 - 14 )

Throughput = # of processes / Average Completion rate
OR
Average Completion rate = # of processes / Throughput

Predicting Resource Limits ( 5 - 18 )

Utilization rate - utilization = (service time)(arrival rate)
For a saturated resource ( utilization = 1 ) at steady state ( completion rate = arrival rate ), the maximum completion rate is 1 / service time
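
As a worked example with made-up numbers: if a disk needs 10 ms ( 0.010 s ) of service time per request and requests arrive at 80 per second, utilization = 0.010 × 80 = 0.8, so the disk is busy 80% of the time. At 100 requests per second utilization reaches 1 and the resource saturates; beyond that point queues grow rather than throughput.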

Summary of Strategies

Tune queue length - longer disk queue lengths allow for sorting and merging of requests; prefer reads over writes
Tune arrival and completion rates
reduce arrival rate by distributing over multiple resources
use lower-overhead processes
use more efficient protocols

 

Tune wait time
use the disk deadline scheduler ( see the example below )
more memory to cache more items
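
A minimal sketch of switching a disk to the deadline I/O scheduler at runtime ( sda is an assumed device name; the list of available schedulers varies by kernel, with the active one shown in brackets ):

# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]
# echo deadline > /sys/block/sda/queue/scheduler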

 

Change one item at a time and repeat benchmarks