Counting Primitive Operations - Analysis of Algorithms - Slides

This is the first lecture on Analysis of Algorithms: what algorithms are, what primitive operations are, and how to count them.


Analysis of Algorithms

An algorithm is a step-by-step procedure for solving a problem in a finite amount of time.

Running Time (§1.1)

Most algorithms transform input objects into output objects, and the running time of an algorithm typically grows with the input size. Average-case time is often difficult to determine, so we focus on the worst-case running time:

- Easier to analyze
- Crucial to applications such as games, finance, and robotics

[Chart: running time (best case, average case, worst case) versus input size, 1000-4000]

Limitations of Experiments

- It is necessary to implement the algorithm, which may be difficult.
- Results may not be indicative of the running time on other inputs not included in the experiment.
- In order to compare two algorithms, the same hardware and software environments must be used.

Theoretical Analysis

- Uses a high-level description of the algorithm instead of an implementation.
- Characterizes running time as a function of the input size, n.
- Takes into account all possible inputs.
- Allows us to evaluate the speed of an algorithm independently of the hardware/software environment.

Pseudocode Details

Control flow:
- if ... then ... [else ...]
- while ... do ...
- repeat ... until ...
- for ... do ...
- Indentation replaces braces

Method declaration:
- Algorithm method(arg [, arg ...])
- Input ...
- Output ...

Method call:
- var.method(arg [, arg ...])

Return value:
- return expression

Expressions:
- ← for assignment (like = in Java)
- = for equality testing (like == in Java)
- n^2: superscripts and other mathematical formatting allowed

The Random Access Machine (RAM) Model

- A CPU.
- A potentially unbounded bank of memory cells, each of which can hold an arbitrary number or character.
- Memory cells are numbered, and accessing any cell in memory takes unit time.

Counting Primitive Operations (§1.1)

By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm, as a function of the input size:

Algorithm arrayMax(A, n)                 # operations
    currentMax ← A[0]                    2
    for i ← 1 to n − 1 do                1 + n
        if A[i] > currentMax then        2(n − 1)
            currentMax ← A[i]            2(n − 1)
        { increment counter i }          2(n − 1)
    return currentMax                    1
Total                                    7n − 2
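To make the count concrete, here is a minimal runnable Java sketch of the same algorithm (the class name, sample input, and printout are illustrative assumptions, not part of the slides):

```java
public class ArrayMaxDemo {

    // Direct Java translation of the arrayMax pseudocode above.
    static int arrayMax(int[] A, int n) {
        int currentMax = A[0];               // 2 ops: index + assign
        for (int i = 1; i <= n - 1; i++) {   // loop bookkeeping: 1 + n ops
            if (A[i] > currentMax) {         // 2(n − 1) ops across all iterations
                currentMax = A[i];           // up to 2(n − 1) ops in the worst case
            }                                // incrementing i: 2(n − 1) ops
        }
        return currentMax;                   // 1 op
    }

    public static void main(String[] args) {
        int[] A = {3, 1, 4, 1, 5, 9, 2, 6};
        System.out.println("max = " + arrayMax(A, A.length));
        // Worst-case primitive-operation count from the table: 7n − 2
        System.out.println("predicted worst-case ops = " + (7 * A.length - 2));
    }
}
```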

Estimating Running Time

Algorithm arrayMax executes 7n − 2 primitive operations in the worst case. Define: a = time taken by the fastest primitive operation, and b = time taken by the slowest primitive operation. Let T(n) be the worst-case time of arrayMax. Then a(7n − 2) ≤ T(n) ≤ b(7n − 2). Hence, the running time T(n) is bounded by two linear functions.
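A tiny sketch of what these bounds look like numerically (the per-operation times a and b below are hypothetical values chosen purely for illustration):

```java
public class RunningTimeBounds {
    public static void main(String[] args) {
        double a = 1e-9; // hypothetical: fastest primitive operation takes 1 ns
        double b = 5e-9; // hypothetical: slowest primitive operation takes 5 ns
        for (int n = 1000; n <= 4000; n += 1000) {
            long ops = 7L * n - 2; // worst-case operation count of arrayMax
            System.out.printf("n = %d: %.2e s <= T(n) <= %.2e s%n",
                    n, a * ops, b * ops);
        }
    }
}
```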

Growth Rates

Growth rates of functions:
- Linear: n
- Quadratic: n^2
- Cubic: n^3
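A quick sketch showing how these three functions pull apart as n doubles (illustrative, not from the slides):

```java
public class GrowthRates {
    // Print n, n^2, and n^3 as n doubles, to show how quickly each grows.
    public static void main(String[] args) {
        System.out.printf("%8s %14s %20s%n", "n", "n^2", "n^3");
        for (long n = 8; n <= 1024; n *= 2) {
            System.out.printf("%8d %14d %20d%n", n, n * n, n * n * n);
        }
    }
}
```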

Constant Factors

The growth rate is not affected by:
- constant factors, or
- lower-order terms

Examples:
- 10^2 n + 10^5 is a linear function
- 10^5 n^2 + 10^8 n is a quadratic function

Big-Oh Example

Recall that f(n) is O(g(n)) if there are positive constants c and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀.

Example: the function n^2 is not O(n):
- That would require n^2 ≤ c·n, i.e., n ≤ c.
- This inequality cannot be satisfied, since c must be a constant while n grows without bound.

More Big-Oh Examples

- 7n − 2 is O(n): we need positive constants c, n₀ such that 7n − 2 ≤ c·n for all n ≥ n₀; this holds for c = 7 and n₀ = 1.
- 3n^3 + 20n^2 + 5 is O(n^3): we need positive constants c, n₀ such that 3n^3 + 20n^2 + 5 ≤ c·n^3 for all n ≥ n₀; this holds for c = 4 and n₀ = 21.
- 3 log n + log log n is O(log n): we need positive constants c, n₀ such that 3 log n + log log n ≤ c·log n for all n ≥ n₀; this holds for c = 4 and n₀ = 2.
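A numerical sanity check of the constants above (a sketch only: sampling a finite range illustrates the inequalities but does not prove them, and the class name is an assumption). Run with `java -ea` to enable assertions:

```java
public class BigOhCheck {
    public static void main(String[] args) {
        for (long n = 1; n <= 1_000_000; n++) {
            // 7n − 2 <= 7n for all n >= 1
            assert 7 * n - 2 <= 7 * n;
            // 3n^3 + 20n^2 + 5 <= 4n^3 for all n >= 21
            if (n >= 21) assert 3 * n * n * n + 20 * n * n + 5 <= 4 * n * n * n;
            // 3 log n + log log n <= 4 log n for all n >= 2 (logs base 2)
            if (n >= 2) {
                double log = Math.log(n) / Math.log(2);
                assert 3 * log + Math.log(log) / Math.log(2) <= 4 * log;
            }
        }
        System.out.println("All sampled inequalities hold.");
    }
}
```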

Big-Oh Rules

If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e., we can:

1. Drop the lower-order terms
2. Drop the constant factors

Use the smallest possible class of functions:
- Say "2n is O(n)" instead of "2n is O(n^2)".

Use the simplest expression of the class:
- Say "3n + 5 is O(n)" instead of "3n + 5 is O(3n)".
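A small sketch of the polynomial rule in code (a hypothetical helper, not from the slides): the big-Oh class of a polynomial is determined by its degree alone.

```java
public class PolynomialBigOh {
    // Given coefficients a[0] + a[1]·n + ... + a[d]·n^d, return the big-Oh class.
    static String bigOh(double[] coeffs) {
        int d = coeffs.length - 1;
        while (d > 0 && coeffs[d] == 0) d--; // ignore zero high-order coefficients
        return d == 0 ? "O(1)" : d == 1 ? "O(n)" : "O(n^" + d + ")";
    }

    public static void main(String[] args) {
        System.out.println(bigOh(new double[]{5, 3}));        // 3n + 5 -> O(n)
        System.out.println(bigOh(new double[]{5, 0, 20, 3})); // 3n^3 + 20n^2 + 5 -> O(n^3)
    }
}
```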

Asymptotic Algorithm Analysis

The asymptotic analysis of an algorithm determines its running time in big-Oh notation, as n grows toward infinity. To perform the asymptotic analysis:
- We find the worst-case number of primitive operations executed as a function of the input size.
- We express this function with big-Oh notation.

Example:
- We determined that algorithm arrayMax executes at most 7n − 2 primitive operations.
- We say that algorithm arrayMax "runs in O(n) time".

Since constant factors and lower-order terms are eventually dropped anyhow, we can disregard them when counting primitive operations.