Chapter 5: Terminology and Basic Algorithms

Ajay Kshemkalyani and Mukesh Singhal

Distributed Computing: Principles, Algorithms, and Systems

Cambridge University Press

Topology Abstraction and Overlays

- System: undirected (weighted) graph (N, L), where n = |N| and l = |L|
- Physical topology
  - Nodes: network nodes, routers, all end hosts (whether participating or not)
  - Edges: all LAN and WAN links, direct edges between end hosts
  - E.g., the Fig. 5.1(a) topology + all routers and links in the WANs
- Logical topology (application context)
  - Nodes: the end hosts where the application executes
  - Edges: logical channels among these nodes
  - All-to-all fully connected (e.g., Fig. 5.1(b)), or any subgraph thereof, e.g., a neighborhood view (Fig. 5.1(a)): partial system view, needs multi-hop paths, easy to maintain
- Superimposed topology (a.k.a. topology overlay):
  - Superimposed on the logical topology
  - Goal: efficient information gathering, distribution, or search (as in P2P overlays)
  - E.g., ring, tree, mesh, hypercube
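
To make the three layers concrete, here is a small illustrative Python sketch (the hosts, routers, and edges are hypothetical, not those of Fig. 5.1): a physical topology including routers, an all-to-all logical topology over the participating end hosts, and a ring overlay superimposed on the logical channels.

```python
# Hypothetical end hosts participating in the application
hosts = ["P1", "P2", "P3", "P4"]

# Physical topology: end hosts plus the routers/links they attach to (illustrative)
physical = {
    "P1": ["R1"], "P2": ["R1"], "P3": ["R2"], "P4": ["R2"],
    "R1": ["P1", "P2", "R2"], "R2": ["P3", "P4", "R1"],
}

# Logical topology: all-to-all logical channels among the end hosts only
logical = {u: [v for v in hosts if v != u] for u in hosts}

# Superimposed (overlay) topology: a ring over the logical channels,
# e.g., for efficient information dissemination as in P2P overlays
ring_overlay = {hosts[i]: [hosts[(i - 1) % len(hosts)], hosts[(i + 1) % len(hosts)]]
                for i in range(len(hosts))}

print(logical["P1"])       # ['P2', 'P3', 'P4']
print(ring_overlay["P1"])  # ['P4', 'P2']
```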

Classifications and Basic Concepts (1)

- Application execution vs. control algorithm execution, each with its own events
  - Control algorithm:
    - For monitoring and auxiliary functions, e.g., creating a spanning tree (ST), a maximal independent set (MIS), or a connected dominating set (CDS), reaching consensus, global state detection (deadlock, termination, etc.), checkpointing
    - Superimposed on the application execution, but does not interfere with it
    - Its send, receive, and internal events are transparent to the application execution
    - A.k.a. protocol
- Centralized and distributed algorithms
  - Centralized: asymmetric roles; client-server configuration; processing and bandwidth bottleneck; single point of failure
  - Distributed: more balanced roles of the nodes; it is difficult to design perfectly distributed algorithms (e.g., snapshot algorithms, tree-based algorithms)
- Symmetric and asymmetric algorithms

Classifications and Basic Concepts (2)

- Anonymous algorithm: process ids or processor ids are not used to make any execution (run-time) decisions
  - Structurally elegant but hard to design, or impossible; e.g., anonymous leader election is impossible
- Uniform algorithm: cannot use n, the number of processes, as a parameter in the code
  - Allows scalability; process leave/join is easy, and only the neighbors need to be aware of logical topology changes
- Adaptive algorithm: let k (≤ n) be the number of processes participating in the context of a problem X when X is being executed; complexity should be expressible as a function of k, not n
  - E.g., mutual exclusion: critical section contention overhead is expressible in terms of the number of processes contending at this time (k)

Classifications and Basic Concepts (4)

- Execution inhibition (a.k.a. freezing)
  - Protocols that require suspension of normal execution until some stipulated operations occur are inhibitory
  - Concept: different from blocking vs. non-blocking primitives
  - Analyze the inhibitory impact of the control algorithm on the underlying execution
  - Classification 1:
    - Non-inhibitory protocol: no event is disabled in any execution
    - Locally inhibitory protocol: in any execution, any delayed event is a locally delayed event, i.e., the inhibition is under local control and does not depend on any receive event
    - Globally inhibitory: in some execution, some delayed event is not locally delayed
  - Classification 2: send inhibitory / receive inhibitory / internal event inhibitory

Classifications and Basic Concepts (5)

- Synchronous vs. asynchronous systems
  - Synchronous:
    - Known upper bound on message delay
    - Known bound on the drift rate of each clock with respect to real time
    - Known upper bound on the time for a process to execute a logical step
  - Asynchronous: the above criteria are not satisfied
  - There is a spectrum of models in which some combination of the criteria is satisfied
  - The algorithm to solve a problem depends greatly on this model
  - Distributed systems are inherently asynchronous
- On-line vs. off-line (control) algorithms
  - On-line: executes as the data is being generated; clear advantages for debugging, scheduling, etc.
  - Off-line: requires all the (trace) data before execution begins

Classifications and Basic Concepts (7)

- Process failures (synchronous and asynchronous systems), in order of increasing severity:
  - Fail-stop: a properly functioning process stops execution; other processes learn about the failed process (through some mechanism)
  - Crash: a properly functioning process stops execution; other processes do not learn about the failed process
  - Receive omission: a properly functioning process fails by receiving only some of the messages that have been sent to it, or by crashing
  - Send omission: a properly functioning process fails by sending only some of the messages it is supposed to send, or by crashing; incomparable with the receive omission model
  - General omission: send omission + receive omission
  - Byzantine (or malicious) failure, with authentication: a process may (mis)behave in any manner, including sending fake messages; the authentication facility implies that if a faulty process claims to have received a message from a correct process, that claim can be verified
  - Byzantine (or malicious) failure, without authentication
- The non-malicious failure models are "benign"
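
As a small illustration of this severity ordering, here is a hedged Python sketch (the model names and the encoding as a DAG are my own; send omission and receive omission are deliberately left incomparable, as noted above):

```python
# Each entry lists the strictly less severe models that a model subsumes.
SUBSUMES = {
    "crash": ["fail-stop"],
    "receive-omission": ["crash"],
    "send-omission": ["crash"],
    "general-omission": ["receive-omission", "send-omission"],
    "byzantine-with-auth": ["general-omission"],
    "byzantine": ["byzantine-with-auth"],
}

def at_least_as_severe(a, b):
    """True if failure model a is at least as severe as model b
    (reachability in the severity DAG; this is a partial order,
    so some pairs compare False in both directions)."""
    if a == b:
        return True
    return any(at_least_as_severe(x, b) for x in SUBSUMES.get(a, []))

print(at_least_as_severe("byzantine", "crash"))                 # True
print(at_least_as_severe("send-omission", "receive-omission"))  # False (incomparable)
print(at_least_as_severe("receive-omission", "send-omission"))  # False
```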

Classifications and Basic Concepts (8)

- Process failures (contd.) → timing failures (synchronous systems):
  - General omission failures, or clocks violating their specified drift rates, or a process violating the bounds on the time to execute a step
  - More severe than general omission failures
- Failure models influence the design of algorithms
- Link failures
  - Crash failure: a properly functioning link stops carrying messages
  - Omission failure: the link carries only some of the messages sent on it, not others
  - Byzantine failure: the link exhibits arbitrary behavior, including creating fake messages and altering messages sent on it
- Link failures → timing failures (synchronous systems): messages are delivered faster or slower than the specified behavior

Program Structure

- The program structure is Communicating Sequential Processes (CSP)-like:

  ∗ [ G1 −→ CL1 || G2 −→ CL2 || · · · || Gk −→ CLk ]

- The repetitive command "∗" denotes an infinite loop.
- Inside it, the alternative command "||" is over guarded commands; it specifies the execution of exactly one of its constituent guarded commands.
- Guarded command syntax: "G −→ CL", where the guard G is a boolean expression and CL is a list of commands to be executed if G is true.
- A guard may check for message arrival from another process.
- The alternative command fails if all the guards fail; if more than one guard is true, one of them is nondeterministically chosen for execution.
- In Gm −→ CLm, the evaluation of Gm and the execution of CLm are performed atomically.
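
A minimal Python sketch of how this repetitive/alternative construct might be emulated (the guards, commands, and the inbox variable are illustrative, not from the slides; the atomic coupling of guard evaluation and command execution is not modeled):

```python
import random

def repetitive_command(guarded_commands):
    """Emulate '* [ G1 -> CL1 || ... || Gk -> CLk ]':
    loop forever, each time executing exactly one enabled guarded command;
    the loop terminates when the alternative command fails, i.e. no guard is true."""
    while True:
        enabled = [cl for g, cl in guarded_commands if g()]  # evaluate all guards
        if not enabled:             # all guards fail -> alternative command fails
            return
        random.choice(enabled)()    # nondeterministically pick one enabled command

# Illustrative guards/commands: a guard checking for message arrival
inbox = ["QUERY", "ACCEPT"]
repetitive_command([
    (lambda: len(inbox) > 0, lambda: print("received", inbox.pop(0))),
    (lambda: False,          lambda: print("never runs")),
])
```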

Basic Distributed Graph Algorithms: Listing

- Sync 1-initiator ST (flooding)
- Async 1-initiator ST (flooding)
- Async concurrent-initiator ST (flooding)
- Async DFS ST
- Broadcast and convergecast on a tree
- Sync 1-source shortest path
- Distance Vector Routing
- Async 1-source shortest path
- All-sources shortest path: Floyd-Warshall
- Sync and async constrained flooding
- MST, sync
- MST, async
- Synchronizers: simple, α, β, γ
- MIS, async, randomized
- CDS
- Compact routing tables
- Leader election: LCR algorithm
- Dynamic object replication

Synchronous 1-init Spanning Tree: Example

[Figure 5.2: QUERY flooding over nodes A, B, C, D, E, F; tree edges in boldface, round numbers of the QUERY messages labeled]

- A designated root; node A in the example.
- Each node identifies its parent.
- How to identify the child nodes?
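
A minimal round-based Python sketch of the synchronous flooding (not the book's pseudocode; the example graph is illustrative, since the edges of Figs. 5.1/5.2 are not reproduced here): in each round, the nodes first reached in the previous round send QUERY to their neighbors, and an unvisited node adopts the sender of the first QUERY it receives as its parent.

```python
def sync_spanning_tree(adj, root):
    """Round-based flooding sketch: in round r, the nodes first reached in round
    r-1 send QUERY to all their neighbors; an unvisited receiver adopts the sender
    as its parent (ties within a round are broken by iteration order here).
    adj: dict mapping each node to its list of neighbors (undirected graph)."""
    parent = {root: None}
    frontier = [root]               # nodes that send QUERY in the current round
    rounds = 0
    while frontier:
        rounds += 1
        next_frontier = []
        for u in frontier:
            for v in adj[u]:
                if v not in parent:  # first QUERY received: u becomes the parent
                    parent[v] = u
                    next_frontier.append(v)
        frontier = next_frontier
    return parent, rounds            # tree edges: (v, parent[v]) for v != root

# Illustrative 6-node graph (edges are made up, not those of Fig. 5.2)
adj = {"A": ["B", "F"], "B": ["A", "C", "E"], "C": ["B", "D"],
       "D": ["C", "E"], "E": ["B", "D", "F"], "F": ["A", "E"]}
print(sync_spanning_tree(adj, "A"))
```

Because a node joins the tree in the round corresponding to its distance from the root, the tree produced is a breadth-first search tree rooted at A, consistent with the observation on the next slide.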

Synchronous 1-init Spanning Tree: Complexity

- Termination: after d rounds, where d is the diameter.
- How can a process terminate after setting its parent?
- Complexity:
  - Local space: O(degree)
  - Global space: O(Σ local space)
  - Local time: O(degree + diameter)
  - Message time complexity: d rounds, i.e., d message hops
  - Message complexity: at least 1 and at most 2 messages per edge; thus in the range [l, 2l]
- Spanning tree: analogous to breadth-first search

Asynchronous 1-init Spanning Tree: Operation

- The root initiates flooding of QUERY messages to identify the tree edges.
- Parent: the first node from which a QUERY is received.
  - An ACCEPT (positive response) is sent in reply; QUERY is then sent to the other neighbors.
  - Termination: when an ACCEPT or REJECT (negative response) has been received from all non-parent neighbors. Why?
- A QUERY from a non-parent is replied to with a REJECT.
- Is it necessary to track the neighbors, in order to determine the children and to know when to terminate?
- Why is the REJECT message type required?
- Can the use of REJECT messages be eliminated? How? With what impact?
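
A minimal event-driven Python sketch of this QUERY/ACCEPT/REJECT exchange (illustrative, not the book's pseudocode; a randomly ordered message pool stands in for the asynchronous channels, and the example graph is made up):

```python
import random

def async_spanning_tree(adj, root, seed=0):
    """Each node: on its first QUERY, set the sender as parent, reply ACCEPT, and
    forward QUERY to the other neighbors; on any later QUERY, reply REJECT.
    A node has locally terminated once every non-parent neighbor has answered
    with an ACCEPT or a REJECT (its 'awaiting' set is empty)."""
    rng = random.Random(seed)
    parent = {root: None}
    children = {u: set() for u in adj}
    awaiting = {root: set(adj[root])}               # responses still owed to root
    pool = [(root, v, "QUERY") for v in adj[root]]  # in-flight messages
    while pool:
        src, dst, kind = pool.pop(rng.randrange(len(pool)))  # deliver in any order
        if kind == "QUERY":
            if dst not in parent:                   # first QUERY: src is the parent
                parent[dst] = src
                awaiting[dst] = set(adj[dst]) - {src}
                pool.append((dst, src, "ACCEPT"))
                pool += [(dst, v, "QUERY") for v in adj[dst] if v != src]
            else:
                pool.append((dst, src, "REJECT"))   # later QUERY: negative response
        else:                                       # an ACCEPT or REJECT arrives
            if kind == "ACCEPT":
                children[dst].add(src)
            awaiting[dst].discard(src)              # dst terminates when this empties
    return parent, children

# Illustrative 6-node graph (edges are made up, not those of Fig. 5.2)
adj = {"A": ["B", "F"], "B": ["A", "C", "E"], "C": ["B", "D"],
       "D": ["C", "E"], "E": ["B", "D", "F"], "F": ["A", "E"]}
print(async_spanning_tree(adj, "A"))
```

In this exchange a tree edge carries 2 messages (QUERY plus ACCEPT) and a non-tree edge carries up to 4 (QUERY and REJECT in both directions), matching the [2l, 4l] bound on the following slide.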

Asynchronous 1-init Spanning Tree: Complexity

- Local termination: after receiving an ACCEPT or REJECT from all non-parent neighbors.
- Complexity:
  - Local space: O(degree)
  - Global space: O(Σ local space)
  - Local time: O(degree)
  - Message complexity: at least 2 and at most 4 messages per edge; thus in the range [2l, 4l]
  - Message time complexity: d + 1 message hops
- Spanning tree: no claim can be made about its shape; worst-case height is n − 1.