

COMPILER DESIGN

LECTURE NOTES

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

SHRI VISHNU ENGINEERING COLLEGE FOR WOMEN

(Approved by AICTE, Accredited by NBA, Affiliated to JNTU Kakinada) BHIMAVARAM – 534 202


UNIT - 1

1.1 OVERVIEW OF LANGUAGE PROCESSING SYSTEM

1.2 Preprocessor

A preprocessor produces input to compilers. It may perform the following functions.

  1. Macro processing: A preprocessor may allow a user to define macros that are shorthands for longer constructs (see the example after this list).
  2. File inclusion: A preprocessor may include header files into the program text.
  3. Rational preprocessor: These preprocessors augment older languages with more modern flow-of-control and data-structuring facilities.
  4. Language extensions: These preprocessors attempt to add capabilities to the language by providing what amounts to built-in macros.
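As a small illustration of macro processing and file inclusion (the macro names and values below are only examples, not taken from these notes), a C preprocessor might expand the following before the compiler proper ever sees the code:

#include <stdio.h>                 /* file inclusion: the header text is copied in */
#define PI 3.14159                 /* macro processing: PI is shorthand for a constant */
#define AREA(r) (PI * (r) * (r))   /* a parameterised macro */

int main(void)
{
    /* After preprocessing, AREA(2.0) has been replaced by (3.14159 * (2.0) * (2.0)). */
    printf("area = %f\n", AREA(2.0));
    return 0;
}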

1.3 COMPILER

A compiler is a translator program that takes a program written in a high-level language (HLL), the source program, and translates it into an equivalent program in a machine-level language (MLL), the target program. An important part of a compiler is reporting errors in the source program to the programmer.

Diagram: the source program is fed to the compiler, which produces the target program and error messages for the programmer.


Disadvantages:

Execution of the program is slower, and memory consumption is higher.

2 Loader and Link-editor: Once the assembler produces an object program, that program must be placed into memory and executed. The assembler could place the object program directly in memory and transfer control to it, thereby causing the machine-language program to be executed. However, this would waste core by leaving the assembler in memory while the user's program was being executed. Also, the programmer would have to retranslate the program with each execution, wasting translation time. To overcome these problems of wasted translation time and memory, system programmers developed another component called a loader.

“A loader is a program that places programs into memory and prepares them for execution.” It would be more efficient if subroutines could be translated into an object form that the loader could “relocate” directly behind the user's program. The task of adjusting programs so they may be placed in arbitrary core locations is called relocation. Relocating loaders perform four functions.

1.6 TRANSLATOR

A translator is a program that takes as input a program written in one language and produces as output a program in another language. Besides program translation, the translator performs another very important role: error detection. Any violation of the HLL specification is detected and reported to the programmer. The important roles of a translator are:

1. Translating the HLL program input into an equivalent ML program.
2. Providing diagnostic messages wherever the programmer violates the specification of the HLL.

1.7 TYPES OF TRANSLATORS:

Interpreter
Compiler
Preprocessor


1.8 LIST OF COMPILERS

  1. Ada compilers
  2. ALGOL compilers
  3. BASIC compilers
  4. C# compilers
  5. C compilers
  6. C++ compilers
  7. COBOL compilers
  8. D compilers
  9. Common Lisp compilers
  10. ECMAScript interpreters
  11. Eiffel compilers
  12. Felix compilers
  13. Fortran compilers
  14. Haskell compilers
  15. Java compilers
  16. Pascal compilers
  17. PL/I compilers
  18. Python compilers
  19. Scheme compilers
  20. Smalltalk compilers
  21. CIL compilers

1.9 STRUCTURE OF THE COMPILER DESIGN

Phases of a compiler: A compiler operates in phases. A phase is a logically interrelated operation that takes the source program in one representation and produces output in another representation. The phases of a compiler are shown below. Compilation has two parts:
a. Analysis (machine independent / language dependent)
b. Synthesis (machine dependent / language independent)
The compilation process is partitioned into a number of sub-processes called 'phases'.


Table management: This is the portion that keeps the names used by the program and records essential information about each. The data structure used to record this information is called a 'symbol table'. Error handler: It is invoked when a flaw (error) in the source program is detected.

The output of the LA is a stream of tokens, which is passed to the next phase, the syntax analyzer or parser. The SA groups the tokens together into syntactic structures called expressions. Expressions may further be combined to form statements. The syntactic structure can be regarded as a tree whose leaves are the tokens; such trees are called parse trees.

The parser has two functions. It checks whether the tokens from the lexical analyzer occur in patterns that are permitted by the specification of the source language. It also imposes on the tokens a tree-like structure that is used by the subsequent phases of the compiler.

For example, if a program contains the expression A+/B, then after lexical analysis this expression might appear to the syntax analyzer as the token sequence id+/id. On seeing the /, the syntax analyzer should detect an error, because the presence of two adjacent binary operators violates the formation rules of an expression.

The purpose of syntax analysis is to make explicit the hierarchical structure of the incoming token stream by identifying which parts of the token stream should be grouped together.

For example, A/B*C has two possible interpretations:
1. divide A by B and then multiply the result by C, or
2. multiply B by C and then use the result to divide A.

Each of these two interpretations can be represented by a parse tree.
Intermediate code generation: The intermediate code generator uses the structure produced by the syntax analyzer to create a stream of simple instructions. Many styles of intermediate code are possible; one common style uses instructions with one operator and a small number of operands. The output of the syntax analyzer is some representation of a parse tree; the intermediate code generation phase transforms this parse tree into an intermediate-language representation of the source program.
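As a hedged sketch of this style (the variable names and values are purely illustrative), an assignment such as position := initial + rate * 60 could be rewritten so that every instruction has a single operator and results flow through explicit temporaries:

#include <stdio.h>

int main(void)
{
    float initial = 10.0f, rate = 2.5f;   /* illustrative values only */
    float t1, t2, position;

    t1 = rate * 60;       /* t1 := rate * 60     : one operator, two operands */
    t2 = initial + t1;    /* t2 := initial + t1  : result passed via a temporary */
    position = t2;        /* position := t2      : final store into the target */

    printf("position = %f\n", position);
    return 0;
}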

Code optimization: This optional phase is designed to improve the intermediate code so that the output runs faster and takes less space. Its output is another intermediate-code program that does the same job as the original, but in a way that saves time and/or space.

1. Local optimization: There are local transformations that can be applied to a program to make an improvement. For example,

If A > B goto L2
Goto L3
L2:

can be replaced by the single statement

If A <= B goto L3

Another important local optimization is the elimination of common sub-expressions:

A := B + C + D
E := B + C + F

might be evaluated as

T1 := B + C
A := T1 + D
E := T1 + F

This takes advantage of the common sub-expression B + C.
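Viewed at the source level, the same idea can be sketched in C (the values are arbitrary, and A2/E2 merely stand in for the rewritten A and E):

#include <stdio.h>

int main(void)
{
    int B = 2, C = 3, D = 4, F = 5;

    /* Before optimization: B + C is computed twice. */
    int A = B + C + D;
    int E = B + C + F;

    /* After common sub-expression elimination: B + C is computed once. */
    int T1 = B + C;
    int A2 = T1 + D;
    int E2 = T1 + F;

    printf("%d %d %d %d\n", A, E, A2, E2);
    return 0;
}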

2. Loop optimization: Another important source of optimization concerns increasing the speed of loops. A typical loop improvement is to move a computation that produces the same result each time around the loop to a point in the program just before the loop is entered.

Code generator: The code generator produces the object code by deciding on the memory locations for data, selecting code to access each datum, and selecting the registers in which each computation is to be done. Many computers have only a few high-speed registers in which computations can be performed quickly, so a good code generator attempts to utilize registers as efficiently as possible.

Table management or book-keeping: A compiler needs to collect information about all the data objects that appear in the source program. The information about data objects is collected by the early phases of the compiler, the lexical and syntactic analyzers. The data structure used to record this information is called the symbol table.

Error handling: One of the most important functions of a compiler is the detection and reporting of errors in the source program. The error messages should allow the programmer to determine exactly where the errors have occurred. Errors may occur in all of the phases of a compiler. Whenever a phase of the compiler discovers an error, it must report the error to the error handler, which issues an appropriate diagnostic message. Both the table-management and error-handling routines interact with all phases of the compiler.


id1 := id2 + temp1

MOVF id3, R2
MULF #60.0, R2
MOVF id2, R1
ADDF R2, R1
MOVF R1, id1

1.10 TOKEN

The LA reads the source program one character at a time, carving the source program into a sequence of atomic units called 'tokens'. A token has two parts:
1. Type of the token.
2. Value of the token.
Type: variable, operator, keyword, constant.
Value: name of the variable, current variable, or pointer to the symbol table.
If the symbols are given in the standard format, the LA accepts them and produces tokens as output. Each token is a sub-string of the program that is to be treated as a single unit. Tokens are of two types:
1. Specific strings such as IF or a semicolon.
2. Classes of strings such as identifiers, labels, and constants.



UNIT - 2

LEXICAL ANALYSIS

2.1 OVERVIEW OF LEXICAL ANALYSIS

o To identify the tokens, we need some method of describing the possible tokens that can appear in the input stream. For this purpose we introduce regular expressions, a notation that can be used to describe essentially all the tokens of a programming language.
o Secondly, having decided what the tokens are, we need some mechanism to recognize them in the input stream. This is done by token recognizers, which are designed using transition diagrams and finite automata.

2.2 ROLE OF THE LEXICAL ANALYZER

The LA is the first phase of a compiler. Its main task is to read the input characters and produce as output a sequence of tokens that the parser uses for syntax analysis.

Upon receiving a 'get next token' command from the parser, the lexical analyzer reads input characters until it can identify the next token. The LA returns to the parser a representation of the token it has found. The representation will be an integer code if the token is a simple construct such as a parenthesis, comma, or colon.

The LA may also perform certain secondary tasks at the user interface. One such task is stripping out from the source program comments and white space in the form of blank, tab, and newline characters. Another is correlating error messages from the compiler with the source program.


A pattern is a rule describing the set of lexemes that can represent a particular token in the source program.

2.5 LEXICAL ERRORS:

Lexical errors are the errors thrown by the lexer when it is unable to continue, which means that there is no way to recognize a lexeme as a valid token for the lexer. Syntax errors, on the other hand, are thrown by the parser when a given set of already recognized valid tokens does not match any of the right-hand sides of the grammar rules. A simple panic-mode error-handling system requires that we return to a high-level parsing function when a parsing or lexical error is detected.

Error-recovery actions are:
i. Delete one character from the remaining input.
ii. Insert a missing character into the remaining input.
iii. Replace a character by another character.
iv. Transpose two adjacent characters.
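A minimal sketch of panic-mode recovery in a hand-written lexer (the choice of synchronizing characters here, white space and ';', is an assumption for illustration): when no pattern matches, the scanner simply deletes characters from the remaining input until it reaches a point at which scanning can safely restart.

#include <stdio.h>
#include <ctype.h>

/* Skip characters in s starting at *pos until a synchronizing character
   (white space or ';') is found, or the end of the input is reached. */
static void panic_mode_skip(const char *s, size_t *pos)
{
    while (s[*pos] != '\0' && !isspace((unsigned char)s[*pos]) && s[*pos] != ';')
        (*pos)++;                /* delete one character from the remaining input */
}

int main(void)
{
    const char *src = "count = 12@#4; total = 0;";
    size_t i = 10;               /* suppose the lexer got stuck at the '@' */

    fprintf(stderr, "lexical error near position %zu\n", i);
    panic_mode_skip(src, &i);
    printf("scanning resumes at position %zu ('%c')\n", i, src[i]);
    return 0;
}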

2.6 DIFFERENCE BETWEEN COMPILER AND INTERPRETER

A compiler converts the high-level instructions into machine language, while an interpreter converts the high-level instructions into an intermediate form.
Before execution, the entire program is translated by the compiler, whereas an interpreter translates the first line, executes it, then moves on to the next, and so on.
A list of errors is produced by the compiler after the compilation process, while an interpreter stops translating after the first error.
An independent executable file is created by the compiler, whereas an interpreted program requires the interpreter every time it is run.
The compiler produces object code, whereas the interpreter does not produce object code.
In compilation the program is analyzed only once and then the code is generated, whereas the source program is analyzed and interpreted every time it is to be executed; hence an interpreter is less efficient than a compiler.
Example of an interpreter: a UPS debugger is basically a graphical source-level debugger, but it contains a built-in C interpreter which can handle multiple source files.
Example of a compiler: the Borland C compiler or Turbo C compiler compiles programs written in C or C++.


2.7 REGULAR EXPRESSIONS

A regular expression is a formula that describes a possible set of strings. The components of a regular expression are:

x        the character x
.        any character, usually except a newline
[xyz]    any of the characters x, y, z, ...
R?       an R or nothing (i.e., optionally an R)
R*       zero or more occurrences of R
R+       one or more occurrences of R
R1R2     an R1 followed by an R2
R1|R2    either an R1 or an R2

A token is either a single string or one of a collection of strings of a certain type. If we view the set of strings in each token class as a language, we can use the regular-expression notation to describe tokens.

Consider an identifier, which is defined to be a letter followed by zero or more letters or digits. In regular-expression notation we would write:

Identifier = letter (letter | digit)*

Here are the rules that define regular expressions over an alphabet ∑:

o ε is a regular expression denoting { ε }, that is, the language containing only the empty string.
o For each 'a' in ∑, a is a regular expression denoting { a }, the language with only one string, consisting of the single symbol 'a'.
o If R and S are regular expressions, then

(R) | (S) denotes Lr ∪ Ls
R.S denotes Lr.Ls
R* denotes Lr*

where Lr and Ls are the languages denoted by R and S.

2.8 REGULAR DEFINITIONS

For notational convenience, we may wish to give names to regular expressions and to define regular expressions using these names as if they were symbols. Identifiers are the set of strings of letters and digits beginning with a letter. The following regular definition provides a precise specification for this class of strings.

Example: ab|cd? is equivalent to (a(b)) | (c(d?)).

Pascal identifier:
letter -> A | B | ... | Z | a | b | ... | z
digit  -> 0 | 1 | 2 | ... | 9
id     -> letter (letter | digit)*
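The same identifier definition can be checked mechanically; a hedged C sketch using the POSIX <regex.h> interface (the pattern string and test words are only examples) is:

#include <stdio.h>
#include <regex.h>

int main(void)
{
    /* letter (letter | digit)* , anchored to the whole string */
    const char *pattern = "^[A-Za-z][A-Za-z0-9]*$";
    const char *words[] = { "count1", "x", "2fast", "rate" };
    regex_t re;

    if (regcomp(&re, pattern, REG_EXTENDED | REG_NOSUB) != 0)
        return 1;

    for (int i = 0; i < 4; i++)
        printf("%-6s : %s\n", words[i],
               regexec(&re, words[i], 0, NULL, 0) == 0 ? "identifier"
                                                       : "not an identifier");
    regfree(&re);
    return 0;
}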


Lexeme    Token    Attribute value
<=        relop    LE
=         relop    EQ
<>        relop    NE

2.9 TRANSITION DIAGRAM:

A transition diagram has a collection of nodes or circles, called states. Each state represents a condition that could occur during the process of scanning the input looking for a lexeme that matches one of several patterns. Edges are directed from one state of the transition diagram to another. Each edge is labeled by a symbol or set of symbols. If we are in some state s and the next input symbol is a, we look for an edge out of state s labeled by a. If we find such an edge, we advance the forward pointer and enter the state of the transition diagram to which that edge leads. Some important conventions about transition diagrams are:

  1. Certain states are said to be accepting, or final. These states indicate that a lexeme has been found, although the actual lexeme may not consist of all positions between the lexemeBegin and forward pointers. We always indicate an accepting state by a double circle.
  2. In addition, if it is necessary to retract the forward pointer one position, then we shall additionally place a * near that accepting state.
  3. One state is designated the start state, or initial state; it is indicated by an edge labeled "start" entering from nowhere. The transition diagram always begins in the start state before any input symbols have been used.

As an intermediate step in the construction of an LA, we first produce a stylized flowchart, called a transition diagram. Positions in a transition diagram are drawn as circles and are called states.


The above TD is for an identifier, defined to be a letter followed by any number of letters or digits. A sequence of transition diagrams can be converted into a program that looks for the tokens specified by the diagrams; each state gets a segment of code, as in the sketch below.
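A minimal sketch of that conversion for the identifier diagram (the state numbering and helper name are assumptions made for this example): each state becomes a case, and reaching the accepting state on a character that is neither a letter nor a digit leaves that character unread, which plays the role of retracting the forward pointer.

#include <stdio.h>
#include <ctype.h>

/* Returns the length of the identifier starting at s[0], or 0 if none.
   State 0: start; state 1: letter (letter | digit)*; state 2: accept (*). */
static int match_identifier(const char *s)
{
    int state = 0, i = 0;

    for (;;) {
        char c = s[i];
        switch (state) {
        case 0:                              /* start state */
            if (isalpha((unsigned char)c)) { state = 1; i++; }
            else return 0;                   /* not an identifier */
            break;
        case 1:                              /* letters or digits */
            if (isalnum((unsigned char)c)) i++;
            else state = 2;                  /* any other character: accept */
            break;
        case 2:                              /* accepting state marked * */
            return i;                        /* the offending character stays unread */
        }
    }
}

int main(void)
{
    printf("%d\n", match_identifier("rate60 = 5"));   /* prints 6 */
    return 0;
}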

If = if
Then = then
Else = else
Relop = < | <= | = | > | >=
Id = letter (letter | digit)*
Num = digit+

2.10 AUTOMATA

An automaton is defined as a system where information is transmitted and used for performing some functions without direct participation of man.
1. An automaton in which the output depends only on the input is called an automaton without memory.
2. An automaton in which the output depends on the input and the state is called an automaton with memory.
3. An automaton in which the output depends only on the state of the machine is called a Moore machine.
4. An automaton in which the output depends on the state and the input at any instant of time is called a Mealy machine.

2.11 DESCRIPTION OF AUTOMATA

1. An automaton has a mechanism to read input from an input tape.
2. Any language is recognized by some automaton; hence these automata are basically language 'acceptors' or 'language recognizers'.

Types of finite automata:
Deterministic automata
Non-deterministic automata

2.12 DETERMINISTIC AUTOMATA

A deterministic finite automaton has at most one transition from each state on any input. A DFA is a special case of an NFA in which:

1. it has no transitions on input ε, and
2. for each state s and input symbol a, there is at most one edge labeled a leaving s.


An NFA can be diagrammatically represented by a labeled directed graph, called a transition graph, in which the nodes are the states and the labeled edges represent the transition function.

This graph looks like a transition diagram, but the same character can label two or more transitions out of one state, and edges can be labeled by the special symbol ε as well as by input symbols.

The transition graph for an NFA that recognizes the language ( a | b ) * abb is shown
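For comparison, a hedged sketch of a table-driven deterministic recognizer for the same language (a | b)*abb follows; the state numbering is the usual textbook one and is an assumption here, not taken from these notes.

#include <stdio.h>

/* DFA for (a|b)*abb.  States 0..3; state 3 is the only accepting state.
   move[state][0] is the transition on 'a', move[state][1] on 'b'. */
static const int move[4][2] = {
    /* state 0 */ { 1, 0 },
    /* state 1 */ { 1, 2 },
    /* state 2 */ { 1, 3 },
    /* state 3 */ { 1, 0 },
};

static int accepts(const char *s)
{
    int state = 0;
    for (int i = 0; s[i] != '\0'; i++) {
        if (s[i] == 'a')      state = move[state][0];
        else if (s[i] == 'b') state = move[state][1];
        else return 0;                 /* symbol outside the alphabet {a, b} */
    }
    return state == 3;
}

int main(void)
{
    printf("%d %d %d\n", accepts("abb"), accepts("aababb"), accepts("ab"));
    /* expected output: 1 1 0 */
    return 0;
}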

2.14 DEFINITION OF CFG

It involves four quantities. A CFG contains terminals, non-terminals (N-T), a start symbol, and productions. Terminals are the basic symbols from which strings are formed. Non-terminals are syntactic variables that denote sets of strings. In a grammar, one non-terminal is distinguished as the start symbol, and the set of strings it denotes is the language defined by the grammar. The productions of the grammar specify the manner in which the terminals and non-terminals can be combined to form strings. Each production consists of a non-terminal, followed by an arrow, followed by a string of non-terminals and terminals.
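As a small illustration (a standard textbook grammar, not one given in these notes), a CFG for arithmetic expressions can be written with non-terminals E, T, F, terminals id, +, *, ( and ), start symbol E, and the productions:

E -> E + T | T
T -> T * F | F
F -> ( E ) | id

The string id + id * id belongs to the language of this grammar, since it can be derived from E by repeatedly replacing a non-terminal with the right side of one of its productions.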

2.15 DEFINITION OF SYMBOL TABLE

A symbol table is an extensible array of records. The identifier and the associated records contain the collected information about the identifier.

FUNCTION identify (identifier name) RETURNING a pointer to identifier information, which contains:
the actual string
a macro definition
a keyword definition
a list of type, variable, and function definitions
a list of structure and union name definitions
a list of structure and union field definitions
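A minimal C sketch of such an extensible array of records (the field names and sizes are assumptions; a real compiler records far more and would grow the array dynamically):

#include <stdio.h>
#include <string.h>

/* One record of collected information about an identifier. */
struct symbol {
    char name[32];        /* the actual string */
    char kind[16];        /* e.g. "variable", "function", "macro" */
    char type[16];        /* e.g. "int", "float" */
};

static struct symbol table[100];      /* the array of records (fixed size here) */
static int nsyms = 0;

/* identify(name): return a pointer to the identifier's record,
   inserting a new record if the name has not been seen before. */
static struct symbol *identify(const char *name)
{
    for (int i = 0; i < nsyms; i++)
        if (strcmp(table[i].name, name) == 0)
            return &table[i];
    strncpy(table[nsyms].name, name, sizeof table[nsyms].name - 1);
    return &table[nsyms++];
}

int main(void)
{
    struct symbol *s = identify("rate");
    strcpy(s->kind, "variable");
    strcpy(s->type, "float");
    printf("%s : %s %s\n", s->name, s->kind, s->type);
    return 0;
}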


2.16 Creating a lexical analyzer with Lex

2.17 Lex specifications:

A Lex program (the .l file) consists of three parts:

declarations
%%
translation rules
%%
auxiliary procedures

  1. The declarations section includes declarations of variables, manifest constants (a manifest constant is an identifier that is declared to represent a constant, e.g. #define PIE 3.14), and regular definitions.
  2. The translation rules of a Lex program are statements of the form:
     p1 {action 1}
     p2 {action 2}
     p3 {action 3}
     ...
     where each p is a regular expression and each action is a program fragment describing what action the lexical analyzer should take when a pattern p matches a lexeme. In Lex the actions are written in C.
  3. The third section holds whatever auxiliary procedures are needed by the actions. Alternatively, these procedures can be compiled separately and loaded with the lexical analyzer. A minimal example is sketched below.
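The following .l file is a sketch only (the token codes and the choice of patterns are assumptions, not taken from these notes). It recognizes identifiers and unsigned integers, skips white space, and reports any other character:

%{
/* declarations section: this C code is copied into the generated lexer */
#include <stdio.h>
#define ID  256        /* manifest constants used as token codes (assumed) */
#define NUM 257
%}

letter   [A-Za-z]
digit    [0-9]

%%
[ \t\n]+                      { /* skip white space */ }
{letter}({letter}|{digit})*   { printf("ID(%s)\n", yytext);  return ID;  }
{digit}+                      { printf("NUM(%s)\n", yytext); return NUM; }
.                             { printf("unexpected character: %s\n", yytext); }
%%

/* auxiliary procedures */
int yywrap(void) { return 1; }

int main(void)
{
    while (yylex() != 0)       /* yylex() returns 0 at end of input */
        ;
    return 0;
}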