DIGITAL NOTES
ON
COMPILER DESIGN
B.TECH III YEAR - I SEM
(2018-19)
DEPARTMENT OF INFORMATION TECHNOLOGY
MALLA REDDY COLLEGE OF ENGINEERING & TECHNOLOGY
(Autonomous Institution UGC, Govt. of India)
(Affiliated to JNTUH, Hyderabad, Approved by AICTE - Accredited by NBA & NAAC ‘A’ Grade - ISO 9001:2015 Certified)
Maisammaguda, Dhulapally (Post Via. Hakimpet), Secunderabad 500100, Telangana State, INDIA.


(R15A0512) COMPILER DESIGN

OBJECTIVES:

  • To provide an initial understanding of language translators, knowledge of various techniques used in compiler construction, and the use of the automated tools available for compiler construction.

UNIT – I:
Language Translation: Basics, necessity, steps involved in a typical language processing system, types of translators. Compilers: Overview and phases of a compiler, pass and phases of translation, bootstrapping, data structures in compilation. Lexical Analysis (Scanning): Functions of the lexical analyzer, specification of tokens: regular expressions and regular grammars for common PL constructs. Recognition of tokens: finite automata in recognition and generation of tokens. Scanner generators: LEX - Lexical Analyzer Generator. Syntax Analysis (Parsing): Functions of a parser, classification of parsers. Context free grammars in syntax specification, benefits and usage in compilers.

UNIT – II:
Top down parsing: Definition, types of top down parsers: backtracking, recursive descent, predictive, LL(1); preprocessing the grammars to be used in top down parsing, error recovery, and limitations. Bottom up parsing: Definition, types of bottom up parsing, handle pruning, shift reduce parsing. LR parsers: LR(0), SLR, CALR and LALR parsing, error recovery, handling ambiguous grammars. Parser generators: YACC - Yet Another Compiler Compiler.

UNIT – III:
Semantic analysis: Attributed grammars, syntax directed definitions and translation schemes. Type checker: functions, type expressions, type systems, type checking of various constructs. Intermediate Code Generation: Functions, different intermediate code forms - syntax tree, DAG, Polish notation, and three address codes. Translation of different source language constructs into intermediate code. Symbol Tables: Definition, contents, and formats to represent names in a symbol table. Different approaches used in symbol table implementation for block structured and non block structured languages, such as linear lists, self organized lists, binary trees, and hashing based STs.

UNIT – IV:
Runtime Environment: Introduction, activation trees, activation records, control stacks. Runtime storage organization: static, stack and heap storage allocation. Storage allocation for


INDEX

S. No   Unit   Topic                                   Page no
1       1      Language processing system              1
2       1      Phases of a compiler                    4
3       1      Automata                                16
4       1      LEX - lexical analyzer generator        19
5       2      Top down parsing                        27
6       2      Bottom up parsing                       40
7       2      LR parsers                              52
8       2      CALR parser                             69
9       3      Intermediate code forms                 75
10      3      Type checking                           81
11      3      Syntax directed translation             84
12      3      Symbol table                            91
13      4      Activation records                      96
14      4      Code optimization                       102
15      4      Common sub expression elimination       103
16      5      Control flow and data flow analysis
17      5      Object code generation
18      5      Generic code generation
19      5      DAG for register allocation

Executing a program written in a HLL programming language is basically done in two parts: the source program must first be compiled (translated) into an object program; then the resulting object program is loaded into memory and executed.

ASSEMBLER: Programmers found it difficult to write or read programs in machine language. They began to use a mnemonic (symbol) for each machine instruction, which they would subsequently translate into machine language. Such a mnemonic machine language is now called an assembly language. Programs known as assemblers were written to automate the translation of assembly language into machine language. The input to an assembler program is called the source program; the output is a machine language translation (object program).

INTERPRETER: An interpreter is a program that appears to execute a source program as if it were machine language. Languages such as BASIC, SNOBOL and LISP can be translated using interpreters. JAVA also uses an interpreter. The process of interpretation can be carried out in the following phases.

  1. Lexical analysis
  2. Syntax analysis
  3. Semantic analysis
  4. Direct execution (a small sketch of these phases follows after the advantages and disadvantages below)

Advantages:
    • Modification of the user program can easily be made and implemented as execution proceeds.
    • The type of object that a variable denotes may change dynamically.
    • Debugging a program and finding errors is a simpler task for a program used for interpretation.
    • The interpreter for the language makes it machine independent.

Disadvantages:
    • The execution of the program is slower.
    • Memory consumption is more.
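To make the four phases concrete, here is a minimal sketch that runs a small expression through Python's own machinery: the tokenize module for lexical analysis, ast.parse for syntax analysis, a trivial check for semantic analysis, and eval for direct execution. The notes do not prescribe this; the modules used and the check performed are only illustrative.

import ast, io, tokenize

src = "3 + 4 * 2"

# 1. Lexical analysis: characters -> tokens
toks = [t.string for t in tokenize.generate_tokens(io.StringIO(src).readline)
        if t.string.strip()]
print(toks)                                      # ['3', '+', '4', '*', '2']

# 2. Syntax analysis: tokens -> syntax tree
tree = ast.parse(src, mode="eval")
print(ast.dump(tree.body))                       # BinOp(..., op=Add(), right=BinOp(...))

# 3. Semantic analysis: a trivial check that only numeric constants appear
for node in ast.walk(tree):
    if isinstance(node, ast.Constant) and not isinstance(node.value, (int, float)):
        raise TypeError("only numeric constants are allowed")

# 4. Direct execution: evaluate the tree; no object program is produced
print(eval(compile(tree, "<string>", "eval")))   # 11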

Loader and Link-editor: Once the assembler produces an object program, that program must be placed into memory and executed. The assembler could place the object program directly in memory and transfer control to it, thereby causing the machine language program to be executed. However, this would waste core by leaving the assembler in memory while the user's program was being executed. Also, the programmer would have to retranslate his program with each execution, thus wasting translation time. To overcome these problems of wasted translation time and memory, system programmers developed another component called a loader. "A loader is a program that places programs into memory and prepares them for execution." It would be more efficient if subroutines could be translated into object form which the loader could "relocate" directly behind the user's program. The task of adjusting programs so they may be placed in arbitrary core locations is called relocation. Relocating loaders perform four functions.

TRANSLATOR: A translator is a program that takes as input a program written in one language and produces as output a program in another language. Besides program translation, the translator performs another very important role: error detection. Any violation of the HLL specification would be detected and reported to the programmer. Important roles of a translator are:
1. Translating the HLL program input into an equivalent ML program.
2. Providing diagnostic messages wherever the programmer violates the specification of the HLL.

TYPES OF TRANSLATORS:

  • Interpreter
  • Compiler
  • Preprocessor

LIST OF COMPILERS
  1. Ada compilers
  2. ALGOL compilers
  3. BASIC compilers
  4. C# compilers
  5. C compilers
  6. C++ compilers
  7. COBOL compilers
  8. Java compilers

Intermediate Code Generation: An intermediate representation of the final machine language code is produced. This phase bridges the analysis and synthesis phases of translation.

Code Optimization: This is an optional phase designed to improve the intermediate code so that the output runs faster and takes less space.

Code Generation: The last phase of translation is code generation. A number of optimizations to reduce the length of the machine language program are carried out during this phase. The output of the code generator is the machine language program for the specified computer.

Table Management (or) Book-keeping: This is the portion that keeps the names used by the program and records essential information about each. The data structure used to record this information is called a 'Symbol Table'.

Error Handlers: This is invoked when a flaw (error) in the source program is detected.

The output of the LA is a stream of tokens, which is passed to the next phase, the syntax analyzer or parser. The SA groups the tokens together into syntactic structures called expressions. Expressions may further be combined to form statements. The syntactic structure can be regarded as a tree whose leaves are the tokens; such trees are called parse trees. The parser has two functions. It checks if the tokens from the lexical analyzer occur in a pattern that is permitted by the specification for the source language. It also imposes on the tokens a tree-like structure that is used by the subsequent phases of the compiler. For example, if a program contains the expression A+/B, then after lexical analysis this expression might appear to the syntax analyzer as the token sequence id+/id. On seeing the /, the syntax analyzer should detect an error situation, because the presence of these two adjacent binary operators violates the formation rules of an expression. Syntax analysis makes explicit the hierarchical structure of the incoming token stream by identifying which parts of the token stream should be grouped. For example, A/B*C has two possible interpretations: 1) divide A by B and then multiply by C, or 2) multiply B by C and then use the result to divide A. Each of these two interpretations can be represented in terms of a parse tree.

Intermediate Code Generation: The intermediate code generator uses the structure produced by the syntax analyzer to create a stream of simple instructions. Many styles of intermediate code are possible. One common style uses instructions with one operator and a small number of operands. The output of the syntax analyzer is some representation of a parse tree. The intermediate code generation phase transforms this parse tree into an intermediate language representation of the source program.

Code Optimization: This is an optional phase designed to improve the intermediate code so that the output runs faster and takes less space. Its output is another intermediate code program that does the same job as the original, but in a way that saves time and/or space.

1. Local Optimization: There are local transformations that can be applied to a program to make an improvement. For example,
If A > B goto L2
Goto L3
L2:
can be replaced by the single statement
If A <= B goto L3
Another important local optimization is the elimination of common sub-expressions:
A := B + C + D
E := B + C + F
might be evaluated as
T1 := B + C
A := T1 + D
E := T1 + F
taking advantage of the common sub-expression B + C.

2. Loop Optimization: Another important source of optimization concerns increasing the speed of loops. A typical loop improvement is to move a computation that produces the same result each time around the loop to a point in the program just before the loop is entered.

Code Generator: The code generator produces the object code by deciding on the memory locations for data, selecting code to access each datum, and selecting the registers in which each computation is to be done. Many computers have only a few high-speed registers in which computations can be performed quickly. A good code generator would attempt to utilize registers as efficiently as possible.

Error Handling: One of the most important functions of a compiler is the detection and reporting of errors in the source program. The error messages should allow the programmer to determine exactly where the errors have occurred. Errors may occur in all of the phases of a compiler.
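As a sketch of the "one operator and a small number of operands" style of intermediate code described above: assuming the parse tree is available as nested tuples (an illustrative representation, not one fixed by the notes), three-address statements with generated temporaries can be emitted as follows.

def three_address(tree):
    """Return a list of three-address statements for an expression tree."""
    code = []
    counter = 0

    def gen(node):
        nonlocal counter
        if isinstance(node, str):                # leaf: identifier or constant
            return node
        op, left, right = node
        l, r = gen(left), gen(right)
        counter += 1
        temp = f't{counter}'
        code.append(f'{temp} := {l} {op} {r}')   # one operator, two operands
        return temp

    gen(tree)
    return code

# A := B + C + D, parsed as (B + C) + D
for stmt in three_address(('+', ('+', 'B', 'C'), 'D')):
    print(stmt)
# t1 := B + C
# t2 := t1 + D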

[Figure: phases of a compiler - the code optimizer produces temp1 := id3 * 60.0 and id1 := id2 + temp1, which the code generator then translates into machine code.]

Lexical Analyzer: The LA is the first phase of a compiler. Lexical analysis is also called linear analysis or scanning. In this phase the stream of characters making up the source program is read from left to right and grouped into tokens, which are sequences of characters having a collective meaning. Upon receiving a 'get next token' command from the parser, the lexical analyzer reads input characters until it can identify the next token. The LA returns to the parser a representation for the token it has found. The representation will be an integer code if the token is a simple construct such as a parenthesis, comma or colon. The LA may also perform certain secondary tasks at the user interface. One such task is stripping out from the source program comments and white space in the form of blank, tab and newline characters. Another is correlating error messages from the compiler with the source program.

Lexical Analysis Vs Parsing:

Lexical analysis: A scanner simply turns an input string (say a file) into a list of tokens. These tokens represent things like identifiers, parentheses, operators etc. The lexical analyzer (the "lexer") turns the individual symbols from the source code file into tokens.

Parsing: A parser converts this list of tokens into a tree-like object that represents how the tokens fit together to form a cohesive whole (sometimes referred to as a sentence). From there, the "parser" proper turns those whole tokens into sentences of your grammar. A parser does not give the nodes any meaning beyond structural cohesion; the next thing to do is to extract meaning from this structure (sometimes called contextual analysis).

Token, Lexeme, Pattern:

Token: A token is a sequence of characters that can be treated as a single logical entity. Typical tokens are:
1) identifiers 2) keywords 3) operators 4) special symbols 5) constants

Pattern: A set of strings in the input for which the same token is produced as output. This set of strings is described by a rule called a pattern associated with the token. A pattern is a rule describing the set of lexemes that can represent a particular token in the source program.

Lexeme: A lexeme is a sequence of characters in the source program that is matched by the pattern for a token.

Example:
Token      Lexeme                  Pattern
const      const                   const
if         if                      if
relation   <, <=, =, <>, >=, >     < or <= or = or <> or >= or >
id         pi                      letter followed by letters and digits
num        3.14                    any numeric constant
literal    "core"                  any character between " and " except "
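A minimal scanner sketch built from the token classes in the table above (keywords, relop, id, num); the regular expressions, their priorities and the tokens() helper are illustrative assumptions, not part of the notes.

import re

# token patterns in priority order: keywords before identifiers, longest relop first
TOKEN_SPEC = [
    ('if',    r'\bif\b'),
    ('const', r'\bconst\b'),
    ('relop', r'<=|>=|<>|<|>|='),
    ('num',   r'\d+(\.\d+)?'),
    ('id',    r'[A-Za-z][A-Za-z0-9]*'),
    ('ws',    r'\s+'),
]
MASTER = re.compile('|'.join(f'(?P<{name}>{pat})' for name, pat in TOKEN_SPEC))

def tokens(source):
    """Yield (token, lexeme) pairs; whitespace produces no token."""
    for m in MASTER.finditer(source):
        if m.lastgroup != 'ws':
            yield m.lastgroup, m.group()

print(list(tokens('if pi <= 3.14')))
# [('if', 'if'), ('id', 'pi'), ('relop', '<='), ('num', '3.14')]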

countable set of strings over some fixed alphabet. In language theory, the terms "sentence" and "word" are often used as synonyms for "string." The length of a string s, usually written |s|, is the number of occurrences of symbols in s. For example, banana is a string of length six. The empty string, denoted ε, is the string of length zero. Operations on strings The following string-related terms are commonly used:

  1. A prefix of string s is any string obtained by removing zero or more symbols from the end of s. For example, ban is a prefix of banana.
  2. A suffix of string s is any string obtained by removing zero or more symbols from the beginning of s. For example, nana is a suffix of banana.
  3. A substring of s is obtained by deleting any prefix and any suffix from s. For example, nan is a substring of banana.
  4. The proper prefixes, suffixes, and substrings of a string s are those prefixes, suffixes, and substrings, respectively, of s that are not ε and not equal to s itself.
  5. A subsequence of s is any string formed by deleting zero or more not necessarily consecutive positions of s. For example, baan is a subsequence of banana.

Operations on languages: The following are the operations that can be applied to languages:
  1. Union
  2. Concatenation
  3. Kleene closure
  4. Positive closure

The following example shows the operations on strings. Let L = {0,1} and S = {a,b,c}.
Union: L U S = {0, 1, a, b, c}
Concatenation: L.S = {0a, 1a, 0b, 1b, 0c, 1c}
Kleene closure: L* = {ε, 0, 1, 00, ….}
Positive closure: L+ = {0, 1, 00, ….}
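A small sketch of these four operations on the finite languages above; since L* and L+ are infinite, the closures are truncated to strings of a bounded length (the closure() helper and the bound are illustrative).

from itertools import product

L = {'0', '1'}
S = {'a', 'b', 'c'}

union = L | S                                    # {'0','1','a','b','c'}
concat = {x + y for x, y in product(L, S)}       # {'0a','0b','0c','1a','1b','1c'}

def closure(lang, max_len):
    """Strings of lang* with length <= max_len (Kleene closure, truncated)."""
    result = {''}                                # epsilon
    frontier = {''}
    while frontier:
        frontier = {w + x for w in frontier for x in lang if len(w + x) <= max_len}
        result |= frontier
    return result

kleene = closure(L, 2)                           # {'', '0', '1', '00', '01', '10', '11'}
positive = kleene - {''}                         # L+ here is L* without epsilon
print(sorted(union), sorted(concat), sorted(kleene), sorted(positive))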

Regular Expressions: Each regular expression r denotes a language L(r). Here are the rules that define the regular expressions over some alphabet Σ and the languages that those expressions denote:

  1. ε is a regular expression, and L(ε) is { ε }, that is, the language whose sole member is the empty string.
  2. If 'a' is a symbol in Σ, then 'a' is a regular expression, and L(a) = {a}, that is, the language with one string, of length one, with 'a' in its one position.
  3. Suppose r and s are regular expressions denoting the languages L(r) and L(s). Then,
     o (r)|(s) is a regular expression denoting the language L(r) U L(s).
     o (r)(s) is a regular expression denoting the language L(r)L(s).
     o (r)* is a regular expression denoting (L(r))*.
     o (r) is a regular expression denoting L(r).
  4. The unary operator * has highest precedence and is left associative.
  5. Concatenation has second highest precedence and is left associative. | has lowest precedence and is left associative.

REGULAR DEFINITIONS: For notational convenience, we may wish to give names to regular expressions and to define regular expressions using these names as if they were symbols. Identifiers are the set of strings of letters and digits beginning with a letter. The following regular definition provides a precise specification for this class of strings.

Example-1: ab|cd? is equivalent to (a(b)) | (c(d?)).

Pascal identifier:
letter → A | B | …… | Z | a | b | …… | z
digit  → 0 | 1 | 2 | …. | 9
id     → letter (letter | digit)*

Shorthands: Certain constructs occur so frequently in regular expressions that it is convenient to introduce notational shorthands for them.
  1. One or more instances (+):
     o The unary postfix operator + means "one or more instances of".
     o If r is a regular expression that denotes the language L(r), then (r)+ is a regular expression that denotes the language (L(r))+.
     o Thus the regular expression a+ denotes the set of all strings of one or more a's.
     o The operator + has the same precedence and associativity as the operator *.
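A quick sketch of the regular definition for identifiers, letter (letter | digit)*, using Python's re module; the variable names are illustrative.

import re

# id -> letter (letter | digit)*
letter = r'[A-Za-z]'
digit = r'[0-9]'
ident = re.compile(f'{letter}({letter}|{digit})*')

for s in ['count', 'x1', '9lives']:
    print(s, bool(ident.fullmatch(s)))   # count True, x1 True, 9lives False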

Lexeme        Token name   Attribute value
Any ws        –            –
if            if           –
then          then         –
else          else         –
Any id        id           pointer to table entry
Any number    number       pointer to table entry
<             relop        LT
<=            relop        LE
=             relop        EQ
<>            relop        NE

TRANSITION DIAGRAM: A transition diagram has a collection of nodes or circles, called states. Each state represents a condition that could occur during the process of scanning the input looking for a lexeme that matches one of several patterns. Edges are directed from one state of the transition diagram to another. Each edge is labeled by a symbol or set of symbols. If we are in state s, and the next input symbol is a, we look for an edge out of state s labeled by a. If we find such an edge, we advance the forward pointer and enter the state of the transition diagram to which that edge leads. Some important conventions about transition diagrams are:

  1. Certain states are said to be accepting, or final. These states indicate that a lexeme has been found, although the actual lexeme may not consist of all positions between the lexemeBegin and forward pointers. We always indicate an accepting state by a double circle.
  2. In addition, if it is necessary to retract the forward pointer one position, then we shall additionally place a * near that accepting state.
  3. One state is designated the start state, or initial state; it is indicated by an edge labeled "start" entering from nowhere. The transition diagram always begins in the start state before any input symbols have been read.
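The relop transition diagram implied by the table above can be turned into code along these lines; the relop() helper, the token names and the handling of retraction are illustrative assumptions. The '*' convention shows up where we look ahead one character but do not consume it before returning LT or GT.

# Sketch of a transition diagram for relational operators (<, <=, <>, =, >, >=).
def relop(src, pos):
    """Return ((token, attribute), next_pos), or None if no relop starts at pos."""
    c = src[pos] if pos < len(src) else ''
    if c == '<':
        nxt = src[pos + 1] if pos + 1 < len(src) else ''
        if nxt == '=':
            return ('relop', 'LE'), pos + 2
        if nxt == '>':
            return ('relop', 'NE'), pos + 2
        # accepting state marked *: retract one position (only '<' is consumed)
        return ('relop', 'LT'), pos + 1
    if c == '=':
        return ('relop', 'EQ'), pos + 1
    if c == '>':
        nxt = src[pos + 1] if pos + 1 < len(src) else ''
        if nxt == '=':
            return ('relop', 'GE'), pos + 2
        return ('relop', 'GT'), pos + 1
    return None

print(relop("<=y", 0))   # (('relop', 'LE'), 2)
print(relop("<y", 0))    # (('relop', 'LT'), 1)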

As an intermediate step in the construction of a LA, we first produce a stylized flowchart, called a transition diagram. Positions in a transition diagram are drawn as circles and are called states. The transition diagram for an identifier, for example, is defined by a letter followed by any number of letters or digits. A sequence of transition diagrams can be converted into a program to look for the tokens specified by the diagrams: each state gets a segment of code.

Automata: An automaton is defined as a system where information is transmitted and used for performing some functions without direct participation of man.

  1. An automaton in which the output depends only on the input is called an automaton without memory.
  2. An automaton in which the output depends on the input and the state is called an automaton with memory.
  3. An automaton in which the output depends only on the state of the machine is called a Moore machine.
  4. An automaton in which the output depends on the state and the input at any instant of time is called a Mealy machine.

DESCRIPTION OF AUTOMATA
  1. An automaton has a mechanism to read input from an input tape.
  2. Any language is recognized by some automaton; hence these automata are basically language 'acceptors' or 'language recognizers'.

Types of Finite Automata
  • Deterministic Automata
  • Non-Deterministic Automata

Deterministic Automata: A deterministic finite automaton has at most one transition from each state on any input. A DFA is a special case of an NFA in which:
  1. it has no transitions on input ε,
  2. each input symbol has at most one transition from any state.
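A minimal sketch of a deterministic automaton with exactly these two properties, encoded as a transition table that accepts identifiers of the form letter (letter | digit)*; the state names and helper functions are illustrative assumptions, not from the notes.

# Minimal DFA sketch for identifiers: letter (letter | digit)*.
# Exactly one transition per (state, symbol class); no epsilon moves.

def symbol_class(ch):
    if ch.isalpha():
        return 'letter'
    if ch.isdigit():
        return 'digit'
    return 'other'

# transition table: state -> {symbol class -> next state}
DELTA = {
    'start':  {'letter': 'in_id'},
    'in_id':  {'letter': 'in_id', 'digit': 'in_id'},
}
ACCEPTING = {'in_id'}

def accepts(s):
    state = 'start'
    for ch in s:
        state = DELTA.get(state, {}).get(symbol_class(ch))
        if state is None:          # no transition defined: reject
            return False
    return state in ACCEPTING

print(accepts("rate1"))   # True
print(accepts("1rate"))   # False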