LECTURE NOTES
ON
COMPILER DESIGN
2020-2021
III B. Tech I Semester (JNTUK-R16)
P. Naga Deepthi /
Mr. J.S.V. Gopala Krishna
Department of Computer Science and Engineering
SIR C R REDDY COLLEGE OF ENGINEERING, ELURU

UNIT – I

Introduction: Language Processing, Structure of a Compiler, the Evolution of Programming Languages, the Science of Building a Compiler, Applications of Compiler Technology, Programming Language Basics. Lexical Analysis: the Role of the Lexical Analyzer, Input Buffering, Specification of Tokens, Recognition of Tokens, the Lexical-Analyzer Generator Lex.


TRANSLATOR

A translator is a program that takes as input a program written in one language and produces as output a program in another language. Besides program translation, the translator performs another very important role: error detection. Any violation of the HLL specification is detected and reported to the programmer. The important roles of a translator are:

1. Translating the HLL program input into an equivalent machine language (ML) program.
2. Providing diagnostic messages wherever the programmer violates the specification of the HLL.

TYPES OF TRANSLATORS:

a. Compiler

b. Interpreter

c. Preprocessor

Compiler

A compiler is a translator that reads a program written in a high-level language (HLL), the source program, and translates it into an equivalent program in machine-level language (MLL), the target program. An important part of a compiler's job is reporting errors in the source program to the programmer.

    source program --> [ Compiler ] --> target program
                           |
                           v
                     error messages

Preprocessor

A preprocessor produces input to compilers. It may perform the following functions:

  1. Macro processing: A preprocessor may allow a user to define macros that are short hands for longer constructs.
  2. File inclusion: A preprocessor may include header files into the program text.
  3. Rational preprocessor: these preprocessors augment older languages with more modern flow-of-control and data structuring facilities.
  4. Language extensions: these preprocessors attempt to add capabilities to the language by means of built-in macros.

Assembler: Programmers found it difficult to write or read programs in machine language, so they began to use a mnemonic (symbol) for each machine instruction, which they would subsequently translate into machine language. Such a mnemonic machine language is now called an assembly language. Programs known as assemblers were written to automate the translation of assembly language into machine language. The input to an assembler is called the source program; the output is a machine language translation (an object program).

Loader and Link-editor: Once the assembler produces an object program, that program must be placed into memory and executed. The assembler could place the object program directly in memory and transfer control to it, thereby causing the machine language program to be executed, but this would waste core by leaving the assembler in memory while the user's program was being executed, and the programmer would have to retranslate the program with each execution, wasting translation time. To overcome these problems of wasted translation time and memory, system programmers developed another component, the loader: "a loader is a program that places programs into memory and prepares them for execution." It would be more efficient if subroutines could be translated into object form that the loader could "relocate" directly behind the user's program. The task of adjusting programs so that they may be placed in arbitrary core locations is called relocation.

STRUCTURE OF A COMPILER

Phases of a compiler: A compiler operates in phases. A phase is a logically interrelated operation that takes a source program in one representation and produces output in another representation. The compilation process is partitioned into a number of sub-processes called phases, described below. There are two parts to compilation:

a. Analysis (machine independent / language dependent)
b. Synthesis (machine dependent / language independent)

Lexical Analysis: The lexical analyzer (LA), or scanner, reads the source program one character at a time, carving the source program into a sequence of atomic units called tokens.

Syntax Analysis: The second stage of translation is called syntax analysis or parsing. In this phase expressions, statements, declarations etc. are identified by using the results of lexical analysis. Syntax analysis is aided by techniques based on the formal grammar of the programming language.

Intermediate Code Generation: An intermediate representation of the final machine language code is produced. This phase bridges the analysis and synthesis phases of translation.

Code Optimization: This is an optional phase intended to improve the intermediate code so that the output runs faster and takes less space.

Code Generation: The last phase of translation is code generation. A number of optimizations to reduce the length of the machine language program are carried out during this phase. The output of the code generator is the machine language program for the specified computer.

Table Management (or Book-keeping) and Error Handling are described below.

1. Local Optimization: An important local optimization is the elimination of common sub-expressions. For example,

    A := B + C + D
    E := B + C + F

might be evaluated as

    T1 := B + C
    A := T1 + D
    E := T1 + F

taking advantage of the common sub-expression B + C.

2. Loop Optimization: Another important source of optimization is increasing the speed of loops. A typical loop improvement is to move a computation that produces the same result each time around the loop to a point in the program just before the loop is entered (a C sketch at the end of this subsection illustrates this).

Code Generator: The code generator produces the object code by deciding on the memory locations for data, selecting code to access each datum, and selecting the registers in which each computation is to be done. Many computers have only a few high-speed registers in which computations can be performed quickly, so a good code generator attempts to utilize registers as efficiently as possible.

Table Management (or Book-keeping): A compiler needs to collect information about all the data objects that appear in the source program. This information is collected by the early phases of the compiler, the lexical and syntax analyzers, and the data structure used to record it is called the symbol table.

Error Handling: One of the most important functions of a compiler is the detection and reporting of errors in the source program. The error messages should allow the programmer to determine exactly where the errors have occurred. Errors may occur in any phase of a compiler; whenever a phase discovers an error, it must report it to the error handler, which issues an appropriate diagnostic message. Both the table-management and error-handling routines interact with all phases of the compiler.
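As a rough illustration (not from the notes; the function names are invented), the following C sketch shows the loop improvement described above: the product x * y produces the same result on every iteration, so it can be computed once, just before the loop is entered.

#include <stdio.h>

/* Before optimization: x * y is recomputed on every iteration,
   even though neither x nor y changes inside the loop. */
long sum_naive(int a[], int n, int x, int y) {
    long s = 0;
    for (int i = 0; i < n; i++)
        s += a[i] + x * y;          /* loop-invariant computation */
    return s;
}

/* After optimization: the invariant product is computed once,
   just before the loop is entered. */
long sum_hoisted(int a[], int n, int x, int y) {
    long s = 0;
    int t = x * y;                  /* hoisted out of the loop */
    for (int i = 0; i < n; i++)
        s += a[i] + t;
    return s;
}

int main(void) {
    int a[] = {1, 2, 3, 4};
    printf("%ld %ld\n", sum_naive(a, 4, 5, 6), sum_hoisted(a, 4, 5, 6));
    return 0;
}

An optimizing compiler performs this motion automatically once it can prove that x and y do not change inside the loop.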

Example: position := initial + rate * 60

Lexical Analyzer (token stream):
    id1 := id2 + id3 * 60

Syntax Analyzer (syntax tree, shown here in nested prefix form):
    :=(id1, +(id2, *(id3, 60)))

Semantic Analyzer (inserts the int-to-real conversion for 60):
    :=(id1, +(id2, *(id3, inttoreal(60))))

Intermediate Code Generator:
    temp1 := inttoreal(60)
    temp2 := id3 * temp1
    temp3 := id2 + temp2
    id1   := temp3

Code Optimizer:
    temp1 := id3 * 60.0
    id1   := id2 + temp1
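The small C sketch below (not part of the original notes; the names are invented for illustration) shows the kind of data the lexical analyzer hands to the parser for this example: a stream of (token kind, attribute) pairs, where the attribute is, for instance, a symbol-table index for identifiers or the value for numbers.

#include <stdio.h>

enum token_kind { TK_ID, TK_ASSIGN, TK_PLUS, TK_STAR, TK_NUM, TK_EOF };

struct token {
    enum token_kind kind;
    int attr;   /* e.g. symbol-table index for ids, value for numbers */
};

int main(void) {
    /* position := initial + rate * 60 tokenized as id1 := id2 + id3 * 60 */
    struct token stream[] = {
        {TK_ID, 1}, {TK_ASSIGN, 0}, {TK_ID, 2}, {TK_PLUS, 0},
        {TK_ID, 3}, {TK_STAR, 0}, {TK_NUM, 60}, {TK_EOF, 0}
    };
    for (int i = 0; stream[i].kind != TK_EOF; i++)
        printf("token kind=%d attr=%d\n", stream[i].kind, stream[i].attr);
    return 0;
}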

THE EVOLUTION OF PROGRAMMING LANGUAGES

An object-oriented language is one that supports object-oriented programming, a programming style in which a program consists of a collection of objects that interact with one another. Examples: Simula 67, Smalltalk, C++, Java, Ruby.

Scripting languages are interpreted languages with high-level operators designed for "gluing together" computations; these computations were originally called scripts. Examples: JavaScript, Perl, PHP, Python, Ruby, Tcl.

The Science of building a Compiler

A compiler must accept all source programs that conform to the specification of the language; the set of source programs is infinite, and any program can be very large, consisting of possibly millions of lines of code. Any transformation performed by the compiler while translating a source program must preserve the meaning of the program being compiled. Compiler writers thus have influence over not just the compilers they create, but all the programs that their compilers compile. This leverage makes writing compilers particularly rewarding; however, it also makes compiler development challenging.

Modelling in compiler design and implementation: The study of compilers is mainly a study of how we design the right mathematical models and choose the right algorithms. Some of the most fundamental models are finite-state machines and regular expressions. These models are useful for describing the lexical units of programs (keywords, identifiers, and such) and for describing the algorithms used by the compiler to recognize those units. Also among the most fundamental models are context-free grammars, used to describe the syntactic structure of programming languages, such as the nesting of parentheses or control constructs. Similarly, trees are an important model for representing the structure of programs and their translation into object code.

The science of code optimization: The term "optimization" in compiler design refers to the attempts that a compiler makes to produce code that is more efficient than the obvious code. In modern times, the optimization of code that a compiler performs has become both more important and more complex. It is more complex because processor architectures have become more complex, yielding more opportunities to improve the way code executes. It is more important because massively parallel computers require substantial optimization, or their performance suffers by orders of magnitude.

Compiler optimizations must meet the following design objectives:

  1. The optimization must be correct, that is, preserve the meaning of the compiled program,
  2. The optimization must improve the performance of many programs,
  3. The compilation time must be kept reasonable, and
  4. The engineering effort required must be manageable.

Thus, in studying compilers, we learn not only how to build a compiler, but also the general methodology of solving complex and open-ended problems.

Applications of Compiler Technology

Compiler design impacts several other areas of computer science.

Implementation of high-level programming languages: A high-level programming language defines a programming abstraction: the programmer expresses an algorithm using the language, and the compiler

must translate that program to the target language. Higher-level programming languages are easier to program in, but are less efficient: the target programs run more slowly. Programmers using a low-level language have more control over a computation and can, in principle, produce more efficient code.

Several language features have stimulated significant advances in compiler technology. Practically all common programming languages, including C, Fortran and Cobol, support user-defined aggregate data types, such as arrays and structures, and high-level control flow, such as loops and procedure invocations. If we just took each high-level construct or data-access operation and translated it directly to machine code, the result would be very inefficient. A body of compiler optimizations, known as data-flow optimizations, has been developed to analyze the flow of data through the program and remove redundancies across these constructs. They are effective in generating code that resembles code written by a skilled programmer at a lower level.

Object orientation was first introduced in Simula in 1967, and has been incorporated in languages such as Smalltalk, C++, C#, and Java. The key ideas behind object orientation are

  1. Data abstraction and
  2. Inheritance of properties.

Java has many features that make programming easier, many of which have been introduced previously in other languages. Compiler optimizations have been developed to reduce the overhead of such features, for example by eliminating unnecessary range checks and by allocating objects that are not accessible beyond a procedure on the stack instead of the heap. Effective algorithms have also been developed to minimize the overhead of garbage collection. In dynamic optimization, it is important to minimize the compilation time, as it is part of the execution overhead; a common technique is to compile and optimize only those parts of the program that will be frequently executed.

Optimizations for computer architecture: High-performance systems take advantage of the same two basic techniques: parallelism and memory hierarchies. Parallelism can be found at several levels: at the instruction level, where multiple operations are executed simultaneously, and at the processor level, where different threads of the same application are run on different processors. Memory hierarchies are a response to the basic limitation that we can build very fast storage or very large storage, but not storage that is both fast and large.

Design of new computer architectures: In modern computer architecture development, compilers are developed in the processor-design stage, and compiled code, running on simulators, is used to evaluate the proposed architectural features. One of the best-known examples of how compilers influenced the design of computer architecture was the invention of the RISC (Reduced Instruction-Set Computer) architecture. Compiler optimizations often can reduce complex instructions to a small number of simpler operations by eliminating the redundancies across complex instructions. Thus, it is desirable to build simple instruction sets; compilers can use them effectively and the hardware is much easier to optimize. Most general-purpose processor architectures, including PowerPC, SPARC, MIPS, Alpha, and PA-RISC, are based on the RISC concept.

Specialized architectures: Over the last three decades, many architectural concepts have been proposed. They include data-flow machines, vector machines, VLIW (Very Long Instruction Word) machines, SIMD (Single Instruction, Multiple Data) arrays of processors, systolic arrays, multiprocessors with shared memory, and multiprocessors with distributed memory. The development of each of these architectural concepts was accompanied by the research and development of corresponding compiler technology.

Static Scope and Block Structure

Most languages, including C and its family, use static scope. Here we consider static-scope rules for a language with blocks, where a block is a grouping of declarations and statements. C uses braces { and } to delimit a block; the alternative use of begin and end for the same purpose dates back to Algol.

A C program consists of a sequence of top-level declarations of variables and functions. Functions may have variable declarations within them, where variables include local variables and parameters. The scope of each such declaration is restricted to the function in which it appears. The scope of a top-level declaration of a name x consists of the entire program that follows, with the exception of those statements that lie within a function that also has a declaration of x.

A block is a sequence of declarations followed by a sequence of statements, all surrounded by braces. A declaration D "belongs" to a block B if B is the most closely nested block containing D; that is, D is located within B, but not within any block that is nested within B.

The static-scope rule for variable declarations in a block-structured language is as follows. If declaration D of name x belongs to block B, then the scope of D is all of B, except for any blocks B' nested to any depth within B in which x is redeclared. Here, x is redeclared in B' if some other declaration D' of the same name x belongs to B'.

An equivalent way to express this rule is to focus on a use of a name x. Let B1, B2, ..., Bk be all the blocks that surround this use of x, with Bk the smallest, nested within Bk-1, which is nested within Bk-2, and so on. Search for the largest i such that there is a declaration of x belonging to Bi. This use of x refers to the declaration in Bi; alternatively, this use of x is within the scope of the declaration in Bi.
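As a concrete illustration of this rule (a made-up C example, not from the notes), each use of x below refers to the declaration belonging to the most closely nested enclosing block that declares x:

#include <stdio.h>

int x = 1;                  /* top-level declaration: scope is the rest of the file */

int main(void) {            /* block B1 */
    int x = 2;              /* redeclaration: hides the top-level x within B1 */
    {                       /* block B2, nested within B1 */
        int x = 3;          /* redeclaration: hides B1's x within B2 */
        printf("%d\n", x);  /* prints 3: the smallest enclosing block declaring x is B2 */
    }
    printf("%d\n", x);      /* prints 2: B2's declaration is no longer in scope */
    return 0;
}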

Explicit Access Control

Through the use of keywords like public, private, and protected, object-oriented languages such as C++ or Java provide explicit control over access to member names in a superclass. These keywords support encapsulation by restricting access. Thus, private names are purposely given a scope that includes only the method declarations and definitions associated with that class and any "friend" classes (the C++ term); protected names are accessible to subclasses; public names are accessible from outside the class.

Dynamic Scope

Any scoping policy is dynamic if it is based on factors that can be known only when the program executes. The term dynamic scope, however, usually refers to the following policy: a use of a name x refers to the declaration of x in the most recently called procedure with such a declaration. Dynamic scoping of this type appears only in special situations. We shall consider two examples of

dynamic policies: macro expansion in the C preprocessor and method resolution in object-oriented programming.

Declarations and Definitions

Declarations tell us about the types of things, while definitions tell us about their values. Thus, int i is a declaration of i, while i = 1 is a definition of i. The difference is more significant when we deal with methods or other procedures. In C++, a method is declared in a class definition by giving the types of the arguments and result of the method (often called the signature of the method). The method is then defined, i.e., the code for executing the method is given, in another place. Similarly, it is common to define a C function in one file and declare it in other files where the function is used.

Parameter Passing Mechanisms

In this section, we consider how the actual parameters (the parameters used in the call of a procedure) are associated with the formal parameters (those used in the procedure definition). The mechanism used determines how the calling-sequence code treats parameters. The great majority of languages use either "call-by-value" or "call-by-reference", or both.

Call-by-Value

In call-by-value, the actual parameter is evaluated (if it is an expression) or copied (if it is a variable). The value is placed in the location belonging to the corresponding formal parameter of the called procedure. This method is used in C and Java, and is a common option in C++, as well as in most other languages. Call-by-value has the effect that all computation involving the formal parameters done by the called procedure is local to that procedure, and the actual parameters themselves cannot be changed. Note, however, that in C we can pass a pointer to a variable to allow that variable to be changed by the callee. Likewise, array names passed as parameters in C, C++, or Java give the called procedure what is in effect a pointer or reference to the array itself. Thus, if a is the name of an array of the calling procedure, and it is passed by value to the corresponding formal parameter x, then an assignment such as x[i] = 2 really changes the array element a[i]. The reason is that, although x gets a copy of the value of a, that value is really a pointer to the beginning of the area of the store where the array named a is located. Similarly, in Java, many variables are really references, or pointers, to the things they stand for. This observation applies to arrays, strings, and objects of all classes. Even though Java uses call-by-value exclusively, whenever we pass the name of an object to a called procedure, the value received by that procedure is in effect a pointer to the object. Thus, the called procedure is able to affect the value of the object itself.

Call-by-Reference

In call-by-reference, the address of the actual parameter is passed to the callee as the value of the corresponding formal parameter. Uses of the formal parameter in the code of the callee are implemented by following this pointer to the location indicated by the caller. Changes to the formal parameter thus appear as changes to the actual parameter.
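The following runnable C sketch (invented for illustration) contrasts the mechanisms just described: a value parameter cannot be changed by the callee, a pointer parameter gives the effect of call-by-reference, and an array parameter is in effect a pointer to the caller's array.

#include <stdio.h>

/* Call-by-value: v is a copy, so the caller's variable is unchanged. */
void set_by_value(int v) {
    v = 99;
}

/* C simulates call-by-reference by passing a pointer by value:
   the callee follows the pointer to change the caller's variable. */
void set_via_pointer(int *p) {
    *p = 99;
}

/* An array name passed as a parameter is in effect a pointer to the
   array, so x[i] = 2 really changes the caller's element a[i]. */
void set_element(int x[], int i) {
    x[i] = 2;
}

int main(void) {
    int a = 1, b = 1, arr[3] = {0, 0, 0};
    set_by_value(a);         /* a is still 1 */
    set_via_pointer(&b);     /* b becomes 99 */
    set_element(arr, 1);     /* arr[1] becomes 2 */
    printf("a=%d b=%d arr[1]=%d\n", a, b, arr[1]);
    return 0;
}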

LEXICAL ANALYSIS

2.1 OVERVIEW OF LEXICAL ANALYSIS

o To identify the tokens we need some method of describing the possible tokens that can appear in the input stream. For this purpose we introduce regular expressions, a notation that can be used to describe essentially all the tokens of a programming language.

o Secondly, having decided what the tokens are, we need some mechanism to recognize these in the input stream. This is done by token recognizers, which are designed using transition diagrams and finite automata.

2.2 ROLE OF THE LEXICAL ANALYZER

The lexical analyzer (LA) is the first phase of a compiler. Its main task is to read the input characters and produce as output a sequence of tokens that the parser uses for syntax analysis. Upon receiving a "get next token" command from the parser, the lexical analyzer reads input characters until it can identify the next token, and then returns to the parser a representation of the token it has found. The representation will be an integer code if the token is a simple construct such as a parenthesis, comma or colon.

The LA may also perform certain secondary tasks at the user interface. One such task is stripping out from the source program comments and white space in the form of blank, tab and newline characters. Another is correlating error messages from the compiler with the source program.
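As a rough sketch of the secondary task just mentioned (illustrative only, not from the notes; it handles just // comments and passes everything else through), the following C program strips blanks, tabs, newlines and comments from standard input:

#include <stdio.h>

int main(void) {
    int c;
    while ((c = getchar()) != EOF) {
        if (c == ' ' || c == '\t' || c == '\n')
            continue;                        /* strip white space */
        if (c == '/') {
            int d = getchar();
            if (d == '/') {                  /* strip a // comment up to end of line */
                while ((c = getchar()) != EOF && c != '\n')
                    ;
                continue;
            }
            putchar('/');                    /* a lone '/' is not a comment */
            if (d != EOF)
                ungetc(d, stdin);
            continue;
        }
        putchar(c);                          /* pass every other character through */
    }
    return 0;
}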


LEXICAL ANALYSIS VS PARSING:

Lexical analysis: A scanner simply turns an input string (say, a file) into a list of tokens. These tokens represent things like identifiers, parentheses, operators etc. The lexical analyzer (the "lexer") parses individual symbols from the source code file into tokens.

Parsing: A parser converts this list of tokens into a tree-like object that represents how the tokens fit together to form a cohesive whole (sometimes referred to as a sentence). The parser turns whole tokens into sentences of your grammar, but it does not give the nodes any meaning beyond structural cohesion; the next step is to extract meaning from this structure (sometimes called contextual analysis).

2.3 INPUT BUFFERING

The LA scans the characters of the source program one at a time to discover tokens. Because a large amount of time can be consumed scanning characters, specialized buffering techniques have been developed to reduce the amount of overhead required to process an input character. Buffering techniques:

  1. Buffer pairs
  2. Sentinels

The lexical analyzer scans the characters of the source program one at a time to discover tokens. Often, however, many characters beyond the next token may have to be examined before the next token itself can be determined. For this and other reasons, it is desirable for the lexical analyzer to read its input from an input buffer. The figure shows a buffer divided into two halves of, say, 100 characters each. One pointer marks the beginning of the token being discovered; a look-ahead pointer scans ahead of the beginning point until the token is discovered. We view the position of each pointer as being between the character last read and the character next to be read. In practice each buffering scheme adopts one convention: either a pointer is at the symbol last read, or it is at the symbol it is ready to read.

The distance which the look-ahead pointer may have to travel past the actual token may be large. For example, in a PL/I program we may see:

DECLARE (ARG1, ARG2, ..., ARGn)

without knowing whether DECLARE is a keyword or an array name until we see the character that follows the right parenthesis.
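A sketch in C of the buffer-pair scheme with sentinels follows. It is a simplified illustration (names invented; '\0' is used as the sentinel, so it assumes source text without NUL bytes). Each half of the buffer ends in a sentinel, so the common case of advancing the look-ahead pointer needs only a single test per character.

#include <stdio.h>

#define N 4096                    /* size of each buffer half */

static char buf[2 * N + 2];       /* two halves, each followed by a sentinel */
static char *forward;             /* look-ahead pointer */
static char *lexeme_begin;        /* marks the start of the current token */
static FILE *src;

/* Refill one half with up to N characters and terminate it with the
   sentinel; a short read leaves the sentinel at the true end of input. */
static void load_half(char *half) {
    size_t n = fread(half, 1, N, src);
    half[n] = '\0';
}

/* Return the next character, reloading a half when its sentinel is hit. */
static char advance(void) {
    char c = *forward++;
    if (c == '\0') {
        if (forward == buf + N + 1) {            /* end of first half */
            load_half(buf + N + 1);              /* forward already points there */
        } else if (forward == buf + 2 * N + 2) { /* end of second half */
            load_half(buf);
            forward = buf;
        } else {
            return '\0';                         /* true end of input */
        }
        c = *forward++;
    }
    return c;
}

int main(int argc, char **argv) {
    src = fopen(argc > 1 ? argv[1] : "input.txt", "rb");
    if (!src) return 1;
    load_half(buf);
    lexeme_begin = forward = buf;
    long chars = 0;
    while (advance() != '\0')
        chars++;
    printf("%ld characters scanned\n", chars);
    return 0;
}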

2.4 TOKENS, LEXEMES AND PATTERNS

A pattern is a rule describing the set of lexemes that can represent a particular token in the source program.

2.5 LEXICAL ERRORS

Lexical errors are the errors thrown by the lexer when it is unable to continue, which means that there is no way to recognise a lexeme as a valid token. Syntax errors, on the other hand, are thrown by the parser when a given sequence of already recognised valid tokens does not match any of the right sides of the grammar rules. A simple panic-mode error handling system requires that we return to a high-level parsing function when a parsing or lexical error is detected. Error-recovery actions are:

i. Delete one character from the remaining input.
ii. Insert a missing character into the remaining input.
iii. Replace a character by another character.
iv. Transpose two adjacent characters.

2.6 DIFFERENCE BETWEEN COMPILER AND INTERPRETER

A compiler converts the high-level instructions into machine language, while an interpreter converts the high-level instructions into an intermediate form. A compiler translates the entire program before execution, whereas an interpreter translates the first line, executes it, and so on. A compiler produces a list of errors after the compilation process, while an interpreter stops translating after the first error. A compiler creates an independent executable file, whereas an interpreted program requires the interpreter each time it runs. A compiler produces object code, whereas an interpreter does not. In compilation the program is analyzed only once and then the code is generated, whereas a source program is analyzed and interpreted every time it is executed; hence an interpreter is less efficient than a compiler.

Example of an interpreter: the UPS debugger is basically a graphical source-level debugger, but it contains a built-in C interpreter which can handle multiple source files. Example of a compiler: the Borland C compiler or Turbo C compiler compiles programs written in C or C++.

2.7 REGULAR EXPRESSIONS

A regular expression is a formula that describes a possible set of strings. Components of a regular expression:

x        the character x
.        any character, usually except newline
[xyz]    any of the characters x, y, z
R?       an R or nothing (i.e., an optional R)
R*       zero or more occurrences of R
R+       one or more occurrences of R
R1R2     an R1 followed by an R2
R1|R2    either an R1 or an R2

A token is either a single string or one of a collection of strings of a certain type. If we view the set of strings in each token class as a language, we can use the regular-expression notation to describe tokens. Consider an identifier, which is defined to be a letter followed by zero or more letters or digits. In regular expression notation we would write:

identifier = letter (letter | digit)*

Here are the rules that define a regular expression over an alphabet ∑:

o ε is a regular expression denoting { ε }, that is, the language containing only the empty string.
o For each a in ∑, a is a regular expression denoting { a }, the language with only one string, consisting of the single symbol a.
o If R and S are regular expressions, then (R) | (S) denotes LR ∪ LS, RS denotes LR·LS, and R* denotes LR*.

2.8 REGULAR DEFINITIONS

For notational convenience, we may wish to give names to regular expressions and to define regular expressions using these names as if they were symbols. For example, ab|cd? is equivalent to (a(b)) | (c((d)?)). Identifiers are the set of strings of letters and digits beginning with a letter. The following regular definition provides a precise specification for this class of strings (a Pascal identifier):

letter -> A | B | ... | Z | a | b | ... | z
digit  -> 0 | 1 | 2 | ... | 9
id     -> letter (letter | digit)*
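A hand-coded C recognizer for this identifier pattern, letter (letter | digit)*, might look as follows (a minimal sketch, not from the notes; names are invented):

#include <ctype.h>
#include <stdio.h>

/* Returns 1 if s matches letter (letter | digit)*, else 0. */
int is_identifier(const char *s) {
    if (!isalpha((unsigned char)*s))     /* must start with a letter */
        return 0;
    s++;
    while (*s) {
        if (!isalnum((unsigned char)*s)) /* then letters or digits only */
            return 0;
        s++;
    }
    return 1;
}

int main(void) {
    const char *tests[] = {"rate", "r2d2", "2fast", "x_1", ""};
    for (int i = 0; i < 5; i++)
        printf("%-6s -> %s\n", tests[i],
               is_identifier(tests[i]) ? "identifier" : "not an identifier");
    return 0;
}

Each call walks the string exactly like the two-state transition diagram for the pattern: one state for "expecting the first letter" and one accepting state that loops on letters and digits.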