Handouts

CSC 441
Compiler Construction
Muhammad Bilal Zafar
VCOMSATS

Learning Management System

The purpose of the course is to become familiar with the functionality of the different phases in the construction of a compiler front end and to gain insight into how these phases are related to each other.


Course Outline:

  • Introduction to Compiler

  • Structure of Compiler

  • Syntax Directed Translator

  • Lexical Analysis

  • Syntax Analysis

  • Semantic Analysis

  • Syntax Directed Translation

  • Intermediate Code Generation



Lesson 01, 02 & 03

Language Processors, Interpreters, Compiler Vs Interpreters, Language Processing System, The Structure of a Compiler, Compiler Construction Tools, Evolution of Programming Languages, 1st - 5th Generation Languages, Impacts on Compilers, Modeling in Compiler Design, Code Optimization, Programming Language Basics

A compiler is a piece of software that translates a program in one (artificial) language, Lang1, into a program in another (artificial) language, Lang2. Our primary focus is the case where Lang1 is a programming language that humans like to program in, and Lang2 is (or is "closer to") a machine language that a computer "understands" and can execute. The compiler should say something even if the input is not a valid program of Lang1: namely, it should give an error message explaining why the input is not a valid program.


Simply stated, a compiler is a program that can read a program in one language - the source language - and translate it into an equivalent program in another language - the target language. An important role of the compiler is to report any errors in the source program that it detects during the translation process.
[Figure lec01-01.png: a compiler reads a source program and translates it into a target program]
An interpreter is another common kind of language processor. Instead of producing a target program as a translation, an interpreter appears to directly execute the operations specified in the source program on inputs supplied by the user.
The machine-language target program produced by a compiler is usually much faster than an interpreter at mapping inputs to outputs. An interpreter, however, can usually give better error diagnostics than a compiler, because it executes the source program statement by statement.
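To make the contrast concrete, here is a minimal sketch in Python of the interpreter idea (an illustration assumed for these notes, not a program from the course): statements of a toy assignment language are executed one at a time on the current program state, which is exactly what makes per-statement error diagnostics easy.

def interpret(source_lines):
    env = {}                                    # program state: variable -> value
    for lineno, line in enumerate(source_lines, start=1):
        name, _, expr = line.partition("=")
        try:
            # Execute the statement immediately instead of translating it.
            env[name.strip()] = eval(expr, {}, env)
        except Exception as err:
            # Statement-by-statement execution pinpoints the failing statement.
            print(f"error at line {lineno}: {line.strip()}: {err}")
    return env

print(interpret(["x = 9 - 5 + 2", "y = x * 3", "y = x / 0"]))   # reports the division by zero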
Java language processors combine compilation and interpretation, as shown in the following figure. A Java source program may first be compiled into an intermediate form called bytecodes. The bytecodes are then interpreted by a virtual machine. A benefit of this arrangement is that bytecodes compiled on one machine can be interpreted on another machine, perhaps across a network. In order to achieve faster processing of inputs to outputs, some Java compilers, called just-in-time compilers, translate the bytecodes into machine language immediately before they run the intermediate program to process the input.

In addition to a compiler, several other programs may be required to create an executable target program, as described below.



A source program may be divided into modules stored in separate files. The task of collecting the source program is sometimes entrusted to a separate program, called a preprocessor. The preprocessor may also expand shorthands, called macros, into source language statements. The modified source program is then fed to a compiler. The compiler may produce an assembly-language program as its output, because assembly language is easier to produce as output and is easier to debug. The assembly language is then processed by a program called an assembler that produces relocatable machine code as its output.
Large programs are often compiled in pieces, so the relocatable machine code may have to be linked together with other relocatable object files and library files into the code that actually runs on the machine. The linker resolves external memory addresses, where the code in one file may refer to a location in another file. The loader then puts all of the executable object files together into memory for execution.
Up to this point we have treated a compiler as a single box that maps a source program into a semantically equivalent target program. If we open up this box a little, we see that there are two parts to this mapping: analysis and synthesis.
The analysis part breaks up the source program into constituent pieces and imposes a grammatical structure on them. It then uses this structure to create an intermediate representation of the source program. If the analysis part detects that the source program is either syntactically ill formed or semantically unsound, then it must provide informative messages, so the user can take corrective action. The analysis part also collects information about the source program and stores it in a data structure called a symbol table, which is passed along with the intermediate representation to the synthesis part.
The synthesis part constructs the desired target program from the intermediate representation and the information in the symbol table. The analysis part is often called the front end of the compiler; the synthesis part is the back end. If we examine the compilation process in more detail, we see that it operates as a sequence of phases, each of which transforms one representation of the source program to another.
A typical decomposition of a compiler into phases is shown below:
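In outline (the standard decomposition, with phase names matching the course topics above), a source program passes through: lexical analyzer → syntax analyzer → semantic analyzer → intermediate code generator → machine-independent code optimizer → code generator → machine-dependent code optimizer → target machine code, with the symbol table used by every phase.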
In practice, several phases may be grouped together, and the intermediate representations between the grouped phases need not be constructed explicitly. The symbol table, which stores information about the entire source program, is used by all phases of the compiler. Some compilers have a machine-independent optimization phase between the front end and the back end. The purpose of this optimization phase is to perform transformations on the intermediate representation, so that the back end can produce a better target program than it would have otherwise produced from an unoptimized intermediate representation.
The compiler writer, like any software developer, can profitably use modern software development environments containing tools such as language editors, debuggers, version managers, profilers, test harnesses, and so on. In addition to these general software-development tools, other more specialized tools have been created to help implement various phases of a compiler. These tools use specialized languages for specifying and implementing specific components, and many use quite sophisticated algorithms. The most successful tools are those that hide the details of the generation algorithm and produce components that can be easily integrated into the remainder of the compiler.
Some commonly used compiler-construction tools include:


  1. Parser generators that automatically produce syntax analyzers from a grammatical description of a programming language.

  2. Scanner generators that produce lexical analyzers from a regular-expression description of the tokens of a language.

  3. Syntax-directed translation engines that produce collections of routines for walking a parse tree and generating intermediate code.

  4. Code-generator generators that produce a code generator from a collection of rules for translating each operation of the intermediate language into the machine language for a target machine.

  5. Data-flow analysis engines that facilitate the gathering of information about how values are transmitted from one part of a program to each other part. Data-flow analysis is a key part of code optimization.

  6. Compiler-construction toolkits that provide an integrated set of routines for constructing various phases of a compiler.
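As an illustration of item 2 above, a scanner generator consumes a regular-expression description of the tokens and produces a lexical analyzer from it. The following Python sketch (the token names and patterns are illustrative assumptions, not part of the handout) shows the essence of that idea: the specification is data, and the scanner is derived from it mechanically.

import re

# Regular-expression description of the tokens (the "specification").
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("ID",     r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"[ \t\n]+"),
]

# "Generate" a scanner by combining the patterns into one master
# regular expression with named groups.
MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def scan(text):
    for match in MASTER.finditer(text):
        if match.lastgroup != "SKIP":           # discard white space
            yield (match.lastgroup, match.group())

print(list(scan("count + 1")))   # [('ID', 'count'), ('OP', '+'), ('NUMBER', '1')]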


Evolution of programming languages: The first electronic computers appeared in the 1940's and were programmed in machine language by sequences of 0's and 1's that explicitly told the computer what operations to execute and in what order. The operations themselves were very low level: move data from one location to another, add the contents of two registers, compare two values, and so on. Needless to say, this kind of programming was slow, tedious, and error prone. And once written, the programs were hard to understand and modify.
The first step towards more people-friendly programming languages was the development of mnemonic assembly languages in the early 1950's. Initially, the instructions in an assembly language were just mnemonic representations of machine instructions. Later, macro instructions were added to assembly languages so that a programmer could define parameterized shorthands for frequently used sequences of machine instructions. A major step towards higher-level languages was made in the latter half of the 1950's with the development of Fortran for scientific computation, Cobol for business data processing, and Lisp for symbolic computation. The philosophy behind these languages was to create higher-level notations with which programmers could more easily write numerical computations, business applications, and symbolic programs. These languages were so successful that they are still in use today.
In the following decades, many more languages were created with innovative features to help make programming easier, more natural, and more robust. Today, there are thousands of programming languages. They can be classified in a variety of ways.
One classification is by generation. First-generation languages are the machine languages, second-generation the assembly languages, and third-generation the higher-level languages like Fortran, Cobol, Lisp, C, C++, C#, and Java. Fourth-generation languages are languages designed for specific applications like NOMAD for report generation, SQL for database queries, and Postscript for text formatting. The term fifth-generation language has been applied to logic- and constraint-based languages like Prolog and OPS5.
Another classification of languages uses the term imperative for languages in which a program specifies how a computation is to be done and declarative for languages in which a program specifies what computation is to be done. Languages such as C, C++, C#, and Java are imperative languages. In imperative languages there is a notion of program state and statements that change the state. Functional languages such as ML and Haskell and constraint logic languages such as Prolog are often considered to be declarative languages. The term von Neumann language is applied to programming languages whose computational model is based on the von Neumann computer architecture. Many of today's languages, such as Fortran and C, are von Neumann languages. An object-oriented language is one that supports object-oriented programming, a programming style in which a program consists of a collection of objects that interact with one another. Simula 67 and Smalltalk are the earliest major object-oriented languages. Languages such as C++, C#, Java, and Ruby are more recent object-oriented languages. Scripting languages are interpreted languages with high-level operators designed for "gluing together" computations. These computations were originally called "scripts." Awk, JavaScript, Perl, PHP, Python, Ruby, and Tcl are popular examples of scripting languages. Programs written in scripting languages are often much shorter than equivalent programs written in languages like C.
Since the design of programming languages and the design of compilers are intimately related, advances in programming languages placed new demands on compiler writers. They had to devise algorithms and representations to translate and support the new language features. Since the 1940's, computer architecture has evolved as well. Not only did compiler writers have to track new language features, they also had to devise translation algorithms that would take maximal advantage of the new hardware capabilities.
Compilers can help promote the use of high-level languages by minimizing the execution overhead of the programs written in these languages. Compilers are also critical in making high-performance computer architectures effective on users' applications. In fact, the performance of a computer system is so dependent on compiler technology that compilers are used as a tool in evaluating architectural concepts before a computer is built. Compiler writing is challenging. A compiler by itself is a large program. Moreover, many modern language-processing systems handle several source languages and target machines within the same framework; that is, they serve as collections of compilers, possibly consisting of millions of lines of code. Consequently, good software-engineering techniques are essential for creating and evolving modern language processors.
A compiler must translate correctly the potentially infinite set of programs that could be written in the source language. The problem of generating the optimal target code from a source program is undecidable in general; thus, compiler writers must evaluate tradeoffs about what problems to tackle and what heuristics to use to approach the problem of generating efficient code.
The study of compilers is mainly a study of how we design the right mathematical models and choose the right algorithms, while balancing the need for generality and power against simplicity and efficiency. Some of the most fundamental models are finite-state machines and regular expressions. These models are useful for describing the lexical units of programs (keywords, identifiers) and for describing the algorithms used by the compiler to recognize those units. Also among the most fundamental models are context-free grammars, used to describe the syntactic structure of programming languages such as the nesting of parentheses or control constructs. Similarly, trees are an important model for representing the structure of programs and their translation into object code.
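As a concrete (and assumed, illustrative) instance of the finite-state-machine model, the sketch below hard-codes a two-state machine in Python that recognizes identifiers of the usual form: a letter or underscore followed by letters, digits, or underscores.

# A two-state finite-state machine for identifiers (illustrative sketch).
def is_identifier(s):
    state = "start"
    for ch in s:
        if state == "start":
            state = "in_id" if (ch.isalpha() or ch == "_") else "reject"
        elif state == "in_id":
            state = "in_id" if (ch.isalnum() or ch == "_") else "reject"
        else:                        # "reject" is a dead state
            return False
    return state == "in_id"         # accept only if we end in the accepting state

print(is_identifier("count"), is_identifier("9tail"))   # True False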
The term "optimization" in compiler design refers to the attempts that a compiler makes to produce code that is more efficient than the obvious code. "Optimization" is thus a misnomer, since there is no way that the code produced by a compiler can be guaranteed to be as fast or faster than any other code that performs the same task.
In modern times, the optimization of code that a compiler performs has become both more important and more complex. It is more complex because processor architectures have become more complex, yielding more opportunities to improve the way code executes. It is more important because massively parallel computers require substantial optimization, or their performance suffers by orders of magnitude. With the likely prevalence of multi-core machines (computers with chips that have large numbers of processors on them), all compilers will have to face the problem of taking advantage of multiprocessor machines.
Compiler optimizations must meet the following design objectives:


  • The optimization must be correct, that is, preserve the meaning of the compiled program.

  • The optimization must improve the performance of many programs.

  • The compilation time must be kept reasonable.

  • The engineering effort required must be manageable.

It is impossible to overemphasize the importance of correctness. It is trivial to write a compiler that generates fast code if the generated code need not be correct! Optimizing compilers are so difficult to get right that we dare say that no optimizing compiler is completely error-free! Thus, the most important objective in writing a compiler is that it is correct. The second goal is that the compiler must be effective in improving the performance of many input programs. Normally, performance means the speed of the program execution. Especially in embedded applications, we may also wish to minimize the size of the generated code. And in the case of mobile devices, it is also desirable that the code minimizes power consumption.


Typically, the same optimizations that speed up execution time also conserve power. Besides performance, usability aspects such as error reporting and debugging are also important. Third, we need to keep the compilation time short to support a rapid development and debugging cycle. Finally, a compiler is a complex system; we must keep the system simple to assure that the engineering and maintenance costs of the compiler are manageable.
Lesson 04

Syntax Directed Translator, Syntax Definition, CFG, Derivations, Parse Trees, Ambiguity, Associativity & Precedence

This section illustrates the compiling techniques by developing a program that translates representative programming language statements into three-address code, an intermediate representation.


The analysis phase of a compiler breaks up a source program into constituent pieces and produces an internal representation for it, called intermediate code. The synthesis phase translates the intermediate code into the target program. Analysis is organized around the "syntax" of the language to be compiled. The syntax of a programming language describes the proper form of its programs, while the semantics of the language defines what its programs mean, that is, what each program does when it executes. For specifying syntax, we present a widely used notation, called context-free grammars or BNF (for Backus-Naur Form).
Besides specifying the syntax of a language, a context-free grammar can be used to help guide the translation of programs. For simplicity, we consider the syntax-directed translation of infix expressions to postfix form, a notation in which operators appear after their operands. For example, the postfix form of the expression 9 - 5 + 2 is 95-2+. Translation into postfix form is rich enough to illustrate syntax analysis. The simple translator handles expressions like 9 - 5 + 2, consisting of digits separated by plus and minus signs. One reason for starting with such simple expressions is that the syntax analyzer can work directly with the individual characters for operators and operands.
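A minimal sketch of such a translator in Python (an assumed illustration, not the handout's program) makes the idea concrete: read the first digit, then repeatedly read an operator and a digit, emitting each digit immediately and each operator after its second operand.

def infix_to_postfix(src):
    # Translates expressions of single digits separated by + and -,
    # e.g. "9-5+2" -> "95-2+".
    if not src or not src[0].isdigit():
        raise SyntaxError("expected a digit at position 0")
    out = [src[0]]                              # first operand
    i = 1
    while i < len(src):
        op = src[i]
        if op not in "+-":
            raise SyntaxError(f"expected + or - at position {i}")
        if i + 1 >= len(src) or not src[i + 1].isdigit():
            raise SyntaxError(f"expected a digit at position {i + 1}")
        out.append(src[i + 1])                  # next operand
        out.append(op)                          # operator follows its operands
        i += 2
    return "".join(out)

print(infix_to_postfix("9-5+2"))                # prints 95-2+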
A lexical analyzer allows a translator to handle multicharacter constructs like identifiers, which are written as sequences of characters but are treated as units called tokens during syntax analysis. For example, in the expression count + 1, the identifier count is treated as a unit. The lexical analyzer also allows numbers, identifiers, and "white space" (blanks, tabs, and newlines) to appear within expressions.

[Figure lec04-01.png: the lexical analyzer and parser in a syntax-directed translator, producing a syntax tree that is translated into three-address code]

In this model, the parser produces a syntax tree that is further translated into three-address code. Some compilers combine parsing and intermediate-code generation into one component.
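For example (an illustrative sequence, not taken from the handout), the assignment x = a + b * c might be translated into the three-address instructions

t1 = b * c
t2 = a + t1
x = t2

where t1 and t2 are compiler-generated temporary names and each instruction applies at most one operator to its operands.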


A context-free grammar, or simply a "grammar," is used to specify the syntax of a language. A grammar describes the hierarchical structure of most programming language constructs.
Example: if ( expression ) statement else statement
That is, an if-else statement is the concatenation of the keyword if, an opening parenthesis, an expression, a closing parenthesis, a statement, the keyword else, and another statement. Using the variable expr to denote an expression and the variable stmt to denote a statement, this structuring rule can be expressed as stmt -> if ( expr ) stmt else stmt in which the arrow may be read as "can have the form." Such a rule is called a production. In a production, lexical elements like the keyword if and the parentheses are called terminals. Variables like expr and stmt represent sequences of terminals and are called nonterminals.
A context-free grammar has four components:


  1. A set of terminal symbols, sometimes referred to as "tokens." The terminals are the elementary symbols of the language defined by the grammar.

  2. A set of nonterminals, sometimes called "syntactic variables." Each nonterminal represents a set of strings of terminals, in a manner we shall describe.

  3. A set of productions, where each production consists of a nonterminal, called the head or left side of the production, an arrow, and a sequence of terminals and/or nonterminals, called the body or right side of the production.

  4. A designation of one of the nonterminals as the start symbol.

Example expressions: 9 - 5 + 2, 5 - 4, 8, …


Since a plus or minus sign must appear between two digits, we refer to such expressions as lists of digits separated by plus or minus signs. The productions are


list → list + digit

list → list - digit

list → digit

digit → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
A grammar derives strings by beginning with the start symbol and repeatedly replacing a nonterminal by the body of a production for that nonterminal. The terminal strings that can be derived from the start symbol form the language defined by the grammar.
Parsing is the problem of taking a string of terminals and figuring out how to derive it from the start symbol of the grammar, and, if it cannot be so derived, reporting syntax errors within the string.
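For example, using the list grammar above, the string 9 - 5 + 2 has the derivation

list ⇒ list + digit ⇒ list - digit + digit ⇒ digit - digit + digit ⇒ 9 - digit + digit ⇒ 9 - 5 + digit ⇒ 9 - 5 + 2

so 9 - 5 + 2 is in the language defined by the grammar.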
Given a context-free grammar, a parse tree according to the grammar is a tree with the following properties:


  • The root is labeled by the start symbol.

  • Each leaf is labeled by a terminal or by ɛ.

  • Each interior node is labeled by a nonterminal.

  • If A → X1 X2 … Xn is a production, then an interior node labeled A has immediate children labeled X1, X2, …, Xn, where each Xi is a terminal, a nonterminal, or ɛ.
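A parse tree for 9 - 5 + 2 according to the list grammar above (drawn here in indented text form as an illustration) is:

list
├── list
│   ├── list
│   │   └── digit → 9
│   ├── -
│   └── digit → 5
├── +
└── digit → 2

Its yield, read across the leaves from left to right, is 9 - 5 + 2.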

A grammar can have more than one parse tree generating a given string of terminals. Such a grammar is said to be ambiguous. To show that a grammar is ambiguous, all we need to do is find a terminal string that is the yield of more than one parse tree. Since a string with more than one parse tree usually has more than one meaning, we need to design unambiguous grammars for compiling applications, or to use ambiguous grammars with additional rules to resolve the ambiguities.


Consider the grammar G = [ {string}, {+, -, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9}, P, string ]
Its productions are string → string + string | string - string | 0 | 1 | … | 9

This grammar is ambiguous, because more than one parse tree represents the string 9-5+2.
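Concretely (a worked illustration): in one tree the top production is string → string - string, with the right substring deriving 5+2, so the input is read as 9-(5+2) = 2; in the other tree the top production is string → string + string, with the left substring deriving 9-5, so the input is read as (9-5)+2 = 6. The two trees yield the same string but suggest different values, which is why the ambiguity must be resolved.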

By convention, 9+5+2 is equivalent to (9+5)+2 and 9-5-2 is equivalent to (9-5)-2.

When an operand like 5 has operators to its left and right, conventions are needed for deciding which operator applies to that operand. We say that the operator + associates to the left, because an operand with plus signs on both sides of it belongs to the operator to its left. In most programming languages the four arithmetic operators (addition, subtraction, multiplication, and division) are left-associative.
Consider the expression 9+5*2. There are two possible interpretations of this expression: (9+5)*2 or 9+(5*2). The associativity rules for + and * apply to occurrences of the same operator, so they do not resolve this ambiguity; for that we need precedence. We say that * has higher precedence than + if * takes its operands before + does. In ordinary arithmetic, multiplication and division have higher precedence than addition and subtraction, so 9+5*2 is interpreted as 9+(5*2).
An associativity and precedence table is shown below:

[Figure lec04-03.png: associativity and precedence table for the arithmetic operators]


