
COA Important Questions And Answers, Assignments of Computer Architecture and Organization

Some Important Questions Regarding Computer Organization And Architecture

Typology: Assignments

2024/2025

Available from 03/15/2025

bittu-chakraborty



Types of RAM

SRAM: Static Random Access Memory (SRAM)
Data is stored in transistors and requires a constant power flow. Because power is supplied continuously, SRAM does not need to be refreshed to retain the data being stored. It is called "static" because no action, i.e. refreshing, is needed to keep the data intact. It is used in cache memories.

Characteristics of Static RAM
  • Static RAM is much faster than DRAM.
  • Static RAM consumes less power.

Advantages of Static RAM
  • Low power consumption.
  • Faster access speeds than DRAM.
  • Well suited to building speed-sensitive caches.

Disadvantages of Static RAM
  • Smaller memory capacity.
  • Higher manufacturing cost than DRAM.
  • More complex cell design.

DRAM: Dynamic Random Access Memory (DRAM)
Data is stored in capacitors. The capacitors that store data in DRAM gradually discharge; once the charge is gone, the data is lost. So a periodic refresh is required for DRAM to function. It is called "dynamic" because constant action, i.e. refreshing, is needed to keep the data intact. It is used to implement main memory.

Characteristics of Dynamic RAM
  • Dynamic RAM is slower than SRAM.
  • Dynamic RAM is less costly than SRAM.
  • Dynamic RAM has higher power consumption.

Advantages of Dynamic RAM
  • Lower manufacturing cost than SRAM.
  • Greater memory capacity.

Disadvantages of Dynamic RAM
  • Slow access speed.
  • High power consumption.
  • Data can be lost on power loss.

SRAM vs DRAM
  • SRAM stores information as long as power is supplied; DRAM retains it only for a few milliseconds once power is switched off.
  • SRAM stores information in transistors; DRAM stores it in capacitors.
  • SRAM needs no refreshing since no capacitors are used; the contents of DRAM's capacitors must be refreshed periodically to hold information for longer.
  • SRAM is fast; DRAM provides slower access speeds.

Q. Design a 4-bit combinational circuit decrementer using four full adders.
Ans: What is a 4-bit binary decrementer? It subtracts 1 from the binary value stored in a register; in other words, it decreases the existing value in the register by 1. For an n-bit binary decrementer, 'n' refers to the width of the register whose contents are to be decremented by 1, and we require n full adders. Thus a 4-bit binary decrementer requires 4 full adders.

Working: The circuit consists of 4 full adders connected one after the other. Each full adder has 3 inputs (a carry input, the constant 1, and a register bit A) and 2 outputs (a carry output and a sum S). A full adder basically consists of 2 half adders and an OR gate. The carry (C) from each full adder is propagated to the next, so the carry output of one full adder becomes one of the three inputs of the next. The circuit uses the concept of 2's complement: we feed 1 as an input to all 4 full adders, i.e. we add 1111 in order to subtract 1.

Reason for adding 1111: our aim is to subtract 1, which in 4-bit representation is 0001. Its 1's complement is 1110, and its 2's complement (the 1's complement plus 1) is 1111. This is why the input 1111 produces a decremented output in a 4-bit binary decrementer.

1 (in 4-bit representation) -> 0001 -> (1's complement) 1110 -> (2's complement) 1111
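The decrementer described above can be sketched in Python as an illustrative bit-level model (not a hardware description; the function names are ours): each of the four full adders receives a register bit, the constant 1, and the carry from the previous stage.

```python
# Sketch of the 4-bit decrementer: four chained full adders, each fed
# the constant 1, so the circuit adds 1111 (the 2's complement of 0001).

def full_adder(a, b, cin):
    """One full adder: returns (sum, carry_out) for three 1-bit inputs."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def decrement_4bit(bits):
    """bits = (b3, b2, b1, b0), most significant first; returns bits - 1 mod 16."""
    carry = 0
    out = []
    for a in reversed(bits):                 # feed bits LSB-first through the chain
        s, carry = full_adder(a, 1, carry)   # second input of every adder is 1
        out.append(s)
    return tuple(reversed(out))              # final carry out is discarded (mod 16)

print(decrement_4bit((0, 1, 0, 1)))  # 0101 (5) -> (0, 1, 0, 0), i.e. 4
print(decrement_4bit((0, 0, 0, 0)))  # 0000 wraps around to (1, 1, 1, 1)
```

Note that the final carry out is simply dropped, which is what gives the wrap-around behaviour when 0000 is decremented.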

Q. How can full adders be implemented using half adders?
Ans: Implementing a full adder from half adders is possible because a half adder adds two 1-bit inputs while a full adder adds three. To obtain a full adder from half adders, we add the first two inputs with one half adder, then feed its sum together with the third input into a second half adder to produce the final sum; the carry outputs of the two half adders are combined to give the final carry. Below we look at half adders and full adders, and then implement a full adder using half adders.

What is a Half Adder? A half adder is a combinational circuit that adds two 1-bit inputs to generate two outputs, sum and carry. The sum is given by XORing the two inputs; the carry is given by ANDing them. Half adders are used in various digital systems where addition of binary numbers is required, such as arithmetic circuits, digital calculators, microcontrollers and processors, communication systems, and control systems.

Block diagram for half adder: (figure not reproduced here)

What is a Full Adder? A full adder is a combinational circuit that adds three 1-bit inputs to generate two outputs, sum and carry. The sum is given by XORing all three inputs; the carry is given by the sum of the products of each pair of inputs, i.e. it is 1 whenever at least two inputs are 1. Full adders are important components in digital circuits and are used in ALUs (Arithmetic Logic Units), binary addition, address decoding, counters and registers, data encryption and decryption, and digital signal processing. Expression for sum in a full adder: from the full adder's truth table, the expression for the sum S is S = A ⊕ B ⊕ C, where A, B and C are the inputs and ⊕ represents the XOR operation.
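As a quick check of these expressions, here is a minimal Python sketch of a half adder and of a full adder built from two half adders plus an OR gate (the construction this answer describes; the function names are ours):

```python
# Half adder: sum = a XOR b, carry = a AND b.
def half_adder(a, b):
    return a ^ b, a & b

# Full adder composed of two half adders and an OR gate:
# the second half adder adds the first sum to the carry-in,
# and the OR gate merges the two carries.
def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)     # A XOR B, AB
    s, c2 = half_adder(s1, cin)   # final sum, (A XOR B)C
    return s, c1 | c2             # carry = AB + (A XOR B)C

# Exhaustive check against the arithmetic definition (the truth table):
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert s + 2 * cout == a + b + cin
print("full adder matches its truth table")
```

The exhaustive loop verifies all eight input combinations, which is exactly the full adder's truth table.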

  • First, connect inputs A and B to the first XOR gate [giving A ⊕ B] and to the first AND gate [giving AB]; this is the first half adder.
  • Then, connect the output of the first XOR gate [i.e., A ⊕ B] and the third input C to the second half adder: its XOR gate produces the final sum, and its AND gate produces (A ⊕ B)C.
  • After that, connect the output of the first AND gate [i.e., AB] and the output of the second AND gate [i.e., (A ⊕ B)C] as the inputs to the OR gate, which yields the carry of the full adder, i.e., AB + (A ⊕ B)C.
  • Since we require 1 XOR and 1 AND gate to implement a half adder, the circuit above uses 2 XOR gates, 2 AND gates and 1 OR gate.
  • From the above, we see that to implement a full adder using half adders we require 2 half adders and 1 OR gate.

Q. Define Booth's algorithm. What are its best and worst cases? Explain with examples.
Ans: Booth's algorithm is an efficient method for multiplying signed binary integers, particularly in 2's complement representation, that minimizes the number of additions and subtractions required. The best case occurs when the multiplier contains large blocks of consecutive 0s or 1s, while the worst case occurs when the multiplier alternates between 0s and 1s.

Best Case Scenario:
  • Description: The best case occurs when the multiplier contains long sequences of consecutive 0s or 1s.
  • Example: Consider multiplying 1010 (multiplicand) by 11110000 (multiplier). Since the multiplier contains long runs of 1s and 0s, Booth's algorithm requires fewer operations than the conventional shift-and-add method.

Worst Case Scenario:
  • Description: The worst case occurs when the multiplier has a pattern of alternating 0s and 1s (e.g., 01010101).
  • Example: Multiplying 1010 (multiplicand) by 0101 (multiplier). Each bit of the multiplier requires an operation (addition or subtraction), leading to a higher number of operations.
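The contrast between the two cases can be illustrated with a small Python sketch of Booth recoding: scan adjacent bit pairs of the multiplier and count how many add/subtract operations would be issued (an illustrative model, not a full multiplier; the function name is ours).

```python
def booth_op_count(bits):
    """bits: multiplier bits, most significant first (2's complement).
    Booth scans pairs (b_i, b_{i-1}) from the LSB, with an implicit 0
    to the right: 10 -> subtract multiplicand, 01 -> add, 00/11 -> skip.
    Returns the number of add/subtract operations performed."""
    prev = 0
    ops = 0
    for b in reversed(bits):                 # scan from LSB to MSB
        if (b, prev) in ((1, 0), (0, 1)):
            ops += 1
        prev = b
    return ops

print(booth_op_count([1, 1, 1, 1, 0, 0, 0, 0]))  # long runs: 1 operation
print(booth_op_count([0, 1, 0, 1, 0, 1, 0, 1]))  # alternating bits: 8 operations
```

With long runs only the run boundaries trigger operations; with alternating bits every pair is 10 or 01, so nearly every bit costs an operation.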
  • Reasoning: With alternating 0s and 1s, the algorithm must perform an operation (addition or subtraction) for nearly every bit of the multiplier, leading to a higher operation count.

Q. What are the advantages of interrupt I/O over programmed I/O?
Ans: Interrupt-initiated I/O and programmed I/O are two different methods for handling input and output operations in computer systems. Here are the advantages of interrupt-initiated I/O over programmed I/O:

1. Efficiency:
  • CPU Utilization: In interrupt-initiated I/O, the CPU can perform other tasks while waiting for I/O operations to complete, leading to better CPU utilization. In contrast, programmed I/O requires the CPU to continuously check the status of the I/O device (polling), which wastes CPU cycles.

2. Responsiveness:
  • Immediate Handling : Interrupts allow the system to respond immediately to I/O events. The CPU can be notified as soon as an I/O device is ready, leading to quicker response times compared to programmed I/O, where the CPU might be busy with other tasks.
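The difference can be sketched in Python using a thread as a stand-in for the device and an event as the "interrupt" notification (a software analogy, not real device I/O; the names are ours):

```python
import threading
import time

done = threading.Event()

def device_io():
    """Hypothetical slow device: finishes after a delay, then 'interrupts'."""
    time.sleep(0.1)
    done.set()                  # the interrupt: notify that data is ready

threading.Thread(target=device_io).start()

# Programmed I/O would spin here, burning CPU cycles:
#     while not done.is_set():
#         pass
# Interrupt-style I/O instead blocks until notified, leaving the
# CPU free to run other work in the meantime:
done.wait()
print("I/O complete")
```

The commented-out busy loop is the polling analogue; `done.wait()` is the interrupt analogue, since the waiting thread sleeps until the event fires.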

Q. What is Von-Neumann bottleneck? How can it be reduced? Ans: Von Neumann Architecture: This architecture, named after John von Neumann, uses a single bus to transfer both data and instructions between the CPU and memory.

  • The Bottleneck: The problem arises because the CPU can process data much faster than it can receive it from memory, or send it back. This means the CPU is often idle, waiting for data or instructions to arrive, creating a performance bottleneck.

  • Impact: This bottleneck can limit the overall performance of a computer system, especially when dealing with large datasets or complex computations.

  • Solutions: To mitigate the Von Neumann bottleneck, various techniques have been developed, including:

  • Caching: Using faster, smaller memory (cache) to store frequently accessed data and instructions.

  • Parallel Processing: Employing multiple CPUs or processing units to perform tasks concurrently.

  • Alternative Architectures: Exploring non-Von Neumann architectures, such as Harvard architecture, which separates data and instruction memory.

Q. What are the hazards in pipelining?
Ans: Here's a breakdown of the different types of pipeline hazards:

  1. Structural Hazards:
  • Definition: Occur when two or more instructions in the pipeline need to use the same hardware resource (e.g., memory unit, functional unit) simultaneously, and the resource can only be accessed by one instruction at a time.
  • Example: If two instructions both need to access the same memory unit in the same clock cycle, one instruction must wait for the other to finish using the resource.
  • Mitigation: Duplicating the hardware resource or using a different resource for each instruction.
  2. Data Hazards:
  • Definition: Occur when one instruction's execution depends on the result of a previous instruction that hasn't completed yet.
  • Example: If an instruction writes register R1 and the next instruction reads R1, the reader may pick up a stale value unless the pipeline intervenes.
  • Mitigation: Operand forwarding (bypassing) or stalling the dependent instruction until the result is available.
  3. Control Hazards:
    • Definition: Occur due to branch instructions, where the next instruction to be executed depends on the outcome of the branch.
    • Example: If the processor predicts the wrong path when branching, it will have to discard the instructions fetched after the branch and re-fetch the correct path.
    • Mitigation:
      • Branch Prediction: Predicting the outcome of the branch instruction to minimize the impact of mispredictions.
      • Branch Target Buffering: Storing the target addresses of branch instructions in a cache for faster access.

Q. Describe the IEEE 754 format for representing floating point numbers.
Ans: The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point computation established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE). The standard addressed many problems found in the diverse floating-point implementations of the time that made them difficult to use reliably and reduced their portability. IEEE 754 floating point is the most common representation today for real numbers on computers, including Intel-based PCs, Macs, and most Unix platforms. There are several ways to represent floating point numbers, but IEEE 754 is the most widely used. IEEE 754 has 3 basic components:
  1. The Sign of the Mantissa – As simple as the name: 0 represents a positive number, while 1 represents a negative number.
  2. The Biased Exponent – The exponent field needs to represent both positive and negative exponents, so a bias is added to the actual exponent to obtain the stored exponent.
  3. The Normalised Mantissa – The mantissa is the part of a floating-point number consisting of its significant digits. In binary we have only two digits, 0 and 1, so a normalised mantissa has a single 1 to the left of the binary point. Based on the above three components, IEEE 754 numbers are divided into two main formats: single precision and double precision.
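For single precision (32 bits: 1 sign bit, an 8-bit exponent with bias 127, and a 23-bit mantissa fraction), the three fields can be inspected with Python's standard struct module (the helper name is ours):

```python
# Unpacking the three IEEE 754 single-precision fields (sign, biased
# exponent, mantissa fraction) from the raw 32 bits of a float.
import struct

def ieee754_fields(x):
    bits = struct.unpack('>I', struct.pack('>f', x))[0]  # raw 32 bits
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF       # biased exponent (bias = 127)
    mantissa = bits & 0x7FFFFF           # fraction; leading 1 is implicit
    return sign, exponent, mantissa

# -6.5 = -1.101b x 2^2: sign 1, exponent 2 + 127 = 129, fraction .101000...
print(ieee754_fields(-6.5))   # (1, 129, 5242880)
print(ieee754_fields(1.0))    # (0, 127, 0)
```

Note the implicit leading 1 of the normalised mantissa is not stored, which is why 1.0 has a zero fraction field.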