Computer Organization and Architecture

The computer industry is moving ahead like no other. The primary driving force is the ability of chip manufacturers to pack more and more transistors onto a chip every year. More transistors, which are tiny electronic switches, mean larger memories and more powerful processors. Gordon Moore, co-founder and former chairman of Intel, noticed that each new generation of memory chips was being introduced three years after the previous one. Since each new generation had four times as much memory as its predecessor, he realized that the number of transistors on a chip was increasing at a constant rate, and he predicted that this growth would continue for decades to come. This observation has become known as Moore’s law. Today, Moore’s law is often expressed as the number of transistors on a chip doubling every 18 months. Of course, Moore’s law is not a law at all, but simply an empirical observation of how fast solid-state physicists and process engineers are advancing the state of the art, and a prediction that they will continue at the same rate in the future.

Moore’s law has created what economists call a virtuous circle. Advances in technology (transistors per chip) lead to better products and lower prices. Lower prices lead to new applications (nobody was making video games for computers when computers cost $10 million each). New applications lead to new markets and to new companies springing up to take advantage of them. The existence of all these companies leads to competition, which, in turn, creates economic demand for better technologies with which to beat the others. The circle is thus complete. The gains afforded by Moore’s law can be used in several different ways. One way is to build increasingly powerful computers at a constant price. Another is to build the same computer for less and less money every year. The computer industry has done both of these and more, resulting in the wide variety of computers available today.

A digital computer is a machine that can solve problems for people by carrying out instructions given to it. A sequence of instructions describing how to perform a certain task is called a program. The electronic circuits of each computer can recognize and directly execute a limited set of simple instructions into which all its programs must be converted before they can be executed.
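As a hypothetical illustration (not taken from the course notes), a single high-level statement such as c = a + b might correspond to a sequence of much simpler instructions of the kind a processor can execute directly. The mnemonics and register names below are invented purely for clarity:

```python
# Hypothetical illustration: the high-level statement  c = a + b
# broken down into a sequence of simple, machine-level-style instructions.
# LOAD/ADD/STORE and R1/R2 are made-up names, not a real instruction set.
program = [
    ("LOAD",  "R1", "a"),   # copy the value at memory location a into register R1
    ("LOAD",  "R2", "b"),   # copy the value at memory location b into register R2
    ("ADD",   "R1", "R2"),  # add R2 to R1, leaving the sum in R1
    ("STORE", "R1", "c"),   # copy the result from R1 back to memory location c
]
```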

There is a large gap between what is convenient for people and what is convenient for computers. People want to do X, but computers can only do Y. This leads to a problem. The goal of this course is to explain how this problem can be solved. The problem can be attacked in two ways, translation and interpretation; both involve designing a new set of instructions that is more convenient for people to use than the set of built-in machine instructions. In translation, a program written in terms of the new instructions is first converted entirely into the machine's own instructions and then executed; in interpretation, each new instruction is examined and carried out, one at a time, by a program written in the machine's own instructions. Both methods, and increasingly a combination of the two, are widely used.
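To make the idea of interpretation concrete, here is a minimal sketch (my own illustration, assuming the made-up LOAD/ADD/STORE instruction set shown earlier) of a program that fetches, decodes, and executes those simple instructions one at a time, which is essentially what a real processor does with its own machine instructions:

```python
# A minimal interpreter sketch for the made-up instruction set above.
# Each instruction is fetched, decoded, and executed in turn.
def run(program, memory):
    registers = {}                        # simulated register file
    for op, reg, arg in program:          # fetch the next instruction
        if op == "LOAD":                  # decode and execute it
            registers[reg] = memory[arg]
        elif op == "ADD":
            registers[reg] += registers[arg]
        elif op == "STORE":
            memory[arg] = registers[reg]
        else:
            raise ValueError(f"unknown instruction: {op}")
    return memory

# Example: compute c = a + b with a = 2 and b = 3.
memory = {"a": 2, "b": 3, "c": 0}
program = [
    ("LOAD",  "R1", "a"),
    ("LOAD",  "R2", "b"),
    ("ADD",   "R1", "R2"),
    ("STORE", "R1", "c"),
]
print(run(program, memory)["c"])   # prints 5
```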

Syllabus (Download)

Lesson Plan (Download)

Lecture Notes

  1. Structure of Computers, Register Transfer and Micro-operations (Download)
  2. Basic Computer Organization and Design (Download)
  3. Micro-programmed Control, Computer Arithmetic (Download)
  4. The Memory System (Download)
  5. Multiprocessors (Download)

Midterm Question Papers and Solution Manuals
Midterm 1: Question Paper | Solution Manual
Midterm 2: Question Paper | Solution Manual

Previous Year University Question Papers (Download)
