Computer System Architecture


Digital System and Numbering System

Introduction
A numeral system (or system of numeration) is a mathematical
notation for representing a set of numbers by symbols in a consistent
manner. It can be seen as the context that allows the numeral "11" to be
interpreted as the binary numeral for three, the decimal numeral for
eleven, or other numbers in different bases.

Ideally, a numeral system will:

* Represent a useful set of numbers (e.g. all whole numbers, integers,
or real numbers)

* Give every number represented a unique representation (or at least a
standard representation)

* Reflect the algebraic and arithmetic structure of the numbers.

For example, the usual decimal representation of whole numbers
gives every whole number a unique representation as a finite sequence
of digits, with the operations of arithmetic (addition, subtraction,
multiplication, and division) carried out by the standard algorithms of
arithmetic.

As a software professional, should you care about hardware?

Some software professionals look at computer hardware the same
way many riders look at their bikes: using a bike doesn't require
knowing how to build one.

Knowing how to design and build a computer may not be vital to the
computer professional, but it goes a long way toward improving their
skills, i.e., making them better riders.

For anyone going into a career involving computer programming,
system design, or the installation and maintenance of computer
systems, the principles of computer organization provide tools to
create better designs.

These include:

System design tools –

The same design theories used at the lowest level of system design are
also applied at higher levels. For example, the same methods a
circuit board designer uses to create the interface between a processor
and its memory chips are used to design the addressing scheme of an
IP network.

Software design tools –

The same procedures used to optimize digital circuits can be used for
the logic portions of software. Complex blocks of if-statements, for
example, can be simplified or made to run faster using these tools.


Improved troubleshooting skills –

A clear understanding of the inner workings of a computer gives the
technician servicing it the tools to isolate a problem quicker and
with greater accuracy.

Interconnectivity –

Hardware is needed to connect the real world to a computer's inputs
and outputs. Writing software to control a system such as an
automotive air bag could produce catastrophic results without a
clear understanding of the architecture and hardware of a
microprocessor.

This means a software engineer with experience in hardware
design has a significant advantage over hardware engineers in this
market.


Types of Signals
Analog Signals

An analog or analogue signal is any continuous-time signal where some time-varying feature of the signal represents some other time-varying quantity. It differs from a digital signal in that small fluctuations in the signal are meaningful. Analog is usually thought of in an electrical context; however, mechanical, pneumatic, hydraulic, and other systems may also convey analog signals.

An analog signal uses some property of the medium to convey the signal's information. For example, an aneroid barometer uses rotary position as the signal to convey pressure information. Electrically, the property most commonly used is voltage followed closely by frequency, current, and charge.

Any information may be conveyed by an analog signal, often such a signal is a measured response to changes in physical phenomena, such as sound, light, temperature, position, or pressure, and is achieved using a transducer.

For example, in sound recording, fluctuations in air pressure (that is to say, sound) strike the diaphragm of a microphone which causes corresponding fluctuations in a voltage or the current in an electric circuit. The voltage or the current is said to be an "analog" of the sound.

Since an analogue signal has a theoretically infinite resolution, it will always have a higher resolution than any digital system, where the resolution is in discrete steps. In practice, as analogue systems become more complex, effects such as non-linearity and noise ultimately degrade analogue resolution to the point that digital systems surpass it. In analogue systems it is difficult to detect when such degradation occurs, but in digital systems degradation can not only be detected but corrected as well.

The real world is analog. What does that mean? Well, an analog value is equivalent to a floating-point number with an infinite number of places to the right of the decimal point. For example, temperatures do not take on distinct values such as 75°, 76°, 77°, 78°, etc. They take values like 75.434535... In fact, between the temperatures 75.435° and 75.436°, there are an infinite number of possible values.

A man doesn't weigh exactly 178 pounds. Add an atom, and his weight changes. When values such as temperature or weight change over time, they follow what is called a continuous curve. Between any two values on the curve, an infinite number of values take place over an infinite number of points in time.

img

Digital Signals

Digital signals are digital representations of discrete-time signals, which are often derived from analog signals.

An analog signal is a datum that changes over time—say, the temperature at a given location; the depth of a certain point in a pond; or the amplitude of the voltage at some node in a circuit—that can be represented as a mathematical function, with time as the free variable (abscissa) and the signal itself as the dependent variable (ordinate).

A discrete-time signal is a sampled version of an analog signal: the value of the datum is noted at fixed intervals (for example, every microsecond) rather than continuously.

If individual time values of the discrete-time signal, instead of being measured precisely (which would require an infinite number of digits), are approximated to a certain precision—which, therefore, only requires a specific number of digits—then the resultant data stream is termed a digital signal. The process of approximating the precise value within a fixed number of digits, or bits, is called quantization.

In conceptual summary, a digital signal is a quantized discrete-time signal; a discrete-time signal is a sampled analog signal.

In the Digital Revolution, the usage of digital signals has increased significantly. Many modern media devices, especially the ones that connect with computers, use digital signals to represent signals that were traditionally represented as continuous-time signals; cell phones, music and video players, personal video recorders, and digital cameras are examples.

In most applications, digital signals are represented as binary numbers, so their precision of quantization is measured in bits. Suppose, for example, that we wish to measure a signal to two significant decimal digits. Since seven bits, or binary digits, can record 128 discrete values (viz., from 0 to 127), those seven bits are more than sufficient to express a range of one hundred values.

There is such a thing as an analog computer, a computer that processes information using analog levels of electricity or the positions of mechanical devices. The overwhelming majority of today's computers do not do this, however.

Instead, they represent an analog value by converting it to a number with a fixed resolution, i.e., a fixed number of digits to the right of the decimal point. This measurement is referred to as a digital value. If the value is changing with respect to time, then a sequence of measurements can be taken, the period between the measurements typically remaining fixed.

img


Types of Digital Signals
Representation of Digital Signals

Digital systems do not store numbers the way humans do. A human can remember the number 4.5 and understand that it represents a quantity. The digital system does not have this capability. Instead, digital systems work with numbers using millions of tiny switches called transistors. Each transistor can remember only one of two possible values, on or off.

This is referred to as a binary system. The values represented by the transistors of a binary system can be interpreted as needed by the application. On and off can just as easily mean 1 or 0, yes or no, true or false, up or down, or high or low. At this point, it is immaterial what the two values represent. What matters is that there are only two possible values per transistor.

The complexity of the computer comes in how the millions of transistors are designed to work together. For the purpose of this discussion, the two values of a transistor will be referred to as logic 1 and logic 0.

Now let's examine some of the methods used to represent binary data by first looking at a single binary signal. Assume we are recording the binary values present on a single wire controlling a light bulb. Excluding lights controlled by dimmer switches, a light bulb circuit is a binary system; the light is either on or off, logic 1 or logic 0 respectively.

Over time, the state of the light bulb changes following the position of the switch. The top portion of the image below represents the waveform of the binary signal controlling the light bulb based on the changes in the switch position shown in the lower half of the image.

img

This representation is much like a mathematical x-y plot where the x-axis represents time and the y-axis identifies either logic 1 or 0. Sometimes, two or more binary lines are grouped together to perform a single function.

For example, the overall lighting in a room may be controlled by three different switches controlling independent banks of lights. This circumstance may be represented with a diagram such as the one shown in the image below.

img

Alternatively, multiple lines can be combined into a more abstract representation such as the one shown in the image below.

img

Types of Digital Signals

Edges

A single binary signal can have one of two possible transitions. The first one, a transition from logic 0 to logic 1, is called a rising edge transition. The second one, a transition from logic 1 to logic 0, is called a falling edge transition.

Pulses

A binary pulse occurs when a signal changes from one value to the other for a short period, then returns to its original value. Examples of this type of signal might be the power-on or reset buttons on a computer (momentarily pressed, then released) or the button used to initialize synchronization between a PDA and a computer.

img

There are two types of pulses. The first is called a positive-going pulse, and it has an idle state of logic 0 with a short pulse to logic 1. The other one, a negative-going pulse, has an idle state of logic 1 with a short pulse to logic 0. Both of these signals are shown in image below:

img

Non-Periodic Pulse Trains

Some digital signals such as the data wires of an Ethernet link or the data and address lines of a memory interface do not have a characteristic pattern in their changes between logic 1 and logic 0. These are called non-periodic pulse trains.

img

Like music, the duration of the notes or the spaces between the notes can be longer or shorter. On the page, they do not look meaningful, but once the reader is given the tools to interpret the signal, the data they contain becomes clear.

Periodic Pulse Trains

Some signals act as the heartbeat to a digital system. For example, a signal might tell a system, "Every 1/100th of a second, you need to ____." The output from a car's processor to control the engine's spark plug is such a signal. These signals are referred to as periodic pulse trains.

Like the drum beat to a song, a periodic pulse train is meant to synchronize events or keep processes moving. The defining characteristic of this type of waveform is that all measurements between any two subsequent, identical parts of the waveform produce the same value. This value is referred to as the period, T, and it has units of seconds/cycle (read seconds per cycle).

img

The measurement of the period does not fully describe a periodic pulse train, however; a second measurement, the width of the pulse, tw, is needed. For example, the two signals in the image below have the same period. Their pulse widths, however, are not the same. In signal a, tw is about one-fourth of the signal's period, while tw of signal b is about one-half of the signal's period.

img

It is also common to represent the rate of the pulses in a periodic pulse train with the inverse of the period. This measurement, called the frequency of the periodic pulse train, has units of cycles/second, otherwise known as Hertz (Hz). To determine the frequency of a periodic pulse train from the period, invert the measurement for the period.

img

Pulse-Width Modulation

The last measurement of a periodic waveform is the duty cycle. The duty cycle represents the percentage of time that a periodic signal is a logic '1'. For example, image below represents a periodic pulse train where tw is about one-quarter or 25% of the duration of the period. Therefore, the duty cycle of this signal is 25%.

img
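To make these relationships concrete, here is a minimal Python sketch (the function names are ours, not from the text) that computes frequency and duty cycle from a period and a pulse width:

def frequency(period_s):
    # Frequency is the inverse of the period: f = 1/T, in Hz when T is in seconds.
    return 1.0 / period_s

def duty_cycle(pulse_width_s, period_s):
    # Duty cycle is the fraction of the period spent at logic 1, as a percentage.
    return 100.0 * pulse_width_s / period_s

print(frequency(0.010))           # a 10 ms period gives 100.0 Hz
print(duty_cycle(0.0025, 0.010))  # a 2.5 ms pulse in a 10 ms period gives 25.0%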


Binary Numbering System
The binary numeral system, or base-2 number system, is a numeral system that represents numeric values using two symbols, usually 0 and 1. More specifically, the usual base-2 system is a positional notation with a radix of 2. Owing to its straightforward implementation in electronic circuitry, the binary system is used internally by virtually all modern computers.

The binary number system is the most important one in digital systems, but several others are also important. The decimal system is important because it is universally used to represent quantities outside a digital system. This means that there will be situations where decimal values have to be converted to binary values before they are entered into the digital system.

In addition to binary and decimal, two other number systems find widespread application in digital systems. The octal (base-8) and hexadecimal (base-16) number systems are both used for the same purpose: to provide an efficient means for representing large binary numbers.

History of Binary Numbering System

The ancient Indian mathematician Pingala presented the first known description of a binary numeral system between the 8th and 4th centuries BC, written in Hindu numerals. The numeration system was based on the Eye of Horus Old Kingdom numeration system.

A full set of 8 trigrams and 64 hexagrams, analogous to the 3-bit and 6-bit binary numerals, were known to the ancient Chinese in the classic text I Ching. Similar sets of binary combinations have also been used in traditional African divination systems such as Ifá as well as in medieval Western geomancy.

In 1605 Francis Bacon discussed a system by which letters of the alphabet could be reduced to sequences of binary digits, which could then be encoded as scarcely visible variations in the font in any random text.

Importantly for the general theory of binary encoding, he added that this method could be used with any objects at all: "provided those objects be capable of a twofold difference only; as by Bells, by Trumpets, by Lights and Torches, by the report of Muskets, and any instruments of like nature."

The modern binary number system was fully documented by Gottfried Leibniz in the 17th century. Leibniz's system used 0 and 1, like the modern binary numeral system.

In 1854, British mathematician George Boole published a landmark paper detailing a system of logic that would become known as Boolean algebra. His logical system proved instrumental in the development of the binary system, particularly in its implementation in electronic circuitry.

In 1937, Claude Shannon produced his master's thesis at MIT that implemented Boolean algebra and binary arithmetic using electronic relays and switches for the first time in history. Entitled A Symbolic Analysis of Relay and Switching Circuits, Shannon's thesis essentially founded practical digital circuit design.

Binary equivalent of different Decimal values

img


Binary Conversion
Unsigned Binary Counting

The simplest form of numeric representation with bits is unsigned binary. When we count upward through the positive integers using decimal, we start with a 0 in the one's place and increment that value until we reach the upper limit of a single digit, i.e., 9. At that point, we've run out of the "symbols" we use to count, and we need to increment the next digit, the ten's place. We then reset the one's place to zero, and start the cycle again.

img

Since computers do not have an infinite number of transistors, the number of digits that can be used to represent a number is limited. This would be like saying we could only use the hundreds, tens, and ones place when counting in decimal. This has two results. First, it limits the number of values we can represent. For our example where we are only allowed to count up to the hundreds place in decimal, we would be limited to the range of values from 0 to 999.

Second, we need a way to show others that we are limiting the number of digits. This is usually done by adding leading zeros to the number to fill up any unused places. For example, a decimal 18 would be written 018 if we were limited to three decimal digits. Counting with bits, hereafter referred to as counting in binary, is subject to these same issues.

The only difference is that decimal uses ten symbols (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9) while binary only uses two symbols (0 and 1). In binary, the rightmost place is considered the ones place just like decimal. The next place is incremented after the ones place reaches 1.

This means that the second place in binary represents the value after 1, i.e., a decimal 2. The third place is incremented after a 1 is in both the ones place and the twos place, i.e., we've counted to a decimal 3. Therefore, the third place represents a decimal 4. Continuing this process shows us that each place in binary represents a successive power of two.

The table below uses 5 bits to count up to a decimal 17. Examine each row where a single one is present in the binary number. This reveals what that position represents. For example, a binary 01000 is shown to be equivalent to a decimal 8. Therefore, the fourth bit position from the right is the 8's position.

img

This information will help us develop a method for converting unsigned binary numbers to decimal and back to unsigned binary. Some of you may recognize this as "base-2" math. Naming the base also gives us a method for indicating which representation is being used when writing a number down on paper.

For example, does the number 100 represent a decimal value or a binary value? Since binary is base-2 and decimal is base-10, a subscript "2" is placed at the end of all binary numbers in this book and a subscript "10" is placed at the end of all decimal numbers. This means a binary 100 should be written as 100₂ and a decimal 100 should be written as 100₁₀.

Converting binary to decimal

Converting binary into decimal is very simple. Say we want to convert the 8-bit value 10011101 into a decimal value; we can use a table like the one below:

128   64   32   16    8    4    2    1
  1    0    0    1    1    1    0    1


As you can see, we have placed the numbers 1, 2, 4, 8, 16, 32, 64, and 128 (powers of two) in reverse numerical order and then written the binary value below them. To convert, simply take a value from the top row wherever there is a 1 below it and add those values together; in our example we would have 128 + 16 + 8 + 4 + 1 = 157.

For a 16 bit value you would use the decimal values 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768 (powers of two) for the conversion.

Because we know binary is base 2, the above can be written as:

1*2^7 + 0*2^6 + 0*2^5 + 1*2^4 + 1*2^3 + 1*2^2 + 0*2^1 + 1*2^0 = 157
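The same positional method is easy to express in code. Here is a minimal Python sketch (the function name is ours) that applies the powers-of-two weighting described above:

def binary_to_decimal(bits):
    # Each step doubles the running total and adds the next bit,
    # which is equivalent to summing bit * 2^position.
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)
    return value

print(binary_to_decimal("10011101"))  # 157

Python's built-in int("10011101", 2) performs the same conversion.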

Converting decimal to binary

Converting decimal to binary is also very simple: divide the decimal value by 2 and write down the remainder, then repeat this process until you cannot divide by 2 anymore.

For example, let's take the decimal value 157:

157 / 2 = 78    with a remainder of 1
 78 / 2 = 39    with a remainder of 0
 39 / 2 = 19    with a remainder of 1
 19 / 2 = 9     with a remainder of 1
  9 / 2 = 4     with a remainder of 1
  4 / 2 = 2     with a remainder of 0
  2 / 2 = 1     with a remainder of 0
  1 / 2 = 0     with a remainder of 1   <--- to convert, write this remainder first


Next write down the value of the remainders from bottom to top (in other words write down the bottom remainder first and work your way up the list) which gives:

10011101 = 157
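The repeated-division procedure translates directly into code. A minimal Python sketch (the function name is ours):

def decimal_to_binary(n):
    # Divide by 2, record each remainder, and stop when the quotient is 0.
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))
        n //= 2
    # The bottom remainder is written first, so reverse the list.
    return "".join(reversed(remainders))

print(decimal_to_binary(157))  # 10011101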

Binary Terminology

When writing values in decimal, it is common to separate the places or positions of large numbers in groups of three digits separated by commas. For example, 345323745₁₀ is typically written 345,323,745₁₀ showing that there are 345 millions, 323 thousands, and 745 ones. This practice makes it easier to read and comprehend the magnitude of the numbers.

Binary numbers are also divided into components depending on their application. Each binary grouping has been given a name. To begin with, a single place or position in a binary number is called a bit, short for binary digit. For example, the binary number 0110₂ is made up of four bits. The rightmost bit, the one that represents the ones place, is called the Least Significant Bit or LSB.

The leftmost bit, the one that represents the highest power of two for that number, is called the Most Significant Bit or MSB. Note that the MSB represents a bit position. It doesn't mean that a '1' must exist in that position. The next four terms describe how bits might be grouped together.


Nibble – A four bit binary number

Byte – A unit of storage for a single character, typically an eight bit (2 nibble) binary number (short for binary term)

Word – Typically a sixteen bit (2 byte) binary number

Double Word – A thirty-two bit (2 word) binary number

The following are some examples of each type of binary number:

Bit – 1₂

Nibble – 1010₂

Byte – 10100101₂

Word – 1010010111110000₂

Double Word – 10100101111100001100111011101101₂



Sampling
In signal processing, sampling is the reduction of a continuous signal to a discrete signal. A common example is the conversion of a sound wave (a continuous-time signal) to a sequence of samples (a discrete-time signal).

A sample refers to a value or set of values at a point in time and/or space.

A sampler is a subsystem or operator that extracts samples from a continuous signal. A theoretical ideal sampler multiplies a continuous signal with a Dirac comb. This multiplication "picks out" values, but the result is still continuous-valued. If this signal is then discretized (i.e., converted into a sequence) and quantized along all dimensions, it becomes a discrete signal.

In general, an n-bit analog-to-digital converter divides the analog range into 2^n − 1 increments. The image below presents four graphs, each with a different number of bits providing different levels of resolution. The figure shows how the addition of a bit can improve the resolution of the values represented by the binary integers.

Earlier, it was mentioned how a computer can only capture a "snapshot" or sample of an analog voltage. This is sufficient for slowly varying analog values, but if a signal is varying quickly, details might be missed.

To improve the signal's digital representation, the rate at which the samples are taken, the sampling rate, needs to be increased. There is also a chance of missing a higher frequency because the sampling rate is too slow. This is called aliasing, and there are examples of it in everyday life.

1. When riding in a car at night, you may have noticed that at times the wheels of an adjacent car appear to be spinning at a different rate than they really are or even appear to spin backwards. (If you have no idea what I'm talking about, watch the wheels of the car next to you the next time you are a passenger riding at night under street lights.)

2. The effect is caused by the fact that the light from street lamps actually pulses, a fact that is usually not detectable with the human eye. This pulsing provides a sampling rate, and if the sampling rate is not fast enough for the spinning wheel, the wheel appears to be spinning at a different rate than it really is. Street lights are not necessary to see this effect. Your eye has a sampling rate of its own which means that you may experience this phenomenon in the day time.

img

Aliasing is also the reason fluorescent lights are never used in sawmills. Fluorescent lights blink much like a very fast strobe light and can make objects appear as if they are not moving. If the frequency of the fluorescent lights and the speed of a moving saw blade are multiples of each other, it can appear as if the spinning blade is not moving at all.

Both of these examples are situations where aliasing has occurred. If a signal's frequency is faster than the sampling rate, then information will be lost, and the collected data will never be able to duplicate the original.

To avoid aliasing, the rate at which samples are taken must be more than twice as fast as the highest frequency you wish to capture. This is called the Nyquist Theorem. For example, the sampling rate for audio CDs is 44,100 samples/second. Dividing this number in half gives us the highest frequency that an audio CD can play back, i.e., 22,050 Hz.
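As a quick sanity check of the Nyquist theorem, here is a one-line Python sketch (the function name is ours):

def nyquist_limit(sampling_rate_hz):
    # The highest frequency a sampling rate can faithfully capture is half that rate.
    return sampling_rate_hz / 2.0

print(nyquist_limit(44100))  # 22050.0 Hz, the audio CD playback limit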

For an analog telephone signal, a single sample is converted to an 8-bit integer. If these samples are transmitted across a single channel of a T1 line which has a data rate of 56 Kbps (kilobits per second), then we can determine the sampling rate.

img

Dividing the 56 Kbps channel rate by the 8 bits in each sample gives a sampling rate of 7,000 samples per second. This means that the highest analog frequency that can be transmitted across a telephone line using a single channel of a T1 link is 7,000 ÷ 2 = 3,500 Hz. That's why the quality of voices sent over the telephone is poor when compared to CD quality. Although telephone users can still recognize the voice of the caller on the opposite end of the line when the higher frequencies are eliminated, their speech often sounds muted.


Hexadecimal Numeral System
Hexadecimal, base-16, or simply hex, is a numeral system with a radix, or base, of 16, usually written using the symbols 0–9 and A–F, or a–f. Its primary purpose is to represent the binary code in a format easier for humans to read, and acts as a form of shorthand, in which one hexadecimal digit stands in place of four binary bits.

For example, the decimal numeral 79, whose binary representation is 01001111, is 4F in hexadecimal (4 = 0100, F = 1111). IBM introduced the current hexadecimal system to the computing world; an earlier version, using the digits 0–9 and u–z, had been introduced in 1956, and had been used by the Bendix G-15 computer.

It is usually difficult for a person to look at a binary number and instantly recognize its magnitude. Unless you are quite experienced at using binary numbers, recognizing the relative magnitudes of 10101101₂ and 10100101₂ is not immediate (173₁₀ is greater than 165₁₀). Nor is it immediately apparent to us that 1001101101₂ equals 621₁₀ without going through the process of calculating 512 + 64 + 32 + 8 + 4 + 1.

There is another problem: we are prone to creating errors when writing or typing binary numbers. As a quick exercise, write the binary number 1001011111101101001000111₂ onto a sheet of paper. Did you make a mistake? Most people would have made at least one error.

To make the binary representation of numbers easier on us humans, there is a shorthand representation for binary values. It begins by partitioning a binary number into its nibbles starting at the least significant bit (LSB).

An example is shown below:

img

Next, a symbol is used to represent each of the possible combinations of bits in a nibble. We start by numbering them with the decimal values equivalent to their binary value, i.e.:

img

Table below presents the mapping between the sixteen patterns of 1's and 0's in a binary nibble and their corresponding decimal and hexadecimal (hex) values.

Binary    Decimal    Hexadecimal

0000      0          0
0001      1          1
0010      2          2
0011      3          3
0100      4          4
0101      5          5
0110      6          6
0111      7          7
1000      8          8
1001      9          9
1010      10         A
1011      11         B
1100      12         C
1101      13         D
1110      14         E
1111      15         F


Another way to look at it is that hexadecimal counting is similar to decimal except that instead of having ten numerals, it has sixteen. This is also referred to as a base-16 numbering system.

How do we convert binary to hexadecimal? Begin by dividing the binary number into its nibbles (if the number of bits is not divisible by 4, add leading zeros), then nibble-by-nibble use the table above to find the hexadecimal equivalent to each 4-bit pattern.

For example:

img

Therefore, 1001011110110100100111₂ = 25ED27₁₆. Notice the use of the subscript "16" to denote hexadecimal representation.
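The nibble-by-nibble conversion is mechanical enough to automate. A minimal Python sketch (the function name is ours) that pads to a whole number of nibbles and maps each one through the table above:

def binary_to_hex(bits):
    # Pad with leading zeros so the length is a multiple of 4.
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    digits = "0123456789ABCDEF"
    # Translate each 4-bit group into a single hex digit.
    return "".join(digits[int(bits[i:i+4], 2)] for i in range(0, len(bits), 4))

print(binary_to_hex("1001011110110100100111"))  # 25ED27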


BCD Numeral System
Binary-Coded Decimal (BCD) is an encoding for decimal numbers in which each digit is represented by its own binary sequence. Its main virtue is that it allows easy conversion to decimal digits for printing or display and faster decimal calculations.

Its drawbacks are the increased complexity of circuits needed to implement mathematical operations and a relatively inefficient encoding – it occupies more space than a pure binary representation. Even though the importance of BCD has diminished, it is still widely used in financial, commercial, and industrial applications.

In BCD, a digit is usually represented by four bits which, in general, represent the values/digits/characters 0-9. Other bit combinations are sometimes used for sign or other indications.

Usage of BCD

When was the last time you multiplied your electricity bill number by 5? Or have you ever added 215 to your mobile number? These questions seem silly, but they reveal an important fact about numbers. Some numbers do not need to have mathematical operations performed on them, and therefore, do not need to have a mathematically correct representation in binary.

In an effort to afford decimal notation the same convenience of conversion to binary that hex has, Binary Coded Decimal (BCD) was developed. It allows for fast conversion to binary of integers that do not require mathematical operations.

As in hex, each decimal digit represents a nibble of the binary equivalent. Table below shows the conversion between each decimal digit and the binary equivalent.

img

It is important to note that there is no algorithmic conversion between BCD and decimal. BCD is only a method for representing decimal numbers in binary. Another item of note is that not all binary numbers convert from BCD to decimal. 0101 1011 0101 for example is an illegal BCD value because the second nibble, 1011, does not have a corresponding decimal value.

There are two primary advantages of BCD over binary. First, any mathematical operation based on a factor of ten is simpler in BCD. Multiplication by ten, for example, appends a nibble of zeros to the right side of the number.

All it takes to truncate or round a base-10 value in BCD is to zero the appropriate nibbles. Because of this advantage, BCD is used frequently in financial applications due to legal requirements that decimal values be exactly represented.

The second advantage is that conversion between entered or displayed numeric characters and the binary value being stored is fast and does not require much code.

The primary disadvantage is that unless the operation is based on a power of ten, mathematical operations are more complex and require more hardware. In addition, BCD is not as compact as unsigned binary and may require more memory for storage. BCD can be used to represent signed values too, although there are many implementations.

Different processor manufacturers use different methods making it hard to select a standard. One of the easiest ways to represent negative numbers in BCD is to add a nibble at the beginning of the number to act as a plus/minus sign. By using one of the illegal BCD values to represent a negative sign and another to represent a positive sign, BCD values can be made negative or positive.

Binary values of 1010, 1100, or 1110 typically mean the number is positive while binary values of 1011 or 1101 mean the number is negative. For example, –1234 in signed BCD would be 1101 0001 0010 0011 0100 while +1234 would be 1100 0001 0010 0011 0100.

BCD values preceded with 1111 typically indicate unsigned values.
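Because each BCD nibble is just a decimal digit in binary, encoding and decoding are straightforward. A minimal Python sketch (the function names are ours) for unsigned values:

def decimal_to_bcd(number_string):
    # One 4-bit nibble per decimal digit.
    return " ".join(format(int(d), "04b") for d in number_string)

def bcd_to_decimal(nibbles):
    digits = []
    for nib in nibbles.split():
        value = int(nib, 2)
        if value > 9:
            # Nibbles above 1001 (9) have no decimal equivalent.
            raise ValueError(nib + " is not a legal BCD nibble")
        digits.append(str(value))
    return "".join(digits)

print(decimal_to_bcd("1234"))                 # 0001 0010 0011 0100
print(bcd_to_decimal("0001 0010 0011 0100"))  # 1234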


Gray Codes
The use of binary counting sequences is common in digital applications. For example, an n-bit binary value can be used to identify the position of a rotating shaft as being within one of 2^n different arcs.

As the shaft turns, a sensor can detect which of the shaft's arcs it is aligned with by reading a digital value and associating it with a specific arc. By remembering the previous position and timing the changes between positions, a processor can also compute speed and direction.

Image below shows how a shaft's position might be divided into eight arcs using three bits. This would allow a processor to determine the shaft's position to within 360°/8 = 45°.

img
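The key property of a Gray code, which makes it suited to shaft encoders like the one above, is that successive values differ in exactly one bit, so a sensor can never misread a position by more than one arc during a transition. Here is a minimal Python sketch of the standard reflected Gray code (the formula n ^ (n >> 1) is the usual construction, not taken from the text above):

def gray(n):
    # Reflected binary Gray code: XOR n with itself shifted right by one.
    return n ^ (n >> 1)

for arc in range(8):  # the eight 45-degree arcs of a 3-bit shaft encoder
    print(arc, format(gray(arc), "03b"))
# Output: 000, 001, 011, 010, 110, 111, 101, 100 - one bit changes per step.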



Binary Arithmetic

Binary Addition
Basic Concepts behind the Binary System

To understand binary numbers, begin by recalling elementary school math. When we first learned about numbers, we were taught that, in the decimal system, things are organized into columns:

H | T | O
1 | 9 | 3

such that "H" is the hundreds column, "T" is the tens column, and "O" is the ones column. So the number "193" is 1-hundreds plus 9-tens plus 3-ones.

Years later, we learned that the ones column meant 10^0, the tens column meant 10^1, the hundreds column 10^2 and so on, such that

10^2|10^1|10^0
1 | 9 | 3

the number 193 is really {(1*10^2)+(9*10^1)+(3*10^0)}.

As you know, the decimal system uses the digits 0-9 to represent numbers. If we wanted to put a larger number in column 10^n (e.g., 10), we would have to multiply 10*10^n, which would give 10^(n+1), and be carried a column to the left.

For example, putting ten in the 10^0 column is impossible, so we put a 1 in the 10^1 column, and a 0 in the 10^0 column, thus using two columns. Twelve would be 12*10^0, or 10^0(10+2), or 10^1+2*10^0, which also uses an additional column to the left (12).

The binary system works under the exact same principles as the decimal system, only it operates in base 2 rather than base 10. In other words, instead of columns being

10^2|10^1|10^0

they are
2^2|2^1|2^0

Instead of using the digits 0-9, we only use 0-1 (again, if we used anything larger it would be like multiplying 2*2^n and getting 2^(n+1), which would not fit in the 2^n column; it would shift you one column to the left).

For example, "3" in binary cannot be put into one column. The first column we fill is the right-most column, which is 2^0, or 1. Since 3>1, we need to use an extra column to the left, and indicate it as "11" in binary (1*2^1) + (1*2^0).

Binary Addition

Regardless of the numbering system, the addition of two numbers with multiple digits is performed by adding the corresponding digits of a single column together to produce a single digit result. For example, 3 added to 5 using the decimal numbering system equals 8.

The 8 is placed in the same column of the result where the 3 and 5 came from. All of these digits, 3, 5, and 8, exist in the decimal numbering system, and therefore can remain in a single column.

In some cases, the result of the addition in a single column might be more than 9 making it necessary to place a '1' overflow or carry to the column immediately to the left. If we add 6 to 5 for example, we get 11 which is too large to fit in a single decimal digit. Therefore, 10 is subtracted from the result leaving 1 as the new result for that column.

The subtraction of 10 is compensated for by placing a carry in the next highest column, the ten's place. Another way of saying this is that 6 added to 5 equals 1 with a carry of 1. It is important to note that the addition of two digits in decimal can never result in a value greater than 18. Therefore, the carry to the next highest position will never be larger than 1.

Binary addition works the same way except that we're limited to two digits. Three of the addition operations, 0+0, 0+1, and 1+0, result in 0 or 1, digits that already exist in the binary numbering system. This means no carry will be needed.

Consider the addition of decimal numbers:

  23
+ 48
----

We begin by adding 3+8=11. Since 11 is greater than 10, a one is put into the 10's column (carried), and a 1 is recorded in the one's column of the sum. Next, add (2+4)+1 = 7 (the extra 1 comes from the carry), which is put in the 10's column of the sum. Thus, the answer is 71.

Binary addition works on the same principle, but the numerals are different. Begin with one-bit binary addition:

  0     0     1
 +0    +1    +0
 --    --    --
  0     1     1

1+1 carries us into the next column. In decimal form, 1+1=2. In binary, any digit higher than 1 puts us a column to the left (as would 10 in decimal notation). The decimal number "2" is written in binary notation as "10" (1*2^1)+(0*2^0). Record the 0 in the ones column, and carry the 1 to the twos column to get an answer of "10." In our vertical notation,

  1
+ 1
---
 10

The process is the same for multiple-bit binary numbers:

  1010
+ 1111
------

* Step one:
Column 2^0: 0+1=1.
Record the 1.
Temporary Result: 1; Carry: 0

* Step two:
Column 2^1: 1+1=10.
Record the 0, carry the 1.
Temporary Result: 01; Carry: 1

* Step three:
Column 2^2: 1+0=1. Add 1 from carry: 1+1=10.
Record the 0, carry the 1.
Temporary Result: 001; Carry: 1

* Step four:
Column 2^3: 1+1=10.
Add 1 from carry: 10+1=11.
Record the 11.
Final result: 11001

Alternately:

  11   (carry)
  1010
+ 1111
------
 11001

Always remember

* 0+0=0
* 1+0=1
* 1+1=10

Try a few examples of binary addition:

   111      101      111
 + 110    + 111    + 111
 -----    -----    -----
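To check your answers, binary addition can be sketched in one line of Python using its built-in base-2 conversions:

def add_binary(a, b):
    # Convert both strings to integers, add, and convert the sum back to binary.
    return bin(int(a, 2) + int(b, 2))[2:]

print(add_binary("1010", "1111"))  # 11001, matching the worked example above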


Binary Subtraction
Just as with addition, we're going to use the decimal numbering system to illustrate the process used in the binary numbering system for subtraction.

There are four possible cases of single-bit binary subtraction: 0 – 0, 0 – 1, 1 – 0, and 1 – 1. As long as the value being subtracted from (the minuend) is greater than or equal to the value subtracted from it (the subtrahend), the process is contained in a single column.

Minuend:      0     1     1
Subtrahend:  -0    -0    -1
             --    --    --
              0     1     0

But what happens in the one case when the minuend is less than the subtrahend? As in decimal, a borrow must be taken from the next most significant digit. The same is true for binary.

img

Pulling 1 from the next highest column in binary allows us to add 10₂ or a decimal 2 to the current column. For the previous example, 10₂ added to 0 gives us 10₂ or a decimal 2. When we subtract 1 from 2, the result is 1.

img


Floating Point Arithmetic
Binary numbers can also have decimal points, and to show you how, we will once again begin with decimal numbers. For decimal numbers with decimal points, the standard way to represent the digits to the right of the decimal point is to continue the powers of ten in descending order starting with −1, where 10^-1 = 1/10 = 0.1.

That means that the number 6.5342 has 5 increments of 10^-1 (tenths), 3 increments of 10^-2 (hundredths), 4 increments of 10^-3 (thousandths), and 2 increments of 10^-4 (ten-thousandths).

The table below shows this graphically.

img

Therefore, our example has the decimal value 6*1 + 5*0.1 + 3*0.01 + 4*0.001 + 2*0.0001 = 6.5342.

Binary representation of real numbers works the same way except that each position represents a power of two, not a power of ten. To convert 10.01101 to decimal for example, use descending negative powers of two to the right of the decimal point.

Exponent         2     1     0     -1     -2      -3      -4       -5
Position Value   4     2     1     0.5    0.25    0.125   0.0625   0.03125
Sample Values    0     1     0     0      1       1       0        1


Therefore, our example has the decimal value 0*4 + 1*2 + 0*1+0*0.5 + 1*0.25 + 1*0.125 + 0*0.0625 + 1*0.03125 = 2.40625. This means that the method of conversion is the same for real numbers as it is for integer values; we've simply added positions representing negative powers of two.

Computers, however, use a form of binary more like scientific notation to represent floating-point or real numbers. For example, with scientific notation we can represent the large value 342,370,000 as 3.4237 x 10^8.

This representation consists of a decimal component or mantissa of 3.4237 with an exponent of 8. Both the mantissa and the exponent are signed values allowing for negative numbers and for negative exponents respectively.

Binary works the same way using 1's and 0's for the digits of the mantissa and exponent and using 2 as the multiplier that moves the decimal point left or right.

For example, the binary number 100101101.010110 would be represented as:

1.00101101010110 * 2^8
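Python can illustrate this mantissa/exponent decomposition. A minimal sketch using math.frexp, which normalizes the mantissa to [0.5, 1) rather than the [1, 2) form used above, so the exponent comes out one higher:

import math

value = 301.34375  # 100101101.010110 in binary

mantissa, exponent = math.frexp(value)  # value == mantissa * 2**exponent
print(mantissa, exponent)               # 0.58856201171875 9

# Doubling the mantissa and reducing the exponent by one recovers the
# 1.00101101010110 * 2^8 form from the text.
print(mantissa * 2, exponent - 1)       # 1.1771240234375 8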


Hexadecimal Numbers
Hexadecimal numbers (base 16) can be added using the same method. The difference is that there are more digits in hexadecimal than there are in decimal. For example, in decimal, adding 5 and 7 results in 2 with a carry to the next highest position. In hexadecimal, however, 5 added to 7 does not go beyond the range of a single digit. In this case, 5 + 7 = C₁₆ with no carry. It isn't until a result greater than F₁₆ is reached (a decimal 15₁₀) that a carry is necessary.

In decimal, if the result of an addition is greater than 9, subtract 10₁₀ to get the result for the current column and add a carry to the next column. In binary, when a result is greater than 1, subtract 10₂ (i.e., 2₁₀) to get the result for the current column then add a carry to the next column. In hexadecimal addition, if the result is greater than F₁₆ (15₁₀), subtract 10₁₆ (16₁₀) to get the result for the current column and add a carry to the next column.

D₁₆ + 5₁₆ = 13₁₀ + 5₁₀ = 18₁₀

By moving a carry to the next highest column, we change the result for the current column by subtracting 16₁₀.

18₁₀ = 2₁₀ + 16₁₀ = 2₁₆ with a carry to the next column

Therefore, D₁₆ added to 5₁₆ equals 2₁₆ with a carry to the next column. Just like decimal and binary, the addition of two hexadecimal digits never generates a carry greater than 1. The following shows how adding the largest hexadecimal digit, F₁₆, to itself along with a carry from the previous column still does not require a carry larger than 1 to the next highest column.

F₁₆ + F₁₆ + 1 = 15₁₀ + 15₁₀ + 1 = 31₁₀ = 15₁₀ + 16₁₀ = F₁₆ with a 1 carry to the next column

When learning hexadecimal addition, it might help to have a table showing the hexadecimal and decimal equivalents such as that shown in Table below. This way, the addition can be done in decimal, the base with which most people are familiar, and then the result can be converted back to hex.

Hex     Dec

0₁₆     0₁₀
1₁₆     1₁₀
2₁₆     2₁₀
3₁₆     3₁₀
4₁₆     4₁₀
5₁₆     5₁₀
6₁₆     6₁₀
7₁₆     7₁₀
8₁₆     8₁₀
9₁₆     9₁₀
A₁₆     10₁₀
B₁₆     11₁₀
C₁₆     12₁₀
D₁₆     13₁₀
E₁₆     14₁₀
F₁₆     15₁₀


Example

Add 3DA32₁₆ to 4292F₁₆.

Solution

Just like in binary and decimal, place one of the numbers to be added on top of the other so that the columns line up.

  3 D A 3 2
+ 4 2 9 2 F

Adding 2₁₆ to F₁₆ goes beyond the limit of digits hexadecimal can represent. It is equivalent to 2₁₀ + 15₁₀, which equals 17₁₀, a value greater than 16₁₀. Therefore, we need to subtract 10₁₆ (16₁₀), giving us a result of 1 with a carry into the next position.

        1
  3 D A 3 2
+ 4 2 9 2 F
-----------
          1

For the next column, the 16^1 position, we have 1 + 3 + 2, which equals 6. This result is less than 16₁₀, so there is no carry to the next column.

        1
  3 D A 3 2
+ 4 2 9 2 F
-----------
        6 1

The 16^2 position has A₁₆ + 9₁₆, which in decimal is equivalent to 10₁₀ + 9₁₀ = 19₁₀. Since this is greater than 16₁₀, we must subtract 16₁₀ to get the result for the 16^2 column and add a carry in the 16^3 column.

    1   1
  3 D A 3 2
+ 4 2 9 2 F
-----------
      3 6 1

For the 16^3 column, we have 1₁₆ + D₁₆ + 2₁₆, which is equivalent to 1₁₀ + 13₁₀ + 2₁₀ = 16₁₀. This gives us a zero for the result in the 16^3 column with a carry.

  1 1   1
  3 D A 3 2
+ 4 2 9 2 F
-----------
    0 3 6 1

Last of all, 1 + 3 + 4 = 8, which is the same in both decimal and hexadecimal, so the result is 3DA32₁₆ + 4292F₁₆ = 80361₁₆:

  1 1   1
  3 D A 3 2
+ 4 2 9 2 F
-----------
  8 0 3 6 1
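Python's literal hex notation makes the whole example easy to verify:

print(hex(0x3DA32 + 0x4292F))  # 0x80361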



Logic Gates

Introduction to Logic Gates
Unless you are an electrical engineer, an understanding of the operation of transistors is unnecessary. One level above the transistors, however, is a set of basic building blocks for digital circuitry. These building blocks are called logic gates, and it is at this level that we will begin our discussion.

A logic gate has the basic format shown below in the image. It takes one or more binary signals as its inputs, and using a specific algorithm, outputs a single bit as a result. Each time the inputs change, the output bit changes in a predictable fashion.

img

For example, the algorithm for a specific gate may cause a one to be output if an odd number of ones are present at the gate's input and a zero to be output if an even number of ones is present.

A number of standard gates exist, each one of which has a specific symbol that uniquely identifies its function. Image below presents the symbols for the four primary types of gates that are used in digital circuit design.

img


AND Logic Gate
The expression X = A * B reads as "X equals A AND B". The multiplication sign stands for the AND operation, which works the same as ordinary multiplication of 1s and 0s. The AND operation produces a result of 1 only for the single case when all of the input variables are 1.

The output is 0 for any case where one or more inputs are 0.

img

img

An example of a three-input AND gate and its truth table is as follows:

img

img

With the AND operation, 1*1 = 1, 1*1*1 = 1 and so on.


NOT Gate
This logic gate, sometimes referred to as an inverter, is the only one that has a single input. Its input goes into the left side of a triangle symbol while its output exits the gate through a small circle placed at the tip of the opposite corner. Note that it is the small circle that defines the operation of this gate, so it should not be left out.

The NOT gate is used to flip the value of a digital signal. In other words, it changes a logic 1 input to logic 0, or a logic 0 input to logic 1. An example of an inverter might be the light detection circuit used to control the automatic headlights of a car.

During the daylight hours, sunshine enters the light detector, which is typically installed on the top surface of the car's dashboard. This acts as a logic 1 input. Since it is daytime, the headlights need to be turned off, logic 0. When the sun goes down and no light enters the light detector, logic 0, the headlights must be turned on, logic 1.

The NOT operation is unlike the OR and AND operations in that it can be performed on a single input variable. For example, if the variable A is subjected to the NOT operation, the result x can be expressed as

x = A'

where the prime (') represents the NOT operation. This expression is read as:
x equals NOT A
x equals the inverse of A
x equals the complement of A

Each of these is in common usage and all indicate that the logic value of x = A' is opposite to the logic value of A.

The truth table of the NOT operation is as follows:

img

img

1' = 0 because NOT 1 is 0
0' = 1 because NOT 0 is 1

The NOT operation is also referred to as inversion or complementation, and these terms are used interchangeably.


OR Gate
The OR gate is a digital logic gate that implements logical disjunction - it behaves according to the truth table shown below. A HIGH output (1) results if one or both of the inputs to the gate are HIGH (1). If neither input is HIGH, a LOW output (0) results.

The expression X = A + B reads as "X equals A OR B".

The + sign stands for the OR operation, not for ordinary addition.

The OR operation produces a result of 1 when any of the input variables is 1.

The OR operation produces a result of 0 only when all the input variables are 0.

img

img

An example of a three-input OR gate and its truth table is as follows:

img

img

With the OR operation, 1 + 1 = 1, 1+ 1 + 1 = 1 and so on.

A common example of an OR gate circuit is a security system.

Assume that a room is protected by a system that watches three inputs:

a door open sensor, a glass break sensor, and a motion sensor. If none of these sensors detects a break-in condition, i.e., they all send logic 0 to the OR gate, the alarm is off (logic 0). If any of the sensors detects a break-in, it will send logic 1 to the OR gate which in turn will output logic 1 indicating an alarm condition.

It doesn't matter what the other sensors are reading, if any sensor sends logic 1 to the gate, the alarm should be going off. Another way to describe the operation of this circuit might be to say, "The alarm goes off if the door opens or the glass breaks or motion is detected." Once again, the use of the word "or" suggests that this circuit should be implemented with an OR gate.
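The alarm circuit translates directly into code. A minimal Python sketch (the names are ours), with each sensor reporting logic 1 for a break-in:

def alarm(door_open, glass_break, motion):
    # A three-input OR: the output is 1 if any input is 1.
    return door_open | glass_break | motion

print(alarm(0, 0, 0))  # 0 - all quiet, alarm off
print(alarm(0, 1, 0))  # 1 - glass break alone sounds the alarm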


XOR (Exclusive OR) Gate
The XOR gate (sometimes EOR gate) is a digital logic gate that implements exclusive disjunction - it behaves according to the truth table shown below. A HIGH output (1) results if one, and only one, of the inputs to the gate is HIGH (1). If both inputs are LOW (0) or both are HIGH (1), a LOW output (0) results.

This function is addition modulo 2. As a result, XOR gates are used to implement binary addition in computers. A half adder consists of an XOR gate and an AND gate.
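A half adder is small enough to sketch directly in Python using the bitwise operators ^ (XOR) and & (AND):

def half_adder(a, b):
    # XOR produces the sum bit; AND produces the carry bit.
    return a ^ b, a & b

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
# 1 + 1 gives (sum=0, carry=1), i.e. binary 10, matching the addition rules.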

An Exclusive-OR gate is sometimes called a parity checker. Parity checkers count the number of ones being input to a circuit and output logic 1 or 0 based on whether the number of ones is odd or even. The Exclusive-OR (XOR) gate counts the number of ones at its input and outputs logic 1 for an odd count and logic 0 for an even count.

A common application for XOR gates is in error checking circuits. If two digital signals are compared bit-by-bit, an error free condition means that logic 0 will be compared to logic 0 and logic 1 will be compared with logic 1. In both of these cases, there is an even number of logic 1's being input to the XOR gate.

Therefore, as long as the XOR gate outputs logic 0, there is no error. If, however, an error has occurred, then one signal will be logic 1 and the other will be logic 0. This odd number of logic 1's will cause the XOR gate to output logic 1 indicating an error condition. Just as with the AND and OR gates, the XOR gate may have two or more inputs. Image below shows all four states for a two-input XOR.

img
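The parity behavior extends to any number of inputs by chaining XORs. A minimal Python sketch (the function name is ours):

from functools import reduce
from operator import xor

def parity(bits):
    # XOR all bits together: 1 for an odd count of ones, 0 for an even count.
    return reduce(xor, bits, 0)

print(parity([1, 0, 1]))  # 0 - even number of ones, no error detected
print(parity([1, 0, 0]))  # 1 - odd number of ones, error condition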

These representations of logic gates can be an awkward way to describe the operation of a complex circuit. The next section will introduce an easier method for representing the operation of any digital circuit incorporating the NOT, AND, OR, and XOR gates.


Truth Tables
A truth table is a mathematical table used in logic — specifically in connection with Boolean algebra, Boolean functions, and propositional calculus — to compute the functional values of logical expressions on each of their functional arguments, that is, on each combination of values taken by their logical variables. In particular, truth tables can be used to tell whether a propositional expression is true for all legitimate input values, that is, logically valid.

A truth table serves to show the output of a digital system for each of the possible input patterns of ones and zeros. It does this by making a column for each of the inputs to a digital circuit and a column for the resulting output. A row is added for each of the possible patterns of ones and zeros that could be input to the circuit.

For example, a circuit with three inputs, A, B, and C, would have 2^3 = 8 possible patterns of ones and zeros:

A=0, B=0, C=0
A=0, B=0, C=1
A=0, B=1, C=0
A=0, B=1, C=1
A=1, B=0, C=0
A=1, B=0, C=1
A=1, B=1, C=0
A=1, B=1, C=1

This means that a truth table representing a circuit with three inputs would have 8 rows. Image below presents a sample truth table for a digital circuit with three inputs, A, B, and C, and one output, X. Note that the output X doesn't represent anything in particular. It is just added to show how the output might appear in a truth table.

img

There is also a trick to deriving the combinations. Assume we need to build a truth table with four inputs, A, B, C, and D. Since 2^4 = 16, we know that there will be sixteen possible combinations of ones and zeros.

For half of those combinations, A will equal zero, and for the other half, A will equal one. When A equals zero, the remaining three inputs, B, C, and D, will go through every possible combination of ones and zeros for three inputs.

Three inputs have 2^3 = 8 patterns, which, coincidentally, is half of 16. For half of the 8 combinations, B will equal zero, and for the other half, B will equal one. Repeat this for C and then D.

This gives us a process to create a truth table for four inputs. Begin with the A column and list eight zeros followed by eight ones. Half of eight is four, so in the B column write four zeros followed by four ones in the rows where A equals zero, then write four zeros followed by four ones in the rows where A equals one. Half of four equals two, so the C column will have two zeros followed by two ones followed by two zeros then two ones and so on.

The process should end with the last column having alternating ones and zeros. If done properly, the first row should have all zeros and the last row should have all ones.

img
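This row-listing pattern is exactly what Python's itertools.product generates. A minimal sketch of the four-input table's input columns:

from itertools import product

# A varies slowest (eight 0s, then eight 1s); D alternates every row.
for a, b, c, d in product((0, 1), repeat=4):
    print(a, b, c, d)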

In addition to verifying that all combinations of ones and zeros have been listed, this method also provides a consistency between all truth tables in the way that their rows are organized.


Gates Timing Diagram
The operation of a logic gate can also be represented with a timing diagram. The images below show the output that results from three binary input signals for an AND gate, OR gate, and XOR gate respectively.

Remember that the AND gate outputs a one only if all its inputs equal one, the OR gate outputs a one if any input equals one, and the XOR gate outputs a one if an odd number of ones is present at the input. Use these rules to verify the outputs shown in the images.

img

img

img



Boolean algebra

Introduction
Boolean algebra (or Boolean logic) is a logical calculus of truth values, developed by George Boole. It resembles the algebra of real numbers as taught in high school, but with the numeric operations of multiplication xy, addition x + y, and negation −x replaced, respectively, by the logical operations of conjunction x∧y, disjunction x∨y, and complement ¬x.

The Boolean operations are these and all other operations obtainable from them by composition; equivalently, the finitary operations on the set {0,1}. The laws of Boolean algebra can be defined axiomatically as the equations derivable from a sufficient finite subset of those laws, such as the equations axiomatizing a complemented distributive lattice or a Boolean ring, or semantically as those equations identically true or valid over {0,1}.

The axiomatic approach is sound and complete in the sense that it proves respectively neither more nor fewer laws than the validity-based semantic approach.

Why do we need Boolean algebra?

So far, we have two methods for representing combinational logic: schematics and truth tables.

These two methods are inadequate for a number of reasons:

• Both schematics and truth tables take too much space to describe the operation of complex circuits with numerous inputs.

• The truth table "hides" circuit information.

• The schematic diagram is difficult to use when trying to determine output values for each input combination.

To overcome these problems, a discipline much like algebra is practiced that uses expressions to describe digital circuitry. These expressions, which are called Boolean expressions, use the input variable names, A, B, C, etc., and combine them using symbols representing the AND, OR, and NOT gates.

These Boolean expressions can be used to describe or evaluate the output of a circuit. There is an additional benefit. Just like algebra, a set of rules exist that when applied to Boolean expressions can dramatically simplify them. A simpler expression that produces the same output can be realized with fewer logic gates. A lower gate count results in cheaper circuitry, smaller circuit boards, and lower power consumption.

If your software uses binary logic, the logic can be represented with Boolean expressions. Applying the rules of simplification will make the software run faster or allow it to use less memory.
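As a small illustration of this payoff, the Python sketch below checks a hypothetical simplification: the expression (A · B) + (A · B'), which would take three gates to build, produces exactly the same output as the single operand A.

from itertools import product

for A, B in product([0, 1], repeat=2):
    original   = (A and B) or (A and (not B))   # (A · B) + (A · B')
    simplified = A
    assert bool(original) == bool(simplified)

print("(A AND B) OR (A AND NOT B) equals A for every input combination")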


Boolean algebra Symbols
Analogous behavior can be shown between Boolean algebra and mathematical algebra, and as a result, similar symbols and syntax can be used. For example, the following expressions hold true in math.

img

This looks like the AND function allowing an analogy to be drawn between the mathematical multiply and the Boolean AND functions. Therefore, in Boolean algebra, A AND'ed with B is written A • B.

img

Mathematical addition has a similar parallel in Boolean algebra, although the parallel is not quite as exact. The following four mathematical expressions hold true for addition.

img

The first three operations match the OR function, and if the last operation is viewed as having a non-zero result instead of the decimal result of two, it too can be viewed as operating similarly to the OR function. Therefore, the Boolean OR function is analogous to the mathematical function of addition.

img

An analogy cannot be made between the Boolean NOT and any mathematical operation. Later in this chapter we will see how the NOT function, unlike AND and OR, requires its own special theorems for algebraic manipulation. The NOT is represented with a bar across the inverted element.

img

The NOT operation may be used to invert the result of a larger expression. For example, the NAND function which places an inverter at the output of an AND gate is written as:

img

Since the bar goes across A • B, the NOT is performed after the AND. Let's begin with some simple examples. Can you determine the output of the Boolean expression 1 + 0 + 1? Since the plus-sign represents the OR circuit, the expression represents 1 or 0 or 1.

img

Since an OR-gate outputs a 1 if any of its inputs equal 1, then 1 + 0 + 1 = 1.

The two-input XOR operation is represented using the symbol ⊕, but it can also be represented using a Boolean expression. Basically, the two-input XOR equals one if A = 0 and B = 1 or if A = 1 and B = 0. This gives us the following expression.

img
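A quick Python check confirms the equivalence; this is only a sketch, with ^ playing the role of the XOR gate and 1 - x standing in for the NOT bar.

from itertools import product

for A, B in product([0, 1], repeat=2):
    xor_gate   = A ^ B
    expression = ((1 - A) and B) or (A and (1 - B))   # A'B + AB'
    assert bool(xor_gate) == bool(expression)

print("A XOR B equals A'B + AB' for all inputs")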

The next section shows how the Boolean operators •, +, ⊕, and the NOT bar may be combined to represent complex combinational logic.


Laws & Rules of Boolean algebra
The manipulation of algebraic expressions is based on fundamental laws. Some of these laws extend to the manipulation of Boolean expressions. For example, the commutative law of algebra, which states that the result of an operation is the same regardless of the order of operands, holds true for Boolean algebra too. This is shown for the OR function applied to two variables in the truth tables given below.

img

Not only does the image above show how the commutative law applies to the OR function, it also shows how truth tables can be used in Boolean algebra to prove laws and rules. If a rule states that two Boolean expressions are equal, the rule is proven true by developing the truth table for each expression and showing that the outputs are equal for every combination of ones and zeros at the inputs.

Below, the three fundamental laws of Boolean algebra are given along with examples.

Commutative Law:

The results of the Boolean operations AND and OR are the same regardless of the order of their operands.

img

Associative Law:

The results of the Boolean operations AND and OR with three or more operands are the same regardless of which pair of elements are operated on first.

img

Distributive Law:

The AND'ing of an operand with an OR expression is equivalent to OR'ing the results of an AND between the first operand and each operand within the OR expression.

img
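Because each variable can only take the values 0 and 1, these laws can be proven by exhaustively checking every input combination, exactly as a truth table would. The short Python sketch below does just that, using & and | for the AND and OR operations.

from itertools import product

for A, B, C in product([0, 1], repeat=3):
    assert (A | B) == (B | A)                     # commutative law (OR)
    assert (A & B) == (B & A)                     # commutative law (AND)
    assert ((A | B) | C) == (A | (B | C))         # associative law (OR)
    assert ((A & B) & C) == (A & (B & C))         # associative law (AND)
    assert (A & (B | C)) == ((A & B) | (A & C))   # distributive law

print("Commutative, associative, and distributive laws verified")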

Rules of Boolean algebra

NOT Rules

In algebra, the negative of a negative is a positive and taking the inverse of an inverse returns the original value. Although the NOT gate does not have an equivalent in mathematical algebra, it operates in a similar manner. If the Boolean inverse of a Boolean inverse is taken, the original value results.

img

This is proven with a truth table.

img

Since the first column and the third column have the same pattern of ones and zeros, they must be equivalent. The image below shows this rule in schematic form.

img

OR Rules

If an input to a logic gate is a constant 0 or 1 or if the same signal is connected to more than one input of a gate, a simplification of the expression is almost always possible. This is true for the OR gate as is shown with the following four rules for simplifying the OR function. First, what happens when one of the inputs to an OR gate is a constant logic 0? It turns out that the logic 0 input drops out leaving the remaining inputs to stand on their own. Notice that the two columns in the truth table below are equivalent thus proving this rule.

img

What about inputting a logic 1 to an OR gate? In this case, the logic 1 forces the other operands of the OR gate to drop out. Notice that the output column (A + 1) is always equal to 1 regardless of what A equals. Therefore, the output of this gate will always be 1.

img

If the same operand is connected to both inputs of an OR gate, the output simply follows that operand, since A + A = A. Finally, another case of simplification occurs when an operand is connected to one input of a two-input OR gate and its inverse is connected to the other. In this case, either the operand is equal to one or its inverse is; there is no other possibility. Therefore, at least one logic 1 is connected to the inputs of the OR gate, giving an output of logic 1 regardless of the inputs.

img
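All four OR rules can be confirmed by checking both possible values of A, as in this brief Python sketch (1 - A stands in for the inverse A').

for A in (0, 1):
    NOT_A = 1 - A
    assert (A | 0) == A       # A + 0 = A
    assert (A | 1) == 1       # A + 1 = 1
    assert (A | A) == A       # A + A = A
    assert (A | NOT_A) == 1   # A + A' = 1

print("OR rules verified")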

AND Rules

Just as with the OR gate, if either of the inputs to an AND gate is a constant (logic 0 or logic 1), or if the two inputs are the same or inverses of each other, a simplification can be performed. Let's begin with the case where one of the inputs to the AND gate is a logic 0. Remember that an AND gate must have all ones at its inputs to output a one. In this case, one of the inputs will always be zero, forcing this AND to always output zero. The truth table below shows this.

img

If one input of a two-input AND gate is connected to a logic 1, then it only takes the other input going to a one to get all ones on the inputs. If the other input goes to zero, the output becomes zero. This means that the output follows the input that is not connected to the logic 1.

img

If the same operand is connected to all of the inputs of an AND gate, we get a simplification similar to that of the OR gate. Notice that the two columns in the truth table below are equivalent proving this rule.

img

Last of all, when an operand is connected to one input of a two-input AND gate and its inverse is connected to the other, either the operand is equal to a zero or its inverse is equal to zero. There is no other possibility. Therefore, at least one logic 0 is connected to the inputs of the AND gate giving us an output of logic 0 regardless of the inputs.

img
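The AND rules can be verified the same way, again with 1 - A standing in for A'.

for A in (0, 1):
    NOT_A = 1 - A
    assert (A & 0) == 0       # A · 0 = 0
    assert (A & 1) == A       # A · 1 = A
    assert (A & A) == A       # A · A = A
    assert (A & NOT_A) == 0   # A · A' = 0

print("AND rules verified")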

XOR Rules

Now let's see what happens when we apply these same input conditions to a two-input XOR gate. Remember that a two-input XOR gate outputs a 1 if its inputs are different and a zero if its inputs are the same. If one of the inputs to a two-input XOR gate is connected to a logic 0, then the gate's output follows the value at the second input.

In other words, if the second input is a zero, the inputs are the same forcing the output to be zero and if the second input is a one, the inputs are different and the output equals one.

img

If one input of a two-input XOR gate is connected to a logic 1, then the XOR gate acts as an inverter as shown in the table below.

img

If the same operand is connected to both inputs of a two-input XOR gate, then the inputs are always the same and the gate outputs a 0.

img

Lastly, if the inputs of a two-input XOR gate are inverses of each other, then the inputs are always different and the output is 1.

img
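And once more for the XOR rules; note how the second assertion shows the gate acting as an inverter.

for A in (0, 1):
    NOT_A = 1 - A
    assert (A ^ 0) == A       # A ⊕ 0 = A
    assert (A ^ 1) == NOT_A   # A ⊕ 1 = A'
    assert (A ^ A) == 0       # A ⊕ A = 0
    assert (A ^ NOT_A) == 1   # A ⊕ A' = 1

print("XOR rules verified")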


DeMorgan’s Theorem
A mathematician named DeMorgan developed a pair of important rules regarding group complementation in Boolean algebra. By group complementation, I'm referring to the complement of a group of terms, represented by a long bar over more than one variable.

You should recall from the chapter on logic gates that inverting all inputs to a gate reverses that gate's essential function from AND to OR, or vice versa, and also inverts the output. So, an OR gate with all inputs inverted (a Negative-OR gate) behaves the same as a NAND gate, and an AND gate with all inputs inverted (a Negative-AND gate) behaves the same as a NOR gate.

DeMorgan's theorems state the same equivalence in "backward" form: that inverting the output of any gate results in the same function as the opposite type of gate (AND vs. OR) with inverted inputs:

img

A long bar extending over the term AB acts as a grouping symbol, and as such is entirely different from the product of A and B independently inverted. In other words, (AB)' is not equal to A'B'. Because the "prime" symbol (') cannot be stretched over two variables like a bar can, we are forced to use parentheses to make it apply to the whole term AB in the previous sentence.

A bar, however, acts as its own grouping symbol when stretched over more than one variable. This has profound impact on how Boolean expressions are evaluated and reduced, as we shall see.
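Both theorems, along with the warning that (AB)' differs from A'B', can be verified exhaustively. Here is a brief Python sketch, again using 1 - x for the complement.

from itertools import product

for A, B in product([0, 1], repeat=2):
    assert (1 - (A & B)) == ((1 - A) | (1 - B))   # (AB)'    = A' + B'
    assert (1 - (A | B)) == ((1 - A) & (1 - B))   # (A + B)' = A' · B'

# A single counterexample shows that (AB)' is not A'B': take A = 0, B = 1.
assert (1 - (0 & 1)) != ((1 - 0) & (1 - 1))

print("DeMorgan's theorems verified; (AB)' differs from A'B'")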

DeMorgan's theorem may be thought of in terms of breaking a long bar symbol. When a long bar is broken, the operation directly underneath the break changes from addition to multiplication, or vice versa, and the broken bar pieces remain over the individual variables. To illustrate:

img

When multiple "layers" of bars exist in an expression, you may only break one bar at a time, and it is generally easier to begin simplification by breaking the longest (uppermost) bar first. To illustrate, let's take the expression (A + (BC)')' and reduce it using DeMorgan's Theorems:

img

Following the advice of breaking the longest (uppermost) bar first, I'll begin by breaking the bar covering the entire expression as a first step:

img

As a result, the original circuit is reduced to a three-input AND gate with the A input inverted:


img
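The whole reduction can be double-checked in Python by comparing the original expression against A'·B·C over all eight input combinations.

from itertools import product

for A, B, C in product([0, 1], repeat=3):
    original = 1 - (A | (1 - (B & C)))   # (A + (BC)')'
    reduced  = (1 - A) & B & C           # A' · B · C
    assert original == reduced

print("(A + (BC)')' equals A'BC for every input combination")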

You should never break more than one bar in a single step, as illustrated here:

img

As tempting as it may be to conserve steps and break more than one bar at a time, it often leads to an incorrect result, so don't do it!

It is possible to properly reduce this expression by breaking the short bar first, rather than the long bar first:

img

The end result is the same, but more steps are required compared to using the first method, where the longest bar was broken first. Note how in the third step we broke the long bar in two places.

This is a legitimate mathematical operation, and not the same as breaking two bars in one step! The prohibition against breaking more than one bar in one step is not a prohibition against breaking a bar in more than one place. Breaking in more than one place in a single step is okay; breaking more than one bar in a single step is not.

You might be wondering why parentheses were placed around the sub-expression B' + C', considering the fact that I just removed them in the next step. I did this to emphasize an important but easily neglected aspect of DeMorgan's theorem.

Since a long bar functions as a grouping symbol, the variables formerly grouped by a broken bar must remain grouped lest proper precedence (order of operation) be lost. In this example, it really wouldn't matter if I forgot to put parentheses in after breaking the short bar, but in other cases it might. Consider this example, starting with a different expression:

img

img

As you can see, maintaining the grouping implied by the complementation bars for this expression is crucial to obtaining the correct answer. Let's apply the principles of DeMorgan's theorems to the simplification of a gate circuit:

img

As always, our first step in simplifying this circuit must be to generate an equivalent Boolean expression. We can do this by placing a sub-expression label at the output of each gate, as the inputs become known. Here's the first step in this process:

img

Next, we can label the outputs of the first NOR gate and the NAND gate. When dealing with inverted-output gates, I find it easier to write an expression for the gate's output without the final inversion, with an arrow pointing to just before the inversion bubble.

Then, at the wire leading out of the gate (after the bubble), I write the full, complemented expression. This helps ensure I don't forget a complementing bar in the sub-expression, by forcing myself to split the expression-writing task into two steps:

img

Finally, we write an expression (or pair of expressions) for the last NOR gate:

img

Now, we reduce this expression using the identities, properties, rules, and theorems (DeMorgan's) of Boolean algebra:

img

The equivalent gate circuit for this much-simplified expression is as follows:

img

• REVIEW

• DeMorgan's Theorems describe the equivalence between gates with inverted inputs and gates with inverted outputs. Simply put, a NAND gate is equivalent to a Negative-OR gate, and a NOR gate is equivalent to a Negative-AND gate.

• When "breaking" a complementation bar in a Boolean expression, the operation directly underneath the break (addition or multiplication) reverses, and the broken bar pieces remain over the respective terms.

• It is often easier to approach a problem by breaking the longest (uppermost) bar before breaking any bars under it. You must never attempt to break two bars in one step!

• Complementation bars function as grouping symbols. Therefore, when a bar is broken, the terms underneath it must remain grouped. Parentheses may be placed around these grouped terms as a help to avoid changing precedence.
