The NASA Columbia supercomputer, built by Silicon Graphics for NASA.

A computer is a machine that manipulates data according to a list of instructions.

The first devices that resemble modern computers date to the mid-20th century (around 1940–1945), although the computer concept and various machines similar to computers existed earlier. Early electronic computers were the size of a large room, consuming as much power as several hundred modern personal computers. [1] Modern computers are based on tiny integrated circuits and are millions to billions of times more capable while occupying a fraction of the space. [2] Today, simple computers may be made small enough to fit into a wristwatch and be powered by a watch battery. Personal computers, in various forms, are icons of the Information Age and are what most people think of as "a computer"; however, the most common form of computer in use today is the embedded computer. Embedded computers are small, simple devices that are used to control other devices; for example, they may be found in machines ranging from fighter aircraft to industrial robots, digital cameras, and children's toys.

The ability to store and execute lists of instructions called programs makes computers extremely versatile and distinguishes them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers with capability and complexity ranging from that of a personal digital assistant to a supercomputer are all able to perform the same computational tasks given enough time and storage capacity.

## History of computing

The Jacquard loom, invented by Joseph Marie Jacquard in 1801, was one of the first programmable devices.

It is difficult to identify any one device as the earliest computer, partly because the term "computer" has been subject to varying interpretations over time. Originally, the term referred to a person who performed numerical calculations (a human computer), often with the aid of a mechanical calculating device; in use from the mid-17th century, "computer" literally meant "one who computes".

The history of the modern computer begins with two separate technologies: automated calculation and programmability.

Examples of early mechanical calculating devices include the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150–100 BC). The end of the Middle Ages saw a re-invigoration of European mathematics and engineering, and Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers. However, none of those devices fit the modern definition of a computer because they could not be programmed.

Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered to be a means of deciding which parts of the mechanism performed which actions, and when. [3] This is the essence of programmability. In 1801, Joseph Marie Jacquard made an improvement to the textile loom that used a series of punched paper cards as a template to allow his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer, which he called "The Analytical Engine". [4] Due to limited finances, and an inability to resist tinkering with the design, Babbage never actually built his Analytical Engine.

Large-scale automated data processing of punched cards was performed for the U.S. Census in 1890 by tabulating machines designed by Herman Hollerith and manufactured by the Computing Tabulating Recording Corporation, which later became IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.

Defining characteristics of some early digital computers of the 1940s (see History of computing hardware):

| Name | First operational | Numeral system | Computing mechanism | Programming | Turing complete |
|---|---|---|---|---|---|
| Zuse Z3 (Germany) | May 1941 | Binary | Electro-mechanical | Program-controlled by punched film stock | Yes (1998) |
| Atanasoff–Berry Computer (USA) | Summer 1941 | Binary | Electronic | Not programmable (single purpose) | No |
| Colossus (UK) | December 1943 | Binary | Electronic | Program-controlled by patch cables and switches | No |
| Harvard Mark I – IBM ASCC (USA) | 1944 | Decimal | Electro-mechanical | Program-controlled by 24-channel punched paper tape (but no conditional branch) | Yes (1998) |
| ENIAC (USA) | November 1945 | Decimal | Electronic | Program-controlled by patch cables and switches | Yes |
| Manchester Small-Scale Experimental Machine (UK) | June 1948 | Binary | Electronic | Stored-program in Williams cathode ray tube memory | Yes |
| Modified ENIAC (USA) | September 1948 | Decimal | Electronic | Program-controlled by patch cables and switches plus a primitive read-only stored programming mechanism using the Function Tables as program ROM | Yes |
| EDSAC (UK) | May 1949 | Binary | Electronic | Stored-program in mercury delay line memory | Yes |
| Manchester Mark I (UK) | October 1949 | Binary | Electronic | Stored-program in Williams cathode ray tube memory and magnetic drum memory | Yes |
| CSIRAC (Australia) | November 1949 | Binary | Electronic | Stored-program in mercury delay line memory | Yes |

EDSAC was one of the first computers to implement the stored program (von Neumann) architecture.
• Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first working machine featuring binary arithmetic, including floating point arithmetic, and a measure of programmability. In 1998 the Z3 was proved to be Turing complete, making it in retrospect the world's first operational computer.
• The non-programmable Atanasoff–Berry Computer (1941), which used vacuum tube based computation, binary numbers, and regenerative capacitor memory.
• The secret British Colossus computers (1943),[5] which had limited programmability but demonstrated that a device using thousands of tubes could be reasonably reliable and electronically reprogrammable. They were used for breaking German wartime codes.
• The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.
• The U.S. Army's Ballistics Research Laboratory ENIAC (1946), which used decimal arithmetic and is sometimes called the first general purpose electronic computer (since Konrad Zuse's Z3 of 1941 used electromagnets instead of electronics). Initially, however, ENIAC had an inflexible architecture which essentially required rewiring to change its programming.

Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the stored program architecture or von Neumann architecture. This design was first formally described by John von Neumann in the paper "First Draft of a Report on the EDVAC", published in 1945. A number of projects to develop computers based on the stored program architecture commenced around this time, the first of these being completed in Great Britain. The first to be demonstrated working was the Manchester Small-Scale Experimental Machine (SSEM) or "Baby". However, the EDSAC, completed a year after SSEM, was perhaps the first practical implementation of the stored program design. Shortly thereafter, the machine originally described by von Neumann's paper, EDVAC, was completed but did not see full-time use for an additional two years.

Nearly all modern computers implement some form of the stored program architecture, making it the single trait by which the word "computer" is now defined. By this standard, many earlier devices would no longer be called computers by today's definition, but are usually referred to as such in their historical context. While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture. The design made the universal computer a practical reality.

Microprocessors are miniaturized devices that often implement stored program CPUs.

Vacuum tube-based computers were in use throughout the 1950s. Vacuum tubes were largely replaced in the 1960s by transistor-based computers; compared with tubes, transistors are smaller, faster, cheaper, use less power, and are more reliable. In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, caused another generation of decreased size and cost, and another generation of increased speed and reliability. By the 1980s, computers had become sufficiently small and cheap to replace simple mechanical controls in domestic appliances such as washing machines. The 1980s also witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household.

## Stored program architecture

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that a list of instructions (the program) can be given to the computer, and it will store them and carry them out at some time in the future.

In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, and so on. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally, so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program, and it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. For example:

            mov      #0,sum     ; set sum to 0
            mov      #1,num     ; set num to 1
    loop:   add      num,sum    ; add num to sum
            add      #1,num     ; add 1 to num
            cmp      num,#1000  ; compare num to 1000
            ble      loop       ; if num <= 1000, go back to 'loop'
            halt                ; end of program; stop running

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in about a millionth of a second. [6]
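The same repetitive addition can be sketched in a high-level language. This minimal Python version mirrors the assembly listing step for step (`total` stands in for `sum`, which is a built-in name in Python):

```python
# Add the numbers from 1 to 1,000, mirroring the assembly listing above.
total = 0           # mov #0,sum : set the running total to 0
num = 1             # mov #1,num : set num to 1
while num <= 1000:  # cmp/ble    : keep looping while num <= 1000
    total += num    # add num,sum: add num to the running total
    num += 1        # add #1,num : add 1 to num
print(total)        # -> 500500
```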

However, computers cannot "think" for themselves, in the sense that they only solve problems in exactly the way they are programmed to. An intelligent human faced with the above addition task might soon realize that, instead of actually adding up all the numbers, one can simply use the equation

$1 + 2 + 3 + \cdots + n = \frac{n(n+1)}{2}$

and arrive at the correct answer (500,500) with little work. [7] In other words, a computer programmed to add up the numbers one by one as in the example above would do exactly that without regard to efficiency or alternative solutions.
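The shortcut can be checked mechanically. This small Python sketch confirms that the closed-form formula and the one-by-one addition agree for n = 1,000:

```python
n = 1000
formula = n * (n + 1) // 2   # Gauss's closed form: n(n+1)/2
one_by_one = 0
for i in range(1, n + 1):    # the brute-force approach from the example above
    one_by_one += i
assert formula == one_by_one == 500500
```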

### Programs

A 1970s punched card containing one line from a FORTRAN program. The card reads "Z(1) = Y + W(1)" and is labelled "PROJ039" for identification purposes.

In practical terms, a computer program might include anywhere from a dozen instructions to many millions of instructions for something like a word processor or a web browser. A typical modern computer can execute billions of instructions every second and nearly never make a mistake over years of operation.

Large computer programs may take teams of computer programmers years to write, and it is unlikely that the entire program has been written completely in the manner intended. Errors in computer programs are called bugs. Sometimes bugs are benign and do not affect the usefulness of the program; in other cases they might cause the program to fail completely (crash); in yet other cases there may be subtle problems. Sometimes otherwise benign bugs may be used for malicious intent, creating a security exploit. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design. [8]

In most computers, individual instructions are stored as machine code, with each instruction being given a unique number (its operation code, or opcode for short). The command to add two numbers together would have one opcode, the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer just as if they were numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture, after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
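The fact that programs and data share one memory can be illustrated with a toy stored-program machine. The opcode numbers below (1 = add, 2 = multiply, 0 = halt) are invented for illustration and do not correspond to any real instruction set:

```python
# A toy stored-program machine: instructions are just numbers held in the
# same memory as the data they operate on.
def run(memory):
    acc = 0   # accumulator register
    pc = 0    # program counter: index of the next instruction in memory
    while True:
        opcode = memory[pc]
        if opcode == 0:              # opcode 0: HALT
            return acc
        operand = memory[pc + 1]     # the next cell holds a data address
        if opcode == 1:              # opcode 1: ADD the addressed cell
            acc += memory[operand]
        elif opcode == 2:            # opcode 2: MULTIPLY by the addressed cell
            acc *= memory[operand]
        pc += 2                      # step to the next instruction

# Cells 0-4 hold the program, cells 5-6 hold the data it operates on.
memory = [1, 5, 2, 6, 0, 40, 10]     # ADD mem[5]; MUL mem[6]; HALT
print(run(memory))                   # -> 400
```

Because the program is itself just a list of numbers, it could be read or rewritten by another program, which is exactly the flexibility the stored program design provides.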

While it is possible to write computer programs as long lists of numbers (machine language), and this technique was used with many early computers,[9] it is extremely tedious to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember: a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of the Intel Pentium or AMD Athlon 64 computer that might be in a PC. [10]
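The translation an assembler performs can be sketched as little more than a table lookup. The mnemonics and numeric opcodes below are hypothetical, not those of any real machine:

```python
# A minimal assembler sketch: translate mnemonic instructions into numbers.
OPCODES = {"ADD": 1, "SUB": 2, "MULT": 3, "JUMP": 4}  # hypothetical encoding

def assemble(lines):
    """Turn lines like 'ADD 7' into flat machine code: [opcode, operand, ...]."""
    machine_code = []
    for line in lines:
        mnemonic, operand = line.split()
        machine_code.append(OPCODES[mnemonic])  # look up the numeric opcode
        machine_code.append(int(operand))       # operands are already numbers
    return machine_code

print(assemble(["ADD 7", "MULT 3", "JUMP 0"]))  # -> [1, 7, 3, 3, 4, 0]
```

A real assembler also resolves symbolic labels (like `loop` in the earlier listing) into numeric addresses, but the core of the job is this mechanical substitution.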

Though considerably easier than in machine language, writing long programs in assembly language is often difficult and error prone. Therefore, most complicated programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler. [11] Since high level languages are more abstract than assembly language, it is possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.

The task of developing large software systems is an immense intellectual effort. Producing software with an acceptably high reliability on a predictable schedule and budget has historically proved to be a great challenge; the academic and professional discipline of software engineering concentrates specifically on this problem.

### Example

A traffic light showing red.

Suppose a computer is being employed to drive a traffic light. A simple stored program might say:

1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for two seconds
10. Turn off the yellow light

With this set of instructions, the computer would cycle the light continually through red, green, yellow and back to red again until told to stop running the program.
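The ten instructions above can be sketched in a high-level language. This is only an illustration; the `set_light` and `wait` helpers are hypothetical stand-ins for whatever hardware interface a real controller would use:

```python
import time

# Pretend lamp state; a real controller would switch relays instead.
lights = {"red": False, "green": False, "yellow": False}

def set_light(colour, on):
    lights[colour] = on          # hypothetical hardware call

def wait(seconds):
    time.sleep(seconds)

def run_cycle(delay=60):
    for colour in lights:        # 1. turn off all of the lights
        set_light(colour, False)
    set_light("red", True)       # 2-3. red on, wait
    wait(delay)
    set_light("red", False)      # 4. red off
    set_light("green", True)     # 5-6. green on, wait
    wait(delay)
    set_light("green", False)    # 7. green off
    set_light("yellow", True)    # 8-9. yellow on, wait briefly
    wait(delay / 30)
    set_light("yellow", False)   # 10. yellow off
```

Calling `run_cycle()` in an endless loop reproduces the continual red-green-yellow cycle described above.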

However, suppose there is a simple on/off switch connected to the computer that is intended to be used to make the light flash red while some maintenance operation is being performed. The program might then instruct the computer to:

1. Turn off all of the lights
2. Turn on the red light
3. Wait for sixty seconds
4. Turn off the red light
5. Turn on the green light
6. Wait for sixty seconds
7. Turn off the green light
8. Turn on the yellow light
9. Wait for two seconds
10. Turn off the yellow light
11. If the maintenance switch is NOT turned on then jump to instruction number 2
12. Turn on the red light
13. Wait for one second
14. Turn off the red light
15. Wait for one second
16. Jump to instruction number 11

In this manner, the computer is either running the instructions from number (2) to (11) over and over or it's running the instructions from (11) to (16) over and over, depending on the position of the switch. [12]
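The conditional jump at instruction (11) can be sketched the same way. Again this is a hypothetical illustration: `maintenance_switch` stands in for reading the physical switch, and the delays are parameters so the sketch can be exercised quickly:

```python
import time

lights = {"red": False, "green": False, "yellow": False}
maintenance_switch = False       # pretend input; a real program would poll hardware

def set_light(colour, on):
    lights[colour] = on          # hypothetical hardware call

def run(delay=60, flash_delay=1, cycles=1):
    for colour in lights:                    # 1. turn off all of the lights
        set_light(colour, False)
    for _ in range(cycles):                  # each pass is one trip through 2-11 (or 11-16)
        if not maintenance_switch:           # 11. switch off: normal cycle (jump to 2)
            for colour, t in (("red", delay),
                              ("green", delay),
                              ("yellow", delay / 30)):
                set_light(colour, True)      # 2-10. red, green, then yellow briefly
                time.sleep(t)
                set_light(colour, False)
        else:                                # 12-15. switch on: flash red, then loop
            set_light("red", True)
            time.sleep(flash_delay)
            set_light("red", False)
            time.sleep(flash_delay)
```

Flipping `maintenance_switch` between calls changes which loop the next pass takes, mirroring how the stored program's behaviour depends on the switch position.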

## How computers work

A general purpose computer has four main sections: the arithmetic and logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by busses, often made of groups of wires.

The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components, but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.

### Control unit

Main articles: CPU design and Control unit

The control unit (often called a control system or central controller) directs the various components of a computer. It reads and interprets (decodes) instructions in the program one by one. The control system decodes each instruction and turns it into a series of control signals that operate the other parts of the computer. [13] Control systems in advanced computers may change the order of some instructions so as to improve performance.

A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from. [14]

Diagram showing how a particular MIPS architecture instruction would be decoded by the control system.

The control system's function is as follows—note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU:

1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
8. Jump back to step (1).
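The steps above can be sketched as a toy fetch-decode-execute loop. This is a minimal sketch, not any real instruction set; the three opcodes and the memory layout are invented for the example:

```python
# Toy machine: memory maps addresses to (opcode, operand) pairs or data values.
memory = {
    0: ("LOAD", 100),   # read data from cell 100 into the accumulator
    1: ("ADD", 101),    # add the contents of cell 101
    2: ("STORE", 102),  # write the result to cell 102
    3: ("HALT", None),
    100: 2, 101: 3, 102: 0,
}

def run(memory):
    pc = 0              # program counter: where the next instruction lives
    acc = 0             # a single register (the "accumulator")
    while True:
        opcode, operand = memory[pc]   # step 1: fetch the instruction at pc
        pc += 1                        # step 3: increment the program counter
        if opcode == "LOAD":           # steps 2/4/5: decode and gather the data
            acc = memory[operand]
        elif opcode == "ADD":          # step 6: ask the ALU to do the arithmetic
            acc += memory[operand]
        elif opcode == "STORE":        # step 7: write the result back to memory
            memory[operand] = acc
        elif opcode == "HALT":
            return memory

run(memory)
# cell 102 now holds 2 + 3 = 5
```

Real control units do the same bookkeeping in hardware, often overlapping several of these steps for different instructions at once.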

Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).

It is noticeable that the sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program - and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer that runs a microcode program that causes all of these events to happen.

### Arithmetic/logic unit (ALU)

Main article: Arithmetic logic unit

The ALU is capable of performing two classes of operations: arithmetic and logic.

The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting or might include multiplying or dividing, trigonometry functions (sine, cosine, etc.) and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?").
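For example, a machine whose ALU can only add and subtract can still multiply by breaking the operation into repeated additions. A minimal sketch:

```python
def multiply(a, b):
    """Multiply two non-negative integers using only addition."""
    result = 0
    for _ in range(b):   # add `a` to the running total, `b` times
        result += a
    return result

print(multiply(7, 6))  # → 42
```

This is exactly the trade-off described above: the answer is the same, it just takes `b` addition steps instead of one hardware multiply.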

Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful both for creating complicated conditional statements and processing boolean logic.
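Python's bitwise operators show these four operations applied to each bit of a pair of numbers, much as an ALU applies them to the bits of its operands:

```python
a, b = 0b1100, 0b1010

print(bin(a & b))        # AND → 0b1000 (1 only where both bits are 1)
print(bin(a | b))        # OR  → 0b1110 (1 where either bit is 1)
print(bin(a ^ b))        # XOR → 0b110  (1 where the bits differ)
print(bin(~a & 0b1111))  # NOT → 0b11   (each bit flipped, masked to 4 bits)
```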

Superscalar computers contain multiple ALUs so that they can process several instructions at the same time. Graphics processors and computers with SIMD and MIMD features often provide ALUs that can perform arithmetic on vectors and matrices.

### Memory

Main article: Computer storage
Magnetic core memory was popular main memory for computers through the 1960s until it was completely replaced by semiconductor memory.

A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is up to the software to give significance to what the memory sees as nothing but a series of numbers.
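Modelling memory as a plain list of numbered cells makes the two quoted instructions concrete (the cell contents here are arbitrary example values):

```python
memory = [0] * 4096   # 4096 cells, addressed 0..4095, each holding one number

memory[1357] = 123    # "put the number 123 into the cell numbered 1357"

memory[2468] = 77     # arbitrary example value
memory[1595] = memory[1357] + memory[2468]
# "add the number that is in cell 1357 to the number that is in cell 2468
#  and put the answer into cell 1595"

print(memory[1595])   # → 200
```

Nothing in the list says whether a cell holds a letter, a count, or an instruction; as the paragraph notes, that interpretation is entirely up to the software.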

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers; either from 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory as long as it can be somehow represented in numerical form. Modern computers have billions or even trillions of bytes of memory.
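Two's complement can be illustrated in a few lines: in an N-bit cell, a negative number -x is stored as the same bit pattern as the unsigned value 2^N - x. A small sketch:

```python
def to_twos_complement(value, bits=8):
    """Return the unsigned bit pattern that stores `value` in `bits` bits."""
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise ValueError("value out of range for this many bits")
    return value & ((1 << bits) - 1)   # negative values wrap around 2**bits

print(to_twos_complement(-1))    # → 255 (binary 11111111)
print(to_twos_complement(-128))  # → 128 (binary 10000000)
print(to_twos_complement(5))     # → 5   (binary 00000101)
```

This is why a single byte can hold either 0 to 255 or -128 to +127: the same 256 bit patterns are simply interpreted differently.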

The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. Since data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.

Computer main memory comes in two principal varieties: random access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off while ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the software required to perform the task may be stored in ROM. Software that is stored in ROM is often called firmware because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM by retaining data when turned off but being rewritable like RAM. However, flash memory is typically much slower than conventional ROM and RAM so its use is restricted to applications where high speeds are not required. [15]

In more sophisticated computers there may be one or more RAM cache memories which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.

### Input/output (I/O)

Main article: Input/output
Hard disks are common I/O devices used with computers.

I/O is the means by which a computer receives information from the outside world and sends results back. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.

Often, I/O devices are complex computers in their own right with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.

### Multitasking

Main article: Computer multitasking

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.

Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.

Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly - in direct proportion to the number of programs it is running. However, most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run at the same time without unacceptable speed loss.
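Time-sharing can be sketched with a round-robin scheduler that gives each program one "slice" per turn. In this illustrative sketch, Python generators stand in for interruptible programs: each `yield` plays the role of an interrupt that saves the program's place:

```python
from collections import deque

def program(name, steps):
    """A pretend program that needs `steps` slices of CPU time."""
    for i in range(steps):
        yield f"{name} step {i}"   # yield = "interrupted, remember my place"

def time_share(programs):
    """Run each program for one slice in turn until all have finished."""
    ready = deque(programs)
    trace = []
    while ready:
        prog = ready.popleft()
        try:
            trace.append(next(prog))   # give this program one time slice
            ready.append(prog)         # then send it to the back of the queue
        except StopIteration:
            pass                       # this program has finished
    return trace

trace = time_share([program("A", 2), program("B", 2)])
print(trace)   # → ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

A real scheduler would also skip programs that are blocked waiting on I/O, which is exactly why multitasking costs less than it might seem.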

### Multiprocessing

Main article: Multiprocessing
Cray designed many supercomputers that used multiprocessing heavily.

Some computers may divide their work between one or more separate CPUs, creating a multiprocessing configuration. Traditionally, this technique was utilized only in large and powerful computers such as supercomputers, mainframe computers and servers. However, multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers have become widely available and are beginning to see increased usage in lower-end markets as a result.

Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general purpose computers. [16] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.

### Networking and the Internet

Main articles: Computer networking and Internet
Visualization of a portion of the routes on the Internet.

Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems like Sabre.

## Further topics

### Hardware

Main article: Computer hardware

The term hardware covers all of those parts of a computer that are tangible objects. Circuits, displays, power supplies, cables, keyboards, printers and mice are all hardware.

- **First generation (mechanical/electromechanical)**
  - Calculators: Antikythera mechanism, Difference Engine, Norden bombsight
  - Programmable devices: Jacquard loom, Analytical Engine, Harvard Mark I, Z3
- **Second generation (vacuum tubes)**
  - Calculators: Atanasoff–Berry Computer, IBM 604, UNIVAC 60, UNIVAC 120
  - Programmable devices: Colossus, ENIAC, Manchester Small-Scale Experimental Machine, EDSAC, Manchester Mark I, CSIRAC, EDVAC, UNIVAC I, IBM 701, IBM 702, IBM 650, Z22
- **Third generation (discrete transistors and SSI, MSI, LSI integrated circuits)**
  - Mainframes: IBM 7090, IBM 7080, System/360, BUNCH
  - Minicomputers: PDP-8, PDP-11, System/32, System/36
- **Fourth generation (VLSI integrated circuits)**
  - Minicomputers: VAX, IBM System i
  - 4-bit microcomputers: Intel 4004, Intel 4040
  - 8-bit microcomputers: Intel 8008, Intel 8080, Motorola 6800, Motorola 6809, MOS Technology 6502, Zilog Z80
  - 16-bit microcomputers: 8088, Zilog Z8000, WDC 65816/65802
  - 32-bit microcomputers: 80386, Pentium, 68000, ARM architecture
  - 64-bit microcomputers:[17] x86-64, PowerPC, MIPS, SPARC
  - Embedded computers: 8048, 8051
  - Personal computers: Desktop computer, Home computer, Laptop computer, Personal digital assistant (PDA), Portable computer, Tablet computer, Wearable computer
- **Theoretical/experimental**: Quantum computer, Chemical computer, DNA computing, Optical computer, Spintronics-based computer
- **Peripheral devices (input/output)**
  - Input: Mouse, Keyboard, Joystick, Image scanner
  - Output: Monitor, Printer
  - Both: Floppy disk drive, Hard disk, Optical disc drive, Teleprinter
- **Computer busses**
  - Short range: RS-232, SCSI, PCI, USB
  - Long range (computer networking): Ethernet, ATM, FDDI

### Software

Main article: Computer software

### Programming languages

- **Lists of programming languages**: Timeline of programming languages, Categorical list of programming languages, Generational list of programming languages, Alphabetical list of programming languages, Non-English-based programming languages
- **Commonly used assembly languages**: ARM, MIPS, x86
- **Commonly used high-level languages**: BASIC, C, C++, C#, COBOL, Fortran, Java, Lisp, Pascal
- **Commonly used scripting languages**: Bourne script, JavaScript, Python, Ruby, PHP, Perl

### Professions and organizations

As the use of computers has spread throughout society, there are an increasing number of careers involving computers. Following the theme of hardware, software and firmware, the brains of people who work in the industry are sometimes known irreverently as wetware or "meatware".

- **Hardware-related**: Electrical engineering, Electronics engineering, Computer engineering, Telecommunications engineering, Optical engineering, Nanoscale engineering
- **Software-related**: Computer science, Human-computer interaction, Information technology, Software engineering, Scientific computing, Web design, Desktop publishing

The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.

- **Standards groups**: ANSI, IEC, IEEE, IETF, ISO, W3C
- **Professional societies**: ACM, ACM Special Interest Groups, IET, IFIP
- **Free/open source software groups**: Free Software Foundation, Mozilla Foundation, Apache Software Foundation

## Notes

1. ^ In 1946, ENIAC consumed an estimated 174 kW. By comparison, a typical personal computer may use around 400 W; over four hundred times less. (Kempf 1961)
2. ^ Early computers such as Colossus and ENIAC were able to process between 5 and 100 operations per second. A modern "commodity" microprocessor (as of 2007) can process billions of operations per second, and many of these operations are more complicated and useful than early computer operations.
3. ^ Heron of Alexandria. Retrieved on 2008-01-15.
4. ^ The Analytical Engine should not be confused with Babbage's difference engine, which was a non-programmable mechanical calculator designed to tabulate polynomial functions.
5. ^ B. Jack Copeland, ed., Colossus: The Secrets of Bletchley Park's Codebreaking Computers, Oxford University Press, 2006.
6. ^ This program was written similarly to those for the PDP-11 minicomputer and shows some typical things a computer can do. All of the text after the semicolons is comments for the benefit of human readers; comments have no significance to the computer and are ignored. (Digital Equipment Corporation 1972)
7. ^ Attempts are often made to create programs that can overcome this fundamental limitation of computers. Software that mimics learning and adaptation is part of artificial intelligence.
8. ^ It is not universally true that bugs are solely due to programmer oversight. Computer hardware may fail or may itself have a fundamental problem that produces unexpected results in certain situations. For instance, the Pentium FDIV bug caused some Intel microprocessors in the early 1990s to produce inaccurate results for certain floating point division operations. This was caused by a flaw in the microprocessor design and resulted in a partial recall of the affected devices.
9. ^ Even some later computers were commonly programmed directly in machine code. Some minicomputers, like the DEC PDP-8, could be programmed directly from a panel of switches. However, this method was usually used only as part of the booting process. Most modern computers boot entirely automatically by reading a boot program from some non-volatile memory.
10. ^ However, there is sometimes some form of machine language compatibility between different computers. An x86-64 compatible microprocessor like the AMD Athlon 64 is able to run most of the same programs that an Intel Core 2 microprocessor can, as well as programs designed for earlier microprocessors like the Intel Pentium and Intel 80486. This contrasts with very early commercial computers, which were often one-of-a-kind and totally incompatible with other computers.
11. ^ High-level languages are also often interpreted rather than compiled. Interpreted languages are translated into machine code on the fly, while they are running, by another program called an interpreter.
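As a minimal sketch of the idea, an interpreter decodes and carries out each statement of a program as it reaches it, rather than translating the whole program to machine code in advance. The toy instruction set below (`PUSH`, `ADD`) is hypothetical, chosen only for illustration:

```python
# Minimal sketch of an interpreter for a hypothetical toy stack language.
# Each instruction is decoded and executed "on the fly" as the loop
# reaches it; nothing is translated ahead of time.

def interpret(program):
    stack = []
    for instruction, *args in program:
        if instruction == "PUSH":
            stack.append(args[0])       # place a value on the stack
        elif instruction == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)         # replace top two values by their sum
        else:
            raise ValueError(f"unknown instruction: {instruction}")
    return stack

# Computes 2 + 3, leaving the result on the stack.
result = interpret([("PUSH", 2), ("PUSH", 3), ("ADD",)])
```

A compiler, by contrast, would translate the whole instruction list into machine code once, before any of it runs.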
12. ^ Although this is a simple program, it contains a software bug. If the traffic signal is showing red when someone switches the "flash red" switch, it will cycle through green once more before starting to flash red as instructed. This bug is quite easy to fix by changing the program to repeatedly test the switch throughout each "wait" period, but writing large programs that have no bugs is exceedingly difficult.
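The suggested fix can be sketched as follows. Instead of sleeping through an entire wait period and only testing the switch afterwards, the wait itself repeatedly polls the switch and returns early when it has been flipped. This is a hypothetical illustration, not the original PDP-11 program; `switch_on` stands in for whatever routine reads the physical switch:

```python
import time

# Sketch of the fix described above: a "wait" that re-tests the
# "flash red" switch throughout the wait period instead of only
# once per cycle. Returns True if the switch was flipped mid-wait,
# so the caller can start flashing red immediately.

def wait(seconds, switch_on, poll_interval=0.1):
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        if switch_on():        # noticed as soon as it is flipped,
            return True        # not a whole light cycle later
        time.sleep(poll_interval)
    return False               # wait ran to completion; keep cycling
```

With the buggy version, the switch was tested at only one point in the cycle, so a flip while red was lit went unnoticed until the signal had run through green once more.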
13. ^ The control unit's role in interpreting instructions has varied somewhat in the past. While the control unit is solely responsible for instruction interpretation in most modern computers, this is not always the case. Many computers include some instructions that may only be partially interpreted by the control system and partially interpreted by another device. This is especially the case with specialized computing hardware that may be partially self-contained. For example, EDVAC (Electronic Discrete Variable Automatic Computer), the first modern stored-program computer to be designed, used a central control unit that interpreted only four instructions. All of the arithmetic-related instructions were passed on to its arithmetic unit and further decoded there.
14. ^ Instructions often occupy more than one memory address, so the program counter usually increases by the number of memory locations required to store one instruction.
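A minimal sketch of this fetch behaviour, using a hypothetical instruction set in which each one-byte opcode is followed by a known number of operand bytes:

```python
# Sketch of why the program counter advances by an instruction's size.
# The instruction set here is hypothetical: each opcode occupies one
# memory location and is followed by a fixed number of operand locations.

OPERAND_BYTES = {"NOP": 0, "INC": 1, "LOAD": 2, "JUMP": 2}

def fetch_trace(memory):
    pc = 0
    trace = []
    while pc < len(memory):
        opcode = memory[pc]
        size = 1 + OPERAND_BYTES[opcode]   # opcode + its operands
        trace.append((pc, opcode, size))
        pc += size                         # advance past the WHOLE instruction,
    return trace                           # not just one memory location

# A 3-location LOAD, a 2-location INC, then a 1-location NOP.
trace = fetch_trace(["LOAD", 0x10, 0x20, "INC", 0x03, "NOP"])
```

If the program counter advanced by only one location per step, it would land in the middle of an instruction's operands and misinterpret them as opcodes.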
15. ^ Flash memory also may only be rewritten a limited number of times before wearing out, making it less useful for heavy random access usage. (Verma 1988)
16. ^ However, it is also very common to construct supercomputers out of many pieces of cheap commodity hardware; usually individual computers connected by networks. These so-called computer clusters can often provide supercomputer performance at a much lower cost than customized designs. While custom architectures are still used for most of the most powerful supercomputers, there has been a proliferation of cluster computers in recent years. (TOP500 2006)
17. ^ Most major 64-bit instruction set architectures are extensions of earlier designs. All of the architectures listed in this table existed in 32-bit forms before their 64-bit incarnations were introduced.

## References

• a  Kempf, Karl (1961). "Historical Monograph: Electronic Computers Within the Ordnance Corps". Aberdeen Proving Ground: United States Army.
• a  Phillips, Tony (2000). The Antikythera Mechanism I. American Mathematical Society. Retrieved on 2006-04-05.
• a  Shannon, Claude Elwood (1940). "A symbolic analysis of relay and switching circuits". Massachusetts Institute of Technology.
• a  Digital Equipment Corporation (1972). PDP-11/40 Processor Handbook (PDF). Maynard, MA: Digital Equipment Corporation.
• a  Verma, G.; Mielke, N. (1988). "Reliability performance of ETOX based flash memories". IEEE International Reliability Physics Symposium.
• a  Meuer, Hans; Strohmaier, Erich; Simon, Horst; Dongarra, Jack (2006-11-13). Architectures Share Over Time. TOP500. Retrieved on 2006-11-27.
• Stokes, Jon (2007). Inside the Machine: An Illustrated Introduction to Microprocessors and Computer Architecture. San Francisco: No Starch Press. ISBN 978-1-59327-104-6.

## computer

### -noun

1. (computing) A programmable device that performs mathematical calculations and logical operations, especially one that can process, store and retrieve large amounts of data very quickly.
2. (dated) A person employed to perform computations.
© 2009 citizendia.org; parts available under the terms of GNU Free Documentation License, from http://en.wikipedia.org