
Decentralism in Information Technology

George Gilder

            George Gilder (1939 – ) was born in Massachusetts and graduated from Phillips Exeter and Harvard. His biography describes him as “an American investor, author, economist, techno-utopian advocate, and co-founder of the Discovery Institute. His 1981 international bestseller, Wealth and Poverty, advanced a case for supply-side economics and capitalism during the early months of the Reagan administration.”

            Gilder was active in Republican politics from the 1960s to the late 1980s, when his interests turned to technology, especially the rapidly emerging world of computing. His first book in that field was Microcosm: The Quantum Revolution in Economics and Technology (1989), followed by several others, plus hundreds of speaking appearances and articles that made him a leading authority on the future of the microcosm, the telecosm, and most recently Artificial Intelligence.

            Gilder’s relationship to decentralism lies in his early recognition that the rise of the computer and its capabilities would enable more decision making at lower levels, as opposed to centralized command and control techniques. His 2020 book Gaming AI challenges the thinking of AI enthusiasts that a universal computerized brain can emerge to replace the neural networks of the human brain.

            The following is Chapter 26 of Microcosm: The Quantum Revolution in Economics and Technology (New York: Simon & Schuster, 1989), pp. 345–352.

The Law of the Microcosm

“In the worldwide rivalry in information technology, the greatest American advantage is that the U.S. system, for all its flaws, accords best with the inner logic of the microcosm. When Ferguson says that complexity increases by the square of the number of nodes, he makes a case not for centralized authority but for its impossibility. Beyond a certain point comes the combinatorial explosion: large software programs tend to break down faster than they are repaired. Large bureaucracies tend to stifle their own purposes.
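Ferguson’s point about nodes can be made concrete: among n fully interconnected nodes there are n(n−1)/2 distinct pairwise links, so wiring burden grows with the square of the node count. A minimal sketch (an editorial illustration, not from the original text):

```python
# Pairwise interconnections grow roughly with the square of the node count.
def pairwise_links(n: int) -> int:
    """Number of distinct point-to-point links among n nodes: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, pairwise_links(n))  # 10x the nodes means ~100x the links
```

Ten nodes need 45 links; a thousand nodes need nearly half a million, which is the combinatorial explosion the chapter describes.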

Endemic in most industrial organizations, private and public, this problem becomes deadly in the microcosm. The pace of change creates too many decisions, too many nodes, to be managed effectively in a centralized system.

Rather than revolting against the microcosm, the advocates of industrial policy should listen more closely to the technology. For what the technology is saying is that the laws of the microcosm are so powerful and fundamental that they restructure nearly everything else around them. As Mead discovered, complexity grows exponentially only off the chip. In the microcosm, on particular slivers of silicon, efficacy grows far faster than complexity. Therefore, power must move down, not up.

This rule applies most powerfully to the users of the technology. In volume, anything on a chip is cheap. But as you move out of the microcosm, prices rise exponentially. A connection on a chip costs a few millionths of a cent, while the cost of a lead from a plastic package is about a cent, a wire on a printed circuit board is 10 cents, backplane links between boards are on the order of a dollar apiece, and links between computers, whether on copper cables or through fiber-optic threads or off satellites, cost between a few thousand and millions of dollars to consummate. The result is that the efficiency of computing drops drastically as efforts are made to control and centralize the overall system.
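The cost ladder above spans roughly eleven orders of magnitude. A short sketch tabulating the chapter’s own 1989 figures (the exact dollar values below are approximations of the ranges the text gives):

```python
# The chapter's interconnection cost ladder, in approximate dollars per
# connection. These are the chapter's own 1989 estimates, not current prices.
COST_PER_CONNECTION = {
    "on-chip wire": 1e-8,           # "a few millionths of a cent"
    "package lead": 1e-2,           # "about a cent"
    "printed-circuit trace": 0.10,
    "backplane link": 1.00,
    "inter-computer link": 1e3,     # low end of "thousands to millions"
}

base = COST_PER_CONNECTION["on-chip wire"]
for level, cost in COST_PER_CONNECTION.items():
    print(f"{level:>22}: ${cost:>10.2e}  ({cost / base:.0e}x on-chip cost)")
```

Each step off the chip multiplies the cost of a connection by two to five orders of magnitude, which is the quantitative core of the argument for pushing complexity onto silicon.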

Provided that complexity is concentrated on single chips rather than spread across massive networks, the power of the chip grows much faster than the power of a host processor running a vast system of many computer terminals. The power of the individual commanding a single workstation—or small network of specialized terminals— increases far faster than the power of an overall bureaucratic system.

The chip designers, computer architects, and process engineers using these workstations—more potent by far than mainframes of a decade ago—are far less dependent on bureaucracy for capital and support than their predecessors. The more intellectual functionality placed on single chips and the fewer expensive interconnections, the more the power that can be cheaply available to individuals. The organization of enterprise follows the organization of the chip. The power of entrepreneurs using distributed information technology grows far faster than the power of large institutions attempting to bring information technology to heel.

Rather than pushing decisions up through the hierarchy, the power of microelectronics pulls them remorselessly down to the individual. This is the law of the microcosm. This is the secret of the new American challenge in the global economy.

The law of the microcosm has even invaded the famous Cray supercomputer, the pride of the Von Neumann line, still functioning powerfully in many laboratories around the globe. Judging by its sleek surfaces and stunning gigaflop specifications (billions of floating point operations per second), the Cray appears to be high technology. But then there is the hidden scandal of the “mat.” Remove the back panel and you will see a madman’s pasta of tangled wires. The capacity of these wires—electrons can travel just nine inches a nanosecond—is the basic limit of the technology.
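The nine-inches-a-nanosecond figure translates directly into a timing bound: if a signal must traverse a wire within one clock cycle, the wire’s length caps the clock rate. A back-of-the-envelope sketch (editorial illustration, using the chapter’s figure):

```python
# At ~9 inches per nanosecond, wire length bounds achievable clock rate.
SIGNAL_SPEED_IN_PER_NS = 9.0

def max_clock_ghz(wire_inches: float) -> float:
    """Upper bound on clock frequency if a signal must cross the wire
    in a single cycle. Period is in nanoseconds, so 1/period is GHz."""
    transit_ns = wire_inches / SIGNAL_SPEED_IN_PER_NS
    return 1.0 / transit_ns

print(max_clock_ghz(9.0))   # a 9-inch wire caps signaling at ~1 GHz
print(max_clock_ghz(36.0))  # a 3-foot wire in the "mat" caps it at ~0.25 GHz
```

The longer the wires in the mat, the slower the machine can run, which is why the wires, not the chips, are the binding constraint.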

The Mead mandate to economize on wire while proliferating switches has led inexorably to parallel architectures, in which computing jobs were distributed among increasingly large numbers of processors interconnected on single chips or boards and closely coupled to fast solid state memories. Such machines were often able to outperform Von Neumann supercomputers in various specialized applications. A symbolic victory came when HiTech, a parallel machine made of $100,000 worth of application-specific chips, challenged and generally outperformed the $15 million Cray Blitz supercomputer in the world chess championships for computers.

The very physics of computing dictates that complexity and interconnections—and thus computational power—be pushed down from the system into single chips, where they cost a few dollars. At IBM, chief scientist Ralph Gomory predicted in 1987 that within a decade the central processing units of supercomputers will have to be concentrated within a space of three cubic inches. The supercomputer core of the 1990s will be suitable for a laptop.

Following microcosmic principles in pursuing this goal, IBM in December 1987 joined the venture capitalists and sponsored the new supercomputer company headed by former Cray star Steven Chen, designer of the best-selling Cray X-MP. The Chen firm will use massive parallelism to achieve a hundredfold rise in supercomputer power.

Such huge gains were only the beginning. But there will be one key limit: stay on one chip or a small set of chips. As soon as you leave the microcosm, you will run into dire problems: decreases in speed and reliability, increases in heat, size, materials cost, design complexity, and manufacturing expense.

Above all, the law of the microcosm means that the computer will remain chiefly a personal appliance, not a governmental or bureaucratic system. Integration will be downward onto the chip, not upward from the chip. Small companies, entrepreneurs, individual inventors and creators, handicapped persons, all will benefit hugely. But large centralized organizations will lose relative efficiency and power.

This is nowhere more evident than in telecom systems, once a paradigm of the oligopoly so cherished by Ferguson and his allies. Still America’s most comprehensively organized business institution, the telecom network is being inexorably redesigned in imitation of the microcosmic devices that pervade it. Though afflicted by obtuse regulators fearful of monopoly, the telecom establishment is actually under serious attack from microelectronic entrepreneurs.

Once a pyramid controlled from the top, where the switching power was concentrated, the telephone system is becoming what Peter Huber of the Manhattan Institute calls a “geodesic” network. Under the same pressure Carver Mead describes in the computer industry, the telecommunications world has undertaken a massive effort to use cheap switches to economize on wire. Packet switching systems, private branch exchanges, local area networks, smart buildings, and a variety of intelligent switching nodes all are ways of funneling increasing communications traffic onto ever fewer wires.

As Huber explains, the pyramidal structure of the Bell system was optimal only as long as switches were expensive and wires were cheap. When switches became far cheaper than wires, a horizontal ring emerged as the optimal network. By 1988, the effects of this trend were increasingly evident. The public telephone network commanded some 115 million lines, while private branch exchanges held over 30 million lines. But the public system was essentially stagnant, while PBXs were multiplying at a pace of nearly 20 percent a year. PBXs, though, are now being eroded by still more decentralized systems. Increasingly equipped with circuit cards offering modem, facsimile, and even PBX powers, the 40 million personal computers in the United States can potentially function as telecom switches.
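At those rates, compound growth does the rest of the argument. Assuming the 1988 figures quoted above (115 million stagnant public lines, 30 million PBX lines growing near 20 percent a year), a rough projection of time to parity is an editorial extrapolation, not a claim from the text:

```python
# Compound-growth extrapolation from the chapter's 1988 figures.
import math

public_lines = 115e6   # essentially flat
pbx_lines = 30e6       # growing ~20 percent a year
growth = 1.20

# Years until pbx_lines * growth**t catches public_lines:
# t = log(public/pbx) / log(growth)
years = math.log(public_lines / pbx_lines) / math.log(growth)
print(round(years, 1))  # roughly seven to eight years to parity at that pace
```

The specific crossover date matters less than the shape of the curve: a stagnant base loses to any compounding challenger.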

Personal computers are not terminals, in the usual sense, but seminals, sprouting an ever-expanding network of microcosmic devices. AT&T and the regional Bell companies will continue to play a central role in telecommunications (as they are released from archaic national and state webs of regulation). But the fastest growth will continue to occur on the entrepreneurial fringes of the network. And inverted by the powers of the microcosm, the fringes will become increasingly central as time passes.

Nevertheless, the question remains, will the producers of the technology be as bound in their business tactics by the laws of the microcosm as their customers are? In the face of repeated Japanese victories in mass manufacturing of some essential components, mostly achieved through government-favored conglomerates, is it credible that entrepreneurs can be as powerful a force in the production of the technology as they are in its use?

The answer is yes. The industry must listen to its own technology, and the technology dictates that power gravitate to intellect and innovation over even the most admirable efficiency in mass production. The evolution of the industry will remorselessly imitate the evolution of the chip. As chip densities rise and suddenly inexpensive custom logic replaces memory, specialized computers-on-a-chip will become the prevailing product category. Even more than today, chip designers will be systems builders and systems builders will be specialists.

The world of the collapsing computer is a world everywhere impregnated with intelligence in the form of scores of thousands of unique designs. Even today, the “cowboy capitalists” of the U.S. industry reap far more than their fair share of value added and profits. As the power of custom designs grows and the relative importance of the general-purpose computer, patched together with general-purpose chips, declines, innovators and entrepreneurs will be more favored still. And these innovators and entrepreneurs will enjoy the same advantages of increasingly cheap and increasingly powerful design and production equipment that has been bequeathed to entrepreneurs throughout the economy.

 The advocates of industrial policy are enthralled by supercomputers as the key to competitiveness. Yet the leading U.S. supercomputer firm is Cray Research, a relatively small firm, deeply dependent on the new generation of semiconductor companies for key technologies. Gigabit Logic provides its superfast gallium arsenide logic devices; Performance supplies CMOS chips interfacing with the gallium arsenide devices; Micron furnishes 256K fast static RAM built to Cray specifications. “I don’t know what we’d do without the startups,” said Seymour Cray, a leading investor in Performance and fervent admirer of Micron.

The first successful billion-component chip will be designed, simulated, and tested on a massively parallel desktop supercomputer that will yield functionality far beyond its cost. Whatever the chip will do, those twenty Crays of computing power will make it incomparably more potent than current microprocessors, which take scores of designers to create and are built in a $150 million plant. It will be manufactured on a laser direct-write system or an X-ray stepper that will also far outproduce its predecessors in the functions it will offer per dollar.

Doubtless all the business magazines will still be speaking of the incredible cost of building and equipping a new semiconductor plant. But in functional output per dollar of investment, the fab of the year 2000 will be immensely more efficient than the “Goliath fab” making DRAMs today. By the law of the microcosm the industry will continue to become less capital-intensive, more intensive in the individual mastery of the promise of information technology.

Contrary to the usual assumption, even the need for standards favors entrepreneurs. Standards often cannot be established until entrepreneurs ratify them. Based on an operating system from a startup called Microsoft, the IBM PC, for example, did not become a prevailing architecture until thousands of entrepreneurial software and peripheral companies rose up to implement it. Similarly, Ethernet, launched at Xerox PARC by Robert Metcalfe, gained authority despite the opposition of IBM, when scores of companies, from SEEQ to Sun to DEC to Metcalfe’s 3Com itself, adopted and established it. Just as public utilities open the way for thousands of small firms to use electricity and other services, information industry standards emancipate entrepreneurs to make vital contributions to the economy.

Finally, if the doomsayers were correct, the United States would still be losing market share in electronics. But since 1985, U.S. firms have been steadily gaining market share in most parts of the computer industry and holding their own in semiconductors (adjusted for the rising yen).

The revolt against the microcosm remains strong among academic intellectuals. Many critics of capitalism resent its defiance of academic or social standards of meritocracy. William Gates of Microsoft left Harvard for good his sophomore year. The system often gives economic dominance to people who came to our shores as immigrants with little knowledge of English, people unappealingly small, fat, or callow, all the nerds and wonks disdained at the senior prom or the Ivy League cotillion. Some academics maintain a mind-set still haunted by the ghost of Marxism. But the entrepreneur remains the driving force of economic growth in the quantum era.

Rather than pushing decisions up through the hierarchy, the power of microelectronics pulls them remorselessly down to the individual. It applies not only to business organization but to the very power of the state as well.”

Writing 31 years later, Gilder admitted some mistaken predictions from Microcosm, but continued to believe that the power of the microcosm would make possible the decentralization of decision making in both business and government.

In his short treatise Gaming AI: Why AI Can’t Think but Can Transform Jobs (Seattle: Discovery Institute, 2020), Gilder emphatically countered the predictions of Artificial Intelligence enthusiasts who foresee AI growing to become an overarching universal brain with ever-diminishing guidance from the humans who created it.

All software and hardware, mathematical models and projections, computational simulations and logical extrapolations, depend on maps: translations of physical entities into symbols. Maps consist of distillations of objects into representations of them. The problem is that the map is not the territory. Whether in a mathematical equation or a mathematical model consisting of functions and equations, or in a neural network reflecting a sensorium of global measurements, AI supremacy assumes the essential identity of sufficiently refined maps and territories. AI is based on manipulating symbols as sufficient and reliable representations of their objects. By asserting that there is always a gap between the object and the symbol, Peirce foreshadowed the coming of the AI emperor and his new clothes. Whether a number or a word or alphanumeric code or an analog report from a sensor, the symbol is always intrinsically different from the object it designates or describes (page 38).

For all the grandiose talk of AI usurping brains, this requirement to imitate them provides a humbling lesson. There is a critical difference between programmable machines and programmers. The machines are deterministic and the programmers are creative. Robert Marks, of the Walter Bradley Center for Natural and Artificial Intelligence, explains the canonical example from biology:

Biologists in the mid-twentieth century were excited by the advent of computers that could simulate evolution. Millions of generations could be simulated in a few seconds. But evolution simulation on a computer is algorithmic. It requires computer code. Creativity is non-algorithmic and therefore incomputable… Design theorist William Dembski and I built on the No Free Lunch theorem, showing that the creative information added to an evolution program could be measured in bits. Computer simulations of popular evolutionary algorithms demonstrate that evolutionary programs need this active information. The programmer must contribute creativity to make the code work.
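Dembski and Marks quantify that contributed creativity as “active information.” A minimal sketch, assuming their published form I⁺ = log₂(q/p), where p is the probability that a blind search succeeds and q is the success probability once the programmer’s information is added:

```python
# Sketch of the Dembski-Marks "active information" measure (assumed form:
# I_plus = log2(q / p), the gap between assisted and blind search success).
import math

def active_information_bits(p_blind: float, q_assisted: float) -> float:
    """Bits of information the programmer contributed to the search."""
    return math.log2(q_assisted / p_blind)

# Illustration: a target a blind search finds one time in a million, but an
# assisted program finds half the time, embodies ~19 bits of active information.
print(round(active_information_bits(1e-6, 0.5), 1))
```

On this measure, the better an evolutionary program performs, the more information its programmer must have smuggled in, which is the point Marks is making.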

So the AI movement, far from replacing human brains, is going to find itself imitating them. Just as the time-space constraint requires computers to break up into distributed and parallel functions, computer programs like artificial intelligence will have to respond to a mandate for modularity. The brain demonstrates the superiority of the edge over the core: It’s not agglomerated in a few air-conditioned nodes but is dispersed far and wide, interconnected by myriad sensory and media channels. The test of the new global ganglia of computers and cables, worldwide webs of glass and light and air, is how readily they take advantage of unexpected contributions from free human minds in all their creativity and diversity. These high-entropy phenomena cannot even be readily measured by the metrics of computer science (page 46).

[Gregory] Chaitin’s “mathematics of creativity” suggests that in order to push the technology forward it will be necessary to transcend the deterministic mathematical logic that pervades existing computers. Information theory depicts creativity as high-entropy, unexpected bits. Deterministic systems supply low-entropy carriers for creative activity. But creativity itself cannot be deterministic without prohibiting the very surprises that define information and reflect real creation. Gödel dictates a mathematics of creativity.

Unfortunately, you can read a hundred books on artificial intelligence and machine learning without encountering a single serious engagement of these showstoppers from the giants of computer science and information theory. Instead, the exponents of “strong” AI offer triumphalist analogies. Existing AI robots are deemed precursors of a robotic imperium, like a lily pond that begins nearly empty of lily pads and fills up in an exponential swoosh. Following the earlier coinage of Henry Adams describing nineteenth-century energy technologies, Kurzweil dubs AI progress a “Law of Accelerating Returns.” At some point, the human epoch ends and an epoch of machine control ensues. However, this fashionable singularity scenario depends on a set of little-understood assumptions common in the artificial intelligence movement:

  • The Modeling Assumption: A computer can deterministically model a brain.
  • The Big Data Assumption: The bigger the dataset, the better. No diminishing returns to big data.
  • The Binary Reality Assumption: Reliable links exist between maps and territories, between computational symbols and their objects.
  • The Ergodicity Assumption: In the world, the same inputs always produce the same outputs.
  • The Locality Assumption: Actions of human agents reflect only immediate physical forces impinging directly on them.
  • The Digital Time Assumption: Time is objective and measured by discrete increments.

For the game of Go, all these assumptions are essentially true. Go is deterministic and ergodic; any specific arrangement of stones will always produce the same results, according to the rules of the game. The stones are at once symbols and objects; they are always mutually congruent: the map is the territory. The effects of moves are immediate and local, according to the definitions and rules of the game. The overall system is determinist and Newtonian, governed by a single scheme of predetermined and unchangeable logic, and a single universal clock. The existing state of play is always the cumulative result of moves in the past. Ergodicity is crucial to any predictive model. If the model itself generates a variety of outcomes, sure prediction is impossible. Unfortunately for prophets, the real world generates a huge multiplicity of outcomes from a tiny number of regularities (page 50).
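The ergodicity claim for Go can be shown in a few lines: a deterministic update rule returns the identical result for identical inputs, every time. A toy sketch (editorial illustration, not the actual rules of Go):

```python
# Illustration of the ergodicity assumption: a deterministic rule maps the
# same position and move to the same resulting position, without exception.
def apply_move(position: frozenset, move: tuple) -> frozenset:
    """Deterministic update: add a stone (a board coordinate) to the
    immutable set of occupied points and return the new position."""
    return position | {move}

start = frozenset({(3, 3), (4, 4)})
a = apply_move(start, (5, 5))
b = apply_move(start, (5, 5))
print(a == b)  # True: identical inputs, identical outputs
```

Gilder’s point is that the real world offers no such guarantee: repeat the “same” inputs and you may get a different outcome, which is exactly what breaks the singularity extrapolation.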
