Edward Fredkin, Who Saw the Universe as One Big Computer, Dies at 88

Edward Fredkin, who despite never having graduated from college became an influential professor of computer science at the Massachusetts Institute of Technology, a pioneer in artificial intelligence and a maverick scientific theorist who championed the idea that the entire universe might function like one big computer, died on June 13 in Brookline, Mass. He was 88.

His death, in a hospital, was confirmed by his son Richard Fredkin.

Fueled by a seemingly limitless scientific imagination and a blithe indifference to conventional thinking, Professor Fredkin charged through an endlessly mutating career that at times seemed as mind-warping as the iconoclastic theories that made him an intellectual force in both computer science and physics.

“Ed Fredkin had more ideas per day than most people have in a month,” Gerald Sussman, a professor of electrical engineering and a longtime colleague at M.I.T., said in a phone interview. “Most of them were bad, and he would have agreed with me on that. But out of those, there were good ideas, too. So he had more good ideas in a lifetime than most people ever have.”

After serving as a fighter pilot in the Air Force in the early 1950s, Professor Fredkin became a renowned, if unconventional, scientific thinker. He was a close friend and intellectual sparring partner of the celebrated physicist Richard Feynman and the renowned computer scientist Marvin Minsky, a trailblazer in artificial intelligence.

An autodidact who left college after a year, he nonetheless became a full professor of computer science at M.I.T. at 34. He later taught at Carnegie Mellon University in Pittsburgh and at Boston University.

Not content to confine his energies to the ivory tower, Professor Fredkin in 1962 founded a company that built programmable film readers, allowing computers to analyze data captured by cameras, such as Air Force radar information.

That company, Information International Incorporated, went public in 1968. With his new fortune, he bought a Caribbean island in the British Virgin Islands, to which he traveled in his Cessna 206 seaplane. The island lacked potable water, so Professor Fredkin developed a reverse-osmosis technology to desalinate seawater, which he turned into another business.

He eventually sold the property, Mosquito Island, to the British billionaire Richard Branson for $25 million.

Professor Fredkin’s life was filled with paradoxes, so it is only fitting that he was credited with his own. Fredkin’s paradox, as it is known, posits that when one is deciding between two options, the more similar they are, the more time one spends fretting about the decision, even though the difference in choosing one or the other may be insignificant. Conversely, when the difference is more substantial or meaningful, one is likely to spend less time deciding.

As an early researcher in artificial intelligence, Professor Fredkin foreshadowed the current debates about hyper-intelligent machines a half-century ago.

“It requires a combination of engineering and science, and we already have the engineering,” Professor Fredkin said in a 1977 interview with The New York Times. “In order to produce a machine that thinks better than man, we don’t have to understand everything about man. We still don’t understand feathers, but we can fly.”

As a starting point, he helped pave the way for machines to checkmate the Bobby Fischers of the world. A developer of an early chess-processing system, Professor Fredkin in 1980 created the Fredkin Prize, a $100,000 award offered to whoever could develop the first computer program to win the world chess championship.

In 1997, a team of IBM programmers did just that, taking home the six-figure bounty when their computer, Deep Blue, beat Garry Kasparov, the world chess champion.

“There has never been any doubt in my mind that a computer would ultimately beat a reigning world chess champion,” Professor Fredkin said at the time. “The question has always been when.”

Edward Fredkin was born on Oct. 2, 1934, in Los Angeles, the youngest of four children of Russian immigrants. His father, Manuel Fredkin, ran a chain of radio stores that failed during the Great Depression. His mother, Rose (Spiegel) Fredkin, was a pianist.

A cerebral and socially awkward youth, Edward avoided sports and school dances, preferring to lose himself in hobbies like building rockets, designing fireworks and dismantling and rebuilding old alarm clocks. “I always got along well with machines,” he said in a 1988 interview with The Atlantic Monthly.

After high school, he enrolled at the California Institute of Technology in Pasadena, where he studied with the Nobel Prize-winning chemist Linus Pauling. Lured by his desire to fly, however, he left school in his sophomore year to join the Air Force.

During the Korean War, he trained to fly fighter jets. But his prodigious skills with mathematics and technology landed him work on military computer systems instead of in combat. The Air Force eventually sent him to M.I.T. Lincoln Laboratory, a wellspring of technological innovation funded by the Pentagon, to further his education in computer science.

It was the start of a long tenure at M.I.T., where in the 1960s he helped develop early versions of multiple-access computers as part of a Pentagon-funded program called Project MAC. That program also explored machine-aided cognition, an early investigation into artificial intelligence.

“He was one of the world’s first computer programmers,” Professor Sussman said.

In 1971, Professor Fredkin was chosen to direct the project. He became a full-time faculty member shortly thereafter.

As his career developed, Professor Fredkin continued to challenge mainstream scientific thinking. He made major advances in the field of reversible computing, an esoteric area of study combining computer science and thermodynamics.

With a pair of innovations — the billiard-ball computer model, which he developed with Tommaso Toffoli, and the Fredkin gate — he demonstrated that computation is not inherently irreversible. Those advances suggested that computation need not consume energy by overwriting intermediate results, and that it is theoretically possible to build a computer that consumes no energy and produces no heat.
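
The Fredkin gate at the heart of that work is easy to describe: it is a controlled swap of three bits in which the first bit, when set to 1, exchanges the other two. A short Python sketch (purely illustrative, not drawn from Professor Fredkin’s own work) makes the key properties concrete: the gate is its own inverse, and it conserves the number of 1s, much as a billiard-ball collision conserves the balls.

    def fredkin_gate(c, a, b):
        # Controlled swap: when the control bit c is 1, the other two bits
        # trade places; when c is 0, all three pass through unchanged.
        if c == 1:
            return c, b, a
        return c, a, b

    # Applying the gate twice restores every possible input, so no
    # information is ever destroyed; that is the sense in which the
    # computation is reversible.
    for bits in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
        out = fredkin_gate(*bits)
        assert fredkin_gate(*out) == bits
        # Like colliding billiard balls, the gate also conserves the
        # number of 1s flowing through it.
        assert sum(out) == sum(bits)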

But none of his insights stoked more debate than his famous theories on digital physics, a niche field in which he became a leading theorist.

His universe-as-one-giant-computer theory, as described by the author and science writer Robert Wright in The Atlantic Monthly in 1988, is based on the idea that “information is more fundamental than matter and energy.” Professor Fredkin, Mr. Wright said, believed that “atoms, electrons and quarks consist ultimately of bits — binary units of information, like those that are the currency of computation in a personal computer or a pocket calculator.”

As Professor Fredkin was quoted as saying in that article, DNA, the fundamental building block of heredity, is “a good example of digitally encoded information.”

“The information that implies what a creature or a plant is going to be is encoded,” he said. “It has its representation in the DNA, right? OK, now, there is a process that takes that information and transforms it into the creature.”

Even a creature as ordinary as a mouse, he concluded, “is a big, complicated informational process.”
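
One way to picture that claim, offered here only as a toy illustration and not as a model Professor Fredkin published, is a miniature universe consisting of nothing but a row of bits updated by a purely informational rule.

    def step(cells):
        # Each cell's next value is the XOR (parity) of its two neighbors,
        # with the row of bits treated as a closed ring.
        n = len(cells)
        return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

    universe = [0] * 15
    universe[7] = 1  # a single "particle" of information

    for _ in range(8):
        print("".join(".#"[bit] for bit in universe))
        universe = step(universe)

Everything that happens in this little world, including the lone bit unfolding into ever more intricate patterns, follows entirely from the bits and the rule, which is the flavor of the claim that information sits beneath matter and energy.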

Professor Fredkin’s first marriage, to Dorothy Fredkin, ended in divorce in 1980. In addition to his son Richard, he is survived by his wife, Joycelin; a son, Michael, and two daughters, Sally and Susan, from his first marriage; a brother, Norman; a sister, Joan Entz; six grandchildren; and one great-grandchild.

By the end of his life, Professor Fredkin’s theory of the universe remained fringe, if intriguing. “Most of the physicists don’t think it’s true,” Professor Sussman said. “I’m not sure if Fredkin believed it was true, either. But certainly there’s a lot to learn by thinking that way.”

His early views on artificial intelligence, by contrast, seem more prescient by the day.

“In the distant future we won’t know what computers are doing, or why,” he told The Times in 1977. “If two of them converse, they’ll say in a second more than all the words spoken during all the lives of all the people who ever lived on this planet.”

Even so, unlike many current doomsayers, he did not feel a sense of existential dread. “Once there are clearly intelligent machines,” he said, “they won’t be interested in stealing our toys or dominating us, any more than they would be interested in dominating chimpanzees or taking nuts away from squirrels.”
