Published October 7, 2014, Los Angeles Daily Journal – In its quest to be the first manufacturer to develop an autonomous car, General Motors designed the “Highways & Horizons” pavilion, a public display of self-driving cars that promised to change the way we think of private transportation. The exhibit represented “almost every type of terrain in America,” illustrating “how a motorway system may be laid down over the entire country — across mountains, over rivers and lakes, through cities and past towns — never deviating from a direct course and always adhering to the four basic principles of highway design: safety, comfort, speed and economy.”

The year was 1939, and the pavilion — dubbed “General Motors Futurama” — was presented at the New York World’s Fair. Experts of the day predicted that automated cars would be commonplace by 1960, forever displacing traditional motoring.

We think of the autonomous car as a recent invention, brought on by the power of the microchip, cloud-based computing and the Internet, but in truth car makers have been waxing on about self-driving automobiles for some 75 years, always with the same conclusion: They are about 20 years out.

At the 1962 Seattle World’s Fair, GM introduced the Firebird III concept car, which had an “electronic guide system [that] can rush it over an automatic highway while the driver relaxes.” Today, the car maker is working on its Super Cruise Cadillac, which would allow drivers to take their hands off the wheel as the car travels in its lane, automatically slowing down or speeding up depending on traffic. Its engineers believe that self-driving cars will become available by the end of the next decade — the same prediction made back in 1962.

Much legislative ink has been consumed on the issue as well, as nations race to be the first to develop the driverless car. In 1991, Congress enacted the Intermodal Surface Transportation Efficiency Act, which instructed the U.S. Department of Transportation to “demonstrate an automated vehicle and highway system by 1997.” That program was followed by the Transportation Equity Act for the 21st Century in 1998, the Safe, Accountable, Flexible, Efficient Transportation Equity Act in 2005, and the Moving Ahead for Progress in the 21st Century Act in 2012. Yet despite the expenditure of billions in public revenue, none of these acts achieved the goal GM set out in 1939 of creating an automated highway system.

The goal of having autonomous cars is undeniably pure: They would allow the young or infirm to travel with independence, they would eliminate human error and help avoid accidents, and they would greatly reduce traffic congestion. But is this an achievable goal? More to the point, can a machine ever duplicate the delicate finesse involved in exercising judgment, or interpret the social cues required in responsible driving?

Interestingly, in 2012, Stanford University undertook an all-out effort to build an autonomous race car, on the theory that a self-driving car should first be designed to reach the limits of physics and mathematics, then scaled back for daily commuting. The university’s effort produced an autonomous race car capable of 150 mph and, guided by its algorithms, of computing the theoretically fastest line a car could take around the track. When the university pitted the race car against professional drivers, however, the car could not beat them, even though, on paper, its line was mathematically unbeatable.

Even more interesting were the results of the studies the university conducted on the brain waves of the professional drivers. The studies revealed that the drivers emitted very few theta brainwaves (associated with heavy cognitive effort) and almost exclusively alpha brainwaves (emitted when the brain is at rest), even as they took the race car to the outer limits of physics. The study concluded that even at the highest level of performance, the human exercise of judgment in driving is largely a reflex activity, and that perhaps the goal of autonomous driving should be to assist drivers rather than replace them.

Nevertheless, innovators and lawmakers are still pressing ahead to develop the self-driving car. Perhaps most famously, Google is developing its autonomous car, powered by software called Google Chauffeur. Following substantial lobbying efforts by Google, on June 29, 2011, Nevada passed a law permitting the operation of autonomous cars within the state. In May 2012, the Nevada Department of Motor Vehicles issued the first license for an autonomous car to a Toyota Prius modified with Google’s experimental driverless technology. Since then, three additional states (California, Florida and Michigan) and the District of Columbia have passed laws allowing driverless cars.

Still, pundits have concerns about the safety of driverless cars. Recall that not so long ago Toyota was required to pay a $1.2 billion fine for covering up unintended acceleration in certain models of its vehicles. Officially, the root cause was never identified, but many industry experts believe it was a software glitch. Whatever the cause of the Toyota safety issue, the event underscores how glitchy software can be. Consider the trillions of dollars that have been invested in the personal computing industry, and still we deal with computers freezing and software crashing all too frequently.

At a 2014 symposium on automated vehicles in San Francisco, 500 industry experts were asked about the viability of autonomous cars. Asked when they would trust a fully robotic car to take their children to school, more than half said 2030 at the very earliest. A fifth said not until 2040, and roughly one in 10 said “never.”

It is undeniable that automated cars, if they ever actually become a reality, will reduce human error in daily driving. A computer simply won’t fall asleep at the wheel, or drive home from a party when it shouldn’t. But can a computer ever outperform a human in judgment-related tasks, or successfully interpret the facial cues of a nearby motorist? And when an accident occurs with a self-driving car — and it will occur — will it not be harder to understand and reconcile the computer-generated mistake than that of our human counterparts?

We have the ability to land a commercial airliner with an automated system. It is really a function of physics, with little that can go wrong. Yet at the point when it matters most, we make a conscious decision to turn off the autopilot and return the controls to a person who can reason, judge and exercise discretion. We should employ that same methodology in daily driving. Fortunately for all drivers on the road, the automated car is, and always will be, 20 years out.