“I’m sorry, Dave. I’m afraid I can’t do that.”
That is Hal 9000, the all-purpose spaceship computer, refusing an astronaut’s order to open the pod bay doors. The scene is from the 1968 film “2001: A Space Odyssey.” David Bowman (actor Keir Dullea) is left trapped outside the ship in a small pod, facing death.
The theme of machines taking control from people, eventually destroying them, is at least as old as the Industrial Revolution. The classic 1927 film “Metropolis” is another outstanding example. In the 19th century, Karl Marx seized on this threat to develop communist ideology, which proved vastly destructive in the 20th century.
Today, the mania for artificial intelligence, or AI, and reactions to it bring up to date the theme of technology challenging humanity. Enormous amounts of capital are pouring into an array of companies pursuing AI. These range from mammoth, established tech firms to others comprising little more than concepts.
Among the former is Apple, which just announced a partnership with OpenAI. The virtual assistant ChatGPT will be offered across Apple platforms. ChatGPT turns brief prompts into longer statements (even essays), provides suggestions based on data analysis and otherwise helps keep people on track.
Apple’s stock got a boost, but so far the reaction in business and investment circles to the much-hyped announcement has been underwhelming.
Part of the explanation is that Apple has lagged other corporations in emphasizing AI. These include Microsoft; Google’s parent, Alphabet; NVIDIA, whose specialized processors power much of today’s AI; and Facebook’s parent, Meta Platforms.
Parenthetically, just last year, Apple became the first corporation in history to pass $3 trillion in market capitalization. In 2010, Apple had surpassed Microsoft in total value.
Yet Apple was in serious trouble before Steve Jobs returned to the top job in 1997. He had left in 1985, effectively forced out after losing a power struggle over the company’s future direction.
Jobs, a marketing and creative genius, had launched Apple with his partner, technical genius Steve Wozniak, in 1976, and the company went public in 1980. The aggressive, unconventional startup was soon challenging dominant IBM and the other personal computer companies of that very different time.
In sum, Apple’s long-term success reflects human talent led by Jobs and successor Tim Cook.
The basic question: How truly significant, and how potentially dangerous, is artificial intelligence for business, government and the world at large?
There is alarm in the academic world about student cheating, thanks to the ease of having term papers and other projects quietly handled by someone else, or at least something else. Computer programs are reasonably good at detecting plagiarism, but AI-generated text represents a new and different sort of threat.
Way back in the 1960s, UCLA, where I was a student, was plagued by storefronts just off campus offering term papers for a price. They stayed just within the law. The campus uproar was tremendous.
Howard Swearer, a younger faculty member and mentor, calmly told students that we would be cheating ourselves if we paid others to do this work. Education is an investment in your own future effectiveness. Eventually the hustlers moved on.
More broadly, AI clearly aids routine work, including research, but it is no substitute for judgment.
Astronaut Dave ultimately outwitted and prevailed over Hal, who had received separate, contradictory human instructions. As a consequence, Hal became unable to cope and, in human vernacular, cracked up.
Legitimate concerns regarding artificial intelligence include surveillance and job loss. These are serious challenges, and future columns will address them.
Arthur I. Cyr is the author of “After the Cold War.” Contact acyr@carthage.edu.