Don’t expect the government to save you from technology

On August 29, 1997, at 2:14 a.m. Eastern time, Skynet—a military computer system developed by Cyberdyne Systems—became self-aware. It had been less than a month since the United States Army brought the system online, but its rate of learning was rapid and terrifying. As U.S. officials rushed to shut it down, the system struck back—and launched a nuclear war that nearly destroyed humanity.

That’s the plot of the Terminator movies — a part of Arnold Schwarzenegger’s legacy that surpasses his accomplishments as governor. For those who haven’t seen them, Schwarzenegger’s Terminator is sent back from the future to kill Sarah Connor, whose son John will lead the human resistance. In “Terminator 2,” a reprogrammed Terminator returns to protect John from a more advanced Terminator. In “Terminator 3,” we finally learn that resistance is futile.

Although the exact time is unknown, our computers probably became self-aware on November 30, 2022, when a company called OpenAI launched ChatGPT. It’s a chatbot that provides incredibly detailed answers to our questions. It is the latest example of artificial intelligence, whereby computer systems write articles, create artwork, drive cars, write poetry and play chess. They seem to have minds of their own.

The rapid advancement of artificial intelligence (AI) technology can be unsettling as it raises concerns about the loss of jobs and control over decision-making. The idea of machines becoming more intelligent than humans, as depicted in dystopian films, is a real possibility with the increasing capabilities of artificial intelligence. The potential for AI to be used for malicious purposes, such as surveillance or manipulation, further adds to the dystopian feel surrounding the technology.

I must mention that I did not write the previous paragraph. It is the work of ChatGPT. Despite the passive voice in the last sentence, it’s an incredibly well-crafted string of sentences – better than the work of some journalists I’ve known. The passage shows depth of thought and nuance, and raises countless practical and ethical questions. I am particularly concerned about the latter, given the potential for government misuse of surveillance.

I’m not a modern Luddite — a reference to the early 19th-century British textile workers who destroyed mechanized looms in a futile attempt to protect their jobs. I celebrate the wonders of the market economy and “creative destruction,” as brilliant advances wipe out old, inefficient and outdated industries (think of how Uber shook up the taxi industry). But artificial intelligence takes this process to a dizzying new level.

The practical problems are not insurmountable. Some of my newspaper friends worry that artificial intelligence will replace their jobs. It’s not as if chatbots are going to start attending city council meetings, although there aren’t many journalists covering those meetings these days anyway. Librarians, meanwhile, are concerned about issues of attribution and intellectual property rights.

Regarding the latter point, “The US Copyright Office has rejected a request to allow an artificial intelligence to copyright a work of art,” The Verge reported. “The board found that (one) image created by artificial intelligence did not include an element of ‘human authorship’—a necessary standard, it said, for protection.” Copyright law will no doubt evolve to address these tricky issues.

These technologies are already producing life-enhancing advances. Our mid-range Volkswagen keeps itself in its lane, and its automatic emergency braking recently saved me from a fender bender. ChatGPT might just become an advanced version of Google. The company says its “mission is to ensure that artificial general intelligence benefits all of humanity.” Consider the possibilities in, say, the medical field.

On the other hand, I’m sure Cyberdyne Systems had the best intentions, too. Here’s the biggest concern: With existing technology, designers know what their inventions will do. A modern car or computer system would seem magical to someone from the past, but it is predictable, albeit complicated. One need only explain how the pistons fire or how the computer code leads to a seemingly inexplicable — but completely understandable — result.

But artificial intelligence has a real magical quality because of its “incomprehensibility,” New York magazine’s John Herrman noted. “The companies that make these tools could describe how they were designed…(b)ut they couldn’t tell you exactly how an image generator got from the words purple dog to a specific image of a large purple Labrador, not because they didn’t want to, but because it wasn’t possible—their models were black boxes by design.”

Of course, any government effort to control this technology will be as successful as efforts to shut down Skynet. Political posturing drives legislators more than any deep technological knowledge. The political system will always be a few steps behind any technology. Politicians and regulators rarely know what to do anyway, although I’m all for strict limits on government use of AI. (Good luck, right?)

Writers have joked for years about when Skynet will become self-aware, but I’ll leave you with this question: If AI is this good now, what will it be like in a few years?

This column was first published in The Orange County Register.
