Valley News – Editorial: Artificial Intelligence and the Written Word

Posted: 12/27/2022 15:05:34

Modified: 12/27/2022 15:02:38

Years ago, when the digital revolution was beginning to take hold, the Luddite wing of the Valley News editorial board speculated that one day it would be possible to enter an editorial topic and a position to be taken into a computer, whereupon a software program would immediately produce 500 words of well-argued prose. Brave New World!

For better or worse (we leave it to our readers to decide which), that day may be just around the corner. In late November, OpenAI, a nonprofit research lab, launched ChatGPT, an experimental online tool that responds to prompts with realistic representations of human-produced writing. Its responses feature "sometimes astonishingly convincing grammar and syntax," The Boston Globe reported. We are far from competent to explain exactly how this happens, but OpenAI helpfully notes that "we trained this model using reinforcement learning from human feedback (RLHF)."

In any event, the Globe tested it earlier this month, asking the chatbot to write news stories based on completely fictional events in Boston. The newspaper published the results, and to our eyes and ears, ChatGPT's stories were remarkably plausible responses to prompts like: "Write a Boston Globe article about Boston accents suddenly disappearing overnight."

Judge for yourself from this excerpt: "According to experts, the sudden disappearance of the Boston accent is probably due to a combination of factors. Some speculate that it may be the result of a collective subconscious desire among Bostonians to shed the 'dumb Bostonian' stereotype that has long been associated with the accent. Others point to the growing influence of the media and the homogenizing effects of globalization, which may have led to a more standardized way of speaking in the city." The only false note we detected is the reference to "dumb Bostonian," for which "townie" might be substituted.

To be clear, ChatGPT is still a research work in progress. Its developer notes several limitations at this point, including: "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers"; and, "The model is often excessively verbose and overuses certain phrases." (Then again, any completely honest editorialist would admit to occasionally committing the same mortal and venial sins against English usage and style.)

Critics have pointed to more serious concerns. One is that as the chatbot develops, it could be used to systematically generate and spread misinformation or outright lies that people would have great difficulty recognizing as such. Many other problematic uses come to mind, including in academia. The Globe reports that Paul Kedrosky, an MIT fellow, recently compared the technology to "a virus (that) has been released without concern for the consequences."

If ChatGPT poses a threat to those of us who write (or type, depending on your point of view) for a living, it also presents a challenge: to write in a way that readers can distinguish from the computer-generated; to avoid clichés and the clichéd thinking that produces them; to express ourselves clearly and simply, in a distinctive yet accessible style; to stick to what is true, rather than what merely rings true.

But if opinion writers are destined to be eventually replaced by technology, we fervently pray that the program can be trained to keep in mind, as we try to do, the need for humility in this endeavor. For as the great Supreme Court Justice Oliver Wendell Holmes Jr. correctly observed, "Certitude is not the test of certainty. We have been cock-sure of many things that were not so."
