ChatGPT is making waves, but what do AI chat tools mean for the future of writing?

How will artificial intelligence chat tools change the future of writing?

Artificial intelligence chat tools have the potential to change the way we write by making it more efficient and accurate. For example, they can assist with spelling and grammar checks, suggest synonyms, and even generate entire pieces of text. This technology could also make writing more accessible to people who struggle with language, such as those learning a new language or those with certain disabilities.

However, it is important to note that the use of AI in writing also raises ethical concerns about the authenticity and originality of the work being produced.

Full disclosure: The above two paragraphs were written by the artificial intelligence chat tool ChatGPT. The tool was developed by OpenAI – the software company that graced the world with DALL-E, one of the AI tools that went viral last year for its ability to turn text into images. Frightening images.

ChatGPT was released for public access on November 30, 2022. On opening the tool, the webpage notes that it can interact with users “in a conversational way.” Among ChatGPT’s talents are that it: “…can answer follow-up questions, admit mistakes, challenge incorrect premises, and reject inappropriate requests.”


Read more: AI Art: Proof that artificial intelligence is creative or not?


So, how does it work?

The tool is trained on vast amounts of text data, and generates its responses based on the prompts of the user.

Sounds simple, but it’s very easy for an AI to simply spew out nonsense without the proper “training.” ChatGPT has made headlines and caused a stir online precisely because it actually sounds pretty human. Maybe too human?

For comparison, other AI tools exist which let you play around with some of their parameters – this gives an insight into what’s going on behind the scenes.

An example is InferKit, created by Canadian developer Adam King. InferKit allows you to “dial up” the “sampling temperature”, increasing the randomness of the text. The result is often hilarious.
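To get a feel for what that dial does, here is a minimal Python sketch of temperature sampling. The vocabulary and scores are invented for illustration and have nothing to do with InferKit’s actual internals; the point is that dividing the model’s scores by a higher temperature flattens the probabilities, so unlikely words get picked more often.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0):
    """Pick a next word; higher temperature = flatter distribution = more randomness."""
    scaled = np.array(logits) / temperature
    probs = np.exp(scaled - scaled.max())   # softmax, numerically stable
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# Invented scores for the word after "The cat sat on the ..."
vocab = ["mat", "chair", "moon", "theorem"]
logits = [4.0, 2.5, 0.5, -1.0]

for t in (0.2, 1.0, 2.0):
    picks = [vocab[sample_next_token(logits, t)] for _ in range(8)]
    print(f"temperature={t}:", " ".join(picks))
```

At a low temperature the sketch almost always prints “mat”; dialled up to 2.0, “moon” and “theorem” start creeping in – which is roughly why high-temperature text reads as entertaining nonsense.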

ChatGPT is based on what is known as a Generative Pre-trained Transformer (GPT) architecture. This essentially means that the software uses deep learning algorithms to analyse and generate text. The model uses huge volumes of data from the internet to “understand” the nuances of natural language, as produced by humans.

It analyses input text by breaking it up into smaller components, called tokens – typically words or fragments of words. It then builds its response one token at a time, at each step predicting what is most likely to come next.
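As a rough illustration of that word-by-word loop (this is not OpenAI’s tokeniser or model – the “model” below is just a hand-written lookup table, and splitting on spaces stands in for real tokenisation):

```python
import random

# A hand-written lookup table stands in for the neural network; the real model
# learns these probabilities from huge volumes of internet text.
def toy_model(tokens):
    table = {
        ("sat", "on"): {"the": 0.9, "a": 0.1},
        ("on", "the"): {"mat": 0.6, "chair": 0.3, "moon": 0.1},
    }
    return table.get(tuple(tokens[-2:]), {".": 1.0})

def generate(prompt, steps=3):
    tokens = prompt.lower().split()          # crude stand-in for tokenisation
    for _ in range(steps):
        dist = toy_model(tokens)             # probabilities for the next token
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("The cat sat on"))
```

The real thing works on the same loop, just with a vocabulary of tens of thousands of tokens and probabilities produced by a deep neural network rather than a lookup table.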

What sets ChatGPT apart is that it was trained through Reinforcement Learning from Human Feedback (RLHF).

Human AI trainers fine-tuned the initial ChatGPT model by “playing both sides” of the interaction and giving the AI feedback on which of its responses were most appropriate. A “reward” model trains the AI to recognise when it has produced human-sounding responses.
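Here is a very rough sketch of the reward-model idea, with a hand-written scoring rule standing in for the learned model. In reality the reward model is itself a neural network trained on the human trainers’ rankings, and the fine-tuning uses a reinforcement-learning method (OpenAI describes using proximal policy optimisation), none of which is shown here.

```python
# Stub reward model: the real one is trained on rankings that human AI
# trainers gave to candidate replies, not on hand-written rules like these.
def reward_model(prompt, reply):
    score = 0.0
    if reply.strip().endswith((".", "!", "?")):
        score += 1.0                          # reads like a finished sentence
    if any(word in reply.lower() for word in prompt.lower().split()):
        score += 1.0                          # loosely on topic
    return score

prompt = "What is the capital of France?"
candidates = ["paris", "The capital of France is Paris."]

ranked = sorted(candidates, key=lambda r: reward_model(prompt, r), reverse=True)
print("Preferred reply:", ranked[0])
# During fine-tuning, the chatbot is nudged so that replies the reward model
# scores highly become more probable.
```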

The developers note some snags in the model: occasionally nonsensical responses, high sensitivity to small tweaks in the input phrase, repetition, guessing at the user’s intent when prompts are ambiguous, and a tendency to respond to inappropriate requests, including those with harmful instructions or those that elicit biased behaviour from the AI.


Read more: Promise and problems: how do we implement artificial intelligence in clinical settings and ensure patient safety?


And the potential for misuse is among the most serious of public concerns about ChatGPT and other language-processing AI software.

In academic writing, for example, ChatGPT or some equivalent tool might be used by a student to produce an essay. The student has not done the work and has essentially plagiarised. But it would be very difficult to prove, as ChatGPT is specifically designed to sound human.

Edward Tian, a 22-year-old student at Princeton University, has developed an app which he claims can “quickly and efficiently” tell if ChatGPT was the author of a student’s essay.

OpenAI partnered with Georgetown University’s Center for Security and Emerging Technology and the Stanford Internet Observatory to compile a report focused on the potential misuses of AI language software, as well as possible mitigation methods.

Of much greater concern than a plagiarised university homework assignment is the potential use of AI language tools to spread misinformation.

A statement published on OpenAI’s website says: “We believe that it is critical to analyse the threat of AI-enabled influence operations and outline steps that can be taken before language models are used for influence operations at scale. We hope our research will inform policymakers that are new to the AI or disinformation fields, and spur in-depth research into potential mitigation strategies for AI developers, policymakers, and disinformation researchers.”

For now, it appears artificial intelligence is here to stay – whether we like it or not.

It certainly has its uses. As my learned friend, ChatGPT, said at the outset of this article, it has the potential to aid writing as well as make language more accessible. But it seems legislation will have to catch up to the rapidly developing technology to avoid misuse.

In the meantime, you can rest assured that I wrote this article all on my own. Or did I?
