LLMs (AI) as a Writer


There are a couple of communities in which I’m involved where the conversation about Large Language Model “AIs” comes up fairly often. More and more, I’m finding it tiresome – people dig in their heels, adopt absolutist attitudes, and very soon names are being called and mud is being slung.

Yawnfest. Seriously.

I’m writing here, partly as an exercise for myself to increase my own understanding of my viewpoint on the subject, partly to communicate my stance in a controlled, non-conversational/argumentative environment.

First, I have a problem with the use of the letters “AI” as a catchall. The term is now so overloaded that it’s as meaningless, by itself, as “Football”. You either need to know the person talking well enough that you don’t need to ask which one they’re talking about, infer which one they’re talking about by context, or ask the question.

In this particular instance, I am talking about Large Language Models, primarily those akin to (and including) ChatGPT. This seems to be what most people are most commonly referring to when they use the term “AI”.

There are other things that get called AI. Those, in general terms, are not what I’m talking about.

There are ethical concerns with these models of AI. There are instances of this technology that reduce or mitigate some of these concerns. Nevertheless, these concerns exist.

Here’s a non-comprehensive list. These are the main ones that concern me. There are others that concern me less; there are also (probably) others to which I am oblivious.

  • The corpus of material on which (some of) these LLMs have been trained was not wholly sourced in an ethical manner: those who contributed did so without prior knowledge, consent, or compensation. This means that those who use these facilities are, in effect, benefiting from stolen goods.
  • There are environmental concerns regarding the amount of power required to run the metric shit-ton of computers required.

There are also practical/social concerns.

  • LLMs aren’t smart, they’re statistical averaging machines. They promote mediocrity.
  • ‘AI’ is the new ‘Crypto’, an industry full of TechDudeBros (caveat: I’ve spent much of my professional life in this space, I know these people, for they are me – sort of).
  • Relying on machines to be smart means humans are encouraged to be less smart.
  • LLMs have implicit biases which may be regarded as pushing one or more agendas.

I hereby acknowledge all of the above concerns. I’m not dismissing them. I’m not handwaving them away.

I also acknowledge that I could have stated them with more accuracy.

If I wanted this post to be about ten times as long as it is.

Which I don’t.

Modern life has an abundance of ethical concerns. The somewhat scarce metals used in so many types of technology, metals that may or may not be sourced in ways I find wholly acceptable. Imported foodstuffs, and the carbon dioxide created by their transport. Meat, in that animals are killed to obtain it, and arguably farmed in ways that are less than humane. Eggs, dairy, honey, etc.: exploitation of animals right there. I pay taxes to support a government that makes choices I don’t support; I toil for a corporation that does things antithetical to my worldview, but they pay me, so I shut up, because I like having somewhere to live and something to eat.

The company that made my motorbike exploited prisoners of war during World War II.

I’m writing this on a MacBook Air.

The list goes on.

I’m not saying I don’t care about these things. Not at all, I do. But I care more about other things.

Still, I do what I can.

I mostly try to eat locally sourced foodstuffs. Not exclusively, but mostly. When shopping, I do look at where things are made, and that is a factor in my decision-making process.

I prefer to buy free range eggs, as that means the chickens are living more like chickens are supposed to live.

I vote.

There are worse companies I could work for, where I would get paid more.

I do what I can, within the bounds of convenience that I am prepared to accept.

I apply the same rationale to my use of LLM-based AIs. Particularly, in my case, ChatGPT, to which I have a paid subscription.

As a writer, I have made a commitment to myself. Again, I could come up with some overly convoluted and precise wording which people would still find little holes to nitpick in, but it comes down to this:

I will not use ChatGPT to cheap out on my writing. Every word I publish will be written by me. I will not cheat my readers by claiming to have written something that I did not write.

In practical terms, this means that I do not copy (including both re-typing and copying and pasting) anything generated by an LLM into work that I intend to publish. I do not use ChatGPT to generate ‘ideas’ for me.

So, how do I use it?

Search engine and personal assistant. Occasional visualisation assistance.

Search engine is self-explanatory, I hope. Still, I don’t use it exclusively – I default to more standard search engines, falling back to ChatGPT when I want something a bit smarter, a bit less polluted with corporate interests. Using an LLM as a simple search engine is like hauling your groceries home in a jet – environmentally wasteful.

Personal Assistant may require a bit more explanation.

I brain dump into ChatGPT, and get it to generate summaries that I then refer to. Sometimes that brain dump comes in a text form, from a free writing session, other times I use the voice mode.

So, this morning, as part of the free writing I do every morning before I knuckle down to the real writing (I’ll write about this free writing practice, and how I integrate it into my writing, some other time), I was coming up with ideas for a planetary setting. I was freewriting, brainstorming, just letting words and thoughts and concepts spill out my fingers and onto the screen. No thoughts about order, or coherence, nothing even related to a structure. I might have an idea, let that bounce off another idea, and come back to the original idea 4 or 5 sentences later.

That’s just how free writing works.

At the end, I felt like I had a fairly good set of somewhat cohesive ideas, but there’s no way I was going to remember them.

So, I gave ChatGPT the following instruction:

Here’s the results of a brainstorming session about Sagarra.  Please add this information into the summary.

(Sagarra is the name of the planet in question. ChatGPT already knows this.)

Then I copied and pasted the results of my free writing into it, and asked it to generate a summary.

Here’s what I got back.

Sagarra Overview

Planetary Details: Sagarra is a water world, a moon orbiting a ringed gas giant, resulting in extreme tidal effects and significant water movement across the planet’s surface.

Tidal Mechanics: The intense tides occasionally expose patches of regolith on the ocean floor, making valuable resources briefly accessible. In the shallower regions, solid land is uncovered more frequently, providing easier access to resources.

Resource and Risk:

Valuable Resource: The exposed regolith contains a rare, mollusk-like organism, prized for its use as a flavoring or coloring (similar to saffron in its rarity and demand). This mollusk is difficult to harvest and highly lucrative.

Corporate vs. Independent Harvesting:

• Corporations utilize massive dredges for efficient, large-scale harvesting.

• Independent harvesters, called “deepers,” manually gather resources with shovels, sieves, and buckets, often risking their lives. A good cluster of mollusks can provide a year’s worth of income in under an hour, making it extremely desirable.

Risk Factors: The uncovered land is not smooth but broken into patches of sand interspersed with rock clusters and tidal pools. These pools are inhabited by hostile, trapped creatures with teeth, tentacles, and claws that become aggressive when disturbed.

Underwater Environment: Beneath the surface, Sagarra’s oceans are dominated by a perilous ecosystem of aggressive predator species. Long-standing humanitarian rules, part of the “Covenant” that governs residents of all planets, prevent the wholesale extermination of these native species, even if they pose risks to humans. This makes any underwater exploration near-suicidal.

Societal Structure:

Shallows and Depths: Instead of a north-south divide, the planet is divided into “shallows” and “deeps.”

• The shallows, where solid land is uncovered more frequently by the tides, are dominated by corporate interests and large-scale operations.

• The deeper areas, less frequently uncovered by tides, are riskier and home to the independent deepers. They contend with more hazardous conditions and fewer resources, heightening competition among them for what’s uncovered.

I then asked it to regenerate the same information in a Markdown format, so I could copy and paste it into Obsidian (which is where all world-building for this project lives).
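For what it’s worth, this paste-and-summarise loop can also be scripted. Here’s a minimal sketch, assuming the OpenAI Python SDK (the `openai` package), an `OPENAI_API_KEY` in the environment, and hypothetical file names; the prompt wording is my own illustration, not anything the ChatGPT interface mandates:

```python
# Sketch: fold a freewriting session into an existing Markdown summary.
# Assumptions: `openai` SDK installed, OPENAI_API_KEY set, file names hypothetical.

def build_summary_messages(existing_summary: str, freewrite: str) -> list[dict]:
    """Assemble the chat messages: prior summary as context, new brain dump to merge."""
    return [
        {"role": "system",
         "content": ("You are a note-keeping assistant. Merge the new brainstorm "
                     "into the existing summary. Reorganise and condense, but add "
                     "no ideas of your own. Output Markdown.")},
        {"role": "user",
         "content": f"Existing summary:\n{existing_summary}\n\n"
                    f"Here's the results of a brainstorming session. "
                    f"Please add this information into the summary:\n{freewrite}"},
    ]

def summarise(existing_summary: str, freewrite: str, model: str = "gpt-4o") -> str:
    """Send the merge request to the API and return the updated Markdown summary."""
    from openai import OpenAI  # imported here so the helper above has no dependencies
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=build_summary_messages(existing_summary, freewrite),
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    from pathlib import Path
    # Hypothetical files: the running summary, and today's brain dump.
    summary = summarise(Path("sagarra-summary.md").read_text(),
                        Path("freewrite-today.txt").read_text())
    Path("sagarra-summary.md").write_text(summary)  # paste-ready for Obsidian
```

The same caveat from above applies to any scripted version: the output still needs checking against the original brainstorm, because the model will occasionally drop or re-word things.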

Here’s what’s important to me about this.

Firstly, the information is organised into some sort of somewhat cogent order for later reference.

Secondly, there is no idea in there that is not mine.

Thirdly, I don’t wholly trust it. So, my next step was to go back through my brainstorming and add in a few details that were important to me – stuff it had missed while generating the summary, stuff that wasn’t fully formed in what I fed it, mentioned almost in passing. That’s fine. It’s not psychic. At the same time, I also make sure that it’s just my ideas in there – nothing added. So far, it’s been pretty good about that. There have been a couple of things re-worded in ways that changed their meaning such that they were outside my original intent, but as it was relying on dictated, rambling, grammatically incorrect input, that’s not too much of a surprise.

I also get ChatGPT to track my ‘to have done’ list on certain projects, so I can ask what I need to do next, and get back a list of things that I’ve previously told it I need to do. I’ve been known to say/type “Remind me to…” and it gets added to that list.

Sometimes (rarely) I use it to source or generate images for my own personal use. More often than not, for comedy purposes, but occasionally to help me visualise something – a scene, a viewpoint on a scene, that sort of thing. I could spend 4+ hours with my mediocre (if I’m being charitable) sketching skills, or I could spend a couple of minutes writing a good text description as a prompt.

These images, other than those used for some shared online amusement, are never seen by anyone but me. They will not grace the covers or interior of anything I publish.

Is it as good as a human assistant (with a wide range of artistic abilities)? Not at all.

It is, however, a hell of a lot cheaper, and it’s available 24/7. Wherever I am.

There are things I do not trust this technology to do. I do not trust it to edit my writing – I do that MORE than well enough (also, far more than I need to, but that’s a topic for another day) for my purposes right now, and a human editor will understand nuance and purpose far better than a machine-based technology.

There are tools such as Grammarly that use LLM-based “AI” models for grammar checking, but as these models are trained on a large corpus of sourced material and are, viewed from at least some angles, averaging mechanisms, they are problematic. I don’t want my grammar to be the average of everything the LLM has seen; I want it to be correct for the situation at hand.

I wouldn’t trust it to write anything, let alone something that has my name on it that people are paying me for.

The list of what I wouldn’t want from it is massive. But so is the list of areas I’d love to see it expand into.

Ultimately, technologies like this are becoming, and will continue to become, more and more commonplace. Apple is in the process of integrating “Apple Intelligence” (partly ChatGPT-backed, I believe) into its products. I plan to use some of these features in my day-to-day life (with the same rules applying as those above, regarding my use of other LLM technologies).

My position is that if I do not learn to use these tools well, if I make the choice to eschew their use on ethical grounds, I will be putting myself at a material disadvantage compared to those who DO learn to make good, ethical use of them.

I have no interest in being left behind, in crippling my future for the sake of ethical principles.

I’m Ozzy. I’m a writer.

I believe that I’m using AI right.