ChatGPT is (as of December 7th, 2022) getting much the same starry-eyed reception as MidJourney.ai got a few months ago. This text generation tool creates “artificially intelligent” text content, in much the same way that MidJourney created AI “art” – by mashing together existing content in a way that seems immediately impressive.
But is this original? Is it creative? Does it even make sense? Will ChatGPT and MidJourney put writers and artists out of work?
To test this, let’s go for something that is both “creative”, and also not too challenging. Let’s ask ChatGPT to create a story for Dungeons and Dragons – no constraints, no cultural nuance, just a generic fantasy setting.
Let’s look at what it comes back with, and then unpack that.
A ChatGPT Example
Prompt: Generate a mission objective for a dungeons and dragons quest
Output: The party must venture into the depths of an ancient dungeon and recover the lost relic of the ancient mage, which is said to grant its bearer great power.
Prompt: What does the relic do?
Output: The relic is said to grant its bearer immense magical power, allowing them to cast powerful spells and summon powerful creatures to do their bidding. It is also said to grant its bearer protection from the arcane forces of the dungeon.
Prompt: What is the mage’s motivation?
Output: The mage’s motivation was to protect the relic from falling into the wrong hands. He believed that the relic’s power was too great and could be used for evil, so he sealed it away in the depths of the dungeon and cast powerful spells to protect it.
Prompt: What is the moral dilemma for the player characters?
Output: The moral dilemma for the player characters is whether or not they should use the relic’s power for their own ends. The relic is said to be powerful enough to grant its bearer great power, but it could also be used for evil. The players must decide if they should use the relic for their own gain or if they should honor the mage’s wishes and return it to its rightful owners.
—
This is a very simple outline. And superficially, it looks pretty good. But pause for just a moment and it all starts to fall down.
For one thing, ChatGPT has only given us one character – the mage – and he’s dead. When asked to give the mage a motivation, ChatGPT misses the point and contradicts itself. First, it doesn’t give him a motivation, it specifies his objective. And second, the objective is inconsistent. Does the mage want the powerful magic item returned to its rightful owner, or did he want it kept safe to protect the world? ChatGPT has given us both options – and neither makes sense.
The hook the plot hangs off is an artefact too powerful for mortals to wield – which would be a totally predictable, boring idea in itself, but here it makes no sense because, actually, the artefact isn’t super-powerful. It is just a magic thing that does the same magic things that any generic powerful magic thing does in DnD (“cast powerful spells and summon powerful creatures to do their bidding” – yeah, exactly what Dungeons and Dragons player characters, and their allies and enemies, all do all the time).
Meanwhile, when we ask for a moral dilemma – something that will give the story emotional weight and give meaning to the protagonists’ actions – all that ChatGPT suggests is “keep the power or give the power away” which is amazingly facile.
Any competent writer or GM could come up with something way better.
Off the top of my head…
A Human Alternative
The “power” of the amulet is to purify water across the lands. The PCs are sent to recover it because the kingdom’s wells are turning brackish, and this is spreading sickness in a kingdom already torn by discontent against its tyrannical ruler. The tyrant king wants the amulet so he can purify the waters and save the kingdom. But do the PCs return it to him (saving the kingdom, yes, but also cementing his brutal rule) or hand it over to a rival who can use it as leverage (“I have the power to save the land, where your king has failed – follow me, we will overthrow the tyrant, and I will end this blight!”) to start a civil war to oust the king (and how do the PCs know that the rival will be less tyrannical)? The mage who originally hid the amulet in the ruins (remember him?) did so deliberately, because he knew that using the amulet would indeed purify the waters, but would also render those who had drunk the waters susceptible to some deep evil power (cue sequel….)
—
I just typed that as I thought of it, and it is way better than what the bot came up with. With some polish it could actually be pretty good. ChatGPT is nowhere near that.
And this is where the situation with ChatGPT is easier to understand than with MidJourney – but the two are actually the same.
These both take existing imaginative works, and swiftly repackage them in highly derivative ways. They create things which are, presumably, legally “original”, but which are simply uncritical mashups of existing content. As such they can only ever riff on, perhaps diminish, but certainly never increase our societies’ cultural capital.
And in the example above, by taking “Dungeons and Dragons” as our reference point, we are setting the bar low. This should be easy for a bot to work with because we are not asking it to take into account huge amounts of existing, detailed material. If we had asked it to set a story in a detailed fictional universe (say, a Star Fleet / Star Trek setting, or even a specific DnD world) or a real historical setting (say, 10th century Scandinavia) it would have had no chance of doing justice to the depth of such a setting. And notably, if we had wanted to venture off a well-trod path (Star Fleet and Vikings both have plenty of material for the bots to feed from) and tell a story of a place that is not so well represented in contemporary media (say, medieval Mali, or the matrilineal kingdoms of early medieval South East Asia) the bots could make no inroads as they cannot undertake research.
And what ChatGPT does, MidJourney does in much the same way – only, more prettily. Ask MidJourney to give you a picture of a Viking and it will quite often give you a character with a horned helmet – an absurd idea which no historian has believed since the 1970s. But at least the stupid fantasy Viking has some basis in popular culture. Ask it to create art for a “warrior of the Chenla kingdom” and it just makes stuff up – specifically it creates characters wearing plate mail which, given that Chenla warriors fought naked or near-naked and had no technology for plate armour, is spectacularly wide of the mark. MidJourney clearly generates garbage – but oh my, such alluringly beautiful garbage, and oh, this is all so easy!
If ChatGPT and MidJourney Create Garbage, Who Cares?
This sounds like I’ve just demolished what ChatGPT wrote, and ridiculed what MidJourney creates. I’ve just shown ChatGPT spewing contradictory rubbish which is inferior to the first off-the-top-of-the-head thoughts of a human being, and poked at the absurdity of MidJourney’s output.
In practice there are good use-cases for MidJourney art. (The image that accompanies this post was created using MidJourney as part of the process.) And I can imagine possible uses for text generation. There are even cases where MidJourney assets can be used as-is. But the limitations are severe – being derivative, superficial, failing to understand nuance and meaning, perpetuating a focus on a narrow range of human experience… blah, blah, blah….
But this raises the question… does it matter? Does anyone care that what the “AI” above wrote is derivative, superficial and nonsensical? Does anyone care if MidJourney tends to perpetuate lies and absurdities?
Taking the ChatGPT story above, as written, it would probably satisfy a lot of D&D players. It gives them a hook to delve into a ruin, roll some dice, get the thrill of victory, and have a laugh with their friends. Many a fourteen-year-old D&D player gaming after school would be very happy with the bot’s plot.
And before we get too smug – as sophisticated adults, patronisingly looking down on undiscerning mere teenagers, or mocking table-top RPGs as inherently dumb – the level of quality delivered by ChatGPT would probably satisfy a lot of adults, too, and in media beyond TTRPGs.
Look at video games. Consider, for example, the highly regarded Skyrim, with its huge sales figures and enthusiastic fans. Its plots are pretty much as simple as the ChatGPT story.
This is true of the Skyrim sidequests. E.g. “Oh, please recover my grandfather’s sword….” (Random villains get killed on the way to the sword.) “Oh, thank you. As a reward, here is more money than I would plausibly have, and no, there is no plot significance or backstory to go along with this quest.”
This is also true of the more major questlines. E.g. “Evil necromancers have stolen three powerful books.” (Necromancers decide to die in a hopeless defence of the books, despite having no apparent motivation to do so and being clearly outmatched by the hero. Because, necromancers.) “Thank you, here is a reward. No, the books don’t have any interest or backstory, but please do go and find the arch-mage now to continue with this quest line…”
A bot could have written that.
Now, Skyrim is not a bad game. It is a great game. And people love it. But ChatGPT could, it seems, have written its quests.
One might also wonder if ChatGPT is pretty close to writing pulp fantasy novels, or the screenplays for some of the biggest budget shows on Netflix and Amazon – if not the actual scripts for the production crew to run with, at least the plot summaries.
But How Creative Is Creative?
By contrast, there is no way that ChatGPT could have written a novel like Project Hail Mary, or a TV show like Altered Carbon. It could not have written (as a computer game example) the plots for Divinity Original Sin 2, nor (as a TTRPG) the Coriolis rule book.
If we are now looking at a lot of MidJourney “concept art”, or reading plot summaries from ChatGPT, and thinking “this seems as good as a human could do, at a fraction of the price” then maybe this isn’t because the AI is brilliant. It is because for a long time we have asked human “creatives” to be dumb.
A huge amount of concept art is derivative, unoriginal work which recycles and repackages ideas that are already commonplace. Many video games take no real care over their plots, and TV production companies have shown that they can make a lot of money with literally no narrative originality (reality TV shows being the most extreme examples – no need to craft a narrative at all, there!) And so we really can’t be surprised if “AI” can replace human “creatives”.
In a lot of media, we simply do not expect creatives to be creative. We expect them to generate assets which simplistically present the audience with whatever it already expects. And this kind of work is going to be diminished, replaced or changed by so-called AI.
Our Future With AI
So where is this going to take us?
Here is where I would like us to go: I want us to rise to the challenge. I want human creatives to step up and do work which, instead of being lazily derivative, takes fresh inspiration to create engaging, surprising, wonderful stories and experiences. We can do better than the AI. Some writers, artists, studios and publishers have already shown this. Follow their lead.
But the way I want the world to be, and the way it actually is, rarely mesh. And in reality I suspect that in a very large number of projects there will be managers thinking “well, we made money from derivative visuals and unimaginative plots before – if these tools let us do the same at a lower cost then that’s a win.”
In a lot of companies “same quality, lower costs” is an easier sell than “overhaul our processes and people to increase quality”.
Kevin Hassall
