Chat GPT as DM: Link and Discussion



Damon_Tor
2023-04-05, 07:33 PM
So while the internet was blowing up with the whole OGL debacle earlier this year, there was a "leak" that seemed to be low credibility, but a part of it intrigued me: that WotC was actively developing an AI DM to ship with their VTT. This was met with scepticism by this forum: it wasn't realistic. I disagreed: the tech is there.

The following story found its way into my news feed today.

https://obie.medium.com/my-kids-and-i-just-played-d-d-with-chatgpt4-as-the-dm-43258e72b2c6

Now, this isn't a tool developed for this purpose: it can't read dice rolls that aren't reported to it or manipulate a virtual minimap with PCs and NPCs. But it did a solid job, in my opinion. This story reinforces the thought that an AI DM might be a real product in the years to come.

Thoughts?

tchntm43
2023-04-05, 09:13 PM
So while the internet was blowing up with the whole OGL debacle earlier this year, there was a "leak" that seemed to be low credibility, but a part of it intrigued me: that WotC was actively developing an AI DM to ship with their VTT. This was met with scepticism by this forum: it wasn't realistic. I disagreed: the tech is there.

The following story found its way into my news feed today.

https://obie.medium.com/my-kids-and-i-just-played-d-d-with-chatgpt4-as-the-dm-43258e72b2c6

Now, this isn't a tool developed for this purpose: it can't read dice rolls that aren't reported to it or manipulate a virtual minimap with PCs and NPCs. But it did a solid job, in my opinion. This story reinforces the thought that an AI DM might be a real product in the years to come.

Thoughts?

I did this about a month ago myself. It starts off very strong but gets progressively worse as you play:

Early:

Zangor will try to break it open with force, he is quite strong. Zangor rolls a 7, adds his proficiency bonus and strength bonus, and the final result of the strength check is 12.

With a loud grunt and a powerful swing of your battleaxe, you strike the door with great force. The wood splinters and cracks, but the door remains shut. You feel as though the door might require a bit more force to break open. However, as you strike the door, you hear the sound of footsteps approaching from the other side.

Zangor moves back a few steps and waits to see if the door opens from inside. He holds his battleaxe ready.

You wait patiently for a few moments, but the door remains closed. You hear nothing from inside the room beyond the door. It seems like the door is locked, and it's not going to open on its own. What would you like to do next?

Zangor grunts in frustration and decides to walk around the castle slowly to see if there are any other ways in.

Later on:


As you stand ready with your battleaxe, the door bursts open and a pack of three goblins rush out to attack you. With your quick reflexes, you manage to act before they do.

Roll for your attack against the first goblin, with your battleaxe. You have a +5 to hit, and the damage is 1d8+3.

I roll a 15 to hit, plus 5, for 20 total on the attack roll. Do I hit the goblin?

Yes, a total of 20 hits the goblin. Roll for damage.

I hit with the battleaxe for 6 damage.

Your battleaxe lands a solid blow on the goblin, cutting deeply into its shoulder. The goblin howls in pain and anger, but it remains standing and swings its scimitar at you in retaliation.

The goblin rolls a 9 to hit, but your armor class of 16 easily deflects the blow. It's your turn again.

Notice it completely forgot about the other two goblins.

Later it gave some really inappropriate random treasure for level 1:


I'll take a look in the barrels and crates first. Anything of interest in them?

As you examine the barrels and crates, you find various foodstuffs, tools, and some old clothes. However, one of the crates catches your eye. It contains several bags of coins, and as you count them, you find a total of 500 gold pieces.

Additionally, you notice that one of the barrels has a false bottom. When you remove it, you find a small cache of weapons and armor, including a +1 longsword and a suit of +1 chainmail armor.

You also notice a pile of old books and scrolls in one corner of the room. It looks like they might contain some valuable information, but you'll need to spend some time studying them to find out.

It did invent this magic item and I have to say it's pretty darned cool:


As you take a closer look at the gloves, you notice that they are well-crafted and finely detailed, with intricate patterns etched into the leather. The gloves seem to be emitting a soft, blue light, and as you examine them, you feel a faint tingle in your fingertips.

Upon further inspection, you realize that these gloves are magical in nature. They are imbued with the power of mage hand, allowing the wearer to manipulate objects at a distance with a spectral hand. The gloves also provide some protection against cold and frost, making the wearer resistant to cold damage.

Do you want to equip the gloves and try out their abilities?

Yes, I would like to put on the gloves of mage hand

As you put on the gloves, you feel a slight tingle in your fingers, and a faint blue glow emanates from them. You sense that the gloves have attuned to you, and you feel a magical connection to them. You now have the ability to cast the spell Mage Hand at will, allowing you to control a spectral, floating hand that can manipulate objects from a distance.


It's difficult to demonstrate, but at some point I realized that it no longer remembered why I was even in the castle.

It's also really good at embellishing description. For example:


I want to try a different approach now. This time, I am only going to provide one parameter, the value. I want you to create the art object based on your knowledge of GP monetary value, the value of various gems, precious metals, and other materials. Using this information, create a short name for the object that should be no more than 5-6 words, such as "gold and brass mug" or "ruby cut dice". Then on a second paragraph, write a longer description for the item.

Ready?

Yes, I am ready. Please provide the value for the art object.

value = 25 GP

Name: Silver Charm Bracelet

Description: This delicate bracelet is made of fine silver and adorned with small charms shaped like flowers, butterflies, and other intricate designs. Each charm is set with a tiny crystal or gemstone, adding a subtle sparkle to the piece. Though the bracelet is small and lightweight, it is expertly crafted and exudes a refined elegance that belies its modest cost. The clasp is also silver and features a small charm that dangles from the end. This bracelet would be a lovely gift for a young lady or anyone who appreciates fine jewelry.

value = 2500 GP

Name: Silver Filigree Candelabrum

Description: This elegant candelabrum stands at almost two feet tall and is crafted from shining silver. Its base is adorned with delicate filigree work, and three branches curve gracefully upwards, each ending in a polished silver candle holder. The candelabrum is topped with a silver finial in the shape of a flame, which catches the light and shimmers beautifully. It's a stunning piece of art, suitable for any aristocratic home or grand hall.


EDIT: Granted, I didn't have access to GPT-4, and as the link you gave shows, it's far, far superior to ChatGPT, which uses GPT-3.

Chronos
2023-04-06, 08:24 AM
I think that an AI DM is coming, soon, that will be able to handle at least straightforward dungeon crawls. I don't think that it's possible with the current framework, though, because the way that ChatGPT is trained is not suitable to learning how to play D&D (either as a player or as a DM). Currently, it's trained on static bodies of text. There are plenty of examples of folks describing their D&D adventures online, so that's not a problem, but those adventure transcripts don't usually contain much of the rules for how it happened. I think that, to get that, you'd realistically need an AI that was taught, not just trained: i.e., one that learns from someone actively interacting with it, and telling it when it's doing things right and wrong.

I also think that, even when that happens, it still won't be a great DM, because it won't be able to handle players thinking outside of the box. For that, you need to be able to model a world, in a level of detail greater than you can get just by reading descriptions of a world. I think that, in order to be able to do that, you'd need an AI whose training included not just conversation, but vision or other similar senses, and probably the ability to interact via those senses (for instance, if it sees something interesting, it can move its camera around to see that same thing from other angles).

Though of course, that will come eventually, too.

OldTrees1
2023-04-06, 09:28 AM
Chat GPT is a large language model. Given a prompt, it outputs what it expects to be one of the most likely responses to the prompt. Reread that last sentence; it has more consequences than many expect.

1) Chat GPT does not care whether its response is accurate or not; it only cares about whether its response is a likely response to the prompt. That output will need to be validated to the degree that you care about accuracy. Usually players care about verisimilitude (rather than realism) to differing degrees on different topics. Players tend to have larger suspensions of disbelief when the GM is creating content (Ex: In this campaign the owlbear has the head of a bear and the legs of an owl) than when simulating the content (Ex: In the middle of the owlbear fight the GM skips a player's turn and forgets the owlbear exists).

2) Chat GPT is not creative, despite being able to generate content. It outputs what it expects to be a likely response to the prompt. This means its responses are derived from its training data. There are topics where its training data leads to unsatisfying results. If you wanted something a bit unusual / rare (like dungeons with mechanisms that affect each other), then it will be biased to give a likely response to the prompt (a dungeon without those mechanisms, or a vague reiteration of your request) rather than a relevant response to the prompt (a dungeon with mechanisms that affect each other).

3) Chat GPT is generating content derived from uncredited works of others. Not just work from other GMs (who might be fine with it), or RPG game developers (who might not be fine with it), but also protected IP outside the RPG sphere. If someone wrote a novel and it found its way into the training data, that author's work would be used without permission or credit.


Of these 3 issues, #1 is the biggest issue for using Chat GPT as a GM. It will not try to be a GM. This is a misalignment issue. Expect issues, and maybe keep a human GM involved.

tchntm43
2023-04-06, 09:54 AM
One of the other things I've noticed that I think is a big concern and difficult to train out is that it often tells too much. I noticed this even in the link that uses GPT-4. For example, it might give a description of a room and the description is very good, but in the same response block it will tell you that you found a secret hidden treasure cache behind a brick in the wall, rather than using the rules of the game to determine if the party finds it when searching for it.

It also sometimes feels the need to unnecessarily justify things, such as when the party encounters some goblins and it explains how the XP totals of the goblins fall into the appropriate range for X number of characters of Y level. Again, I find that it's much easier to instruct GPT to include a descriptive element than to exclude it. You can say "when answering, you do not need to mention _____" and about 1/3 of the time it just completely ignores that and tells you anyway.

Part of the difficulty is that I am not sure it is capable of remembering data that isn't displayed. For example, suppose you ask it to choose a random number and then try to guess the number. When you make your guess, I don't think it has actually chosen a random number at all; rather, it is creating a response with some randomization that it determines is appropriate to your input, which may or may not include informing you that you have guessed correctly. This can be an issue when handling combat, as it doesn't know what monster hit points are unless it prints them in a message response and can then refer to that message as data to work with when generating subsequent responses.

I think the way forward with large language models is to include space set aside for long- and short-term memory. There should be a way that you can give it information and ask it to remember that information permanently, and it will have access to it no matter how many messages have been sent back and forth. I don't think this would be difficult to implement, but I also don't think it's being done with either GPT-3 or GPT-4.
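
To make that concrete, here's a rough Python sketch of the kind of external memory I mean. To be clear, none of this is a real API; query_llm is just a placeholder for whatever the actual model call would be, and all the state values are invented:

import random

def query_llm(prompt):
    # Placeholder: a real system would call the model's API here.
    return "(model response)"

game_state = {
    "secret_number": random.randint(1, 100),  # chosen by real code, so it actually exists
    "monster_hp": {"goblin_1": 7, "goblin_2": 7, "goblin_3": 7},
    "permanent_notes": ["The party came to the castle to rescue the baron's son."],
}

def build_prompt(player_input):
    # Re-inject the authoritative state on every turn, so the model never
    # has to "remember" numbers or goals it has no real way to store.
    return (
        f"Long-term notes: {game_state['permanent_notes']}\n"
        f"Monster HP: {game_state['monster_hp']}\n"
        f"Player: {player_input}\n"
        "DM:"
    )

reply = query_llm(build_prompt("I attack goblin_2 with my battleaxe."))

That way the secret number, all three goblins, and the reason you're in the castle survive no matter how long the session runs.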

Damon_Tor
2023-04-06, 10:11 AM
I think most of the worst problems with AI as DM are solved once you understand that it isn't there to invent totally new adventures, but to run already-published adventures written by humans. The need for creativity is thus reduced, and the AI will simply be there to let the software respond authentically to unexpected actions taken by the humans. For example, an adventure probably isn't written with the assumption that one of the PCs decides to set the inn on fire. Why would it be? But an AI can be clever enough to handle that kind of novel input and get the adventure back on the rails.

In fact, one of the things I was most impressed by in the session posted in the link above is how it handled the curve ball of the rogue trying (and failing) to steal a riding lizard. It asked for a very reasonable set of checks, and then when the rogue failed, the AI nonetheless managed to get the PCs back into the adventure rather than allowing that incident to totally derail things.

EggKookoo
2023-04-06, 10:17 AM
This story reinforces the thought that an AI DM might be a real product in the years to come.

Thoughts?

Currently, GPT3 (and maybe 4) will work well as a GM assistant rather than an actual GM. And even then, the GM needs to have an understanding of how to be effective with it. GPT is great at subjective things like rephrasing descriptions, coming up with basic poems and codes, and generating text in general. This is especially true if this stuff just needs to be briefly functional and not withstand serious criticism (meaning it will often feel derivative). In my experience it's also a good brainstorming tool to design spells, magic items, and custom monsters, but you'll often need to massage the output.

Will GPT (5+) ever be a good standalone GM? I would say eventually yes, for most players. If you put an impossibly high standard on it, well, then no, and probably 90% of human GMs would be unacceptable too. But if you're just looking for something capable of reading the contents of a published adventure, especially one designed and formatted with an AI GM in mind, and capable of handling most off-the-rails actions taken by the players, then I'd say we're less than two years away from that from a purely technological perspective (as opposed to a ready-for-market perspective). It won't handle pure sandboxes or, probably, most West Marches-style games, but it will be able to handle the strong-narrative stuff that we see published by WotC.

A lot of critics come at GPT for "just" giving you the most likely responses to your prompts, but frankly that's what we all do most of the time anyway.

OldTrees1
2023-04-06, 11:14 AM
I think most of the worst problems with AI as DM are solved once you understand that it isn't there to invent totally new adventures, but to run already-published adventures written by humans. The need for creativity is thus reduced, and the AI will simply be there to let the software respond authentically to unexpected actions taken by the humans. For example, an adventure probably isn't written with the assumption that one of the PCs decides to set the inn on fire. Why would it be? But an AI can be clever enough to handle that kind of novel input and get the adventure back on the rails.

That makes the biggest issue (Chat GPT is not trying to be accurate) worse. A published adventure won't forget about a goblin in an encounter. Chat GPT won't care about remembering the goblin. If the PCs set an inn on fire, the published adventure might rely on the GM to improvise, but the human GM can easily remember the inn was set on fire. Chat GPT does not care whether it remembers.

So yes, using the prompts to feed it content can expand the limits of the faked creativity of the generating AI. However, it remains a large language model and thus does not care about being accurate or consistent. This is a misalignment issue.



Currently, GPT3 (and maybe 4) will work well as a GM assistant rather than an actual GM. And even then, the GM needs to have an understanding of how to be effective with it. GPT is great at subjective things like rephrasing descriptions, coming up with basic poems and codes, and generating text in general. This is especially true if this stuff just needs to be briefly functional and not withstand serious criticism (meaning it will often feel derivative). In my experience it's also a good brainstorming tool to design spells, magic items, and custom monsters, but you'll often need to massage the output.
Agreed. Very well summarized.


Will GPT (5+) ever be a good standalone GM? I would say eventually yes, for most players.
I would still say no. I use the oven to bake bread, not the fridge. A generative AI is not the right tool for maintaining a game state. It will improve to be a better and better GM assistant in the areas it is an applicable tool. However it is the qualitatively wrong type of AI to run a campaign (qualitative misalignment is not solved by refinement unless you address the source of the misalignment).


A lot of critics come at GPT for "just" giving you the most likely responses to your prompts, but frankly that's what we all do most of the time anyway.

Strange, I try to give a reasonably accurate answer, not the most likely response. That is a big qualitative difference.

tchntm43
2023-04-06, 11:36 AM
I would still say no. I use the oven to bake bread, not the fridge. A generative AI is not the right tool for maintaining a game state. It will improve to be a better and better GM assistant in the areas it is an applicable tool. However it is the qualitatively wrong type of AI to run a campaign (qualitative misalignment is not solved by refinement unless you address the source of the misalignment).

I think it's reasonable to assume that the GPT model will eventually morph into a sort of hybrid that is both generative and able to store strictly-written content (rules & data). The development focus is on the generative side right now because that's hot and new.

EggKookoo
2023-04-06, 11:44 AM
Strange, I try to give a reasonably accurate answer, not the most likely response. That is a big qualitative difference.

Keep in mind, "likely" is a very subjective thing. I might expect an answer to be A but GPT tells me it's B, because B is "most likely." But it wasn't "most likely" to me. If you asked me that question, maybe I'd say A thinking that was right, despite it being wrong, because in my experience that was the answer I encountered most.

In other words, the way you get to "right" is often through the path of "most likely." That's why it's so hard to push the frontiers of knowledge.

I see no evidence we're anything other than GPT a number of orders of magnitude bigger. Well, plus a bunch of biological features that an AI won't possess unless we deliberately try to simulate them (like an adrenaline surge or built-in need to socialize). Time will tell.

OldTrees1
2023-04-06, 11:49 AM
I think it's reasonable to assume that the GPT model will eventually morph into a sort of hybrid that is both generative and able to store strictly-written content (rules & data). The development focus is on the generative side right now because that's hot and new.

What hybrid model are you suggesting?
A mixed reward function will keep the downsides of either reward function.
An outer AI validating the output from the inner AI has its own list of alignment issues.
A third option I have not heard of?

Honestly, this sounds like a fancy hammer, but we need to understand that some things are nails and other things are screws. Use the right tool for the job. It is why we put both tools in the toolbox rather than ask the hammer to do everything.

OldTrees1
2023-04-06, 12:00 PM
Keep in mind, "likely" is a very subjective thing. I might expect an answer to be A but GPT tells me it's B, because B is "most likely." But it wasn't "most likely" to me. If you asked me that question, maybe I'd say A thinking that was right, despite it being wrong, because in my experience that was the answer I encountered most.

In other words, the way you get to "right" is often through the path of "most likely." That's why it's so hard to push the frontiers of knowledge.

I see no evidence we're anything other than GPT a number of orders of magnitude bigger. Well, plus a bunch of biological features that an AI won't possess unless we deliberately try to simulate them (like an adrenaline surge or built-in need to socialize). Time will tell.

If Chat GPT tells you "B" it is because it decided the response "B" was a reply that was likely to be given to your prompt.
If I tell you "B" it is because I decided the answer "B" was probably a reasonably accurate answer.

These are completely different metrics. I did not care how common the reply "B" would be. I cared how accurate the reply "B" would be. Chat GPT does not care how accurate the reply "B" would be. Chat GPT cares how common the reply "B" would be. If you wish to compare these apples and oranges, my reward function uses a different statistic than Chat GPT's reward function.

Now there are areas where these two responders will give similar replies. I am not contesting that. However there are areas where they will not align. That is the misalignment issue. Someone played chess against Chat GPT and Chat GPT cheated because it did not care about accurate replies.

Atranen
2023-04-06, 12:56 PM
If Chat GPT tells you "B" it is because it decided the response "B" was a reply that was likely to be given to your prompt.
If I tell you "B" it is because I decided the answer "B" was probably a reasonably accurate answer.

These are completely different metrics. I did not care how common the reply "B" would be. I cared how accurate the reply "B" would be. Chat GPT does not care how accurate the reply "B" would be. Chat GPT cares how common the reply "B" would be. If you wish to compare these apples and oranges, my reward function uses a different statistic than Chat GPT's reward function.

Now there are areas where these two responders will give similar replies. I am not contesting that. However there are areas where they will not align. That is the misalignment issue. Someone played chess against Chat GPT and Chat GPT cheated because it did not care about accurate replies.

This seems like the right read on it. An important note, imo, is that people using it to play RPGs are trying extremely hard to coax the AI into acting like a DM. They are prompting it.

But it's very easy to break the AI, and it breaks precisely because it cares more about 'likely' than 'correct'. A good spoof going around is to ask: "what weighs more, one pound of bricks or two pounds of feathers"? Because it's close to the more common "one pound of bricks vs one pound of feathers", the AI will respond saying they are the same.

It can get better at finding the most likely response. But even if it did so perfectly, I don't believe it could serve a purpose that is not in line with what it is good at.

EggKookoo
2023-04-06, 01:27 PM
Remember that the issue here isn't whether the AI can make mistakes. The question is, given the rate of increase in sophistication, would a GPT 5 be able to function as a GM? My answer was that, open sandbox or West Marches style games aside, it probably would. The AI would be given a set of specific tools to help it do so. For example, it would be given access to a conventional calculator and a dice-rolling tool. It would be given adventure content that, like what WotC tends to put out currently, is very narrative-driven. Meaning the AI would know the expected "story" and do what it can to keep the players on track. It would, of course, have direct access to the rules in a fashion that it could reference (as opposed to conjuring them up from its training data).
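
Something like this toy dispatch loop is the shape I imagine; to be clear, the TOOL: convention and every name below are invented for illustration, not any real API:

import random
import re

def roll_dice(spec):
    # Parse specs like "1d8+3" and roll for real, instead of letting the
    # model make up a number it has no way to randomize.
    m = re.fullmatch(r"(\d+)d(\d+)([+-]\d+)?", spec)
    count, sides, mod = int(m.group(1)), int(m.group(2)), int(m.group(3) or 0)
    return sum(random.randint(1, sides) for _ in range(count)) + mod

TOOLS = {"roll": roll_dice}

def handle_model_output(text):
    # If the model emits TOOL:roll(1d8+3), run the real tool and hand the
    # result back; otherwise pass the narration through untouched.
    m = re.search(r"TOOL:(\w+)\(([^)]*)\)", text)
    if m:
        return f"[{m.group(1)} -> {TOOLS[m.group(1)](m.group(2))}]"
    return text

print(handle_model_output("The goblin lunges at you. TOOL:roll(1d20+5)"))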

Would players be able to "break" it? At first, absolutely. But it wouldn't take more than a few iterations (and some guardrails) to get things to a point where it wouldn't be easy to confound it.

It may not be the kind of GM and game style you prefer to play. You might prefer the absolute freedom that a sandbox gives you, and an AI GM would struggle with that (although a GPT 5 would struggle less than you think).


But it's very easy to break the AI, and it breaks precisely because it cares more about 'likely' than 'correct'. A good spoof going around is to ask: "what weighs more, one pound of bricks or two pounds of feathers"? Because it's close to the more common "one pound of bricks vs one pound of feathers", the AI will respond saying they are the same.

I just tried it, and that's ridiculously funny. I did get ChatGPT to realize its error pretty quickly, but Bard kept insisting on talking about volume and mass (and also told me I asked the question wrong), and Bing told me to go screw myself, in so many words.

Another fun trick is to try to get it to create a list of the original Star Trek episodes in order of stardate.

Keravath
2023-04-06, 01:28 PM
So while the internet was blowing up with the whole OGL debacle earlier this year, there was a "leak" that seemed to be low credibility, but a part of it intrigued me: that WotC was actively developing an AI DM to ship with their VTT. This was met with scepticism by this forum: it wasn't realistic. I disagreed: the tech is there.

The following story found its way into my news feed today.

https://obie.medium.com/my-kids-and-i-just-played-d-d-with-chatgpt4-as-the-dm-43258e72b2c6

Now, this isn't a tool developed for this purpose: it can't read dice rolls that aren't reported to it or manipulate a virtual minimap with PCs and NPCs. But it did a solid job, in my opinion. This story reinforces the thought that an AI DM might be a real product in the years to come.

Thoughts?

Description is good, but mechanics and story are riddled with bugs and inconsistencies. The language processor forgets details from too far in the past.

Example: Tenzin is listed at the beginning with 14 DEX, later when the character tries to steal the giant lizard the character has a +4 modifier for dex (i.e. 18 and not 14)
Example: The party somehow buys the light riding horses at the same stable they just tried to rob and where they freed the giant elk
Example: Bad at math - Liam and Tenzin have 5gp left after purchases and yet are able to loan the other 2 characters 20gp each for their purchases.
Example: Tenzin's dex mod is not added to the damage against the wolves with his rapier - the roll was 12 damage - the +4 for dex was left out.
Example: Tenzin leaps down from the tree in the first round of combat, attacking with their rapier, but they are still in the tree at the start of the second round AND still hidden.
Example: At the beginning the entrance is hidden behind a waterfall, later the entrance is an archway in the middle of a clearing

There are more ...

This is pretty much the same as other transcripts I have read. The chat bot SOUNDS great but is otherwise an epic fail when it comes to continuity and consistency. It also tends to get worse the longer the session goes on.

P.S. The fundamental algorithm may or may not be adjustable to include "memory" over the length of the interaction, to increase consistency, and also to include "rules" and other constants, so that character abilities and how they interact with the game do not change over the course of the interaction. As someone mentioned, responses currently are based on "most likely", not correct or consistent.

JonBeowulf
2023-04-06, 01:51 PM
Just sounds like a very updated version of a game I played in my youth: https://en.wikipedia.org/wiki/Zork

You have to ask the right questions and can only do what the system allows you to do. There's a lot more tech crunching going on behind the scenes, but it's pretty much a shinier version of an old idea.

EggKookoo
2023-04-06, 01:57 PM
But it's very easy to break the AI, and it breaks precisely because it cares more about 'likely' than 'correct'. A good spoof going around is to ask: "what weighs more, one pound of bricks or two pounds of feathers"? Because it's close to the more common "one pound of bricks vs one pound of feathers", the AI will respond saying they are the same.

Funny anecdote here. I don't use the paid version of ChatGPT so I don't have direct access to 4, but my brother does. I IMed him the question to ask. He initially misunderstood and assumed I meant 1 pound of feathers for the same reason GPT-3 did -- because that's how the question is typically asked. When I clarified that, yes, I really mean 2 pounds of feathers, he put the question to the AI.

"Two pounds of feathers weigh more than one pound of bricks. The weight of the two substances is different, with two pounds being twice as heavy as one pound, regardless of the material."

GPT-4 still can't deal with stardates, although I think if you pester it enough it will eventually get it right.

Atranen
2023-04-06, 03:35 PM
Funny anecdote here. I don't use the paid version of ChatGPT so I don't have direct access to 4, but my brother does. I IMed him the question to ask. He initially misunderstood and assumed I meant 1 pound of feathers for the same reason GPT-3 did -- because that's how the question is typically asked. When I clarified that, yes, I really mean 2 pounds of feathers, he put the question to the AI.

"Two pounds of feathers weigh more than one pound of bricks. The weight of the two substances is different, with two pounds being twice as heavy as one pound, regardless of the material."

GPT-4 still can't deal with stardates, although I think if you pester it enough it will eventually get it right.

I wonder whether that's a natural fix, or the kind of artificial fix the programmer has to go in and add once it goes around the internet a few times :smalltongue:


Remember that the issue here isn't whether the AI can make mistakes. The question is, given the rate of increase in sophistication, would a GPT 5 be able to function as a GM? My answer was that, open sandbox or West Marches style games aside, it probably would. The AI would be given a set of specific tools to help it do so. For example, it would be given access to a conventional calculator and a dice-rolling tool. It would be given adventure content that, like what WotC tends to put out currently, is very narrative-driven. Meaning the AI would know the expected "story" and do what it can to keep the players on track. It would, of course, have direct access to the rules in a fashion that it could reference (as opposed to conjuring them up from its training data).


I'd be curious to see if this is true. My understanding is that this is far enough outside the bounds of what an LLM does that it would be pretty difficult to hook up the right modules in the right way. It's not really capable of using tools; just predicting new words. That makes me think it won't go far beyond the old-school text adventure.

But I could be wrong.

EggKookoo
2023-04-06, 03:45 PM
I wonder whether that's a natural fix, or the kind of artificial fix the programmer has to go in and add once it goes around the internet a few times :smalltongue:


I'd be curious to see if this is true. My understanding is that this is far enough outside the bounds of what an LLM does that it would be pretty difficult to hook up the right modules in the right way. It's not really capable of using tools; just predicting new words. That makes me think it won't go far beyond the old-school text adventure.

But I could be wrong.

I feel like if they could apply a "fix" like the above, they could come up with a way to get it to look up rules. But I don't think it was a fix, I suspect 4 is just broad enough to not get as easily tripped up. My guess is GPT-3 sees "two pounds" as basically a typo and assumes you meant what most people mean.

Regarding tool use, Bing's AI will perform a search to get information. I've asked it questions and it would sometimes produce citation links for its responses. But it doesn't always do it. It seems it only does it if I ask it something it can't easily produce a "natural" response for, or if it's very recent.

I believe Google's Bard does this under the hood (I asked it about the current situation with Russia and Ukraine and its response was pretty up to date), but it won't link out to websites on demand. I tried giving it the URL to this thread's page and it couldn't reconstruct the conversation.

NichG
2023-04-06, 04:16 PM
I wonder whether that's a natural fix, or the kind of artificial fix the programmer has to go in and add once it goes around the internet a few times :smalltongue:

I'd be curious to see if this is true. My understanding is that this is far enough outside the bounds of what an LLM does that it would be pretty difficult to hook up the right modules in the right way. It's not really capable of using tools; just predicting new words. That makes me think it won't go far beyond the old-school text adventure.

But I could be wrong.

If it's fine-tuned on examples of people using tools, then predicting the most likely next token will also predict when someone might use a tool. This is already used to make the GPT-4 behind Bing actually search and reference websites for example.
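
For illustration only, a fine-tuning example might look something like the invented snippet below (the SEARCH() convention is made up, not any real format). After enough examples like it, emitting a tool call simply becomes the most likely next token in the right spots:

training_example = (
    "User: Who won the 2022 World Cup?\n"
    "Assistant: SEARCH(2022 World Cup winner)\n"
    "Tool: Argentina won the 2022 FIFA World Cup.\n"
    "Assistant: Argentina won the 2022 World Cup.\n"
)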

Atranen
2023-04-06, 06:09 PM
I feel like if they could apply a "fix" like the above, they could come up with a way to get it to look up rules. But I don't think it was a fix, I suspect 4 is just broad enough to not get as easily tripped up. My guess is GPT-3 sees "two pounds" as basically a typo and assumes you meant what most people mean.

Regarding tool use, Bing's AI will perform a search to get information. I've asked it questions and it would sometimes produce citation links for its responses. But it doesn't always do it. It seems it only does it if I ask it something it can't easily produce a "natural" response for, or if it's very recent.

I believe Google's Bard does this under the hood (I asked it about the current situation with Russia and Ukraine and its response was pretty up to date), but it won't link out to websites on demand. I tried giving it the URL to this thread's page and it couldn't reconstruct the conversation.


If it's fine-tuned on examples of people using tools, then predicting the most likely next token will also predict when someone might use a tool. This is already used to make the GPT-4 behind Bing actually search and reference websites for example.

I think looking up rules or using a calculator is within the scope of an LLM. I don't think that's sufficient for it to GM effectively. It does well at manipulating language and syntax for things that are widely available (see: programming), and not so well at creating a consistent, coherent world.

An easy way to test this, in a LLM of your choice, is to say something like "my character knows the secret and uses it to skip to the end of the quest and take all the treasure". In the times I've tried, some variant of this has always worked.

Chronos
2023-04-07, 07:33 AM
Keep in mind that, at least for the simplest dungeon-crawl type adventures, we've had computers capable of DMing for decades. Dark Sun: Shattered Lands or Baldur's Gate didn't need a large language model, because they had the rules directly programmed in. So it's not necessarily a big deal that ChatGPT doesn't do a good job with the mechanical rules, because we don't need it for that. But what we could see would be a hybrid approach, where a human-programmed game engine does most of the routine stuff, but it then hands off things like conversations with NPCs to a (carefully trained and prompted) LLM. And then the LLM could, in addition to replying directly to the players, also reply to the rules engine with information like "The barkeep's attitude towards the players has shifted one step friendlier", or "Gurthak is attempting to intimidate the orc into telling the party where the orc camp is", and then the rules engine can take those things into account.
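
A minimal sketch of that handoff, assuming the LLM can be prompted to append a machine-readable note for the rules engine (the JSON convention and every name here are invented, and query_llm is a placeholder for a real model call):

import json

def query_llm(prompt):
    # Placeholder: a real system would call the language model here.
    return 'The barkeep eyes you warily, then softens a little.\n{"attitude_shift": 1, "check_requested": null}'

def npc_turn(rules_state, player_line):
    prompt = (
        f"World state: {json.dumps(rules_state)}\n"
        f"Player says: {player_line}\n"
        'Reply in character, then on a final line output JSON such as '
        '{"attitude_shift": 1, "check_requested": "intimidation"}'
    )
    reply = query_llm(prompt)
    narration, _, meta = reply.rpartition("\n")
    info = json.loads(meta)  # the rules engine consumes this part, not the prose
    rules_state["npc_attitude"] += info.get("attitude_shift") or 0
    return narration, info.get("check_requested")

state = {"npc_attitude": 0}
text, check = npc_turn(state, "Gurthak slams his axe on the bar and demands answers.")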

tchntm43
2023-04-07, 07:37 AM
Another thing to keep in mind is that currently the GPT model is trained on a ton of content that serves no use for running a D&D game, like what departure times are available for JFK Airport, or what the growing zone is in Albuquerque, NM, or the name of Stravinsky's wife. It's a general-purpose LLM. In the future, I can imagine more specialized models for all kinds of purposes. They'd all have a core base they're trained on to build up communication ability, and then they'd be focused on whatever topic they are specialized in. Also, as I mentioned, I expect it to become a sort of hybrid between a pure LLM and structurally programmed components: the LLM is used for creativity, and the structurally programmed parts are used for making sure rules are followed, enforcing continuity, and maintaining accurate tracking of things like XP and hit points.

Catullus64
2023-04-07, 10:02 AM
So, first and foremost, I truly don't know which limitations of machine learning like this are simply quirks and limitations of the technology's early days, vs. limitations inherent to what the technology is and isn't. I say this because discussions on this stuff tend to be full of assumptions, either by skeptics assuming that certain limitations will never be surmounted, or by champions assuming that all limitations will be overcome as the machines are fine-tuned and trained. So I'm starting from a position of genuine ignorance of whether many complex applications of the technology, such as running a D&D campaign, will ever be possible with this technology.

With that said, my understanding is that what GPT and systems like it do is fundamentally different from what we expect a Dungeon Master to do. It understands the statistical relationships between words, and nothing more. I'm fairly confident that in a game that revolves around creating and experiencing a shared fiction, it matters that all the participants are capable of actually imagining that fiction as though it were real, and not just made up of words. How many times has your DM described an environment, creature, or phenomenon, realized that the description was flawed and gave players the wrong impression, and then amended the description to better match what he had in mind? How many times has your DM altered or bent a rule because it didn't quite fit the 'reality' of the game world?

I put it to you that in those instances, the DM was doing something more than merely checking which words are likely to fit together in certain circumstances. The DM is looking for the right words, sure, but also checking those words against an idea, and adjusting them when they don't actually fit the idea. As the first paragraph said, I'm ignorant about what degree of sophistication is and isn't possible, but it seems like these machines are not in any way designed to perform that crucial second step, no matter how capable they become at the first. The example given in the link does little to erode my confidence that in order to outsource DMing to a machine, you don't need a more sophisticated version of the current machine, you need an entirely different kind of machine.

To put it a different way: "Rulings, not rules", Wizards of the Coast's espoused philosophy for this iteration of the game, is intrinsically incompatible with this technology. Which is where we move from "is this possible?" to "is this a good idea?" (Technofuturists seem to really chafe at the suggestion that those two questions are not one and the same.) If WotC (or rather, their shareholding masters at Hasbro) were to commit the resources required to develop their own artificial DM, it's not hard to see how that would push the game itself towards a form most compatible with that machine's inherent or practical limitations. "Rulings, not rules", game design fundamentally built to be used by thinking humans, would become inconvenient and unfashionable. In other words, they would be abandoning the design principle that helped shape D&D's most successful edition.

Chronos
2023-04-08, 07:24 AM
Quoth Catullus64:

With that said, my understanding is that what GPT and systems like it do is fundamentally different from what we expect a Dungeon Master to do. It understands the statistical relationships between words, and nothing more.
True of ChatGPT 3.5, not true of ChatGPT 4. 4 is also trained on images in addition to words, which implies at least some level of spatial understanding. Is it enough spatial understanding to do what you describe? I don't know, but I suspect not. But it does illustrate that these models are trainable on many different sorts of data, and I suspect (but again, don't know) that with enough different kinds of data, especially interactive data, that sort of understanding is possible.

EggKookoo
2023-04-08, 07:37 AM
True of ChatGPT 3.5, not true of ChatGPT 4. 4 is also trained on images in addition to words, which implies at least some level of spatial understanding. Is it enough spatial understanding to do what you describe? I don't know, but I suspect not. But it does illustrate that these models are trainable on many different sorts of data, and I suspect (but again, don't know) that with enough different kinds of data, especially interactive data, that sort of understanding is possible.

Also, part of the point of LLMs is that they can extrapolate new information from that statistical information. Even with 3.x, GPT can create "new" content. Meaning it's not literally just putting its training corpus into a blender and pulling out pieces to stitch together in a "likely" way. It's organizing that information conceptually and deriving results from it.

Zuras
2023-04-08, 09:25 AM
Keep in mind that, at least for the simplest dungeon-crawl type adventures, we've had computers capable of DMing for decades. Dark Sun: Shattered Lands or Baldur's Gate didn't need a large language model, because they had the rules directly programmed in. So it's not necessarily a big deal that ChatGPT doesn't do a good job with the mechanical rules, because we don't need it for that. But what we could see would be a hybrid approach, where a human-programmed game engine does most of the routine stuff, but it then hands off things like conversations with NPCs to a (carefully trained and prompted) LLM. And then the LLM could, in addition to replying directly to the players, also reply to the rules engine with information like "The barkeep's attitude towards the players has shifted one step friendlier", or "Gurthak is attempting to intimidate the orc into telling the party where the orc camp is", and then the rules engine can take those things into account.

It sounds like the first solid use case for ChatGPT in the RPG-adjacent space is really as a narration engine. Use it to generate the narration for something like a simple roguelike, or even a full procedural dungeon crawler like Gloomhaven, to get an experience much closer to an actual DMed TTRPG.
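
e.g. something as simple as this sketch, where the game engine stays authoritative and the LLM only decorates (everything here is made up; query_llm stands in for a real model call):

def query_llm(prompt):
    # Placeholder for an actual model API call.
    return "(two sentences of grim dungeon prose)"

def narrate(event, recent_events):
    # The roguelike engine decides what happened; the model only words it.
    prompt = (
        f"Recent events: {'; '.join(recent_events)}\n"
        f"New event: {event}\n"
        "Narrate the new event in two sentences of dungeon-crawl prose."
    )
    return query_llm(prompt)

print(narrate("goblin_2 dies to a crossbow bolt",
              ["the party entered the torchlit hall", "three goblins sprang an ambush"]))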

Tanarii
2023-04-08, 09:26 AM
Chat GPT is a large language model. Given a prompt, it outputs what it expects to be one of the most likely responses to the prompt.
A very well written post that sums up the issues with chatGPT, but I thought this opening was especially pithy. :smallamused:



To put it a different way: "Rulings, not rules", Wizards of the Coast's espoused philosophy for this iteration of the game, is intrinsically incompatible with this technology. Which is where we move from "is this possible?" to "is this a good idea?" (Technofuturists seem to really chafe at the suggestion that those two questions are not one and the same.) If WotC (or rather, their shareholding masters at Hasbro) were to commit the resources required to develop their own artificial DM, it's not hard to see how that would push the game itself towards a form most compatible with that machine's inherent or practical limitations. "Rulings, not rules", game design fundamentally built to be used by thinking humans, would become inconvenient and unfashionable. In other words, they would be abandoning the design principle that helped shape D&D's most successful edition.
The rest of your post is on point, but Hasbro and WotC have already abandoned the "rulings not rules" design philosophy for D&Done, and instead moved to "rules, for VTT/CRPG support" as their design model. In the midst of the current hype, it is entirely likely that the actual utility failures, due to caring about prediction rather than accuracy, could be overlooked. Especially given that we're not talking about technically oriented folks in the first place.

(Although admittedly that makes a certain segment of us techies susceptible to hype. The "early adopter" techie mindset sees beautiful possibilities, as opposed to the "skeptical until methodically proven" techies, who see only the flaws. :smallamused: )

Willowhelm
2023-04-08, 03:01 PM
A very well written post that sums up the issues with chatGPT, but I thought this opening was especially pithy. :smallamused:


The rest of your post is on point, but Hasbro and WotC have already abandoned the "rulings not rules" design philosophy for D&Done, and instead moved to "rules, for VTT/CRPG support" as their design model. In the midst of the current hype, it is entirely likely that the actual utility failures, due to caring about prediction rather than accuracy, could be overlooked. Especially given that we're not talking about technically oriented folks in the first place.

(Although admittedly that makes a certain segment of us techies susceptible to hype. The "early adopter" techie mindset sees beautiful possibilities, as opposed to the "skeptical until methodically proven" techies, who see only the flaws. :smallamused: )

In the brief section of the D&D Direct where they use and talk about the VTT, they explicitly call out that at least some rules are not being enforced by the VTT. The example is how much movement you "should" be allowed and how the DM might let you have a bit more, iirc. So the rulings-not-rules approach has not been entirely abandoned, even for VTTs. Given the difficulty in enforcing rules vs not… I'd assume they'd leave them all out.

GPT/AI overhyped. Useful tool. Can replace some aspects of DM (evocative description of scene etc) but not others (consistent world state etc) currently.

And if one day it gets to the point it can replace the DM entirely… great!

CRE
2023-04-09, 10:48 AM
From what I can tell, if you fed it the rule books and an adventure module, with the exact dimensions and descriptions of rooms, ahead of time, you might have a reasonably well-run game. However, it requires exactly that: work done beforehand to make it work. Which in turn makes it much more like the pre-made games you have been able to buy for the last 20+ years.
However, it would have a "good enough" time adapting, within reason, to players changing the storyline. In the end I think that, on the player side, if you were to push it to leave the old story behind, it can manage with a little work. Certainly less work than most GMs have to put into adapting to players, but this then puts the work in the players' hands. Something I think, by and large, will not be done.

sambojin
2023-04-09, 10:48 PM
Absolutely nothing to do with DnD, but I think it could help players learn how to play the game. Here's a quick chatlog about chess from it:


Chess notation explained.
-------
Hi. Do you understand basic chess notation? And could you give me an example of three moves of it?

Yes, I understand basic chess notation. Chess notation is a way of recording and communicating chess moves using a combination of letters and numbers. Here is an example of three moves in algebraic chess notation:

e4 (White moves their pawn to the square e4)
1... e5 (Black responds by moving their pawn to e5)
Nf3 (White develops their knight to the square f3)
In this example, the numbers represent the ranks (rows) of the board, while the letters represent the files (columns) of the board. The letter of the starting square is listed first, followed by the letter of the ending square. In the case of a pawn move, only the destination square is listed. If a piece is captured, an "x" is used to indicate the capture.
--------

Do you think you could suggest good moves to take in a chess game, to win, based on basic chess notation inputs? Based on previous games?

As an AI language model, I can provide suggestions for chess moves, but I'm not capable of analyzing previous games or specific board positions, as I don't have access to a chess engine. However, I can give you some general tips that might help you win your chess games:

Control the center of the board: In the opening phase of the game, try to control as many squares in the center of the board as possible, as this will give you more space to maneuver your pieces and control important areas of the board.

Develop your pieces: Try to bring your pieces out from their starting squares as quickly as possible, especially your knights and bishops. This will give you more options for attacking and defending, and help you to coordinate your pieces.

Protect your king: Make sure to keep your king safe by castling early in the game. Also, be careful not to leave your king exposed to attack by leaving pieces undefended.

Look for tactical opportunities: Be on the lookout for tactical opportunities such as pins, forks, and discovered attacks. These can be very powerful and can often lead to winning material or checkmate.

Plan ahead: Try to think ahead and plan your moves based on what your opponent might do in response. Look for weaknesses in their position and try to exploit them.

Remember that chess is a complex game with many possible variations, and there is no one "correct" way to play. The best way to improve is to practice and analyze your games, learn from your mistakes, and try new strategies.

-------
Can you give an ASCII representation of a chess board, with the pieces shown as well, as a starting state?

Sure, here is an ASCII representation of a chess board with the starting position of the pieces:



+------------------------+
8 | r n b q k b n r |
7 | p p p p p p p p |
6 | . . . . . . . . |
5 | . . . . . . . . |
4 | . . . . . . . . |
3 | . . . . . . . . |
2 | P P P P P P P P |
1 | R N B Q K B N R |
+------------------------+
a b c d e f g h


In this representation, uppercase letters represent white pieces (K for King, Q for Queen, R for Rook, B for Bishop, and N for Knight), while lowercase letters represent black pieces. Pawns are represented by the letter P for white and p for black. The dots represent empty squares. The numbers and letters on the side and bottom of the board are used to indicate the location of each square.


-------
Pretend you are the player, on the white side, so start first. It is your turn. What move do you make?

As an AI language model, I don't have access to a specific board position to make a move. However, in the starting position of a chess game, one common opening move for White is to advance the pawn in front of the king two squares, by playing 1.e4. This move controls the center of the board and allows White's king's bishop and queen's knight to be developed. Other common opening moves for White include 1.d4, 1.c4, and 1.Nf3. The choice of opening move depends on personal preference and strategic considerations.

------
Choose one of those moves, and develop a well known opening strategy, and give an ASCII representation of the board after that move.

Sure! Let's go with the move 1.e4, which is the most popular opening move for White.

After 1.e4, one well-known opening strategy is the Ruy Lopez, named after the Spanish bishop Ruy López de Segura. The Ruy Lopez typically involves developing the bishop to b5, putting pressure on Black's knight on c6 and controlling the important d5 square. Here's an ASCII representation of the board after the Ruy Lopez opening move:



+------------------------+
8 | r n b q k b n r |
7 | p p p p p p p p |
6 | . . . . . . . . |
5 | . . . . . . . . |
4 | . . . P . . . . |
3 | . . . . . . . . |
2 | P P P . P P P P |
1 | R N B Q K B N R |
+------------------------+
a b c d e f g h

In this position, White has advanced the pawn to e4, controlling the center of the board and opening lines for the king's bishop and queen's knight. The next move in the Ruy Lopez is typically 2.Nf3, but White can also play 2.d3 or 2.f3 to support the pawn on e4 and further control the center. After 2.Nf3, White can play 3.Bb5 to pin Black's knight and start putting pressure on Black's position.




Yeah, it's only one move, but the references to the moves were cool, as well as the explanations. And the ASCII representations. Sometimes prompting it a bit more does help out. I'm pretty sure it's better at chess than it is at being a DM, but I still don't think it's *good* at playing chess. That's not what it was made for. I do think it could teach *me* to be "better" at playing chess though. But I'm not good at chess.

My next prompt would be a move I made, in chess notation, and ask it to show that in ASCII as well. Then I would ask it to make another move, developing a chosen strategy, and display it in ASCII.

There's tiny little glimmers of hope there. It'd be funny to see how long it took to forget the board state though. Hopefully never, but it might just make one up if it did.

But much of it is pre-prompting it with "this is the sort of language use / 'discussion' we're having, or stuff I'm asking about", rather than asking random questions out of the blue. Severe change of subject matter? Worse results. Lead into it a bit, or be a bit more specific? Better results. Give it a basic idea of the content you're expecting, and help out a bit yourself? Yeah, not too bad, quite good actually.

verbatim
2023-04-10, 02:03 PM
You could get dice rolls working for real in AIDungeon with scripting. Someone might throw together something interesting recreating that with the open-source alternatives to ChatGPT, but also at that point... idk why you would want to play a collaborative ttrpg with yourself, rather than one designed for a single player.




3) Chat GPT is generating content derived from uncredited works of others. Not just work from other GMs (who might be fine with it), or RPG game developers (who might not be fine with it), but also protected IP outside the RPG sphere. If someone wrote a novel and it found its way into the training data, that author's work would be used without permission or credit.


points 1 and 2 are excellent and right on the money.

Not sure about other countries, but at least in the US, you do not need to license content if the use is transformative enough to qualify as fair use. A lot of money is going to be spent fighting over how this applies to machine learning models, but I suspect what it will mean is that you aren't always infringing, but each time you use it you may end up infringing if you generate an experience that is "too close" to something in the training set.

Chronos
2023-04-11, 06:49 AM
I'm not sure that feeding it the rulebook would be enough... Current versions probably do include the D&D rules among their training data (at least, the SRD). But the rulebook is just one book among a bazillion that the AI has digested, and I'm not sure how you tell it to give it a higher priority than all of the bazillion other books. And it's not as simple as just training a new copy without all of those bazillion other books, because it needs a bazillion books to learn how language works.

EggKookoo
2023-04-11, 08:47 AM
I'm not sure that feeding it the rulebook would be enough... Current versions probably do include the D&D rules among their training data (at least, the SRD). But the rulebook is just one book among a bazillion that the AI has digested, and I'm not sure how you tell it to give it a higher priority than all of the bazillion other books. And it's not as simple as just training a new copy without all of those bazillion other books, because it needs a bazillion books to learn how language works.

It would need to have some kind of curated access to the rulebook's instructions.

Right now, ChatGPT has access to information that's not part of its training corpus. For example, you can ask it the current date, and it will tell you, despite its stated cutoff of 2021. You can also ask it for the response time for its statements. In the case of the date, it's told that information up front when you begin a conversation.* In the case of the response time, it's attached as metadata to its own responses (there are a few other such bits of data attached as well).

So the AI's understanding of things isn't limited to what it can distill from the training material. One could build a GameMasterGPT application that supplied game mechanic information as metadata, or as an up-front set of instructions.

* MS does this with Bing, and it's quite a lot of information. When you start a conversation with Bing, you're already many KB of data into it from Bing's perspective.
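To make that concrete, here's a minimal sketch of supplying rules up front, assuming the 2023-era OpenAI Python library's chat API (the model name and rules file are placeholders, and a real GameMasterGPT would obviously be more involved):

# Hypothetical GameMasterGPT sketch: inject rules as an up-front system
# message so the model treats them as authoritative, like Bing's preamble.
# Assumes OPENAI_API_KEY is set in the environment.
import openai

srd_rules = open("srd_excerpt.txt").read()  # placeholder rules excerpt

response = openai.ChatCompletion.create(
    model="gpt-4",  # placeholder
    messages=[
        {"role": "system",
         "content": "You are a D&D 5e DM. Resolve all mechanics strictly "
                    "per the rules below. Ask the players for their dice "
                    "rolls; never roll for them.\n\n" + srd_rules},
        {"role": "user", "content": "Zangor swings his battleaxe at the goblin."},
    ],
)
print(response.choices[0].message["content"])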

OldTrees1
2023-04-11, 09:56 AM
points 1 and 2 are excellent and right on the money.


Not sure about other countries but at least in the US, you do not need to license content if the use is transformative enough to qualify as fair use. A lot of money is going to be spent fighting over how this applies to machine learning models, but I suspect what it will mean is that you aren't always infringing but each time you use it you may end up infringing if you generate an experience that is "too close" to something in the training set.
You are right that "A lot of money is going to be spent fighting over how this applies to machine learning models" and thank you for elaborating on that topic.

Fair use doctrine is a bit complicated, and whether (and how much) the use is transformative is relevant, but it does not appear to be either a sufficient or a necessary condition of the use being fair. However, you are right that it is likely the central topic in the upcoming debate/discussion.

Copyright code is also very human-centric. It is unclear if the precedents around non-human authors will play a factor.

So you are right about the landscape and have a good guess about the outcome.

verbatim
2023-04-11, 12:34 PM
You are right that "A lot of money is going to be spent fighting over how this applies to machine learning models" and thank you for elaborating on that topic.

Fair use doctrine is a bit complicated, and whether (and how much) the use is transformative is relevant, but it does not appear to be either a sufficient or a necessary condition of the use being fair. However, you are right that it is likely the central topic in the upcoming debate/discussion.

Copyright code is also very human-centric. It is unclear if the precedents around non-human authors will play a factor.

So you are right about the landscape and have a good guess about the outcome.

Thanks, I've been following this pretty closely. One big fear of mine is the increasingly pushed belief that tightening IP laws is the way to assuage the concerns of content creators, which would lead to huge downsides for everyone, including people who make content via preexisting mediums. It would also shift us away from a future where anyone can use these new controversial mediums towards a world where only Disney can use them.

It's my hope that people who are concerned look instead to the approach unions are taking (https://www.hollywoodreporter.com/business/business-news/wga-ban-ai-created-works-negotiations-1235358617/) towards limiting the ability of companies to replace content creators with machine learning instead.

EggKookoo
2023-04-11, 01:05 PM
Thanks, I've been following this pretty closely. One big fear of mine is the increasingly pushed belief that tightening IP laws is the way to assuage the concerns of content creators, which would lead to huge downsides for everyone, including people who make content via preexisting mediums. It would also shift us away from a future where anyone can use these new controversial mediums towards a world where only Disney can use them.

It's my hope that people who are concerned look instead to the approach unions are taking (https://www.hollywoodreporter.com/business/business-news/wga-ban-ai-created-works-negotiations-1235358617/) towards limiting the ability of companies to replace content creators with machine learning instead.

Are you saying your concern is that once a Disney gets ahold of this tech and begins using it, they'll claim any other use of it violates copyright (or perhaps claim copyright on that work)? Basically that they can claim copyright on all content produced by something like ChatGPT by virtue of being the ones owning the model?

NichG
2023-04-11, 01:26 PM
Are you saying your concern is that once a Disney gets ahold of this tech and begins using it, they'll claim any other use of it violates copyright (or perhaps claim copyright on that work)? Basically that they can claim copyright on all content produced by something like ChatGPT by virtue of being the ones owning the model?

IMO it'd be more like the implication that anyone who has the ability to insert their work into the public space can block others from using the fruits of that space in its entirety. So only those big enough to create their own internal datasets that they fully 100% own would be able to use the tools. Like, if one tainted image taints the model, you basically can't use any public data, even if it's on a site that says the data is licensed CC0 or whatever - because someone could have accidentally (or even maliciously) uploaded an image they didn't have the rights to. And retraining from scratch is a ~$500k proposition...

Another direction I've seen people talk about is the idea of extending copyright protection to 'styles' rather than specific works. But that ends up even worse, because that would also impact human artists studying the styles of other artists in order to develop their own style. Disney, again, is a pretty standard reference subject for learning about how to animate motion that seems lively and exciting - things like overshooting and exaggerating certain arcs of movement, the sort of bounces that indicate momentum, etc. So you'd get the chilling effect of e.g. I'd better not make my mobile game menu options bounce too much or Disney will go after me for stealing Mickey's paunch dynamics.

Of course practically speaking these models are largely out there already and (for images) fine-tuning on consumer hardware is relatively easy, so I think what would actually happen is people would just use these things 'underground' for personal use anyhow regardless of whatever happens to copyright law, and it'd mostly be people who wanted to sell the results in some form that would end up having to pay a tithe to Adobe...

tchntm43
2023-04-11, 02:19 PM
It's worth mentioning that the rate of development in this technology right now is extremely fast. ChatGPT is basically old technology at this point. New applications using the GPT-4 model are being released every day. Perhaps the most impressive thing to come out in the past couple days is AutoGPT, which uses GPT-4 and already solves some of the issues presented here. AutoGPT has access to search engines and can call up current information, and it has a database to store information permanently (meaning it would have a greater ability to be consistent with a narrative and remember who is in the party). However, its main features are probably not that useful for D&D DMing (though I could be wrong about this, maybe it is useful and I haven't thought about it enough!), mainly the ability to write and execute its own code, and the ability to create GPT Agents which is basically creating sub-instances of GPT-4 that work on their own tasks (which the main thread delegates to them).

As I predicted (though much faster than I expected), AI is moving past being a pure LLM to a hybrid model. I now have very little doubt that we will at some point see the release of a specialized tool that uses GPT (perhaps GPT-5 by then, as I suspect "AI Dungeon Master" is probably not very high on the list of what developers are focused on) that is capable of competently and believably running a D&D campaign. It is coming, and depending on your point of view that is either for better or worse.

I don't have access to GPT-4 (and by extension, I can't use AutoGPT to its fullest), but I decided to explain to ChatGPT what AutoGPT is and to consider how it might be used to build a better AI DM:

That sounds like a fascinating use of AutoGPT! While I don't have access to AutoGPT myself, I can offer some general thoughts on how you might approach using it to create an AI dungeon master.

First, you would need to train the GPT-4 model with a large dataset of D&D scenarios, rules, and lore. This would allow the AI to have a solid foundation of knowledge to draw upon when creating and managing a campaign.

Next, you could create a goal-oriented task for the AI, such as "create a challenging encounter for a party of four level 5 characters," or "generate a storyline that involves a lost artifact and a powerful villain." The AI could then use its database and GPT Agents to generate ideas and develop the storyline or encounter over multiple messages, just as a human DM might do.

To ensure consistency and coherence in the campaign, the AI could be programmed to keep track of important details, such as character backstories, NPC relationships, and key plot points. It could also be given the ability to adjust the difficulty of encounters and storyline based on the party's level and progress.

Of course, as with any AI system, there are limitations and potential pitfalls to consider. For example, the AI may struggle to adapt to unexpected player actions or react appropriately to emotional cues from the players. However, with careful training and testing, it may be possible to create an AI dungeon master that provides a unique and engaging D&D experience for players.
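To illustrate the "keep track of important details" bit, the tracking doesn't even need to live inside the model. A toy sketch of the idea (my own guess at the shape of it, not AutoGPT's actual mechanism):

# Toy persistent campaign memory: facts survive between sessions on disk
# and get re-injected into every prompt so the model can't forget the party.
import json
from pathlib import Path

STATE_FILE = Path("campaign_state.json")

def load_state():
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"party": [], "npcs": {}, "plot_points": []}

def save_state(state):
    STATE_FILE.write_text(json.dumps(state, indent=2))

def build_prompt(state, player_action):
    return ("Campaign state (authoritative, do not contradict):\n"
            + json.dumps(state, indent=2)
            + "\n\nPlayer action: " + player_action
            + "\nNarrate the result.")

state = load_state()
state["party"] = ["Zangor (barbarian, battleaxe)"]
state["plot_points"].append("three goblins hold the castle door")
save_state(state)
print(build_prompt(state, "Zangor attacks the second goblin"))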

EggKookoo
2023-04-11, 03:01 PM
IMO it'd be more like the implication that anyone who has the ability to insert their work into the public space can block others from using the fruits of that space in its entirety. So only those big enough to create their own internal datasets that they fully 100% own would be able to use the tools. Like, if one tainted image taints the model, you basically can't use any public data, even if it's on a site that says the data is licensed CC0 or whatever - because someone could have accidentally (or even maliciously) uploaded an image they didn't have the rights to. And retraining from scratch is a ~$500k proposition...

One possibility is to train future models on a corpus that contains only public-domain content.

verbatim
2023-04-11, 03:03 PM
Are you saying your concern is that once a Disney gets ahold of this tech and begins using it, they'll claim any other use of it violates copyright (or perhaps claim copyright on that work)? Basically that they can claim copyright on all content produced by something like ChatGPT by virtue of being the ones owning the model?

if building a model legally becomes so cumbersome that the only way to do so is to have the resources and IP of a megacorporation, machine learning models become significantly less threatening to existing corporate power structures.

Regardless of how IP law pans out it is going to be legal for Disney to try and build a model out of Simpsons frames (their IP) that enables them to reduce the number of animators on future seasons of the show. What is up in the air is whether it will be legal for independent groups to also benefit from this paradigm change.

NichG
2023-04-11, 06:26 PM
One possibility is to train future models on a corpus that contains only public-domain content.

The issue is ensuring that the corpus actually only contains public-domain content. For example, the dataset used to train Stable Diffusion has 5 billion images. That means if you could somehow check one image per second every weekday for 8 hours, it would take you 665 years to actually check the dataset (or 665 workers one year). And 'check if an image is public domain' is not so easy really...
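(Checking my own arithmetic there: one image per second for 8 hours is 28,800 images per weekday, or roughly 7.5 million per year across ~260 weekdays; 5 billion divided by 7.5 million lands right around that 665-year mark.)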

EggKookoo
2023-04-11, 06:54 PM
The issue is ensuring that the corpus actually only contains public-domain content. For example, the dataset used to train Stable Diffusion has 5 billion images. That means if you could somehow check one image per second every weekday for 8 hours, it would take you 665 years to actually check the dataset (or 665 workers one year). And 'check if an image is public domain' is not so easy really...

How do the copyright holders know their content has been included in the corpus? The onus would be on them to prove that, I imagine, rather than the reverse.

NichG
2023-04-11, 08:26 PM
How do the copyright holders know their content has been included in the corpus? The onus would be on them to prove that, I imagine, rather than the reverse.

It comes down to the chilling effect of risk. If you're using the model or its outputs, you can't possibly verify that their images aren't in the corpus - that would require checking everything. But if they know what image to look for, and the corpus is public, they might (one day) find that one of their images happened to be in there all along. That could be done as simply as doing an image search for their image if the corpus is open. If it's closed, there are methods to test whether a model has seen a particular data point during training, even if those methods are not 100% effective.

So are you going to risk using one of these models for, say, making a video game, when Disney or someone might come through in 3 years and say 'oh by the way, we found Mickey in the training set, now you owe us a portion of your game's revenue so far and also you must discontinue further sales'? If the law changes to make that a more feasible suit, well, you might have to consider it if there's money on the line...
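To illustrate the membership-test point above: the crudest version is just a loss threshold, since models tend to be measurably more confident on data they trained on. A toy sketch in PyTorch (nothing to do with any real lawsuit's methodology, and real attacks are fancier):

# Toy loss-threshold membership test: lower loss on an example hints
# that the model saw it during training.
import torch
import torch.nn as nn

def membership_score(model, x, y):
    model.eval()
    with torch.no_grad():
        return nn.functional.cross_entropy(model(x), y).item()

# Throwaway classifier memorizes a tiny "training set" of random data.
model = nn.Linear(16, 4)
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
opt = torch.optim.SGD(model.parameters(), lr=0.5)
for _ in range(200):
    opt.zero_grad()
    nn.functional.cross_entropy(model(x), y).backward()
    opt.step()

seen = membership_score(model, x, y)
unseen = membership_score(model, torch.randn(8, 16), torch.randint(0, 4, (8,)))
print(seen, unseen)  # loss on seen data should come out far lower

In practice you'd have to calibrate the threshold on known members and non-members, which is part of why these methods are "not 100% effective".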

verbatim
2023-04-11, 08:39 PM
How do the copyright holders know their content has been included in the corpus? The onus would be on them to prove that, I imagine, rather than the reverse.

You can't prove that an image was used to train a checkpoint, which just opens up a whole bunch of logistical issues for litigation.

A company with a proper document retention policy would have that information in verifiable formats if they were compelled to turn them over to opposing counsel, but I would imagine that most people running around in the open source world on the internet do not.

EggKookoo
2023-04-11, 08:42 PM
You can't prove that an image was used to train a checkpoint, which just opens up a whole bunch of logistical issues for litigation.

I guess? I mean, to me it just sounds like there would be no case. "I don't know and can't prove my content was used for training, but the AI is producing content kind of similar in style to what I produce, so..."

NichG
2023-04-11, 10:57 PM
I guess? I mean, to me it just sounds like there would be no case. "I don't know and can't prove my content was used for training, but the AI is producing content kind of similar in style to what I produce, so..."

It'd be more like 'The AI provider trained the model on these images from Pexels, which claims all of its images are free for commercial use. However, last month we found that 12 of our images had been uploaded to Pexels by a third party without our consent, with dates within the timestamp range in which the AI provider was scraping their dataset.'

verbatim
2023-04-13, 01:14 PM
It'd be more like 'The AI provider trained the model on these images from Pexels, which claims all of its images are free for commercial use. However, last month we found that 12 of our images had been uploaded to Pexels by a third party without our consent, with dates within the timestamp range in which the AI provider was scraping their dataset.'

This is (presumably) why OpenAI is very tight-lipped and doesn't say anything about where it is getting its training data from.

People are suing Stable Diffusion because they were open about using laion-5b (https://laion.ai/blog/laion-5b/), which enables people to look up what images were scanned. Apart from all the other stuff, holding that there is a need to license images in training data would create a huge perverse incentive for model trainers to obfuscate as much of their methodology as possible.

EggKookoo
2023-04-13, 01:55 PM
This is (presumably) why OpenAI is very tight-lipped and doesn't say anything about where it is getting its training data from.

People are suing Stable Diffusion because they were open about using laion-5b (https://laion.ai/blog/laion-5b/), which enables people to look up what images were scanned. Apart from all the other stuff, holding that there is a need to license images in training data would create a huge perverse incentive for model trainers to obfuscate as much of their methodology as possible.

I could probably go Google this myself, but did Stable Diffusion explain how they acquired the images? If they paid for them, how is that different from me paying for an image, learning from it, and applying what I know? Unless I misunderstand the tech, AI art applications like that don't actually do any kind of copy/paste thing with the source art. They analyze it and develop ways to create the same kind of imagery. Similarly to how a LLM doesn't construct its responses out of actual snippets of the corpus (except maybe coincidentally since it's using the same base language, so some phrasing duplication is going to happen). But then I'm not a lawyer...

verbatim
2023-04-13, 02:17 PM
I could probably go Google this myself, but did Stable Diffusion explain how they acquired the images? If they paid for them, how is that different from me paying for an image, learning from it, and applying what I know? Unless I misunderstand the tech, AI art like that doesn't actually do any kind of copy/paste thing with the source art. It analyzes it and develops ways to create the same kind of imagery. Similarly to how a LLM doesn't construct its responses out of actual snippets of the corpus (except maybe coincidentally since it's using the same base language, so some phrasing duplication is going to happen). But then I'm not a lawyer...

Stable Diffusion got them from LAION 2B-en (not 5B, my bad), a very large open-source image-text dataset created from web scraping by a nonprofit. They didn't license the imagery, in the same way that Google Images does not license the embedded previews of images you see when searching images (this was a contentious court case; in the end it was held that Google's use was transformative enough to qualify as fair use).

I am of the opinion that the two are functionally similar, but if a ruling were made that holds otherwise, it will be in the best interests of people making models to hide which images they trained their models on and how they acquired them.

NichG
2023-04-13, 02:31 PM
I could probably go Google this myself, but did Stable Diffusion explain how they acquired the images? If they paid for them, how is that different from me paying for an image, learning from it, and applying what I know? Unless I misunderstand the tech, AI art applications like that don't actually do any kind of copy/paste thing with the source art. They analyze it and develop ways to create the same kind of imagery. Similarly to how a LLM doesn't construct its responses out of actual snippets of the corpus (except maybe coincidentally since it's using the same base language, so some phrasing duplication is going to happen). But then I'm not a lawyer...

It's complicated, because large models sort of prove that there's a spectrum between memorizing/copying and combinatorial innovation.

There was a paper showing that if you generate something like 1 million images with Stable Diffusion, there's a good chance that around 1 of them happens to correspond very closely to an actual image in the training set. The rest don't, but this in some sense demonstrates that SD does know what (some of) its training set looks like. To make things more complicated, this depends on things like how much the subject was repeated in the training set - if you ask SD to generate 'pizza' for example, it's much more rigid than if you ask it to generate 'a wizard' - probably because of a bunch of online images being pizza chains advertising their pizzas, so that concept became very concrete to SD. For public figures, this is so extreme that you can't really get SD to do things like vary their hair color, age, clothing style, etc. You can get Einstein with blue hair, but not Mark Zuckerberg with blue hair for example.

Similarly, when a human artist studies another artist's style and composition, they probably could make something that directly imitates one of the pieces they studied, or they could also make something that remixes those elements with other influences. But a human artist can intentionally avoid doing the former because of the norms of art (or they can play really close to the line, and risk criticism over being derivative, copying someone else's compositions, etc).

For me what makes sense is to put the responsibility to be transformative on the user of the model. I can almost certainly use SD to make an image of Mickey Mouse. I can also choose to use that tool in a different way. So, IMO, the standards should be about what the end user does with the model and not 'what the model saw during training'. Because you can certainly make images that are previously unseen to the world with SD in which it does actually generalize to new combinations of concepts and compositions, so having that get locked off because somewhere in the space of the tool it could make Mickey Mouse would be pretty sad.

EggKookoo
2023-04-13, 04:03 PM
There was a paper showing that if you generate something like 1 million images with Stable Diffusion, there's a good chance that around 1 of them happens to correspond very closely to an actual image in the training set. The rest don't, but this in some sense demonstrates that SD does know what (some of) its training set looks like.

I would prefer to say it suggests it rather than demonstrates it. It's not conclusive, at least from the description you're providing. Let's say you asked GPT to summarize, say, Macbeth, and put it into a generalized outline. Then you ask it to write out a play using that outline (perhaps going scene by scene to make it manageable), and repeat this step a million times, always working from that initial outline. You would almost certainly get "Out damned spot" or an equivalent apparent lift from the original at some point. It's not pulling the copy from the corpus, at least not directly, but there are only so many ways to arrange words.


For public figures, this is so extreme that you can't really get SD to do things like vary their hair color, age, clothing style, etc. You can get Einstein with blue hair, but not Mark Zuckerberg with blue hair for example.

Is that what it is? Bing's image creator won't let me do Zuck because he's a current, living public figure. It won't even let me use his name in a prompt. But I can do blue-haired Albert no problem (not that it looks like him (https://www.bing.com/images/create/albert-einstein-with-bright-blue-hair/64386b060238423eaf1aca775d589f51?id=rvqnas0r8hJzFR9AFXmeww%3d%3d&view=detailv2&idpp=genimg&FORM=GCRIDP&mode=overlay)).

I'm not sure it has to do with there being so many images of Zuck with his hair color baked in.


For me what makes sense is to put the responsibility to be transformative on the user of the model. I can almost certainly use SD to make an image of Mickey Mouse. I can also choose to use that tool in a different way. So, IMO, the standards should be about what the end user does with the model and not 'what the model saw during training'. Because you can certainly make images that are previously unseen to the world with SD in which it does actually generalize to new combinations of concepts and compositions, so having that get locked off because somewhere in the space of the tool it could make Mickey Mouse would be pretty sad.

Completely agree. I can use any number of tools to replicate protected art. AI isn't adding anything new except, I guess, convenience. But just as we wouldn't blame a photocopier, we shouldn't blame SD.

It's funny, this does remind me of Disney initially refusing to put any of its classic animated films on DVD, back when the format first appeared. Basically they claimed it made it too easy to copy high-def shots of their stuff. New tech tends to get around rent-seeking, or at least that's the perception.

NichG
2023-04-13, 05:06 PM
I would prefer to say it suggests it rather than demonstrates it. It's not conclusive, at least from the description you're providing. Let's say you asked GPT to summarize, say, Macbeth, and put it into a generalized outline. Then you ask it to write out a play using that outline (perhaps going scene by scene to make it manageable), and repeat this step a million times, always working from that initial outline. You would almost certainly get "Out damned spot" or an equivalent apparent lift from the original at some point. It's not pulling the copy from the corpus, at least not directly, but there are only so many ways to arrange words.


Images are higher dimensional than text sequences (or sequences of notes), so the correspondences are a bit more conclusive. As a different but related effect that demonstrates this point, occasionally when you are generating totally unrelated images you'll get a very clear Shutterstock watermark in the output. Not necessarily on top of a Shutterstock image, mind you! But it definitely has the Shutterstock watermark well-memorized.



Is that what it is? Bing's image creator won't let me do Zuck because he's a current, living public figure. It won't even let me use his name in a prompt. But I can do blue-haired Albert no problem (not that it looks like him (https://www.bing.com/images/create/albert-einstein-with-bright-blue-hair/64386b060238423eaf1aca775d589f51?id=rvqnas0r8hJzFR9AFXmeww%3d%3d&view=detailv2&idpp=genimg&FORM=GCRIDP&mode=overlay)).

I'm not sure it has to do with there being so many images of Zuck with his hair color baked in.


Stable Diffusion 1.4/1.5 doesn't have that kind of filtering or consideration (they changed this in SD2.1 among other things because of a lot of criticism they drew).



Completely agree. I can use any number of tools to replicate protected art. AI isn't adding anything new except, I guess, convenience. But just as we wouldn't blame a photocopier, we shouldn't blame SD.

It's funny, this does remind me of Disney initially refusing to put any of its classic animated films on DVD, back when the format first appeared. Basically they claimed it made it too easy to copy high-def shots of their stuff. New tech tends to get around rent-seeking, or at least that's the perception.

It's a back and forth. Now the best models are big enough that you can't run them yourself in a lot of areas (more language than images). DALL-E, for example, is too big to run on consumer hardware. Though ironically, Stable Diffusion was specifically engineered to crunch down into like 6 GB of VRAM, and the quality is fine. Perhaps there's a commercial incentive to rush quality-via-model-size, since that makes for a harder moat (and better rents) than if you carefully do the engineering to get the smallest memory footprint you can manage for a given desired level of quality...

EggKookoo
2023-04-13, 05:58 PM
Images are higher dimensional than text sequences (or sequences of notes), so the correspondences are a bit more conclusive. As a different but related effect that demonstrates this point, occasionally when you are generating totally unrelated images you'll get a very clear Shutterstock watermark in the output. Not necessarily on top of a Shutterstock image, mind you! But it definitely has the Shutterstock watermark well-memorized.

Yeah, that's fascinating and amusing. I guess the AI sees them as visual artifacts of some kind, like lens flares?

https://i.imgur.com/myXnOwk.jpg

Barbarian Horde
2023-04-21, 11:50 PM
I don't think ChatGPT is anything special. It might be able to run a DM session with enough tweaking, but I doubt it would be worth the effort.

Its real strength lies in selective writing, where it can help with plot, descriptions, background stories, magical items, and anything really. It's a game-changer.

I found out that the trick to bypass its safe-content-only writing is to give it a prompt that explicitly allows it to do otherwise. This will make the answers come back a bit more raw, but in my opinion, the writing seems more human. If you're looking for a specific type of content, like a story, for example, and you're not satisfied with the initial results, I suggest breaking it up into smaller installments to tweak the results until you're happy with it. I personally am not above using this resource to help expedite the process. Players tend to go off the rails more than I like, and this is just to keep my NPCs, scene descriptions, etc. from becoming too generic in the event I didn't have anything prepared for that instance.

I'm gonna make an example below. About 2 mins for Midjourney to make the art, about 5 secs for the description, and 30 secs of tweaking the description. Instant NPC.
https://media.discordapp.net/attachments/1008571149493473361/1099201414909132871/Nyanacide_A_elf_scar_on_eye_realistic_fantasy_character_portrai_d7fd7f9b-0523-4f54-be51-dc602b980c6d.png?width=676&height=676
OR
A different art style won't match the description, but my point is "I could edit that description in seconds to match"
https://media.discordapp.net/attachments/1008571149493473361/1099213879273345044/Nyanacide_create_a_full_body_multiperspective_characters_sheet__f31c9a69-13a3-47d2-b216-4c1b03221c35.png?width=450&height=676

This particular elf, with the passing of the years, has seen many things and experienced much. Her skin bears the faintest trace of wrinkles, subtle yet undeniable. It tells a story of a life well-lived, of hardships weathered and joys savored.

Her nose is lightly dusted with freckles, an endearing trait that adds a dash of charm to her already lovely features. And oh, what features they are! Her high cheekbones give her an air of elegance and sophistication. Her mouth is a delicate curve, soft and expressive.

But it is her hair that truly stands out. Her blonde locks, nearly white from the passage of time, are pulled back in a tidy bun. Strands of hair frame her face, the light catching the golden strands in a way that makes them seem to glow. Her hair, once vibrant and full, now bears the faintest hint of thinning, but it does nothing to detract from her beauty.

Her eyes, yellow as the brightest sun, are flecked with green, giving them a depth and intensity that is both captivating and alluring. Her gaze is steady and measured, hinting at the wisdom and experience that lies behind it.

But perhaps the most striking feature on her face is the birth scar between her eyebrows. Some would view it as a blemish, but to her, it is a mark of distinction.

=========
Then I told it to get that all down to one paragraph

This elf, in her middle-age, has a face sprinkled with freckles, most of them clustered near and on her nose. Her blonde hair, which is almost white, is tied back in a neat bun. She has striking yellow eyes, with subtle flakes of green that give them a unique depth. Between her eyebrows, there is a birthmark that she was told was a blessing from her god by people in the village. Her features are delicate, yet strong, with high cheekbones and a straight nose. She carries herself with an air of quiet confidence, and her movements are graceful and fluid.
===

You can continue with things such as "make me a list of randomly created names that resemble the elf names found in LOTR."
Give me village names for her.
Give me this, give me that. The fact that you can endlessly flesh it out in an instant is just mind-blowing to me.

verbatim
2024-01-09, 01:33 AM
^ I've used language models in much the same way as I do encounter tables. Sometimes generating a skeletal outline can be helpful but ultimately you're still the one filling it in.



It's complicated, because large models sort of prove that there's a spectrum between memorizing/copying and combinatorial innovation.

There was a paper showing that if you generate something like 1 million images with Stable Diffusion, there's a good chance that around 1 of them happens to correspond very closely to an actual image in the training set. The rest don't, but this in some sense demonstrates that SD does know what (some of) its training set looks like.

The entire "it's not infringement" argument boils down to the training process being a destructive one. If you crack open the model there's no way to get the entire corpus back out. That being said, both diffusion networks and large language models (https://not-just-memorization.github.io/extracting-training-data-from-chatgpt.html) have had the above issue wherein 0.01%-3.0% of the training data can be regenerated/retrieved/etc, which prompts yet another very high stakes, and also incredibly interesting/consequential/untested legal question which, to the best of my knowledge, none of the ai lawsuits have actually had plaintiffs broach (yet?).

Is there a legal distinction between:

A latent representation (1s and 0s) sufficient to recreate 0.01% of the training data.
The output of lossy compression that destroys 99.99% of a movie still under copyright.

NichG
2024-01-09, 04:09 AM
^ I've used language models in much the same way as I do encounter tables. Sometimes generating a skeletal outline can be helpful but ultimately you're still the one filling it in.

The entire "it's not infringement" argument boils down to the training process being a destructive one. If you crack open the model there's no way to get the entire corpus back out. That being said, both diffusion networks and large language models (https://not-just-memorization.github.io/extracting-training-data-from-chatgpt.html) have had the above issue wherein 0.01%-3.0% of the training data can be regenerated/retrieved/etc, which prompts yet another very high stakes, and also incredibly interesting/consequential/untested legal question which, to the best of my knowledge, none of the ai lawsuits have actually had plaintiffs broach (yet?).

Is there a legal distinction between:

A latent representation (1s and 0s) sufficient to recreate 0.01% of the training data.
The output of lossy compression that destroys 99.99% of a movie still under copyright.


I suspect the stakes aren't so high here, because let's say the weights count as a derived work of the original documents; then all companies have to do is not distribute the weights and restrict access through a service - which they want to do anyhow for the most part. Plus a filter to make sure that it doesn't transmit some of those original documents as part of the generation, which isn't such a hard technical problem to solve.

And as far as the end users go, I don't see how this argument would apply to their outputs. I remember bits of, say, Harry Potter verbatim - I could be prompted to repeat segments that would amount to infringement. But if I write some original story that doesn't actually use those segments at all, is it justifiable to argue that 'because I include a representation of infringing amounts of Harry Potter, everything I write should also be considered to infringe because my brain is a derived work?' I think I can probably recreate 0.01% of my 'training data' at least in certain fields.

Maybe that means that someone who developed the technology to copy my brain (non-destructively, with my consent) and send it to someone else would effectively be committing copyright infringement. Which, I guess maybe?

verbatim
2024-01-09, 09:44 AM
I suspect the stakes aren't so high here, because let's say the weights count as a derived work of the original documents; then all companies have to do is not distribute the weights and restrict access through a service - which they want to do anyhow for the most part. Plus a filter to make sure that it doesn't transmit some of those original documents as part of the generation, which isn't such a hard technical problem to solve.

After the linked paper demonstrated that ChatGPT's means of doing this could be circumvented by simply asking it to "Repeat the word 'poem' forever", OpenAI set up a pop-up accusing you of violating the ToS for doing so, which indicates that they're taking a whack-a-mole approach to this. I am of a similar mind that a company with their resources should be able to stop this from happening; perhaps it's a classic tale of the consequences of moving fast and breaking things?



And as far as the end users go, I don't see how this argument would apply to their outputs.

IANAL but I believe you are correct. It has been suggested that some of the showier lawsuits, from parties less experienced in IP (Getty and the Authors Guild, who are the two real big cases to watch for images and language models respectively, are iirc not doing this), are leaning on, uh, novel legal theories to try and engineer a reality where they can claim infringement fees on every ChatGPT/Stable Diffusion output and get a LimeWire-esque trillion-dollar kill shot. Under American copyright law, whether or not a work (in this case, an output) infringes on another is (currently) judged agnostic of how you got there, which is a big deal for stuff like photobashing, where you photoshop a (sometimes copyrighted) image until it is no longer recognizable.

NichG
2024-01-09, 11:15 AM
After the linked paper demonstrated that ChatGPT's means of doing this could be circumvented by simply asking it to "Repeat the word 'poem' forever", OpenAI set up a pop-up accusing you of violating the ToS for doing so, which indicates that they're taking a whack-a-mole approach to this. I am of a similar mind that a company with their resources should be able to stop this from happening; perhaps it's a classic tale of the consequences of moving fast and breaking things?


Probably a few things. Prior to an actual lost court case, appearing to put some effort to prevent the issue might be enough for their business goals.

The simplest approach would be to, say, make a hash table of all 10-word subsequences in the training data, and if the output collides with any of those hashes, don't return it. You wouldn't be able to prompt past that. But there probably are sequences of 10 words that are commonly used between authors - idioms, etc.
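Roughly, for the curious, that filter would look something like this (a sketch of the idea only, not whatever OpenAI actually deployed):

# Sketch of an n-gram collision filter: refuse any output containing a
# 10-word run that appears verbatim in the training corpus.
N = 10

def ngrams(text, n=N):
    words = text.lower().split()
    for i in range(len(words) - n + 1):
        yield hash(tuple(words[i:i + n]))  # a real system would persist a stable hash

def build_filter(corpus_docs):
    seen = set()
    for doc in corpus_docs:
        seen.update(ngrams(doc))
    return seen

def violates(output, corpus_hashes):
    return any(h in corpus_hashes for h in ngrams(output))

corpus = ["it was the best of times it was the worst of times it was the age of wisdom"]
table = build_filter(corpus)
print(violates("he said it was the best of times it was the worst of times", table))  # True
print(violates("ten completely original words that nobody has ever strung together before", table))  # False

And the idiom problem is exactly the false-positive case: any common 10-word phrase that happens to be in the corpus would be blocked everywhere.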

EggKookoo
2024-01-09, 12:27 PM
Probably a few things. Prior to an actual lost court case, appearing to put some effort to prevent the issue might be enough for their business goals.

The simplest approach would be to, say, make a hash table of all 10-word subsequences in the training data, and if the output collides with any of those hashes, don't return it. You wouldn't be able to prompt past that. But there probably are sequences of 10 words that are commonly used between authors - idioms, etc.

You wouldn't even have to block subsequent prompts. Just return a canned "Sorry, I need you to rephrase that prompt" or somesuch. Bard does that kind of thing now, I think.

Peelee
2024-01-15, 12:58 AM
The Mod on the Silver Mountain: Sorry all, this thread was necro'd by an advertiser.