Re: open conflicts (John's comments)
- To: firstname.lastname@example.org
- Subject: Re: open conflicts (John's comments)
- From: WFreitag@aol.com
- Date: Wed, 14 Feb 2001 13:29:54 EST
- Sender: email@example.com
In a message dated 2/13/01 1:53:53 AM, firstname.lastname@example.org writes:
>The point is that drama/conflict is read piece by piece as experienced
>viewer BUT the overall resolution of the story conflict/problem is
>pieced together by the viewer after all the bits have been "read" - this is
>assembled from the memory of the "reader".
>In interactive stories the aim is to make the structure of that "reading"
>(obeying rules of plot and dramatic conflict.
I agree, resolution is not a single event but an abstraction that references
multiple events. But it's not _only_ in the reader's mind. There's at least
some objective reality to resolution, or else you could end a novel 4/5ths of
the way through and the reader wouldn't notice anything missing.
That's why in my "conflict objects" model I define resolution not as an event
but as a set of possible resolved states. In the reader's mind, the actual
resolution is whatever happened that brought about one of the resolved states.
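The idea of resolution as a set of possible resolved states, rather than a single event, can be sketched in code. This is purely my own illustration, not the actual conflict-object model; the names `Conflict`, `is_resolved`, and the feud example are invented for the sketch. The conflict is resolved whenever the current world state satisfies any one of its resolved-state predicates, regardless of which chain of events got it there:

```python
# Hypothetical sketch: resolution defined as a set of resolved states,
# not as a specific event. All names here are illustrative assumptions.

class Conflict:
    def __init__(self, description, resolved_states):
        self.description = description
        # Any world state satisfying one of these predicates counts as resolved.
        self.resolved_states = resolved_states

    def is_resolved(self, world_state):
        return any(pred(world_state) for pred in self.resolved_states)

# Example: a feud resolves if either party dies or a truce is struck.
feud = Conflict(
    "Joe vs. John",
    resolved_states=[
        lambda w: not w["joe_alive"],
        lambda w: not w["john_alive"],
        lambda w: w["truce"],
    ],
)

print(feud.is_resolved({"joe_alive": True, "john_alive": True, "truce": False}))  # False
print(feud.is_resolved({"joe_alive": True, "john_alive": True, "truce": True}))   # True
```

Whatever action happened to bring about one of those states is, in the reader's mind, "the resolution."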
In every form of interactive storytelling I've ever engaged in, where the
audience had true freedom, resolution was always the hard part. Making plots
build, interweave with each other, spin off new plots, and so forth is
surprisingly easy as long as you don't care about whether anything is
resolved. I believe that the problem of resolution is pretty much equivalent
to the interactive storytelling problem as a whole.
I doubt that the conflict-object model I've described would create
resolutions with any narrative beauty. But it would be an achievement just to
resolve things at all. Perhaps that's one reason I often look to Icelandic
sagas as an example. The sagas have intricate structures of conflict and
resolution, but to my mind at least, little or no beauty in the resolutions.
Whenever I read them I think, "an interactive storytelling system ought to be
able to do that!"
>My take is that the "bits" (short sequences - action or simulation or
>interaction) could be "marked" semantically so that the "story engine"
>significance and causal relationship to the chain of meanings (of each
>I think Chris's engine could handle that.
The problem is in the "meanings." Chris's engine handles causal relationships
in the chain of actual events, as well as state changes. Causality and state
can do a lot. John hits Joe. What can Joe do about it? Joe could hit John
back. Joe could plan to hit John back later. Joe can hate John, making him
more likely to act against John in the future. Joe can even plan to tell
someone information that would harm someone whom John likes. The Erasmatron
was designed to do all of these.
But it's inconvenient (though not technically impossible) to encode into
character state variables such things as characters' goals, plans, and
conflicts. For example, Joe hates John, so he plans to be nice to John until
he gains John's confidence, and then betray him. We need several relationship
variables to encode this: one to indicate how Joe acts outwardly towards
John, another to record his true deeper hatred of John, another to indicate
that Joe wants-to-betray John. The latter must be separate from the others
because in the first stage of Joe's plan, he does not want to betray John
(yet). Also we need a mechanism (probably a verb tied to a plot point) for
Joe to check whether the first stage of his plan has progressed far enough to
implement the second (setting the wants-to-betray variable). Trying to build
a whole story world this way, we would quickly run out of available variables
in the Erasmatron. More importantly, we would find it impossible to keep
everything straight in the author's mind.
Conflict objects would be a more convenient way to represent state conditions
that have a certain context or meaning. Implemented as a conflict object
instance, Joe's plan to harm John contains all the information about the
current state of the plan in one place.
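To make the contrast concrete, here is a sketch of Joe's betrayal plan as a single conflict-object instance. The class name, stage logic, and trust threshold are all my own invented illustration (nothing like this exists in the Erasmatron); the point is only that the outward behavior, the true intent, the wants-to-betray flag, and the stage-advance check all live in one object instead of in scattered relationship variables:

```python
# Illustrative sketch of a conflict-object instance bundling a multi-stage
# plan's state in one place. All identifiers are hypothetical.

class BetrayalPlan:
    def __init__(self, plotter, target):
        self.plotter = plotter
        self.target = target
        self.stage = 0       # stage 0: gain confidence; stage 1: betray
        self.confidence = 0  # target's accumulated trust in the plotter

    @property
    def outward_manner(self):
        # In stage 0 Joe acts friendly, whatever his true feelings.
        return "friendly" if self.stage == 0 else "hostile"

    @property
    def wants_to_betray(self):
        # Not yet true during the confidence-building stage.
        return self.stage == 1

    def note_trust_gained(self, amount):
        # The progress check: has stage one gone far enough to start stage two?
        self.confidence += amount
        if self.stage == 0 and self.confidence >= 3:
            self.stage = 1

plan = BetrayalPlan("Joe", "John")
for _ in range(3):
    plan.note_trust_gained(1)
print(plan.wants_to_betray)   # True
print(plan.outward_manner)    # hostile
```

The separate relationship variables of the previous paragraph (outward behavior, true hatred, wants-to-betray) become derived properties of the one plan object, and the progress-checking verb becomes an ordinary method.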
Here's another example: Sam is my friend. I'm angry at Sam because I believe
that Sam tried to hit on my girlfriend. Later I discover Sam was innocent.
I'm not angry at him any more.
The problem is that while I can confidently increase my "anger" relationship
variable with Sam as a result of learning that he did something I dislike,
it's harder to go the other way (that is, _resolution_ of the issue) using
instantaneous state variables alone. Learning that Sam did not try to hit on
my girlfriend has an effect on my anger that's hard to define without
"understanding" the history. After all, if I'm angry at Sam because he broke
my lawn mower, and I never even suspected that he hit on my girlfriend, then
learning that he didn't hit on my girlfriend would have no effect on my
anger. A conflict object would be a way of encoding the "because" that
underlies my anger, so that subsequent related events could act appropriately.
Note that conflict objects do not really add to the system any
"understanding" of meaning or significance of events. They encode meaning or
significance only operationally and only within an individual conflict
object. They're not intelligent. They simply offer a better way to represent
delayed effects of causes, and circumstances that persist over time.
>What happens on the moment to moment level in each "bit" does have meaning
>eg kill or get killed or gotta get the hell away from this maniac chasing
>the integration and orchestration of these that count for a "story".
>That COULD leave a lot of expressive flexibility within the Bits- we just
>authors give them a "semantic label".
>This could be at a "sentence" level AKA subject-verb-object. OR at Paragraph
>level. If the "engine" knows the "value" of the label and how it fits then
I think I understand the idea of a semantic labeling of an action. To try to
restate your example, if Joe swings a knife at Sam, the semantic label might
be "Joe and Sam are in a life and death struggle" or alternatively it could
be "Joe wants to scare Sam away" depending on the circumstances in which the
action occurred. Subsequent actions would take the semantic label into
account in deciding what happens next.
But I'm not sure if the net results of such labels would be any different
from the way you could do it in the current Erasmatron, which is to have two
different versions of the action, each with its own label built in. Instead
of swings-knife-at with two possible labels, "wants to kill" and "wants to
scare away," you would instead have two separate actions,
"swings-knife-to-try-to-kill" and "swings-knife-to-try-to-scare-away." This
seems equivalent, unless you intend to use the semantic labels in some other
way further down the chain of events, in which case you'll have to explain
that to me.
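The two encodings being compared can be sketched side by side. The identifiers here are my own illustration of the idea, not anyone's proposal: one version attaches a semantic label to a single action based on circumstances, the other builds the label into two separate actions. As far as the immediate result goes, they come out the same:

```python
# Sketch comparing a context-derived semantic label with pre-labeled
# action variants. All names are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Action:
    verb: str
    label: str  # semantic label, e.g. "life-and-death" or "scare-away"

def swing_knife(circumstance):
    # One verb; the semantic label is chosen from the circumstances.
    label = "life-and-death" if circumstance == "cornered" else "scare-away"
    return Action("swings-knife-at", label)

# The pre-labeled alternative: two separate actions, label built in.
swing_to_kill = Action("swings-knife-to-try-to-kill", "life-and-death")
swing_to_scare = Action("swings-knife-to-try-to-scare-away", "scare-away")

print(swing_knife("cornered").label)    # life-and-death
print(swing_knife("open field").label)  # scare-away
```

The difference would only matter if later actions dispatch on the label itself rather than on the verb.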
This points to another issue with conflict and resolution in artificial
narrative, which is determining when conflict exists and what its nature is.
With conflict between non-player characters, we could cheat by first deciding
what conflict will occur, then forcing the characters to act in such a way as
to bring the conflict about. But that's not an option where an interactive
participant is involved. So how do we decide when an action constitutes the
creation of a tension that must be resolved (that is, a conflict) and when it
does not?
>Discursion good be an occasional good.
I don't understand what you mean here.