
Re: Facade discussion



----- Original Message -----
From: <WFreitag@aol.com>
To: <idrama@flutterby.com>

It's been a while, but here's a response to Walt's last post.

>> Walt, would you call this a middle ground between the open-ended
>> character-behavior simulation approach and the "pre-planned" approach you
>> described?
>
>I would. But wow, what a large stretch of that treacherous middle ground
>you're aiming to conquer! This is impressive and scary.


"Conquer" is way too strong a word; more like "take a stab at."


>I recognize the concepts of the components of the system you're describing
>from the Oz model, as you pointed out. But I'm having a lot of difficulty
>abstracting the examples from that work to the same model applied to
>storytelling, quite a "scale up" in complexity. For example, the idea of
>"recovery" when switching from one behavior to another is easy to grasp in
>the domain of Woggles. It's easy to understand why, and how, one would use a
>recovery scheme to prevent a Woggle from overtly switching from a "bounce to
>destination" behavior to a "go to sleep" behavior while still in mid-bounce,
>even if the executive decision for that character to switch happens to be
>made in the character's mid-bounce. I can also understand the "why" of doing
>the same thing, analogously, with story behaviors. But how to actually do
>that, given the real-world specificity of story events... yikes! Aren't you
>going to end up requiring a hardwired recovery scheme for each possible
>combination of interrupted behavior and interrupting event or new behavior...
>perhaps even more than one, depending on where the interruption occurs
>chronologically? But that, of course, would clobber the encapsulation of
>behaviors.


We'll do our best to author specific recoveries for a (finite) number of
situations, and write AI to help categorize a wide variety of the player's
dialog into a small number of discourse acts, for example, mentioning
anything about recreational sports, exercise, the beach, hiking, etc. could
all map to "brought-up-topic outdoor-activity", which could trigger a
response from the characters such as, "Oh, we hate the outdoors."

But, unfortunately, I'm sure many of these recovery behaviors will have to be
very general and only semi-believable, and almost certainly annoying and
frustrating at times for the player.  For example, if the player says
something the system cannot respond to, at worst it would activate behaviors
that cause the characters to _appear_ to be listening to you, seeming to
consider what you're saying, mulling it over, and then responding by moving
forward with the story -- "Well, I don't know, I think I'm going to do x, y,
or z" -- as if what you just said or suggested didn't work for them.  (It's
no coincidence that our characters are distraught and distracted.)  There's
no fancy AI involved there, it's more a matter of carefully designing the
dialog so that the player doesn't feel completely ignored.  This is a lot
like how politicians respond to questions without actually answering them.
The player will probably sometimes feel like, "they're not listening to me,
I just said something they should respond to!"  This flaw will almost
certainly be the biggest limitation of the story experience.

But we will NEVER have the characters say "I don't understand".  :-)


>And that's just one detail.
>Okay, so where have you hidden the "thenMiracleOccurs" calls in your system?
>Obviously (since you haven't actually tried to hide it) in the behaviors.
>Possibly in the beats as well, but probably not. If I'm interpreting this at
>all correctly, I see beats as a unit of performance of a story, a unit of
>action on about the same scale as two consecutive Erasmatron verbs, such as
>"John makes a pass at Mary, and Mary rejects John."

That sounds about right


>snip<
> I imagine that the story
>manager also fits the Oz schema of hierarchical intelligence, so you're not
>trying to slip a miraculous storytelling genius in here. In other words, the
>story manager has no real-world common sense knowledge about the world the
>story takes place in or the events going on in it. The story teller makes its
>decisions based on story-arc-level abstractions (such as "the level of
>conflict is low, raise it"), and its actions are limited to assigning
>behaviors based on characteristics tagged to the behaviors (to raise the
>level of conflict, choose a behavior that is permissible in the current state
>and which has a high conflict attribute, and put it into play). I'm assuming
>that the story manager is, in your overall structure, really just another
>behavior (you haven't emphasized the hierarchical nature of behaviors but
>that's central to the Oz paradigm).


Yes, a behavior hierarchy is the general approach we're taking.  I should
have mentioned that in my last post.


>So its capabilities are not really
>limited to what I've described. There's no reason the story manager couldn't
>decree beats directly, perhaps for key events of the story (the tidal wave,
>the shark attack...) and it could embody sufficient causality rules to
>control such events adequately. But my point is that the story manager does
>not have any large body of "universal" or "general-purpose" knowledge to draw
>on.


correct.


>So what does that leave for the behaviors in general? Well, basically,
>everything. Here are some thoughts about behaviors.
>Behaviors would seem to be a lot like scripted scenes, in the following
>respects:
>- They are data-intensive.
>snip<
>- In particular, a behavior has a pre-planned outcome or a set of possible
>pre-planned outcomes "in mind" at all times.
>snip<
>How does a behavior differ from a scripted scene?
>- They could be interactive (or more precisely, reactive). I envision a range
>of different types and degrees of reactivity possible within the basic
>definition of a behavior (a behavior is a machine, after all).
>snip<
>- They are not chronologically contiguous. Unlike a scripted scene, they
>don't take continuous control of the stage from start to finish. A behavior
>doesn't have to be acting to be active. An active behavior should be able to
>perform a few events, idle for a while (perhaps while waiting for one of
>several possible state configurations to occur), then perform a few more
>events. This is necessary for many simultaneous active behaviors to be able
>to share the stage.
>- They are possibly hierarchical. A typical behavior might do a mixture of
>two things: controlling beats directly, and passing control to subordinate
>behaviors that it instantiates. Like I said, you haven't emphasized the
>hierarchical nature of behaviors, perhaps it's not a central part of your
>plan this time. But I think it's key.


Yes, the hierarchical nature of behaviors is definitely key.  We are
currently working out how to organize this hierarchy to achieve the effects
we want.  Note that the various levels of the hierarchy may operate in
different ways; that is, the way lower-level behaviors operate and are
authored may be subtly different from higher-level beat behaviors, and
different again from even higher-level global-story-management behaviors.
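
Purely as an illustration of what I mean by levels (every class name here
is made up for this post, not our actual design), the skeleton might look
something like:

    # Hypothetical sketch: each level of the hierarchy spawns and
    # supervises the level below it.
    class Behavior:
        def __init__(self, parent=None):
            self.parent = parent
            self.children = []

        def spawn(self, behavior_class):
            child = behavior_class(parent=self)
            self.children.append(child)
            return child

    class StoryManagerBehavior(Behavior):  # top: global story management
        pass

    class BeatBehavior(Behavior):          # middle: dramatic beats
        pass

    class CharacterBehavior(Behavior):     # bottom: moment-to-moment acts
        pass

    story = StoryManagerBehavior()
    beat = story.spawn(BeatBehavior)
    reaction = beat.spawn(CharacterBehavior)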


>snip<
>Now, you've mentioned that the story manager should also be able to remember
>where a behavior left off, in case it's possible to return to it later. A
>fine idea. The character involved in a romance doesn't die, but he does go
>off to fight in a war. The romance behavior isn't going to go anywhere, but
>you want to be able to come back to it when Johnny comes marching home. The
>story manager could do that, but why not just leave the romance behavior
>itself active? In other words, shift the architecture so that the romance
>behavior itself contains the rules for when it can be returned to. Now you've
>got something like a hierarchical relationship between behaviors. To make it
>complete, let it be the romance behavior itself that decides to send the
>character off to war in the first place. Instead of just reacting to
>interruptions from "outside," the behavior can now interrupt itself, in order
>to more deeply develop its own internal story.

Well put!


>The trick is schematizing those interruptions to maintain both
>logical/narrative continuity, and flexibility, and to give authors the
>boundaries they need to work with them. If the character in the romance
>always goes off to war in the middle of it, then the going-off-to-war might
>as well just be built into the romance behavior itself. But suppose other
>behaviors were available that could result in the character going off to
>jail, or to minor league training camp, or to a substance abuse program, or
>on a business trip to Europe, or on a fishing voyage. Each offers roughly the
>same possible directions for further development: the character does not
>return; the character returns having fallen in love with somebody else; the
>character returns more smitten with the other than ever before. Other
>behaviors could similarly draw on the same set of "character goes away"
>behaviors, so efficiency need not suffer. Do you see what I'm getting at
>here? And this is a very crude example. Instead of a set of "character goes
>away" behaviors, it would be better to have a more generally useful set of
>"romance complication" behaviors. The goal is to get as general as you can
>while still able to schematize for interchangeability.


Yeah, I've been trying to understand how to create behaviors that are more
general, more abstract -- i.e., even higher up in the hierarchy -- perhaps
leading towards more generativity.  I'm not sure how far we'll get towards
that in this project.  But this project is at least one behavior hierarchy
level higher than my previous projects, Petz and Babyz.  So if this trend
continues, perhaps after this project we will have an understanding of how
to make an even taller behavior hierarchy -- to make more abstract,
generative behaviors / processes.  That'd be cool.


>At this point I've passed from attempting to recap and understand your
>schema, to describing mine. Let me take that a step further. Here's what
>you'd change to turn your system into something like mine.
>
>The most important change is that every outcome of every behavior must be an
>ending -- that is, an author-crafted resolution (though not necessarily a
>successful or happy one) of the conflict, goal, or problem around which that
>behavior has been built. In other words, every behavior must now be a small
>story.

that's interesting


>In my approach, it has to be a small interactive story. Let's keep it simple
>and make each behavior, at its core, a plain dumb branching story tree.
>Once a behavior begins, it always remains active until it reaches a
>resolution. Nothing the audience does can alter that. If a bomb is set to go
>off at noon, then it will either go off or not go off (be disarmed, etc.) but
>there's no way it's going to just be forgotten about. The same is true of a
>romantic interest between two characters. The audience's choices can
>determine the outcome, but cannot determine whether or not there will be an
>outcome. This is the crux of my trick, a cheap and dirty way of preserving a
>semblance of narrative continuity in the face of interactivity.


Seems reasonable, I can see the value in that.


>If we allow for multiple behaviors simultaneously, then we have to allow for
>state changes from "outside" a behavior (caused by the effects of other
>behaviors) that complicate or obviate the behavior. We handle those by
>including rules within the behavior that branch to alternative paths or
>outcomes if the state goes "out of bounds." If a character in a romance
>behavior dies (if that's a possible state; in other words if the world allows
>deaths), there's an outcome in the romance behavior for that eventuality.
>
>Now we can just get rid of the story manager, because the most important
>things a story manager does (suspend behaviors, recover from behavior to
>behavior, etc.) either don't need to be done or are handled by the behaviors
>themselves. (Sure, we still need global executive functionality for other
>purposes, but these functions are no longer directly concerned with
>storytelling rules).
>
>But what puts behaviors into effect? We start with one or more active, and
>active behaviors instantiate others during the story at points where their
>own scripts instruct them to do so. Hierarchically, as I described before. In
>other words, the script implicit in a romance behavior might come to a point
>where it invokes another behavior, filling the new behavior's roles according
>to its own current roles and state. _Which_ new behavior it chooses can
>depend on state and/or on audience decisions, or perhaps just random
>selection within a schema category. (Audience decisions can also figure in at
>conventional branch points.) Typically the calling behavior will then await
>state changes caused by events of the new behavior (e.g. Johnny coming home
>from the war, or being killed in the war) before advancing its own progress
>any further, though other mechanisms are possible. There is no "horizontal"
>chaining of behaviors; that is, there is no point where a new behavior or
>scene must be chosen entirely on the basis of the current world state and/or
>story construction rules. (Consequently, no story manager is necessary to do
>the choosing.) Except for the initial top-level behavior(s), every new
>behavior arises in the context of another already in play.
>
>There are many ways to make this more complex and sophisticated. For example,
>we could do away with the type-schemas for behaviors and instead select them
>based on fitness relative to the current story state, the way Dramaton
>selects scenes. (That, of course, puts the story manager back in.) But the
>core concepts here are so simple that I'm writing a highly limited version as
>a text-only paper card game. Of course, that version has many drawbacks (the
>"cards" are actually full size pages to hold the amount of text and
>instructions required, and the scenes have to be written full of
>generalizations like "as you proceed toward your destination..." that would
>be easily taken care of by role variables in any computer version). Still, it
>helps in developing the idea and may soon help in demonstrating it.
>
>So we seem to have many points of similarity, including:
>
>- Author-intensive, data-intensive, and (if I'm interpreting correctly)
>story-like content units.
>- Playback content tightly constrained by the data elements put in; not
>on-the-fly creation of original story, but rather interactive recombination
>of story elements.
>- Systems designed specifically for a single story or at least a single style
>of story. To write a different style of story, you need not only different
>behaviors, but a different schema (in my case) or a different story manager
>(in yours).


Or if the story manager is itself implemented with a particular flavor of
behaviors -- that is, the highest level in the behavior hierarchy -- then to
create a different story or style of story, you need to rewrite the entire
behavior hierarchy (but hopefully appropriating / re-using pieces of the
last one).



>- (Maybe) Hierarchical story structures.

yes.


>We may also share some of the same difficulties, including:
>
>- Authorial challenge. I know in my own cases that writing story elements for
>my system is difficult.


Yes.  At least half of the trick of all this is becoming skillful at
authoring within the architecture, whatever your architecture may be.
Inventing yourself a Photoshop doesn't mean you'll be able to make
interesting pictures.  I don't think anyone would disagree that to create
interactive stories you still need good ol' fashioned writing talent -- but
the job is now doubly difficult, because you must adapt to this new
procedural way of authoring.


>In fact, I want it to be difficult. When I arrive at
>a variation that appears to not be difficult to write, I change it. My
>reason? Simple. If you pump a tire pump and there's no resistance, it means
>that for some reason you're not getting compressed air into the tire. If I
>write an element and it's too easy, it means the process isn't demanding
>enough creative thought from the author. But just because the difficulty is
>necessary and appropriate doesn't mean it's not a problem.

Hey, this enterprise is masochistic enough without that kind of attitude!
:-)


>Who's going to
>write these things?

Good question...  I think one needs to be both a writer and programmer.


>Also, there's "good difficulty" and "bad difficulty."
>Creative thought that goes into puzzling over how to make the system work,
>like a clog in the air hose of the tire pump, adds resistance without helping
>to pressurize the tire.
>
>- Forms of user interactivity. These designs seem to lend themselves more to
>explicit decision points than to at-will action-based participation. But the
>latter is not only more in vogue, it's probably artistically superior.


We're specifically trying to avoid explicit decision points, to make the
interaction seamless and continuous, or "at-will" as you put it.  We want
the player's interactions to truly have continuous, direct and indirect
effects on the story.  Any one thing the player says may have a direct
effect (e.g., triggering a specific dialog reaction from a character), as
well as having longer-term effects (e.g., increasing/decreasing the overall
trust a character has for the player, which can have many ramifications for
how all behaviors operate, which behaviors are chosen in the future, etc.).
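
A toy illustration of that direct-plus-indirect idea (the function names,
state variables, and numbers here are all invented for this post):

    # Hypothetical sketch: one player utterance has an immediate, direct
    # effect (a dialog reaction) and a longer-term, indirect effect
    # (nudging global character state that later behavior choices read).
    character_state = {"grace": {"trust": 0.5}}

    def say(speaker, line):
        print(speaker + ": " + line)

    def on_discourse_act(act, topic=None, speaker="grace"):
        # Direct effect: an immediate reaction.
        if act == "brought-up-topic" and topic == "outdoor-activity":
            say(speaker, "Oh, we hate the outdoors.")
        # Indirect effect: adjust trust, which ramifies through how all
        # behaviors operate and which ones are chosen later.
        elif act == "praise":
            character_state[speaker]["trust"] += 0.1
        elif act == "criticize":
            character_state[speaker]["trust"] -= 0.1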


>Now, I know that Oz was action based, and you mentioned that immediate
>reactiveness is one of your goals. But this is where you're going to have the
>most difficulty. Suppose the player-character, given the freedom to do so, up
>and slugs another character. There's a chance that a current active behavior
>has rules for interpreting that action and orchestrating a response to it.
>But if all behaviors are story-rich sequences created by authors, then it
>seems that a behavioral response would only be available if the author
>planned for it. What if the action is completely senseless, or sensible but
>the author didn't plan for it?


A core design requirement for us is that we always have at least a
semi-believable response for all combinations of possible player
interactivity at all times.  (We plan to do a many-to-few mapping of surface
text to discourse acts, so the total number of combinations is not
completely insane.)  This will require a lot of authoring, yes.  Worst case,
if we cannot cover all these combinations, we fall back to the general
"recovery" behaviors described earlier.  Hopefully those will only happen
10-20% of the time, not 75% of the time.

Naturally it will depend on how "cooperative" a player is when they play.
The interaction will be designed to "offer" plenty of openings for players
to meaningfully interact -- a technique from improv.  Additionally, I hope
to listen for interactions that we didn't make offers for, so that players
can feel they are being clever. :)  All of this requires lots of authoring.

The experience will reward players that stay within the domain of the story,
try to humor players that briefly go outside the domain of the story, but
barely support players that go beyond what is reasonable.  For example, if a
player continuously talks about off-the-wall topics, talks in non sequiturs
or uses lots of inappropriate language, the story will be forced to
prematurely abort.  Of course, what will really suck is if players _are_
actually being reasonable and cooperative, but the system is too stupid to
recognize that, and ends the story unnecessarily.


>(In the former case, we could say that the
>audience deserves to have the action ignored or reacted to illogically
>because they're clearly not playing along. But the latter case is more
>problematic.)

yes.

> Or we could require the story manager to arrange a reaction,
>perhaps by triggering a behavior that handles "react to unprovoked punch."
>But that doesn't seem like what you had in mind for what behaviors should be.

No, that's a reasonable thing to do.

>And even with such a behavior, there has to be a way for the implications of
>the action to be propagated to other parts of the ongoing story. Either the
>behaviors will have to be able to react autonomously to the state change,
>requiring behaviors to be carefully wired into some sort of global character
>state (including mental states) model, or the story manager would have to
>have rules (requiring common-sense knowledge) for figuring out the effect of
>an action on a given behavior.


We're going to try for a bit of both.  An unprovoked punch may, in the
short term, cause a clumsy general recovery behavior, but it would most
certainly cause meaningful effects in global character state, affecting how
current behaviors operate and which beats and behaviors are chosen in the
future.  So this requires authoring behaviors to pay attention to global
character state, and to modulate their behavior appropriately.


>(For a soap opera version of my method, I've designed an interface that's
>at-will but tightly constrained. The audience's interaction is limited to
>giving props, items that are currently significant to the active story
>elements, to characters. (The audience does not play a character role, but
>meddles in the story through this mechanism.) This tight constraint allows
>the system to reasonably interpret every possibility because they're limited
>in number. Every element, for each of its props, has rules or branches for
>the effects of giving the prop to any character who has a role in that plot.
>(Other characters are uninterested in that prop, at least until I add a "plot
>partisan" mechanism that tracks characters' awareness and attitudes about
>plots in which they don't have direct roles.) Since that's the only form of
>audience activity, the number of alternatives needed is comparable to the
>number needed for ordinary direct decision points. I think this works, but
>it's far from a general solution, and it's far from the goals you've stated
>for Facade.)


Well, even though we're allowing for open-ended text input, as I've said
we're planning to do an intelligent many-to-few mapping to a limited number
of combinations of discourse acts.  But until the system is up and running,
we can only guess whether we'll be doing enough to give players a satisfying
level of expressiveness.  Probably won't be.


>Perhaps contributing to my confusion on this point is what appears to be some
>ambivalence in your description of Facade as to whether behaviors are
>associated with characters or with units of dramatic action involving
>multiple characters.

Both!

>In the parts of the Oz work I'm familiar with, behaviors
>were instantiated in a character. Therefore behaviors had a lot to do with
>reacting to stimuli from the environment (particularly, other characters),
>and one could conceptualize a hierarchy of behaviors as the long-term mental
>state (current emotions, but also plans, wishes, habits) of a character. This
>makes the characters very responsive and realistic.


We are planning to make the characters themselves weakly autonomous, only
responsible for "easy", personality-specific moment-to-moment reactivity;
the characters rely on the higher-level global story behaviors to be told
what to do next on a story level.  The Oz characters, by contrast, tended to
be (or seemed to be designed to be) strongly autonomous, far more
responsible on their own for the decisions they make, getting infrequent
guidance from higher-level story management.  This difference is described
in detail at http://www.interactivestory.net/papers/MateasSternAAAIFS00.pdf.


> But there wasn't much
>capability for organizing individual behavior into dramatic sequences; story
>had to "emerge" from free-form character interaction.

At least in the systems built to date.  Similarly, Petz and Babyz relied on
emergent narrative.


>On the other hand, your
>description of behaviors, particularly the process of authoring behaviors,
>makes them sound more akin to (an evolved form of) scripted scenes, as I
>mentioned above, involving multiple characters. If that's the case, then
>responsiveness becomes a problem. Why? Because authored elements don't come
>equipped with the causal rules by which they were generated. That's their
>virtue; it allows them to embody authorial intelligence, which presumably
>involves causal rules too subtle and complex to be explicated. But it also
>makes it impossible to automate their variation. The ability to react to
>at-will audience choices cannot be taken for granted, despite how
>successfully reactive the earlier character-behavior-based systems were. This
>is true even if behaviors do not have reactive variation; that is, if the
>only decision to be made is whether the behavior continues to run or not.
>
>The divide between character behaviors and scene-like story behaviors is a
>huge one. It's not just an architectural difference. For an author creating
>behaviors, for example, the process of creating behaviors for characters
>which by their interactions will get involved in stories, and creating
>storyteller (or puppeteer) behaviors for generating or performing dramatic
>scenes, are one hundred percent different. On a more fundamental level,
>character behaviors would ground you in the "P = 1.0" (simulator) attractor
>and puppeteer behaviors would ground you in the "P = 0.0" (linear story)
>attractor. (Even in the middle ground, you need supply lines back to one or
>the other, or both.) So which is the dominant model for Facade?
>
>Perhaps I can predict a possible answer: both! In other words, why not take
>advantage of the aforementioned hierarchical nature of the behavior model?
>Combine low-level behaviors instantiated in characters with higher level
>behaviors that generate or perform narratively contiguous scenes by
>controlling the low-level behaviors. Puppeteering by proxy. If a puppeteer
>behavior wants Joe to go to the pizza place, it doesn't walk him to the pizza
>place, it instantiates a behavior in Joe that directs him to go to the pizza
>place. So he'll go, but the rest of his hierarchy of behaviors is still
>present. If I punch him on the way there he'll still be able to react
>appropriately.


yes.  By now you've probably already surmised that this would be our answer.


>Does this succeed in claiming that middle ground? Not quite. The chasm
>between the two attractors still exists. It's now located between your
>higher-level and lower-level behaviors. Your success depends on how well you
>can span that gulf. The key issue is your information flow from lower level
>behaviors back to the higher level ones, and how well the higher level
>behaviors or the story manager can react to that information.


I think you're right.  It's becoming clearer that how one organizes the
hierarchy, and customizes how behaviors operate at different levels of the
hierarchy, will make or break this approach.  We're in the thick of working
through that right now.  It's not obvious yet how well it will work.


>If I punch Joe on his way to the pizza place, the reaction can't be handled
>by low-level behaviors alone. A Woggle could appropriately punch back, or run
>away, or whatever, and that would be the end of it (except for some lingering
>resentment). But in a story world the implications have to go farther than
>that. Suppose Joe was going to the pizza place to buy a pizza for _me_. It
>would be rather absurd if after I sucker-punch him, he runs away, mops up the
>blood, finds an alternate route avoiding me, and then, interruption resolved,
>resumes his earlier behavior of going to the pizza place to get me a pizza!
>Somehow the story behavior or the story manager has to find out that
>circumstances have changed, and react to the change.
>
>Okay, this example's not too hard. This particular change could be mediated
>through the aforementioned global character mental state model (I guess it's
>needed after all).


right. :)


>snip<
>Okay, let's try a harder one. On the way to buy me a pizza, Joe sees or
>learns that my sister has been in an accident and taken to the hospital. What
>part of the system "knows" or "figures out" that he's not going to just
>continue on to the pizza place, that the pizza plot behavior has to be called
>off? The states of all characters involved in the pizza plot are unchanged.


There are many ways to handle this.  One would be a high-level behavior that
is monitoring for emergency situations, such as a relative taken to the
hospital.  This behavior is prioritized such that once it becomes active,
its high level and priority automatically cause the suspension of other
lower-priority or lower-level currently active behaviors, giving it control
to do what it needs to do (e.g., run to the hospital).  However, if Joe were
starving and literally needed to eat before going to the hospital, a
starving behavior would kick in and temporarily override the hospital
behavior.
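
A bare-bones sketch of that kind of priority arbitration (all names and
numbers invented for this post; the real mechanism will be richer):

    # Hypothetical sketch: the highest-priority active behavior gets the
    # stage; the rest are suspended, not killed, so they can resume later.
    active_behaviors = [
        {"name": "buy-pizza", "priority": 1},
        {"name": "go-to-hospital", "priority": 5},
        {"name": "eat-when-starving", "priority": 9},
    ]

    def arbitrate(behaviors):
        winner = max(behaviors, key=lambda b: b["priority"])
        suspended = [b["name"] for b in behaviors if b is not winner]
        return winner["name"], suspended

    # arbitrate(active_behaviors)
    #   -> ("eat-when-starving", ["buy-pizza", "go-to-hospital"])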


>So the change has to be mediated through whatever behavior was responsible
>for my sister's accident. This is possible. After all, that accident didn't
>just happen at random, not in this type of world. But you can see the
>complexity mounting up.


No, I wouldn't go with that approach; it's better for a behavior to know
about itself and when it should fail.


>Presumably there is some general rule that
>higher-priority story behaviors cause lower ones to be called off when their
>character roles overlap. As soon as Joe gets subsumed into a role in the
>accident plot behavior (and how does that happen...? Or doesn't it happen, in
>which case Joe walks right by the accident scene and does nothing?), it takes
>priority and the pizza plot behavior is canceled or suspended.


yes, as just described in the pizza-hospital example.  This is fundamental
to how a believable agent behavior hierarchy is designed to work.


>Now for the really difficult example: While Joe is on the way to buy me a
>pizza, I meet up with him and give him a pizza. If this were an AI planning
>system, there would be no problem. Joe's plan would adapt to the fact that
>his current super-goal (obtain pizza) was suddenly and unexpectedly fulfilled
>and he can skip directly to the next goal (give me the pizza) which will be
>easy because I'm standing right there. But this isn't an AI planning system.


That's not exactly true.  While the behavior might not be written to be able
to adapt its own execution, it could stop (i.e., succeed or fail) when it
makes no sense to continue.  Behaviors can have context conditions and
success tests, which allow them to end prematurely if something unexpectedly
changes or happens.  The get-pizza behavior would be written to succeed
immediately if the character were suddenly given a pizza or for some reason
stopped feeling hungry.
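
In sketch form (the class, world-state flags, and movement stub are made up
here, just to illustrate the two kinds of tests):

    # Hypothetical sketch of context conditions and success tests: the
    # behavior ends early, as success or failure, when either test fires.
    def walk_toward(world, place):
        pass  # movement stub

    class GetPizzaBehavior:
        def success_test(self, world):
            # Succeed immediately if the goal is already met, e.g. the
            # player hands Joe a pizza on the way.
            return world["joe_has_pizza"]

        def context_condition(self, world):
            # Keep running only while continuing still makes sense.
            return world["joe_is_hungry"] and not world["pizza_place_closed"]

        def step(self, world):
            if self.success_test(world):
                return "succeeded"
            if not self.context_condition(world):
                return "failed"
            walk_toward(world, "pizza-place")
            return "running"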

There's an interesting discussion of how behaviors for believable agents are
similar to, and different from, traditional AI planning systems in the
thesis of one of the Oz project members; see
http://www.cs.cmu.edu/afs/cs.cmu.edu/project/oz/web/papers/CMU-CS-97-123.pdf


>Furthermore there's no state change (at least not one the author of the
>behavior is likely to have anticipated and provided a story-level response
>for), nor is there any other interrupting behavior involved. So it appears
>inevitable that Joe will thank me for the pizza (a normal low-level behavior
>mediating a normal response to having been given a moderately desirable item
>by a non-romantic friend), and then continue on to the pizza place to buy me
>a pizza. The problem is that in this case determining the effect of my action
>on the story seems to require common sense knowledge -- in this case, that
>someone who has a pizza probably doesn't need another one.


Common-sense knowledge would be an even better solution than the
aforementioned one.  (Speaking of which, a recent Minsky lecture at the
computer game conference is good reading on this topic, see
http://technetcast.ddj.com/tnc_play_stream.html?stream_id=526 .)

But short of that, context conditions and success tests can handle this;
they just have to be authored.


>Oh well, no system is perfect. But such examples start arising in droves when
>the audience is free to talk to characters, even if through a limited dialog
>choice mechanism. You need many ambient low-level character behaviors to
>handle immediate reactions realistically. If you allow the audience to say
>and do significant and emotionally charged things to characters, then your
>web of reactive behaviors might end up looking a lot like Erasmatron. If
>audience utterances are more constrained, then fewer more specialized
>reaction behaviors would suffice (analogous to the "react to being punched"
>behavior, these might be things like "react to a comment about your personal
>appearance").  Either way, the key issue is, again, the information flow from
>these low-level behaviors to the story and story manager behavior levels.


Yep


>Curtailing at-will audience actions, of course, eliminates this problem, at a
>rather severe cost. That's the approach I'm forced to take, at least in the
>short term.


I think that's a reasonable tradeoff to make; we may be getting ourselves
into a real mess by allowing such open-ended input.  Of course, as a total
worst case, we could fall back to a more limited-input interface (e.g.,
commands or menus).


>Of course, I might be analyzing a fantasy of what I think you meant, rather
>than your actual approach. Wouldn't be the first time. So here are my
>questions and requests for clarification:
>
>1. Behaviors: character-centric, puppeteer-like, scripted-scene-like, or a
>hierarchical combination?

Hierarchical combination


>2. Behaviors: do they "branch" or otherwise contain alternatives reactive to
>events, or is their reaction to events limited to "continue or not continue"?


Behaviors with limited alternatives within them.  If a behavior cannot
handle the current interaction, that behavior would fail and a different
behavior that can handle it would take over.


>3. I don't really understand how beats fit in... that is, how does the
>concept of beats help to organize the system or the authoring process?


Essentially, beats are flavors of behaviors (hey, that rhymes!) that sit
higher up in the behavior hierarchy and keep track of story values.  There
will probably be several hierarchy layers of such story behaviors, to be
determined.
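
A toy illustration of a beat-flavored behavior keeping track of a story
value (invented names and numbers again, not our actual design):

    # Hypothetical sketch: beats nudge a story value, and the next beat is
    # chosen partly by how close it moves the value to the desired arc.
    story_values = {"tension": 0.2}

    class ArgueAboutMarriageBeat:
        tension_delta = 0.3
        def perform(self):
            story_values["tension"] += self.tension_delta

    def choose_next_beat(candidate_beats, target_tension):
        return min(candidate_beats,
                   key=lambda beat: abs(story_values["tension"]
                                        + beat.tension_delta
                                        - target_tension))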


>4. User interaction model. Decision points, at-will, or state-constrained
>at-will (e.g. you can only punch someone who's recognized as a current enemy
>of yours)?

At-will unconstrained -- but because there are only two other characters and
a limited set of props, we've inherently limited the amount of damage the
player can do. :-)


>One more comment: Clearly, it's difficult to carry on a dialog of ten-page
>papers. I wanted to lay out my thoughts and answer the questions from your
>prior posts, but now that we've both made that start, perhaps we could break
>things down and discuss one or two points at a time. This might also make it
>more inviting for others to participate in the dialog.

Yeah, good idea.  I have several new questions / issues I'll bring up in the
near future -- one at a time -- unless you or someone else does so first.

Andrew