
Facade discussion



This is a response to Andrew's description of his approach for the Facade 
project. It's been a while, so I'm going to quote most of the original 
message.

> This is an area where I would like to brainstorm with the minds on this
> mailing list -- how could we find a middle ground approach 
[between open-ended plot driven by autonomous character behavior, 
and a pre-planned plot controlled by a story manager -- WF].   
> For our Summer
> Tale discussion, it would be interesting to discuss a variety of approaches.
> (Walt, you mentioned something about prose data structures in a previous
> post?)

Well, that phrase makes it sound like I've developed a way to turn prose into 
a convenient struct that can be manipulated by a few lines of C code. I can't 
claim anything so useful or interesting. Sorry if I was misleading. What I'm 
working on is more like a tagged scripting format for prose with variable 
elements. You've probably seen a hundred of 'em. What makes it analogous to a 
data structure is the type-schematization of prose segments, which allows the 
variable insertions in tagged segments to be -- besides conventional variable 
nouns, phrases, etc. -- entire other segments. More on that below.
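
To make this concrete, here's a toy sketch in Python of the kind of 
recursive expansion I mean (segment names and contents are invented for 
illustration, not from my actual format). The point is only that a slot can 
resolve to an entire other segment, not just to a word or phrase:

    import random, re

    # Each segment type has one or more prose variants; a {slot} may name
    # an ordinary variable or another segment type.
    SEGMENTS = {
        "opening": ["{greeting} {weather_remark}"],
        "greeting": ["Hello, {name}.", "{name}! Good to see you."],
        "weather_remark": ["Lovely day, isn't it?", "Dreadful out, no?"],
    }

    def expand(seg_type, bindings):
        template = random.choice(SEGMENTS[seg_type])
        def fill(m):
            slot = m.group(1)
            if slot in SEGMENTS:        # slot holds an entire segment
                return expand(slot, bindings)
            return bindings[slot]       # conventional variable insertion
        return re.sub(r"\{(\w+)\}", fill, template)

    print(expand("opening", {"name": "Mary"}))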

> To get the discussion going, I'll describe one approach, which my
> collaborator Michael and I are experimenting with for Façade.  I apologize
> if this seems a bit long, but there's no easy way to describe it in a few
> sentences.
> 
> As I was saying earlier, working on behavior-based characters led to the
> desire to
> build a story manager that operates using authored behaviors.   This kind of
> architecture would have the flexibility to "do anything at any time", to
> potentially switch to any of the authored story pieces at any time, instead
> of being stuck on a fixed railroad track that can only maneuver by following
> the current track to the next branch.  The system can be very reactive, able
> to respond immediately to user interaction in a variety of ways, to give the
> user a high degree of true agency (a characteristic sorely lacking in most
> story systems to date).
> 
> Of course, along with this kind of flexibility, we want the user to have (as
> close as possible) the experience of story happening -- that is, a narrative
> that is clearly progressing in an interesting way, where events are
> happening at a good pace, characters are changing, ideas and themes are
> being explored.
> 
> Our plan is to cleverly author the story manager's behaviors such that they
> can immediately react to the moment-by-moment user's interactions as they
> happen (which are what behaviors are traditionally good at), while
> simultaneously deliberately trying to make events happen and arc story
> values over time (ie, progress and perform a story).
> 
> By "story manager behaviors" we don't mean character behaviors per se, we
> mean behaviors (reactive-plan programs) that are controlling the flow of the
> story.  We plan to use the organizing principle of story beats within these
> behaviors, referred to in a previous email.  (For a detailed description of
> what behaviors are and how they work, see the CMU Oz group's papers on the
> language Hap they designed; there's a link to it on my webpage.)
> 
> Architecturally, the story manager would be a single agent, not a collection
> of independent autonomous agents.  The metaphor of "conversation" between
> user and system applies here: the single human user is interacting
> one-on-one with the story manager.  The story manager "expresses itself" by
> controlling the computer characters.  (Each computer-character could still
> be their own agent, but they are regularly given sets of character behaviors
> to operate with by the story manager; in this way the characters are weakly
> autonomous.  The story manager itself is the only strongly autonomous agent
> in the system.)
> 
> A big advantage to the approach of using behaviors is that when an author
> adds a new behavior to the system, the system automatically starts trying to
> use it, anywhere and everywhere it can.  As an author you can slowly build
> up the story over time by adding more and more behaviors to the system.  You
> could find the system ends up sequencing the behaviors in unexpected ways
> (which can be good and bad).
> 
> The story manager would respond to the user's interactions in these ways:
> - at any moment, the user's interaction would influence the story manager's
> decision of which behaviors to choose among to perform; the more behaviors
> have been authored, the more possibilities there are.  Each behavior would
> have preconditions and effects, to know when it is appropriate to choose
> them.
> - if we are already in the middle of a behavior, the user's interaction
> could influence / modulate HOW the behavior is performed, since a behavior
> could have a variety of ways to play itself out
> - if the user's interaction demands we switch to another behavior, we could
> try a bit of "recovery" where we try to stay within the current behavior,
> before switching away, in an attempt to not thrash all over the place.
> - if we need to switch to a new behavior, do so, but by inserting a
> transition behavior of some sort, and remember where we had left off in the
> previous behavior, in case it is possible to return to it later, in an
> attempt for the story to be efficient.
> 
> I think the hard part is:  it is up to the author to design and write these
> behaviors so that it will "make sense" to switch among them, that each has
> the capability to progress the story, no matter which is running.  If one
> behavior requires certain things to have already happened so far, these are
> encoded in the behavior's preconditions, which limits when the behavior can
> be used.  In this way you can enforce a partial order among all the
> behaviors.  The tighter the ordering you author, the closer to "hard wired"
> or "pre-planned" you get.
> 
> This approach still leaves the author with the burden to generate all of the
> events and endings, in the form of story behaviors.  To have a story feel
> "open ended" in the sense that it could end in a variety of ways, still
> requires the human to author these.  Here, what the AI does do for you is to
> _perform_ the story in a variety of flexible and reactive ways.  In
> cinema-speak, the AI is more of a director and editor than a writer.
> 
> Walt, would you call this a middle ground between the open-ended
> character-behavior simulation approach and the "pre-planned" approach you
> described?

Yes, I would. But wow, what a large stretch of that treacherous middle ground 
you're aiming to conquer! This is impressive and scary.

I recognize the concepts of the components of the system you're describing 
from the Oz model, as you pointed out. But I'm having a lot of difficulty 
abstracting the examples from that work to the same model applied to 
storytelling, quite a "scale up" in complexity. For example, the idea of 
"recovery" when switching from one behavior to another is easy to grasp in 
the domain of Woggles. It's easy to understand why, and how, one would use a 
recovery scheme to prevent a Woggle from overtly switching from a "bounce to 
destination" behavior to a "go to sleep" behavior while still in mid-bounce, 
even if the executive decision for that character to switch happens to be 
made in mid-bounce. I can also understand the "why" of doing 
the same thing, analogously, with story behaviors. But how to actually do 
that, given the real-world specificity of story events... yikes! Aren't you 
going to end up requiring a hardwired recovery scheme for each possible 
combination of interrupted behavior and interrupting event or new behavior... 
perhaps even more than one, depending on where the interruption occurs 
chronologically? But that, of course, would clobber the encapsulation of 
behaviors.

And that's just one detail.

Okay, so where have you hidden the "thenMiracleOccurs" calls in your system? 
Obviously (since you haven't actually tried to hide it) in the behaviors. 
Possibly in the beats as well, but probably not. If I'm interpreting this at 
all correctly, I see a beat as a unit of performance of a story, a unit of 
action on about the same scale as two consecutive Erasmatron verbs, such as 
"John makes a pass at Mary, and Mary rejects John." Unlike tron verbs, beats 
have no decision-making procedures of their own; all of a beat's variables 
(who makes the pass at whom, what type of pass, how hard the slap, etc.) 
would be assigned by the behavior that invokes the beat. The mechanisms 
internal to a beat, if any, would be focused on issues involved in performing 
the action for the audience (so a beat might contain very sophisticated 
animation scripts or algorithms, for example).
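
If that reading is right, a beat could amount to little more than a passive 
record plus its performance machinery. A minimal sketch (the field names are 
my own invention, not Facade's):

    from dataclasses import dataclass

    @dataclass
    class Beat:
        action: str   # e.g. "make_pass" or "reject"
        roles: dict   # who acts on whom -- assigned by the behavior
        params: dict  # type of pass, hardness of slap, etc.

        def perform(self):
            # Stand-in for the beat's real job: performing the action
            # for the audience (animation, dialogue, staging).
            print(self.roles["actor"], self.action,
                  self.roles["target"], self.params)

    # The invoking behavior, not the beat, makes all the decisions:
    Beat("make_pass", {"actor": "John", "target": "Mary"},
         {"style": "clumsy"}).perform()
    Beat("reject", {"actor": "Mary", "target": "John"},
         {"severity": "hard slap"}).perform()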

On the other boundary we have the story manager. I imagine that the story 
manager also fits the Oz schema of hierarchical intelligence, so you're not 
trying to slip a miraculous storytelling genius in here. In other words, the 
story manager has no real-world common sense knowledge about the world the 
story takes place in or the events going on in it. The story manager makes its 
decisions based on story-arc-level abstractions (such as "the level of 
conflict is low, raise it"), and its actions are limited to assigning 
behaviors based on characteristics tagged to the behaviors (to raise the 
level of conflict, choose a behavior that is permissible in the current state 
and which has a high conflict attribute, and put it into play). I'm assuming 
that the story manager is, in your overall structure, really just another 
behavior (you haven't emphasized the hierarchical nature of behaviors but 
that's central to the Oz paradigm). So its capabilities are not really 
limited to what I've described. There's no reason the story manager couldn't 
decree beats directly, perhaps for key events of the story (the tidal wave, 
the shark attack...) and it could embody sufficient causality rules to 
control such events adequately. But my point is that the story manager does 
not have any large body of "universal" or "general-purpose" knowledge to draw 
on. It could only decide something like "Sam needs a comeuppance at this 
point, which could be achieved by having him caught in an undertow and 
requiring rescue by his worst enemy" if such a possible turn of events were 
specifically hardwired into it.
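
In code, I imagine the story manager's whole vocabulary looking something 
like this sketch (behavior names, tags, and thresholds are all invented):

    # The manager never inspects story content; it only matches tagged
    # preconditions against world state and reads authored attributes.
    BEHAVIORS = [
        {"name": "quiet_dinner", "conflict": 0.1,
         "precond": lambda s: True},
        {"name": "lovers_quarrel", "conflict": 0.7,
         "precond": lambda s: s["romance_active"]},
        {"name": "shark_attack", "conflict": 0.9,
         "precond": lambda s: s["at_beach"]},
    ]

    def raise_conflict(state):
        """The level of conflict is low, raise it: pick the
        permissible behavior with the highest conflict tag."""
        usable = [b for b in BEHAVIORS if b["precond"](state)]
        return max(usable, key=lambda b: b["conflict"])["name"]

    print(raise_conflict({"romance_active": True, "at_beach": False}))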

So what does that leave for the behaviors in general? Well, basically, 
everything. Here are some thoughts about behaviors.

Behaviors would seem to be a lot like scripted scenes, in the following 
respects:
- They are data-intensive. All the story-ness as well as the real-world 
logical consistency of a sequence of events is embodied in the behavior, not 
in the environment (story manager; beats) in which the behavior exists.
- In particular, a behavior has a pre-planned outcome or a set of possible 
pre-planned outcomes "in mind" at all times. In other words, the author's 
authorial intelligence applied to creating the behavior provides what it's 
most needed to provide: resolutions, endings... the "hard part" of 
storytelling. If, that is, the outcomes are associated with endings and 
resolutions rather than just state changes designed to set up the next 
behavior. This bears on the issue of chained versus hierarchical behavior 
organization that I'll return to.

How does a behavior differ from a scripted scene?
- They could be interactive (or more precisely, reactive). I envision a range 
of different types and degrees of reactivity possible within the basic 
definition of a behavior (a behavior is a machine, after all). So you could 
have behaviors like branching story trees, or behaviors that branch based on 
more sophisticated conditions like variable thresholds. There might even be 
supervisory behaviors that, by virtue of having little "story" in them, do 
not need to branch at all but instead vary their effects in an analog way. 
However, it appears that your system does not _require_ behaviors to have 
reactive variations. In that case, a behavior's only way of reacting to 
choices and events would be to continue or not continue; this would simplify 
some things considerably, though I don't think it makes a fundamental 
difference in most of the issues discussed below.
- They are not chronologically contiguous. Unlike a scripted scene, they 
don't take continuous control of the stage from start to finish. A behavior 
doesn't have to be acting to be active. An active behavior should be able to 
perform a few events, idle for a while (perhaps while waiting for one of 
several possible state configurations to occur), then perform a few more 
events. This is necessary for many simultaneous active behaviors to be able 
to share the stage.
- They are possibly hierarchical. A typical behavior might do a mixture of 
two things: controlling beats directly, and passing control to subordinate 
behaviors that it instantiates. Like I said, you haven't emphasized the 
hierarchical nature of behaviors, perhaps it's not a central part of your 
plan this time. But I think it's key.

It's all about time and how you slice it. Consider one extreme, in which 
there is no hierarchy of behaviors. There's only the story manager and a 
single layer of behaviors. How, then, does one behavior relate to another? 
Through state changes. One behavior changes the state, the story manager 
searches for another behavior whose preconditions match the new state. Since 
multiple behaviors can exist simultaneously, an active behavior might find 
that the world state has changed to one that cannot sustain it. For example, 
a behavior that's a romance between two characters must terminate if one of 
its characters has died in the big bank robbery shoot-out. The story manager 
deals with such recoveries. 

To go even further with that extreme, imagine if only one behavior were ever 
allowed to be active at a time. Then except for some architectural details 
you've essentially got Dramaton (if behaviors are relatively long complex 
scenes) or Erasmatron (if behaviors are individual actions).

Now, you've mentioned that the story manager should also be able to remember 
where a behavior left off, in case it's possible to return to it later. A 
fine idea. The character involved in a romance doesn't die, but he does go 
off to fight in a war. The romance behavior isn't going to go anywhere, but 
you want to be able to come back to it when Johnny comes marching home. The 
story manager could do that, but why not just leave the romance behavior 
itself active? In other words, shift the architecture so that the romance 
behavior itself contains the rules for when it can be returned to. Now you've got 
something like a hierarchical relationship between behaviors. To make it 
complete, let it be the romance behavior itself that decides to send the 
character off to war in the first place. Instead of just reacting to 
interruptions from "outside," the behavior can now interrupt itself, in order 
to more deeply develop its own internal story.
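
A toy sketch of what I mean (everything invented): the romance behavior both 
launches the interruption and keeps its own rule for when to resume.

    class Romance:
        """A behavior that interrupts itself and knows how to resume."""
        def __init__(self, lover):
            self.lover = lover
            self.resume_when = None    # our own return-to-me rule

        def step(self, state, spawn):
            if self.resume_when:                 # suspended, but watching
                if self.resume_when(state):
                    self.resume_when = None
                    print(self.lover, "marches home; romance resumes")
                return
            # Deepen our own story by interrupting ourselves:
            spawn("war_service", self.lover)
            self.resume_when = lambda s: s.get("war_over")

    r = Romance("Johnny")
    state = {}
    r.step(state, lambda b, who: print(who, "sent into", b))
    r.step(state, None)       # still suspended; nothing happens
    state["war_over"] = True
    r.step(state, None)       # resume condition met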

The trick is schematizing those interruptions to maintain both 
logical/narrative continuity and flexibility, and to give authors the 
boundaries they need in order to work with them. If the character in the romance 
always goes off to war in the middle of it, then the going-off-to-war might 
as well just be built into the romance behavior itself. But suppose other 
behaviors were available that could result in the character going off to 
jail, or to minor league training camp, or to a substance abuse program, or 
on a business trip to Europe, or on a fishing voyage. Each offers roughly the 
same possible directions for further development: the character does not 
return; the character returns having fallen in love with somebody else; the 
character returns more smitten with the other than ever before. Other 
behaviors could similarly draw on the same set of "character goes away" 
behaviors, so efficiency need not suffer. Do you see what I'm getting at 
here? And this is a very crude example. Instead of a set of "character goes 
away" behaviors, it would be better to have a more generally useful set of 
"romance complication" behaviors. The goal is to get as general as you can 
while still being able to schematize for interchangeability.
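
In sketch form (all names invented), the interchangeability comes from every 
member of a schema category promising the same small set of outcomes:

    import random

    # Any "character goes away" behavior may be slotted into a romance,
    # because all of them resolve to one of the same outcome labels.
    GOES_AWAY = ["war", "jail", "training_camp", "rehab",
                 "business_trip", "fishing_voyage"]
    OUTCOMES = ["never_returns", "returns_loving_another",
                "returns_more_smitten"]

    def run_goes_away(character):
        kind = random.choice(GOES_AWAY)    # which member gets chosen
        outcome = random.choice(OUTCOMES)  # stand-in for the member's
                                           # own interactive resolution
        print(character, "leaves:", kind, "->", outcome)
        return outcome                     # all the caller needs to know

    run_goes_away("Johnny")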

At this point I've passed from attempting to recap and understand your 
schema, to describing mine. Let me take that a step further. Here's what 
you'd change to turn your system into something like mine.

The most important change is that every outcome of every behavior must be an 
ending -- that is, an author-crafted resolution (though not necessarily a 
successful or happy one) of the conflict, goal, or problem around which that 
behavior has been built. In other words, every behavior must now be a small 
story.

In my approach, it has to be a small interactive story. Let's keep it simple 
and make each behavior, at its core, a plain dumb branching story tree.
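
As a sketch (plot contents invented), the discipline is simply that every 
leaf of the tree is an authored resolution, never a bare state change:

    # A behavior as a plain dumb branching tree. Interior nodes map
    # audience choices to subtrees; every leaf is a crafted ending.
    BOMB_PLOT = {
        "search": {"found": "resolution: bomb disarmed, hero shaken",
                   "missed": "resolution: bomb explodes, hero blamed"},
        "evacuate": "resolution: building empty; bomb levels it anyway",
    }

    def play(node, choices):
        while isinstance(node, dict):
            node = node[choices.pop(0)]  # each choice picks a branch
        return node                      # a leaf is always a resolution

    print(play(BOMB_PLOT, ["search", "found"]))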

Once a behavior begins, it always remains active until it reaches a 
resolution. Nothing the audience does can alter that. If a bomb is set to go 
off at noon, then it will either go off or not go off (be disarmed, etc.) but 
there's no way it's going to just be forgotten about. The same is true of a 
romantic interest between two characters. The audience's choices can 
determine the outcome, but cannot determine whether or not there will be an 
outcome. This is the crux of my trick, a cheap and dirty way of preserving a 
semblance of narrative continuity in the face of interactivity.

If we allow for multiple behaviors simultaneously, then we have to allow for 
state changes from "outside" a behavior (caused by the effects of other 
behaviors) that complicate or obviate the behavior. We handle those by 
including rules within the behavior that branch to alternative paths or 
outcomes if the state goes "out of bounds." If a character in a romance 
behavior dies (if that's a possible state; in other words if the world allows 
deaths), there's an outcome in the romance behavior for that eventuality.
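
Sketched out (condition names invented), the out-of-bounds rules are just 
more authored branches, keyed on world state instead of audience choices:

    class RomanceBehavior:
        # Each external state change we can't survive maps to an
        # authored resolution, so the behavior never just evaporates.
        OUT_OF_BOUNDS = {
            "lover_dead": "resolution: the romance ends in mourning",
            "lover_jailed": "resolution: a renunciation scene",
        }

        def check_bounds(self, state):
            for condition, ending in self.OUT_OF_BOUNDS.items():
                if state.get(condition):
                    return ending   # resolve rather than vanish
            return None             # still in bounds; carry on

    print(RomanceBehavior().check_bounds({"lover_dead": True}))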

Now we can just get rid of the story manager, because the most important 
things a story manager does (suspend behaviors, recover from behavior to 
behavior, etc.) either don't need to be done or are handled by the behaviors 
themselves. (Sure, we still need global executive functionality for other 
purposes, but these functions are no longer directly concerned with 
storytelling rules).

But what puts behaviors into effect? We start with one or more active, and 
active behaviors instantiate others during the story at points where their 
own scripts instruct them to do so. Hierarchically, as I described before. In 
other words, the script implicit in a romance behavior might come to a point 
where it invokes another behavior, filling the new behavior's roles according 
to its own current roles and state. _Which_ new behavior it chooses can 
depend on state and/or on audience decisions, or perhaps just random 
selection within a schema category. (Audience decisions can also figure in at 
conventional branch points.) Typically the calling behavior will then await 
state changes caused by events of the new behavior (e.g. Johnny coming home 
from the war, or being killed in the war) before advancing its own progress 
any further, though other mechanisms are possible. There is no "horizontal" 
chaining of behaviors; that is, there is no point where a new behavior or 
scene must be chosen entirely on the basis of the current world state and/or 
story construction rules. (Consequently, no story manager is necessary to do 
the choosing.) Except for the initial top-level behavior(s), every new 
behavior arises in the context of another already in play.
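
Here's a toy sketch of that execution model (all invented), with the 
awaiting of sub-behavior outcomes elided to keep it short. Note that there's 
no manager doing any choosing; apart from the seed, every behavior enters 
play by being spawned from another:

    class Behavior:
        def __init__(self, name, script):
            self.name = name
            self.script = iter(script)  # mix of events and sub-behaviors

        def step(self, active):
            try:
                event = next(self.script)
            except StopIteration:
                return "resolved"       # only exit: reaching an ending
            if isinstance(event, Behavior):
                active.append(event)    # hierarchical instantiation
            else:
                print(self.name + ":", event)
            return "running"

    war = Behavior("war_service", ["Johnny ships out", "Johnny returns"])
    active = [Behavior("romance", ["lovers meet", war, "reunion scene"])]
    while active:
        for b in list(active):
            if b.step(active) == "resolved":
                active.remove(b)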

There are many ways to make this more complex and sophisticated. For example, 
we could do away with the type-schemas for behaviors and instead select them 
based on fitness relative to the current story state, the way Dramaton 
selects scenes. (That, of course, puts the story manager back in.) But the 
core concepts here are so simple that I'm writing a highly limited version as 
a text-only paper card game. Of course, that version has many drawbacks (the 
"cards" are actually full size pages to hold the amount of text and 
instructions required, and the scenes have to be written full of 
generalizations like "as you proceed toward your destination..." that would 
be easily taken care of by role variables in any computer version). Still, it 
helps in developing the idea and may soon help in demonstrating it.

So we seem to have many points of similarity, including:

- Author-intensive, data-intensive, and (if I'm interpreting correctly) 
story-like content units.
- Playback content tightly constrained by the data elements put in; not 
on-the-fly creation of original story, but rather interactive recombination 
of story elements.
- Systems designed specifically for a single story or at least a single style 
of story. To write a different style of story, you need not only different 
behaviors, but a different schema (in my case) or a different story manager 
(in yours).
- (Maybe) Hierarchical story structures.

We may also share some of the same difficulties, including:

- Authorial challenge. I know in my own case that writing story elements for 
my system is difficult. In fact, I want it to be difficult. When I arrive at 
a variation that appears to not be difficult to write, I change it. My 
reason? Simple. If you pump a tire pump and there's no resistance, it means 
that for some reason you're not getting compressed air into the tire. If I 
write an element and it's too easy, it means the process isn't demanding 
enough creative thought from the author. But just because the difficulty is 
necessary and appropriate doesn't mean it's not a problem. Who's going to 
write these things? Also, there's "good difficulty" and "bad difficulty." 
Creative thought that goes into puzzling over how to make the system work, 
like a clog in the air hose of the tire pump, adds resistance without helping 
to pressurize the tire.

- Forms of user interactivity. These designs seem to lend themselves more to 
explicit decision points than to at-will action-based participation. But the 
latter is not only more in vogue, it's probably artistically superior. 

Now, I know that Oz was action based, and you mentioned that immediate 
reactiveness is one of your goals. But this is where you're going to have the 
most difficulty. Suppose the player-character, given the freedom to do so, up 
and slugs another character. There's a chance that a current active behavior 
has rules for interpreting that action and orchestrating a response to it. 
But if all behaviors are story-rich sequences created by authors, then it 
seems that a behavioral response would only be available if the author 
planned for it. What if the action is completely senseless, or sensible but 
the author didn't plan for it? (In the former case, we could say that the 
audience deserves to have the action ignored or reacted to illogically 
because they're clearly not playing along. But the latter case is more 
problematic.) Or we could require the story manager to arrange a reaction, 
perhaps by triggering a behavior that handles "react to unprovoked punch." 
But that doesn't seem like what you had in mind for what behaviors should be. 
And even with such a behavior, there has to be a way for the implications of 
the action to be propagated to other parts of the ongoing story. Either the 
behaviors will have to be able to react autonomously to the state change, 
requiring behaviors to be carefully wired into some sort of global character 
state (including mental states) model, or the story manager would have to 
have rules (requiring common-sense knowledge) for figuring out the effect of 
an action on a given behavior.

(For a soap opera version of my method, I've designed an interface that's 
at-will but tightly constrained. The audience's interaction is limited to 
giving props, items that are currently significant to the active story 
elements, to characters. (The audience does not play a character role, but 
meddles in the story through this mechanism.) This tight constraint allows 
the system to reasonably interpret every possibility because they're limited 
in number. Every element, for each of its props, has rules or branches for 
the effects of giving the prop to any character who has a role in that plot. 
(Other characters are uninterested in that prop, at least until I add a "plot 
partisan" mechanism that tracks characters' awareness and attitudes about 
plots in which they don't have direct roles.) Since that's the only form of 
audience activity, the number of alternatives needed is comparable to the 
number needed for ordinary direct decision points. I think this works, but 
it's far from a general solution, and it's far from the goals you've stated 
for Facade.)
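
A sketch of how small that interaction surface is (the plot and props are 
invented for illustration):

    # The audience's only move is giving a prop to a character, so each
    # active element can enumerate a response for every legal pairing.
    PIZZA_PLOT = {
        "roles": {"Joe", "Mary"},
        "prop_rules": {
            ("pizza", "Joe"): "Joe laughs; the errand is now moot",
            ("pizza", "Mary"): "Mary asks why you're feeding her",
        },
    }

    def give(prop, character, element):
        if character not in element["roles"]:
            return character + " has no interest in the " + prop
        return element["prop_rules"].get(
            (prop, character), character + " shrugs at the " + prop)

    print(give("pizza", "Joe", PIZZA_PLOT))
    print(give("pizza", "Walt", PIZZA_PLOT))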

Perhaps contributing to my confusion on this point is what appears to be some 
ambivalence in your description of Facade as to whether behaviors are 
associated with characters or with units of dramatic action involving 
multiple characters. In the parts of the Oz work I'm familiar with, behaviors 
were instantiated in a character. Therefore behaviors had a lot to do with 
reacting to stimuli from the environment (particularly, other characters), 
and one could conceptualize a hierarchy of behaviors as the long-term mental 
state (current emotions, but also plans, wishes, habits) of a character. This 
makes the characters very responsive and realistic. But there wasn't much 
capability for organizing individual behavior into dramatic sequences; story 
had to "emerge" from free-form character interaction. On the other hand, your 
description of behaviors, particularly the process of authoring behaviors, 
makes them sound more akin to (an evolved form of) scripted scenes, as I 
mentioned above, involving multiple characters. If that's the case, then 
responsiveness becomes a problem. Why? Because authored elements don't come 
equipped with the causal rules by which they were generated. That's their 
virtue; it allows them to embody authorial intelligence, which presumably 
involves causal rules too subtle and complex to be explicated. But it also 
makes it impossible to automate their variation. The ability to react to 
at-will audience choices cannot be taken for granted, despite how 
successfully reactive the earlier character-behavior-based systems were. This 
is true even if behaviors do not have reactive variation; that is, if the 
only decision to be made is whether the behavior continues to run or not.

The divide between character behaviors and scene-like story behaviors is a 
huge one. It's not just an architectural difference. For an author creating 
behaviors, for example, the processes of creating behaviors for characters 
which by their interactions will get involved in stories, and of creating 
storyteller (or puppeteer) behaviors for generating or performing dramatic 
scenes, are one hundred percent different. On a more fundamental level, 
character behaviors would ground you in the "P = 1.0" (simulator) attractor 
and puppeteer behaviors would ground you in the "P = 0.0" (linear story) 
attractor. (Even in the middle ground, you need supply lines back to one or 
the other, or both.) So which is the dominant model for Facade?

Perhaps I can predict a possible answer: both! In other words, why not take 
advantage of the aforementioned hierarchical nature of the behavior model? 
Combine low-level behaviors instantiated in characters with higher level 
behaviors that generate or perform narratively contiguous scenes by 
controlling the low-level behaviors. Puppeteering by proxy. If a puppeteer 
behavior wants Joe to go to the pizza place, it doesn't walk him to the pizza 
place, it instantiates a behavior in Joe that directs him to go to the pizza 
place. So he'll go, but the rest of his hierarchy of behaviors is still 
present. If I punch him on the way there he'll still be able to react 
appropriately.
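
In sketch form (everything invented), the proxy relationship is just a push 
onto the character's own behavior stack, leaving the rest of it intact:

    class Character:
        def __init__(self, name):
            self.name = name
            self.behaviors = ["wander", "react_to_threats"]  # ambient

        def direct(self, behavior):
            self.behaviors.insert(0, behavior)  # the puppeteer's proxy

        def on_event(self, event):
            if event == "punched":   # low-level reactivity still works
                print(self.name, "reacts to the punch appropriately")

    joe = Character("Joe")
    joe.direct("go_to_pizza_place")  # issued by a puppeteer behavior
    joe.on_event("punched")          # the rest of his hierarchy is intact
    print(joe.behaviors)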

Does this succeed in claiming that middle ground? Not quite. The chasm 
between the two attractors still exists. It's now located between your 
higher-level and lower-level behaviors. Your success depends on how well you 
can span that gulf. The key issue is your information flow from lower level 
behaviors back to the higher level ones, and how well the higher level 
behaviors or the story manager can react to that information.

If I punch Joe on his way to the pizza place, the reaction can't be handled 
by low-level behaviors alone. A Woggle could appropriately punch back, or run 
away, or whatever, and that would be the end of it (except for some lingering 
resentment). But in a story world the implications have to go farther than 
that. Suppose Joe was going to the pizza place to buy a pizza for _me_. It 
would be rather absurd if after I sucker-punch him, he runs away, mops up the 
blood, finds an alternate route avoiding me, and then, interruption resolved, 
resumes his earlier behavior of going to the pizza place to get me a pizza! 
Somehow the story behavior or the story manager has to find out that 
circumstances have changed, and react to the change.

Okay, this example's not too hard. This particular change could be mediated 
through the aforementioned global character mental state model (I guess it's 
needed after all). When I punch Joe, Joe goes from liking me to disliking me. 
Since Joe liking me is a mandatory state condition for the zany pizza plot 
behavior (of which Joe getting the pizza is one beat), when that condition 
changes, the story manager knows it has to suspend that behavior.
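
A sketch of that mediation (names and numbers invented): the punch writes to 
a global affinity model, and the behavior's mandatory condition notices.

    affinity = {("Joe", "player"): 0.6}  # global character-state model

    def punch(victim, attacker):
        affinity[(victim, attacker)] = -0.8   # immediate consequence

    # Mandatory state condition tagged onto the zany pizza plot behavior:
    pizza_plot_ok = lambda: affinity[("Joe", "player")] > 0

    punch("Joe", "player")
    if not pizza_plot_ok():
        print("story manager: suspend the zany pizza plot")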

Okay, let's try a harder one. On the way to buy me a pizza, Joe sees or 
learns that my sister has been in an accident and taken to the hospital. What 
part of the system "knows" or "figures out" that he's not going to just 
continue on to the pizza place, that the pizza plot behavior has to be called 
off? The states of all characters involved in the pizza plot are unchanged. 
So the change has to be mediated through whatever behavior was responsible 
for my sister's accident. This is possible. After all, that accident didn't 
just happen at random, not in this type of world. But you can see the 
complexity mounting up. Presumably there is some general rule that 
higher-priority story behaviors cause lower ones to be called off when their 
character roles overlap. As soon as Joe gets subsumed into a role in the 
accident plot behavior (and how does that happen...? Or doesn't it happen, in 
which case Joe walks right by the accident scene and does nothing?), it takes 
priority and the pizza plot behavior is canceled or suspended.

Now for the really difficult example: While Joe is on the way to buy me a 
pizza, I meet up with him and give him a pizza. If this were an AI planning 
system, there would be no problem. Joe's plan would adapt to the fact that 
his current super-goal (obtain pizza) was suddenly and unexpectedly fulfilled 
and he can skip directly to the next goal (give me the pizza) which will be 
easy because I'm standing right there. But this isn't an AI planning system. 
Furthermore there's no state change (at least not one the author of the 
behavior is likely to have anticipated and provided a story-level response 
for), nor is there any other interrupting behavior involved. So it appears 
inevitable that Joe will thank me for the pizza (a normal low-level behavior 
mediating a normal response to having been given a moderately desirable item 
by a non-romantic friend), and then continue on to the pizza place to buy me 
a pizza. The problem is that in this case determining the effect of my action 
on the story seems to require common sense knowledge -- in this case, that 
someone who has a pizza probably doesn't need another one.

Oh well, no system is perfect. But such examples start arising in droves when 
the audience is free to talk to characters, even if through a limited dialog 
choice mechanism. You need many ambient low-level character behaviors to 
handle immediate reactions realistically. If you allow the audience to say 
and do significant and emotionally charged things to characters, then your 
web of reactive behaviors might end up looking a lot like Erasmatron. If 
audience utterances are more constrained, then fewer, more specialized 
reaction behaviors would suffice (analogous to the "react to being punched" 
behavior, these might be things like "react to a comment about your personal 
appearance").  Either way, the key issue is, again, the information flow from 
these low-level behaviors to the story and story manager behavior levels.

Curtailing at-will audience actions, of course, eliminates this problem, at a 
rather severe cost. That's the approach I'm forced to take, at least in the 
short term.

-----

Of course, I might be analyzing a fantasy of what I think you meant, rather 
than your actual approach. Wouldn't be the first time. So here are my 
questions and requests for clarification:

1. Behaviors: character-centric, puppeteer-like, scripted-scene-like, or a 
hierarchical combination?

2. Behaviors: do they "branch" or otherwise contain alternatives reactive to 
events, or is their reaction to events limited to "continue or not continue"?

3. I don't really understand how beats fit in... that is, how does the 
concept of beats help to organize the system or the authoring process?

4. User interaction model. Decision points, at-will, or state-constrained 
at-will (e.g. you can only punch someone who's recognized as a current enemy 
of yours)?

One more comment: Clearly, it's difficult to carry on a dialog of ten-page 
papers. I wanted to lay out my thoughts and answer the questions from your 
prior posts, but now that we've both made that start, perhaps we could break 
things down and discuss one or two points at a time. This might also make it 
more inviting for others to participate in the dialog.

- Walt