scope of interest

Let’s say you’re playing a game in which the ref is framing a scene. Not a huge stretch here since this is basically all of traditional RPG gaming and a lot of the rest of it. I think what follows will apply to other patterns of play as well, but let’s stick to what we know here. So you (the ref) are framing a scene.

What do you want? You want the players to engage with something, make choices, and consequently cause the wheels of the system to turn and have that machine generate whatever it generates. That’s the reason we buy games, right? We are buying a machine and it’s up to us to get it started and keep it moving. The beginning of a scene is how the engine gets started.

How do you do that? Usually you want to get to an event. Now you might start with casual discussion between characters and NPCs but this will usually stall in banalities unless something external HAPPENS. An event. As ref, probably your most useful input to the game is to craft events. Ad libbing based on the results of those events is maybe the next. But it’s up to you to push the starter on this engine. The rest of the players shoulder a substantial burden as well: to engage with it. And, in the best of all possible games, to start stirring up their own shit, their own events, to feed the engine. But as ref, even if you don’t see it as your responsibility to start shit (as in, say, a pure sandbox where you are mostly reacting), it is still a tool in your kit.

In my games I expect the ref to kick things off.

In thinking about this, about events that define scenes, I find three “scopes of engagement” for the players and their characters. Each is very different, has different results, and different values at different times. I think that recognizing these three scopes and understanding them lets us use them deliberately rather than instinctively or accidentally and that has to be a good thing.

Uninvested

This is an event in which the players have no initial investment. It happens to a place or person or thing that we haven’t discussed yet and so the players cannot have invented an investment in it. That’s not to say it won’t be affecting, in fact we hope it will! But since nothing about the event has any relevance to the player (not the character! We may find that the character is incredibly invested, but that’s super important: we are going to find this out) it does not require (and does not benefit from) any kind of decision tree.

The event happens and the players react. The event is a done deal, a fait accompli. It is an instigator.

Since we’re all big fucking nerds, let’s use Star Wars for an example.

Han Solo jumps into Alderaan system and it’s nothing but rubble. That’s the event. The Empire has destroyed an entire planet. Before this event Han’s player knew nothing about Alderaan — we hadn’t discussed it, it’s not on their character sheet. Their introduction to Alderaan is its destruction. Consequently the player cannot be invested in it yet. Consequently we don’t need a big decision tree leading up to it. We present it.

What happens next in the scene is the reaction to the event. Facts have been established about the Empire’s ruthlessness, their evil. Players will want to investigate, maybe find survivors, maybe punish the wicked. At this scope of engagement, the uninvested event, we generate investment. All of the scene is about reaction. This is a self-guided missile, a fire-and-forget tool for the ref. Kick it off and ad lib against the player reactions.

Invested

Here we have an event that will affect something the players are invested in though not, critically, their character. We have already somehow established investment through backstory, prior play, mechanical elements, or some other method. We know about the thing that will be threatened by the event and we already care about it.

As referee you have carefully chosen this event to threaten something players are invested in. You have deliberately selected this scope for the scene.

When the players are invested we want them to be able to change the apparent course of events and consequently there must be decision points built into the scene: when you threaten something players are invested in, they must be able to act to affect the outcome. That’s the whole reason you chose this scope. So as ref, don’t get too invested in a particular outcome. You kicked the hornet’s nest and your plans get what they deserve: player agency.

Star Wars again suits me for illustration.

Princess Leia is threatened by assorted villains on the Death Star: cough up the rebel info or we destroy your homeworld! Well, shit, Leia’s extensive backstory notes are full of info about Alderaan! Her first girlfriend is there, her prized record collection, her family, her friends. It’s all in the backstory. Of course you read it, that’s why you’re threatening to blow it up!

Leia’s player is invested. They are motivated to stop this. As ref, this is the hinge of your scene! Betray everything you believe in and we’ll keep your planet safe; otherwise it’s plasma. A moral dilemma (and this is the scope in which they thrive) — betray your most earnestly held beliefs or save your family, your friends, and people you don’t even know? A decision point. Not a chain of them, this isn’t suddenly positional combat on a grid, but at least one.

Leia decides to give the information but lie. The baddies destroy Alderaan anyway. I guess she should have put more points in SOCIAL but maybe when she levels up the player can think about that. In the meantime, angst, betrayal, and further investment in something that matters (the course of the narrative) at the expense of something that matters less (backstory). I use expense deliberately: backstory is a currency. We use it to buy things. If we don’t spend it, it’s not useful. Spend backstory.

Affected

At this scope characters are directly threatened. We don’t care about investment because we are going to be in a situation where they have to act: the bad thing is happening to them now. This is the easiest way to engage the system, but none of these scopes are “best”! They do totally different things. This one is the easiest and the most mechanical, but it does not always provide the most (or even very much) change within the story.

This is because it is defined by multiple, perhaps many, decision points that are focused solely on the event and not the story arc. We are zooming in, blow by blow, making choices that are critical in the moment (I draw my knife!) but irrelevant from a larger scale. Ultimately there is still only one hinge here — what is the end state when the smoke clears — and a lot of decisions. It’s a lot of system engagement for comparatively little story change.

But! But we’re here to engage the system. Not better. Not worse. Different. We play the game at a minor expense to story (per unit time).

Star Wars fails us here, at least in the Alderaan scene, so let’s look at a character that never got mentioned: Planetary Defense Captain Olberad Pinch! While everyone else is wringing their hands or waiting for fireworks, Olberad Pinch has a problem with multiple decision points! Now we all know they failed utterly, but look at the expenditure in table time to get there. And it was very important and interesting for Pinch’s player.

Detection. A moon-sized warship enters the Alderaan system! What do Planetary Defenses do? That’s in Pinch’s capable tentacles. They investigate, gather information, determine the next course of action. Maybe send ships — maybe Pinch is on one and their story ends in a lopsided dogfight! Maybe they escape!

Action. The Death Star is determined to have planet-destroying weapons and is powering up! Did you get spies aboard? Was Pinch one of them? What about the planetary railguns? The local fighter swarm? Sure, all of these things obviously failed, but there are one or more detailed, system-engaging scenes here. In game time, this space, which is largely unseen in the movie, could be multiple sessions, maybe the bulk of a month’s play. This is the nature of the Affected scope! It’s about your character, not just something you like! You care this much!

Climax! The Death Star is powering up! If you’re not in a position to stop it maybe you can escape? Evade TIE fighters in your shuttle just in time? With who? Which eight people did you select? And where are you going now? Again detail, lots of table time, all to save your ass.

And so

Those are the three scopes of engagement I can think of for a scene. Each requires a different level of planning or ad libbing from the ref. Each has different expectations about the players and uses their character sheets differently. Each has a place, makes different things happen. If you over-use one habitually, think about the others. Think about ways you can fabricate investment with uninvested scenes. Think about ways you can engage the system by explicitly threatening characters. Think about ways you can make a scene-staging event interesting by picking on investments the player has declared right there on the character sheet. (Incidentally, this is why the lonely loner backstory will always be the most useless: if the character cares about nothing then a third of the tools are obviated. If you take anything away from this as a player it should be that the more your character clearly cares about things, the more interesting things can happen to them.)

mystical security

This is something that stuck in my head while at work today.

WARNING: NOT NECESSARILY ABOUT GAMES

The general case

Talents that are new to humanity go through four phases. Well, on different axes they go through all kinds of phases, but there’s one progression I’m interested in today.

Mysticism. At first there are few people with the talent and it is largely unexamined. Even the practitioners don’t really know how they do what they do. They have talent and inspiration and they seem to be effective. There are individual heroes and we tolerate a lot of bullshit because there’s not much out there but heroes at this stage. The word “genius” gets thrown around a lot.

Organized Mysticism. Once our mystics recognize that they have something special they organize. They find other mystics and grant them access to the organization. They deny access to those that don’t have it. This may or may not be literally organized, but there’s at least a social aggregation.

Investigation. At some point people realize that there can’t be anything magical or purely intuitive about this. There must be a way that people with the talent do what they do. Something we can quantify and proceduralize. This requires an honest and rigorous analysis of the talent and the talented.

Engineering. Once the talent is quantified we can teach it to others. No longer do we rely on the intuitive talent of individuals nor (in some cases worse) the accreditation of an individual by a mystic cabal. It can be taught and it can be tested and it can be reproduced. Anyone who wants this talent can have it.

One problem that arises is that during the Organized Mysticism phase there will be a lot of resistance to investigation. There is significant pressure to remain mystical!

First it’s a lot less work because people can only check your results and not your process. And your results don’t have to be all that good to be good enough — just a little better than a random guess. In reality you don’t even need to be that good if your successes are spectacular enough or the failures of those who don’t use your mystic organization are publicized properly.

Second it’s lucrative. You control access to the talent, so you can price it however you like. And then you also control membership to the Mystic Cabal and if your outcomes aren’t all that controlled, maybe you just want to sell some memberships and make a packet that way. This may or may not happen but the pressure is there and the controls are absent.

And investigation is expensive and has no immediate payoff. It’s an academic exercise, one done for the love of the knowledge. It’s a future-value endeavour and one that may or may not pay off. I mean, we might discover that the talent doesn’t actually exist, and then you are stuck at the Organized Mysticism stage and you are discredited. The value in self-examination is low.

And honestly if you have an amazing intuitive talent do you really want to be surrounded next year by people — just anyone really — doing what you do? That’s bound to bring down salaries.

So getting out of the Organized Mysticism phase is hard. It’s an ethical move. It should be the next step for any mystics who honestly believe that their talent is both valuable (to humanity — being valuable to yourself is actually a negative motivator here) and real. Resistance to investigation is suspicious.

The specific case

In the standard way of doing security risk assessments there is this idea of a risk calculation matrix, in which you cross index the impact of an event with the likelihood of an event to determine just how bad a threat is and therefore how much you should spend to mitigate it. At its root this is a good idea — it comes from safety analysis, after all, which is a time honoured science.

However, what we do here in co-opting this mechanism for security is not science, and it’s very much to our advantage as “experts” (especially certified experts) for it not to become a science. As long as it’s an art we don’t have to do much real work and at the same time our job seems like it’s a lot more clever than it is.

In a safety case, since we are dealing with an event tree that triggers on equipment failure (that is, on mean time to fail numbers — published numbers) rather than malicious activity, that “frequency” or even less credibly “probability” column is an actual number you get from a manufacturer. My fault tree shows that if component A and component B fail simultaneously then I cannot guarantee the system is safe. A and B both have published mean time between failure numbers (which are both measured and very conservative). The probability column here is just arithmetic.
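That arithmetic really is just arithmetic. A minimal sketch, assuming a constant failure rate (exponential model) and independent components — the MTBF figures and the one-year interval here are invented for illustration, not taken from any real datasheet:

```python
import math

def p_fail(mtbf_hours: float, interval_hours: float) -> float:
    """Probability a component fails at least once within the interval,
    assuming a constant failure rate (exponential model)."""
    return 1.0 - math.exp(-interval_hours / mtbf_hours)

# Hypothetical published MTBF figures for components A and B,
# assessed over one year of operation (8760 hours).
p_a = p_fail(mtbf_hours=50_000, interval_hours=8_760)
p_b = p_fail(mtbf_hours=120_000, interval_hours=8_760)

# Independent simultaneous failure of A and B: the probabilities multiply.
p_unsafe = p_a * p_b
```

The point is that every number feeding the probability column is either published by a manufacturer or derived from something published — nothing in the calculation requires a guess.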

In a security case that probability column is a Wild Assed Guess. We cloak it in two things: our credentialed “expertise” and a refusal to assert real numbers (which would be unsupportable, since there are no real numbers) in favour of vague order-of-magnitude categories. A first glance at the problem might suggest that this is just inevitable — the probability of malicious activity is not quantifiable. To me, though, this should not imply that we simply trust the instinct of a credentialed expert to suddenly make it quantifiable, because the problem isn’t that it’s hard to know and that you need a lot of training and experience to estimate it. The problem is that it’s genuinely unknowable. That means when someone tells you they can quantify it, even vaguely, at an order-of-magnitude level, they are lying to you.

Unfortunately this lie is part of the training. You even get tested on it.

This makes us a (currently powerful) cabal of mystics. And the problem with a cabal of mystics being in charge is that first, they aren’t helping because they are not doing any science and second, as soon as someone starts doing some science they will entirely evaporate, exposed as charlatans. So naturally for those invested in the mysticism there will be some resistance to improving the situation.

The essence of science, setting aside for a moment the logical process (and that’s a big ask but it’s out of scope here) is measurement.

One axis of that risk calculation matrix is measured: the impact. Now it might be measured vaguely, but you can go down the list of items that qualify an event for an impact category and agree that the event belongs there. Someone could get seriously injured. Tick. Someone could get killed? Nope. Okay, it goes in the SIGNIFICANT column. It’s lightweight as measurement goes but it’s good enough and it’s mechanizable (and that’s a hallmark that separates engineers from mystics). You don’t need a vaguely defined expertise to be able to judge this. Anyone can do it if they understand the context and the concepts.
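That checklist walk is mechanizable in the most literal sense. A sketch — the category names and the criteria keys are invented stand-ins for whatever your assessment framework actually defines:

```python
def impact_category(event: dict) -> str:
    """Walk the qualifying criteria from worst to least severe and
    return the first impact category the event meets."""
    if event.get("possible_fatality"):
        return "SEVERE"
    if event.get("possible_serious_injury"):
        return "SIGNIFICANT"
    if event.get("possible_minor_injury") or event.get("production_loss"):
        return "MODERATE"
    return "MINOR"

# Someone could get seriously injured: tick. Killed: nope.
example = {"possible_serious_injury": True, "possible_fatality": False}
```

Running `impact_category(example)` lands in SIGNIFICANT exactly as the tick-box walk in the text does — once the criteria are written down, no expert judgment is required to apply them.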

So the question I keep banging my head against is the other axis: frequency or probability. And since this is both unmeasurable and also has vast error bars (presumably to somehow account for the unmeasurability, but honestly if it’s impossible to measure then the error bars should be infinite — an order of magnitude is just painting a broken fence) my opinion is that it should be discarded. Sure it’s familiar because of safety analysis, but they have an axis they can measure. This one is not measurable. It’s therefore the wrong axis.

A plausible (and at least estimable if not measurable) axis is cost to effect. How much does it cost to execute the attack? This has a number of advantages:

  • You can estimate it and you can back up your estimate with some logic. There’s a time component, a risk of incarceration, expertise, and some other factors. You can break it down and make an estimate that’s not entirely ad hoc and is better than an order of magnitude.
  • It reveals multiple mitigations when examined in detail.
  • It reveals information about the opposition. Actors with billions to spend might not be on your radar for policy reasons. Threats that can be realized for the cost of a cup of coffee cannot be ignored — you can hardly be said to be doing due diligence if attacking the system is that cheap.
  • It is easily re-estimated over time because you retain the logic by which you established the costs. When you re-do the assessment in a year’s time and a component that cost a million dollars now costs a hundred, the change in the threat is reflected automatically in the matrix. No new magic wand needs to be waved. It’s starting to feel sciencey.

A useful cost to attack estimate (and I have nothing against estimates, I just expect them to be defensible and quantified) would need some standardized elements. For example, I would want us to largely agree on what the cost is of a threat of imprisonment. If I wet my finger and wave it in the air I’m happy with a hundred grand per year (a fair salary) of likely incarceration times about 10% for chance of getting caught. If we’re not happy with the estimate we can do some research and find out what the chances of getting caught really are and what the sentencing is like. We might find out that I’m being way too expensive here.
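Those wet-finger numbers can at least be written down as a calculation someone can argue with. A sketch using the figures from the paragraph above (a hundred grand per year of likely sentence, 10% chance of getting caught) — the breakdown and every rate in it are assumptions to be debated, which is precisely the point:

```python
def incarceration_cost(expected_sentence_years: float,
                       annual_rate: float = 100_000,
                       p_caught: float = 0.10) -> float:
    """Expected cost to the attacker of the imprisonment risk:
    sentence length times a fair salary times chance of getting caught."""
    return expected_sentence_years * annual_rate * p_caught

def attack_cost(labour_hours: float, hourly_rate: float,
                equipment: float, expected_sentence_years: float) -> float:
    """Total cost to effect the attack: time, gear, and legal risk,
    itemized so each component can be re-estimated (or mitigated)
    separately when the assessment is revisited."""
    return (labour_hours * hourly_rate
            + equipment
            + incarceration_cost(expected_sentence_years))

# Hypothetical attack: two weeks of skilled labour, some kit,
# and a plausible five-year sentence if caught.
cost = attack_cost(labour_hours=80, hourly_rate=150,
                   equipment=20_000, expected_sentence_years=5)
```

Every term is visible, so when research says my 10% is too high, you change one number and the whole estimate updates — no new magic wand needs to be waved.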

This is a good sign though. When I am compelled to say “we ought to do some research” I am happily thinking that we are getting closer to a science. What credible research could you do on probability of attack? Where would you even begin? And what would its window of value be? Or its geographic dependencies? Or its dependencies on the type of business the customer does?

Because you want to break the cost to attack down into the various costs imposed on the attacker — their time, their risk, their equipment costs — you have grounds to undermine the attack with individual mitigations. What if a fast attack took many hours? What if you could substantially increase the chance of catching them? What if you could increase the chance of incarcerating them? Suddenly those legal burdens start looking like they could be doing you a favour: you make this attack less likely by increasing your ability to gather evidence and to work with law enforcement. Publish it. Make an actual case and win it. Your risk goes down. These are mitigations that are underexplored by the current model but that could do some genuine good for the entire landscape if taken seriously. Sadly they don’t imply flashy new technologies at fifty grand a crack. But I am not interested in selling you anything. I want your security to improve.

In most of our assessments the threat vector, the person attacking, is categorized fairly uselessly into “hacker” and “terrorist” and “criminal” and so on. But their motivation doesn’t actually tell you much. How much they are willing to spend, however, does tell you about them. It tells you plenty. If you have a policy that you are only interested in threats from below a government level (that is, you aren’t taking action to protect yourself from a hostile nation state, which is perfectly reasonable since it’s probably insurable: check your policies) then what you really want to decide is how much money an attacker has to spend before they qualify as a nation state. As organized crime? As industrial espionage? And so on. If you can put dollars to these categories then you can not only make intelligent decisions about mitigations but those decisions and the arguments behind them might even have some weight with your insurance adjuster. That’d be nice.
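Putting dollars to those categories makes the policy decision mechanical too. A sketch — every threshold here is a made-up number whose entire purpose is to be argued over and then written down:

```python
# Hypothetical budget thresholds, worst first. The labels carry no
# information by themselves; the dollar figures are the actual content.
TIERS = [
    (10_000_000, "nation state"),
    (1_000_000, "organized crime"),
    (100_000, "industrial espionage"),
    (0, "opportunist"),
]

def attacker_tier(attack_cost_dollars: float) -> str:
    """Classify an attacker by what the attack costs to mount,
    not by their presumed motivation."""
    for threshold, label in TIERS:
        if attack_cost_dollars >= threshold:
            return label
    return "opportunist"

def in_scope(attack_cost_dollars: float,
             policy_ceiling: float = 10_000_000) -> bool:
    """Example policy: threats priced at nation-state level and above
    are an insurance problem, not a mitigation problem."""
    return attack_cost_dollars < policy_ceiling
```

A two-million-dollar attack classifies as organized crime and stays in scope; a fifty-million-dollar one is a nation state and gets handed to the insurer — and both decisions are now auditable arithmetic rather than expert vibes.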

Finally these threats all change over time. Legislation changes, law enforcement focus changes, technology changes. But all of these changes are reflected in some component of the cost to attack. Consequently the value is possible to re-assess regularly. A vague value with no measurements is harder to justify re-considering — the whole thing starts to unravel if you ever wonder whether or not it’s right. Because it has no fabric to begin with. It’s just smoke and mirrors. It’s better not to look behind the curtain in that case.

But it’s much better to build on a foundation of measurement. It’s always better to have a calculation that you can expose to reasoned debate than to shrug and trust an “expert”. None of this is so complicated that no one can understand it without training. Making it seem so is a threat to doing the job properly. Let’s throw back the curtain and make this a science again. Let’s measure things.

catastrophe in the first person

So yesterday I blurted out this twitter-splort as a sort of sub-tweet related to someone asking about what could happen to engage characters when an asteroid station’s reactor malfunctions. I gave them direct and I hope useful advice but then I did this.

Something that doesn’t get explored enough for my tastes in RPGs: confusion. In real life confusion + baseline fear creates some of the most terrifying and difficult to navigate circumstances.

When something big and terrible happens in an RPG often we start with full knowledge of it. This is a missed opportunity. Often the outward signs of a disaster for someone not immediately killed are ambiguous and subtly terrifying.

There are lots of emergency people and they don’t know what to do. People are running in multiple directions (no obvious origin of danger). Things that always work are working sporadically or not at all. There are sounds that aren’t alarming but you’ve never heard them before.

There are dead and injured and it’s not obvious what killed or injured them. There are people demanding you help who don’t know how you can help. Visibility is suddenly restricted or obliterated. Alarming smells are suddenly commonplace (gas, smoke, rubber, metal).

But most importantly these haphazard inputs are all you have. They don’t assemble into a certainty as to what’s going on. They might not even help. If you are in this situation you are either:

* leaving

* investigating so you can understand

* helping those immediately in danger

A fair question is: how do you evoke this in a game? Now my first thought is that this isn’t mechanical in the strict sense — it doesn’t need points or clocks or dice. I mean, you can employ those things, but there are more general techniques you can bring to bear.

Maybe it’s obvious, but if a real person is terrified because things are uncertain and confusing and dangerous then evoking the mood for players guiding a character through the disaster might benefit from the same thing: lack of information. This is of course in direct conflict with the idea that players should have full information and play their characters as though they don’t. Sometimes that’s the right thing and lets mechanisms already present engage, but it doesn’t establish mood. So what I’ll suggest is that whether or not you eventually draw back the curtain to allow the mechanism to play out, at least start with limited information.

So consider this asteroid reactor failure:

Ref: You’re buying noodles at a swing-bar when suddenly there’s a lurch. The air goes opaque with dust or something and your noodles fly out of your hands, whirling across the open space of the Trade Void. You hear screaming and you can’t see shit.

This is where I start: you don’t need to evoke confusion or simulate it. Start with the actual confusion. Players will probably start looking for information. Before they get too much out, follow up. This makes things urgent.

Ref: People are rushing past you, just grey shapes in this fog, bumping into you. They are heading in different directions and are incoherent. Except for the one begging for help from across the ‘Void. You find your clothes are smeared with blood from someone who passed you.

Players are now in a position where they have little information, no easy way to get more information, and yet a motivation to either leave, help, or investigate.

I think it’s a critical technique to know and use as ref: to step back from the simulation engine and use the information itself to establish mood and urgency. It’s a storytelling technique, not a game mechanism. When you rush or interrupt people, they get anxious. When they don’t have enough information they get the Fear. When they know the danger is real but don’t know the direction that is dangerous, they get careful.

The problem with this is that it’s not safe. When you try to get real emotions at the table you are treading on dangerous ground. If you’re going to attempt to directly evoke fear and anxiety in people, they had better all be on board for that. And even if they feel like they are, it’s helpful to have an out like an X-Card or a Script Change. Make sure everyone knows what they are in for and have a way to opt out. If I use fast random information and talk over people in order to establish confusion and anxiety, I’m doing a real thing to real people, and I bear a great deal of responsibility when I do that. Someone not prepared for it would have every right to get angry about it. So tread lightly and talk first.

The upside is that the mood is easier to get into, reactions come more naturally in context, and it’s easier to build scenes that are memorable for their emotion and tension.

[Image: Nachtwey_NewYork_1]
Most of our catastrophe images have context because we are looking back on the event through the lens of investigation and analysis. But what could you conclude from this if it’s all you knew? A vast cloud of thick grey is descending on you and the noise is tremendous and people are screaming. Context is a luxury.

One level above this is how to analyze situations in order to understand how to place someone in them convincingly. If you’ve never been in mortal danger, you might have no idea what features of that terror are easily conveyed. But there are things that are generally true as I indicated in those tweets:

Low information: initially you know nothing except the effects you see.

Low visibility: bad things often create visual confusion. Fog, smoke, tear gas, crowds — your ability to see what is going on is constrained, so don’t describe everything.

High emotions: people are screaming, crying, begging. Not all of them are in danger or physical distress but almost all of them are overwhelmed by the confusion. You can’t immediately tell which are which.

Blood: even just second-order injuries (people getting banged about by the other confused people) generate a lot of blood after a few minutes. And you can’t tell who’s badly injured from who just has a broken nose. Or who’s covered in someone else’s blood.

Low air: whether the air is filled with Bad Things or you’re overcrowded or you’re just hyperventilating it always feels like there is not enough air.

On the upside you will also usually find pockets of local organization: there’s usually someone trying to help and even if they have no idea what’s going on this will tend to form a nucleus of organization: people in this situation are attracted down the confusion gradient. They’ll walk right into a crossfire of bullets if it’s easier to see and breathe there.

There’s also usually a coordinated response very rapidly, and that forced organization defuses confusion. The longer it takes to arrive, the more certain people become that it’s never coming, which amplifies the confusion.

Presenting these things falls into the category of technique for me. You can mechanize some of them I suppose, but I think you only want to do that if you want your game to be about catastrophe. If you just want your particular game night to deal with a catastrophe, you want to hone some skills for presenting the catastrophic.

advancement

Oh advancement systems how we love you in the RPG world. By “we” here I mean you, and maybe not you specifically. Personally, I dislike them a great deal.

The problem with character advancement is that opposition either scales with character advancement or it doesn’t.

When opposition scales with you, the best advancement systems have the following features:

  • The range of options available to the player increases
  • The range of options available to the ref increases
  • New chapters of the monster manual are brought to the front — you are revealing new pictures of new opposition

For myself, the first two of those are not appealing. I don’t want my game to get more complicated as I play it. I’m not saying you’re bad, stupid, or evil if you like that, but let’s acknowledge that it’s a very important design choice that people are going to react to differently.

A possible hero to me without being a secret fireball-throwing wizard.

Especially as ref, new complexity can feed my anxiety and lead to me violating the rules. There’s no way I’m managing a spell list for a high level dragon and deciding what they do from round to round as though I was playing my wizard character. At least in part because this dragon is probably going to die soon and there’s another complicated monster in the next room.

As a player I can cope, especially if there’s a type of character that doesn’t change much in complexity. If my fighter has increasing bonuses to scale with the baddies but not a lot of tactical choices that increase over levels, I’ll probably play the fighter and not the sorcerer.

Revealing new parts of the monster manual is valuable: changing up the nature of opposition is cool. And the implication that these increasingly powerful monsters imply increasingly existential threats to the low level societies I am protecting is pretty cool. But this is a very specific kind of story arc and not one I want to play every time I sit down. And, frankly, not one I have the patience to work through from zero to hero. It’s just not for me. I’d rather start where the fun is, wherever that is for me today.

Everything else is basically the same except the numbers are bigger, and this can get to feel pointless, especially if the monster manual is weak. If the gnolls just keep getting bigger and better at magic then I don’t feel like we’re going anywhere interesting. I’m just doing more damage against larger hit point pools.

If the system doesn’t scale opposition (like an asymmetrical system where the opposition model doesn’t change or where the opposition isn’t really modelled at all) then something very different happens: you just get more successful. Now I actually find that pretty interesting as long as it happens slowly and as long as failure is rich — the whole tone of the game should change over time. But it has a cap and not a very well defined one: at some point there are no challenges any more and that’s an unsatisfying way to end a story. It might make an amusing allegory once. Just once.

Again these are matters of personal taste. I know there are people (because I was one) who get a rush from advancing. Accruing enough points to ring the bell and get a new power is intrinsically satisfying regardless of its relationship to the story (and sadly there often isn’t one — maybe I’d be keener if something happened in the fiction to explain and explore my sudden leap in ability). But this makes it a mode of play, not a necessary feature of play. I like playing cards for money but it doesn’t mean that money needs to be on the table for every card game.

This is why advancement figures weakly if at all in my games: it doesn’t sing to me. It’s important that there are games that have it because it sings to a lot of people. But it’s important to have games that don’t as well, because that thrill of improvement ties a reward to the accrual of experiences that help you advance, which can distract you from the fact that sometimes these things are abhorrent and rewarding them should be questionable. When the thrill comes from this reward, this advancement, questioning the underpinnings of the idea of rewarding murder and robbery (for example) is uncomfortable and unproductive. I think we need to play without mechanical reward for a while to get a grip on what kinds of things we love in a story that aren’t murder and robbery. Maybe that leads us to games that reward different things and in different ways. And sometimes we find that it’s fine that that changes from session to session, and maybe advancement as a reward isn’t always necessary.

And sometimes, for sure, we want to ring that bell as we stand on the corpse of a wizard-dragon that took hours of smart choices to slay. But not always. And, for me, not even mostly.

Postscript

I wanted to talk a little about heroism but I forgot to. I don’t think a hero should be defined by their capabilities. I mean they can be, but it feels insufficient — even Superman is a hero for reasons far beyond being crazy strong and largely invulnerable. Powers enable hero or villain. A hero to me is about how someone responds to adversity — about the choices they make when the choices are hard. So “heroic” gaming to me is gaming within a context where it’s not obvious what the right thing to do is and, most importantly, where you’re celebrated when you make a great choice. People treat you like a hero when you’re heroic. Scale of conflict is not strictly relevant, though it’s a cheap way to get action that could resolve heroically.

chaos and economy and weather

Economies are multi-variate chaotic systems.

What’s a chaotic system?

A chaotic system is a function. It’s arithmetic. But it’s a function in which its variables’ future state depends on their current state. For example:

f(z) = z² + C

C is a constant. Every iteration you take the old value of z and square it, then add C, and that’s your new value of z. Doesn’t look dangerous, right? Well if z is a complex number (so it’s really two parameters, not one — the real part and the imaginary part) and you map the number of iterations before it explodes to some huge number or to zero, you get this:
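That iteration is short enough to sketch in code. This is a minimal Python version of the standard escape-time computation (the function name and the iteration cap are my own choices; the |z| > 2 bailout is the usual one, since once |z| passes 2 divergence is guaranteed):

```python
def escape_count(c, max_iter=100):
    """Iterate z = z**2 + c from z = 0, counting iterations until
    |z| exceeds 2 (after which divergence is guaranteed). Points
    that survive max_iter iterations are treated as in the set."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

# c = 0 stays at 0 forever: inside the set.
print(escape_count(0 + 0j))   # 100 (never escapes)
# c = 2 + 2j escapes on the first step: far outside.
print(escape_count(2 + 2j))   # 0
```

Colour each point of the complex plane by its escape count and you get the famous image below.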

[Image: the Mandelbrot set]
Created by Wolfgang Beyer with the program Ultra Fractal 3 – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=321973

And that’s just two variables. Well, one complex one. Which is like two. Even that lovely image doesn’t do justice to the complexity of this result. If you zoom in on the regions that border on the black area of certainty, drilling down into variations in starting conditions at the fifth, tenth, sixteen thousandth decimal place, you will see an explosion of new complexity. Not random, but chaotic. With tiny islands of stability, regions of periodicity, and a whole lot of places that cannot be determined without another decimal place.

Just two variables.

Let’s look at just one. Let’s use f(x) = x² + 0.21. Just one variable, no complex math. And let’s use a spreadsheet to see what happens between, say, 0.01 and 1.3 or so. I won’t paste in my whole sheet but you can do it for yourself. Here are some features:

[Screenshot of the spreadsheet of iterated values]
It’s tiny so you can see the convergence patterns. The #NUM! errors are because Excel doesn’t like to work with numbers that have more than around 300 digits. So we’re calling that infinity.

If we start with x less than 0.3, the function trends up towards 0.3.

Between 0.3 and almost 0.7 (in fact infinitely close to 0.7), the function trends down towards 0.3.

At 0.7 the function just always returns 0.7.

After 0.7 the function explodes faster and faster towards infinity.

0.3 is clearly some kind of attractor, an orbit in this simple (so very simple) system. And something is magic about 0.7 — it’s perfectly, utterly stable over time and yet it is so very precise. A millionth of a millionth more or less than 0.7 and it either hugs 0.3 eventually or it spins very rapidly indeed off to infinity.

In any chaotic system there may be regions of stability, called “attractors” or “orbits”, where state fluctuates around some point on the map, never quite leaving the region (for the period we simulate). You see some perfect attractors in the super simple system above — 0.3 is obviously an attractor. 0.7 isn’t an attractor but it is a fixed point, and a precarious one: land exactly on it and you stay forever, miss by anything at all and you get pushed away. In more complex systems these orbits might not be so reliably convergent — they might diverge suddenly as they approach. Maybe at the fiftieth iteration. Maybe later.
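You don’t need Excel to reproduce the experiment. Here’s a sketch in Python (the function name and the one-million cutoff standing in for Excel’s #NUM! cells are my own choices). Exact rational arithmetic also shows why 0.7 is so special: (7/10)² + 21/100 is exactly 7/10, so in exact math it never moves at all.

```python
from fractions import Fraction

def iterate(x, c, steps=200):
    """Iterate f(x) = x*x + c, treating anything past a million
    as 'infinity' (the spreadsheet's #NUM! cells)."""
    for _ in range(steps):
        x = x * x + c
        if x > 1_000_000:
            return float("inf")
    return x

# Starting below 0.7, the orbit is pulled toward the attractor at 0.3:
print(iterate(0.1, 0.21))    # ~0.3
print(iterate(0.69, 0.21))   # ~0.3
# Starting above 0.7, it explodes:
print(iterate(0.71, 0.21))   # inf
# 0.7 itself, in exact rational arithmetic, is a perfect fixed point:
print(iterate(Fraction(7, 10), Fraction(21, 100)))  # 7/10, forever
```

Note that the last line uses exact fractions deliberately: in floating point, a rounding error at the sixteenth decimal place is enough to eventually knock the orbit off the fixed point, which is exactly the sensitivity problem described below.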

The important thing here is… well, there are two. They are:

Unpredictability. You can’t guess from the current state what the next state will be. You have to crunch the numbers. In simpler systems you can make a guess of course, and even likely be right, but it’s hard (or in more complex systems impossible) to prove your guess analytically.

Sensitivity. You can’t simulate the future state because tiny variations in a variable’s value can have dramatic impact on future state: your simulation has to be perfect to be useful. 0.6999999999999999 converges on 0.3. 0.7000000000000001 goes to infinity. A rounding error can kill you.

Our economy has thousands if not millions of variables. Just three would need a three-dimensional image to display, and one we could somehow see inside of at that. Four variables would need a space we are not equipped to visualize. The economy is mind-bogglingly complex. It’s at least as bad as the weather. And while we have all kinds of tricks for predicting the weather, they all pretty much boil down to this: tomorrow will be like today only slightly different, with a sprinkling of last time it was like this, the next day was like that.

A free market economy (a perfect one — a spherical cow in a vacuum) is a chaotic system in which participants have faith that this algorithm will result in the best possible world for the most people. This algorithm was not, however, designed in any fashion. It was just set loose. No one has ever understood all the variables and it’s intrinsically impossible to predict its behaviour. It has no knowledge of, nor interest in, us or itself. It’s just a huge chunk of arithmetic. This is a weird place to put your faith.

Variations on the free market are an attempt to change key variables to get local effects that are desirable. Experimentally (and even analytically) we can find ways to reduce interdependency, to force certain variable states, and even to make certain variables irrelevant to the calculations. It’s fundamentally an attempt to simplify the chaotic system, to make it less chaotic. Or to push people into stable regions with positive effects.

You are already aware of some of the more stable regions. Being incredibly wealthy is a fairly stable region. Few variables impact you meaningfully and you have to make extreme moves to put yourself in a position where you won’t just be nudged back into this comfortable orbit no matter what you do.

Being incredibly poor is also a very stable region. There is very little that will shift this orbit short of the random injection of a ton of money. It’s why lotteries are so popular even though the odds are bad. It’s worth being worse off even for the hope of being catapulted out of this region of space, whether or not it comes true. Because it’s pretty much the only game in town once you orbit this dark star.

But without deliberately manipulating the system, without trying to control variables based on past experience, you are at the mercy of the winds. There are no guarantees. No large body of math cares about you. Don’t put your faith in it. Put your faith in people.

splitting an infinitive

Why not?

Well these days (as opposed to a hundred (good to one sig fig) year period of conservatism around which the language is fluid as hell) that’s maybe not a useful question. We do what we please with English and the language is sort of famous for surviving it. For a long time, however, and currently amongst the sort of pedant that has a strong opinion about Oxford commas, the split infinitive was Not Allowed.

But English is really good for splitting infinitives.

The infinitive form of a verb is its naked form, unconjugated. So in English the infinitive “to go” is conjugated as “she goes, we go, they go, you go”. That infinitive is apparently never allowed to have a word inserted between “to” and “go”. It’s to be treated as though it’s un-fucking-divisible. A single word with a space inside it that apparently acts like a letter.

This is, I think, mostly an effort at linguistic political correctness to avoid drawing attention to the fact that many (maybe most) lesser languages do not have this feature. Their infinitives (aller, for example, en Français) are really one word. Which means they do not have the tonal equivalent of “to boldly go” which delivers a mood distinct (to my ear anyway) from “boldly to go” or, worse, “to go boldly”. It’s perhaps the proscription itself that lends this tone (which totally undermines my argument by making the proscription necessary in order to have the feature) by undermining the formality of the “correct” structures. Kirk in the Star Trek opener is established by his linguistic choice as an everyman who doesn’t give a rat’s ass about ancient style guides nor, by extension, Robert’s Rules of Order. We know in our viscera before we even see him that he’s a hero we get to aspire to be. He shirked his way through college and the academy (which later we find out is true). He must have.

And some infinitive-busting structures don’t even have correct variants. Consider “I’m going to fucking shoot you in the face.” It’s distinct from “I’m going to shoot you in the fucking face” in that the rude word modifies “face” instead of “shoot”. And obviously you can’t say “I’m going fucking to shoot you in the face.” Then you just get laughed at. You’ve descended below the low bar of lovable rogue to incomprehensible villain. “I’m fucking going to shoot you in the face” is weirdly acceptable, modifies the wrong word, and seems like a grammatically worse choice than splitting the infinitive even though it’s fine. It’s more of a hipster bandit move: an attempt to get you to argue with their usage so they can produce evidence it’s correct. Before shooting you in the face.

So let me suggest that we need not be polite to our compatriot languages who are stuck with indivisible verbs. Our verbs are naturally divisible and this division begs for modifiers. Every space is a possibility for a slightly different tone. It does not invite confusion but rather establishes the writer’s intent clearly and efficiently. The space in the middle of our infinitives is a tool to be wielded however we like to use tools.

Of course, once we get to this point we have to wonder what the “to” is for anyway. What does “to go” mean, decomposed? What work does the “to” do? In the phrase “I’m going to go” it seems to have more to do with “going” than “go” to my ear. That is, as the sentence proceeds, “I’m going to…” is still sensible — I’m certainly going somewhere and to is a somewhere word. I’m going to the store. I’m going to outer space. I’m going to sleep. The “to” is independent — it doesn’t need a verb at all to be useful.

So rather than knuckle under to linguistic equivalentists who would hobble English in order to put it on equal footing with French or, heaven forbid, Latin, let’s instead celebrate this feature of the English infinitive. Split it at will. It’s already split.

Diaspora testing still happens every week

In the current testing form for Anabasis, the rules for a check are something like: the ref declares a risk, then the player rolls |d6-d6| and adds their skill. If they have a relevant specialization, they add another 1. Index on the table:

  • 0 — fails and always generates a new risk from the 6
  • 1-2 — fail, risk realized
  • 3-5 — success, risk realized
  • 6+ — success, no risk

Now, this means that very often risks are realized. So there’s another rule: if you take a stress point, you can increase your roll by one. Take more if you like. Now as your stress goes up you start getting character quirks that could be troublesome, so there’s no “win” here — either the risk is realized (you’re still successful at what you tried unless you roll 2 or lower) or you start to get burdened with Compulsion and Bad Judgement and so on. The ref starts needling you with “the inactivity is agitating you” and “even though there’s a battle going on you are highly distracted by the electrical system under the dash, which doesn’t look properly grounded”.
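Since the roll is just |d6-d6| plus a modifier, you can enumerate all 36 equally likely die pairs and count outcomes against the table above. A sketch in Python (the function names and the skill-1 example character are mine; the table is from above):

```python
from collections import Counter
from itertools import product

def outcome(total):
    """Classify a check total against the table above."""
    if total <= 0:
        return "fail, new risk"
    if total <= 2:
        return "fail, risk realized"
    if total <= 5:
        return "success, risk realized"
    return "success, no risk"

def check_odds(skill, specialization=0):
    """Count outcomes over all 36 equally likely (d6, d6) pairs
    for |d6 - d6| + skill (+1 with a relevant specialization)."""
    counts = Counter()
    for a, b in product(range(1, 7), repeat=2):
        counts[outcome(abs(a - b) + skill + specialization)] += 1
    return counts

# A hypothetical skill-1 character, no specialization:
print(check_odds(skill=1))
```

For that skill-1 character, only 2 rolls in 36 dodge the risk entirely and the other 34 realize it, which is the point: risks are usually realized, and the stress spend exists precisely to buy those odds down at a cost.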

Both of these have the same purpose: they generate new and unexpected trouble. The big difference is that the risk is in the hands of the ref and the stress effects are in the hands of the player.

[Image: Abadyos]
He looks a little stressed out, no?

An example: Abadyos is trying to fly an unfamiliar shuttle through the atmosphere of a gas giant. He faces a roll with the risk REVELATION — something heretofore unknown will be brought to light and it won’t be something good for the characters. Abadyos makes his roll with a total of 4. So he could spend 2 stress to get past the risk or he could just suffer the risk realization. In either case he has a success: he’s going to successfully fly this flight path through the gas giant’s strange atmosphere.

So this is a pivot: either way the story is likely to take a new direction. We’re not just flying to Haifeng the dirigible city any more.

Abadyos’ player chose the stress. He was under severe stress once before and compulsively disassembled and knolled part of the medbay, which was a problem for weeks. This stress has no immediate effect, but later, agitated waiting for a stealthy resolution of another problem, he decides to make a Bad Decision (a stress effect) and burst through doors he knows are guarded.

Acting on his stress is something that was up to the player. I cued it, prodding with declarations about the character’s internal state, but the player declared the action. In the past I would have been skeptical about such a purely social mechanism and wanted to mechanize it with points and a meter to manage or something like that. Maybe I just have great players, but this mechanization appears to be unnecessary. Some players are happy to take the cue and make their lives harder. They recognize that they bought the trouble by spending stress points. They know they should make good on the purchase.

If he’d chosen the REVELATION, a bad choice of rocket operation parameters would have ignited part of the gas giant’s atmosphere, pointing a giant arrow at the characters who are trying to hide. Now this is my space as ref: I am being asked to ad lib a major change in plot direction. It’s similar to the stress situation in that in both cases someone has a new creative burden with loose but clear direction: your character is agitated and impulsive and prone to making bad decisions right now or, in the case of the risk, the ref is mandated to create a new fact that changes the direction of the game.

I used to feel I had to mechanize things like this further, but someone pointed out to me that the fiction has its own weight. That there are things that need no further rules because they have a fictional presence that can only be responded to in a limited fashion within the context of the rest of the fiction. If you have a rope, you can do rope things. You don’t need a rule for every possible use of rope. We know what rope is for, and the current context of the fiction establishes the limits of what rope can do. You can write rules for it if you want, but you can get away with startlingly few when we’re talking about something everyone understands deeply. Rope. Agitation. Impatience.

I recognize that this is not necessarily a popular direction. But I think you will like it — maybe love it — because where Diaspora Anabasis puts its mechanical effort is in setting creation and character creation. We mechanize the establishing context and then inject deviations and obstacles. I think this is consistent with the original vision of Diaspora and it’s certainly consistent with how I plan and run a game.

You may notice this is similar to the Soft Horizon system and it is. It’s tuned for a different purpose and the dice are different, but the core method is the same. So far this is because it really really works for me. That could be the kiss of death commercially.