Epistemic effort: A revision of a previous idea, marinated in several long conversations. It is intended as a reframe for a less technical audience, with more examples for accessibility and some insight offered for a particular neurotype. This now makes sense to n=2 instead of n=1. Previously.
Brains are by their nature hard to analyze. Their inner workings are locked away and mostly uninspectable except via interaction with their byproducts (people really don't like it when you try to do more direct experimentation). Much like other systems whose internals you can't directly access, you can treat them as a function transforming input (which in this model contains sensory input and memory) into output (mostly speech and muscle movements, but arguably sensory overrides/hallucinations as well). The whole system evolves over time, as memory is processed over and over again, mutating in a feedback loop that lets the system adapt to its surroundings.
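For the programming-brained, here's a minimal sketch of that black-box framing in Python. Everything in it is illustrative - the names and structure are assumptions made for the metaphor, not claims about how brains actually implement any of this.

```python
from dataclasses import dataclass, field

@dataclass
class Mind:
    """Black box: we only ever see inputs and outputs, never the internals."""
    memory: list = field(default_factory=list)

    def step(self, sensory_input):
        # Output is a function of the current input plus accumulated memory.
        output = self._respond(sensory_input, self.memory)
        # The feedback loop: the experience (and the response to it) gets folded
        # back into memory, so the next step runs on a slightly different system.
        self.memory.append((sensory_input, output))
        return output

    def _respond(self, sensory_input, memory):
        # Uninspectable from outside; a placeholder standing in for everything interesting.
        return f"reaction to {sensory_input!r}, shaped by {len(memory)} prior experiences"
```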
Unfortunately, the loop isn't perfectly adaptive. I identify two main classes of loop failure manifestation - input processing and output processing. Input-processing errors are things like mistakes in verbal comprehension, hallucinations, and recall issues. Output-processing errors are things like involuntary actions, freeze reactions, and mistaken memories. Faults can manifest as a result of acute traumatic events, or of longer-term patterns of experience that result in maladaptive behavior. Often, one underlying fault can produce multiple visible behaviors, or a single behavior may have multiple underlying faults - they're not one-to-one.
There's no high-level comprehensible programming environment for people, though - you can't simply go in and replace problematic parts of a person's behavior, much as it's inconvenient for traumatized individuals and their loved ones (no comment on therapists). Cross-cutting traumas also make treating a mind as severable at best wrong and at worst an added risk of harm. A problem with corrupted memory may be due to damage occurring in stored memory, to input being stored incorrectly in the first place, or to some combination of the two, and there's no way to really tell without a lot of painful diagnostic work. Because interpretability is hard, and direct controls are nigh impossible, a process that lets you fix output is probably going to be one where the agent is exposed to additional situational data (real or simulated) and simultaneously given reward for good responses - closer to training a neural net than programming an expert system. An obvious flaw in this approach shows up if input processing is desensitized or otherwise broken. It also doesn't replace existing memories; it layers new ones on top of them - so the process can be gradual and frustrating for the mind involved. Finally, you don't get to simply turn off the existing mind and replace it wholesale with another - changes are made while it's awake and online. That means every individual gradual change must lead to a viable mind that doesn't simply quit the process, leaving it halfway in between - which can be worse than what they started with.
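Building on the toy Mind above, the contrast looks something like this sketch - a picture of the metaphor, not a therapy protocol, with reward_fn and the session count as stand-in assumptions:

```python
import random

def retrain(mind, situations, reward_fn, sessions=100):
    """Training, not programming: we never reach in and edit the mind's internals.
    We expose it to situations (real or simulated), score the responses, and feed
    the score back in as more experience."""
    for _ in range(sessions):
        situation = random.choice(situations)
        response = mind.step(situation)
        reward = reward_fn(situation, response)
        # The only lever available: more input. The reward itself arrives as experience.
        mind.step(("feedback", situation, reward))
        # Notably absent: anything like mind.memory.remove(bad_memory).
        # Old patterns stay; new layers can only outweigh them over time.
```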
Flexibility is important for systems in avoiding these kinds of maladaptive patterns in the first place - if one can't turn to face a wave, one risks being drowned by it. Flexibility can be implemented in various ways, but the native way for most people is described above - data goes in, actions come out, weights of various factors are subconsciously adjusted, and the pattern shifts in turn. This flexibility grows brittle, though, with terminal changes (ones which prevent further updates, e.g. a psychotic break where you lose your ability to trust your sensory input severely enough that you can't reason your way back to consensus reality). These changes are terminal because they change the overall system from dynamic to static, unable to adapt to changing conditions and brittle to changes in the input stream. An extreme example would be a condition like Alzheimer's, where the loss of the ability to form new memories renders the afflicted unable to cope with a world that appears to change mysteriously. A less extreme terminal update would be any of the kinds of fanatic ideologies that infect sufferers with an inability to reason about their truth content, regardless of evidence. This kind of damaging change can be irreversible when updates are treated generically as part of the main system loop. You cannot simultaneously insulate yourself from these kinds of updates, apply all updates through the same generic data-in, action-out mechanism, and keep the update mechanism powerful enough that you stay sensitive to changes in your environment.
A straightforward option for sandboxing is holding some patterns separate from the rest. This lets you privilege the updater while giving it full access to the rest of the system's properties. It doesn't, however, ultimately fix the problem - the updater part is rendered immutable and static by its immunity from updates, leaving the system partly flexible, partly static, in a way that handles everyday life but leaves one open to breaking under the weight of epiphany. It also doesn't help if the severance of the updater is done after some corruption has already spread to it - reifying insensitivity to updates does not a sandbox make.
One alteration to the severable system that I've found to work well is introducing a concept of versioning. Keeping track of the different ways the system has behaved over time can feed back usefully into the process of change, providing a shortcut to reverting a bad update: assign that previous self more credence in response to new experience. It's a sort of lightly held identity - understanding the self as a target of change - that gives you, plural, a chance to cooperate between versions over time and grow without losing the capacities of the past.
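Concretely, the shape of the idea looks something like the sketch below. The types and numbers are arbitrary assumptions; the point is that old versions are kept rather than overwritten, and "reverting" is just shifting credence back toward one of them.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Version:
    description: str   # how this version of the pattern behaves
    created: datetime
    credence: float    # how much weight this version currently gets

class VersionedPattern:
    """Lightly held identity: a history of selves, not a single overwritten state."""

    def __init__(self, initial: str):
        self.history = [Version(initial, datetime.now(), credence=1.0)]

    def update(self, new_description: str, initial_credence: float = 0.5):
        # New versions start with partial credence; the previous self is not erased.
        self.history.append(Version(new_description, datetime.now(), initial_credence))

    def revert_toward(self, index: int, amount: float = 0.25):
        # New experience suggests the update was bad: give the earlier self more credence.
        self.history[index].credence += amount
        self.history[-1].credence = max(0.0, self.history[-1].credence - amount)

    def current(self) -> Version:
        # Whatever the system acts on right now is just the highest-credence version.
        return max(self.history, key=lambda v: v.credence)
```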
Additionally, this isolation of versioned updates helps when you're not confident in what "better" means, or think that you might have been compromised by someone's malicious conception of value. In this case, you can use those you trust (in some ways a part of the self, given that they're often part of one's input) to provide an outside view of the change's sign.
So, if this technique works with one split (updater | updated), then why stop there? It seems like a straightforward extension to use this at a much finer-grained level, perhaps even at the level of individual heuristics. This is perhaps abstract, so as an example (a rough code sketch of the same loop follows the list):
- Isolate a thoughtform ("I should rise as soon as I feel rested")
- Construct a thoughtform alike in inputs and outputs ("I should rise as soon as I wake")
- Each time a thought fitting this form comes up, try both (or alternate), and check which one is giving you better results
- When you're confident that you know which thoughtform works best, switch over to it completely for a while, and set a reminder to re-evaluate after an appropriate amount of time (long enough to gather the additional information you need to be fully confident, but not so long that your memory of the previous form is completely gone before you let it fade)
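Here's that loop as code. Everything about it - the names, the scoring, the minimum trial count - is an assumption made for illustration; score the trials however actually matters to you.

```python
import random
from statistics import mean

class ThoughtformTrial:
    """Run two candidate thoughtforms side by side, score the outcomes,
    and only commit once one of them clearly wins."""

    def __init__(self, original: str, candidate: str):
        self.results = {original: [], candidate: []}

    def pick(self) -> str:
        # Each time the occasion comes up, randomize (or alternate) which form you act on.
        return random.choice(list(self.results))

    def record(self, form: str, score: float):
        # Score however you like - mood on rising, time out of bed, etc.
        self.results[form].append(score)

    def verdict(self, min_trials: int = 10):
        # Don't switch over until both forms have enough data to compare.
        if any(len(scores) < min_trials for scores in self.results.values()):
            return None  # keep experimenting
        return max(self.results, key=lambda form: mean(self.results[form]))

# Usage:
#   trial = ThoughtformTrial("rise when rested", "rise when awake")
#   each morning: form = trial.pick(); act on it; trial.record(form, score)
#   later: if trial.verdict() is not None, switch over and set a re-evaluation reminder.
```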
At least to me, this seems to have obvious advantages over allowing change to happen in a less structured manner, most of which have been elaborated on above. So, what are some of the downsides? The main one that comes to mind is that there's a lot of mental overhead in doing this. One has to keep extra versions of heuristics around and manage the lifecycle of changes, and that management itself must be kept up to date. I'd estimate that when I did this at thoughtform-level granularity, I spent an additional 5-8 hours per week purely doing what amounts to navel-gazing bureaucracy to keep things in line. I've found that lumping parts of thought into subunits that are managed separately works well to avoid too much overhead (down to 2-4 hours per week) while still allowing extra caution in how I change.
This process of immutable updating, versioning, and other dissociation-powered mental tech seems alluringly similar to plurality - and indeed, it provides valuable mechanisms by which plural systems can form consensus and make system decisions. I do think there's an important distinction to be drawn between our system's use of versioned subunits and plurality in general. We model plurality as placing a differential gear between different decisioning processes and the public face of the vessel. In some sense, you could think of each versioned subunit as a little plural, but they don't have narrative in the way that I think of as plural. Alters in our system are each composed of a large number of thoughtforms, and that gives them the power to arbitrate among versions in a way that the versions themselves cannot, by design. In fact, one way to think about alters is as divergent update processes - ones that by their nature pull in different enough directions that they don't simply merge back down into one. I don't have much in the way of conclusion here, but it seemed important to note that this narrative difference fits into the overall mental paradigm differently than ordinary thoughtform subunits do.
Credit to keys for beta reading - thanks!