The process that invents democracy is part of some telotect, but is it part of a telophore? Or is the telophore only reached when democracy is implemented?
Musing about how (maybe) certain telophemes impose constraints on the structure (logic) of their corresponding telophores and telotects. E.g. democracy, freedom, autonomy, justice, corrigibility, rationality, ... (though plausibly you'd not want to count (some of) those examples as telophemes in the first place?)
I think that your question points out how the concepts as I've laid them out don't really work. I now think that values such as liking a certain process or liking mental properties should be treated as first-class values, and this pretty firmly blurs the telopheme / telophore distinction.
Curious whether the following idea rhymes with what you have in mind: telophore as (sort of) doing ~symbol grounding, i.e. the translation (or capacity to translate) from description to (worldly) effect?
It's definitely like symbol grounding, though symbol grounding is usually IIUC about "giving meaning to symbols", which I think has the emphasis on epistemic signifying?
Right, but I feel like I want to say something like "value grounding" as its analogue.
Also... I do think there is a crucial epistemic dimension to values, and the "[symbol/value] grounding" thing seems like one place where this shows quite well.
Ok yeah I agree with this. Related: https://tsvibt.blogspot.com/2023/09/the-cosmopolitan-leviathan-enthymeme.html#pointing-at-reality-through-novelty
And an excerpt from a work in progress:
For example, I reach out and pick up some blueberries. This is some kind of expression of my values, but how so? Where are the values?
Are the values in my hands? Are they entirely in my hands, or not at all in my hands? The circuits that control my hands do what they do with regard to blueberries by virtue of my hands being the way they are. If my hands were different, e.g. really small or polydactylous, my hand-controller circuits would be different and would behave differently when getting blueberries. And the deeper circuits that coordinate visual recognition of blueberries, and the deeper circuits that coordinate the whole blueberry-getting system and correct errors based on blueberrywise success or failure, would also be different. Are the values in my visual cortex? The deeper circuits require some interface with my visual cortex, to do blueberry find-and-pick-upping. And having served that role, my visual cortex is specially trained for that task, and it will even promote blueberries in my visual field to my attention more readily than yours will to you. And my spatial memory has a nearest-blueberries slot, like those people who always know which direction is north.
It may be objected that the proximal hand-controllers and the blueberry visual circuits are downstream of other deeper circuits, and since they are downstream, they can be excluded from constituting the value. But that's not so clear. To like blueberries, I have to know what blueberries are, and to know what blueberries are I have to interact with them. The fact that I value blueberries relies on me being able to refer to blueberries. Certainly, if my hands were different but comparably versatile, then I would learn to use them to refer to blueberries about as well as my real hands do. But the reference to (and hence the value of) blueberries must pass through something playing the role that hands play. The hands, or something else, must play that role in constituting the fact that I value blueberries.
In general, values are founded on reference. The context that makes a value a value has to provide reference.
The situation is like how an abstract concept, once gained, doesn't overwrite and obsolete what was abstracted from. Maxwell's equations don't annihilate Faraday's experiments in their detail. The experiments are unified in idea--metaphorically, the field structures are a "cross-section" of the messy detailed structure of any given experiment. The abstract concepts, to say something about a specific concrete experimental situation, have to be paired with specific concrete calculations and referential connections. The concrete situations are still there, even if we now, with our new abstract concepts, want to describe them differently.
If so, then values are essentially diasystemic.
Reference goes through unfolding.
To refer to something in reality is to be brought (or rather, bringable) to the thing. To be brought to a thing is to go to where the thing really is, through whatever medium is between the mind and where the thing really is. The "really is" calls on future novelty. See "pointing at reality through novelty".
In other words, reference is open--maybe radically open. It's supposed to incorporate whatever novelty the mind encounters--maybe deeply.
An open element can't be strongly endosystemic.
An open element will potentially relate to (radical, diasystemic) novelty, so its way of relating to other elements can't be fully stereotyped by preexisting elements with their preexisting manifest relations.
[Metadata: crossposted from https://tsvibt.blogspot.com/2023/06/telopheme-telophore-and-telotect.html. First completed June 7, 2023.]
To come to know that a mind will have some specified ultimate effect on the world, first come to know, narrowly and in full, what about the mind makes it have effects on the world.
The fundamental question
Suppose there is a strong mind that has large effects on the world. What determines the effects of the mind?
What sort of object is this question asking for? Most obviously it's asking for a sort of "rudder" for a mind: an element of the mind that can be easily tweaked by an external specifier to "steer" the mind, i.e. to specify the mind's ultimate effects on the world. For example, a utility function for a classical agent is a rudder.
But in asking the fundamental question that way--asking for a rudder--that essay loses grasp of the slippery question and the real question withdraws. The section of that essay on The word "What", as in "What sort of thing is a 'what' in the question 'What determines a mind's effects?'", brushes against the border of this issue but doesn't trek further in. That section asks:
It should have asked more fully:
That is, what structure does a mind have to possess, so that there can be an element that determines the mind's ultimate effects?
To put it another way: asking how to "put a goal into an agent" makes it sound like there's a slot in the agent for a goal; asking how to "point the agent" makes it sound like the agent has the capacity to go in a specified direction. Here the question is, what does an agent need to have, if it has the capacity to go in a specified direction? What is the mental context in which a goal unfolds so that the goal is a goal? What do we necessarily think of an agent as having or being, when we think of the agent as pursuing a goal?
Synopsis
Telopheme
The rudder, the element that determines the mind's ultimate effects, is a telopheme. The morpheme "telo-" means "telos" = "goal, end, purpose", here meaning "ultimate effects". The morpheme "-pheme" is like "blaspheme" ("deceive-speak"). ("Telopheme" is probably wrong morphology and doesn't indicate an agent noun, which it ought to do, but sadly I don't speak Ancient Greek.) So a telopheme is a goal-sayer: it says the goal, the end, the ultimate effects.
For example, a utility function for an omnipotent classical cartesian agent is a telopheme.
There's a hidden implication in the name "telopheme". The implication is that ultimate effects are speakable.
Telophore
The minimal sufficient set of preconditions for a mind's telopheme to be a telopheme is the telophore (or telophor) of the telopheme. Here "-phore" means "bearer, carrier" (as in "phosphorus" = "light-bearer", "metaphor" = "across-carrier"), in the sense of "one who supports, one who bears the weight". So a telophore is a goal-bearer: it carries the telopheme, it constitutes (supports) the goal-ness of what the telopheme says, it unfolds the telopheme into action and effect, it makes it the case that the telopheme says the ultimate effects of the mind. (The telophore gives the mind substantial-ultimate-effect-having-ness--which could be called "telechia", "telos-having-ness", cf. entelechy.) An alternative name for telophores would be "agency-structures", but "agency" is ambiguous, and it emphasizes action-taking rather than effect-having.
Minimality
Continuing the example of an omnipotent cartesian classical agent, the telophore is murkier than the telopheme. It seems to require the whole rest of the agent besides the utility function U: the world-model, the policy generator, and the search procedure applying U to possible worlds.
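The split described above can be sketched as a toy program. This is a minimal illustration, not anything from the essay: all names here (`make_agent`, `world_model`, etc.) are hypothetical, and the "world model" and "search" are deliberately trivial. The utility function plays the telopheme role; the world model plus the search procedure applying it to possible worlds stand in for the rest of the agent, i.e. the murkier telophore.

```python
# Toy sketch: a classical agent factored into a telopheme (the utility
# function U) and the machinery that bears it into effects (a world model
# plus a search over actions). All names are hypothetical illustrations.

def make_agent(utility):
    """`utility` plays the telopheme role: swapping it out changes the
    agent's ultimate effects while the rest of the agent stays fixed."""

    # World model: predicts the world resulting from an action.
    # Trivially here, acting just appends the action to the world-history.
    def world_model(state, action):
        return state + [action]

    # Search procedure: applies U to each possible world and picks the
    # action whose predicted world scores highest.
    def act(state, actions):
        return max(actions, key=lambda a: utility(world_model(state, a)))

    return act

# Two agents differing only in their telopheme:
blue_lover = make_agent(lambda world: world.count("blueberry"))
blue_hater = make_agent(lambda world: -world.count("blueberry"))

state = []
actions = ["blueberry", "pebble"]
print(blue_lover(state, actions))  # picks "blueberry"
print(blue_hater(state, actions))  # picks "pebble"
```

Note that even in this toy, the telopheme determines effects only via everything else in `make_agent`: change the world model and the same utility function yields different effects, which is the sense in which the telophore "makes the telopheme a telopheme".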
Taking the whole rest of the agent as a telophore is casting too wide a net. It doesn't give us any traction on understanding the agent; it says that to come to know that the telopheme is a telopheme, we have to come to know everything about the whole agent. A telophore is narrower. A telophore is a set of elements that is sufficient to support that the telopheme determines the mind's ultimate effects, and is minimal with that property, i.e. a strict subset wouldn't suffice.
As an example of how a telophore could be more narrowly identified, consider an agent's world model:
So the telophore can in some cases be narrowed down from the whole mind, by excluding the understanding that the mind gains through processes which transparently are going to gain that understanding.
Sufficiency
"What unfolds the telopheme into action and effect" sounds like it describes specifically the agent's decision theory. But the agent's decision theory isn't sufficient because a decision theory without a world model can't translate a goal-statement into effects. So the telophore is not (contained in) the decision theory.
Suppose there is a strong mind M that is corrigible by a human. That means that the human can arbitrarily modify M. So, if there is a telopheme, its telophore is extended across both M and the human. Something entirely contained in M wouldn't explain how it survived the human's scrutiny. Without explaining how it persists to translate the telopheme into effects on the world, it's not sufficient to make the telopheme a telopheme.
Limits
An edge case: suppose that an agent's utility function is encrypted, so that external observers can't predict how the world will end up, and can't specify how the world will end up without learning how the telophore will decrypt the telopheme.
I suspect that nearly all minds have a telophore, even if they don't have a telopheme.
Telotect
Humans do not have a telopheme, at least not yet. Minds that we encounter may also not have a telopheme, at least at first. For this reason, the fundamental question asks about natural, autonomously growing minds. The question asks about what constructs the telophore and what writes the telopheme, so that the full telophore is revealed.
The answer to that question is a mind's telotect. "-tect" = "carpenter, builder", as in "architect". A telotect is a goalmaker: it makes the telopheme and the telophore, the structures that say and bear the goal into the world.
These two operations might be better to separate, but they might not be feasible to separate, because minds might naturally come with the telophore-maker and the telopheme-writer intertwined. The simple-seeming definition of telotect is: the telotect is all the elements (forces, processes, selection, decisions, optimization) that make it end up the case that the mind makes X happen. The fact that the mind makes X happen could be broken up into
but the simpler fact is that the mind makes X happen.
Examples
Telophore vs. telotect
Some of these examples show that the boundaries between {the telopheme, the telophore} and the rest of the telotect are murky or nonexistent. The process that invents democracy is part of some telotect, but is it part of a telophore? Or is the telophore only reached when democracy is implemented? The telotect, to borrow a metaphor, climbs up a ladder to the telophore and then kicks the ladder out from under itself, screening off the mind's history from the subsequent ultimate effects of the mind as borne by the telophore and specified by the telopheme. The telophore says: "Even if I'd gotten to be how I am by a different historical route, I'd still want X."
Telophores cut across events like invention and implementation. E.g. a human may reflectively endorse a desire to have children for reasons influenced by ze's knowledge of zer evolutionary origin: ze wants to "see more of zerself in humanity". We lack the concepts to draw tight boundaries around telophores.
By showing what changes, the telotect shows what can't be part of the telophore. To see the telophore, see what the telotect doesn't change. The effort to clarify decision theory is an effort to isolate the telophore.
The fundamental question revisited
The fundamental question asks after the telopheme of natural minds in order to shed light on the telotect: to answer how a mind gets its telopheme would require understanding the telotect. The question asks after the telotect in order to shed light on the telophore: the writer of the telopheme is intertwined with the builder of the telophore. The question asks about the telophore because the telopheme can't be grasped without understanding the telophore that bears it.