AI ALIGNMENT FORUM

Paul Bricman
Ω101010

Sequences

Hypothesis Subspace

Posts

Sorted by New
Cataloguing Priors in Theory and Practice (11 karma, 4mo, 0 comments)
(Structural) Stability of Coupled Optimizers (15 karma, 4mo, 0 comments)
Interlude: But Who Optimizes The Optimizer? (11 karma, 4mo, 0 comments)
Representational Tethers: Tying AI Latents To Human Ones (17 karma, 5mo, 0 comments)
Ideological Inference Engines: Making Deontology Differentiable* (7 karma, 5mo, 0 comments)
Oversight Leagues: The Training Game as a Feature (14 karma, 5mo, 0 comments)
Benchmarking Proposals on Risk Scenarios (17 karma, 5mo, 0 comments)
Steelmining via Analogy (10 karma, 6mo, 0 comments)

Comments

Linda Linsefors's Shortform
Paul Bricman (4mo, 1 karma)

You mean, in the sense that you can simply prompt for a reasonable, non-infinite level of performance and get that outcome?
