Rob Bensinger

Communications lead at MIRI. Unless otherwise indicated, my posts and comments here reflect my own views, and not necessarily my employer's.

Sequences

2022 MIRI Alignment Discussion
Late 2021 MIRI Conversations


Comments

Pivotal outcomes and pivotal processes

In my view, in practice, the pivotal acts framing actually pushes people to consider a more narrow space of discrete powerful actions, "sharp turns", "events that have a game-changing impact on astronomical stakes". 

My objection to Critch's post wasn't 'you shouldn't talk about pivotal processes, just pivotal acts'. On the contrary, I think bringing in pivotal processes is awesome.

My objection (more so to "Pivotal Act" Intentions, but also to the new one) is specifically to the idea that we should socially shun the concept of "pivotal acts", and socially shun people who say they think humanity needs to execute a pivotal act, or people who say positive things about some subset of pivotal acts.

This seems unwise to me, because it amounts to giving up on humanity's future in the worlds where it turns out humanity does need to execute a pivotal act. Suppose you have this combination of beliefs:

  1. Humanity probably won't need to execute any pivotal acts in order to avoid existential catastrophe.
  2. ... But there's a non-tiny chance (e.g., 10%) that at least one pivotal act will in fact be necessary.
  3. A decent number of people currently misunderstand the idea of "pivotal acts" as evil/adversarial/"villainous", in spite of the fact that there's a decent chance humanity will need someone to commit this "villainy" in order to prevent the death of every human on Earth.

I personally think that a large majority of humanity's hope lies in someone executing a pivotal act. But I assume Critch disagrees with this, and holds a view closer to 1+2+3.

If so, then I think he shouldn't go "well, pivotal acts sound weird and carry some additional moral hazards, so I will hereby push for pivotal acts to become more stigmatized and hard to talk about, in order to slightly increase our odds of winning in the worlds where pivotal acts are unnecessary".

Rather, I think hypothetical-Critch should promote the idea of pivotal processes, and try to reduce any existing stigma around the idea of pivotal acts, so that humanity is better positioned to evade destruction if we do end up needing to do a pivotal act. We should try to set ourselves up to win in more worlds.
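As a toy illustration of the "win in more worlds" point (a minimal sketch with entirely made-up numbers, not my or Critch's actual probability estimates):

```python
# Toy expected-value sketch of "set ourselves up to win in more worlds".
# All numbers are invented for illustration; they aren't anyone's actual estimates.

p_act_needed = 0.10        # premise 2: non-tiny chance a pivotal act is in fact necessary
p_win_via_process = 0.40   # chance of a good outcome via pivotal processes, when no act is needed
p_win_via_act = 0.30       # chance of a good outcome via a pivotal act, when one is needed
stigma_bonus = 0.02        # small gain in the "no act needed" worlds from stigmatizing pivotal acts

# Policy A: stigmatize pivotal acts (give up the "act needed" worlds for a small gain elsewhere).
ev_stigmatize = (1 - p_act_needed) * (p_win_via_process + stigma_bonus) + p_act_needed * 0.0

# Policy B: keep both options on the table.
ev_keep_options = (1 - p_act_needed) * p_win_via_process + p_act_needed * p_win_via_act

print(f"stigmatize pivotal acts: {ev_stigmatize:.3f}")   # 0.378
print(f"keep both options open:  {ev_keep_options:.3f}")  # 0.390
```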

(Where things in this category get straw-manned as "Rube-Goldberg-machine-like")

If you're referring to my comment, then this is itself straw-manning me!

Rube-Goldberg-ishness is a matter of degree: as you increase the complexity of a plan, it becomes harder to analyze, and tends to accumulate points of failure that reduce the probability of success. This obviously doesn't mean we should pick the simplest possible plan with no consideration for anything else; but it's a cost to keep in mind, like any other.
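A minimal sketch of that cost, under the (unrealistic) assumption that each added step is an independent point of failure:

```python
# Toy model: plan complexity as a quantitative cost.
# Assumes (unrealistically) that every step must succeed and failures are independent.

def p_success(per_step_reliability: float, n_steps: int) -> float:
    """Probability that an n-step plan succeeds end-to-end."""
    return per_step_reliability ** n_steps

for n in (3, 10, 30):
    print(f"{n:>2} steps at 95% reliability each: {p_success(0.95, n):.3f}")
# 3 steps  -> 0.857
# 10 steps -> 0.599
# 30 steps -> 0.215
```

Real plan steps aren't independent coin flips, of course; the point is only that added complexity is a quantitative cost to weigh, not a disqualifier.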

I mentioned this as a quantitative cost to keep in mind; "things in this category get straw-manned as 'Rube-Goldberg-machine-like'" seems to either be missing the fact that this is a real cost, or treating me as making some stronger and more specific claim.

As often, one of the actual cruxes is in continuity assumptions, where basically you have a low prior on "smooth trajectory changes by many acts" and high prior on "sharp turns left or right".
 

This seems wrong to me, in multiple respects:

  • Continuity assumptions are about what's likely to happen, not about what's desirable. It would be a separate assumption to say "continuity is always good", and I worry that a reasoning error is occurring if this is being conflated with "continuity tends to occur".

    Why this matters here: My claim is that pivotal acts are likely to be necessary for good outcomes, not that they're necessarily likely to occur. If your choices are "execute a pivotal act, or die", then insofar as you're confident this is the case, the base rate of continuous events just isn't relevant.
     
  • The primary argument for hard takeoff isn't "stuff tends to be discontinuous"; it's "AGI is a powerful invention, and e.g. GPT-3 isn't a baby AGI". The discontinuity of hard takeoff is not a primitive; it's an implication of the claim that AGI is different from current AI tech, that it contains a package of qualitatively new kinds of cognition that aren't just 'what GPT-3 is currently doing, but scaled up'.

    No one claims that AlphaGo needs to be continuous with theorem-proving AI systems, or that a washing machine needs to be continuous with a chariot. The core disagreement here is about whether X and Y are the same kind of thing, not about whether incremental tweaks to a given kind of thing tend to produce small improvements.

I think you should be more of a fox with respect to continuity, and less of a hedgehog. The reason hard takeoff is very likely true isn't some grand, universal Discontinuity Narrative. It's just that different things work differently. Sometimes you get continuities; sometimes you don't. To figure out which is which, you need to actually analyze the specific phenomenon under discussion, not just consult the universal cosmic base rate of continuity.

(And indeed, I think Paul is doing a lot more 'analyze the specific phenomenon under discussion' than you seem to give him credit for. I think it's straw-manning Paul and Eliezer to reduce their disagreement to a flat 'we have different priors about how many random things tend to be continuous'.)

Second crux, as you note, is doom-by-default probability: if you have a very high doom probability, you may be in favour of variance-increasing acts

I agree with this in general, but I think this is a wrong lens for thinking about pivotal acts. On my model, a pivotal act isn't a hail mary that you attempt because you want to re-roll the dice; it's more like a very specific key that is needed in order to open a very specific lock. Achieving good outcomes is a very constrained problem, and you need to do a lot of specific things in order to make things go well.

We may disagree about variance-increasing tactics in other domains, but our disagreement about pivotal acts is about whether some subset of the specific class of keys called 'pivotal acts' is necessary and/or sufficient to open the lock.

Given these deep prior differences, it seems reasonable to assume this discussion will lead nowhere in particular. (I've a draft with a more explicit argument why.)

I'm feeling much more optimistic than you about trying to resolve these points, in part because I feel that you've misunderstood almost every aspect of my view and of my comment above! If you're that far from passing my ITT, then there's a lot more hope that we may converge in the course of incrementally changing that.

(Or non-incrementally changing that. Sometimes non-continuous things do happen! 'Gaining understanding of a topic' being a classic example of a domain with many discontinuities.)

Pivotal outcomes and pivotal processes

An example of a possible "pivotal act" I like that isn't "melt all GPUs" is:

Use AGI to build fast-running high-fidelity human whole-brain emulations. Then run thousands of very-fast-thinking copies of your best thinkers. Seems to me this plausibly makes it realistic to keep tabs on the world's AGI progress, and locally intervene before anything dangerous happens, in a more surgical way rather than via mass property destruction of any sort.

Looking for pivotal acts that are less destructive (and, more importantly for humanity's sake, less difficult to align) than "melt all GPUs" seems like a worthy endeavor to me. But I prefer the framing 'let's discuss the larger space of pivotal acts, brainstorm new ideas, and try to find options that are easier to achieve, because that particular toy proposal seems suboptimally dangerous and there just hasn't been very much serious analysis and debate about pathways'. In the course of that search, if it then turns out that the most likely-to-succeed option is a process, then we should obviously go with a process.

But I don't like constraining that search to 'processes only, not acts', because:

  • (a) I'm guessing something more local, discrete, and act-like will be necessary, even if it's less extreme than "melt all GPUs";
  • (b) insofar as I'm uncertain about which paths will be viable and think the problem is already extremely hard and extremely constrained, I don't want to further narrow the space of options that humanity can consider and reason through;
  • (c) I worry that the "processes" framing will encourage more Rube-Goldberg-machine-like proposals, where the many added steps and layers and actors obscure the core world-saving cognition and action, making it harder to spot flaws and compare tradeoffs;
  • and (d) I worry that the extra steps, layers, and actors will encourage "design by committee" and slow-downs that doom otherwise-promising projects.

I suspect we also have different intuitions about pivotal acts because we have different high-level pictures of the world's situation.

I think that humanity as it exists today is very far off from thinking like a serious civilization would about these issues. As a consequence, our current trajectory has a negligible chance of producing good long-run outcomes. Rather than trying to slightly nudge the status quo toward marginally better thinking, we have more hope if we adopt a heuristic like 'speak candidly and realistically about things, as though we lived on the Earth that does take these issues seriously', and hope that this seriousness and sanity might be infectious.

On my model, we don't have much hope if we continue to half-say-the-truth, and continue to make small steady marginal gains, and continue to talk around the hard parts of the problem; but we do have the potential within us to just drop the act and start fully sharing our models and being real with each other, including being real about the parts where there will be harsh disagreements.

I think that a large part of the reason humanity is currently endangering itself is that everyone is too focused on 'what's in the Overton window?', and is too much trying to finesse each other's models and attitudes, rather than blurting out their actual views and accepting the consequences.

This makes the situation I described in The inordinately slow spread of good AGI conversations in ML much stickier: very little of the high-quality / informed public discussion of AGI is candid and honest, and people notice this, so updating and epistemic convergence is a lot harder; and everyone is dissembling in the same direction, toward 'be more normal', 'treat AGI more like business-as-usual', 'pretend that the future is more like the past'.

All of this would make me less eager to lean into proposals like "yes, let's rush into establishing a norm that large parts of the strategy space are villainous and not to be talked about" even if I agreed that pivotal processes are a better path to long-run good outcomes than pivotal acts. This is inviting even more of the central problem with current discourse, which is that people don't feel comfortable even talking about their actual views.

You may not think that a pivotal act is necessary, but there are many who disagree with you. Of those, I would guess that most aren't currently willing to discuss their thoughts, out of fear that the resultant discussion will toss norms of scholarly discussion out the window. This seems bad to me, and not like the right direction for a civilization to move into if it's trying to emulate 'the kind of civilization that handles AGI successfully'. I would rather a world where humanity's best and brightest were debating this seriously, doing scenario analysis, assigning probabilities and considering specific mainline and fallback plans, etc., over one where we prejudge 'discrete pivotal acts definitely won't be necessary' and decide at the outset to roll over and die if it does turn out that pivotal acts are necessary.

My alternative proposal would be: Let's do scholarship at the problem, discuss it seriously, and not let this topic be ruled by 'what is the optimal social-media soundbite?'.

If the best idea sounds bad in soundbite form, then let's have non-soundbite-length conversations about it. It's an important enough topic, and a complex enough one, that this would IMO be a no-brainer in a world well-equipped to handle developments like AGI.

it's safer to aim for a pivotal outcome to be carried out by a distributed process spanning multiple institutions and states, because the process can happen in a piecemeal fashion that doesn't change the whole world at once

We should distinguish "safer" in the sense of "less likely to cause a bad outcome" from "safer" in the sense of "less likely to be followed by a bad outcome".

E.g., the FDA banning COVID-19 testing in the US in the early days of the pandemic was "safer" in the narrow sense that they legitimately reduced the risk that COVID-19 tests would cause harm. But the absence of testing resulted in much more harm, and was "unsafe" in that sense.
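To make the two senses concrete, here's a toy comparison with invented numbers (not estimates of actual COVID-19 harms):

```python
# Two senses of "safer", with made-up numbers purely for illustration.

# Sense 1: harm *caused by* the thing you're regulating (here, faulty tests).
harm_caused_by_tests = {"tests allowed": 1, "tests banned": 0}

# Sense 2: total harm that *follows* the policy, including harm from not having the thing.
total_harm = {"tests allowed": 1 + 10,   # faulty-test harm + pandemic harm with early detection
              "tests banned": 0 + 100}   # no faulty-test harm, but far more undetected spread

print("Sense 1 favors banning:", harm_caused_by_tests["tests banned"] < harm_caused_by_tests["tests allowed"])  # True
print("Sense 2 favors banning:", total_harm["tests banned"] < total_harm["tests allowed"])                      # False
```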

Similarly: I'm mildly skeptical that humanity refusing to attempt any pivotal acts makes us safer from the particular projects that enact this norm. But I'm much more skeptical that humanity refusing to attempt any pivotal acts makes us safer from harm in general. These two versions of "safer" need to be distinguished and argued for separately.

Any proposal that adds red tape, inefficiencies, slow-downs, process failures, etc. will make AGI projects "safer" in the first sense, inasmuch as it cripples the project or slows it down to the point of irrelevance.

As someone who worries that timelines are probably way too short for us to solve enough of the "pre-AGI alignment prerequisites" to have a shot at aligned AGI, I'm a big fan of sane, non-adversarial ideas that slow down the field's AGI progress today.

But from my perspective, the situation is completely reversed when you're talking about slowing down a particular project's progress when they're actually building, aligning, and deploying their AGI.

At some point, a group will figure out how to build AGI. When that happens, I expect an AGI system to destroy the world within just a few years, if no pivotal act or process finishes occurring first. And I expect safety-conscious projects to be at a major speed disadvantage relative to less safety-conscious projects.

Adding any unnecessary steps to the process—anything that further slows down the most safety-conscious groups—seems like suicide to me, insofar as it either increases the probability that the project fails to produce a pivotal outcome in time, or increases the probability that the project cuts more corners on safety because it knows that it has that much less time.

I obviously don't want the first AGI projects to rush into a half-baked plan and destroy the world. First and foremost, do not destroy the world by your own hands, or commit the fallacy of "something must be done, and this is something!".

But I feel more worried about AGI projects insofar as they don't have a lot of time to carefully align their systems (so I'm extremely reluctant to tack on any extra hurdles that might slow them down and that aren't crucial for alignment), and also more worried insofar as they haven't carefully thought about stuff like this in advance. (Because I think a pivotal act is very likely to be necessary, and I think disaster is a lot more likely if people don't feel like they can talk candidly about it, and doubly so if they're rushing into a plan like this at the last minute rather than having spent decades prior carefully thinking about and discussing it.)

Let's See You Write That Corrigibility Tag

Some hopefully-unnecessary background info for people attempting this task:

A description of corrigibility Eliezer wrote a few months ago: "'corrigibility' is meant to refer to the sort of putative hypothetical motivational properties that prevent a system from wanting to kill you after you didn't build it exactly right".

An older description of "task-directed AGI" he wrote in 2015-2016: "A task-based AGI is an AGI intended to follow a series of human-originated orders, with these orders each being of limited scope", where the orders can be "accomplished using bounded amounts of effort and resources (as opposed to the goals being more and more fulfillable using more and more effort)."

A central AI alignment problem: capabilities generalization, and the sharp left turn

Ronny Fernandez on Twitter:

I think I don’t like AI safety analogies with human evolution except as illustrations. I don’t think they’re what convinced the people who use those analogies, and they’re not what convinced me. You can convince yourself of the same things just by knowing some stuff about agency.

Corrigibility, human values, and figure-out-while-aiming-for-human-values, are not short description length. I know because I’ve practiced finding the shortest description lengths of things a lot, and they just don’t seem like the right sort of thing.

Also, if you get to the level where you can realize when you’ve failed, and you try it over and over again, you will find that it is very hard to find a short description of any of these nice things we want.

And so this tells us that a general intelligence we are happy we built is a small target within the wide basin of general intelligence

Ideal agency is short description length. I don’t think particular tractable agency is short description length, and ml cares about run time, but there are heuristic approximations to ideal agency, and there are many different ones because ideal agency is short description length

So this tells us that there is a wide basin of attraction for general intelligence.

AGI Ruin: A List of Lethalities

From an Eliezer comment:

Interventions on the order of burning all GPUs in clusters larger than 4 and preventing any new clusters from being made, including the reaction of existing political entities to that event and the many interest groups who would try to shut you down and build new GPU factories or clusters hidden from the means you'd used to burn them, would in fact really actually save the world for an extended period of time and imply a drastically different gameboard offering new hopes and options. [...]

If Iceland did this, it would plausibly need some way to (1) not have its AGI project bombed in response, and (2) be able to continue destroying GPUs in the future if new ones are built, until humanity figures out 'what it wants to do next'. This more or less eliminates the time pressure to rush figuring out what to do next, which seems pretty crucial for good long-term outcomes. It's a much harder problem than just 'cause all GPUs to stop working for a year as a one-time event', and I assume Eliezer's focusing on nanotech in part because it's a very general technology that can be used for tasks like those as well.

A central AI alignment problem: capabilities generalization, and the sharp left turn

I kind of like the analogous idea of an alignment target as a repeller cone / dome.

Corrigibility is a repeller. Human values aren't a repeller, but they're a very narrow target to hit.
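A minimal toy sketch of that picture, with a made-up one-dimensional "alignment parameter" (purely illustrative, not a model of any real training process):

```python
# Toy dynamics: a repelling target amplifies small errors; a non-repelling target corrects them.
import random

def update(x, target=0.0, repel=False, rate=0.1, noise=0.02):
    """One noisy update step: drift away from (repel) or toward (attract) the target."""
    drift = (x - target) if repel else (target - x)
    return x + rate * drift + random.gauss(0, noise)

random.seed(0)
x_repel = x_attract = 0.01   # start just slightly off the target
for _ in range(100):
    x_repel = update(x_repel, repel=True)       # repelling target: small deviations grow
    x_attract = update(x_attract, repel=False)  # non-repelling target: small deviations shrink
                                                # (though you still have to land near it at all)

print(f"repelling target ends at:     {x_repel:+.2f}")   # far from the target
print(f"non-repelling target ends at: {x_attract:+.2f}") # near the target
```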

Continuity Assumptions

A lot of models of what can or can't work in AI alignment depend on intuitions about whether to expect "true discontinuities" or just "steep bits".

Note that Nate and Eliezer expect there to be some curves you can draw after-the-fact that show continuity in AGI progress on particular dimensions. They just don't expect these to be the curves with the most practical impact (and they don't think we can identify the curves with foresight, in 2022, to make strong predictions about AGI timing or rates of progress).
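A hedged toy illustration of that distinction (a made-up logistic capability curve, not anyone's actual forecast): a curve can be perfectly continuous on some dimension while nearly all of its practical impact lands inside a narrow window.

```python
# A smooth, continuous curve whose practically relevant change happens in a narrow window.
import math

def capability(t, steepness=2.0, midpoint=0.0):
    """A made-up logistic 'capability' curve; continuous everywhere."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

threshold = 0.9   # invented "this is where it matters in practice" level
for t in range(-5, 6):
    crossed = capability(t) >= threshold > capability(t - 1)
    print(f"t={t:+d}  capability={capability(t):.3f}" + ("  <-- crosses the threshold" if crossed else ""))
```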

Quoting Nate in 2018:

On my model, the key point is not 'some AI systems will undergo discontinuous leaps in their intelligence as they learn,' but rather, 'different people will try to build AI systems in different ways, and each will have some path of construction and some path of learning that can be modeled relatively well by some curve, and some of those curves will be very, very steep early on (e.g., when the system is first coming online, in the same way that the curve "how good is Google’s search engine" was super steep in the region between "it doesn’t work" and "it works at least a little"), and sometimes a new system will blow past the entire edifice of human knowledge in an afternoon shortly after it finishes coming online.' Like, no one is saying that Alpha Zero had massive discontinuities in its learning curve, but it also wasn't just AlphaGo Lee Sedol but with marginally more training: the architecture was pulled apart, restructured, and put back together, and the reassembled system was on a qualitatively steeper learning curve.

My point here isn't to throw 'AGI will undergo discontinuous leaps as they learn' under the bus. Self-rewriting systems likely will (on my models) gain intelligence in leaps and bounds. What I’m trying to say is that I don’t think this disagreement is the central disagreement. I think the key disagreement is instead about where the main force of improvement in early human-designed AGI systems comes from — is it from existing systems progressing up their improvement curves, or from new systems coming online on qualitatively steeper improvement curves?

And quoting Eliezer more recently:

if the future goes the way I predict and yet anybody somehow survives, perhaps somebody will draw a hyperbolic trendline on some particular chart where the trendline is retroactively fitted to events including those that occurred in only the last 3 years, and say with a great sage nod, ah, yes, that was all according to trend, nor did anything depart from trend

And:

There is, I think, a really basic difference of thinking here, which is that on my view, AGI erupting is just a Thing That Happens and not part of a Historical Worldview or a Great Trend.

Human intelligence wasn't part of a grand story reflected in all parts of the ecology, it just happened in a particular species.

Now afterwards, of course, you can go back and draw all kinds of Grand Trends into which this Thing Happening was perfectly and beautifully fitted, and yet, it does not seem to me that people have a very good track record of thereby predicting in advance what surprising news story they will see next - with some rare, narrow-superforecasting-technique exceptions, like the Things chart on a steady graph and we know solidly what a threshold on that graph corresponds to and that threshold is not too far away compared to the previous length of the chart.

One day the Wright Flyer flew. Anybody in the future with benefit of hindsight, who wanted to, could fit that into a grand story about flying, industry, travel, technology, whatever; if they've been on the ground at the time, they would not have thereby had much luck predicting the Wright Flyer. It can be fit into a grand story but on the ground it's just a thing that happened. It had some prior causes but it was not thereby constrained to fit into a storyline in which it was the plot climax of those prior causes.

My worldview sure does permit there to be predecessor technologies and for them to have some kind of impact and for some company to make a profit, but it is not nearly as interested in that stuff, on a very basic level, because it does not think that the AGI Thing Happening is the plot climax of a story about the Previous Stuff Happening.

And:

I think the Hansonian viewpoint - which I consider another gradualist viewpoint, and whose effects were influential on early EA and which I think are still lingering around in EA - seemed surprised by AlphaGo and Alpha Zero, when you contrast its actual advance language with what actually happened.  Inevitably, you can go back afterwards and claim it wasn't really a surprise in terms of the abstractions that seem so clear and obvious now, but I think it was surprised then; and I also think that "there's always a smooth abstraction in hindsight, so what, there'll be one of those when the world ends too", is a huge big deal in practice with respect to the future being unpredictable.

 

(As an example, compare Paul Christiano's post on takeoff speeds from 2018, which is heavily about continuity, to the debate between Paul and Eliezer in late 2021. Despite the participants spending years in discussion, progress on bridging the continuous-discrete gap between them seems very limited.)

Paul and Eliezer have had lots of discussions over the years, but I don't think they talked about takeoff speeds between the 2018 post and the 2021 debate?

AGI Ruin: A List of Lethalities

I'm not Eliezer, but my high-level attempt at this:

[...] The things I'd mainly recommend are interventions that:

  • Help ourselves think more clearly. (I imagine this including a lot of trying-to-become-more-rational, developing and following relatively open/honest communication norms, and trying to build better mental models of crucial parts of the world.)
  • Help relevant parts of humanity (e.g., the field of ML, or academic STEM) think more clearly and understand the situation.
  • Help us understand and resolve major disagreements. (Especially current disagreements, but also future disagreements, if we can e.g. improve our ability to double-crux in some fashion.)
  • Try to solve the alignment problem, especially via novel approaches.
    • In particular: the biggest obstacle to alignment seems to be 'current ML approaches are super black-box-y and produce models that are very hard to understand/interpret'; finding ways to better understand models produced by current techniques, or finding alternative techniques that yield more interpretable models, seems like where most of the action is.
  • Think about the space of relatively-plausible "miracles" [i.e., positive model violations], think about future evidence that could make us quickly update toward a miracle-claim being true, and think about how we should act to take advantage of that miracle in that case.
  • Build teams and skills that are well-positioned to take advantage of miracles when and if they arise. E.g., build some group like Redwood into an org that's world-class in its ability to run ML experiments, so we have that capacity already available if we find a way to make major alignment progress in the future.

This can also include indirect approaches, like 'rather than try to solve the alignment problem myself, I'll try to recruit physicists to work on it, because they might bring new and different perspectives to bear'.

Though I definitely think there's a lot to be said for more people trying to solve the alignment problem themselves, even if they're initially pessimistic they'll succeed!

I think alignment is still the big blocker on good futures, and still the place where we're most likely to see crucial positive surprises, if we see them anywhere -- possibly Eliezer would disagree here.

AGI Ruin: A List of Lethalities

I think most worlds that successfully navigate AGI risk have properties like:

  • AI results aren't published publicly, going back to more or less the field's origin.
  • The research community deliberately steers toward relatively alignable approaches to AI, which includes steering away from approaches that look like 'giant opaque deep nets'.
    • This means that you need to figure out what makes an approach 'alignable' earlier, which suggests much more research on getting de-confused regarding alignable cognition.
      • Many such de-confusions will require a lot of software experimentation, but the kind of software/ML that helps you learn a lot about alignment as you work with it is itself a relatively narrow target that you likely need to steer towards deliberately, based on earlier, weaker deconfusion progress. I don't think having DL systems on hand to play with has helped humanity learn much about alignment thus far, and by default, I don't expect humanity to get much more clarity on this before AGI kills us.
  • Researchers focus on trying to predict features of future systems, and trying to get mental clarity about how to align such systems, rather than focusing on 'align ELIZA' just because ELIZA is the latest hot new thing. Make and test predictions, back-chain from predictions to 'things that are useful today', and pick actions that are aimed at steering — rather than just wandering idly from capabilities fad to capabilities fad.
    • (Steering will often fail. But you'll definitely fail if you don't even try. None of this is easy, but to date humanity hasn't even made an attempt.)
  • In this counterfactual world, deductive reasoners and expert systems were only ever considered a set of toy settings for improving our intuitions, never a direct path to AGI.
    • (I.e., the civilization was probably never that level of confused about core questions like 'how much of cognition looks like logical deduction?'; their version of Aristotle or Plato, or at least Descartes, focused on quantitative probabilistic reasoning. It's an adequacy red flag that our civilization was so confused about so many things going into the 20th century.)

To me, all of this suggests a world where you talk about alignment before you start seeing crazy explosions in capabilities. I don't know what you mean by "we didn't even have the concept of machine learning back then", but I flatly don't buy that the species that landed on the Moon isn't capable of generating a (more disjunctive) version of the OP's semitechnical concerns pre-AlexNet.

You need the norm of 'be able to discuss things before you have overwhelming empirical evidence', and you need the skill of 'be good at reasoning about such things', in order to solve alignment at all; so it's a no-brainer that not-wildly-incompetent civilizations at least attempt literally any of this.

AGI Ruin: A List of Lethalities

I understand the first part of your comment as "sure, it's possible for minds to care about reality, but we don't know how to target value formation so that the mind cares about a particular part of reality." Is this a good summary? 

Yes!

I was, first, pointing out that this problem has to be solvable, since the human genome solves it millions of times every day! 

True! Though everyone already agreed (e.g., EY asserted this in the OP) that it's possible in principle. The updatey thing would be if the case of the human genome / brain development suggests it's more tractable than we otherwise would have thought (in AI).

Seems to me like it's at least a small update about tractability, though I'm not sure it's a big one? Would be interesting to think about the level of agreement between different individual humans with regard to 'how much particular external-world things matter'. Especially interesting would be cases where humans consistently, robustly care about a particular external-world thingie even though it doesn't have a simple sensory correlate.

(E.g., humans developing to care about sex is less promising insofar as it depends on sensory-level reinforcement such as orgasms. Humans developing to care about 'not being in the Matrix / not being in an experience machine' is possibly more promising, because it seems like a pretty common preference that doesn't get directly shaped by sensory rewards.)

3. Producing a mind which reliably terminally values a specific non-sensory entity, like diamonds

Is the distinction between 2 and 3 that "dog" is an imprecise concept, while "diamond" is precise? FWIW, 2 and 3 currently sound very similar to me, if 2 is 'maximize the number of dogs' and 3 is 'maximize the number of diamonds'.

If you could reliably build a dog maximizer, I think that would also be a massive win and would maybe mean that the alignment problem is mostly-solved. (Indeed, I'm inclined to think that's a harder feat than building a diamond maximizer, and I think being able to build a diamond maximizer would also suggest the strawberry-grade alignment problem is mostly solved.)

But maybe I'm misunderstanding 2.

Nope, wasn't meaning any of these! I was talking about "causing the optimizer's goals to point at things in the real world" the whole time.

Cool!

I'll look more at your shards document and think about your arguments here. :)
