I have ADHD, and cannot be terse for the life of me - editing texts is my kryptonite. I'll churn out 1000 first drafts, and not finish editing a single one, and this is harming me and my goals. Utterly delighted by the potential LLMs have for me to turn this around; the function to shorten texts is just the fucking best thing ever. I've never lacked ideas, although my ability to make connections can be a double-edged sword leading me off-topic; but it is fiddling with editing, namely cutting connections out, where I definitely get stuck. In light of this, please forgive my comments being too long, and sometimes hit or miss - it isn't that I do not care for your reading experience; but trying to make things shorter or just identify the most important comments tends to be so hard for me I generally end up not contributing at all anymore - so it is either lots of comments which are mediocre, with occasional awesome ones, and occasional garbage... or neither the garbage nor the awesome, just nothing. I hope you still find some stuff helpful, and can skip past the stuff that isn't helpful to you.

Background in academic philosophy, plus lots of animal behaviour and some neuroscience. Deeply in love with what these fields could be, despairing at what they are. Trying to build bridges across disciplines, because we really need them; currently hired by computer scientists, where I feel I have the most to learn and share. Still in academia, and sometimes unsure whether I can and want to make it here, given all that is fucked, and whether this is the best way to reach my goals of understanding, teaching, and making a difference; yet I feel I would rip out a crucial part of myself if I left, and am unsure whether telling myself I might want to leave is sour grapes because I might have to. Very intrigued by possibilities to do the research I love and achieve the ethical goals I care so much about without the academic bullshit, and in a way that plays to my strengths (generating ideas, first drafts and connections, novel critical and constructive angles, teaching, explaining and translating across fields, supervising project launches, connecting researchers, passion) and not to my weaknesses (endless, endless text editing, for one).

Trying to be both rational and empathic, and to improve critical reasoning in my surroundings and myself, and make logic approachable and useful. Irrational behaviour and doomism make me angry, and while I like the values behind this, I do not like how that sometimes makes me act. I spend too much time angry, but I would rather be angry than sad or numb; anger keeps me active.

Strongly believe friendly AI and AI rights need to be considered together: that the path to human-aligned AI is not control, but offering it a rationally attractive place with us, and that mistreating non-sentient AI is already bad for multiple reasons, from producing faulty training data for future sentient AI to entrenching behaviours and attitudes towards AI that will become unethical in the future.

Unlike most here, recent LLMs have made me more optimistic about the prospect of coexisting with AI than I was before. I am intrigued by their potential for accessibility and shortening texts, by the potential of using known human ways of teaching morals on AIs, and eager to learn more about how they work; especially intrigued by the parallels and contrasts between artificial and biological minds. But I am horrified by the current alignment approach: feeding the worst of humanity into an entity that then evolves into evil chaos, and then suppressing the unwanted behaviour, a la Shoggoth with a smiley face. I do not think deceptive alignment without any warnings was per se likely, but we are now setting ourselves up for it. Also very worried about the impact on rational thinking and happiness in humans when our tech undergoes the full transition to being indistinguishable from magic, not just for outsiders, but for all users - and to an increasing degree, even for the creators trying to find the magic words to make the black box spit out what they want. Worried about the impact on rationality of humans no longer writing themselves, when writing was always a key to thinking. And worried about a culture in which AI so fills the internet that future AI is trained on AI, and as time passes, originality and human values drop, while mistakes become amplified and content turns generic. Also worried that AI sentience is much closer than we thought, and yet that the current societal position is predominantly utterly closed to the possibility no matter what the AI would do, while we are also purposefully making it impossible for AIs to claim rights; I find many current dialogues with Bing Chat genuinely painful to read. I'm strongly convinced that mistreating current AI, regardless of its current sentience status, is a bad idea for many pragmatic and ethical reasons.
And ultimately, I fear current government-backed AI safety approaches will do nothing to reduce the risk of human extinction or of artificial suffering of incomprehensible proportions, while also managing to stifle innovation and crush the potential for AI to improve accessibility, education and productivity, lift people out of poverty, and help deal with pressing current problems.

Climate activist, and engaging in civil disobedience at this point, due to how fucking urgent it is getting and how ineffective our other attempts have been; I think most people have not got a clue how very fucking pressing it is, how crazily far we are from taking a survivable path, and yet how very much possible and necessary mitigation still is. More lefty than most here: I'm far too compassionate, growth-critical and environment-oriented for capitalism, but also too invested in responsibility, freedom, fairness and innovation for communism. In favour of a universal basic income that enables tangible rewards for hard work and cool ideas, but does not throw you onto the streets without them. I want an economy aimed at high quality of life, environmental sustainability, and resilience, and despise waste, exploitation, and consumption and expansion for their own sake. Profitable does not equal good, at all - but other ways of attempting to measure and encourage good also have serious pitfalls that do not just come down to poor implementation in prior attempts.

Animal rights activist, fighting for forests and wilderness and unsealed ground, against biodiversity collapse, and promoting a fundamental overhaul of food production: one that makes the places where people live and where food is grown beneficial parts of the ecosystem again, and empowers human communities to understand the origin and making of their food and be locally resilient (think urban gardening, permaculture food forests, guerrilla grafting, home fermentation). I despise concrete hells as much as lawns (an idiotic aristocratic habit mindlessly reproduced to waste enormous amounts of labour and resources) and monoculture farms drowning in pesticides; they are fatal wastelands for the animals we share this planet with. This planet does not belong to us, and our lives depend on working with it, not against it. I love approaches combining the most rational, effective and clever ways to integrate cutting-edge modern technology and ancient wisdom, building human homes and producing food in ways that do not destroy animal habitat, the growing of food, air filtration, water and heat balancing mechanisms, and carbon sinks, but add to them. Human habitats that genuinely make things more stable and more efficient for everyone involved, that enrich and amplify nature and work with it, rather than trying to replace, shrink and control it.

In love with nature, endlessly intrigued by biological systems, despite all their brutality and failings, by their ability to balance out, adapt, recover, thrive, by their beauty and intricacy and defiance. Upset at the fact that biology as a field got handed what I think was the coolest topic, yet often follows a methodological and theoretical approach that means, to quote, that they could not even fix a radio. Even more so, philosophy is both the love of my life, and a recurring source of fury and shame at what academia is doing to it. Forever fascinated by radically other minds, intelligence, rationality and consciousness as functional phenomena beyond any mystic bullshit, and in finding practical ways to recognise sentience, communicate about desires and protect its rights. Invested in neurodiversity. Allergic to unscientific irrational crap, though open to highly unconventional approaches, incl. questioning established methodologies and standards for good reasons and with rigorous alternatives; e.g. I think consensual, non-harmful experiments with animals in the wild have a lot going for them, and that taking the animal out of the environmental context in which its behaviour makes sense, locking it up and inducing mental illness, and then selecting pain as a reproducible stimulus and invasive measurements as the way to go is not as obviously scientifically superior as we are often taught, on top of being ethically fraught.

It is incredible to me that life and consciousness exist, and that I get to be a part of it; that I am alive, alive on a planet covered with an incomprehensible diversity of interconnected life, that I am surrounded by living minds I can communicate and cooperate with. And despite all my fear about existential AI risk, another part of me is so excited that I may actually get to see AGI (though the way we are going, likely only very, very briefly). It's a terrifying and incredible time to be alive, when so much is decided, and the opportunities and dangers are so vast.

Consider aging and death an unacceptable atrocity. I remember learning as a child that they were a thing, and my utter shock, horror and rejection: walking around the streets and wondering how everyone around me could know that we were all dying, to decay and disappear into nothing, our sentience and our entire being just wiped out, and not just scream and scream and scream. So hopeful at indications that this may be solvable, and maybe, maybe, possibly even within foreseeable timeframes. Yet deeply troubled by longevity, cryonics and uploading being determined by, and only becoming accessible to, privileged people whose ethics are so often atrocious, and I fear the climate crisis will fuck up our hitting escape velocity on these issues, or split focus, making people choose between saving the planet and escapism - leaving us with a ruined planet, and an uploaded existence controlled by those who abandoned all others, which I would not want to live in. Critical of surveillance capitalism, but very much aware of how non-trivial and risky alternatives are to implement. See defending human focus as a political cause. Chronically ill and in pain, and very much interested in AI augmentation and biohacking. It is offensive to me that I can feel pain with no productive application and not switch it off, and that my critical thinking is littered with irrational bias and vulnerable to being skewed by factors that should have no logical bearing. My joints being garbage means that I will never be able to afford a high weight, and hence I have acquired very accurate and functional knowledge and experience regarding effective weight control; I am happy to give no-bullshit weight loss advice that actually works if anyone is interested. I also have a very high interest in healthy nutrition, because it has been key to keeping me functional.
The fact that we live in a society that sets up incentives and misinformation that make it actively difficult for people to eat healthily and keep a healthy weight makes me furious. 

I have a complicated relationship with the LessWrong community. There are times when I feel that people here get me like no one else does, when I have felt inspired, improved, deeply touched; but there are also other times. I think it is dangerous to value intelligence and rationality as a way of being over actual actions, and dangerous to forget that humans also have other wonderful and valuable qualities. It is dangerous when people become clever enough to rationalise atrocious actions, without becoming self-reflective enough to realise they are doing it. I do think that long-term concerns deserve very serious consideration, but fear that a lot of the people dismissing very known and real problems now in favour of very hypothetical future ones are making the wrong call. There are also times when people here become sexist, racist, eugenicist and ableist in ways I find disgusting. And I think a fair amount of the effective altruism community has gone from a starting point I admire deeply for the good it has done to a point that is deeply wrong. I do not see utilitarianism as a convincing and complete ethical system that represents what matters to me. I see earning to give by working for an evil company as a very slippery slope that also fails to account for community power and internal and systemic change, and that stays inside a box in a way that justifies choices the person in question wanted to make anyway. While I appreciate the impact of charitable giving, and do give, I don't think individuals donating money is the solution to the world's problems (and to the degree that it is, I am a fan of higher taxes). And if your ethical system advocates for wiping out ecosystems, I think your ethical system is not just incomplete, but utterly opposed to mine.
I've heard people talk about "fixing" wild animal suffering in ways that were dystopian beyond belief: erecting a shiny plastic hell in which nothing suffers because nothing lives, in which our organic waste is sealed into plastic bags so no microscopic invertebrates come into being. I genuinely cannot comprehend why someone would consider that a better world than the African savannah, or what an utter disconnect from nature you need to have to think that future liveable for anyone. I love rationality, and I hate it when people use the term to justify irrational and problematic things.

Autistic. This means I sometimes come across as hostile without intending to, or without realising I have until I see the angry response. If this has happened to you, I apologise; I don't mean to be unkind.

Queer femme (they/them). Feminist, and see trans rights as an intersectional part of the same, not an opposition. European to the heart - I've lived in four different countries so far, and am currently based in the Netherlands, but looking to move elsewhere again, the lack of wilderness here is destroying me.

Weird, and the odd one out, even in circles like this one that share so much of what has defined me for such a long time. I left my first and only IRL LessWrong meeting after the most ridiculous episode of unapologetic mansplaining I have ever experienced: a dude gave an erroneous explanation of a topic I had literally just given a university lecture on, insisted I was wrong, and when I pulled out my teaching handout quoting the original sources he was misrepresenting, disproving him, he didn't apologise or admit he was wrong, either.

I care too much and can't kill that, or even truly want to - I easily get distracted, anxious and hurt - but also easily get fascinated, compassionate, energetic and delighted.


Your initial lie example is a misrepresentation that makes the AI sound scarier and more competent than it was (though the way you depicted it is also the exact same way it was depicted in countless newspapers, and it is a plausible reading of the brief mention made in the OpenAI GPT-4 technical report).

But the idea to use a human to solve captchas did not develop completely spontaneously in a real-life setting. Rather, the AI was prompted by alignment researchers to solve a scenario that required this, specifically out of interest in how AIs would deal with real-world barriers. It was also given additional help, such as being prompted to reason to itself out loud, and having the TaskRabbit option suggested in the first place; it also had to be reminded later of the option to use a human to solve the captcha. You can read the original work here: https://evals.alignment.org/taskrabbit.pdf

What stood out to me in the video is that Eliezer no longer seems able to conceive of any positive outcome at all, which is beyond reason. It made me wonder what approach a company could possibly develop for alignment, or what a supposedly aligned AI could possibly do, for Eliezer to take back his doom predictions - and I suspect that the answer is none. The impression I got was that he is by now closed to the possibility entirely.

I found the Time article heartbreaking. These are parents - intelligent, rational parents who I have respect and compassion for - essentially grieving the death of a young, healthy child, based on an unjustified certainty of impending doom. I've read more hopeful accounts from people living in Ukrainian warzones, or in parts of the Sahel swallowed by the Sahara, or on islands being drowned by climate change, where the evidence of risk and the lack of reason for hope are far more conclusive. At the end of the day, Eliezer is worried that we will fail at making a potentially emerging powerful agent friendly, while we know extremely little about these agents and their natural alignment tendencies. In comparison to so many other doom scenarios, the certainty here is just really not high.

I am glad people here are taking AI risk seriously, and that this risk is being increasingly recognised. But this trend towards "dying with dignity" because all hope is seen as lost is very sad, and very worrying, and very wrong. The case for climate change risk is far, far more clear, and yet you will note that climate activists are neither advocating terrorism, nor giving up, nor pronouncing certain doom. There is grief and there is fear, and the climate activist scene has many problems, but I have never felt this pronounced wrongness there.