AI ALIGNMENT FORUM

Linda Linsefors

Hi, I am a physicist, an Effective Altruist, and an AI safety student/researcher.

Comments (sorted by newest)
Circuits in Superposition: Compressing many small neural networks into one
Linda Linsefors · 4mo · 57

According to my calculation, this embedding will result in too much compounding noise. I get the same noise results as you for one layer, but the noise grows too much from layer to layer.

However, Lucius suggested a different embedding, which seems to work. 

We'll have a publication on this eventually. If you want to see the details sooner, you can message me.
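To illustrate what I mean by compounding (a toy model with made-up numbers, not the actual embedding or calculation): if every layer contributes fresh, independent noise at some fixed relative level, and the noise from earlier layers is simply carried along, the total noise grows with depth instead of staying at the single-layer level.

```python
import numpy as np

# Toy illustration only: independent noise of relative size EPS is added
# at each layer, and earlier noise is carried forward unchanged. Under
# these assumptions the accumulated relative noise grows roughly like
# sqrt(depth) * EPS; an embedding that amplifies earlier noise would
# compound even faster.

rng = np.random.default_rng(0)

D = 2000          # embedding dimension (illustrative)
EPS = 0.05        # per-layer relative noise level (illustrative)
N_LAYERS = 25

signal = rng.normal(size=D)
signal /= np.linalg.norm(signal)   # unit-norm "clean" activation

x = signal.copy()
for layer in range(1, N_LAYERS + 1):
    noise = rng.normal(size=D)
    noise *= EPS / np.linalg.norm(noise)   # fresh noise at relative scale EPS
    x = x + noise
    rel_error = np.linalg.norm(x - signal)
    print(f"layer {layer:2d}: accumulated relative noise ≈ {rel_error:.3f}")
```

With per-layer noise of 0.05, this toy version already reaches roughly 0.25 relative noise after 25 layers (0.05 × √25), which is the kind of layer-to-layer growth I'm worried about.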

Simple versus Short: Higher-order degeneracy and error-correction
Linda Linsefors · 5mo · 10

Since Bayesian statistics is both fundamental and theoretically tractable

What do you mean by "tractable" here?

Natural Latents: The Math
Linda Linsefors · 6mo* · 20

In standard form, a natural latent is always approximately a deterministic function of X. Specifically: $\Lambda(X) \approx \prod_i \big(x' \mapsto P[X_i = x'_i \mid X_{\bar i}]\big)$.

What does the arrow mean in this expression?
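My best guess (just a guess on my part, not something the post confirms) is that ↦ is the usual "maps to" notation for an anonymous function, so each factor is the function x′ ↦ P[X_i = x′_i | X̄_i], and Λ(X) is the pointwise product of those functions. A toy sketch of that reading, with a made-up two-variable joint distribution:

```python
from itertools import product

# Toy sketch of one *possible* reading (an assumption, not the post's
# confirmed meaning): each factor "x' ↦ P[X_i = x'_i | X_bar_i]" is an
# anonymous function of x', and Lambda(X) is their pointwise product:
# the function sending a candidate value x' to
#     prod_i P[X_i = x'_i | X_bar_i],
# where the conditionals are evaluated at the actually observed X.

# Made-up joint distribution over two binary variables X = (X0, X1).
P = {
    (0, 0): 0.4,
    (0, 1): 0.1,
    (1, 0): 0.1,
    (1, 1): 0.4,
}

def conditional(i, xi, x_other):
    """P[X_i = xi | X_{bar i} = x_other] for the two-variable toy case."""
    if i == 0:
        num = P[(xi, x_other)]
        den = sum(P[(a, x_other)] for a in (0, 1))
    else:
        num = P[(x_other, xi)]
        den = sum(P[(x_other, a)] for a in (0, 1))
    return num / den

def natural_latent(X):
    """Lambda(X): the function x' -> prod_i P[X_i = x'_i | X_{bar i}]."""
    x0, x1 = X
    return lambda x_prime: (
        conditional(0, x_prime[0], x1) * conditional(1, x_prime[1], x0)
    )

lam = natural_latent((0, 0))          # observe X = (0, 0)
for x_prime in product((0, 1), repeat=2):
    print(x_prime, round(lam(x_prime), 3))
```

If that reading is right, Λ(X) is built entirely from conditionals evaluated at the observed X, which would fit the claim that it's approximately a deterministic function of X. Is that the intended meaning?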

AI Safety Camp 10
Linda Linsefors · 8mo · 20

You can find their preferred contact info in each document in the Team section.

AI Safety Camp 10
Linda Linsefors · 8mo · 10

Yes there are, sort of...

You can apply to as many projects as you want, but you can only join one team. 

The reason for this is: when we've let people join more than one team in the past, they usually ended up not having time for both and dropped out of one of the projects.

What this actually means:

When you join a team you're making a promise to spend 10 or more hours per week on that project. When we say you're only allowed to join one team, what we're saying is that you're only allowed to make this promise to one project.

However, you are allowed to help out other teams with their projects, even if you're not officially on the team.

AI Safety Camp 10
Linda Linsefors · 9mo · 10

@Samuel Nellessen 
Thanks for answering Gunnar's question.

But also, I'm a bit nervous that posting their email here directly in the comments is too public, i.e. easy for spam-bots to find. 

AI Safety Camp 10
Linda Linsefors · 9mo · 10

If the research lead wants to be contactable, their contact info is in their project document, under the "Team" section. Most (or all, I'm not sure) research leads have some contact info there.

Linda Linsefors's Shortform
Linda Linsefors · 10mo · 40

Yesterday was the official application deadline for leading a project at the next AISC. This means that we just got a whole host of project proposals. 

If you're interested in giving feedback and advice to our new research leads, let me know. If I trust your judgment, I'll onboard you as an AISC advisor.

Also, it's still possible to send us a late AISC project proposal. However, we will prioritise people who applied in time when giving support and feedback. Furthermore, we'll prioritise less-late applications over more-late applications.

Invitation to lead a project at AI Safety Camp (Virtual Edition, 2025)
Linda Linsefors · 11mo · 10

At the time of writing, www.aisafety.camp goes to our new website, while aisafety.camp goes to our old website. We're working on fixing this.

If you want to spread information about AISC, please make sure to link to our new webpage, and not the old one. 

Linda Linsefors's Shortform
Linda Linsefors · 11mo · 10

Thanks!

Wikitag Contributions

Outer Alignment · 2y · (+9/-80)
Inner Alignment · 2y · (+13/-84)
Posts (sorted by new)

28 · Circuits in Superposition 2: Now with Less Wrong Math · 13d · 0
15 · AI Safety Camp 10 · 9mo · 6
5 · Invitation to lead a project at AI Safety Camp (Virtual Edition, 2025) · 11mo · 1
31 · AISC9 has ended and there will be an AISC10 · 1y · 0
22 · Some costs of superposition · 1y · 8
6 · AI Safety Camp 2024 · 2y · 0
7 · Projects I would like to see (possibly at AI Safety Camp) · 2y · 5
7 · Apply to lead a project during the next virtual AI Safety Camp · 2y · 0
17 · AI Safety Camp, Virtual Edition 2023 · 3y · 2
12 · How I think about alignment · 3y · 7
2 · Linda Linsefors's Shortform · 5y · 32