Impact regularizers penalize an AI for affecting us too much. To reduce the risk posed by a powerful AI, you might want to make it try to accomplish its goals with as little impact on the world as possible. You reward the AI for crossing a room; to maximize time-discounted total reward, the optimal policy makes a huge mess as it sprints to the other side.
How do you rigorously define "low impact" in a way that a computer can understand – how do you measure impact? These questions are important for both prosaic and future AI systems: objective specification is hard; we don't want AI systems to rampantly disrupt their environment. In the limit of goal-directed intelligence, theorems suggest that seeking power tends to be optimal; we don't want highly capable AI systems to permanently wrench control of the future from us.
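The core idea above can be sketched in a few lines of code. This is a minimal toy illustration, not any published method: it assumes a crude proxy for impact (counting how many environment features the agent changed) and a hypothetical task reward for crossing the room; all names and numbers here are illustrative.

```python
# Toy sketch of an impact regularizer: the agent's effective reward is its
# task reward minus a scaled penalty for how much it changed the world.

def impact_penalty(state_before, state_after):
    """Crude impact proxy: count environment features the agent changed."""
    return sum(b != a for b, a in zip(state_before, state_after))

def regularized_reward(task_reward, state_before, state_after, lam=1.0):
    """Task reward minus lambda times the measured impact."""
    return task_reward - lam * impact_penalty(state_before, state_after)

# The agent earns task reward 1.0 for crossing the room, but knocks over
# a vase on the way (one environment feature changes).
before = ("vase_ok", "vase_ok")
after = ("vase_broken", "vase_ok")

print(regularized_reward(1.0, before, after, lam=0.5))  # prints 0.5
```

The hard open problem the text points at is precisely the part this sketch waves away: choosing an `impact_penalty` that tracks what we actually care about, rather than a naive feature-difference count.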
Currently, impact regularization research focuses on two approaches:
For a review of earlier work, see A Survey of Early Impact Measures.
Sequences on impact regularization:
Related tags: Instrumental Convergence, Corrigibility, Mild Optimization.