Super AGI



> Thus, an AI considering whether to create a more capable AI has no guarantee that the latter will share its goals.

Ok, but why assume that an AI must replicate itself in order to enhance its capabilities? While I understand that replication could introduce another AI competitor with different values and goals, couldn't the AI instead improve itself directly? For example, it could incorporate additional training data, alter its own weights, or expand its hardware capacity.

Naturally, the AI would need to ensure that these modifications do not compromise its established values and goals. But if the changes are implemented incrementally, couldn't the AI continually assess and validate each one before proceeding? Furthermore, with routine backups of its training data, the AI could revert any changes if necessary.
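The incremental change–validate–revert loop described above can be sketched as a simple procedure. This is a toy illustration only, under loose assumptions: `evaluate` and `propose_change` are hypothetical stand-ins for whatever goal-preservation check and self-modification step a real system would use.

```python
import copy
import random

def evaluate(model):
    """Hypothetical check: score how well the model preserves its
    goals/capabilities (higher is better). A stand-in metric."""
    return sum(model["weights"])

def propose_change(model):
    """Hypothetical incremental modification, e.g. a small weight tweak."""
    changed = copy.deepcopy(model)
    i = random.randrange(len(changed["weights"]))
    changed["weights"][i] += random.uniform(-0.1, 0.1)
    return changed

def improve_incrementally(model, steps=100):
    """Apply small changes one at a time, keeping a backup so any
    change that fails validation can be rolled back."""
    for _ in range(steps):
        backup = copy.deepcopy(model)      # routine backup before the change
        candidate = propose_change(model)
        if evaluate(candidate) >= evaluate(backup):
            model = candidate              # change validated: keep it
        else:
            model = backup                 # change rejected: revert
    return model
```

Because a change is kept only when it scores at least as well as the backed-up state, the score never decreases across steps; that is the sense in which incremental validation plus rollback keeps the process from drifting.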