The Emulation argument for human-level AI argues that since whole brain emulation seems feasible, human-level AI must also be feasible. The argument rests on many underlying assumptions, most of which are explored by Chalmers (2010)1. Perhaps the most debated premise is that a brain emulation could have a conscious mind, or that consciousness isn’t fundamental to human intelligence. Chalmers 2 formalized the argument as follows:
- (i) The human brain is a machine.
- (ii) We will have the capacity to emulate this machine (before long).
- (iii) If we emulate this machine, there will be AI.
——————
- (iv) Absent defeaters, there will be AI (before long)
A world populated by brain emulations could have severe negative consequences, such as:
- An inherent inability of emulations to be conscious, if some philosophers are right 3 4 5 6.
- The elimination of culture in general, since ever-intensifying competition would heavily penalize inefficiencies such as flamboyant displays 7.
- Near-zero reproduction costs, pushing most emulations toward a subsistence-level existence 8.
References