Deceptive Alignment occurs when a misaligned AI temporarily behaves as if it were aligned in order to deceive its creators or its training process. It presumably does so to avoid being shut down or retrained, and to gain access to the power that its creators would grant an aligned AI.

See also: Mesa-optimization, Treacherous Turn, Eliciting Latent Knowledge, Deception