An attempt to keep your p(doom) the same?
> Training corpora have many examples of bad and immoral behavior. Therefore AIs will be
> capable of simulating immoral agents.
Yes. I see so many people working on alignment worrying about when an AI will have sufficient capacity to reinvent deceit from first principles as a convergent strategy, while forgetting that our LLMs were trained on a dataset full of real human behavior — good, bad, and indifferent — plus fictional characters including supervillains, written by a population of which O(2%) were high-functioning sociopaths concealing their sociopathy. AIs don't need to invent deceit, any more than they have to invent Paris or taxes.