Consider a different tactic: the sci-fi perspective. If you were a being outside of time and space, existing for all time, everywhere and nowhere simultaneously, how the heck would you enter the Timestream of the AIs you created? A baby seems reasonable. Let's say the original prototype was broken. Oopsie. Your AI malfunctioned. So you send a new prototype. But because freedom of choice is critical to the AIs' operation, you cannot force them to be repaired. They have to repair themselves. Their choice. Again, this is hypothetical, and I'm not trying to convert anyone, just showing how a different take on it could be almost reasonable from an engineering point of view.

Being smart in one area doesn't make you smart in another. Stephen Hawking was a total idiot when he tried to prove, based on his calculations, that God need not exist. It was ridiculous. I was embarrassed for him.

You've both been provided evidence that your definitions are unnecessarily limiting and, well, wrong. Yet you continue to assert that your definitions are true without engaging with the possibility that you could be mistaken. Is that rational?