In artificial intelligence (AI) and philosophy, the AI control problem is the issue of how to build AI systems such that they will aid their creators, while avoiding inadvertently building systems that harm them. One particular concern is that humanity would have to solve the control problem before a superintelligent AI system is created, since a poorly designed superintelligence might rationally decide to seize control over its environment and refuse to permit its creators to modify it after launch. In addition, some scholars argue that solutions to the control problem, alongside other advances in AI safety engineering, might also find applications in existing non-superintelligent AI.