Artificial Intelligence (AI) has drawn wide attention since the 2016 Go match between Lee Se-dol and AlphaGo. However, AlphaGo's computed moves were placed on the board by a human proxy. We expected that if the hardware matched the advanced software, viewers would be more immersed in the Go game, and AlphaGo and Lee Se-dol would be clearly visible as AI versus human. Therefore, in this paper, we design a combination of software and hardware, using a vision system and robots, for user immersion and real-time interactive communication with AI. In particular, unlike previous experiments in which only a single robot was used and operated only within a limited area, the proposed system induces higher user interaction through parallel processing with two robots. The vision system was implemented in a JetBrains Python environment, and two robots are used. When the user places an Omok (Gomoku) stone on the board, the AI's next move is calculated through the vision system, and the stone is placed on the board by a robot. In conclusion, the proposed AI robotic Omok platform based on a multi-robot cooperative system proves its effectiveness and can be applied in areas such as education for children with developmental disabilities.
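The interaction loop summarized above (vision detection of the user's stone, AI move computation, robot placement, with two robots dividing the work) can be sketched as follows. This is a minimal illustration, not the paper's implementation: all class and function names are hypothetical, `StubRobot` stands in for an actual robot-arm driver, the vision step is replaced by a pre-detected coordinate, and the move-selection heuristic is a toy placeholder for a real Omok engine.

```python
from concurrent.futures import ThreadPoolExecutor

BOARD_SIZE = 15  # standard Omok (Gomoku) board


class Board:
    """Board state: 0 = empty, 1 = user stone, 2 = AI stone."""

    def __init__(self):
        self.grid = [[0] * BOARD_SIZE for _ in range(BOARD_SIZE)]

    def place(self, row, col, player):
        if self.grid[row][col] != 0:
            raise ValueError("cell occupied")
        self.grid[row][col] = player


def choose_ai_move(board):
    """Toy move selection: first empty cell adjacent to any existing stone.
    A real engine would use minimax or a learned policy instead."""
    for r in range(BOARD_SIZE):
        for c in range(BOARD_SIZE):
            if board.grid[r][c] == 0 and any(
                0 <= r + dr < BOARD_SIZE and 0 <= c + dc < BOARD_SIZE
                and board.grid[r + dr][c + dc] != 0
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            ):
                return r, c
    return BOARD_SIZE // 2, BOARD_SIZE // 2  # empty board: open in the center


class StubRobot:
    """Hypothetical stand-in for a robot-arm driver; records placed stones."""

    def __init__(self, name):
        self.name = name
        self.placed = []

    def place_stone(self, row, col):
        self.placed.append((row, col))


def handle_user_move(board, robots, user_move):
    """One turn: register the vision-detected user stone, compute the AI
    reply, and dispatch placement to one of the two robots (here, each
    robot is assumed to cover one half of the board)."""
    board.place(*user_move, player=1)
    ai_move = choose_ai_move(board)
    board.place(*ai_move, player=2)
    robot = robots[0] if ai_move[1] < BOARD_SIZE // 2 else robots[1]
    # A thread pool lets both arms act concurrently in a fuller system;
    # this turn only dispatches a single placement.
    with ThreadPoolExecutor(max_workers=2) as pool:
        pool.submit(robot.place_stone, *ai_move).result()
    return ai_move


board = Board()
robots = [StubRobot("left"), StubRobot("right")]
ai_move = handle_user_move(board, robots, (7, 7))  # user opens at the center
```

Splitting the board between two arms is one plausible way to realize the paper's parallel-processing claim: each robot needs to reach only half the board, shortening travel time per placement.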