This program simulates a robot whose movement follows a bicycle model and gives it the task of solving a randomly generated maze.
For the purpose of collisions, the robot occupies a single point. The turning radius is adjustable in the code, but I was operating with a value of 2.03 meters; since each tile in the maze is 1 meter by 1 meter, the robot was very cramped in its ability to make turns within these corridors. This made finding any feasible path, let alone an optimal one, a real challenge.
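For context, the kinematic bicycle model advances the robot's position and heading from its speed and steering angle, and the steering limit is what fixes the minimum turning radius. This is a minimal sketch of that model, not the project's code; the wheelbase and steering limit below are illustrative values chosen so the math works out to roughly the 2.03 m radius mentioned above.

```python
import math

def bicycle_step(x, y, heading, speed, steer, wheelbase, dt):
    """Advance a kinematic bicycle model by one timestep.

    x, y      -- rear-axle position (meters)
    heading   -- orientation (radians)
    speed     -- forward speed (m/s)
    steer     -- front-wheel steering angle (radians)
    wheelbase -- distance between axles (meters)
    dt        -- timestep (seconds)
    """
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += speed * math.tan(steer) / wheelbase * dt
    return x, y, heading

# The minimum turning radius follows from r = wheelbase / tan(max_steer);
# e.g. a 1.0 m wheelbase with a ~26.2 degree steering limit gives r ~= 2.03 m,
# twice the width of a 1 m corridor.
```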
I used two methods to solve the problem of navigating the robot through the maze:
1. Following an A* path with scripted driving behavior
2. Following a Hybrid A* path
Both of these methods are comparable in their overall performance; each has moments where it works well and moments where it falters.
Both methods start by using A* to find the fastest route through the maze.
Once the A* path has been found, scripted driving behavior can be used to follow that path through the maze.
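This shared first stage can be sketched as ordinary A* over the maze's tile grid. This is an illustrative version, not the project's code; it assumes a 4-connected grid where 0 marks a free tile and 1 marks a wall.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """Shortest route with A* on a 4-connected grid; grid[r][c] == 0 is free."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    tie = count()  # tiebreaker so the heap never compares cells or parents
    frontier = [(h(start), next(tie), start, None)]
    g_cost, came_from = {start: 0}, {}
    while frontier:
        _, _, cell, parent = heapq.heappop(frontier)
        if cell in came_from:
            continue  # already expanded via a route at least as cheap
        came_from[cell] = parent
        if cell == goal:
            path = []
            while cell is not None:  # walk parents back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if not (0 <= nr < len(grid) and 0 <= nc < len(grid[0])):
                continue  # off the grid
            if grid[nr][nc] == 1:
                continue  # wall
            ng = g_cost[cell] + 1
            if ng < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), next(tie), nxt, cell))
    return None  # no route exists
```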
The driving behavior relies on a visibility polygon representing everything the robot can see. With the robot's vision geometrically defined, the controller can aim for open space in the maze and avoid walls while staying as close as possible to a point projected along the path.
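The "projected point along the path" idea resembles lookahead-based path tracking: find where the robot sits relative to the path, walk a fixed distance further along it, and steer toward that point. This is a simplified sketch under that assumption, not the project's actual controller; a full version would project onto path segments rather than just picking the nearest vertex.

```python
import math

def lookahead_target(path, position, lookahead):
    """Pick a point 'lookahead' meters past the robot's position along
    a polyline path (a list of (x, y) waypoints)."""
    # Nearest vertex as a stand-in for a true segment projection.
    nearest = min(range(len(path)), key=lambda i: math.dist(path[i], position))
    travelled = 0.0
    for i in range(nearest, len(path) - 1):
        seg = math.dist(path[i], path[i + 1])
        if travelled + seg >= lookahead:
            t = (lookahead - travelled) / seg  # interpolate within this segment
            ax, ay = path[i]
            bx, by = path[i + 1]
            return (ax + t * (bx - ax), ay + t * (by - ay))
        travelled += seg
    return path[-1]  # near the end of the path: aim at the goal itself

def steering_error(position, heading, target):
    """Signed angle between the robot's heading and the direction to target,
    wrapped into (-pi, pi] so left and right turns have opposite signs."""
    desired = math.atan2(target[1] - position[1], target[0] - position[0])
    return math.atan2(math.sin(desired - heading), math.cos(desired - heading))
```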
Hybrid A* finds a near-optimal path by searching a solution space of possible states the robot can reach within a given amount of time.
I came up with the idea for this solution before I was aware that Hybrid A* existed as an established algorithm. Traditionally, it generates child nodes from a set of fixed steering angles and distances. My version instead expands each node over a given timestep, producing the states reachable by applying acceleration across that timestep. Acceleration applies to both the steering angle and the wheels, so each node expands into nine children: every combination of speeding up, maintaining speed, or slowing down with steering left, holding the current steering angle, or steering right.
This creates a much larger solution space to search and is quite slow, but with the limited number of possible paths through the maze given the very tight steering radius, I found this to be better suited to my problem.
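The nine-way node expansion described above can be sketched as follows. The speed, steering, and acceleration limits here are illustrative assumptions, not the project's actual values, and the child states are integrated with a simple kinematic bicycle model.

```python
import math
from itertools import product

# Hypothetical limits for illustration; the real project's values may differ.
MAX_SPEED, MAX_STEER = 2.0, 0.46    # m/s, radians
WHEEL_ACCEL, STEER_RATE = 1.0, 0.8  # m/s^2, rad/s
WHEELBASE = 1.0                     # meters

def successors(state, dt):
    """Expand a Hybrid A*-style node by applying every combination of
    {brake, hold, accelerate} x {steer left, hold, steer right} for dt
    seconds, then integrating a kinematic bicycle model.

    state is (x, y, heading, speed, steer); returns the 9 child states.
    """
    x, y, heading, speed, steer = state
    children = []
    for a, s in product((-1, 0, 1), repeat=2):
        v = max(0.0, min(MAX_SPEED, speed + a * WHEEL_ACCEL * dt))
        phi = max(-MAX_STEER, min(MAX_STEER, steer + s * STEER_RATE * dt))
        nx = x + v * math.cos(heading) * dt
        ny = y + v * math.sin(heading) * dt
        nh = heading + v * math.tan(phi) / WHEELBASE * dt
        children.append((nx, ny, nh, v, phi))
    return children
```

Compared with expanding over a handful of fixed steering angles, this couples the branching factor to both controls at once, which is what inflates the search space.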
These videos compare the results of the two solutions by retracing two recorded paths. The green dots are separated by a constant timestep and grow a brighter green the closer the robot gets to the maze exit.
It may be a subtle difference, but the Hybrid A* solution is noticeably more elegant: its path is much more purposeful, with less meandering and fewer bumps into walls.
These visualizations were done with OpenCV. In the code, the renderer is a set of static functions that can be called by the rest of the project. No other class is aware of OpenCV; they simply call the renderer and pass it the desired geometry and colors.
The AI Controller is split into two parts: path planning and robot interfacing. Path planning is the brains; it finds a route through the maze and uses one of the two solutions to follow that route. Robot interfacing uses the robot's API to control its actions and make it follow the found path.
To ensure the physics of the robot can't be cheated or tampered with, it exposes only two public-facing functions: Accelerate() and Turn(). The AI Controller uses only these two functions to make requests of the robot.
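That encapsulation might look something like the sketch below. The function names come from the text; the internal state, limits, and clamping behavior are illustrative assumptions, since the source doesn't describe them.

```python
class Robot:
    """Sketch of the two-function robot interface. Only Accelerate() and
    Turn() are public; the physical state stays internal so the controller
    can request changes but never set speed or steering directly."""

    _MAX_SPEED = 2.0   # m/s; illustrative limit
    _MAX_STEER = 0.46  # radians; illustrative limit

    def __init__(self):
        self._speed = 0.0
        self._steer = 0.0

    def Accelerate(self, amount):
        """Request a change in wheel speed; the robot clamps it internally."""
        self._speed = max(0.0, min(self._MAX_SPEED, self._speed + amount))

    def Turn(self, amount):
        """Request a change in steering angle, clamped to the physical limit."""
        self._steer = max(-self._MAX_STEER,
                          min(self._MAX_STEER, self._steer + amount))
```

Because the controller can only nudge the controls, any physical limits the robot enforces internally apply no matter what the planner asks for.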
The visibility polygon used to represent what the robot can see is generated by the Sweeping Ray Algorithm.
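A common way to implement that sweep, sketched below under my own assumptions rather than taken from the project: cast a ray toward every wall-segment endpoint (plus two rays nudged a hair to either side so rays can slip past corners), keep the nearest wall hit per ray, and read the hit points off in angle order as the polygon's vertices.

```python
import math

def ray_segment_hit(origin, angle, seg):
    """Distance along a ray from origin at 'angle' to segment seg, or None."""
    ox, oy = origin
    dx, dy = math.cos(angle), math.sin(angle)
    (x1, y1), (x2, y2) = seg
    ex, ey = x2 - x1, y2 - y1
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:
        return None  # ray parallel to segment
    t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom  # distance along the ray
    u = ((x1 - ox) * dy - (y1 - oy) * dx) / denom  # parameter along the segment
    return t if t > 1e-9 and 0.0 <= u <= 1.0 else None

def visibility_polygon(origin, segments):
    """Sweeping-ray sketch: one ray per endpoint angle (with small nudges),
    nearest hit wins; returns hit points sorted by angle."""
    angles = set()
    for a, b in segments:
        for px, py in (a, b):
            base = math.atan2(py - origin[1], px - origin[0])
            angles.update((base - 1e-4, base, base + 1e-4))
    points = []
    for ang in sorted(angles):
        hits = [t for s in segments
                if (t := ray_segment_hit(origin, ang, s)) is not None]
        if hits:
            d = min(hits)  # nearest wall along this ray
            points.append((origin[0] + d * math.cos(ang),
                           origin[1] + d * math.sin(ang)))
    return points
```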
The maze is generated with Wilson's Algorithm which I chose for its lack of bias in creating truly random mazes.
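Wilson's algorithm grows the maze from loop-erased random walks, which is why it samples uniformly over all possible spanning-tree mazes. A minimal sketch, independent of the project's implementation:

```python
import random

def wilsons_maze(width, height, seed=None):
    """Generate a maze with Wilson's algorithm: repeatedly run a random walk
    from an unvisited cell until it reaches the growing tree, erasing any
    loops, then add the walked path to the tree. Returns a dict mapping each
    cell to the set of neighbours it has open passages to."""
    rng = random.Random(seed)
    cells = [(x, y) for x in range(width) for y in range(height)]
    passages = {c: set() for c in cells}
    in_maze = {rng.choice(cells)}  # seed the tree with one cell

    def neighbours(c):
        x, y = c
        return [(nx, ny)
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                if 0 <= nx < width and 0 <= ny < height]

    for start in cells:
        if start in in_maze:
            continue
        # Walk randomly, remembering only the LATEST exit direction per cell;
        # retracing those directions afterwards erases loops automatically.
        walk, cell = {}, start
        while cell not in in_maze:
            nxt = rng.choice(neighbours(cell))
            walk[cell] = nxt
            cell = nxt
        # Retrace the loop-erased walk, carving passages into the tree.
        cell = start
        while cell not in in_maze:
            nxt = walk[cell]
            passages[cell].add(nxt)
            passages[nxt].add(cell)
            in_maze.add(cell)
            cell = nxt
    return passages
```

Because every maze corresponds to a uniformly sampled spanning tree, no region or passage direction is favored, which is the "lack of bias" referred to above.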