As an experiment, I recently decided to attempt solving Rubik’s cube using genetic algorithms (GA). If you are not familiar with GA, the main idea is to find a solution in ways that mimic natural selection and evolution. In general, these are the steps that need to be followed:

  1. Create a population of candidate solutions (usually randomly generated).
  2. Define a fitness score that will be used to evaluate how close a candidate is to achieving the goal.
  3. Evaluate all candidate solutions and sort them according to their fitness.
  4. Define a strategy regarding how to evolve the population. Usually you’ll want to keep the top performers and introduce some random changes, for example, you can slightly mutate a candidate solution or combine two solutions.
  5. Once you have a new population, rinse and repeat until you find a solution (goto step 3).
  6. Sometimes the “genetic pool” will not be good enough to find a solution. In this case the most common approach is to kill the entire population and start from scratch again.
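The steps above can be sketched as a tiny, self-contained GA loop. This is a toy problem (evolving a list of integers whose sum is zero), not the cube; the `evolve` name, the parameters, and the zero-sum goal are purely illustrative:

```python
import random

def evolve(population_size=100, elitism=10, max_generations=1000):
    # toy goal: evolve a list of 20 integers whose sum is zero
    # (fitness 0 = solved, lower is better)
    def fitness(candidate):
        return abs(sum(candidate))

    def mutate(candidate):
        # step 4: introduce a small random change
        child = candidate[:]
        child[random.randrange(len(child))] = random.randint(-9, 9)
        return child

    # step 1: random initial population
    population = [[random.randint(-9, 9) for _ in range(20)]
                  for _ in range(population_size)]

    for generation in range(max_generations):
        # steps 2-3: evaluate and sort (best candidates first)
        population.sort(key=fitness)
        if fitness(population[0]) == 0:
            return population[0]
        # step 4: keep the elite, refill with mutated copies of them
        population = population[:elitism] + [
            mutate(random.choice(population[:elitism]))
            for _ in range(population_size - elitism)
        ]
    # step 6: no solution found, the caller may restart from scratch
    return None
```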

But before talking about implementation details, some basic knowledge regarding Rubik’s cube is needed. Let’s start with notation.

Rubik’s Cube Notation

  • Single letter rotations are performed in a clockwise way.
  • Rotations that have the prime symbol are performed counterclockwise.
  • Look directly at the specified side to determine which direction is clockwise or counterclockwise. For example, for D and D' you have to imagine that you are directly facing the bottom side.
  • U2 is equivalent to two U rotations.
  • Finally, notice that there are some cases (x, y, z) which rotate the whole cube instead of a single side.
U U' U2 D D' D2
R R' R2 L L' L2
F F' F2 B B' B2
M M' M2 E E' E2
S S' S2 x y z

Implementing Rubik’s cube


There are many ways to represent a cube, but I decided to use a simple dictionary holding six 3x3 numpy matrices (one for each face).

self.faces = {
    FRONT: np.full((3, 3), GREEN),
    LEFT: np.full((3, 3), ORANGE),
    RIGHT: np.full((3, 3), RED),
    TOP: np.full((3, 3), WHITE),
    BOTTOM: np.full((3, 3), YELLOW),
    BACK: np.full((3, 3), BLUE),
}

For the front face it’s very intuitive that [0, 0] is the top-left corner. However, back[0, 0], left[0, 0] and bottom[0, 0] depend on how you’re looking at the cube, so this is the mapping used to avoid any confusion. The [0, 0] position is marked in purple. This is also consistent with the way clockwise and counterclockwise are defined.



Numpy includes many useful methods such as rot90 and flip which are handy for the cube’s rotation implementation. For example, let’s analyze R.
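One detail worth calling out: np.rot90 rotates counterclockwise by default, so the direction has to be reversed for clockwise turns. A quick demo follows; the value of the CLOCKWISE constant is my assumption about what the R implementation passes as `axes`:

```python
import numpy as np

m = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# np.rot90 rotates from the first axis towards the second, so the
# default axes=(0, 1) is counterclockwise, and reversing the axes
# order gives a clockwise rotation
CLOCKWISE = (1, 0)

print(np.rot90(m, axes=CLOCKWISE))
# [[7 4 1]
#  [8 5 2]
#  [9 6 3]]
```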


As you can see, we first rotate the right face matrix and then swap the values of the bottom, front, top and back matrices accordingly.

def R(self):
    self.faces[RIGHT] = np.rot90(self.faces[RIGHT], axes=CLOCKWISE)
    # __swap_y receives four 3-tuple values
    # Each 3-tuple has:
    # - index 0 = face
    # - index 1 = column
    # - index 2 = flip values (boolean)
    # This reads: 
    # - move "bottom" column 2 to "front"   column 2
    # - move "front"  column 2 to "top"     column 2
    # - move "top"    column 2 to "back"    column 0 flipping values
    # - move "back"   column 0 to "bottom"  column 2 flipping values
    self.__swap_y((BOTTOM, 2, False), (FRONT, 2, False), (TOP, 2, True), (BACK, 0, True))

The only tricky part is that when dealing with the back face the values need to be inverted as shown in the image.


TOP[0, 2] -> BACK[2, 0]
TOP[1, 2] -> BACK[1, 0]  
TOP[2, 2] -> BACK[0, 0]
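In numpy terms, that inversion is just a flip of the copied column. A minimal illustration:

```python
import numpy as np

top = np.arange(9).reshape(3, 3)
back = np.zeros((3, 3), dtype=int)

# moving TOP column 2 onto BACK column 0 reverses the order of the
# values, exactly as in the mapping above
back[:, 0] = np.flip(top[:, 2])

print(top[:, 2])   # [2 5 8]
print(back[:, 0])  # [8 5 2]
```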

If you are interested, you can check the full implementation in the source code.

Genetic Algorithm

Population initialization

As described in the introduction, we need to create a Rubik’s cube population for our simulation. We’ll use the following scramble that was obtained using this online scrambler.

D' B2 D2 L2 U' L R' F L2 R2 U' L2 B' L D' B2 R2 B' R F U2 R B2 F' L' B2 L2 R F2 L'

By the way, this cube simulator is excellent and I used it extensively to experiment, debug, validate test case results, generate images, etc.

Scrambled Cube

After scrambling all the cubes, we randomize each one further by performing two random moves such as R, L', U or D2.

# initialize population
cubes = []
for i in range(0, self.population_size):
    cube = Cube()
    cube.execute(scramble)
    # randomize it with two extra random moves
    # (execute applies a sequence of moves given as a string)
    cube.execute(rnd.choice(SINGLE_MOVES))
    cube.execute(rnd.choice(SINGLE_MOVES))
    cubes.append(cube)
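A scramble string has to be split into individual moves before it can be applied. Here’s a minimal tokenizer; this is my own sketch, and the real implementation may parse moves differently:

```python
import random

SINGLE_MOVES = [
    "U", "U'", "U2", "D", "D'", "D2",
    "R", "R'", "R2", "L", "L'", "L2",
    "F", "F'", "F2", "B", "B'", "B2",
]

def parse_move(token):
    # "R"  -> ("R",  1, 1): clockwise, once
    # "R'" -> ("R", -1, 1): counterclockwise, once
    # "R2" -> ("R",  1, 2): clockwise, twice
    face = token[0]
    if token.endswith("'"):
        return face, -1, 1
    if token.endswith("2"):
        return face, 1, 2
    return face, 1, 1

scramble = ("D' B2 D2 L2 U' L R' F L2 R2 U' L2 B' L D' "
            "B2 R2 B' R F U2 R B2 F' L' B2 L2 R F2 L'")
moves = [parse_move(t) for t in scramble.split()]
print(len(moves))  # 30

# two extra random moves per cube keep the initial population diverse
extra = [random.choice(SINGLE_MOVES) for _ in range(2)]
```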

Fitness function

Remember that the fitness function will allow us to determine which candidates are closer to the solution. One option would be to count how many correctly placed cube pieces we have, but that’s a little annoying to calculate; for instance, a corner counts as one piece, yet it interacts with three faces. What we can do instead is count stickers. Counting misplaced stickers is easier, so the goal will be to minimize that number. In other words, lower is better, and if the fitness is equal to zero then the cube is solved.

def __calculate_fitness(self):
    misplaced_stickers = 0

    for k, face in self.faces.items():
        # centers are fixed in a Rubik's cube
        center = face[1, 1]

        for i in range(0, 3):
            for j in range(0, 3):
                if face[i, j] != center:
                    misplaced_stickers += 1

    return misplaced_stickers
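To sanity-check the idea, here is a standalone version of the sticker count (the constants mirror the face dictionary from earlier): a solved cube scores 0, and swapping a single pair of stickers between two faces scores 2.

```python
import numpy as np

FRONT, LEFT, RIGHT, TOP, BOTTOM, BACK = range(6)
GREEN, ORANGE, RED, WHITE, YELLOW, BLUE = range(6)

def fitness(faces):
    # count stickers that don't match their face's fixed center
    return sum(int(np.sum(face != face[1, 1])) for face in faces.values())

solved = {
    FRONT: np.full((3, 3), GREEN),
    LEFT: np.full((3, 3), ORANGE),
    RIGHT: np.full((3, 3), RED),
    TOP: np.full((3, 3), WHITE),
    BOTTOM: np.full((3, 3), YELLOW),
    BACK: np.full((3, 3), BLUE),
}
print(fitness(solved))  # 0

# swap one pair of stickers between the front and right faces
solved[FRONT][0, 0], solved[RIGHT][0, 0] = RED, GREEN
print(fitness(solved))  # 2
```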

Evolution strategy

My early attempts involved randomly generating moves (R, L, U, etc.), but this strategy didn’t work at all. To understand why, we need to take a look at what happens when we perform a single rotation:


In this case, 20 stickers changed their position (3 on each of the four sides plus 8 on the top face). A cube has 54 stickers, so each time we perform a rotation we are changing 37% of the cube’s state, and that’s too much if we want to apply an evolutionary approach. So what can we do? Can we find a way to introduce smaller changes? The answer is yes, and we can use well-known Rubik’s cube algorithms for that purpose. For example, R U' R U R U R U' R' U' R2 permutes just three stickers, as you can see in the image.

Algorithm 1

Similarly, F' L' B' R' U' R U' B L F R U R' U introduces a small change (two edge permutation on the top layer).

Algorithm 2

Here’s the list of all the permutations we are going to use:

    # permutes two edges: U face, bottom edge and right edge
    "F' L' B' R' U' R U' B L F R U R' U",
    # permutes two edges: U face, bottom edge and left edge
    "F R B L U L' U B' R' F' L' U' L U'",
    # permutes two corners: U face, bottom left and bottom right
    "U2 B U2 B' R2 F R' F' U2 F' U2 F R'",
    # permutes three corners: U face, bottom left and top left
    "U2 R U2 R' F2 L F' L' U2 L' U2 L F'",
    # permutes three centers: F face, top, right, bottom
    "U' B2 D2 L' F2 D2 B2 R' U'",
    # permutes three centers: F face, top, right, left
    "U B2 D2 R F2 D2 B2 L U",
    # U face: bottom edge <-> right edge, bottom right corner <-> top right corner
    "D' R' D R2 U' R B2 L U' L' B2 U R2",
    # U face: bottom edge <-> left edge, bottom left corner <-> top left corner
    "D L D' L2 U L' B2 R' U R B2 U' L2",
    # U face: top edge <-> bottom edge, bottom left corner <-> top right corner
    "R' U L' U2 R U' L R' U L' U2 R U' L U'",
    # U face: top edge <-> bottom edge, bottom right corner <-> top left corner
    "L U' R U2 L' U R' L U' R U2 L' U R' U",
    # permutes three corners: U face, bottom right, bottom left and top left
    "F' U B U' F U B' U'",
    # permutes three corners: U face, bottom left, bottom right and top right
    "F U' B' U F' U' B U",
    # permutes three edges: F face bottom, F face top, B face top
    "L' U2 L R' F2 R",
    # permutes three edges: F face top, B face top, B face bottom
    "R' U2 R L' B2 L",
    # H permutation: U Face, swaps the edges horizontally and vertically
    "M2 U M2 U2 M2 U M2"

Now that we are able to introduce small and simple changes, it’s time to decide how we are going to evolve the current population. This part took a lot of experimentation, so I’ll just describe the strategy that worked best:

  • Promote the best performers to the next generation without changing them in any way (in GA this is called elitism).
  • Discard all the other candidate solutions and replace them by:
    • Getting a copy of a random top performer.
    • Randomly adding permutations or rotations to the copy.
    • Replacing the candidate solution with the new copy.
# the goal is to minimize the fitness function
# 0 means that the cube is solved
for i in range(0, len(cubes)):
    if cubes[i].fitness == 0:
        print("Solution found")
        return cubes[i]

    # elitism: the best performers move to the next generation without changes
    if i > self.elitism_num:
        # copy a random top performer cube
        self.__copy(cubes[i], cubes[rnd.randint(0, self.elitism_num)])
        evolution_type = rnd.randint(0, 5)

        # each type appends a different mix of random permutations,
        # rotations and orientations to the copy (the bodies below are
        # a sketch; the exact combinations are in the source code)
        if evolution_type == 0:
            cubes[i].execute(rnd.choice(PERMUTATIONS))
        elif evolution_type == 1:
            cubes[i].execute(rnd.choice(PERMUTATIONS))
            cubes[i].execute(rnd.choice(PERMUTATIONS))
        elif evolution_type == 2:
            cubes[i].execute(rnd.choice(ROTATIONS))
            cubes[i].execute(rnd.choice(PERMUTATIONS))
        elif evolution_type == 3:
            cubes[i].execute(rnd.choice(ORIENTATIONS))
            cubes[i].execute(rnd.choice(PERMUTATIONS))
        elif evolution_type == 4:
            cubes[i].execute(rnd.choice(ROTATIONS))
            cubes[i].execute(rnd.choice(ORIENTATIONS))
            cubes[i].execute(rnd.choice(PERMUTATIONS))
        elif evolution_type == 5:
            cubes[i].execute(rnd.choice(ORIENTATIONS))
            cubes[i].execute(rnd.choice(PERMUTATIONS))
            cubes[i].execute(rnd.choice(PERMUTATIONS))

If you’ve read the source code, you might wonder what an orientation is. For reasons that I’ll explain below, it worked best to separate full cube rotations into two groups: rotations and orientations (for lack of a better name).

ROTATIONS = ["x", "x'", "x2", "y", "y'", "y2"]
ORIENTATIONS = ["z", "z'", "z2"]

Now let’s take another look at permutations. Notice that the list above contains only two permutations that swap the values of two edges.

# permutes two edges: U face, bottom edge and right edge
"F' L' B' R' U' R U' B L F R U R' U",

Permutation 1

# permutes two edges: U face, bottom edge and left edge
"F R B L U L' U B' R' F' L' U' L U'",

Permutation 2

But what if we need to swap the values of two edges in the bottom or the back layer? One approach would be to include all possible permutations (not just the previous two), but that’s a lot of error-prone work. This is why we include rotations and orientations in the evolution strategy. So if we need to swap two edges in the back face, a rotation and an orientation (or vice versa) might be enough to move the cube into a position which is handled by our permutations.

Just for completeness, in GA theory there are two important concepts: mutation and crossover. Mutation involves changing a part of an existing solution. For example:

R L U2 D B' R
R R U2 D B' R

In our case, we could argue that adding moves is some sort of mutation.

Crossover, on the other hand, involves selecting two candidate solutions and creating offspring by combining their “genes”. For our Rubik’s cube case it could look like this:

Parent 1 : R L U2 D B' R
Parent 2 : F B R2 L U' D
Child : R L U2 L U' D
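Both operators are easy to sketch over move sequences. Here’s a minimal version; the single-point crossover strategy is just one of many possibilities:

```python
import random

SINGLE_MOVES = ["U", "U'", "U2", "D", "D'", "D2",
                "R", "R'", "R2", "L", "L'", "L2"]

def mutate(moves):
    # replace one randomly chosen move with a random new one
    child = moves[:]
    child[random.randrange(len(child))] = random.choice(SINGLE_MOVES)
    return child

def crossover(parent1, parent2):
    # single-point crossover: a prefix from one parent,
    # the remaining suffix from the other
    point = random.randrange(1, len(parent1))
    return parent1[:point] + parent2[point:]

p1 = ["R", "L", "U2", "D", "B'", "R"]
p2 = ["F", "B", "R2", "L", "U'", "D"]
# with point == 3 this reproduces the child above: R L U2 L U' D
print(crossover(p1, p2))
```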

However, in practice and for this specific problem, crossover didn’t perform well and actually hurt performance because the cube’s state had to be recalculated. I suspect that when every step of the solution depends on the previous steps, it’s very hard to come up with a breakthrough using crossover.

As for parameters, I got good results with the following configuration. It means that we want to have 500 cubes and that we are going to promote 50 of them to the next generation unchanged. Hence we are going to replace the other 450 cubes, each with a mutation applied to a copy of a random top-50 performer. If we don’t find a solution after 300 generations, we get rid of the population and start over (up to 10 times).

population_size = 500
elitism_num = 50
max_generations = 300
max_resets = 10

If you decide to experiment with these values, a smaller population size will speed up each generation, but you’ll risk getting stuck due to a lack of diversity in your population.

Let’s run it

python -m src.solver


Let’s check if it’s actually a valid solution. We are going to use the cube simulator.

Solution Check

It works! If you run the program multiple times, you’ll notice that you’ll get different solutions and your mileage will definitely vary depending on the Gods of Random Numbers.

Is this a good solution?

We need more cube theory to answer this question. You can read the whole article here, but this is the most important conclusion:

“With about 35 CPU-years of idle computer time donated by Google, a team of researchers has essentially solved every position of the Rubik’s Cube™, and shown that no position requires more than twenty moves.”

In our previous run we found a 276-move solution, so it’s far away from the optimal one. However, keep in mind that our permutations incrementally changed very small parts of the cube as a requirement for the GA to work, so it’s not a very fair comparison. Methods used for blindfolded solving are more similar to our approach and it’s not uncommon for those solutions to be in the hundred-move range, so maybe our solution is not that bad. 😉

But in any case, it’s a good time to explain some of the GA disadvantages:

  • It’s hard to find the best solution. What usually happens is that the genetic algorithm finds a “good enough” solution and keeps working on it.
  • Tuning the parameters can take a lot of time and experimentation.
  • The results are non-deterministic; you’ll get different solutions and the time to find them can vary quite significantly in many cases.

Final thoughts

It was a fun project and I’m satisfied with the results for now. Maybe I’ll try to optimize the fitness function or the overall strategy in the future. If you want to experiment or improve the program, you can find the source code on GitHub.

If you want to learn more about Rubik’s cube, BadMephisto’s site is a very good resource and it covers all the basic and advanced topics. Finally, if you decide to get a Rubik’s cube, please make sure to buy a speedcubing one because learning with a clunky and slow turning cube is really frustrating. You can find some nice suggestions in this video.

Thanks for reading!!!