Last time, we considered the linear equations and linear inequalities that the solution of a nonogram must satisfy. Let us now solve the minimization problem and see how well compressive sensing works. I will consider several examples, then offer a simple fix for the problems that we will encounter.

# 5. Writing code

There are three coding aspects that I want to go over: (1) how to pass the nonogram information to our code, (2) how to efficiently create the matrices $A$ and $B$, and (3) how to solve the 1-norm minimization problem.

## a. Input files

In order to create the constraints, our code should know the size of the nonogram, the row sequences, and the column sequences. We create a text file (.txt), whose first line shows the number of rows and number of columns, and whose subsequent lines show the row sequences and column sequences.

Consider the **stylish lambda**.

For this nonogram, the input file looks like this:

```
4 3
2
1
3
1 1
1 2
1 1
3
```

We also want our code to know the true solution (the solution that we had in mind). We create another text file and list the basis vectors that are needed to describe the true solution, one per line: each line shows the indices that identify one basis vector.

Here is the solution file for the **stylish lambda**:

```
1 2 1
2 1 3
3 3 1
4 1 1
4 1 3
```
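To make the file format concrete, here is a minimal Python sketch of a parser for the input file described above. The function name and the toy 2-by-2 nonogram are made up for illustration; the actual code is in Matlab.

```python
def parse_nonogram(text):
    """Parse an input file: the first line gives the number of rows and
    columns, followed by one line per row sequence, then one line per
    column sequence."""
    lines = [line.split() for line in text.strip().splitlines()]
    m, n = int(lines[0][0]), int(lines[0][1])
    row_sequences = [[int(t) for t in line] for line in lines[1:1 + m]]
    column_sequences = [[int(t) for t in line] for line in lines[1 + m:1 + m + n]]
    return m, n, row_sequences, column_sequences

# A toy 2-by-2 nonogram (an "L" shape) written in the same format
example = "2 2\n1\n2\n2\n1"
m, n, rows, cols = parse_nonogram(example)
```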

## b. Sparse matrices

If you look back, you will see that the matrices $A$ and $B$ that we derived are sparse: many of their entries are 0. In that case, it can be computationally more efficient to store only the locations and values of the nonzero entries. We know from Section 4 how many nonzero entries there are and where they appear.

In Matlab, we create a sparse matrix by calling the **sparse** routine and providing arrays of the row indices, column indices, and values of the nonzero entries.

```matlab
% Create a sparse matrix A
A = sparse(A_row, A_col, A_val);
```
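The same triplet (row, column, value) storage exists outside Matlab. As a sanity check, here is a Python sketch using SciPy's `coo_matrix`, which takes the same three arrays; the example data is made up, and note that Python indexes from 0 while Matlab indexes from 1.

```python
import numpy as np
from scipy.sparse import coo_matrix

# Locations and values of the nonzero entries (illustrative data)
A_row = np.array([0, 1, 2])
A_col = np.array([0, 2, 1])
A_val = np.array([1.0, -1.0, 2.0])

# Build the sparse matrix; only the 3 nonzero entries are stored
A = coo_matrix((A_val, (A_row, A_col)), shape=(3, 3))
```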

## c. Solver

We use Matlab’s **intlinprog** (mixed-integer linear programming) routine to solve the minimization problem

$$\min_{x} \; \|x\|_1 \quad \text{subject to} \quad Ax = b, \quad Bx \leq c, \quad x \in \{0, 1\}^N.$$

These are the inputs that we pass:

```matlab
% Solve for x
x = intlinprog(ones(N, 1), ...
               1 : double(N), ...
               B, c, ...
               A, b, ...
               zeros(N, 1), ones(N, 1), ...
               []);
```

Lines 4-5 are clear. The input matrices and vectors form the linear equations and inequalities that the solution must satisfy.

The vector of ones on line 2 corresponds to the objective function. Recall that, because our solution vector takes nonnegative values, we can write the 1-norm of $x$ as a linear combination of its entries:

$$\|x\|_1 = x_1 + x_2 + \cdots + x_N = \mathbf{1}^{T} x.$$

The vector **1 : double(N)** on line 3 lists the indices 1 through N, telling Matlab that all entries of $x$ must be integers, while the vectors of zeros and ones on line 6 set their lower and upper bounds. Together, this is how we ask Matlab to look for a solution vector with binary values.

Lastly, the empty array on line 7 tells Matlab to use its best judgment when solving the minimization problem. Alternatively, we could specify how to preprocess the problem (make it easier to solve), how to make branches and cuts (advanced optimization techniques), and so on. Right now, we lack the optimization expertise to fine-tune these options. Note that the solution vector we obtain can differ depending on which options are selected.
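For readers without a Matlab license, the same binary 1-norm minimization can be set up in Python with SciPy's `milp` routine (available in SciPy 1.9+). The tiny matrices below are made-up stand-ins for our constraint matrices, not actual nonogram data.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Made-up stand-ins for the nonogram constraints
A = np.array([[1, 1, 0],
              [0, 1, 1]])   # equality constraints A x = b
b = np.array([1, 1])
B = np.array([[1, 0, 1]])   # inequality constraints B x <= c
c = np.array([1])
N = 3

# Minimize the 1-norm: for nonnegative x, ||x||_1 = sum(x)
result = milp(
    c=np.ones(N),                                    # objective coefficients
    constraints=[LinearConstraint(A, b, b),          # A x = b
                 LinearConstraint(B, -np.inf, c)],   # B x <= c
    integrality=np.ones(N),                          # all entries integer
    bounds=Bounds(0, 1),                             # 0 <= x <= 1, i.e. binary
)
x = np.round(result.x).astype(int)
```

Here the unique sparsest feasible vector is (0, 1, 0), which satisfies both equality constraints with a single shaded entry.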

# 6. Examples

## a. Stylish lambda

5 out of 24 basis vectors are used to solve the nonogram. There are 16 equations and 16 inequalities.

We see that the row sequences are satisfied. In particular, the two blocks on row 4 are correctly spaced apart. However, we get a “flipped” solution because we enforced the column sequences only weakly. The fact that the column sums are satisfied does not guarantee that we will get the correct solution.

## b. Car

20 out of 288 basis vectors are used. There are 73 equations and 98 inequalities.

Here, notice that the blocks appear in the correct order for rows 3 and 5, which have non-repeating block sizes.

## c. Square root of i

14 out of 288 basis vectors are used. There are 73 equations and 84 inequalities.

Finally, an example where we successfully recover the solution. The success isn’t “real,” however, since a column with 1 cell or with 8 cells (spanning all rows) is easy to satisfy using its column sum.

## d. crunchingnumbers(a)live

137 out of 4200 basis vectors are used. There are 421 equations and 674 inequalities.

This example and the ones below are the nonograms that I posted with each blog post. They show that we can solve even large nonograms.

## e. Snoopy the flying ace

63 out of 4200 basis vectors are used. There are 421 equations and 508 inequalities.

## f. Gimbap

99 out of 4200 basis vectors are used. There are 421 equations and 594 inequalities.

## g. Hug me I’m scared

82 out of 4200 basis vectors are used. There are 421 equations and 554 inequalities.

# 7. Introducing column basis vectors

We get better-looking solutions if we consider column basis vectors in addition to the row basis vectors. The motivation, of course, stems from the fact that we are doing a great job of reading the row sequences using the row basis vectors. Why not do the same for the column sequences by creating column basis vectors?

The extra work that we must do is simple, because there is really no distinction between the rows and columns of a nonogram. If you flip the nonogram over its diagonal, then the rows become the columns and the columns become the rows. This means we can *reuse* code to create constraints using column basis vectors:

```matlab
% Create constraints using row basis vectors
create_constraints(row_sequences, column_sequences, m, n);

% Create constraints using column basis vectors
create_constraints(column_sequences, row_sequences, n, m);
```

Afterwards, we need to connect the row basis vectors with the column basis vectors. To do so, imagine recording whether the $(i, j)$-th cell is empty or shaded with a variable $y_{ij}$ (we do not actually create this variable in code).

We know from Section 4b that the following equation is true:

$$y_{ij} = \sum_{\text{row basis vectors covering } (i,\,j)} x_{\text{row}}.$$

Apply the same logic to the column basis vectors. We must have,

$$y_{ij} = \sum_{\text{column basis vectors covering } (i,\,j)} x_{\text{col}}.$$

Hence, $mn$ equations relate the row basis vectors to the column basis vectors:

$$\sum_{\text{row basis vectors covering } (i,\,j)} x_{\text{row}} \;=\; \sum_{\text{column basis vectors covering } (i,\,j)} x_{\text{col}}, \quad 1 \leq i \leq m, \; 1 \leq j \leq n.$$
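The linking equations are easy to check on a small example. The Python sketch below takes a made-up list of row blocks and column blocks describing the same 2-by-2 picture and verifies that, for every cell, the row-side coverage equals the column-side coverage.

```python
# Each block is (index, start, length): for row blocks the index is the row
# and the start is a column; for column blocks the roles are swapped.
# These blocks describe an "L" shape in a 2x2 grid (illustrative data).
row_blocks = [(0, 0, 1), (1, 0, 2)]
col_blocks = [(0, 0, 2), (1, 1, 1)]

def row_covers(blocks, i, j):
    """Number of row blocks that shade cell (i, j)."""
    return sum(1 for (r, start, length) in blocks
               if r == i and start <= j < start + length)

def col_covers(blocks, i, j):
    """Number of column blocks that shade cell (i, j)."""
    return sum(1 for (c, start, length) in blocks
               if c == j and start <= i < start + length)

# The mn linking equations: both sides must agree on every cell
consistent = all(row_covers(row_blocks, i, j) == col_covers(col_blocks, i, j)
                 for i in range(2) for j in range(2))
```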

## a. Stylish lambda

10 out of 54 basis vectors are used. There are 45 equations and 34 inequalities.

This time, we obtain the correct solution. It really helps to have a column basis vector that can group three cells on column 3.

## b. Car

41 out of 576 basis vectors are used. There are 210 equations and 198 inequalities.

## c. Square root of i

24 out of 576 basis vectors are used. There are 210 equations and 154 inequalities.

## d. crunchingnumbers(a)live

209 out of 8400 basis vectors are used. There are 1242 equations and 1208 inequalities.

It is truly amazing that we can solve a nonogram in a couple of seconds.

## e. Snoopy the flying ace

142 out of 8400 basis vectors are used. There are 1242 equations and 1062 inequalities.

Here, we do not get the correct solution (although it looks much better than before). The problem lies with rows 8, 15, 17 and columns 3, 19, which have repeating block sizes. Recall that for such sequences, we weakly enforce the block order.

## f. Gimbap

174 out of 8400 basis vectors are used. There are 1242 equations and 1130 inequalities.

Again, sequences with repeating block sizes give us trouble. The blocks on rows 5, 6, 7, 12, 13 and columns 16, 17, 19 appear in an incorrect order.

## g. Hug me I’m scared

156 out of 8400 basis vectors are used. There are 1242 equations and 1088 inequalities.

Almost. The blocks on column 12 do not appear in the correct order.

# 8. Conclusion

We have come up with a new way to solve nonograms using compressive sensing. Our method is easy to understand and implement, and only requires that we solve a binary integer programming problem. We can obtain good solutions by considering both row and column basis vectors. However, there is room for improvement.

As we saw from the **Snoopy**, **gimbap**, and **hug** examples, we need constraints that strongly enforce the block order for sequences with repeating block sizes. I believe that, once we fix this problem, we will always obtain the correct solution.

# Notes

After I finished working on and writing the nonogram series, I discovered that Robert Bosch had also used binary variables to solve nonograms in 2000. The idea that blocks are the base unit (he calls them clusters) and the linear constraints that he derived are somewhat similar, since they represent the rules of and strategies for a nonogram. However, he used the binary variables to record different things, and solved a simultaneous system of equations and inequalities rather than a compressive sensing or optimization problem.

You can find his report here. I believe that the compressive sensing approach that Oscar and I created is more theoretically solid and easier to explain and understand.

You can find the code in its entirety here:

Download from GitHub