Over the next month, I want to present a project that my friend Oscar and I came up with and worked on for a class. We developed a new technique to solve nonograms using compressive sensing.

The trick is knowing how to write the solution to *any* nonogram as a sparse vector. Sparse means that many of the entries are zero, and compressive sensing exploits this fact to find a *unique* solution to an underdetermined system! It’s strikingly different from traditional linear algebra, which says that an underdetermined system that has a solution has, in fact, infinitely many solutions.

Our method avoids (1) partial fill-ins, (2) heuristics, and (3) over-complication, and only requires that we solve a binary integer programming problem. There was one issue that Oscar and I couldn’t solve in time, however. I hope that I can solve it now by revisiting this project after three years. Ideas are welcome!

# 1. Introduction

A nonogram is a logic puzzle where we shade certain cells in a 2D grid to reveal a hidden image. We use the sequences of numbers on the left and the top of the grid to figure out how many and which cells to shade. Here is a completed nonogram that reveals a very stylish lambda.

A sequence tells us to shade one block of consecutive cells per number, each block as long as its number indicates, with at least one empty cell between two consecutive blocks. There may or may not be empty cells before the first block and after the last block.
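To make the rule concrete, here is a small Python sketch (the function name `row_blocks` is our own, not from the original project) that recovers a row’s sequence from its shaded cells:

```python
def row_blocks(row):
    """Return the nonogram sequence of a row, given as a list of
    0s (empty) and 1s (shaded): the lengths of maximal runs of 1s."""
    blocks, run = [], 0
    for cell in row:
        if cell == 1:
            run += 1            # extend the current block
        elif run > 0:
            blocks.append(run)  # an empty cell ends the block
            run = 0
    if run > 0:
        blocks.append(run)      # a block may also end at the row's edge
    return blocks

# The row .XX.XXX..X has the sequence (2, 3, 1).
print(row_blocks([0, 1, 1, 0, 1, 1, 1, 0, 0, 1]))  # [2, 3, 1]
```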

Typically, one (a person or a computer program) uses logic to solve a nonogram. For a large puzzle, this may be laborious (albeit deliciously challenging). In fact, determining whether a solution to a nonogram exists, even with assumptions such as the solution being convex or connected, and determining whether the solution is unique are both NP-complete problems **[6, 7, 5]**. Note that we can easily check whether we have the right solution: we just look at the row and column sequences and see whether our solution satisfies them.
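The check really is easy; it takes one pass over the grid. A minimal sketch (the names `blocks` and `verify` are ours):

```python
def blocks(line):
    """Lengths of maximal runs of 1s in a row or column."""
    out, run = [], 0
    for cell in line:
        if cell:
            run += 1
        elif run:
            out.append(run)
            run = 0
    if run:
        out.append(run)
    return out

def verify(grid, row_seqs, col_seqs):
    """Check a candidate solution in O(mn): every row and every
    column must reproduce its given sequence of block lengths."""
    cols = zip(*grid)  # transpose to iterate over columns
    return (all(blocks(r) == seq for r, seq in zip(grid, row_seqs))
            and all(blocks(c) == seq for c, seq in zip(cols, col_seqs)))

# A 2-by-2 example: shade the main diagonal.
grid = [[1, 0],
        [0, 1]]
print(verify(grid, [[1], [1]], [[1], [1]]))  # True
print(verify(grid, [[2], [1]], [[1], [1]]))  # False
```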

Here, we will focus solely on solving a nonogram. The only assumptions that we need are that the solution to the nonogram exists and is unique. These are very reasonable assumptions; we probably wouldn’t want to solve a nonogram that has no solution or has multiple solutions. Other than these assumptions, the nonogram is as general as it can be: it has the size $m$-by-$n$, with $m$ rows and $n$ columns.

We briefly mention that a nonogram is related to a discrete tomography problem. In discrete tomography, we are interested in recreating an object from a number of its projections. Consider a 2D object (colored in green below).

We make two 1D projections, along the “row” and “column” directions, and obtain the row sums $r_1, \dots, r_m$ and the column sums $c_1, \dots, c_n$. How can we recreate the object from the values $r_i$ and $c_j$?
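The two projections are one line of code each. A quick sketch (the grid below is a made-up example, not the green object from the figure):

```python
# Row sums and column sums of a binary image: its two 1D projections.
grid = [[1, 1, 0],
        [0, 1, 1],
        [0, 1, 0]]

row_sums = [sum(row) for row in grid]        # r_i, one per row
col_sums = [sum(col) for col in zip(*grid)]  # c_j, one per column

print(row_sums)  # [2, 2, 1]
print(col_sums)  # [1, 3, 1]
```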

People have developed different techniques to solve a nonogram; we bring up only a few here and refer to **[8]** for a comprehensive list. As previously mentioned, one typically uses logic to solve a nonogram. Some have tried to partially fill in cells using logic and heuristically decide which row or column to tackle next **[2, 3]**. They would apply some technique to the selected row or column, and repeat these steps until the nonogram is solved. More advanced (complicated) techniques include using an evolutionary algorithm **[1, 4]**.

# 2. Complexity in solving nonograms

Solving nonograms is an NP-complete problem. However, proving so is neither trivial nor intuitive. In this section, we present a simple combinatorial argument that shows why an exhaustive search through the possible solutions is impractical. This motivates the design of more efficient methods. In addition, **Lemma 2.1** and **Theorem 2.3** will help us analyze our method in the next two sections.

## Lemma 2.1. Expectation for nonograms

Suppose that a nonogram is created randomly. That is, in the creation process, we decided to shade each cell or leave it empty with a probability of $1/2$, independently of other cells.

Then, for any row, the expected number of shaded cells, i.e. the expected row sum $\mathbb{E}[s]$, is equal to $n/2$.

Furthermore, for any row, the expected number of blocks of shaded cells, $\mathbb{E}[k]$, is about $n/4$ (equal to $(n+1)/4$).

### Proof.

The first statement is clear: we can expect half of the cells on a row to be shaded, and the other half to be empty.

For the second, we appeal to the following problem: If we flip a fair coin $n$ times, how many runs of heads and runs of tails can we expect? Well, let’s set $X_i = 1$ if the $i$-th flip comes out heads, and $X_i = 0$ if tails.

For $i = 1, \dots, n-1$, a run ends at flip $i$ if $X_i \neq X_{i+1}$ (this occurs with probability $1/2$). For the last flip $i = n$, a run always meets its end. Hence, the total number of runs, $R$, is equal to

$$R = 1 + \sum_{i=1}^{n-1} \mathbb{1}[X_i \neq X_{i+1}],$$

where $\mathbb{1}[\cdot]$ is the indicator function (it returns a value of 1 if the expression inside is true, 0 otherwise).

By the linearity of expectation, we get

$$\mathbb{E}[R] = 1 + \frac{n-1}{2} = \frac{n+1}{2}.$$

In a nonogram, a block of shaded cells corresponds to a run of heads. By symmetry, runs of heads account for half of all runs in expectation, so we take one-half of the expected value above: $\mathbb{E}[k] = (n+1)/4$.

■
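As a self-check (ours, not part of the original write-up), we can confirm both expectations exactly by averaging over all $2^n$ equally likely rows instead of sampling:

```python
from itertools import product
from fractions import Fraction

def num_blocks(row):
    """Number of maximal runs of 1s (blocks of shaded cells)."""
    return sum(1 for i, c in enumerate(row)
               if c == 1 and (i == 0 or row[i - 1] == 0))

n = 10
rows = list(product((0, 1), repeat=n))  # all 2^n equally likely rows

avg_sum = Fraction(sum(sum(r) for r in rows), len(rows))
avg_blocks = Fraction(sum(num_blocks(r) for r in rows), len(rows))

print(avg_sum)     # 5    = n/2
print(avg_blocks)  # 11/4 = (n+1)/4
```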

## Lemma 2.2. Integer partition

There are $\binom{n+k-1}{k-1}$ integer solutions to the system

$$x_1 + x_2 + \cdots + x_k = n, \qquad x_i \geq 0 \text{ for } 1 \leq i \leq k.$$

### Proof.

We have $n$ books to place on a shelf, and we also want to divide the books into $k$ groups (a group may contain no books). Clearly, this requires that we have $k-1$ bookends. Hence, there will be $n+k-1$ objects on the shelf, and we will choose $k-1$ of them to be the bookends.

■
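A brute-force check of this count against the closed form (our own sanity check, with small made-up values of $n$ and $k$):

```python
from itertools import product
from math import comb

n, k = 5, 3  # count solutions of x_1 + x_2 + x_3 = 5 with x_i >= 0

# Enumerate every candidate tuple and keep those summing to n.
brute = sum(1 for x in product(range(n + 1), repeat=k) if sum(x) == n)

print(brute)                   # 21
print(comb(n + k - 1, k - 1))  # 21, i.e. C(7, 2)
```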

## Theorem 2.3. Row partition

Let $(d_1, \dots, d_k)$ be the sequence given for a row of the nonogram, and let $s = d_1 + \cdots + d_k$ be the row sum. There are $\binom{n-s+1}{k}$ configurations that satisfy the sequence.

### Proof.

Let $e_1, \dots, e_{k-1}$ denote the numbers of empty cells between two consecutive blocks of shaded cells. Furthermore, let $e_0$ and $e_k$ denote the numbers of empty cells before the first block and after the last block.

The numbers $d_1, \dots, d_k$ are fixed, so it is the numbers $e_0, e_1, \dots, e_k$ that dictate the possible configurations. These numbers satisfy the system

$$e_0 + e_1 + \cdots + e_k = n - s, \qquad e_0, e_k \geq 0, \qquad e_i \geq 1 \text{ for } 1 \leq i \leq k-1.$$

Equivalently, if we write $f_i = e_i - 1$ for $1 \leq i \leq k-1$,

$$e_0 + f_1 + \cdots + f_{k-1} + e_k = n - s - k + 1, \qquad \text{with all } k+1 \text{ variables} \geq 0.$$

From **Lemma 2.2**, we know that there are $\binom{n-s+1}{k}$ solutions.

■
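Again, the count survives a brute-force sanity check (ours): enumerate all $2^n$ rows and keep those whose blocks match a given sequence.

```python
from itertools import product
from math import comb

def blocks(row):
    """Lengths of maximal runs of 1s."""
    out, run = [], 0
    for cell in row:
        if cell:
            run += 1
        elif run:
            out.append(run)
            run = 0
    if run:
        out.append(run)
    return out

n, d = 10, [2, 3]      # row length and a sample sequence
s, k = sum(d), len(d)  # row sum and number of blocks

count = sum(1 for row in product((0, 1), repeat=n) if blocks(row) == d)

print(count)               # 15
print(comb(n - s + 1, k))  # 15, i.e. C(6, 2)
```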

## Corollary 2.4. Number of possible solutions

Let $s_i$ and $k_i$ denote the row sum and the number of blocks of shaded cells for the $i$-th row.

An exhaustive search that takes row sequences into account still requires that we check

$$\prod_{i=1}^{m} \binom{n - s_i + 1}{k_i}$$

configurations. Had the nonogram been created randomly, this number is about $\binom{n/2+1}{n/4}^m$.

### Proof.

The statement easily follows from **Lemma 2.1** and **Theorem 2.3**.

■

We end this section by observing how quickly the number in **Corollary 2.4** grows as we increase the puzzle size. Suppose a square $n$-by-$n$ nonogram (i.e. $m = n$) has been created randomly. We round $n/2$ and $n/4$ down to represent a best-case scenario. For an optimistic comparison (“it could be worse”), we also give the number $2^{n^2}$ for a pure exhaustive search.
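For concreteness, these numbers can be tabulated with Python’s exact big integers (our own computation; $n/2$ and $n/4$ are rounded down as described above):

```python
from math import comb

for n in (5, 10, 15, 20):
    clever = comb(n // 2 + 1, n // 4) ** n  # Corollary 2.4, randomized case
    pure = 2 ** (n * n)                     # pure exhaustive search over cells
    print(n, clever, pure)
```

Even the sequence-aware count explodes, though it stays far below the $2^{n^2}$ of a pure search; for $n = 10$ the two are $\binom{6}{2}^{10} = 15^{10}$ versus $2^{100}$.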

# References

**[1]** K. J. Batenburg and W. A. Kosters, A Discrete Tomography Approach to Japanese Puzzles, Proc. Belgian-Dutch Conf. Artificial Intelligence (2004), 243-250.

**[2]** K. J. Batenburg and W. A. Kosters, Solving Nonograms by Combining Relaxations, Pattern Recognition 42 (2009), 1672-1683.

**[3]** Sancho Salcedo-Sanz, et al., Solving Japanese Puzzles with Heuristics, Computational Intelligence and Games (2007), 224-231.

**[4]** Jinn-Tsong Tsai, Solving Japanese Nonograms by Taguchi-Based Genetic Algorithm, Applied Intelligence 37 (2012), 405-419.

**[5]** Nobuhisa Ueda and Tadaaki Nagao, NP-Completeness Results for Nonogram via Parsimonious Reductions, technical report (1996).

**[6]** Jan N. Van Rijn, Playing Games: The Complexity of Klondike, Mahjong, Nonograms and Animal Chess, master’s thesis (2012).

**[7]** Gerhard J. Woeginger, The Reconstruction of Polyominoes from Their Orthogonal Projections, Information Processing Letters 77 (2001), 225-229.

**[8]** Jan Wolter, Survey of Paint-by-Number Puzzle Solvers, http://webpbn.com/survey/, accessed Nov. 2012.