We know how to represent the solution of a nonogram as a sparse vector $x$. Let us now design the constraints; they will be linear equations and linear inequalities that the entries of $x$ must satisfy according to the nonogram. The only information that the nonogram provides is the row and column sequences, so we want to make the most of it. Afterwards, we obtain the sparse vector $x$ by minimizing its 1-norm under these constraints. I will discuss the code and the results in the next and final post.
4. Designing constraints
Recall that our basis vectors lie along the row direction. As a result, it is hard to use row basis vectors to ensure that we satisfy the column sequences. We can only require the column sums to be satisfied.
We can do a lot more with the row sequences. We will have one set of linear equality constraints that counts how many times each block size appears in a row sequence. Clearly, this is better than asking the row sums to be satisfied. We will also have two sets of linear inequality constraints that enforce the rules of the nonogram.
The $\ell_1$-minimization problem takes the form,
$$\min_{x}\ \|x\|_1 \quad \text{subject to} \quad A_{\text{eq}}\, x = b_{\text{eq}}, \quad A_{\text{in}}\, x \le b_{\text{in}},$$
for some matrices $A_{\text{eq}}$, $A_{\text{in}}$ and for some vectors $b_{\text{eq}}$, $b_{\text{in}}$. We will determine how large $A_{\text{eq}}$ and $A_{\text{in}}$ are at the end.
Note that, by definition, the 1-norm of a vector is the sum of its entries in magnitude:
$$\|x\|_1 = |x_1| + |x_2| + \cdots + |x_N|.$$
The entries $x_1, \dots, x_N$ (one per basis vector, listed in their natural order) are either 0 or 1, and both of these values are nonnegative. Therefore, we can write instead,
$$\|x\|_1 = x_1 + x_2 + \cdots + x_N.$$
Matlab needs us to write the objective function as a linear combination of the solution vector's entries. The equation above shows that $\|x\|_1$ is exactly such a linear combination of the entries of $x$, with coefficients of 1.
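To make this concrete, here is a minimal Matlab sketch of the objective vector. It is not the post's actual code (that comes in the next installment), and the way I count the basis vectors assumes one entry per (row, block size, starting column) triple.

```matlab
% A minimal sketch (not the post's actual code).  The objective that linprog
% minimizes is f'*x, so a vector of ones gives f'*x = x_1 + ... + x_N = ||x||_1.
m = 4;  n = 3;                  % the stylish lambda: 3 columns, 24 unknowns, so 4 rows
N = m * n * (n + 1) / 2;        % one entry per basis vector: 24 for the lambda
f = ones(N, 1);                 % coefficients of 1 on every entry
```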
a. Sparsity level
Unlike in most optimization problems, we know the minimum value that our objective function should reach. It is equal to the sparsity level $S$. (Recall that $S$ equals how many numbers appear in the row sequences, so we know its value.)
We pass this information to our solver as a linear equation:
$$x_1 + x_2 + \cdots + x_N = S.$$
We can write this as a matrix equation $A_1 x = b_1$. Here, $A_1$ is a $1 \times N$ matrix whose entries are all 1, and $b_1$ is a vector with just one entry, $S$.
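Here is a small sketch of this constraint, assuming the row sequences are stored as a cell array; the clues below are placeholders, not the stylish lambda's actual sequences.

```matlab
% Sketch of the sparsity constraint.
rowSeqs = {[1], [1 1], [2], [1]};            % one clue vector per row (hypothetical)
m = numel(rowSeqs);  n = 3;
N = m * n * (n + 1) / 2;

S  = sum(cellfun(@numel, rowSeqs));          % total count of numbers in the clues
A1 = ones(1, N);                             % 1-by-N matrix of all ones
b1 = S;                                      % single entry: the sparsity level
```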
b. Column sums
Consider the $j$-th column, where $j$ is a number between 1 and $n$, the number of columns. From the $j$-th column sequence, we can find the column sum $\sigma_j$ (the sum of the numbers in that sequence). Recall that the column sum tells us how many cells on the $j$-th column must be shaded. There are $N$ basis vectors, and we want exactly $\sigma_j$ among them to pass the $j$-th column.
Well, which basis vectors can pass the $j$-th column? To answer this question, we consider one of the rows, say the $i$-th, and write $x^{(i)}_{s,c}$ for the entry of $x$ that corresponds to the basis vector on the $i$-th row with block size $s$ and starting column $c$. We can list the basis vectors that pass the $j$-th column in the following manner: the block of size 1 starting at column $j$; the blocks of size 2 starting at columns $j-1$ and $j$; the blocks of size 3 starting at columns $j-2$, $j-1$, and $j$; and so on.
Which basis vectors have we considered above? Clearly, the block size $s$ ranges between 1 and $n$, i.e. for each block size, there exists a basis vector that passes the $j$-th column.
What about $c$, the starting column index? The lowest that $c$ can be and still allow the basis vector to pass the $j$-th column certainly depends on the block size $s$. From our list of basis vectors above, we conclude $c \ge j - s + 1$. It is possible that this number is less than 1, which would not make sense. We stop this from happening by selecting the larger between $j - s + 1$ and 1.
The highest that $c$ can be is $n - s + 1$. We saw this number when we designed the basis vectors last time. If this number is greater than $j$, then the basis vector would start somewhere after the $j$-th column instead of passing it. Therefore, we consider the smaller between $n - s + 1$ and $j$.
In summary, the basis vectors that pass the $j$-th column are those whose indices satisfy,
$$c_{\min} \le c \le c_{\max}, \qquad 1 \le s \le n,$$
where,
$$c_{\min} = \max(j - s + 1,\ 1), \qquad c_{\max} = \min(j,\ n - s + 1).$$
The $j$-th column sum is satisfied if we require that
$$\sum_{i=1}^{m}\ \sum_{s=1}^{n}\ \sum_{c = c_{\min}}^{c_{\max}} x^{(i)}_{s,c} \;=\; \sigma_j,$$
where the outer sum runs over all $m$ rows of the nonogram.
We use the natural order of the basis vectors to arrive at a linear system $A_2 x = b_2$, where $A_2$ is an $n \times N$ matrix.
For the stylish lambda, $A_2$ is a $3 \times 24$ matrix and $b_2$ has 3 entries. The 1's on the first row of $A_2$ correspond to all basis vectors that pass the first column, the 1's on the second row the second column, and the 1's on the third row the third column. The entries of $b_2$ are the column sums.
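Below is a sketch of how $A_2$ and $b_2$ could be assembled. The function vecIndex encodes one possible natural order (row first, then block size, then starting column); the post's actual ordering from the previous installment may differ, and colSums is a placeholder.

```matlab
% Sketch of the column-sum constraints: one equation per column of the puzzle.
% vecIndex maps (row, block size, starting column) to a position in x under an
% assumed natural order; colSums is a placeholder vector of column sums.
m = 4;  n = 3;
N = m * n * (n + 1) / 2;
vecIndex = @(i, s, c) (i - 1) * n * (n + 1) / 2 + (s - 1) * (n + 1) - s * (s - 1) / 2 + c;

colSums = [2; 3; 1];                 % hypothetical column sums
A2 = zeros(n, N);
for j = 1:n
    for i = 1:m                      % any row may contribute a shaded cell
        for s = 1:n                  % block size
            for c = max(j - s + 1, 1):min(j, n - s + 1)   % starts passing column j
                A2(j, vecIndex(i, s, c)) = 1;
            end
        end
    end
end
b2 = colSums;
```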
c. Block counts
Let us now consider the row sequences. Consider the $i$-th row, where $i$ is a number between 1 and $m$. From the row sequence, we can determine $r^{(i)}_s$, how many times blocks of size $s$ should appear on the $i$-th row.
We write the following equations for the $i$-th row:
$$\sum_{c=1}^{n-s+1} x^{(i)}_{s,c} = r^{(i)}_s,$$
where,
$$1 \le s \le n.$$
These equations are true, since the left-hand side counts how many basis vectors of size $s$ we use, and this must equal how many times $s$ appears in the row sequence. We use the natural order and arrive at $A_3 x = b_3$, where $A_3$ is an $mn \times N$ matrix.
For the stylish lambda, we find that $A_3$ is a $12 \times 24$ matrix and $b_3$ has 12 entries, one for each pair of a row and a block size (see if you can work out where the 1's go).
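A sketch of the block-count equations, under the same assumed ordering and with placeholder clues:

```matlab
% Sketch of the block-count equations: one equation per (row, block size) pair.
m = 4;  n = 3;
N = m * n * (n + 1) / 2;
vecIndex = @(i, s, c) (i - 1) * n * (n + 1) / 2 + (s - 1) * (n + 1) - s * (s - 1) / 2 + c;

rowSeqs = {[1], [1 1], [2], [1]};    % hypothetical clues, one vector per row
A3 = zeros(m * n, N);
b3 = zeros(m * n, 1);
for i = 1:m
    for s = 1:n
        row = (i - 1) * n + s;                   % the equation for size s on row i
        for c = 1:(n - s + 1)
            A3(row, vecIndex(i, s, c)) = 1;      % counts basis vectors of size s
        end
        b3(row) = sum(rowSeqs{i} == s);          % how often s appears in the clue
    end
end
```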
d. Nonogram rules! (part 1)
One of the rules of nonogram is that we must have at least one empty cell between two consecutive blocks of shaded cells. Necessarily, certain basis vectors on the same row cannot appear together. We cannot have $x^{(i)}_{1,1} = 1$ and $x^{(i)}_{2,2} = 1$, for example: a block of 1 cell starting at the first column would touch a block of 2 cells starting at the second column. Since the entries take on binary values, we can require that
$$x^{(i)}_{1,1} + x^{(i)}_{2,2} \le 1$$
to force one of the two variables to be zero.
The stylish lambda, which has 3 columns, lends 14 such inequalities for each row: every pair among the 6 basis vectors of a row is forbidden except one (a block of 1 cell at the first column together with a block of 1 cell at the third column). Can you see a problem here? There are a total of 56 inequalities but only 24 unknown variables! We would be liars to call our approach compressive sensing when we make these many measurements. In general, the number of such inequalities is,
$$m\left[\binom{n(n+1)/2}{2} - \binom{n+1}{4}\right],$$
since each row has $n(n+1)/2$ basis vectors, and $\binom{n+1}{4}$ of the pairs can legally appear together.
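As a quick check of this count for the stylish lambda ($m = 4$, $n = 3$), using the binomial form just written:
$$4\left[\binom{6}{2} - \binom{4}{4}\right] = 4\,(15 - 1) = 56,$$
which matches the 56 inequalities above.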
We arrive at a new insight by considering the following picture:
[Figure: the basis vectors of one row that either pass the $j$-th column or start at the $(j+1)$-th column.]
The picture looks similar to the one from Section 4b, where we considered all basis vectors that pass the $j$-th column. The only difference is that we added the basis vectors that start at the $(j+1)$-th column. We call these basis vectors mutually exclusive, because no pair among them should occur together. Check it!
For each row $i$ and each $j$, we can ensure that the basis vectors are mutually exclusive with a linear inequality:
$$\sum_{s=1}^{n}\ \sum_{c = c_{\min}}^{c_{\max}} x^{(i)}_{s,c} \;+\; \sum_{s=1}^{n-j} x^{(i)}_{s,\, j+1} \;\le\; 1,$$
with $c_{\min}$ and $c_{\max}$ as in Section 4b. The bounds of $j$ are given by,
$$1 \le j \le n - 1.$$
We have $n-1$ such inequalities for each row, for a total of $m(n-1)$ constraints. We can use the natural order to write $A_4 x \le b_4$, where $A_4$ is an $m(n-1) \times N$ matrix. We are truly lucky that our solution vector takes binary values. How else could we combine (compress, heh) that much pairwise information into just $m(n-1)$ inequalities?!
For the stylish lambda, $A_4$ is an $8 \times 24$ matrix and $b_4$ is a vector of 8 ones: two inequalities for each of the four rows.
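Here is a sketch of how these $m(n-1)$ inequalities could be assembled, again under my assumed ordering:

```matlab
% Sketch of the mutual-exclusion inequalities: for each row i and each column
% j < n, give a coefficient of 1 to every basis vector that passes column j or
% starts at column j+1, and require the weighted sum to be at most 1.
m = 4;  n = 3;
N = m * n * (n + 1) / 2;
vecIndex = @(i, s, c) (i - 1) * n * (n + 1) / 2 + (s - 1) * (n + 1) - s * (s - 1) / 2 + c;

A4 = zeros(m * (n - 1), N);
b4 = ones(m * (n - 1), 1);
row = 0;
for i = 1:m
    for j = 1:(n - 1)
        row = row + 1;
        for s = 1:n
            for c = max(j - s + 1, 1):min(j, n - s + 1)   % passes column j
                A4(row, vecIndex(i, s, c)) = 1;
            end
            if j + 1 <= n - s + 1                         % can also start at column j+1
                A4(row, vecIndex(i, s, j + 1)) = 1;
            end
        end
    end
end
```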
e. Nonogram rules! (part 2)
Another rule of nonogram is that we must shade blocks of cells in the right order. For example, a row sequence of $(1, 2)$ means we shade a block of 1 cell first, then a block of 2 cells after that. It would be incorrect if the block of 2 cells appears first. How can we express this rule as a constraint?
At the time of this writing, I do not believe there is an easy way to strongly enforce the block order for all row sequences. The two types of constraints that I will derive below are necessary conditions (“if the row sequence is true, then the constraints are true”). However, the second type does not form a sufficient condition (“if the constraints are true, then the row sequence is true”). We will see how this affects the solution when we consider examples next time.
Suppose that we have 8 columns and the row sequence $(1, 2)$. Consider the first block, of 1 cell. Where can the corresponding basis vector appear? The block is the first in the sequence, so the lowest its starting column $c$ can be is 1. Moreover, a block of 2 cells is to follow afterward, so the highest $c$ can be is limited. Imagine what would happen if the second block is “pushed to the end”: it would occupy the 7th and 8th columns, the 6th column would stay empty, and the block of 1 cell would sit on the 5th column. We see that $c$ can be at most 5.
Next, consider where the second block can appear. If we push the preceding block of 1 cell to the front, then the earliest the second block can start is on the 3rd column. That is, its starting column must be at least 3. There are no blocks after the second block, so its starting column can be at most $8 - 2 + 1 = 7$.
We can make sure that the block of 1 cell appears first with two linear inequalities:
$$\sum_{c=1}^{8} c\, x^{(i)}_{1,c} \;<\; 6 \qquad\text{and}\qquad 2 \;<\; \sum_{c=1}^{7} c\, x^{(i)}_{2,c}.$$
Notice that the coefficient of each $x^{(i)}_{s,c}$ is its starting column index $c$. Since the entries are binary, the left-hand side of the first inequality records where the block of 1 cell appears, and the right-hand side of the second records where the block of 2 cells appears. Think about how the constraints from Sections 4c and 4d are in play.
The good news is, this works for any sequence whose block sizes appear only once. On the other hand, if there is a repeat in the block size, the corresponding inequality may not even be satisfied by the true solution. This can happen when two blocks of the same size are supposed to be close to each other. Two (maybe more) basis vectors on one side of the inequality then equal 1, and the resulting linear combination no longer represents the starting column index.
We now consider another approach, one that is necessarily true for all sequences (i.e. regardless of whether block sizes repeat), but creates a weaker statement and does not guarantee that the blocks appear in the right order.
Consider the sequence $(1, 1, 2)$. If we have 8 columns, where can each block appear?
[Figure: pushing the other blocks to either end, the first block of 1 cell can start in columns 1 to 3, the second block of 1 cell in columns 3 to 5, and the block of 2 cells in columns 5 to 7.]
From the picture above, we see that,
$$1 \le \sum_{c=1}^{3} x^{(i)}_{1,c} \le 2, \qquad 1 \le \sum_{c=3}^{5} x^{(i)}_{1,c} \le 2, \qquad 1 \le \sum_{c=5}^{7} x^{(i)}_{2,c} \le 1.$$
As in Section 4c, the linear combination with coefficients of 1 counts how many blocks of a particular size appear. We used logic to narrow down the range where the blocks can appear. For certain, there is at least 1 block of that size in this range.
The upper bound is determined by how many other blocks of the same size appear in the range (i.e. the number of overlaps among the ranges). For example, the upper bounds of the first two sums are 2, because we can place blocks of 1 cell on the 1st and 3rd columns (or on the 3rd and 5th columns) and the block of 2 cells afterward.
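The “push to either end” ranges can be computed mechanically. Here is a small sketch of that bookkeeping (my own, not the post's code); it reproduces the ranges of the example above.

```matlab
% For each block in a row sequence, the earliest start comes from pushing the
% preceding blocks to the front and the latest start from pushing the
% following blocks to the end, with one empty cell between neighbors.
n   = 8;
seq = [1 1 2];                       % the example sequence above
p   = numel(seq);
lo  = zeros(1, p);  hi = zeros(1, p);
for k = 1:p
    lo(k) = 1 + sum(seq(1:k-1)) + (k - 1);           % blocks before, plus gaps
    hi(k) = n + 1 - sum(seq(k:end)) - (p - k);       % blocks after, plus gaps
end
disp([lo; hi])                       % columns give [1 3 5; 3 5 7]
```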
In summary, we follow this procedure for block order. Consider the $i$-th row, where $i$ is a number between 1 and $m$.
1. If the row sequence has only one number, then do nothing.
2. If the row sequence has block sizes that do not repeat, then for each pair of consecutive blocks, create 2 inequalities such as those shown in the first approach. This strongly enforces the order.
3. If the row sequence has block sizes that repeat, then for each block, create 2 inequalities such as those shown in the second approach. This weakly enforces the order.
We arrive at the inequality $A_5 x \le b_5$. Note that we can change the strict inequalities in the first approach to inequalities with an equal sign if we add 1 to the smaller side. How large $A_5$ is depends on the row sequences and differs from one nonogram to another. The most rows that $A_5$ can have is $2S$, when all row sequences have more than one number and have repeating block sizes.
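One practical note, again a sketch of my own bookkeeping rather than anything from the post: Matlab's linprog accepts inequality constraints only in the form A*x <= b, so each two-sided bound from the second approach would be split into two rows of $A_5$. The indices and upper bound below are placeholders.

```matlab
% Splitting  1 <= sum(x(idx)) <= ub  into two "A*x <= b" rows.
N   = 24;
idx = [3 4 5];              % hypothetical positions of the summed entries of x
ub  = 2;                    % hypothetical upper bound (the overlap count)

indicator      = zeros(1, N);
indicator(idx) = 1;

A5rows = [-indicator;       % -sum(x(idx)) <= -1  enforces the lower bound of 1
           indicator];      %  sum(x(idx)) <=  ub enforces the upper bound
b5rows = [-1; ub];
```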
f. Putting it all together
We created three sets of linear equations and two sets of linear inequalities. We can combine the linear equations into one equation: $A_{\text{eq}}\, x = b_{\text{eq}}$. Simply let,
$$A_{\text{eq}} = \begin{bmatrix} A_1 \\ A_2 \\ A_3 \end{bmatrix}, \qquad b_{\text{eq}} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}.$$
The number of linear equality constraints is,
$$1 + n + mn.$$
Note that this already exceeds the minimum number of measurements, on the order of $S \log(N/S)$, that is required for sparse recovery.
Similarly, we can combine the linear inequalities into one inequality: $A_{\text{in}}\, x \le b_{\text{in}}$. Let,
$$A_{\text{in}} = \begin{bmatrix} A_4 \\ A_5 \end{bmatrix}, \qquad b_{\text{in}} = \begin{bmatrix} b_4 \\ b_5 \end{bmatrix}.$$
The number of linear inequality constraints depends on the nonogram. However, we have a bound for it: there can be no more than
$$m(n-1) + 2S$$
of them.
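Finally, a sketch of how the pieces might be handed to linprog. The post's real code appears in the next installment; A1 through b5 are assumed to have been built as in the earlier sketches, and the nonnegativity bound is my addition so that $f^{\top} x$ really equals the 1-norm.

```matlab
% Sketch of the final assembly (not the post's actual code).
Aeq = [A1; A2; A3];    beq = [b1; b2; b3];
Ain = [A4; A5];        bin = [b4; b5];

f  = ones(size(Aeq, 2), 1);
lb = zeros(size(f));                 % my addition: keeps x nonnegative, so f'*x = ||x||_1

x = linprog(f, Ain, bin, Aeq, beq, lb, []);
```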
Notes
Allow me to explain why, in compressive sensing, we want to minimize the 1-norm of the solution vector:
$$\|x\|_1 = |x_1| + |x_2| + \cdots + |x_N|.$$
After all, isn't the 2-norm of a vector (the Euclidean distance),
$$\|x\|_2 = \sqrt{x_1^2 + x_2^2 + \cdots + x_N^2},$$
more commonly used?
The solution lives in an $N$-dimensional space, which may be hard to imagine. For now, just imagine a 2D space. Over there, the linear constraints $A_{\text{eq}}\, x = b_{\text{eq}}$ and $A_{\text{in}}\, x \le b_{\text{in}}$ become a line.
We want a solution that lies on the line (meets the constraints) and has the smallest size (when measured in 1-norm or 2-norm). Define the $\ell_1$-ball of radius $r$ to be the set of all vectors whose 1-norm equals $r$, and similarly define the $\ell_2$-ball.
In 2D, the $\ell_1$-ball always takes the shape of a square diamond, and the $\ell_2$-ball, a circle. First, consider drawing $\ell_1$-balls that get smaller and smaller in size:
[Figure: shrinking $\ell_1$-balls (diamonds) drawn against the constraint line.]
An intersection between the ball and the line represents a solution $x$. The solution that we want occurs when the ball is just small enough, so that, if it becomes any smaller, it would not intersect the line anywhere.
Notice that the minimizer lies on an axis. We know that points on an axis have one coordinate that is equal to 0. Similarly, the minimizer (a vector) has one entry that is 0. This is in 2D. In $N$ dimensions, the minimizer will likely have many entries that are 0, resulting in a vector that is sparse.
Now, consider drawing $\ell_2$-balls that get smaller and smaller in size:
[Figure: shrinking $\ell_2$-balls (circles) drawn against the constraint line.]
This time, the minimizer does not land on an axis. Its two entries are both nonzero. By the same token, in $N$ dimensions, the minimizer will likely have many nonzero entries, resulting in a vector that is not sparse.
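If you would like to check this picture without drawing, here is a small 2D sketch (my own example, not from the post). It computes both minimizers for one constraint line: the 1-norm minimizer lands on an axis, while the 2-norm minimizer has two nonzero entries.

```matlab
% A 2D sanity check (notation here is mine).  Constraint "line": x1 + 2*x2 = 2.
a = [1; 2];  b = 2;

% Minimum 2-norm point on the line (least-norm formula).
x_l2 = a * (b / (a' * a));              % = [0.4; 0.8], both entries nonzero

% Minimum 1-norm point on the line, via the standard split x = u - v with
% u, v >= 0, so that ||x||_1 = sum(u + v) and the problem becomes an LP.
f   = ones(4, 1);                       % objective: sum of u and v
Aeq = [a', -a'];                        % a'*(u - v) = b
beq = b;
lb  = zeros(4, 1);
uv  = linprog(f, [], [], Aeq, beq, lb);
x_l1 = uv(1:2) - uv(3:4);               % = [0; 1], one entry is exactly zero

disp([x_l2, x_l1])
```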