Forgetting, knowledge holes and longest increasing subsequence

2022-02-12 CC-BY

Quadratic equation zeros formula

Yes. I forgot the formula. I was quite surprised to realize I cannot recall the lovely formula that I used quite a lot. I used it for solving simple math drills, physics problems that involve falling objects launched from a flying airplane, inequalities, minimas and maximas of piecewise concave functions etc.

Finding the formula again is not easy:

\[ a x^2 + b x + c = 0 \]

\[ a x^2 - a (x_0 + x_1) x + a x_0 x_1 = a (x - x_0) (x - x_1) \text{,} \]

looking at the two formulas above, one cannot immediately see how to get from the coefficients to the zeros. There is no system of equations that can be set up; it has to be pure symbolic manipulation, although I guess playing around with the formula in the case of a single (double) zero might give hints about the shape of the two-zeros formula.

To make \(x\) appear on the left side, I must be able to take a square root:

\[ x^2 + \frac{b}{a} x + \frac{c}{a} = 0 \]

\[ x^2 + \frac{b}{a} x = -\frac{c}{a} \]

Now the requirement is that I remember \((x + y)^2 = x^2 + 2xy + y^2\). Completing the square means adding \(\frac{b^2}{4a^2}\) to both sides:

\[ x^2 + \frac{b}{a} x + \frac{b^2}{4a^2} = -\frac{c}{a} + \frac{b^2}{4a^2} \]

\[ {\bigg(x + \frac{b}{2a}\bigg)}^2 = -\frac{c}{a} + \frac{b^2}{4a^2} \]

Here we are. The root can be taken and the final formula is back in my memory.
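Taking the square root of both sides and moving \(\frac{b}{2a}\) to the right gives the formula I was after:

\[ x + \frac{b}{2a} = \pm\sqrt{\frac{b^2}{4a^2} - \frac{c}{a}} \]

\[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \]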

Longest increasing subsequence with constant increase

Finding the length or the elements of the longest increasing subsequence is a well known problem. The algorithm is taught in introductory computer science courses everywhere. Both the \(O(n^2)\) and the \(O(n \log{n})\) algorithms are very simple. I have written the code for both algorithms many times.

For example, in the sequence \(3, 1, 4, 0, 5, 7, 2, 6\) one longest increasing subsequence is \(1, 4, 5, 7\), with length \(4\).

I recently encountered the problem where one has to find the longest increasing subsequence, but the increase needs to be exactly one (or some constant k).

For example, in the sequence \(3, 1, 4, 0, 5, 7, 2, 6\) with \(k = 1\), the longest subsequence whose consecutive elements differ by exactly \(1\) is \(3, 4, 5, 6\), again with length \(4\).

Turns out this variant has an \(O(n)\) algorithm. In the general algorithms, there is no knowledge about the end of the previous longest increasing subsequence when a new number \(a\) is observed: \(a\) can be appended to a previous increasing subsequence only if it is larger than the last number of that subsequence. But when the increase is required to be a constant \(k\), the end of that subsequence must be exactly \(a - k\). This makes the search for the end of the previous increasing subsequence unnecessary.

The code below prints out the length of the longest increasing subsequence where the increase is exactly \(k\) (the input is assumed to be a permutation of \(0, \dots, n - 1\)):

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;
int main() {
  int n, k;
  cin >> n >> k;
  // lis[v + k] = length of the longest run ending with value v;
  // shifting indices by k keeps `a - k` in bounds.
  vector<int> lis(n + k, 0);
  int m = 0;
  for(int i = 0; i < n; ++i) {
    int a;
    cin >> a;
    a += k;
    lis[a] = lis[a - k] + 1;
    // if `a - k` did not appear yet,
    // then `a` starts a new subsequence (0 + 1)
    m = max(m, lis[a]);
  }
  cout << m << endl;
  return 0;
}

This knowledge hole surprised me a lot. I was not aware of this variant having a more efficient algorithm. Why did I not think about this variant when I encountered LIS algorithms before? I have no idea. But obviously I have many more of these surprising knowledge holes, and they tell a lot about my learning process and the depth of my understanding of the concepts I learn. I am obviously not expanding or simplifying the problems enough when I encounter them.

It is sad to discover that, but it is nice to know I can still make these discoveries. Recently, I became aware that a solution I had discovered, one that made me sleep very deeply that day, that inspired me to write a book, that made me reach catharsis after a long struggle with the problem, was completely incorrect. I found a simple counterexample. At the time I was convinced it was correct. I guess now I can relate to what Andrew Wiles was feeling.