Here’s some stuff I’ve been working on in AI.

# Binet's Formula

This proof was shared with me by my friend Chuck Larrieu Casias, and I liked it so much I wanted to write it up here.
Suppose we have two similar rectangles, $A$ and $B$. Let $A$ have sides $a$ and $b$, and $B$ have corresponding sides $b$ and $a+b$.
Then $\frac{b}{a} = \frac{a+b}{b}$. Also, $\frac{b}{a} = \frac{a}{b} + 1$.
Call $\phi = \frac{b}{a}$. Then $\phi = \frac{1}{\phi} + 1$.
Multiply both sides by $\phi$ to get $\phi^2 = \phi + 1$.
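Solving $\phi^2 = \phi + 1$ with the quadratic formula gives $\phi = \frac{1 + \sqrt{5}}{2} \approx 1.618$, the golden ratio, and Binet's formula expresses the $n$th Fibonacci number in closed form as $F_n = \frac{\phi^n - \psi^n}{\sqrt{5}}$, where $\psi = \frac{1 - \sqrt{5}}{2}$ is the other root. Here is a quick pure-Python sketch (function names are mine) that checks the formula against an iterative reference:

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio, a root of x^2 = x + 1
PSI = (1 - math.sqrt(5)) / 2  # the conjugate root

def binet(n: int) -> int:
    """Closed-form nth Fibonacci number, rounded to the nearest integer."""
    return round((PHI ** n - PSI ** n) / math.sqrt(5))

def fib(n: int) -> int:
    """Reference iterative Fibonacci for comparison."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(20):
    assert binet(n) == fib(n)
print(binet(10))  # 55
```

The rounding step absorbs floating-point error, which stays below 0.5 for modest $n$ because $|\psi| < 1$ makes the $\psi^n$ term vanish quickly.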

# Intersection of Convex Sets

Convex sets are interesting in the context of optimization, as they represent regions where a local minimum of a convex objective is guaranteed to be a global minimum. A convex region is one where, for every pair of points within the region, every point on the line segment joining them is also within the region.
Fun fact - the intersection of all convex sets containing a given subset $A$ of Euclidean space is called the convex hull of $A$, which is the smallest convex set containing $A$.
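The definition can be spot-checked numerically. Below is a pure-Python sketch of my own (set choices and function names are mine): two convex sets, a closed disk and a half-plane, are represented by membership functions, and points sampled along segments inside their intersection are checked to stay inside it.

```python
import random

def in_disk(p):
    """Membership in the closed unit disk, a convex set."""
    x, y = p
    return x * x + y * y <= 1.0

def in_halfplane(p):
    """Membership in the upper half-plane y >= 0, also convex."""
    return p[1] >= 0.0

def in_intersection(p):
    return in_disk(p) and in_halfplane(p)

def segment_inside(member, p, q, steps=50):
    """Sample the segment pq and check every sample satisfies `member`."""
    return all(
        member(((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1]))
        for t in (i / steps for i in range(steps + 1))
    )

random.seed(0)
pts = []
while len(pts) < 200:  # rejection-sample points inside the intersection
    p = (random.uniform(-1, 1), random.uniform(-1, 1))
    if in_intersection(p):
        pts.append(p)

ok = all(segment_inside(in_intersection, pts[i], pts[i + 1])
         for i in range(0, len(pts), 2))
print(ok)  # True: the intersection behaves convexly on every sampled segment
```

Sampling of course proves nothing; the point is just to see the definition in action on the intersection.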

# Linear Algebra Basics

Useful terms and ideas:

- **diagonal matrix**: a matrix whose entries are all zero except on the diagonal.
- **change of basis matrix**: multiplying by this matrix re-expresses vectors in a new basis, which is helpful when that basis transformation sets us up for an operation requiring the matrix to be in a particular form. For instance, a change of basis can diagonalize a matrix, as when the orthogonal matrix of eigenvectors is chosen as the change of basis matrix in PCA.
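To see a change of basis diagonalize a matrix, here is a pure-Python sketch for a $2 \times 2$ symmetric matrix (the example matrix and helper names are my own): the orthonormal eigenvectors form the columns of $Q$, and $Q^{T} A Q$ comes out diagonal.

```python
import math

# A symmetric 2x2 example matrix (values chosen arbitrarily).
A = [[2.0, 1.0],
     [1.0, 2.0]]

# Eigenvalues from the characteristic polynomial of [[a, b], [b, c]].
a, b, c = A[0][0], A[0][1], A[1][1]
disc = math.sqrt((a - c) ** 2 + 4 * b * b)
lam1, lam2 = (a + c + disc) / 2, (a + c - disc) / 2

def unit(v):
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

# Orthonormal eigenvectors: the columns of the change of basis matrix Q.
# (This eigenvector formula assumes b != 0, which holds for this example.)
q1 = unit((b, lam1 - a))
q2 = unit((b, lam2 - a))
Q = [[q1[0], q2[0]],
     [q1[1], q2[1]]]
QT = [[Q[0][0], Q[1][0]],
      [Q[0][1], Q[1][1]]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# The change of basis diagonalizes A: Q^T A Q = D = diag(lam1, lam2).
D = matmul(matmul(QT, A), Q)
print(round(D[0][0], 6), round(D[1][1], 6))  # the eigenvalues 3.0 and 1.0
```

For larger matrices the same idea holds, though in practice one would use a library eigensolver rather than the quadratic formula.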

# Navier-Stokes Existence and Smoothness Problem

Let me say up front, I have not solved this problem, nor am I in the running. I came to math late, in 2016, after a lifetime of music study, and while I am building a solid trellis upon which to grow my imagination, it will take me many years to capitalize on my potential. That being said, the Millennium Prize Problems are one of my favorite sporting events, and the Navier-Stokes Existence and Smoothness Problem is the home team I'm rooting for to emerge next.

# Network Geometry

Network geometry is one of the topics that propelled me towards complexity research and machine learning. If we accept that interactions between agents in a networked system have themselves an impact on the dynamics of the system, it follows that we must investigate the emergence and characteristics of these interactions. We start to wonder how we might reason about these system “overtones”.
I find Mulder & Bianconi’s 2018 paper Network Geometry and Complexity to be a nice primer, so I’ll start there.

# Parallelogram is convex

Let $S$ be the parallelogram consisting of all linear combinations $t_{1}v_{1} + t_{2}v_{2}$ with $0 \leq t_{1} \leq 1$ and $0 \leq t_{2} \leq 1$, or equivalently $0 \leq t_{i} \leq 1$.
We remember that the line segment $PQ$ consists of all points $(1-t)P + tQ$ with $0 \leq t \leq 1$, and that $S$ is convex if $PQ$ lies in $S$ for every pair of points $P, Q$ in $S$.
Proof. Let $P = s_{1}v_{1} + s_{2}v_{2}$ and $Q = t_{1}v_{1} + t_{2}v_{2}$ be points in $S$, so $0 \leq s_{i}, t_{i} \leq 1$. For any $0 \leq t \leq 1$,
$(1-t)P + tQ = \left((1-t)s_{1} + t\,t_{1}\right)v_{1} + \left((1-t)s_{2} + t\,t_{2}\right)v_{2}$.
Each coefficient $(1-t)s_{i} + t\,t_{i}$ is a convex combination of numbers in $[0, 1]$, so it also lies in $[0, 1]$. Hence every point of $PQ$ is in $S$, and $S$ is convex. $\blacksquare$
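The coefficient bookkeeping in the proof can be spot-checked numerically; here is a small pure-Python sketch (the sampling setup is my own):

```python
import random

random.seed(1)
ok = True
for _ in range(1000):
    s1, s2 = random.random(), random.random()  # coefficients of P in basis v1, v2
    t1, t2 = random.random(), random.random()  # coefficients of Q
    t = random.random()                        # position along the segment PQ
    # Coefficients of (1 - t)P + tQ in the same basis:
    c1 = (1 - t) * s1 + t * t1
    c2 = (1 - t) * s2 + t * t2
    ok = ok and (0.0 <= c1 <= 1.0) and (0.0 <= c2 <= 1.0)
print(ok)  # True: every combined coefficient stays in [0, 1]
```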

# Phenomenology of Perception

Phenomenology of Perception by Maurice Merleau-Ponty.
Originally published in 1945 by Éditions Gallimard, with the English translation published in 1962 by Routledge & Kegan Paul. I am referencing the 2002 Routledge Classics edition.
Overview: There are times in our lives when we notice the apparatus of our perception. Maybe we see a mirage emerge on the horizon, or grow determined to know why our ears are ringing. When we study such visual apparitions and sonic glitches, we align ourselves with generations of philosophers and cognitive scientists who have looked at perception’s outliers to help us understand what happens all along without our noticing.

# Principal Component Analysis (PCA)

Cool result: considering the decomposition $S_{x} = QDQ^{T}$, where $Q$ is an orthogonal matrix and $D$ is the diagonal matrix of eigenvalues, the columns of $Q$ are the principal components of our data!
How does it work? Remember, $Y = XP$, where $P$ is the change of basis matrix.
First we form the covariance matrix $S_{x} = \frac{1}{m}X^{T}X$, where $X$ is our mean-centered input matrix with $m$ samples as rows. If every feature were necessary, with no redundancy, the covariance matrix would be diagonal: every off-diagonal entry, i.e. the covariance between two distinct features, would be zero.
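Here is the whole pipeline as a pure-Python sketch for 2-D data (the toy dataset and helper names are mine, purely illustrative): center $X$, form $S_x$, diagonalize it, and confirm that the projected data $Y = XQ$ has a diagonal covariance matrix.

```python
import math

# Toy 2-D dataset with correlated features (values mine, illustrative).
X = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8), (5.0, 10.1)]
m = len(X)

# 1. Mean-center each feature.
mx = sum(x for x, _ in X) / m
my = sum(y for _, y in X) / m
Xc = [(x - mx, y - my) for x, y in X]

# 2. Covariance matrix S_x = (1/m) X^T X of the centered data.
sxx = sum(x * x for x, _ in Xc) / m
sxy = sum(x * y for x, y in Xc) / m
syy = sum(y * y for _, y in Xc) / m

# 3. Eigen-decomposition of the symmetric 2x2 covariance matrix.
disc = math.sqrt((sxx - syy) ** 2 + 4 * sxy * sxy)
lam1, lam2 = (sxx + syy + disc) / 2, (sxx + syy - disc) / 2

def unit(v):
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

q1 = unit((sxy, lam1 - sxx))  # first principal component (largest variance)
q2 = unit((sxy, lam2 - sxx))  # second principal component

# 4. Change of basis Y = Xc Q: the covariance of Y should be diagonal.
Y = [(x * q1[0] + y * q1[1], x * q2[0] + y * q2[1]) for x, y in Xc]
cov12 = sum(u * v for u, v in Y) / m
print(abs(cov12) < 1e-9)  # True: no covariance between the new features
```

In the new basis the first coordinate carries almost all the variance ($\lambda_1 \gg \lambda_2$ here), which is exactly the redundancy PCA is designed to expose.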

# Radicals

OK, so I have just learned about nested radicals, and want to use them as a springboard to meditate on the use of radicals in imagining symmetry in various dimensions. We begin with the infinite nested radical problem posed by Srinivasa Ramanujan.
$? = \sqrt{1 + 2\sqrt{1 + 3\sqrt{1 + 4\sqrt{1 + \dots}}}}$.
His solution involves expressing the quantity under the radical with the following general identity:
$x + n + a = \sqrt{ax + (n + a)^2 + x\sqrt{a(x+n) + (n+a)^2 + (x+n)\sqrt{\dots}}}$.
Setting $a = 0$, $n = 1$, and $x = 2$ recovers the radical above, so its value is $x + n + a = 3$.
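The value $3$ can be sanity-checked numerically by truncating the innermost radical at zero and building outward; here is a pure-Python sketch (function name mine):

```python
import math

def nested_radical(depth: int) -> float:
    """Evaluate sqrt(1 + 2*sqrt(1 + 3*sqrt(1 + ...))) truncated at `depth` levels."""
    value = 0.0                    # truncate the innermost radical at 0
    for k in range(depth, 1, -1):  # coefficients depth, depth-1, ..., 3, 2
        value = math.sqrt(1 + k * value)
    return value

print(round(nested_radical(40), 9))  # converges to 3
```

The truncation error shrinks roughly by half at each level on the way out, so even modest depths land extremely close to the exact value.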

# Reinforcement Learning

Reinforcement Learning: An Introduction (1998). Richard Sutton & Andrew Barto.
At the outset, I’m curious to know how much has changed since this book’s publication 20 years ago. That being said, as these are two leaders in the field, I’m interested in gaining a sense of their perspective on the history and origins of this subfield, and in acclimating to some of its core concepts and constructs.
CHAPTER 1 One of the key takeaways from this chapter was the distinction between the value function and the reward function in an RL problem.
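To make that distinction concrete, here is a tiny sketch of my own (not from the book): a deterministic three-state chain where only the last state pays an immediate reward, yet the earlier states still have nonzero value, because value is the discounted sum of all future rewards.

```python
# A tiny deterministic chain MDP: s0 -> s1 -> s2 -> end (absorbing).
# The states, rewards, and discount are my own toy choices.
reward = {"s0": 0.0, "s1": 0.0, "s2": 1.0, "end": 0.0}
nxt = {"s0": "s1", "s1": "s2", "s2": "end", "end": "end"}
gamma = 0.9  # discount factor

def value(state: str, horizon: int = 200) -> float:
    """Discounted sum of future rewards from `state` (the value function)."""
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        total += discount * reward[state]
        discount *= gamma
        state = nxt[state]
    return total

# The reward function says s0 and s1 pay nothing *now*; the value
# function says they are still worth something, because s2 pays later.
print(round(value("s0"), 4), round(value("s1"), 4), round(value("s2"), 4))
# 0.81 0.9 1.0
```

Rewards are immediate and local; values look ahead, which is why an agent chasing value rather than reward will happily pass through unrewarding states.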