Here’s some stuff I’ve been working on in AI.

# Intersection of Convex Sets

Convex sets are interesting in the context of optimization because, when minimizing a convex function over a convex region, any local minimum we find is guaranteed to be a global minimum. A convex region is a region where, for every pair of points within the region, every point on the line segment joining them is also within the region.
Fun fact - the intersection of all convex sets containing a given subset $A$ of Euclidean space is called the convex hull of $A$; it is the smallest convex set containing $A$.
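To make the definition concrete, here is a small numerical sanity check. The two sets (a unit disk and a half-plane) are arbitrary choices for this sketch; we sample pairs of points in their intersection and confirm that every point on the segment between them stays inside.

```python
import random

# Two convex sets: the unit disk and the half-plane x >= 0.
in_disk = lambda p: p[0]**2 + p[1]**2 <= 1.0
in_half = lambda p: p[0] >= 0.0
in_both = lambda p: in_disk(p) and in_half(p)  # intersection of convex sets

def segment_stays_inside(p, q, contains, steps=100):
    """Sample points (1-t)p + tq for t in [0, 1] and check membership."""
    for i in range(steps + 1):
        t = i / steps
        point = ((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])
        if not contains(point):
            return False
    return True

random.seed(0)
# Collect 20 random points lying in the intersection.
pts = []
while len(pts) < 20:
    p = (random.uniform(-1, 1), random.uniform(-1, 1))
    if in_both(p):
        pts.append(p)

# Every segment between consecutive sampled points stays in the intersection.
assert all(segment_stays_inside(pts[i], pts[i + 1], in_both) for i in range(19))
```

Of course a finite sample proves nothing, but it illustrates why the intersection of convex sets is itself convex: a segment that stays inside both sets stays inside their intersection.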

# Linear Algebra Basics

Useful terms and ideas:

- **diagonal matrix**: a matrix whose entries are zero everywhere except on the diagonal.
- **change of basis matrix**: multiplying by this matrix re-expresses vectors in a new basis, which is helpful to us if that basis transformation sets us up for an operation that requires the matrix to be in a particular form. For instance, a change of basis matrix can diagonalize a matrix, such as the orthogonal matrix chosen as the change of basis matrix in PCA.
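A minimal numpy sketch of the second idea, using an arbitrary symmetric matrix: the orthogonal matrix of eigenvectors returned by `numpy.linalg.eigh` acts as a change of basis that diagonalizes it.

```python
import numpy as np

# An arbitrary symmetric matrix (eigenvalues 1 and 3).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns ascending eigenvalues and orthonormal eigenvectors as columns of Q.
eigenvalues, Q = np.linalg.eigh(A)

# Changing basis with the orthogonal matrix Q diagonalizes A.
D = Q.T @ A @ Q
assert np.allclose(D, np.diag(eigenvalues))
```

The same pattern, applied to a covariance matrix, is what makes PCA work.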

# Network Geometry

Network geometry is one of the topics that propelled me towards complexity research and machine learning. If we accept that interactions between agents in a networked system themselves have an impact on the dynamics of the system, it follows that we must investigate the emergence and characteristics of these interactions. We start to wonder how we might reason about these system “overtones”.
I find Mulder & Bianconi’s 2018 paper Network Geometry and Complexity to be a nice primer, so I’ll start there.

# Parallelogram is convex

Let $S$ be the parallelogram consisting of all points $t_{1}v_{1} + t_{2}v_{2}$ with $0 \leq t_{1} \leq 1$ and $0 \leq t_{2} \leq 1$, or equivalently $0 \leq t_{i} \leq 1$.
We remember that the line segment $PQ$ consists of all points $(1-t)P + tQ$ with $0 \leq t \leq 1$; to show that $S$ is convex, we must show that $PQ$ lies in $S$ whenever $P$ and $Q$ do.
Proof. Let $P = t_{1}v_{1} + t_{2}v_{2}$ and $Q = s_{1}v_{1} + s_{2}v_{2}$ be points in $S$, so $0 \leq t_{i} \leq 1$ and $0 \leq s_{i} \leq 1$. For any $0 \leq t \leq 1$,
$(1-t)P + tQ = \big((1-t)t_{1} + ts_{1}\big)v_{1} + \big((1-t)t_{2} + ts_{2}\big)v_{2}.$
Each coefficient $(1-t)t_{i} + ts_{i}$ is a convex combination of numbers in $[0, 1]$, so it also lies in $[0, 1]$. Hence $(1-t)P + tQ \in S$, and $S$ is convex. $\blacksquare$

# Phenomenology of Perception

Phenomenology of Perception by Maurice Merleau-Ponty.
Originally printed in 1945 by Editions Gallimard, with the English translation published in 1962 by Routledge & Kegan Paul. I am referencing the 2002 Routledge Classics edition.
## Overview

There are times in our lives when we notice the apparatus of our perception. Maybe we see a mirage emerge on the horizon, or grow determined to know why our ears are ringing. When we study such visual apparitions and sonic glitches, we align ourselves with generations of philosophers and cognitive scientists who have looked at perception’s outliers to help us understand what happens all along without our noticing.

# Principal Component Analysis (PCA)

## Cool Result

Considering the decomposition $A = QDQ^{T}$, where $Q$ is an orthogonal matrix and $D$ is the diagonal matrix of eigenvalues, the columns of $Q$ are the principal components of our matrix!
## How does it work?

Remember, $Y = XP$, where $P$ is the change of basis matrix.
First we start with the covariance matrix $S_{x} = \frac{1}{m}X^{T}X$, where $X$ is our (centered) input matrix. If every feature were necessary, with no redundancy, the covariance matrix would be diagonal: every off-diagonal entry zero, with the only non-zero values lined up on the diagonal.
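A sketch of the whole pipeline on synthetic data (the dimensions, noise level, and sample size are arbitrary choices for this example): we build a redundant two-feature dataset, form $S_x$, take its eigenvectors as the change of basis $P$, and check that the covariance of $Y = XP$ comes out diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D data: the second feature is largely redundant with the first.
x1 = rng.normal(size=500)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=500)
X = np.column_stack([x1, x2])
X -= X.mean(axis=0)          # center so that S_x = X^T X / m is the covariance

m = X.shape[0]
S_x = X.T @ X / m            # large off-diagonal entries reveal the redundancy

# Eigendecomposition of the symmetric covariance matrix: the eigenvector
# matrix is orthogonal, and we use it as the change of basis P.
eigenvalues, P = np.linalg.eigh(S_x)
Y = X @ P                    # Y = XP: the data expressed in the new basis

# In the new basis the covariance is diagonal, with eigenvalues as variances.
S_y = Y.T @ Y / m
assert np.allclose(S_y, np.diag(eigenvalues))
```

The diagonal entries of $S_y$ (the eigenvalues) tell us how much variance each principal component carries, which is what lets PCA rank and discard directions.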

# Radicals

OK, so I have just learned about nested radicals, and want to use them as a springboard to meditate on the use of radicals in imagining symmetry in various dimensions. We begin with the infinite nested radicals problem posed by Srinivasa Ramanujan.
$? = \sqrt{1 + 2\sqrt{1 + 3\sqrt{1 + \dots}}}$.
His solution involves expressing the pattern under the radical with the following general formulation
$x + n + a = \sqrt{ax + (n + a)^2 + x\sqrt{a(x+n) + (n+a)^2 + (x+n)\sqrt{\dots}}}$.
Setting $a = 0$, $n = 1$, $x = 2$ recovers the original problem and gives the answer $? = 3$.
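We can check the known answer of 3 numerically by truncating the radical at a finite depth and evaluating from the inside out (the truncation value of 1 at the innermost level is an arbitrary choice; the layers above wash it out quickly).

```python
import math

def nested_radical(depth):
    """Evaluate sqrt(1 + 2*sqrt(1 + 3*sqrt(1 + ...))) truncated at `depth`."""
    value = 1.0                      # arbitrary innermost truncation
    for k in range(depth, 1, -1):    # work outward: k = depth, ..., 3, 2
        value = math.sqrt(1 + k * value)
    return value

print(nested_radical(30))  # converges toward Ramanujan's answer of 3
```

Each additional layer roughly halves the truncation error, so a depth of 30 already agrees with 3 to better than six decimal places.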

# Reinforcement Learning

Reinforcement Learning: An Introduction (1998). Richard Sutton & Andrew Barto. At the outset, I’m curious to know how much has changed since this book’s publication 20 years ago. That being said, as these are two leaders in the field, I’m interested in gaining a sense of their perspective on the history and origins of this subfield, and in acclimating to some of its core concepts and constructs.

## Chapter 1

One of the key takeaways from this chapter was the distinction between the value function and the reward function in an RL problem: reward signals what is good immediately, while value estimates what is good in the long run.
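A toy example of my own (not one from the book) that makes the distinction concrete: in a deterministic chain MDP where only the final transition is rewarded, iterating the Bellman equation assigns positive value to every earlier state even though their immediate rewards are zero.

```python
# A tiny deterministic chain MDP: states 0 -> 1 -> 2 -> 3 (terminal).
# Only the transition into the terminal state is rewarded, yet value
# iteration gives every earlier state a positive value.
gamma = 0.9                     # discount factor (arbitrary choice)
rewards = [0.0, 0.0, 1.0]       # reward for the transition out of states 0, 1, 2
values = [0.0, 0.0, 0.0, 0.0]   # state 3 is terminal with value 0

for _ in range(100):            # iterate the Bellman equation to convergence
    for s in range(3):
        values[s] = rewards[s] + gamma * values[s + 1]

# Values decay geometrically with distance from the reward: ~0.81, 0.9, 1.0.
print(values[:3])
```

The reward function alone would rank states 0 and 1 as worthless; the value function is what lets an agent prefer states that merely lead toward reward.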

# Space-Time Continuous Models of Swarm Robotic Systems

Space-Time Continuous Models of Swarm Robotic Systems: Supporting Global-to-Local Programming, by Heiko Hamann
## Fundamentals of Swarm Robotics

Multi-robot systems have the ability to show complex behavior, which is one of the features that motivates my study of them. That, and they have the potential to solve classic problems in novel, distributed ways. What I’m excited about in this book is the presentation of how we derive a partial differential equation (the Fokker-Planck equation) from a stochastic differential equation (the Langevin equation), which forms the basis of the Brownian motion model.
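To get a feel for the Langevin-to-Fokker-Planck connection, here is a sketch (the constants are arbitrary choices of mine, not values from Hamann’s model): Euler-Maruyama integration of a driftless Langevin equation $dx = \sqrt{2D}\,dW$, whose Fokker-Planck equation is the diffusion equation, predicting mean squared displacement $\langle x^2 \rangle = 2Dt$.

```python
import random

random.seed(1)
D, dt, steps, walkers = 0.5, 0.01, 1000, 2000

# Euler-Maruyama: each step adds Gaussian noise scaled by sqrt(2 * D * dt).
final_positions = []
for _ in range(walkers):
    x = 0.0
    for _ in range(steps):
        x += (2 * D * dt) ** 0.5 * random.gauss(0.0, 1.0)
    final_positions.append(x)

# The Fokker-Planck (diffusion) equation predicts <x^2> = 2 * D * t.
t = steps * dt
msd = sum(x * x for x in final_positions) / walkers
print(msd, 2 * D * t)  # empirical vs. predicted mean squared displacement
```

The ensemble of walkers plays the role of the swarm: each robot follows the stochastic equation, while the density of the whole swarm obeys the deterministic PDE.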

# Square Root of 2 is Irrational

Proof. Assume for contradiction that $\sqrt{2}$ is rational.
Then there exist integers $p, q$ such that $\sqrt{2} = \frac{p}{q}$, where $q \neq 0$, and $p, q$ share no common divisors other than 1.
Squaring both sides gives $2 = \frac{p^2}{q^2}$, so consequently $2q^2 = p^2$.
If the square of a number is even, then the number is even, so there exists some integer $k$ such that $p = 2k$. Substituting this value into the equation yields $2q^2 = 4k^2$, or $q^2 = 2k^2$. But then $q^2$ is even, so $q$ is even as well, and $p$ and $q$ share the common divisor 2. This contradicts our assumption that they share no common divisors other than 1, so $\sqrt{2}$ is irrational. $\blacksquare$