### Chebfun example: Types of isolated singularities

Inspired by reading through Wegert's *Visual Complex Functions* and by earlier examples of phase portraits in Chebfun (1, 2), I put together a psychedelic example of my own about different types of isolated singularities.


A point mass starts at the middle of the left edge of an $8\times8$ checkerboard of unit squares. On three of the squares, circular masses of diameter $1$ are fixed, attracting the point mass with an inverse-square force with constant $1$. The particle accelerates from rest and reaches the right side of the checkerboard without hitting any of the disks. What is the minimum time for this to happen?

In other words: you have three checkers to place at the centers of squares on a checkerboard, each of which attracts a point mass with gravitational constant $1$. Which configuration of checkers results in the minimal time for flinging a resting point mass from the middle of one side of the checkerboard to the opposite side, and what is that time?

The solution space of this problem is not only finite but tangibly finite: there are $\frac12 {64\choose3} = 20832$ candidate configurations (the $\frac12$ accounts for the reflective symmetry over the horizontal axis), which is few enough that brute force is a reasonable approach if an individual trial can be computed quickly.
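The count itself takes one line to verify (Python here, as a stand-in for the MATLAB used in the examples):

```python
from math import comb

# 3 checkers on 64 squares, halved by the reflective symmetry
# over the horizontal axis
configs = comb(64, 3) // 2
print(configs)  # → 20832
```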

My solution was slow because (1) I used Chebfun's overloads of the usual MATLAB ODE solvers and (2) I didn't know about the events option for ode113. I searched what seemed like a probable subset of the solution space in 90 minutes, arriving at the correct configuration. Then it was just a matter of turning the tolerances on the ODE solver down as far as possible. Here is the solution, with $t_\mathrm{end} = 9.52559012682\ldots.$
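For readers who want to experiment, here is a minimal sketch of a single trial in Python, where SciPy's `solve_ivp` events play the role that the ODE solver's events option plays in MATLAB. The checker placement below is an arbitrary illustration of mine, not the optimal configuration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical checker placement (square centers) -- NOT the optimum
centers = np.array([[2.5, 1.5], [5.5, 6.5], [4.5, 3.5]])

def rhs(t, y):
    """State y = (x, y, vx, vy); inverse-square attraction with constant 1."""
    p, v = y[:2], y[2:]
    acc = np.zeros(2)
    for c in centers:
        r = c - p
        acc += r / np.linalg.norm(r)**3
    return np.concatenate([v, acc])

def reach_right(t, y):          # particle crosses the right edge x = 8
    return y[0] - 8.0
reach_right.terminal = True

def hit_disk(t, y):             # particle touches a disk of radius 1/2
    return np.min(np.linalg.norm(centers - y[:2], axis=1)) - 0.5
hit_disk.terminal = True

y0 = [0.0, 4.0, 0.0, 0.0]       # at rest at the middle of the left edge
sol = solve_ivp(rhs, (0.0, 20.0), y0, events=(reach_right, hit_disk),
                rtol=1e-10, atol=1e-12)
print(sol.status, sol.t[-1])    # which event (if any) ended the trial, and when
```

The terminal `hit_disk` event also protects the integrator from the singularity at a mass center, since integration stops as soon as the particle reaches the disk boundary.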

Everyone in the Problem Squad got all the above digits correct (Hadrien & Rodrigo got one more digit). After the meeting, I set up the problem in Mathematica and easily got fifty digits: $t_\mathrm{end} = 9.5255901268261495154725701035994843104288335536362\ldots.$


Let $\alpha$ be a negative number and let $A$ be the $10^6\times10^6$ matrix whose $5\times5$ cousin is this: $$A = \begin{bmatrix} 1^\alpha & 2^\alpha & 6^\alpha & 7^\alpha & 15^\alpha \\ 3^\alpha & 5^\alpha & 8^\alpha & 14^\alpha & 16^\alpha \\ 4^\alpha & 9^\alpha & 13^\alpha & 17^\alpha & 22^\alpha \\ 10^\alpha & 12^\alpha & 18^\alpha & 21^\alpha & 23^\alpha \\ 11^\alpha & 19^\alpha & 20^\alpha & 24^\alpha & 25^\alpha \end{bmatrix}$$ If $||A||_2=2$, what is $\alpha$?

This problem is reminiscent of Problem 3 of the 100-Digit Challenge, which is to find the $2$-norm of the infinite matrix $B$ with $b_{11}=1$, $b_{12}=\frac12$, $b_{21}=\frac13$, $b_{13}=\frac14$, $b_{22}=\frac15$, etc. (The matrix $B$ is the same as the above matrix $A$ with the exception that the natural numbers on it run down from the top row typewriter-style rather than winding up and down lawnmower-style.) The two problems are inverses: the 100-Digit Challenge problem was to calculate the $2$-norm of a structured matrix $B$ with entrywise exponent $-1$, whereas this new one is to calculate the entrywise exponent of a very similar structured matrix $A$ such that the $2$-norm of $A$ is equal to $2$.

The 100DC problem has been solved to high accuracy, and its solution suggests a way to approach this new one. Nick Trefethen's *Ten Digit Algorithms* includes a MATLAB code operator_norm.m that solves 100DC P#3 to 12 digits of accuracy in less than a second using epsilon extrapolation on the sequence of norms of the leading principal submatrices of $B$ of sizes $n=2,4,8,\ldots,1024$.

#### Winning method: Extrapolate from smaller matrices

The matrix $A$ is far too big and dense and ill-conditioned to work with on a normal computer, but smaller versions of it are not. The entries of $A$ decay away from the $(1,1)$ entry, so we may be able to capture the behaviour of $\alpha$ by computing it for increasingly large matrices. Below are the results of such a computation on the leading principal submatrices of $A$ of different sizes $n$:

| $n$ | $\alpha$ |
|---|---|
| $4$ | $-0.418940425987$ |
| $8$ | $-0.539293628130$ |
| $16$ | $-0.584092406894$ |
| $32$ | $-0.603132120453$ |
| $64$ | $-0.611797710113$ |
| $128$ | $-0.615860842842$ |
| $256$ | $-0.617775990326$ |
| $512$ | $-0.618669432557$ |
| $1024$ | $-0.619078159850$ |
| $2048$ | $-0.619260673566$ |
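Here is a sketch of one such trial in Python (a stand-in for the MATLAB computation). The closed-form winding below is my reading of the problem: for $n$ much smaller than $10^6$, every antidiagonal of the leading principal submatrix is a full antidiagonal of $A$.

```python
import numpy as np
from scipy.optimize import brentq

def winding(n):
    """Entry (i,j) = position of cell (i,j) in the antidiagonal winding:
    odd antidiagonals i+j run top-to-bottom, even ones bottom-to-top."""
    i, j = np.indices((n, n)) + 1
    d = i + j
    start = 1 + (d - 2) * (d - 1) // 2   # first value on antidiagonal d
    return np.where(d % 2 == 1, start + i - 1, start + d - 1 - i).astype(float)

def alpha_for(n):
    """Exponent alpha with ||K**alpha||_2 = 2 (entrywise power)
    for the n-by-n leading principal submatrix."""
    K = winding(n)
    return brentq(lambda a: np.linalg.norm(K**a, 2) - 2.0, -1.0, 0.0,
                  xtol=1e-13)

print(alpha_for(4))   # close to the n = 4 entry of the table
```

The bracket $[-1, 0]$ works because the norm increases monotonically with the entrywise exponent: at $\alpha = 0$ the matrix is all ones with norm $n > 2$, while at $\alpha = -1$ the norm is well below $2$.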

Applying epsilon extrapolation on the above sequence of numbers leads to a best guess of $\alpha \approx -0.61939\ldots.$
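One step of such an extrapolation is easy to reproduce by hand: Aitken's $\Delta^2$ formula (the first nontrivial column of the epsilon table) applied to the last three entries of the table already lands close to the answer.

```python
# Last three entries of the table (n = 512, 1024, 2048)
a1, a2, a3 = -0.618669432557, -0.619078159850, -0.619260673566

# One Aitken delta-squared step: a3 - (second difference correction)
d1, d2 = a2 - a1, a3 - a2
extrap = a3 - d2**2 / (d2 - d1)
print(extrap)   # about -0.61941
```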

Edit: Prof. Trefethen convinced us on Friday that Richardson extrapolation is the proper method to use on the sequence of exponents above; he was also able to compute $\alpha$ for matrices of size up to $n=3200$ by using MATLAB's faster `normest` in place of the usual `norm` command, arriving at the estimate $\alpha \approx -0.61941\ldots.$

#### Another method: Approximating the largest eigenvalue

Here is a sketch of another method to estimate $\alpha$. For real symmetric matrices with nonnegative entries, $||A||_2 = \sigma_\mathrm{max}(A) = \lambda_\mathrm{max}(A),$ i.e., the largest eigenvalue is equal to the $2$-norm.

Consider $A$ as the sum of its symmetric part $H$ and antisymmetric part $N$, writing $A = \underbrace{\frac12(A+A^*)}_H + \underbrace{\frac12(A-A^*)}_N.$ Because of the way its entries wind along antidiagonals, the matrix $A$ has small antisymmetric part. For example, in the small case $n=1000$, the entrywise exponent that satisfies the norm condition $||A_n||_2=2$ is approximately $\alpha_n=-0.619$. In this case, we find that $||H_n||_2=1.998$ while $||N_n||_2=0.098$.
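The split is easy to examine numerically at modest sizes. A Python sketch (the closed-form winding is my reading of the problem, and the exponent $-0.618$, near the tabulated $\alpha_n$ for this size, is an assumption of mine):

```python
import numpy as np

n = 300
i, j = np.indices((n, n)) + 1
d = i + j
start = 1 + (d - 2) * (d - 1) // 2        # antidiagonal winding of A
K = np.where(d % 2 == 1, start + i - 1, start + d - 1 - i).astype(float)

A = K ** -0.618                            # exponent near alpha_n for this n
H = 0.5 * (A + A.T)                        # symmetric part
N = 0.5 * (A - A.T)                        # antisymmetric part
print(np.linalg.norm(H, 2), np.linalg.norm(N, 2))
```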

Therefore $\lambda_\mathrm{max}(A_n)$ will be approximately equal to $||A_n||_2$, and the exponent $\alpha_\lambda$ such that $\lambda_\mathrm{max}(A_n)~=~2$ is an estimate of the solution to our problem.

Besides the difficulty that $\alpha_\lambda$ is still nontrivial to estimate for large matrices, this method cannot offer more than perhaps two digits of $\alpha$: $||N||_2$ does not shrink as $n$ grows, so the error in our estimate remains large.

#### Another method: Via the Frobenius norm

Another approach, suggested by Mohsin Javed, is to bound the $2$-norm of $A$ by its Frobenius norm and then study the distribution of the singular values of $A$: $4 = ||A||_2^2 = \sigma_\mathrm{max}^2(A) \leq \sum_{k=1}^n \sigma_k^2 = ||A||_F^2.$ In particular, for our special $A$, we find the interesting bound $4 = ||A||_2^2 \leq ||A||_F^2 = \sum_{k=1}^{10^{12}} k^{2\alpha}.$ This idea did not lead us very far in the quest for $\alpha$, but we did make the interesting discovery that we can solve the analogous problem for the Frobenius norm with a simple query to Wolfram|Alpha, namely

sum from k=1 to 10^12 of k^(2*x) = 4


which returns $\alpha_F=-0.6469367572\ldots.$ The above sum resembles a truncation of the zeta function, but it turns out that solving $\zeta(-2x)=4$ yields an $x$ that agrees to only four digits with $\alpha_F$.
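The same number can be reproduced without Wolfram|Alpha. A sketch (the Euler–Maclaurin shortcut for the truncated sum is my own, assuming SciPy):

```python
from scipy.optimize import brentq
from scipy.special import zeta

N = 10**12

def truncated_sum(s):
    """sum_{k=1}^{N} k^(-s) for s > 1, via Euler-Maclaurin:
    zeta(s) minus the tail integral N^(1-s)/(s-1), plus half the
    endpoint term N^(-s)/2 (utterly negligible here)."""
    return zeta(s) - N**(1.0 - s) / (s - 1.0) + 0.5 * N**(-s)

# Solve sum_{k=1}^{10^12} k^(2x) = 4 for x (so s = -2x)
alpha_F = brentq(lambda x: truncated_sum(-2.0 * x) - 4.0, -0.70, -0.62,
                 xtol=1e-14)
print(alpha_F)   # about -0.6469367572
```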


### Chebfun example: Bugs on a rectangle

I posted a new ODE example today for Chebfun following the delightful paper "Four bugs on a rectangle" by Chapman et al. (2011), which delves into the surprising details of the solution of the following question:

Four particles start at $t=0$ at the vertices of a $2\times1$ rectangle. Each one chases the particle to its left with speed $1$. When does a collision occur?

The straight answer to the question is not so interesting ($t=1.5$), but the story of how the bugs get there is stunning. The bugs wind around the origin infinitely many times before colliding, and their configuration (which remains a parallelogram by symmetry) alternates between rectangles and rhombi of exponentially increasing aspect ratio. By $t=1.4$, the configuration is nearly one-dimensional, and a lot of exciting asymptotics happen in the final moment. The Chebfun example computes the key quantities to remarkable accuracy.
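The pursuit ODE itself is simple to set up. Here is a Python/SciPy sketch (a stand-in for the Chebfun computation) that integrates to just before the collision; since each bug moves at unit speed toward the meeting point reached at $t=1.5$, the four bugs must be within $0.05$ of that point by $t=1.45$:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    """Each bug moves with unit speed toward the next bug, cyclically."""
    p = y.reshape(4, 2)
    gap = np.roll(p, -1, axis=0) - p
    return (gap / np.linalg.norm(gap, axis=1, keepdims=True)).ravel()

# Bugs start at the vertices of a 2 x 1 rectangle
y0 = np.array([[0, 0], [2, 0], [2, 1], [0, 1]], float).ravel()

# Stop just short of the collision at t = 1.5
sol = solve_ivp(rhs, (0.0, 1.45), y0, rtol=1e-10, atol=1e-12)
p = sol.y[:, -1].reshape(4, 2)
spread = max(np.linalg.norm(a - b) for a in p for b in p)
print(sol.success, spread)   # the bugs are already huddled together
```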


### Inauguration of maths.h with a Chebfun example

Having been recently crowned a mathematics graduate student, I look forward to a few years of research and reading and collaboration and general problem-solving. This site will serve as a way to record ideas, successes, failures, and anything else interesting and mathematics-related that comes my way. As with scraps.h, I view this blog as more of an electronic record for myself than as a public-facing publication, but of course I am happy to engage anyone who comes my way.

Enough with the prelude! Here is an image teaser from the recent Chebfun example I wrote entitled “A pathological function of Weierstrass”:

I chose the constants you find in the example because they made integral evaluation by hand easy (in particular, the sum collapses after the first term). The general problem seems much more difficult. I did not do too much searching, but I wonder what kinds of results exist for problems like $\int_a^b F(x) \mathrm{d}x$ or $\min_{x \in [a,b]} F(x)$ for a general Weierstrass-type function $F$.
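Numerical experimentation on such questions is at least easy to start. A sketch for a truncated Weierstrass-type sum (the constants $a=1/2$, $b=7$ are my own choice, satisfying Hardy's condition $ab \ge 1$ for nowhere-differentiability, and are not the ones from the example):

```python
import numpy as np

a, b, terms = 0.5, 7.0, 25   # hypothetical parameters with ab >= 1

def F(x):
    """Truncated Weierstrass-type sum  sum_n a^n cos(b^n pi x)."""
    n = np.arange(terms)
    return np.sum(a**n * np.cos(b**n * np.pi * np.asarray(x)[..., None]),
                  axis=-1)

# Crude grid estimate of the minimum over [0, 1]
x = np.linspace(0.0, 1.0, 100001)
vals = F(x)
print(F(0.0), vals.min())   # F(0) is the geometric sum, about 2
```

A grid minimum is only a crude upper bound, of course; the wild oscillation of $F$ at every scale is exactly what makes the general minimization problem interesting.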
