
In praise of J.V. Uspensky

The two of us have shared a fascination with James Victor Uspensky's book Introduction to Mathematical Probability ever since our graduate student days: it contains many interesting results not found in other English-language books on the subject, together with many non-trivial examples, all clearly stated and carefully proved. We would like to present some of Uspensky's gems to a modern audience, hoping to tempt people to read Uspensky for themselves, and to report on some of the other mathematical topics he wrote about (for example, his book on number theory contains results about perfect shuffles).
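As an aside on the perfect shuffles mentioned above (a sketch of the classical fact, not necessarily of Uspensky's own treatment): a perfect out-shuffle cuts the deck in half and interleaves the two halves, keeping the original top card on top, and eight such shuffles restore a 52-card deck to its starting order. The short Python check below represents the deck as a list of positions.

from itertools import chain

def out_shuffle(deck):
    """Perfect (faro) out-shuffle: cut an even-sized deck in half and
    interleave the halves, keeping the original top card on top."""
    n = len(deck) // 2
    return list(chain.from_iterable(zip(deck[:n], deck[n:])))

def shuffle_order(num_cards):
    """Number of out-shuffles needed to return a deck to its original order."""
    start = list(range(num_cards))
    deck, count = out_shuffle(start), 1
    while deck != start:
        deck, count = out_shuffle(deck), count + 1
    return count

if __name__ == "__main__":
    print(shuffle_order(52))  # prints 8

The number-theoretic reason is that an out-shuffle sends position p to 2p mod 51 (with the top and bottom cards fixed), so the answer is the multiplicative order of 2 modulo 51, which is 8.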


Uspensky also had an interesting life: a member of the Russian Academy of Sciences, he spoke at the 1924 ICM in Toronto before fleeing Russia and coming to the US and Stanford. His students included Vinogradov and Kuzmin in Russia, and Carl Olds in the US. Comparatively little has been written about him in languages other than Russian; we hope to remedy this.

The nearest unvisited vertex walk on random graphs

We revisit an old topic in algorithms: the deterministic walk on a connected finite graph with edge lengths that always moves at speed one toward the nearest unvisited vertex, until every vertex is visited. There is an elementary relation between the cover time of this walk and ball-covering (metric entropy) measures. For some familiar models of random connected graphs, this relation allows the order of magnitude of the cover time to be deduced easily from first passage percolation estimates. Obtaining sharper estimates of this cover time remains a challenging problem.
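To make the walk concrete, here is a minimal Python sketch of the cover time computation, taking (purely for illustration) the complete graph with i.i.d. exponential edge lengths as the random model. Since every intermediate vertex on a shortest path to the nearest unvisited vertex must already have been visited, jumping straight to that vertex and adding the shortest-path distance reproduces the walk.

import heapq
import random

def dijkstra(adj, source):
    """Shortest-path distances from source, for a graph given as
    adj[v] = list of (neighbor, edge_length) pairs."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue
        for w, length in adj[v]:
            nd = d + length
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return dist

def nearest_unvisited_cover_time(adj, start=0):
    """Total distance travelled by the walk that always heads for the
    nearest unvisited vertex, until every vertex has been visited."""
    visited = {start}
    current, total = start, 0.0
    while len(visited) < len(adj):
        dist = dijkstra(adj, current)
        target = min((v for v in adj if v not in visited), key=lambda v: dist[v])
        total += dist[target]
        visited.add(target)
        current = target
    return total

if __name__ == "__main__":
    random.seed(0)
    n = 200
    # Complete graph with i.i.d. Exp(1) edge lengths: a toy first passage percolation model.
    adj = {v: [] for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            ell = random.expovariate(1.0)
            adj[u].append((v, ell))
            adj[v].append((u, ell))
    print(nearest_unvisited_cover_time(adj))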

How did the rabbit get into the hat?

The question of my title is asked every time someone witnesses a certain feat of stage magic, no matter how young or old the witness. Statistical versions of essentially the same question are becoming increasingly rare in the Big Data era, where magical results can make headlines without such questions being raised. Issues arising from this phenomenon will be discussed, including susceptibility to zombie data, whether post-selection inference is really a “thing”, and what some of Persi's earliest work has to tell us about these issues.
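As a toy illustration (not taken from the talk) of why post-selection inference matters: if one tests many null hypotheses, picks the most extreme result, and then quotes its unadjusted p-value, the nominal 5% error rate is wildly exceeded. A short Python simulation, with the number of hypotheses and trials chosen arbitrarily:

import random
from statistics import NormalDist

def naive_best_of_m_pvalue(m, rng):
    """Draw m independent standard-normal z-scores under the global null,
    select the most extreme one, and report its *unadjusted* two-sided p-value."""
    z = max(abs(rng.gauss(0.0, 1.0)) for _ in range(m))
    return 2.0 * (1.0 - NormalDist().cdf(z))

if __name__ == "__main__":
    rng = random.Random(1)
    trials, m = 2000, 100
    false_hits = sum(naive_best_of_m_pvalue(m, rng) < 0.05 for _ in range(trials))
    # With 100 null hypotheses, cherry-picking the best and testing it naively
    # "rejects" in almost every trial, far above the nominal 5% level.
    print(false_hits / trials)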

How cereal settles in a shaking box

Around 1960 Bernal and Scott did experiments on the settling of ball bearings in a large box as the box was shaken, and discovered the curious phenomenon of random close packing. Recent experimental advances now show that, when shaken gently enough and for long enough, the balls spontaneously begin to reorganize into a global crystalline configuration. A theoretical explanation requires the same stochastic reasoning invented by Boltzmann and used to model the freezing of water molecules into crystalline ice, here applied to the deterministic evolution of ball bearings (or cereal) in a precisely shaken box. This offers an accessible but physically realistic setting in which to study mathematically how probability enters macroscopic physical laws.
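For readers who want to experiment, the standard stochastic stand-in for hard balls is local-move Monte Carlo for hard disks, in the spirit of the Boltzmann-style reasoning mentioned above; this is not the authors' analysis, and the toy parameters below are far too small to exhibit crystallization. A minimal Python sketch:

import random

def overlaps(pos, i, new_xy, radius, box):
    """True if disk i at new_xy would overlap any other disk (periodic box)."""
    x, y = new_xy
    for j, (xj, yj) in enumerate(pos):
        if j == i:
            continue
        dx = (x - xj + box / 2) % box - box / 2   # minimum-image convention
        dy = (y - yj + box / 2) % box - box / 2
        if dx * dx + dy * dy < (2 * radius) ** 2:
            return True
    return False

def metropolis_hard_disks(n_side=8, radius=0.45, steps=20000, delta=0.1, seed=0):
    """Local-move Monte Carlo for hard disks in a periodic square box:
    propose a small random displacement of one disk and reject it if it
    creates an overlap.  The uniform distribution on non-overlapping
    configurations is stationary for this chain."""
    rng = random.Random(seed)
    box = float(n_side)                           # unit lattice spacing
    # Start from a square-lattice configuration.
    pos = [(i + 0.5, j + 0.5) for i in range(n_side) for j in range(n_side)]
    accepted = 0
    for _ in range(steps):
        i = rng.randrange(len(pos))
        x, y = pos[i]
        new_xy = ((x + rng.uniform(-delta, delta)) % box,
                  (y + rng.uniform(-delta, delta)) % box)
        if not overlaps(pos, i, new_xy, radius, box):
            pos[i] = new_xy
            accepted += 1
    return pos, accepted / steps

if __name__ == "__main__":
    final, acc_rate = metropolis_hard_disks()
    print(f"acceptance rate: {acc_rate:.2f}")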

Radix sort and PATRICIA bridges

Given a finite collection of pairwise distinct infinite binary words, there is a minimal length finite prefix for each word such that the resulting collection of finite prefixes is also pairwise distinct. The set of initial segments of these finite prefixes may be identified with the vertices of a finite rooted tree in which each interior vertex has a left child, a right child or both, and the finite prefixes themselves correspond to the leaves of the tree. A depth first search of this so-called radix sort tree produces an ordering of the leaves that agrees with the lexicographic ordering of the corresponding infinite binary words. The radix sort tree stores information that is redundant for the purpose of sorting the input infinite binary words into lexicographic order. Indeed, if one deletes the out-degree one vertices in the radix sort tree and “closes up the gaps”, then the resulting PATRICIA tree maintains all the information that is necessary for sorting. We investigate the infinite bridges corresponding to the radix sort and PATRICIA chains; that is, the tree-valued Markov chains built by successively feeding the radix sort and PATRICIA algorithms the elements of an infinite i.i.d. sequence of infinite binary words. In particular, we show that the infinite PATRICIA bridges coincide with those associated with a chain introduced by Rémy for successively generating uniform random binary trees with larger and larger numbers of leaves, and we obtain a concrete representation of all such bridges in terms of sampling from an underlying real tree.
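A minimal Python sketch of the static constructions described above (the minimal distinguishing prefixes, the radix sort tree, and its PATRICIA compression, not the tree-valued chains or their bridges), assuming the infinite words are supplied as finite prefixes long enough to tell them apart:

def minimal_prefixes(words):
    """For pairwise distinct binary strings (finite stand-ins for infinite words),
    return the shortest prefix of each word that distinguishes it from the others."""
    def lcp(a, b):
        k = 0
        while k < min(len(a), len(b)) and a[k] == b[k]:
            k += 1
        return k
    return [w[: 1 + max(lcp(w, v) for v in words if v != w)] for w in words]

def radix_sort_tree(prefixes):
    """Rooted binary trie (as nested dicts) whose leaves are the given prefixes."""
    root = {}
    for p in prefixes:
        node = root
        for bit in p:
            node = node.setdefault(bit, {})
    return root

def patricia(node):
    """Delete out-degree-one vertices and 'close up the gaps', labelling each
    surviving edge with the concatenated bits of the collapsed chain."""
    compressed = {}
    for bit, child in node.items():
        label = bit
        while len(child) == 1:                    # chain of out-degree-one vertices
            (next_bit, child), = child.items()
            label += next_bit
        compressed[label] = patricia(child)
    return compressed

def leaves_in_dfs_order(node, path=""):
    """Depth-first search, 0-child before 1-child; yields leaves left to right."""
    if not node:
        yield path
        return
    for bit in sorted(node):
        yield from leaves_in_dfs_order(node[bit], path + bit)

if __name__ == "__main__":
    # Finite prefixes standing in for pairwise distinct infinite binary words.
    words = ["000010", "000111", "101010"]
    prefixes = minimal_prefixes(words)            # ['0000', '0001', '1']
    trie = radix_sort_tree(prefixes)
    print(list(leaves_in_dfs_order(trie)))        # leaves in lexicographic order
    print(patricia(trie))                         # {'000': {'0': {}, '1': {}}, '1': {}}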

Persification

"Persification" can be defined as the process of turning a mathematical result into a "story" explaining how this result applies to a concrete or real-world situation, in the manner of Persi Diaconis. We will give several examples of persification related to algebraic combinatorics.

Reinforcement in social dynamics

I will discuss reinforcement processes in some social dynamics models of philosophical interest, and point out several interesting open problems.