Help

# math


## Fabergé Fractals

Original author:
Cory Doctorow

Here's a mesmerizing gallery of "Fabergé Fractals" created by Tom Beddard, whose site also features a 2011 video of Fabergé-inspired fractal landscapes that must be seen to be believed. They're all made with Fractal Lab, a WebGL-based renderer Beddard created.

## Major Advance Towards a Proof of the Twin Prime Conjecture

Original author:
Soulskill

ananyo writes "Researchers hoping to get '2' as the answer for a long-sought proof involving pairs of prime numbers are celebrating the fact that a mathematician has wrestled the value down from infinity to 70 million. That goal is the proof of a conjecture concerning prime numbers. Primes abound among smaller numbers, but they become less and less frequent as one goes towards larger numbers. But exceptions exist: the 'twin primes,' which are pairs of prime numbers that differ in value by 2. The twin prime conjecture says that there is an infinite number of such twin pairs. Some attribute the conjecture to the Greek mathematician Euclid of Alexandria, which would make it one of the oldest open problems in mathematics. The new result, from Yitang Zhang of the University of New Hampshire in Durham, finds that there are infinitely many pairs of primes that are less than 70 million units apart. He presented his research on 13 May to an audience of a few dozen at Harvard University in Cambridge, Massachusetts. Although 70 million seems like a very large number, the existence of any finite bound, no matter how large, means that the gaps between consecutive primes don't keep growing forever."   Read more of this story at Slashdot.

## Q: Why are determinants defined the weird way they are?

Original author:
The Physicist

Physicist: This is a question that comes up a lot when you’re first studying linear algebra.  The determinant has a lot of tremendously useful properties, but it’s a weird operation.  You start with a matrix, take one number from every column and multiply them together, then do that in every possible combination, and half of the time you subtract, and there doesn’t seem to be any rhyme or reason to it.  This post will be a little math-heavy.

If you have a matrix, ${\bf M} = \left(\begin{array}{cccc}a_{11} & a_{21} & \cdots & a_{n1} \\a_{12} & a_{22} & \cdots & a_{n2} \\\vdots & \vdots & \ddots & \vdots \\a_{1n} & a_{2n} & \cdots & a_{nn}\end{array}\right)$, then the determinant is $det({\bf M}) = \sum_{\vec{p}}\sigma(\vec{p}) a_{1p_1}a_{2p_2}\cdots a_{np_n}$, where $\vec{p} = (p_1, p_2, \cdots, p_n)$ is a rearrangement of the numbers 1 through n, and $\sigma(\vec{p})$ is the “signature” or “parity” of that arrangement.  The signature is $(-1)^k$, where k is the number of times that pairs of numbers in $\vec{p}$ have to be switched to get to $\vec{p} = (1,2,\cdots,n)$.

For example, if ${\bf M} = \left(\begin{array}{ccc}a_{11} & a_{21} & a_{31} \\a_{12} & a_{22} & a_{32} \\a_{13} & a_{23} & a_{33} \\\end{array}\right) = \left(\begin{array}{ccc}4 & 2 & 1 \\2 & 7 & 3 \\5 & 2 & 2 \\\end{array}\right)$, then $\begin{array}{ll}det({\bf M}) \\= \sum_{\vec{p}}\sigma(\vec{p}) a_{1p_1}a_{2p_2}a_{3p_3} \\=\left\{\begin{array}{ll}\sigma(1,2,3)a_{11}a_{22}a_{33}+\sigma(1,3,2)a_{11}a_{23}a_{32}+\sigma(2,1,3)a_{12}a_{21}a_{33}\\+\sigma(2,3,1)a_{12}a_{23}a_{31}+\sigma(3,1,2)a_{13}a_{21}a_{32}+\sigma(3,2,1)a_{13}a_{22}a_{31}\end{array}\right.\\=a_{11}a_{22}a_{33}-a_{11}a_{23}a_{32}-a_{12}a_{21}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}-a_{13}a_{22}a_{31}\\= 4 \cdot 7 \cdot 2 - 4 \cdot 2 \cdot 3 - 2 \cdot 2 \cdot 2 +2 \cdot 2 \cdot 1 + 5 \cdot 2 \cdot 3 - 5 \cdot 7 \cdot 1\\=23\end{array}$
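The permutation-sum definition translates almost directly into code. Here's a minimal Python sketch (the function names `signature` and `det` are my own) that implements it and reproduces the 3×3 example, with columns stored the way the $a_{ij}$ indexing above reads them: $a_{ij}$ is the entry in column $i$, row $j$.

```python
from itertools import permutations
from math import prod

def signature(p):
    """Parity of a permutation: (-1)**k, where k is the number of pairwise
    swaps needed to sort p into (0, 1, ..., n-1)."""
    p = list(p)
    sign = 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]  # one swap flips the sign
            sign = -sign
    return sign

def det(columns):
    """Permutation-sum determinant; `columns` is a list of column vectors,
    so columns[i][j] is a_{(i+1)(j+1)} in the notation above."""
    n = len(columns)
    return sum(
        signature(p) * prod(columns[i][p[i]] for i in range(n))
        for p in permutations(range(n))
    )

# The worked example: columns (4,2,5), (2,7,2), (1,3,2).
print(det([[4, 2, 5], [2, 7, 2], [1, 3, 2]]))  # 23
```

Brute-forcing all $n!$ permutations is hopeless for large matrices, but for checking a small worked example it matches the definition line for line.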

Turns out (and this is the answer to the question) that the determinant of a matrix can be thought of as the (signed) volume of the parallelepiped created by the vectors that are the columns of that matrix.  In the last example, these vectors are $\vec{v}_1 = \left(\begin{array}{c}4\\2\\5\end{array}\right)$, $\vec{v}_2 = \left(\begin{array}{c}2\\7\\2\end{array}\right)$, and $\vec{v}_3 = \left(\begin{array}{c}1\\3\\2\end{array}\right)$.

The parallelepiped created by the vectors a, b, and c.

Say the volume of the parallelepiped created by $\vec{v}_1, \cdots,\vec{v}_n$ is given by $D\left(\vec{v}_1, \cdots, \vec{v}_n\right)$.  Here come some properties:

1) $D\left(\vec{v}_1, \cdots, \vec{v}_n\right)=0$, if any pair of the vectors are the same, because that corresponds to the parallelepiped being flat.

2) $D\left(a\vec{v}_1,\cdots, \vec{v}_n\right)=aD\left(\vec{v}_1,\cdots,\vec{v}_n\right)$, which is just a fancy math way of saying that doubling the length of any of the sides doubles the volume.  This also means that the determinant is linear (in each column).

3) $D\left(\vec{v}_1+\vec{w},\cdots, \vec{v}_n\right) = D\left(\vec{v}_1,\cdots, \vec{v}_n\right) + D\left(\vec{w},\cdots, \vec{v}_n\right)$, which means “linear”.  This works the same for all of the vectors in $D$.

Check this out!  By using these properties we can see that switching two vectors in the determinant swaps the sign. $\begin{array}{ll} D\left(\vec{v}_1,\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right)\\ =D\left(\vec{v}_1,\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right)+D\left(\vec{v}_1,\vec{v}_1, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 1}\\ =D\left(\vec{v}_1,\vec{v}_1+\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 3} \\ =D\left(\vec{v}_1,\vec{v}_1+\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right)-D\left(\vec{v}_1+\vec{v}_2,\vec{v}_1+\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 1} \\ =D\left(-\vec{v}_2,\vec{v}_1+\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 3} \\ =-D\left(\vec{v}_2,\vec{v}_1+\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 2} \\ =-D\left(\vec{v}_2,\vec{v}_1, \vec{v}_3\cdots, \vec{v}_n\right)-D\left(\vec{v}_2,\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 3} \\ =-D\left(\vec{v}_2,\vec{v}_1, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 1} \end{array}$

4) $D\left(\vec{v}_1,\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right)=-D\left(\vec{v}_2,\vec{v}_1, \vec{v}_3\cdots, \vec{v}_n\right)$, so switching two of the vectors flips the sign.  This is true for any pair of vectors in D.  Another way to think about this property is to say that when you exchange two directions you turn the parallelepiped inside-out.

Finally, if $\vec{e}_1 = \left(\begin{array}{c}1\\0\\\vdots\\0\end{array}\right)$, $\vec{e}_2 = \left(\begin{array}{c}0\\1\\\vdots\\0\end{array}\right)$, … $\vec{e}_n = \left(\begin{array}{c}0\\0\\\vdots\\1\end{array}\right)$, then

5) $D\left(\vec{e}_1,\vec{e}_2, \vec{e}_3\cdots, \vec{e}_n\right) = 1$, because a 1 by 1 by 1 by … box has a volume of 1.

Also notice that, for example, $\vec{v}_2 = \left(\begin{array}{c}v_{21}\\v_{22}\\\vdots\\v_{2n}\end{array}\right) = \left(\begin{array}{c}v_{21}\\0\\\vdots\\0\end{array}\right)+\left(\begin{array}{c}0\\v_{22}\\\vdots\\0\end{array}\right)+\cdots+\left(\begin{array}{c}0\\0\\\vdots\\v_{2n}\end{array}\right) = v_{21}\vec{e}_1+v_{22}\vec{e}_2+\cdots+v_{2n}\vec{e}_n$

Finally, with all of that math in place, $\begin{array}{ll} D\left(\vec{v}_1,\vec{v}_2, \cdots, \vec{v}_n\right) \\ = D\left(v_{11}\vec{e}_1+v_{12}\vec{e}_2+\cdots+v_{1n}\vec{e}_n,\vec{v}_2, \cdots, \vec{v}_n\right) \\ = D\left(v_{11}\vec{e}_1,\vec{v}_2, \cdots, \vec{v}_n\right) + D\left(v_{12}\vec{e}_2,\vec{v}_2, \cdots, \vec{v}_n\right) + \cdots + D\left(v_{1n}\vec{e}_n,\vec{v}_2, \cdots, \vec{v}_n\right) \\= v_{11}D\left(\vec{e}_1,\vec{v}_2, \cdots, \vec{v}_n\right) + v_{12}D\left(\vec{e}_2,\vec{v}_2, \cdots, \vec{v}_n\right) + \cdots + v_{1n}D\left(\vec{e}_n,\vec{v}_2, \cdots, \vec{v}_n\right) \\ =\sum_{j=1}^n v_{1j}D\left(\vec{e}_j,\vec{v}_2, \cdots, \vec{v}_n\right) \end{array}$

Doing the same thing to the second vector in D, $=\sum_{j=1}^n\sum_{k=1}^n v_{1j}v_{2k}D\left(\vec{e}_j,\vec{e}_k, \cdots, \vec{v}_n\right)$

The same thing can be done to all of the vectors in D.  But rather than writing n different summations we can write, $=\sum_{\vec{p}}\, v_{1p_1}v_{2p_2}\cdots v_{np_n}D\left(\vec{e}_{p_1},\vec{e}_{p_2}, \cdots, \vec{e}_{p_n}\right)$, where every term in $\vec{p} = \left(\begin{array}{c}p_1\\p_2\\\vdots\\p_n\end{array}\right)$ runs from 1 to n.

Whenever two of the $\vec{e}_j$ left in D are the same, then D=0.  This means that the only non-zero terms left in the summation are rearrangements, where the elements of $\vec{p}$ are each a number from 1 to n, with no repeats.

All but one of the $D\left(\vec{e}_{p_1},\vec{e}_{p_2}, \cdots, \vec{e}_{p_n}\right)$ will be in a weird order.  Switching the order in D can flip sign, and this sign is given by the signature, $\sigma(\vec{p})$.  So, $D\left(\vec{e}_{p_1},\vec{e}_{p_2}, \cdots, \vec{e}_{p_n}\right) = \sigma(\vec{p})D\left(\vec{e}_{1},\vec{e}_{2}, \cdots, \vec{e}_{n}\right)$, where $\sigma(\vec{p})=(-1)^k$, where k is the number of times that the e’s have to be switched to get to $D(\vec{e}_1, \cdots,\vec{e}_n)$.

So, $\begin{array}{ll} det({\bf M})\\ = D\left(\vec{v}_{1},\vec{v}_{2}, \cdots, \vec{v}_{n}\right)\\ =\sum_{\vec{p}}\, v_{1p_1}v_{2p_2}\cdots v_{np_n}D\left(\vec{e}_{p_1},\vec{e}_{p_2}, \cdots, \vec{e}_{p_n}\right) \\ =\sum_{\vec{p}}\, v_{1p_1}v_{2p_2}\cdots v_{np_n}\sigma(\vec{p})D\left(\vec{e}_{1},\vec{e}_{2}, \cdots, \vec{e}_{n}\right) \\ =\sum_{\vec{p}}\, \sigma(\vec{p})v_{1p_1}v_{2p_2}\cdots v_{np_n} \end{array}$

Which is exactly the definition of the determinant!  The other uses for the determinant, from finding eigenvectors and eigenvalues, to determining whether a set of vectors is linearly independent, to handling the coordinates in complicated integrals, all come from defining the determinant as the volume of the parallelepiped created from the columns of the matrix.  It’s just not always exactly obvious how.

For example: The determinant of the matrix ${\bf M} = \left(\begin{array}{cc}2&3\\1&5\end{array}\right)$ is the same as the area of this parallelogram, by definition.

The parallelepiped (in this case a 2-d parallelogram) created by (2,1) and (3,5).

Using the tricks defined in the post: $\begin{array}{ll} D\left(\left(\begin{array}{c}2\\1\end{array}\right),\left(\begin{array}{c}3\\5\end{array}\right)\right) \\[2mm] = D\left(2\vec{e}_1+\vec{e}_2,3\vec{e}_1+5\vec{e}_2\right) \\[2mm] = D\left(2\vec{e}_1,3\vec{e}_1+5\vec{e}_2\right) + D\left(\vec{e}_2,3\vec{e}_1+5\vec{e}_2\right) \\[2mm] = D\left(2\vec{e}_1,3\vec{e}_1\right) + D\left(2\vec{e}_1,5\vec{e}_2\right) + D\left(\vec{e}_2,3\vec{e}_1\right) + D\left(\vec{e}_2,5\vec{e}_2\right) \\[2mm] = 2\cdot3D\left(\vec{e}_1,\vec{e}_1\right) + 2\cdot5D\left(\vec{e}_1,\vec{e}_2\right) + 3D\left(\vec{e}_2,\vec{e}_1\right) + 5D\left(\vec{e}_2,\vec{e}_2\right) \\[2mm] = 0 + 2\cdot5D\left(\vec{e}_1,\vec{e}_2\right) + 3D\left(\vec{e}_2,\vec{e}_1\right) + 0 \\[2mm] = 2\cdot5D\left(\vec{e}_1,\vec{e}_2\right) - 3D\left(\vec{e}_1,\vec{e}_2\right) \\[2mm] = 2\cdot5 - 3 \\[2mm] =7 \end{array}$

Or, using the usual determinant-finding technique, $\left|\begin{array}{cc}2&3\\1&5\end{array}\right| = 2\cdot5 - 3\cdot1 = 7$.

## Probability theory for programmers

Jeremy Kun, a mathematics PhD student at the University of Illinois at Chicago, has posted a wonderful primer on probability theory for programmers on his blog. It's a subject vital to machine learning and data mining, and it's at the heart of much of the stuff going on with Big Data. His primer is lucid and easy to follow, even for math ignoramuses like me.

For instance, suppose our probability space is $\Omega = \left \{ 1, 2, 3, 4, 5, 6 \right \}$ and $f$ is defined by setting $f(x) = 1/6$ for all $x \in \Omega$ (here the “experiment” is rolling a single die). Then we are likely interested in more exquisite kinds of outcomes; instead of asking the probability that the outcome is 4, we might ask what is the probability that the outcome is even? This event would be the subset $\left \{ 2, 4, 6 \right \}$, and if any of these are the outcome of the experiment, the event is said to occur. In this case we would expect the probability of the die roll being even to be 1/2 (but we have not yet formalized why this is the case).
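The die-roll space above is small enough to write out literally. A minimal Python sketch (variable names `omega`, `f`, and `prob` are my own) that encodes $\Omega$, the mass function $f$, and the "probability of an event is the sum of $f$ over its outcomes" rule:

```python
from fractions import Fraction

# The single-die probability space from the passage:
# Omega = {1,...,6}, f(x) = 1/6 for every outcome.
omega = {1, 2, 3, 4, 5, 6}
f = {x: Fraction(1, 6) for x in omega}

def prob(event):
    """P(E) = sum of the mass function over the outcomes in E."""
    return sum(f[x] for x in event)

# The event "the outcome is even" is the subset {2, 4, 6}.
evens = {x for x in omega if x % 2 == 0}
print(prob(evens))  # 1/2
```

Using `Fraction` instead of floats keeps the arithmetic exact, which matches the spirit of the formal definitions.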

As a quick exercise, the reader should formulate a two-dice experiment in terms of sets. What would the probability space consist of as a set? What would the probability mass function look like? What are some interesting events one might consider (if playing a game of craps)?
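One way the two-dice exercise might be set up (this is my own sketch of an answer, not Kun's): outcomes are ordered pairs of faces, the mass function is uniform over the 36 pairs, and a craps-relevant event is "the sum is 7".

```python
from fractions import Fraction
from itertools import product

# Each outcome is an ordered pair of die faces: 36 equally likely outcomes.
omega2 = set(product(range(1, 7), repeat=2))
f2 = {w: Fraction(1, 36) for w in omega2}

def prob2(event):
    return sum(f2[w] for w in event)

# An event a craps player cares about: the two dice sum to 7.
sum_is_7 = {w for w in omega2 if w[0] + w[1] == 7}
print(prob2(sum_is_7))  # 1/6
```

Note that the outcomes are equally likely only because the pairs are ordered; the *sums* 2 through 12 are not equally likely, which is exactly what makes craps interesting.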

(Image: Dice, a Creative Commons Attribution (2.0) image from artbystevejohnson's photostream)

## Q: What are fractional dimensions? Can space have a fractional dimension?

Physicist: There are a couple of different contexts in which the word “dimension” comes up.  In the case of fractals the number of dimensions has to do (this is a little hand-wavy) with the way points in the fractal are distributed.  For example, if you have points distributed at random in space you’d say you have a three-dimensional set of points, and if they’re all arranged on a flat sheet you’d say you have a two-dimensional set of points.  Way back in the day mathematicians figured that a good way to determine the “dimensionality” of a set is to pick a point in the set, and then put progressively larger and larger spheres of radius R around it.  If the number of points contained in the sphere is proportional to $R^d$, then the set is d-dimensional.

(Left) as the sphere grows, the number of points from the line that it contains increases like R, so the line is one-dimensional. (Right) as the sphere increases in size the number of points from the rectangle that it contains increases like $R^2$, so the square is two-dimensional.
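The "sphere test" is easy to try numerically. Here's a small Python sketch (all names are my own) that scatters points on a flat sheet sitting in 3-D space, counts how many fall inside spheres of two different radii, and reads the dimension off the slope of log(count) versus log(R); for a flat sheet the slope should come out near 2.

```python
import math
import random

random.seed(0)

# 20,000 points scattered uniformly on a flat 2-D sheet embedded in 3-D.
points = [(random.uniform(-1, 1), random.uniform(-1, 1), 0.0)
          for _ in range(20000)]

def count_within(r):
    """Number of points inside a sphere of radius r centered at the origin."""
    return sum(1 for p in points if math.dist(p, (0, 0, 0)) <= r)

# If count ~ R^d, then d = log(count ratio) / log(radius ratio).
r1, r2 = 0.2, 0.4
slope = math.log(count_within(r2) / count_within(r1)) / math.log(r2 / r1)
print(round(slope, 2))  # close to 2 for a flat sheet
```

Swapping the sheet for points along a line (or filling a solid ball) pushes the slope toward 1 (or 3), which is the whole idea of the test.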

However, there’s a problem with this technique.  You can have a set that’s really d-dimensional, but on a large scale it appears to have a different dimension.  For example, a piece of paper is basically 2-D, but if you crumple it up into a ball it seems 3-D on a large enough scale.  A hairball or bundle of cables seems 3-D (by the “sphere test”), but it’s really 1-D (ideally, at least; every physical object is actually 3-D).

A “crumpled up” set seems like it has a higher dimension than it really does.  You can get around this by using smaller and smaller spheres.  Eventually you’ll get the correct dimension.

This whole “look at the number of points inside of tiny spheres and see how that number scales with size” thing works great for any halfway-reasonable set.  However, fractal sets can be “infinitely crumpled”, so no matter how small a sphere you use, you still get a dimension larger than you might expect.

The edge of the Mandelbrot set “should” be one-dimensional since it’s just a line. However, it’s infinitely twisty, and no matter how much you zoom in it stays just as messed up.

When the “sphere trick” is applied to tangled messes it doesn’t necessarily give you integers until the spheres are small enough.  With fractals there is no “small enough” (that should totally be a terrible movie tag line), and you find that they have a dimension that’s often a fraction.  The dimension of the Mandelbrot set’s boundary (picture above) is 2, which is the highest it can be, but there are more interesting (but less pretty) fractals out there with genuinely fractional dimensions, like the “Koch snowflake” which has a dimension of approximately 1.262.
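That 1.262 isn't arbitrary: for a self-similar fractal like the Koch curve, each construction step replaces a segment with N = 4 copies scaled down by a factor of s = 3, and the dimension is log(N)/log(s). A one-liner to check:

```python
import math

# Similarity dimension of the Koch curve: each step replaces a segment
# with N = 4 copies, each scaled down by a factor of s = 3.
d = math.log(4) / math.log(3)
print(round(d, 3))  # 1.262
```

The same formula gives 2 for a square (4 copies scaled by 2) and 3 for a cube (8 copies scaled by 2), so it agrees with ordinary dimension on non-fractal shapes.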

The Koch snowflake, which is so messed up that it has a dimension greater than 1, but less than 2.

That all said, when somebody (looking at you, all mathematicians) talks about fractional dimensions, they’re really talking about a weird, abstract, and not at all physical notion of dimension.  There’s no such thing as a “2.5-dimensional universe”.  When we talk about the “dimension of space” we’re talking about the number of completely different directions that are available, not the whole “sphere thing”.  The dimensions of space are either there or not, so while you could have 4 dimensions, you couldn’t have 3.5.

## One Cool Day Job: Building Algorithms For Elevators

McGruber writes "The Wall Street Journal has an article about Theresa Christy, a mathematician who develops algorithms for Otis Elevator Company, the world's largest manufacturer and maintainer of people-moving products including elevators, escalators and moving walkways. As an Otis research fellow, Ms. Christy writes strings of code that allow elevators to do essentially the greatest good for the most people — including the building's owner, who has to allocate considerable space for the concrete shafts that house the cars. Her work often involves watching computer simulation programs that replay elevator decision-making. 'I feel like I get paid to play videogames. I watch the simulation, and I see what happens, and I try to improve the score I am getting,' she says."   Read more of this story at Slashdot.

## In Calculator Arms Race, Casio Fires Back: Color Touchscreen ClassPad

KermMartian writes "In what seems to be an accelerating arms race for graphing calculator supremacy between Texas Instruments and Casio, the underdog Casio has fired a return salvo to the recently-announced TI-84 Plus C Silver Edition. The new ClassPad fx-CP400 has a massive color touchscreen and a Matlab-esque CAS. Though not accepted on the SAT/ACT, will such a powerful device gain a strong following among engineers and professionals?"   Read more of this story at Slashdot.

## Randomly Generated Math Article Accepted By 'Open-Access' Journal

call -151 writes "Many years ago, a human-generated intentionally nonsense paper was accepted by the (prominent) literary culture journal Social Text. In August, a randomly-generated nonsense mathematics paper was accepted by one of the many low-tier 'open-access' research mathematics journals. The software Mathgen, which generated the accepted submission, takes as inputs author names (or those can be randomly selected also) and generates nicely TeX'd and impressive-sounding sentences which are grammatically correct but mathematically disconnected nonsense. This was reviewed by a human, (quickly, for math, in 12 days) and the reviewers' comments mention superficial problems with the submission (PDF). The references are also randomly-generated and rather hilarious. For those with concerns about submitting to lower-tier journals in an effort to promote open access, this is not a good sign!"   Read more of this story at Slashdot.

## Dana is a 28-year-old mathematician and pornographer. She...

Dana is a 28-year-old mathematician and pornographer. She makes a porn magazine for women who like dick.