
-- By the Physicist


Physicist: Political conversations with family, for one.

“Friction” is a blanket term to cover all of the wide variety of effects that make it difficult for one surface to slide past another.

There are some chemical bonds (glue is an extreme example), there are electrical effects (like van der Waals forces), and then there are effects from simple physical barriers.  A pair of rough surfaces will have more friction than a pair of smooth surfaces, because the “peaks” of one surface can fall into the “valleys” of the other, meaning that to keep moving either something needs to break, or the surfaces need to push apart briefly.

This can be used in hand-wavy arguments for why friction is proportional to the normal force pressing surfaces together.  It’s not terribly intuitive why, but it turns out that the minimum amount of force, Ff, needed to push surfaces past each other (needed to overcome the “friction force”) is proportional to the force, N, pressing those surfaces together.  In fact this is how the coefficient of friction, μ, is defined: Ff = μN.

The force required to push this bump “up hill” is proportional to the normal force pressing the surfaces together.  This is more or less where the friction equation comes from.

The rougher the surfaces the more often “hills” will have to push over each other, and the steeper those hills will be.  For most practical purposes friction is caused by the physical roughness of the surfaces involved.  However, even if you make a surface perfectly smooth there’s still some friction.  If that weren’t the case, then very smooth things would feel kinda oily (some do actually).

Sheets of glass tend to be very nearly perfectly smooth (down to the level of molecules), and most of the friction to be found with glass comes from the subtle electrostatic properties of the glass and the surface that’s in contact with it.  But why is that friction force also proportional to the normal force?  Well… everything’s approximately linear over small enough forces/distances/times.  That’s how physics is done!

That may sound like an excuse, but that’s only because it is.

Q: It intuitively feels like the friction force should be directly proportional to the surface area between materials, yet this is never considered in any practical analysis or application.  What’s going on here?

A: The total lack of consideration of surface area is an artifact of the way friction is usually considered.  Greater surface area does mean greater friction, but it also means that the normal force is more spread out, and less force is going through any particular region of the surface.  These effects happen to balance out.

If you have one pillar the total friction is μN.  If you have two pillars, each supports half of the weight and thus exerts half the normal force, so the total friction is μN/2 + μN/2 = μN.

Pillars are just a cute way of talking about surface area in a controlled way.  The same argument applies to surfaces in general.
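If it helps, here’s the pillar argument as a few lines of Python (the coefficient and weight are made-up numbers, just for illustration):

mu = 0.4     # coefficient of friction (made-up value)
N = 100.0    # total weight pressing down, in newtons (made-up value)
# Spreading the same weight over more pillars (more surface area) means
# less normal force per pillar, and the total friction never changes.
for pillars in (1, 2, 10, 1000):
    total_friction = pillars * mu * (N / pillars)
    print(pillars, total_friction)   # always 40.0 = mu * N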

Q: If polishing surfaces decreases friction, then why does polishing metal surfaces make them fuse together?

A: Polishing two metal surfaces until they can fuse has to do with giving them both more opportunities to fuse (more of their surfaces can directly contact each other without “peaks and valleys” to deal with), and polishing also helps remove impurities and oxidized material.  For example, if you want to weld two old pieces of iron together you need to get all of the rust off first.  Pure iron can be welded together, but iron oxide (rust) can’t.  Gold is an extreme example of this.  Cleaned and polished gold doesn’t even need to be heated: you can just slap two pieces together and they’ll fuse.

Inertia welders also need smooth surfaces so that the friction from point to point will be constant (you really don’t want anything to catch suddenly, or everyone nearby is in trouble).  This isn’t important to the question; it’s just that inertia welders are awesome.

Q: Why does friction convert kinetic energy into heat?

A: The very short answer is “entropy”.  Friction involves, at the lowest level, a bunch of atoms interacting and bumping into each other.  Unless that bumping somehow perfectly reverses itself, one atom will bump into the next, which will bump into the next, which will bump into the next, etc.

And that’s essentially what heat is.  So the movement of one surface over another causes the atoms in each to get knocked about and jiggle.  That loss of energy to heat is what causes the surfaces to slow down and stop.


Physicist: Generally speaking, by the time a gas is hot enough to be seen, it’s a plasma.

The big difference between regular gas and plasma is that in a plasma a fair fraction of the atoms are ionized.  That is, the gas is so hot, and the atoms are slamming around so hard, that some of the electrons are given enough energy to (temporarily) escape their host atoms.  The most important effect of this is that a plasma gains some electrical properties that a non-ionized gas doesn’t have; it becomes conductive and it responds to electric and magnetic fields.  In fact, this is a great test for whether or not something is a plasma.

For example, our Sun (or any star) is a miasma of incandescent plasma.  One way to see this is to notice that the solar flares that leap from its surface are directed along the Sun’s (generally twisted up and spotty) magnetic fields.

A solar flare as seen in the x-ray spectrum.  The material of the flare, being a plasma, is affected and directed by the Sun’s magnetic field.  Normally this brings it back into the surface (which is for the best).

We also see the conductance of plasma in “toys” like a Jacob’s Ladder.  Spark gaps have the weird property that the higher the current, the more ionized the air in the gap, and the lower the resistance (more plasma = more conductive).  There are even scary machines built using this principle.  Basically, in order for a material to be conductive there need to be charges in it that are free to move around.  In metals those charges are shared by atoms; electrons can move from one atom to the next.  But in a plasma the material itself is made of free charges; it’s conductive almost by definition.

A Jacob’s Ladder.  The electricity has an easier time flowing through the long thread of highly-conductive plasma than it does flowing through the tiny gap of poorly-conducting air.

As it happens, fire passes all these tests with flying colors.  Fire is a genuine plasma.  Maybe not the best plasma, or the most ionized plasma, but it does alright.

The free charges inside of the flame are pushed and pulled by the electric field between these plates, and as those charged particles move they drag the rest of the flame with them.

Even small and relatively cool fires, like candle flames, respond strongly to electric fields and are even pretty conductive.  There’s a beautiful video here that demonstrates this a lot better than this post does.

The candle picture is from here, and the Jacob’s ladder picture is from here.


Physicist: This is a question that comes up a lot when you’re first studying linear algebra.  The determinant has a lot of tremendously useful properties, but it’s a weird operation.  You start with a matrix, take one number from every column and multiply them together, then do that in every possible combination, and half of the time you subtract, and there doesn’t seem to be any rhyme or reason why.  This particular post will be a little math heavy.

If you have a matrix, {\bf M} = \left(\begin{array}{cccc}a_{11} & a_{21} & \cdots & a_{n1} \\a_{12} & a_{22} & \cdots & a_{n2} \\\vdots & \vdots & \ddots & \vdots \\a_{1n} & a_{2n} & \cdots & a_{nn}\end{array}\right), then the determinant is det({\bf M}) = \sum_{\vec{p}}\sigma(\vec{p}) a_{1p_1}a_{2p_2}\cdots a_{np_n}, where \vec{p} = (p_1, p_2, \cdots, p_n) is a rearrangement of the numbers 1 through n, and \sigma(\vec{p}) is the “signature” or “parity” of that arrangement.  The signature is (-1)^k, where k is the number of times that pairs of numbers in \vec{p} have to be switched to get to \vec{p} = (1,2,\cdots,n).

For example, if {\bf M} = \left(\begin{array}{ccc}a_{11} & a_{21} & a_{31} \\a_{12} & a_{22} & a_{32} \\a_{13} & a_{23} & a_{33} \\\end{array}\right) = \left(\begin{array}{ccc}4 & 2 & 1 \\2 & 7 & 3 \\5 & 2 & 2 \\\end{array}\right), then

\begin{array}{ll}det({\bf M}) \\= \sum_{\vec{p}}\sigma(\vec{p}) a_{1p_1}a_{2p_2}a_{3p_3} \\=\left\{\begin{array}{ll}\sigma(1,2,3)a_{11}a_{22}a_{33}+\sigma(1,3,2)a_{11}a_{23}a_{32}+\sigma(2,1,3)a_{12}a_{21}a_{33}\\+\sigma(2,3,1)a_{12}a_{23}a_{31}+\sigma(3,1,2)a_{13}a_{21}a_{32}+\sigma(3,2,1)a_{13}a_{22}a_{31}\end{array}\right.\\=a_{11}a_{22}a_{33}-a_{11}a_{23}a_{32}-a_{12}a_{21}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}-a_{13}a_{22}a_{31}\\= 4 \cdot 7 \cdot 2 - 4 \cdot 2 \cdot 3 - 2 \cdot 2 \cdot 2 +2 \cdot 2 \cdot 1 + 5 \cdot 2 \cdot 3 - 5 \cdot 7 \cdot 1\\=23\end{array}
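If it helps to see that definition as an algorithm, here’s a direct (and deliberately brute-force) Python translation of the permutation sum.  The function names are ours, not standard, and since the determinant of a matrix equals the determinant of its transpose, the code can index by rows even though the post indexes by columns:

from itertools import permutations
from math import prod

def signature(p):
    # (-1)^k, where k is the number of pairwise swaps needed to sort p
    p, swaps = list(p), 0
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            swaps += 1
    return -1 if swaps % 2 else 1

def det(M):
    # Sum over every rearrangement p of sigma(p) * a_(1,p1) * ... * a_(n,pn)
    n = len(M)
    return sum(signature(p) * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(det([[4, 2, 1], [2, 7, 3], [5, 2, 2]]))   # 23, matching the example above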

Turns out (and this is the answer to the question) that the determinant of a matrix can be thought of as the volume of the parallelepiped created by the vectors that are columns of that matrix.  In the last example, these vectors are \vec{v}_1 = \left(\begin{array}{c}4\\2\\5\end{array}\right), \vec{v}_2 = \left(\begin{array}{c}2\\7\\2\end{array}\right), and \vec{v}_3 = \left(\begin{array}{c}1\\3\\2\end{array}\right).

The parallelepiped created by the vectors a, b, and c.

Say the volume of the parallelepiped created by \vec{v}_1, \cdots,\vec{v}_n is given by D\left(\vec{v}_1, \cdots, \vec{v}_n\right).  Here come some properties:

1) D\left(\vec{v}_1, \cdots, \vec{v}_n\right)=0, if any pair of the vectors are the same, because that corresponds to the parallelepiped being flat.

2) D\left(a\vec{v}_1,\cdots, \vec{v}_n\right)=aD\left(\vec{v}_1,\cdots,\vec{v}_n\right), which is just a fancy math way of saying that doubling the length of any of the sides doubles the volume.  This is half of what it means for the determinant to be linear (in each column).

3) D\left(\vec{v}_1+\vec{w},\cdots, \vec{v}_n\right) = D\left(\vec{v}_1,\cdots, \vec{v}_n\right) + D\left(\vec{w},\cdots, \vec{v}_n\right), which is the other half of “linear”.  This works the same for all of the vectors in D.

Check this out!  By using these properties we can see that switching two vectors in the determinant swaps the sign.

\begin{array}{ll}    D\left(\vec{v}_1,\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right)\\    =D\left(\vec{v}_1,\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right)+D\left(\vec{v}_1,\vec{v}_1, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 1}\\    =D\left(\vec{v}_1,\vec{v}_1+\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 3} \\    =D\left(\vec{v}_1,\vec{v}_1+\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right)-D\left(\vec{v}_1+\vec{v}_2,\vec{v}_1+\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 1} \\    =D\left(-\vec{v}_2,\vec{v}_1+\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 3} \\    =-D\left(\vec{v}_2,\vec{v}_1+\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 2} \\    =-D\left(\vec{v}_2,\vec{v}_1, \vec{v}_3\cdots, \vec{v}_n\right)-D\left(\vec{v}_2,\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 3} \\    =-D\left(\vec{v}_2,\vec{v}_1, \vec{v}_3\cdots, \vec{v}_n\right) & \textrm{Prop. 1}    \end{array}

4) D\left(\vec{v}_1,\vec{v}_2, \vec{v}_3\cdots, \vec{v}_n\right)=-D\left(\vec{v}_2,\vec{v}_1, \vec{v}_3\cdots, \vec{v}_n\right), so switching two of the vectors flips the sign.  This is true for any pair of vectors in D.  Another way to think about this property is to say that when you exchange two directions you turn the parallelepiped inside-out.

Finally, if \vec{e}_1 = \left(\begin{array}{c}1\\0\\\vdots\\0\end{array}\right), \vec{e}_2 = \left(\begin{array}{c}0\\1\\\vdots\\0\end{array}\right), … \vec{e}_n = \left(\begin{array}{c}0\\0\\\vdots\\1\end{array}\right), then

5) D\left(\vec{e}_1,\vec{e}_2, \vec{e}_3\cdots, \vec{e}_n\right) = 1, because a 1 by 1 by 1 by … box has a volume of 1.

Also notice that, for example, \vec{v}_2 = \left(\begin{array}{c}v_{21}\\v_{22}\\\vdots\\v_{2n}\end{array}\right) = \left(\begin{array}{c}v_{21}\\0\\\vdots\\0\end{array}\right)+\left(\begin{array}{c}0\\v_{22}\\\vdots\\0\end{array}\right)+\cdots+\left(\begin{array}{c}0\\0\\\vdots\\v_{2n}\end{array}\right) = v_{21}\vec{e}_1+v_{22}\vec{e}_2+\cdots+v_{2n}\vec{e}_n

Now, with all of that math in place,

\begin{array}{ll}  D\left(\vec{v}_1,\vec{v}_2, \cdots, \vec{v}_n\right) \\  = D\left(v_{11}\vec{e}_1+v_{12}\vec{e}_2+\cdots+v_{1n}\vec{e}_n,\vec{v}_2, \cdots, \vec{v}_n\right) \\  = D\left(v_{11}\vec{e}_1,\vec{v}_2, \cdots, \vec{v}_n\right) + D\left(v_{12}\vec{e}_2,\vec{v}_2, \cdots, \vec{v}_n\right) + \cdots + D\left(v_{1n}\vec{e}_n,\vec{v}_2, \cdots, \vec{v}_n\right) \\= v_{11}D\left(\vec{e}_1,\vec{v}_2, \cdots, \vec{v}_n\right) + v_{12}D\left(\vec{e}_2,\vec{v}_2, \cdots, \vec{v}_n\right) + \cdots + v_{1n}D\left(\vec{e}_n,\vec{v}_2, \cdots, \vec{v}_n\right) \\    =\sum_{j=1}^n v_{1j}D\left(\vec{e}_j,\vec{v}_2, \cdots, \vec{v}_n\right)  \end{array}

Doing the same thing to the second vector in D,

=\sum_{j=1}^n\sum_{k=1}^n v_{1j}v_{2k}D\left(\vec{e}_j,\vec{e}_k, \cdots, \vec{v}_n\right)

The same thing can be done to all of the vectors in D.  But rather than writing n different summations we can write, =\sum_{\vec{p}}\, v_{1p_1}v_{2p_2}\cdots v_{np_n}D\left(\vec{e}_{p_1},\vec{e}_{p_2}, \cdots, \vec{e}_{p_n}\right), where every term in \vec{p} = \left(\begin{array}{c}p_1\\p_2\\\vdots\\p_n\end{array}\right) runs from 1 to n.

Whenever any two of the \vec{e}_j left in D are the same, then D=0.  This means that the only non-zero terms left in the summation are rearrangements, where the elements of \vec{p} are each a number from 1 to n, with no repeats.

All but one of the D\left(\vec{e}_{p_1},\vec{e}_{p_2}, \cdots, \vec{e}_{p_n}\right) will be in a weird order.  Each switch of two entries in D flips the sign, and the overall sign is given by the signature, \sigma(\vec{p}).  So, D\left(\vec{e}_{p_1},\vec{e}_{p_2}, \cdots, \vec{e}_{p_n}\right) = \sigma(\vec{p})D\left(\vec{e}_{1},\vec{e}_{2}, \cdots, \vec{e}_{n}\right), where \sigma(\vec{p})=(-1)^k and k is the number of times that the e’s have to be switched to get to D(\vec{e}_1, \cdots,\vec{e}_n).

So,

\begin{array}{ll}    det({\bf M})\\    = D\left(\vec{v}_{1},\vec{v}_{2}, \cdots, \vec{v}_{n}\right)\\    =\sum_{\vec{p}}\, v_{1p_1}v_{2p_2}\cdots v_{np_n}D\left(\vec{e}_{p_1},\vec{e}_{p_2}, \cdots, \vec{e}_{p_n}\right) \\    =\sum_{\vec{p}}\, v_{1p_1}v_{2p_2}\cdots v_{np_n}\sigma(\vec{p})D\left(\vec{e}_{1},\vec{e}_{2}, \cdots, \vec{e}_{n}\right) \\    =\sum_{\vec{p}}\, \sigma(\vec{p})v_{1p_1}v_{2p_2}\cdots v_{np_n}    \end{array}

Which is exactly the definition of the determinant!  The other uses for the determinant, from finding eigenvectors and eigenvalues, to determining whether a set of vectors is linearly independent or not, to handling the coordinates in complicated integrals, all come from defining the determinant as the volume of the parallelepiped created from the columns of the matrix.  It’s just not always exactly obvious how.

For example: The determinant of the matrix {\bf M} = \left(\begin{array}{cc}2&3\\1&5\end{array}\right) is the same as the area of this parallelogram, by definition.

The parallelepiped (in this case a 2-d parallelogram) created by (2,1) and (3,5).

Using the tricks defined in the post:

\begin{array}{ll}  D\left(\left(\begin{array}{c}2\\1\end{array}\right),\left(\begin{array}{c}3\\5\end{array}\right)\right) \\[2mm]  = D\left(2\vec{e}_1+\vec{e}_2,3\vec{e}_1+5\vec{e}_2\right) \\[2mm]  = D\left(2\vec{e}_1,3\vec{e}_1+5\vec{e}_2\right) + D\left(\vec{e}_2,3\vec{e}_1+5\vec{e}_2\right) \\[2mm]  = D\left(2\vec{e}_1,3\vec{e}_1\right) + D\left(2\vec{e}_1,5\vec{e}_2\right) + D\left(\vec{e}_2,3\vec{e}_1\right) + D\left(\vec{e}_2,5\vec{e}_2\right) \\[2mm]  = 2\cdot3D\left(\vec{e}_1,\vec{e}_1\right) + 2\cdot5D\left(\vec{e}_1,\vec{e}_2\right) + 3D\left(\vec{e}_2,\vec{e}_1\right) + 5D\left(\vec{e}_2,\vec{e}_2\right) \\[2mm]  = 0 + 2\cdot5D\left(\vec{e}_1,\vec{e}_2\right) + 3D\left(\vec{e}_2,\vec{e}_1\right) + 0 \\[2mm]  = 2\cdot5D\left(\vec{e}_1,\vec{e}_2\right) - 3D\left(\vec{e}_1,\vec{e}_2\right) \\[2mm]  = 2\cdot5 - 3 \\[2mm]  =7  \end{array}

Or, using the usual determinant-finding-technique, det\left|\begin{array}{cc}2&3\\1&5\end{array}\right| = 2\cdot5 - 3\cdot1 = 7.
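As a sanity check, the brute-force det function sketched earlier in this post agrees, with the two vectors arranged as the columns of the matrix:

print(det([[2, 3], [1, 5]]))   # 2*5 - 3*1 = 7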

 


Physicist: The wave-like property of particles allows you to do a lot of cute things with particles that would otherwise seem impossible, but making a particle disappear isn’t one of them.  You can use destructive interference to make it very unlikely to find a particle in particular places, which is exactly what’s happening in the double slit experiment, and is also the basis behind how “diffraction gratings” work.

The wave nature of things (in this case light) can be used to get it to go places it normally wouldn’t (right), or not go places it normally would (left).  By using two slits you can interfere the light waves to cause the light to never show up in places where it would when passing through one slit (dark bands in the lower left).

But in order to destroy a particle using destructive interference you’d need to ensure that there was nowhere left with any constructive interference.  In the pictures above there are dark regions, but also light regions.  Turns out there are some very solid mathematical principles and physical laws that don’t allow waves to behave that way (not just waves in particle physics, but waves in general), not the least of which is the conservation of energy.  So, say you’ve got some kind of waves bouncing around (light waves, sound waves, particle fields, whatever), and you want to make new waves to cancel them out.  You can certainly do that, but only in small regions.

How noise canceling headphones work.  Rather than destroying sound around them, they cancel out the sound on one side that’s coming from the other.  But ultimately they’re just speakers that make the environment louder overall.

The new waves add energy to the overall system, so no matter how hard you try to cancel things out, you’ll always end up adding energy somewhere; it’s just a question of where.  Noise canceling headphones are a beautiful example.  They actively produce sound in such a way that it destructively interferes with the sound waves coming from nearby, in the anti-headward direction.  Although they create quiet in the ears, they create noise everywhere else.  There’s no getting around it.
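Here’s a toy version of that you can run: a one-dimensional “room”, with the anti-noise wave mirrored about the listener’s position (every number in the setup is invented for illustration).  The ear ends up silent, but the average sound energy in the room roughly doubles:

import numpy as np

x = np.linspace(0, 20, 4001)        # positions in the room (arbitrary units)
t = np.linspace(0, 1, 400)          # one full period of the tone
k, w, ear = 2*np.pi, 2*np.pi, 10.0  # wavenumber, frequency, listener position
X, T = np.meshgrid(x, t)
noise = np.sin(k*X - w*T)                # the offending noise wave
anti = -np.sin(k*(2*ear - X) - w*T)      # anti-noise, mirrored about the ear
quiet = np.abs(noise + anti)[:, np.argmin(np.abs(x - ear))].max()
print(quiet)                             # ~0: silence at the ear
print(np.mean(noise**2))                 # average energy, noise alone: 0.5
print(np.mean((noise + anti)**2))        # with anti-noise: ~1.0, roughly double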

Say you’ve got a bunch of noise bouncing about.  Noise canceling techniques can create a quiet region, but end up generating more total noise outside of that region.

Similarly, you could try to kill off particles by creating new waves in the particle’s field (each kind of particle has a corresponding field; for example, light particles are called “photons” and they have the electromagnetic field) that cancel out the particle.  You can totally create a particle “dead zone”, assuming you know exactly what the particle’s wave form is.  Headphones need this as well; you can’t just get rid of sound, you first need to know exactly what the sound is.  However, anything you try will end up adding more particles, and it won’t destroy the particle you’re worried about anyway, just redirect it.

There are a few ways to destroy a particle.  You can chuck it out (which is as good as destroying it, for most practical purposes), or you can annihilate it.  But annihilation is a bit of a cheat; you just put a particle in a chamber with its anti-particle and let nature take its course.  That doesn’t use waves and interference (in the way we normally think of waves); the particles just combine, disappear, and leave behind a burst of energy.  But not in a cool, wavey, interferey kind of way.

The pictures at the top of this post are from here and here.


Physicist: There are a couple of different contexts in which the word “dimension” comes up.  In the case of fractals the number of dimensions has to do (this is a little hand-wavy) with the way points in the fractal are distributed.  For example, if you have points distributed at random in space you’d say you have a three-dimensional set of points, and if they’re all arranged on a flat sheet you’d say you have a two-dimensional set of points.  Way back in the day mathematicians figured that a good way to determine the “dimensionality” of a set is to pick a point in the set, and then put progressively larger and larger spheres of radius R around it.  If the number of points contained in the sphere is proportional to R^d, then the set is d-dimensional.

(Left) As the sphere grows, the number of points from the line that it contains increases like R, so the line is one-dimensional.  (Right) As the sphere grows, the number of points from the rectangle that it contains increases like R^2, so the rectangle is two-dimensional.
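The sphere test is easy to try numerically.  Here’s a rough sketch (the point sets and radii are made up for the demonstration): scatter points, count how many fall inside spheres of growing radius, and read the dimension off the slope of log N versus log R:

import numpy as np

rng = np.random.default_rng(0)

def estimate_dimension(points, radii):
    # N(R) ~ R^d, so the slope of log N vs log R is the dimension d
    dists = np.linalg.norm(points, axis=1)
    counts = np.array([(dists < R).sum() for R in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    return slope

n = 200_000
line = np.c_[rng.uniform(-1, 1, n), np.zeros(n), np.zeros(n)]   # a 1-D set
sheet = np.c_[rng.uniform(-1, 1, (n, 2)), np.zeros(n)]          # a 2-D set
radii = np.linspace(0.05, 0.5, 20)
print(estimate_dimension(line, radii))    # ~1.0
print(estimate_dimension(sheet, radii))   # ~2.0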

However, there’s a problem with this technique.  You can have a set that’s really d-dimensional, but on a large scale it appears to be a different dimension.  For example, a piece of paper is basically 2-D, but if you crumple it up into a ball it seems 3-D on a large enough scale.  A hairball or bundle of cables seems 3-D (by the “sphere test”), but they’re really 1-D (Ideally at least.  Every physical object is always 3-D).

A “crumpled up” set seems like it has a higher dimension than it really does.  You can get around this by using smaller and smaller spheres.  Eventually you’ll get the correct dimension.

This whole “look at the number of points inside of tiny spheres and see how that number scales with size” thing works great for any half-way reasonable set.  However, fractal sets can be “infinitely crumpled”, so no matter how small a sphere you use, you still get a dimension larger than you might expect.

The edge of the Mandelbrot set “should” be one-dimensional since it’s just a line. However, it’s infinitely twisty, and no matter how much you zoom in it stays just as messed up.

When the “sphere trick” is applied to tangled messes it doesn’t necessarily give you integer numbers until the spheres are small enough.  With fractals there is no “small enough” (that should totally be a terrible movie tag line), and you find that they have a dimension that’s often a fraction.  The dimension of the Mandelbrot set’s boundary (picture above) is 2, which is the highest it can be, but there are more interesting (but less pretty) fractals out there with genuinely fractional dimensions, like the “Koch snowflake”, which has a dimension of approximately 1.262.

The Koch snowflake, which is so messed up that it has a dimension greater than 1, but less than 2.
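That 1.262 isn’t mysterious, by the way: each straight segment of the Koch curve gets replaced by 4 copies of itself at 1/3 the size, which pins the dimension at log 4/log 3.

import math
print(math.log(4) / math.log(3))   # 1.2618..., the Koch curve's dimension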

That all said, when somebody (looking at you, all mathematicians) talks about fractional dimensions, they’re really talking about a weird, abstract, and not at all physical notion of dimension.  There’s no such thing as a “2.5-dimensional universe”.  When we talk about the “dimension of space” we’re talking about the number of completely different directions that are available, not the whole “sphere thing”.  The dimensions of space are either there or not, so while you could have 4 dimensions, you couldn’t have 3.5.


Physicist: Not with any current, or remotely feasible, technology.  The method in use by the universe today (get several Suns’ worth of mass into a big pile and wait) is a pretty effective way to create black holes.

In theory, all you need to do to create an artificial black hole (a “black faux”?) is to get a large amount of energy and matter into a very small volume.  The easiest method would probably be to use some kind of massive, super-duper-accelerators.  The problem is that black holes are dense, and the smaller and less massive they are the denser they need to be.

A black hole with the mass of the Earth would be so small you could lose it pretty easily.  Except for all the gravity.

But there are limits to how dense matter can get on its own.  The density of an atomic nucleus, where essentially all of the matter of an atom is concentrated, is about the highest density attainable by matter: about 10^18 kg/m^3, or about a thousand million million times denser than water.  This density is also the approximate density of neutron stars (which are basically giant atomic nuclei).

When a star runs out of fuel and collapses, this is the densest that it can get.  If a star has less than about 3 times as much mass as our Sun, then when it gets to this density it stops, and then hangs out forever.  If a star has more than 3 solar masses, then as it collapses, on its way to neutron-star-density, it becomes a black hole (a black hole with more mass needs less density).
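To put rough numbers on “more mass needs less density”, here’s a quick back-of-the-envelope sketch using the Schwarzschild radius, r = 2GM/c^2 (constants are approximate):

import math
G, c = 6.674e-11, 2.998e8                # SI units
M_sun, M_earth = 1.989e30, 5.972e24      # kg

def black_hole(mass):
    r = 2 * G * mass / c**2              # Schwarzschild radius (m)
    density = mass / (4/3 * math.pi * r**3)
    return r, density

print(black_hole(M_earth))     # radius ~9 mm: easy to lose
print(black_hole(3 * M_sun))   # radius ~9 km, density ~10^18 kg/m^3: roughly
                               # nuclear density, which is why 3 Suns is the cutoff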

The long-winded point is: in order to create a black hole smaller than 3 Suns (which would be what you’re looking for if you want to keep it around), it’s not a question of crushing stuff.  Instead you’d need to use energy, and the easiest way to get a bunch of energy into one place is to use kinetic energy.

There’s some disagreement about the minimum size that a black hole can be.  Without resorting to fairly exotic, “lots of extra dimensions” physics, the minimum mass should be somewhere around 2\times 10^{-21} grams.  That seems small, but it’s very difficult (probably impossible) to get even that much mass/energy into a small enough region.  A black hole with this mass would be about 10^{-47} m across, which is way, way, way smaller than a single electron (about 10^{-15} m).  But unfortunately, a particle can’t be expected to concentrate energy in a region smaller than the particle itself.  So using whatever “ammo” you can get into a particle accelerator, you find that the energy requirements are a little steeper.

To merely say that you’d need to accelerate particles to nearly the speed of light doesn’t convey the stupefying magnitude of the amount of energy you’d need to get a collision capable of creating a black hole.  A pair of protons would need to have a “gamma” (a useful way to talk about ludicrously large speeds) of about 10^{40}, or a pair of lead nuclei would need to have a gamma of about 10^{37}, when they collide in order for a black hole to form.  This corresponds to the total energy of all the mass in a small mountain range.  For comparison, a nuclear weapon only releases the energy of several grams of matter.
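Taking the post’s numbers at face value, the arithmetic is short and the result is absurd:

m_p, c = 1.673e-27, 2.998e8     # proton mass (kg), speed of light (m/s)
gamma = 1e40                    # the Lorentz factor quoted above
E = gamma * m_p * c**2          # total energy of one such proton (J)
print(E)             # ~1.5e30 J
print(E / c**2)      # mass-equivalent ~1.7e13 kg: small-mountain-range territory
# A nuclear weapon, by comparison, releases the energy of a few grams: ~10^14 J.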

CERN, or any other accelerator ever likely to be created, falls short in the sense that a salted slug in the Ironman falls short.

There’s nothing else in the universe that behaves like a black hole.  They are deeply weird in a lot of ways.  But a couple of the properties normally restricted to black holes can be simulated with other things.  There are “artificial black holes” created in laboratories to study Hawking radiation, but you’d never recognize them.  The experimental setup involves tubes of water, or laser beams, and lots of computers.  No gravity, no weird spacetime stuff, nothin’.  If you were in the lab, you’d never know that black holes were being studied.


The original question was: I’ve heard somewhere that they’re also trying to build computers using molecules, like DNA. In general would it work to try and simulate a factoring algorithm using real world things, and then let the physics of the interactions stand-in for the computer calculation? Since the real-world does all kinds of crazy calculations in no time.

Physicist: The amount of time that goes into, say, calculating how two electrons bounce off of each other is very humbling.  It’s terribly frustrating that the universe has no hang ups about doing it so fast.

In some sense, basic physical laws are the basis of how all calculations are done.  It’s just that modern computers are “Turing machines”, a very small subset of all the possible kinds of computational devices.  Basically, if your calculating machine consists of the manipulation of symbols (which in turn can always be reduced to the manipulation of 1’s and 0’s), then you’re talking about Turing machines.  In the earlier epoch of computer science there was a strong case for analog computers over digital computers.  Analog computers use the properties of circuit elements like capacitors, inductors, or even just the layout of wiring, to do mathematical operations.  In their heyday they were faster than digital computers.  However, they’re difficult to design, not nearly as versatile, and they’re no longer faster.

Nordsieck's Differential Analyzer was an analog computer used for solving differential equations.

Any physical phenomenon that represents information in definite, discrete states is doing the same thing a digital computer does; it’s just a question of speed.  To see other kinds of computation it’s necessary to move into non-digital kinds of information.  One beautiful example is the gravity-powered square root finder.

Newtonian physics used to find the square root of numbers. Put a marble next to a number, N, (white dots) on the slope, and the marble will land on the ground at a distance proportional to √N.

When you put a marble on a ramp, the horizontal distance it will travel before hitting the ground is proportional to the square root of how far up the ramp it started.  Another mechanical calculator, the planimeter, can find the area of any shape just by tracing along the edge.  Admittedly, a computer could do both calculations a heck of a lot faster, but they’re still decent enough examples.
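Here’s the marble calculator as code (treating the marble as a frictionless sliding point mass; a real rolling marble comes off the ramp a bit slower, but the square-root proportionality survives):

import math

def landing_distance(ramp_height, table_height, g=9.81):
    v = math.sqrt(2 * g * ramp_height)    # speed at the bottom of the ramp
    t = math.sqrt(2 * table_height / g)   # time to fall to the floor
    return v * t                          # = 2*sqrt(ramp_height*table_height)

for N in (1, 4, 9, 16):
    print(N, landing_distance(0.01 * N, 1.0))   # distances go like sqrt(N)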

Despite the power of digital computers, it doesn’t take much looking around to find problems that can’t be efficiently done on them, but that can be done using more “natural” devices.  For example, solutions to “harmonic functions with Dirichlet boundary conditions” (soap films) can be fiendishly difficult to calculate in general.  The huge range of possible shapes that the solutions can take mean that often even the most reasonable computer program (capable of running in any reasonable time) will fail to find all the solutions.

Part of Richard Courant's face demonstrating a fancy math calculation using soapy water and wires.

So, rather than burning through miles of chalkboards and swimming pools of coffee, you can bend wires to fit the boundary conditions, dip them in soapy water, and see what you get.  One of the advantages, not generally mentioned in the literature, is that playing with bubbles is fun.
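For contrast, here’s a minimal sketch of what the chalkboard version looks like: for a film that’s nearly flat, the height satisfies Laplace’s equation, and Jacobi relaxation grinds toward the answer that the soapy water finds instantly.  The boundary values standing in for the wire frame are made up:

import numpy as np

n = 50
h = np.zeros((n, n))
h[0, :] = 1.0    # one edge of the "wire frame" held high; the rest held at 0

for _ in range(5000):
    # Replace each interior point with the average of its four neighbors;
    # the fixed edges are the Dirichlet boundary conditions.
    h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1] +
                            h[1:-1, :-2] + h[1:-1, 2:])

print(h[n // 2, n // 2])   # the film height at the center of the frame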

Today we’re seeing the advent of a new type of computer, the quantum computer, which is kinda-digital/kinda-analog.  Using quantum mechanical properties like superposition and entanglement, quantum computers can (or would, if we can get them off the ground) solve problems that would take even very powerful normal computers a tremendously long time to solve, like integer factorization.  “Long time” here means that the heat death of the universe becomes a concern.  Long time.

Aside from actual computers, you can think of the universe itself, in a… sideways, philosophical sense, as doing simulations of itself that we can use to understand it.  For example, one of the more common questions we get is along the lines of “how do scientists calculate the probability/energy of such-and-such chemical/nuclear reaction?”  There are certainly methods to do the calculations (have Schrödinger equation, will travel), but really, if you want to get it right (and often save time), the best way to do the calculation is to let nature do it.  That is, the best way to calculate atomic spectra, or how hot fire is, or how stable an isotope is, or whatever, is to go to the lab and just measure it.

Even cooler, a lot of optimization problems can be solved by looking at the biological world.  Evolution is, ideally, a process of optimization (though not always).  During the early development of sonar and radar there were (still are) a number of questions about what kind of “ping” would return the greatest amount of information about the target.  After a hell of a lot of effort it was found that the researchers had managed to re-create the sonar pings of several bat species.  Bats are still studied as examples of what the universe has already “calculated” about optimal sonar techniques.

You can usually find a solution through direct computation, but sometimes looking around works just as well.
