Field of Science

The Boundary Between Knowledge and Belief

The director of CERN, Rolf-Dieter Heuer, talks to European Magazine.

Rolf-Dieter Heuer from European Magazine
It’s a quest for knowledge. The questions we are examining have been asked since the beginning of mankind. We are humans, we want to understand the world around us. How did things begin? How did the universe develop? That distinguishes us from other creatures. If you go outside at night and look up into the sky, you cannot help but dream. Your fantasy develops, you are naturally drawn to these questions about being and existence. And at the same time, our work has very practical consequences. When antimatter was introduced into the theoretical framework 83 years ago, nobody thought that this had any practical relevance. Yet today, the concept is used in hospitals around the world on a daily basis. Positron Emission Tomography (PET) is based on the positron, which is the anti-particle to the electron. Or take the internet. The idea of a worldwide network started in 1989 here at CERN, because we needed that kind of digital network for our scientific work. That’s the beauty of our research: We gain knowledge but we also gain the potential for technological innovation.

More here.

The First Quantum Computer

In a nondescript, mirrored business park outside Vancouver, with views of snow-capped mountains in the distance, some very special work is being done. The company is D-Wave, the quantum computing company, and its mission is to build a computer that will solve humanity's grandest challenges.

D-Wave aims to develop the first quantum computer in the world; perhaps they already have. The advent of quantum computers would be a sea change, allowing for the breaking of cryptography, better artificial intelligence, and exponential increases in computing speed for certain applications. The idea of quantum computers has been bubbling since Richard Feynman first proposed that the best way to simulate quantum phenomena would be with quantum systems themselves, but it has proven exceedingly difficult to engineer a computer that can harness the possibilities of quantum information processing. D-Wave began barely a decade ago with a misstep that is the origin of its name. The company's first idea was to use yttrium barium copper oxide (YBCO), a charcoal-looking material with a superconducting transition temperature above the boiling point of liquid nitrogen, which is why YBCO is the standard science-lab demonstration of superconducting magnetic levitation. Ultimately YBCO's crystalline structure made it an imperfect material for the job, but the cloverleaf d-wave atomic orbital that lends YBCO its superconducting properties stuck as D-Wave's name. The company's vision did not change, but its approach did. They realized they would have to engineer and build most of the technology needed to create a quantum computer themselves. They even built their own superconducting electronics foundry to perform the electron beam lithography and metallic thin film evaporation processes necessary to create the qubit microchips at the heart of their machine.

I recently got to visit D-Wave, the factory of quantum dreams, for myself. The business park that D-Wave is in is so nondescript that we drove right by it at first. I was expecting lasers and other blinking lights, but instead our University of Washington rental van pulled into the wrong parking lot, which we narrowly reversed out of. In the van were several other quantum aficionados, students, and professors, mostly from computer science, who were curious about what a quantum computer actually looks like. I am going to cut the suspense and tell you now that a quantum computer looks like a really big black refrigerator, or maybe a small room. The chip at the heart of the room is cooled to a few millikelvin, colder than interstellar space, and that is where superconducting circuits count electric quantum sheep. The tour began with us milling around a conference room while our guide, a young scientist and engineer, held in his hand a wafer carrying hundreds of quantum processors. I took a picture; after I left that conference room they did not let me take any more.
wafer of D-Wave Rainier core quantum processors
Entering the laboratory, it suddenly dawned on me that this wasn't just a place for quantum dreams; it was real and observable. The entire notion of a quantum computer became more tangible. A quantum computer is a machine that uses quantum properties like entanglement to perform computations on data. The biggest similarity between a quantum computer and a regular computer is that both run algorithms to manipulate data. The data, or bits, of a quantum computer are known as qubits. A qubit is not limited to the values 0 or 1 as in a classical computer but can be in a superposition of those states simultaneously. Sometimes a quantum computer doesn't even give you the same answer to the exact same question. Weird. Perhaps the best way to conceive of a quantum computation is to imagine a computation in which each possible output has a positive or negative probability amplitude (a strange quantum idea) and the amplitudes for wrong answers cancel to zero while those for right answers are reinforced.
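None of this describes D-Wave's machine specifically; it is just the textbook gate-model picture. But as a minimal sketch of amplitudes interfering, here a single qubit is a two-component vector of amplitudes and a standard Hadamard gate (my choice of example, not anything D-Wave runs) is applied twice, so the amplitude of the "wrong" outcome cancels to zero:

```python
import numpy as np

# A qubit is a 2-component vector of complex amplitudes over the states |0> and |1>.
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate puts a qubit into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

superposed = H @ ket0          # amplitudes (1/sqrt(2), 1/sqrt(2))
interfered = H @ superposed    # applying H again makes the amplitudes interfere

# Measurement probabilities are the squared magnitudes of the amplitudes.
print("after one H :", np.abs(superposed) ** 2)   # [0.5 0.5], a 50/50 coin
print("after two Hs:", np.abs(interfered) ** 2)   # [1.0 0.0], the |1> amplitude cancelled
```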

The power of quantum computers is nicely understood within the theoretical framework of computational complexity theory. Say, for example, that I give you the number 4.60941636 × 10^18 and ask for its prime factors. If someone were to hand you the prime factors you could verify them very quickly, but what if I asked you to generate the prime factors for me? (I dare you. I have the answer. I challenge you. Actually this challenge is easy; a number of this size isn't that hard to factor, and a friend says they found a webpage that will do it. But the problem doesn't scale well to larger numbers.) The quintessential problem here is the P versus NP question, which asks whether every problem whose solution can be verified quickly can also be solved quickly. "Quickly" is defined as polynomial time, meaning that the running time of the algorithm scales as the number of inputs raised to some power. Computational complexity theory attempts to categorize different kinds of problems according to how fast a solution can be found as the size of the problem grows. A P class problem is one in which a solution can be found in polynomial time. An NP class problem is one in which a solution can be verified in polynomial time. So if I ask you for the prime factors of my number above, that is an NP problem: given the factors you could verify the answer quickly, but it would be very difficult to calculate them given only the number. It is an open question, but it appears likely that P is a proper subset of NP. This would mean that problems verifiable in polynomial time are not necessarily solvable in polynomial time. The issue is that for some very interesting real-world problems we could verify the answer if we stumbled upon it, but we won't be able to stumble upon it in a time shorter than the age of the universe with current computers and algorithms. What we know and what we merely think we know swim in a sea of confusion, but the popular opinion, and where people would place their wagers, is that P is not equal to NP.
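To make the verify-versus-find asymmetry concrete, here is a rough sketch using a small number of my own choosing (not the number above): verifying a proposed factorization is a single multiplication, while finding the factors by trial division is a loop over candidate divisors that grows rapidly with the size of the number, and becomes hopeless at cryptographic sizes.

```python
def verify(n, factors):
    """Verifying a factorization is cheap: multiply and compare."""
    product = 1
    for f in factors:
        product *= f
    return product == n

def find_factors(n):
    """Finding the factors by brute-force trial division: fine for small n,
    hopeless for the numbers used in real cryptography."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

n = 600851475143                      # a made-up example, small enough to factor quickly
factors = find_factors(n)             # the slow direction
print(factors, verify(n, factors))    # the fast direction: [71, 839, 1471, 6857] True
```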

Suddenly, with mystique and spooky action at a distance, quantum computing comes swooping in and claims to be able to solve some NP problems and all P problems very quickly. A general quantum computer would belong to the complexity class BQP. There is a grand question at hand: is BQP in NP? (More generally, is BQP contained anywhere in the polynomial hierarchy? The polynomial hierarchy is a complexity class that generalizes P and NP to a particular kind of idealized abstract computer with the ability to solve certain decision problems in a single step. See this paper on BQP and the polynomial hierarchy by Scott Aaronson, who is an outspoken critic of D-Wave.) At this time we cannot even claim to have evidence that BQP is not part of NP, but most scientists close to the problem think that BQP is not a subset of NP. Quantum computing researchers are trying to gather better evidence that quantum computers cannot solve NP-complete problems in polynomial time (if NP were a subset of BQP, then the polynomial hierarchy would collapse). A reasonable wager I would take is that P is a proper subset of BQP, and that BQP and NP each contain problems the other does not. None of this has been rigorously proved, but it is suspected to be true, and there are some NP problems, such as prime factorization and certain combinatorial problems, that are known to be in BQP.

There might be an elephant in the room here. The D-Wave architecture is almost certainly attacking an NP-complete problem, and reasonable logic says that quantum computers will solve P problems and some NP problems efficiently, but not NP-complete problems (this too is not proven, only suspected). An NP-complete problem is one for which the time it takes to compute the answer can reach into millions or billions of years even for moderately large instances of the problem. Thus we don't know whether the particular quantum computer D-Wave has built lets us do anything efficiently that we couldn't already do efficiently on a classical computer; it doesn't appear to be a BQP-class computer, so it cannot, for example, break prime-factorization cryptography. So, yes, it is a quantum machine, but we don't have any evidence that it is an interesting machine. At the same time, we don't have any evidence that it is an uninteresting machine. It is not general purpose enough to clearly be a big deal, nor is it so trivial as to be totally uninteresting.

The D-Wave lab was bigger than I expected, and it was at once more cluttered and more precise than I thought it would be. It turns out the entire enterprise of quantum computing follows this trend. There are a lot of factors they contend with, and on the tour I saw people dead focused, eyes on a microscope, executing precise wiring; coders working in pairs; theoreticians gesturing at a chaotic whiteboard; and even automated processes being carried out by computers with appropriately futuristic-looking displays. The engineering problems D-Wave faces include circuit design, fabrication, cryogenics, magnetic shielding, and so on. There is too much to discuss here, so I will focus on what I think are scientifically the two most interesting parts of the D-Wave quantum computer: the qubit physics and the quantum algorithm they implement. In fact these two parts of their computer are deeply intertwined.

In the image above is a wafer of Rainier core superconducting microchips. The chips are built to exacting specifications and placed at the center of the D-Wave quantum computer, isolated from external noise such as magnetic fields and heat. In the quantum world heat is noise, so the chips are kept at a temperature of a few millikelvin to preserve the quantum properties of the system. On each chip are 128 superconducting flux qubits. The qubit is the quantum of information with which this computer works. There are various ways to create a qubit, such as quantum dots, photons, and electrons, but D-Wave has gone with the flux qubit design for engineering reasons.

A flux qubit is a micrometer-sized loop of superconducting material (in this case niobium) in which a current circulates either clockwise or counterclockwise in a quantized manner, so that the loop is either in a spin up (that is, +1 or ↑) or a spin down (that is, -1 or ↓) state. There is a potential energy barrier that keeps the loop from spontaneously flipping its spin (or current circulation direction), and this barrier can be modulated through various control schemes. D-Wave controls these loops using compound Josephson junctions and SQUIDs with their own proprietary techniques, borrowing heavily from decades of advances in solid state physics.

Perhaps even more important than the qubit itself is the architecture and the algorithm implemented by the computer. D-Wave uses a quantum adiabatic algorithm based on the Ising model. When I realized that their algorithm was based on the Ising model I couldn't help but marvel at its powerful simplicity. The Ising model is a statistical mechanics model of ferromagnetism in which the atoms (vertices or variables) in a metal (crystal lattice or graph) are discrete variables that take spin up or spin down values, and each spin interacts with its nearest neighbors. It is a simple model that leads to beautiful complexity (for example, see this article on the Ising model here), especially when the interaction of each spin with its neighbors can be finely controlled or the connectivity of the vertices can be varied. The Ising model is easily extended to more abstract problems. For example, we could connect every single vertex to every other vertex; it wouldn't look like a crystalline structure any more, but it makes sense on paper or with wires on a chip.
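To pin down what "spins on a graph" means computationally, here is a minimal sketch (a made-up three-spin graph and made-up weights, not D-Wave's chip layout) of the Ising energy of one spin configuration:

```python
# Toy Ising model: spins live on the nodes of a graph, and each coupled pair
# (i, j) contributes J[i, j] * s[i] * s[j] to the energy.
couplings = {(0, 1): -1.0, (1, 2): +0.5, (0, 2): -0.3}  # made-up graph and weights

def ising_energy(spins, couplings):
    """Energy of a configuration of +/-1 spins under pairwise couplings."""
    return sum(J * spins[i] * spins[j] for (i, j), J in couplings.items())

print(ising_energy([+1, +1, -1], couplings))  # -1.0 - 0.5 + 0.3 = -1.2
```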

The quantum adiabatic algorithm borrows ideas from physics, such as the process of annealing and the spin states of the Ising model, to solve a generalized optimization problem. During my tour of D-Wave we kept returning to the algorithm and what was possible, and the whole concept slowly crystallized for me, but it is not immediately obvious why they designed the computer the way they did, because their implementation does not create a universal quantum computer. Why the quantum adiabatic algorithm?
  • Quantum annealing is a physically motivated method for quantum computing that is not easily thwarted by thermodynamics or decoherence.
  • Real-world optimization problems can be modeled using the Ising spin glass, and the hardware mirrors this.
  • More complicated architectures, such as a universal adiabatic quantum computer, will borrow from the quantum annealing approach.
D-Wave has not created a general purpose quantum computer. They have created a quantum computer that runs the adiabatic quantum algorithm, or equivalently solves an optimization problem. They use quantum annealing to find the global minimum of a given objective function with the form of... Wait, wait, let me have a kitten tell you instead (math warning for the next two paragraphs):
a qubit kitten tells you about the adiabatic quantum algorithm
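The equation itself lived in that image; reconstructed here from the symbol definitions in the next paragraph, it is the standard Ising objective (my transcription, assuming the usual sign conventions):

```latex
E(\mathbf{s}) = \sum_{i} h_i\, s_i \;+\; \sum_{i<j} J_{ij}\, s_i s_j,
\qquad s_i \in \{-1, +1\}
```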
Here E is the value to be minimized over the total system state s, subject to the couplings J_ij (where J_ij < 1) acting between each pair of elements s_i and s_j (where every s = ±1). Each element s_i is weighted by the value h_i (where h_i > -1). Which pairs ij count as nearest neighbors is determined by the connections between vertices in a physics application, or by the graph architecture of actual physical connections on the D-Wave microchip. The coupling between i and j is determined by J_ij, so J represents your knowledge of how each component of the system interacts with its neighbors. From here we can extend this minimization directly to the physical implementation with quantum flux qubits.
In this new form the optimization problem is written as a Hamiltonian, which determines the interactions and evolution of the system. The variables are promoted to operators, s_i → σ^z_i, where σ^z_i is the Pauli matrix at site i for a spin-1/2 qubit. A transverse field term drives transitions up and down between the two spin states ↑ and ↓ of each spin, and K_ij is the weighting that defines the interaction between the qubits. The problem is to anneal the system as closely as possible to its classical ground state with the desired K_ij.
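The time-dependent Hamiltonian is not written out in the post; a generic textbook form for this kind of annealer (my notation, reusing h_i and K_ij from above, with Δ_i the transverse-field strengths and A(t), B(t) the annealing schedules, all assumptions rather than D-Wave's exact expression) looks roughly like:

```latex
H(t) = -A(t) \sum_i \Delta_i\, \sigma^x_i
       \;+\; B(t) \left[ \sum_i h_i\, \sigma^z_i
       \;+\; \sum_{i<j} K_{ij}\, \sigma^z_i \sigma^z_j \right]
```

At the start A(t) dominates and the ground state is an easy-to-prepare superposition; at the end B(t) dominates and the ground state encodes the answer to the optimization problem.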


The D-Wave computer runs the quantum adiabatic algorithm by initializing the spins of the flux qubits in the ground state of a simple Hamiltonian. Initially the potential well for the spin of each qubit is U shaped; the ground state of the qubits in this configuration is a superposition of the |↑⟩ and |↓⟩ flux basis states. Then the qubits are adiabatically, or slowly, evolved to the specific Hamiltonian that encodes the optimization problem to be solved; the potential is evolved into a double-welled configuration, at which point the |↑⟩ and |↓⟩ states become the dominant basis. Actually, the final configuration is not an exactly symmetric double well; there is some relative energy difference between the two states which biases the machine toward the encoded problem. Evolving the Hamiltonian can be thought of as modifying the energy barrier between the spin up and spin down states for each flux qubit. In a real system each potential well has multiple possible energy levels besides the lowest energy state, which is where the ideal calculation is performed. According to the adiabatic theorem the system remains in the ground state, so that at the end the state of the system describes the solution to the problem. However, in a real machine noise, such as ambient local heat, can still disturb the system out of the ground state. A key advantage of the D-Wave approach is robustness to noise in many situations. The slower the Hamiltonian is evolved, the more closely the process adheres to the ideal adiabatic calculation: performing the calculation more slowly decreases the chance of jumping out of the ground state. Adding more qubits makes the energy gap at the tipping point smaller, which is why engineering a machine with more qubits is hard. Interestingly, because quantum machines are inherently statistical, each computation carries uncertainties, which can be reduced either by running each calculation more slowly (and we are talking a few microseconds here) or by running the same calculation many times and seeing which answers come up. As it turns out, it is usually faster to run the calculation many times and compare answers than to run one long calculation.
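To see the "slower is better" tradeoff in a toy setting, here is a minimal sketch. It is a generic two-level avoided-crossing sweep (a Landau-Zener toy model), not a simulation of D-Wave's hardware, and the gap size and sweep times are made-up numbers; the point is just that the probability of finishing in the ground state climbs as the sweep slows down.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def hamiltonian(s, gap=0.1):
    """Two-level 'annealing' Hamiltonian with an avoided crossing at s = 0.5.
    The minimum energy gap is 2 * gap."""
    return (2 * s - 1) * sz + gap * sx

def ground_state(H):
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, 0]                     # eigenvector of the lowest eigenvalue

def anneal(total_time, steps=2000):
    """Sweep s from 0 to 1 over total_time and return the probability of
    ending in the ground state of the final Hamiltonian."""
    dt = total_time / steps
    psi = ground_state(hamiltonian(0.0))  # start in the initial ground state
    for k in range(steps):
        s = (k + 0.5) / steps
        psi = expm(-1j * hamiltonian(s) * dt) @ psi
    return abs(np.vdot(ground_state(hamiltonian(1.0)), psi)) ** 2

for T in (1, 10, 100, 1000):
    print(f"sweep time {T:5d}: ground-state probability {anneal(T):.3f}")
```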

The theoretical minimization problem being solved is best understood separately from what the actual qubits are doing. Over at the D-Wave blog, Hacking the Multiverse, they liken the optimization problem to finding the best setting for a bunch of weighted light switches. Each light switch can be either on or off, can have a positive or negative weighting (the h term above), and can have a dependency on any other switch in the system, determined by the J_ij term. It turns out to be a really hard problem: for just 100 switches there are 2^100 possible ways to arrange the switches.
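Here is a rough sketch of that light-switch picture with made-up weights. Brute force is instant for four switches, but the number of settings doubles with every switch you add, which is what makes 100 switches hopeless for exhaustive search:

```python
from itertools import product

# Made-up weights and pairwise dependencies for a tiny "light switch" problem.
h = [0.5, -1.0, 0.25, -0.75]                     # per-switch weightings
J = {(0, 1): -1.0, (1, 2): 0.5, (2, 3): -0.5}    # couplings between switches

def cost(switches):
    """Objective value for one on/off (+1/-1) setting of all the switches."""
    return (sum(h[i] * s for i, s in enumerate(switches)) +
            sum(Jij * switches[i] * switches[j] for (i, j), Jij in J.items()))

# Brute force: fine for 4 switches (16 settings), unthinkable for 100 (2**100).
best = min(product([-1, +1], repeat=len(h)), key=cost)
print(best, cost(best))
print("settings to check for 100 switches:", 2**100)
```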

Hello multiverse
Traditionally the first program a coder writes in a new language is a simple print statement that says Hello world. On a quantum computer the first program you write says Hello multiverse! You could write this program on a D-Wave. Yes, you really can, because you can go out and buy one. Lockheed Martin bought one earlier this year for ten million dollars. The detractors of D-Wave would say you are not getting a real quantum computer, but then why did Lockheed Martin buy one? It is legitimate to ask: is D-Wave the first true quantum computer? This of course depends on your definition of a quantum computer. The answer is probably no if you want a universal quantum computer (which would belong to the BQP complexity class discussed earlier). Probably no here means that reasonable computer scientists studying quantum computers have excellent reason to believe the answer is no, but they lack rigorous mathematical proof. On the other hand, if you are looking for a computer that exploits quantum effects to implement a special purpose quantum algorithm, then I think you can safely say, yes, this is a quantum computer. I am just a naive astronomer though, so don't take my word for it. Let me clarify and say that just because a computer exploits quantum mechanics does not make it a quantum computer. All microchips today are small enough that the designers know something about quantum mechanics, and maybe they even have to account for it in the chip's design, but crucially the compilers and the code written for the machine have no knowledge of the quantum mechanics; the algorithms run on the machine assume nothing about quantum mechanics in our universe. A real quantum computer, however, would obviously be programmed according to the rules of quantum mechanics, and indeed the D-Wave computer executes an algorithm that explicitly takes quantum mechanics into account. Further, whether or not the D-Wave computer satisfies computer scientists' definition of a quantum computer is a moot point compared to asking whether it is useful. Currently D-Wave is running experiments to show that the speed scaling of their machine as a function of input size is, hopefully, better than that of classical computers and algorithms. In the future they will have to show with double-blind experiments that their machine scales better than classical machines. If they can execute calculations in a few microseconds that take classical computers decades, I don't care if you call it the one true quantum computer or an oracle, I will just want one.


References


Harris, R., Johansson, J., Berkley, A., Johnson, M., Lanting, T., Han, S., Bunyk, P., Ladizinsky, E., Oh, T., Perminov, I., Tolkacheva, E., Uchaikin, S., Chapple, E., Enderud, C., Rich, C., Thom, M., Wang, J., Wilson, B., & Rose, G. (2010). Experimental demonstration of a robust and scalable flux qubit. Physical Review B, 81(13). DOI: 10.1103/PhysRevB.81.134510

Harris, R., Johnson, M., Han, S., Berkley, A., Johansson, J., Bunyk, P., Ladizinsky, E., Govorkov, S., Thom, M., Uchaikin, S., Bumble, B., Fung, A., Kaul, A., Kleinsasser, A., Amin, M., & Averin, D. (2008). Probing Noise in Flux Qubits via Macroscopic Resonant Tunneling. Physical Review Letters, 101(11). DOI: 10.1103/PhysRevLett.101.117003

Superluminal claims require super evidence

Neutrinos, those mercurial smidgens of the particle world, travel faster than the speed of light. That's the claim the OPERA collaboration makes in a paper subtly titled: Measurement of the neutrino velocity with the OPERA detector in the CNGS beam. This is a big claim that could have implications for particle physics and time travel. It has made headlines everywhere, but what does it all mean? Let's talk about neutrinos.
faster than the speed of light
First, let me say that if neutrinos do travel faster than the speed of light then physicists have a lot of explaining to do. The repercussions of faster than light travel for any particle (also known as superluminal travel) would be revolutionary. So revolutionary that most physicists I spoke to this past week at a conference did not take the news too seriously: it was too extraordinary to comment on without further thought and details. The OPERA collaboration is actually very brave for putting this paper out there (i.e. on the arXiv) and asking for outside analysis. They don't even pretend to begin to consider the ramifications. The last line of the paper sums up their position:
We deliberately do not attempt any theoretical or phenomenological interpretation of the results.
So let me ignore the wild theoretical implications and discussions of tachyons and just talk about the experiment and an astrophysical constraint on the velocity of neutrinos.

Why are physicists so confident that neutrinos travel at the speed of light? Well, start with the fact that no credible data set has ever shown anything, be it particle or information, traveling faster than the speed of light. Given previous observations it is hard to understand how neutrinos could be any different. Of course neutrinos are very difficult to measure because they interact very weakly with regular matter. Consider that 60 billion neutrinos generated in the core of the sun pass through your pinky each second and essentially none of them interact with you (nor with the Earth; they pass through you day and night).

The creation and detection of neutrinos is complicated. The process begins for the OPERA experiment over at CERN, where the Super Proton Synchrotron (SPS) creates high energy (400 GeV/c) protons that collide with a graphite target, producing pions and kaons which decay into muons and muon neutrinos. The neutrinos coming out of the SPS are almost pure muon type neutrinos with an average energy of 17 GeV. The neutrinos travel through the solid Earth, unimpeded along a straight path, into a cavern below a mountain, Gran Sasso, in Italy. The OPERA neutrino experiment was designed to look for the direct appearance of muon to tau neutrinos (νμ → ντ), but its anomalous finding on the velocity of neutrinos is much more interesting.

The OPERA experiment found that the velocity of the neutrinos was about 0.00248% faster than the speed of light. This measurement was made by precisely determining the distance traveled by the neutrinos and their time of travel, and the OPERA collaboration did a lot of work to measure both parameters precisely. They timed the arrival of neutrinos at their detector using atomic clocks, with a precision of a few nanoseconds. Wow, that is quick. Light only travels about a foot in a single nanosecond.

In order to measure the distance between CERN and Gran Sasso the OPERA team used very precise GPS systems. For example, they noticed a 2009 earthquake in that area produced a sudden displacement of 7 centimeters. So the exact distance the neutrinos traveled was 730534.61±0.20 meters (or about 2.44 light milliseconds); however, some have suggested that the GPS based positioning they used has errors introduced by atmospheric refraction. Intriguing possibility.
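As a back-of-the-envelope check on those numbers (the roughly 60 ns figure below is derived from the stated distance and the 0.00248% excess, not quoted from the post):

```python
c = 299_792_458.0           # speed of light, m/s
distance = 730_534.61       # CERN to Gran Sasso baseline, m
excess = 2.48e-5            # fractional speed excess reported by OPERA

light_tof = distance / c                 # time of flight at exactly c
early = light_tof * excess               # how early the neutrinos would arrive

print(f"time of flight at c : {light_tof * 1e3:.4f} ms")   # about 2.44 ms
print(f"early arrival       : {early * 1e9:.0f} ns")       # about 60 ns
```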

In order to measure the time, what OPERA calls the time of flight measurement, they used cesium atomic clocks. But the time cannot be precisely measured at the single interaction level, since the protons from the SPS source have a 10.5 microsecond extraction window. They had to look at timing distributions, from which the most likely creation time for a burst of neutrinos could be inferred to higher precision. Additionally, the actual moment at which a meson produces a neutrino in the decay tunnel is unknown, but that introduces negligible inaccuracy in the time of flight measurement. So, these distance and time measurements are really important, but really subtle. I recommend reading the paper if you are a glutton for punishment.
There is a very interesting constraint on the speed of neutrinos that comes from astronomy: the neutrinos and photons released during the death of a star. Supernova 1987A (SN 1987A) exploded 168,000 years ago, when fusion in the core of an old star ceased and the weight of the outer layers of the star caused the core to collapse. The protons in the atoms of the core merged with the electrons present and converted themselves into neutrons and electron neutrinos. A mega amount of electron neutrinos, about 10^58, were generated, and they began their epic journey to Earth. Some of these neutrinos arrived at Earth one morning in February of 1987 in a burst lasting less than 13 seconds. Of those many neutrinos, two dozen interacted with detectors on Earth.
Astronomers observed light from SN 1987A just three hours after the neutrinos arrived. Just such a delay is expected, as the fireball of the supernova had to have some time to expand and become transparent to photons, whereas neutrinos could escape much sooner. The explosion occurred at a known distance out in the Large Magellanic Cloud. This distant explosion created photons and neutrinos in a timed race to the Earth. With these measurements in hand (the distance to the supernova and the arrival time of the photons compared to the neutrinos) we can determine the speed of neutrinos from SN 1987A.

The accuracy and precision of measurements from SN 1987A are actually much greater than measurements taken at Gran Sasso, despite the three hour gap between neutrino and photon arrival. It comes down to the fact that the distance between Earth and SN 1987A is about 10^15 times larger than the distance between CERN and Gran Sasso. This means that the timing of the SN 1987A signals can be extremely imprecise and the resulting velocity constraint will still be much tighter than the OPERA measurement.

If the neutrinos from supernova 1987A had been traveling as fast as the neutrinos detected at Gran Sasso they would have arrived about four years sooner than the light from SN 1987A.
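That four-year figure follows directly from the numbers already quoted; a quick sanity check (168,000 years is the light travel time given above):

```python
travel_time_years = 168_000      # light travel time from SN 1987A, years
excess = 2.48e-5                 # OPERA's reported fractional speed excess

head_start = travel_time_years * excess
print(f"neutrinos would lead the light by about {head_start:.1f} years")  # roughly 4 years
```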

This supernova constraint on the velocity of neutrinos is very nice, but it doesn't answer every question because the comparison may not be apples to apples. The OPERA neutrinos are muon type, not electron type. They are traveling through the Earth, not empty space. And they were much higher energy: the neutrinos from SN 1987A were only about 10 MeV, more than a thousand times lower energy than the neutrinos in this study. Some may argue that higher energy neutrinos travel faster than lower energy neutrinos. However, a velocity-energy dependence should have stretched out the 13 second arrival window of the supernova neutrinos. Further, part of the OPERA collaboration's analysis involved splitting the data into two bins with mean energies of 13.9 and 42.9 GeV; a comparison between the two bins indicated no energy dependence of the velocity. Thus, while it may still be true that GeV neutrinos move faster than MeV neutrinos, the theoretical wiggle room is shrinking.

This experiment may be a signal of new physics or a case of systematic errors. Yet, even physicists who have developed theories that allow for superluminal velocities are doubtful so I would not bet on proof of hidden extra dimensions or time travel to come from this experiment. Much more extraordinary evidence is necessary to confirm such an extraordinary claim as breaking the speed limit of our Universe.

Turtles all the way down

The beginning was heralded by an elephant's trumpet.

The universe is carried on the back of an ancient turtle.

There were once ten suns embodied by crows. All but one crow was shot by an archer.

The moon is a decapitated head. Her face is painted with bells.

The stars are your ancestors eyes worth remembering.

In time you too will have nine tails and be older and wiser.

All the things which you do not know are vague. Drift clouds.

Having come so far is a matter of vagueness.
I wrote this poem because even modern cosmology faces infinite regression paradoxes with respect to the initial impetus of the Universe. The various creation stories independently formed in different cultures create some stunning mental images for me. The funniest idea to me is that the Universe is resting on the back of a giant turtle. What is the turtle resting on? Why, it's turtles all the way down. Oh, and I almost forgot: the best blog posts always have a picture, so here is a picture of a turtle.

You've been Westinghoused Mr. Edison

Recently, while glancing through an old physics text, I found a line I had underlined, Westinghouse Electric Corporation, and I remembered a little phrase that I used to use with other physics students. The phrase was: you've been Westinghoused. Let me explain. There is a curious episode in history known as the war of the currents, wherein the early pioneers of electricity were trying to commercialize the transmission of electricity. Nikola Tesla, with the financing of George Westinghouse, backed alternating current (AC) against Thomas Edison, who backed direct current (DC). Edison tried to discredit the idea of AC transmission by showing how dangerous it was, attempting shenanigans like electrocuting an elephant in public, but in the end practicality and economics prevailed. AC transmission is much more viable than DC transmission because of the pure physics: with DC transmission, to get adequate power delivered, either the wires would have to be copper as thick as your arm or you would need power stations every block or so. It was probably a combination of physics and the shrewd business sense of Westinghouse that led to Edison losing the war of the currents. This history, like the story of Bohr and Heisenberg, has interesting characters and a certain mystique that lends itself to historical plays and documentaries.

Tesla was a modern Prometheus. Some say that history overlooked Tesla; however, there is a current (pun intended) revival of interest in Nikola Tesla, if not always for his science, then for his eccentric personality. This documentary about Tesla covers his life and work. The part about the war of the currents begins at 18:35.

Now, as Edison fought against AC he tried to be really clever: he wanted to brand death by electrocution as being Westinghoused. However, Edison's electric empire faded, and history summarily shows that he was bested by Tesla and Westinghouse. Scientists are a competitive bunch, so I propose that when one colleague bests another in an academic pursuit, we proclaim that the defeated has been Westinghoused. It isn't the worst thing to be Westinghoused; it just means you were bested in that pursuit. Edison was a great inventor and is still famous to this day, but he surely got Westinghoused.

Sailing

There was an amazing article up on Wired today about the America's Cup. It reminded me of just how cool competitive sailing is. I wrote about sailing upwind in 2009 before the last America's Cup race, and I mentioned a revolutionary solid wing multihull boat created by team Oracle. That boat was in fact as fast as promised; it won the race, and by doing so team Oracle won the right to dictate the rules of the next America's Cup. What they did was create the America's Cup World Series of standardized fixed wing catamaran sailing boats (you can read more about the entire thing in the Wired article). These boats are super fast and super intense. The America's Cup World Series is the water equivalent of Formula 1, but instead of crashes there are capsizes. Well, actually there are crashes too. Here is a hectic highlight reel of these boats racing in the first ever event a few days ago in Cascais, Portugal.
Modern sailing is a paradoxical mix of elements. The boats are designed with advanced knowledge of physics and constructed of carbon fiber, yet they are powered by the simplicity of the wind. I think there is an appeal to working with nature to accomplish work rather than fighting against it. Working with nature always seems to be the most graceful option. In space travel, rather than firing rockets to propel ships, it is advantageous to use gravitational assists by swinging by planets. And then of course there are solar sails in space too. The Japanese IKAROS spacecraft recently and successfully unfurled its sail in space and is now being pushed by photons on a unique journey. If you think about it, astronomy and sailing go together.

A Cubic Millimeter of Your Brain

Are there more connections in a cubic millimeter of your brain than there are stars in the Milky Way? We are going to answer that question in a moment, but first take a look at this image of hippocampal neurons in a mouse's brain. It is an actual color image from a transgenic mouse in which fluorescent protein variants are expressed quasi-randomly in different neurons. This kind of image is known as a brainbow; it is aesthetically awesome, and it may also be one way to empirically examine a cubic millimeter of the brain (neuron tomography).
Hippocampus brainbow
by Tamily Weissman, Harvard University
In reality, mapping even a single cubic millimeter of the brain is an extremely daunting task, but we can still answer my original question. First, I know that there are different kinds of neurons that vary in size, and that some neurons can have a soma (the big part containing the nucleus, from which the dendrites extend) spanning a millimeter in size. Thus if you picked a random cubic millimeter of brain you could run right into the heart of a neuron and find very few connections. Given this, we could very easily answer the question with a resounding no; however, that seems like an unsatisfyingly trite approach. So I looked up some numbers on how many neurons are in the brain, how many connections are in the brain, and how many stars are in the Milky Way. Let's answer the question using the 'average' number of connections per cubic millimeter.

How many neurons and connections are there in the brain? This is kind of a tricky question and I am not a neurobiologist, so I have gone to several resources for the answer. Sebastian Seung, Professor of Computational Neuroscience at MIT, says in a TED talk
your brain contains 100 billion neurons and 10,000 times as many connections
Stephen Smith, Professor of Molecular and Cellular Physiology at Stanford, says in a press release on brain imaging that
In a human, there are more than 125 trillion synapses just in the cerebral cortex alone
René Marois from the Center for Integrative and Cognitive Neurosciences at Vanderbilt Vision Research Center states in a recent paper [1]
The human brain is heralded for its staggering complexity and processing capacity: its hundred billion neurons and several hundred trillion synaptic connections can process and exchange prodigious amounts of information over a distributed neural network in the matter of milliseconds.
I have enough expert sources now to confidently say these experts agree that the human brain has some 100 billion neurons (10^11). The number of connections seems less settled, but it is at least several hundred trillion (10^14), as judged by Marois and Smith, and as much as 10^15, as judged by Seung.

The number of connections in the brain is tricky to define. We may define a synaptic connection as each place a neuron touches another neuron and a synapse is present. It doesn't seem to make sense to simply count incidental contact. Further, there is the question of whether we should count redundant contacts between neurons. We can obtain an upper bound on the number of connections in the brain by considering the case in which every neuron is connected to every other neuron. Coincidentally, the operation of connecting every node in a network with every other node is a process I am familiar with from cross correlating radio signals. Anyway, the equation we are looking for is N(N-1)/2, where N is the number of nodes in the network. Thus, for our N = 10^11 neurons the maximum number of non-redundant connections is about 10^22. This maximum bound is huge! But how huge is it really? Hilariously, while searching for an answer to my original question I found a message board pondering the grand statement
There are more connections in the brain than atoms in the Universe.
A really clever person pointed out that
Theoretically, if we took all the atoms in the universe; wouldn't that include the atoms within the brain?
People have this feeling that the number of connections between items can be much larger than the number of actual items in the collection, and while this intuition is true, the idea that there are more connections in the brain than there are atoms in the universe is absurd. Let's put it in perspective: a few grams of any substance, like water, is measured in moles, and a mole is a standard unit of measurement corresponding to 6.02 × 10^23 items. Thus even a drop of water contains more atoms than there are connections in the brain.
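For concreteness, here is a quick sketch of these big-number comparisons (the drop size of about 0.05 mL is my own assumption; the other numbers are the estimates quoted above):

```python
N = 10**11                              # neurons in the brain
max_connections = N * (N - 1) // 2      # all-to-all upper bound, about 5e21
actual_connections = 10**15             # high end of the estimates quoted above

avogadro = 6.02e23                      # items per mole
drop_moles = 0.05 / 18.0                # assumed 0.05 g drop of water (18 g/mol)
atoms_in_drop = drop_moles * avogadro * 3   # three atoms per H2O molecule

print(f"all-to-all upper bound  : {max_connections:.1e}")
print(f"estimated connections   : {actual_connections:.1e}")
print(f"atoms in one water drop : {atoms_in_drop:.1e}")   # dwarfs the real count
```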

Now we need to know how many neurons and connections are in an average cubic millimeter of the brain. How big is the brain? John S. Allen of the Department of Neurology at the University of Iowa stated in a recent paper [2] that
The mean total brain volumes found here (1,273.6 cc for men, and 1,131.1 cc for women) are very comparable to the results from other high-resolution MRI-volumetric studies.
We can take the volume of the brain as 1,000 cc as a low estimate (which will only overestimate the density of connections).

The final thing we need to know to answer the question at hand is the number of stars in the Milky Way. Like every other number we have been working with, it is rather uncertain. Even if we define a star as only those spheres of gas large enough to fuse hydrogen at some point in their lifetime, we don't know the answer because we can't see the multitudes of dim stars. There are probably at least 500 billion star-like objects in the Milky Way. Let's take 100 billion as the number to be conservative.

Finally, let's bring all the numbers together. One cubic millimeter is 1/1000 of a cubic centimeter and 1/1,000,000 (10^-6) of the entire volume of the brain. Scaling the total number of connections in the brain by that fraction (using the high estimate of 10^15 connections), we find that there are about 10^9 connections in a cubic millimeter of brain. Those 10^9 connections per cubic millimeter are two orders of magnitude fewer than a low estimate of the number of stars in the Milky Way. No, on average there are not more connections in a cubic millimeter of your brain than there are stars in the Milky Way.
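The arithmetic, spelled out using the post's round numbers:

```python
brain_volume_mm3 = 1000 * 1000          # 1,000 cc, and 1,000 mm^3 per cc
total_connections = 10**15              # high estimate quoted above
stars_in_milky_way = 10**11             # conservative estimate

connections_per_mm3 = total_connections / brain_volume_mm3
print(f"connections per mm^3 of brain: {connections_per_mm3:.0e}")   # about 1e9
print(f"stars in the Milky Way       : {stars_in_milky_way:.0e}")    # about 1e11
```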

My first response to this question was bullshit! This question (or rather statement) is made by David Eagleman here at a TEDx talk and here on the Colbert Report. Colbert also called out Eagleman when he dropped this factoid, but it didn't stop the interview. I have also contacted some actual neuroscientists to see what they thought of this statement, and they agree with me that it is not true. Maybe there is a special part of the brain with a particularly higher density of connections than the brain on average, but citing it would be misleading, like saying the density of the Milky Way is that of water because, you know, certain parts of the Milky Way are water. The better statement would be to say that there are more connections in the brain than there are stars in the Milky Way. As Colbert would say, I am putting you on notice, Eagleman.

While we are on the subject I want to mention my favorite talk about the brain which mixes just the right amount of wonder and fact. It is the TED talk I mentioned earlier by Sebastian Seung on what he calls the connectome - the network of connections in your brain between neurons which physically dictates how you think. In the video he discusses another volume tomography technique in the brain using a cube of mouse brain tissue just 6 microns on a side. It is another great visualization for what is actually in a cubic millimeter of your brain.

References


[1] Marois, R., & Ivanoff, J. (2005). Capacity limits of information processing in the brain. Trends in Cognitive Sciences, 9(6), 296-305. DOI: 10.1016/j.tics.2005.04.010

[2] Allen, J., Damasio, H., & Grabowski, T. (2002). Normal neuroanatomical variation in the human brain: An MRI-volumetric study. American Journal of Physical Anthropology, 118(4), 341-358. DOI: 10.1002/ajpa.10092

On Replications

Repetition is ubiquitous and has many different meanings in education, art, literature, science, and life. Ideas replicate and mutate; memes spread through culture seamlessly. Manufactured goods are produced to be as nearly identical as possible: deviations from the mold are discarded and parts are interchangeable. Digital data is almost limitlessly replicable; any data or idea committed to the digital world is perfectly copied (sparing the occasional flipped bit) until it is intentionally modified. This characteristic of digital ideas presents a unique challenge for creators of content, distributors, and bored people on the internet. And of course animals and plants on Earth have the ability to replicate themselves with minor variations. What do we make of all of this?

I am keen on the intersection of art and science on this matter. I like making collages and have highlighted repeated images before with 35 images of space helmet reflections and 100 images of macchiatos. Through repetition and distortion images may be amplified or diminished. It depends on perspective. Generally in artistic endeavors, as in life, the slight variations of a repeated theme are aesthetically pleasing. On the other hand technical work such as engineering, data analysis, or manufacturing requires precise replication. I work in radio astronomy where each radio telescope in the array is nearly identical and the need for precision trumps all other considerations. I find that randomness is never particularly interesting, but neither is absolute order. Somewhere in between these extremes we have something really beautiful.