Field of Science

The Boundary Between Knowledge and Belief

The director of CERN, Rolf-Dieter Heuer, talks to European Magazine.

It’s a quest for knowledge. The questions we are examining have been asked since the beginning of mankind. We are humans, we want to understand the world around us. How did things begin? How did the universe develop? That distinguishes us from other creatures. If you go outside at night and look up into the sky, you cannot help but dream. Your fantasy develops, you are naturally drawn to these questions about being and existence. And at the same time, our work has very practical consequences. When antimatter was introduced into the theoretical framework 83 years ago, nobody thought that this had any practical relevance. Yet today, the concept is used in hospitals around the world on a daily basis. Positron Emission Tomography (PET) is based on the positron, which is the anti-particle to the electron. Or take the internet. The idea of a worldwide network started in 1989 here at CERN, because we needed that kind of digital network for our scientific work. That’s the beauty of our research: We gain knowledge but we also gain the potential for technological innovation.


The First Quantum Computer

In a nondescript office park outside Vancouver, with views of snow-capped mountains in the distance, sits a mirrored building where very special work is being done. The company is D-Wave, the quantum computing company, and its mission is to build a computer that will solve humanity's grandest challenges.

D-Wave aims to develop the first quantum computer in the world; perhaps they already have. The advent of quantum computers would be a sea change, allowing the breaking of cryptography, better artificial intelligence, and exponential increases in computing speed for certain applications. The idea of quantum computers has been bubbling since Richard Feynman first proposed that the best way to simulate quantum phenomena would be with quantum systems themselves, but it has been exceedingly difficult to engineer a computer that can manipulate the possibilities of quantum information processing. D-Wave began a little over a decade ago with a misstep which is the origin of their name. Their first idea would have used yttrium barium copper oxide (YBCO), a charcoal-looking material with a superconducting transition temperature above the boiling point of liquid nitrogen; this is why YBCO is the standard science-lab demonstration of superconducting magnetic levitation. Ultimately YBCO's crystalline structure proved imperfect for the task, but the cloverleaf d-wave atomic orbital that lends YBCO its superconducting properties stuck as D-Wave's name. The vision of D-Wave did not change, but their approach did. They realized they would have to engineer and build the majority of the technology necessary to create a quantum computer themselves. They even built their own superconducting electronics foundry to perform the electron beam lithography and metallic thin-film evaporation processes necessary to create the qubit microchips at the heart of their machine.

I recently got to visit D-Wave, the factory of quantum dreams, for myself. The business park D-Wave occupies is so nondescript that we drove right by it at first. I was expecting lasers and other blinking lights; instead our rented University of Washington van pulled into the wrong parking lot, which we narrowly reversed out of. In the van were several other quantum aficionados, students, and professors, mostly from computer science, who were curious about what a quantum computer actually looks like. I am going to cut the suspense and tell you now that a quantum computer looks like a really big black refrigerator, or maybe a small room. The chip at the heart of the room is cooled to a few millikelvin, colder than interstellar space, and that is where superconducting circuits count electric quantum sheep. The tour began with us milling around a conference room while our guide, a young scientist and engineer, held in his hand a wafer of hundreds of quantum processors. I took a picture; after we left that conference room they did not let me take any more.
wafer of D-Wave Rainier core quantum processors
Entering the laboratory it suddenly dawned on me that this wasn't just a place for quantum dreams; it was real and observable. The entire notion of a quantum computer became more tangible. A quantum computer is a machine which uses quantum properties like entanglement to perform computations on data. The biggest similarity between a quantum computer and a regular computer is that both perform algorithms to manipulate data. The data, or bits, of a quantum computer are known as qubits. A qubit is not limited to the values of 0 or 1 as in a classical computer but can be in a superposition of these states simultaneously. Sometimes a quantum computer doesn't even give you the same answer to the exact same question. Weird. The best way to conceive of quantum computing may be to imagine a computation where each possible output has a positive or negative probability amplitude (a strange quantum idea there) and the computation is arranged so that the amplitudes for wrong answers cancel to zero while the amplitudes for right answers are reinforced.
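A toy sketch may make the amplitude-cancellation idea concrete. Below is a minimal, purely illustrative single-qubit simulation in Python (the function names are my own, not any real quantum library): applying the Hadamard transform twice sends the state |0⟩ to |1⟩ along two paths whose amplitudes, +1/2 and -1/2, cancel exactly.

```python
import math

def hadamard(amplitudes):
    """Apply the one-qubit Hadamard transform to a state [a0, a1]."""
    a0, a1 = amplitudes
    s = 1 / math.sqrt(2)
    return [s * (a0 + a1), s * (a0 - a1)]

# Start in |0>, apply H twice. The two paths to |1> carry amplitudes
# +1/2 and -1/2 and cancel exactly, so we always measure 0.
state = hadamard(hadamard([1.0, 0.0]))
probabilities = [a * a for a in state]
print(probabilities)  # ~[1.0, 0.0]: the "wrong" answer has cancelled away
```

A classical probabilistic machine could never do this: probabilities are never negative, so adding more paths to an outcome can only make it more likely, never less.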

The power of quantum computers is nicely understood within the theoretical framework of computational complexity theory. Say for example that I give you the number 4.60941636 × 10^18 and ask for its prime factors. If someone were to hand you the prime factors you could verify them very quickly, but what if I asked you to generate the prime factors for me? (I dare you. I have the answer. I challenge you. In actuality this challenge is easy; numbers of this size aren't that hard to factor, and a friend says they found a webpage that will do it. But the problem doesn't scale well to larger numbers.) The quintessential problem here is the P versus NP question, which asks whether a problem that can be verified quickly can also be solved quickly. Quickly is defined as polynomial time, meaning that the running time scales as the size of the input raised to some fixed power. Computational complexity theory basically attempts to categorize different kinds of problems depending on how fast a solution can be found as the size of the problem grows. A P class problem is one in which the solution can be found within polynomial time. An NP class problem is one in which the solution can be verified in polynomial time. So if I ask you for the prime factors of my number above, that is an NP problem: given the factors you could verify the answer quickly, but it would be very difficult to calculate them given only the number. Every P problem is also in NP; the open question is whether the containment is strict, and it appears likely that it is. This would mean that problems verifiable in polynomial time are not necessarily solvable in polynomial time. The issue is that for some very interesting problems in the real world we could verify the answer if we stumbled upon it, but we won't even be able to stumble upon the answer in a time shorter than the age of the universe with current computers and algorithms.
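To make the asymmetry concrete, here is a quick Python sketch, with a smaller semiprime standing in for the challenge number above: checking a proposed factorization takes a single multiplication, while finding it by naive trial division takes on the order of the square root of n steps.

```python
def verify(n, p, q):
    """Verifying a proposed factorization: one multiplication."""
    return p * q == n

def trial_factor(n):
    """Finding a factorization the slow way: O(sqrt(n)) trial divisions."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1  # n itself is prime

# A 12-digit semiprime stands in for the 19-digit challenge number.
n = 999_983 * 1_000_003
p, q = trial_factor(n)
print(p, q, verify(n, p, q))  # 999983 1000003 True
```

Trial division here grinds through about a million candidate divisors; doubling the number of digits in n squares that work, which is exactly the kind of scaling that makes factoring-based cryptography viable.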
What we know and what we merely think we know form a sea of confusion, but the popular opinion, and where people would place their wagers, is that P is not equal to NP.

Suddenly, with mystique and spooky action at a distance, quantum computing comes swooping in and claims to be able to solve some NP problems and all P problems very quickly. A general quantum computer would belong to the complexity class BQP. There is a grand question at hand: is BQP in NP? (More generally, is BQP contained anywhere in the polynomial hierarchy? The polynomial hierarchy is a complexity class which generalizes P and NP problems to a particular kind of idealized abstract computer with the ability to solve certain decision problems in a single step. See the paper on BQP and the Polynomial Hierarchy by Scott Aaronson, who is an outspoken critic of D-Wave.) At this time we cannot even claim to have evidence that BQP is not part of NP, but most scientists close to the problem think that BQP is not a subset of NP. Quantum computing researchers are trying to get better evidence that quantum computers cannot solve NP-complete problems in polynomial time (if NP were a subset of BQP then the polynomial hierarchy would collapse). A reasonable wager I would take is that P is a (proper) subset of BQP. This has not been rigorously proved, but it is suspected to be true, and there are NP problems, such as prime factorization and some combinatoric problems, which sit in BQP yet are believed to lie outside P.

There might be an elephant in the room here. The D-Wave architecture is almost certainly attacking an NP-complete problem, and reasonable logic says that quantum computers will solve P problems and some NP problems, but not NP-complete problems (this is also not proven, but suspected). An NP-complete problem is one in which the time it takes to compute the answer may stretch into millions or billions of years even for moderately large instances. Thus we don't know if this particular quantum computer D-Wave has built even allows us to do anything efficiently that we couldn't already do efficiently on a classical computer; it doesn't seem to be a BQP-class computer, so it cannot, for example, break prime-factorization cryptography. So, yes, it is a quantum machine, but we don't have any evidence it is an interesting machine. At the same time we don't have any evidence it is an uninteresting machine. It is not general purpose enough to be clearly a big deal, nor is it so trivial as to be totally uninteresting.

The D-Wave lab was bigger than I expected, and it was at once more cluttered and more precise than I thought it would be. It turns out the entire enterprise of quantum computing follows this trend. There are a lot of factors they contend with, and on the tour I saw people with their eyes dead focused on a microscope executing precise wiring, coders working in pairs, theoreticians gesturing at a chaotic whiteboard, and even automated processes being carried out by computers with appropriately futuristic-looking displays. The engineering problems D-Wave faces include circuit design, fabrication, cryogenics, magnetic shielding, and so on. There is too much to discuss here, so I will focus on what I think are scientifically the two most interesting parts of the D-Wave quantum computer: the qubit physics and the quantum algorithm they implement; in fact these two parts of their computer are deeply intertwined.

In the image above is a wafer of Rainier core superconducting microchips. The chips are built to exacting specifications and placed at the center of the D-Wave quantum computer in isolation from external noise such as magnetic fields and heat. In the quantum world heat is noise, so the chips are kept at a temperature of a few millikelvin to preserve the quantum properties of the system. On each chip are 128 superconducting flux qubits. The qubit is the quantum of information with which this computer works. There are various ways to create a qubit, such as quantum dots, photons, and electrons, but D-Wave has gone with the flux qubit design for engineering reasons.

A flux qubit is a micrometer-sized loop of superconducting material (in this case niobium) in which a current circulates either clockwise or counterclockwise in a quantized manner, so that the loop is either in a spin-up (that is, +1 or ↑) or a spin-down (that is, -1 or ↓) state. A potential energy barrier keeps the loop from spontaneously flipping its spin (that is, its direction of current circulation), and this barrier can be modulated through various control schemes. D-Wave controls these loops using compound Josephson junctions and SQUIDs with their own proprietary techniques, while borrowing heavily from decades of advancement in solid state physics.
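Near its operating point a flux qubit is commonly described by the textbook two-level Hamiltonian H = -Δσx - εσz, where Δ is the tunneling energy through the barrier and ε is the energy bias between the two circulation directions (this is the generic model, not D-Wave's proprietary parameters). A few lines of Python show that with zero bias the ground state is an equal superposition of ↑ and ↓:

```python
import math

def two_level_ground_state(delta, eps):
    """Ground state of H = -delta*sigma_x - eps*sigma_z in the (up, down)
    basis, i.e. the 2x2 real matrix [[-eps, -delta], [-delta, eps]]."""
    # Eigenvalues are +/- sqrt(delta^2 + eps^2); ground energy is the minus root.
    energy = -math.sqrt(delta ** 2 + eps ** 2)
    # Unnormalized eigenvector: from row one, (-eps - energy)*a - delta*b = 0.
    a, b = delta, -eps - energy
    norm = math.hypot(a, b)
    return energy, (a / norm, b / norm)

# With no bias (eps = 0) the qubit's ground state is an equal superposition
# of the two circulation directions.
energy, (amp_up, amp_down) = two_level_ground_state(delta=1.0, eps=0.0)
print(amp_up ** 2, amp_down ** 2)  # ~0.5 and ~0.5
```

Turning on the bias ε tilts the double well and makes one circulation direction dominate, which is exactly the knob the control circuitry exploits.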

Perhaps even more important than the qubit itself is the architecture and the algorithm implemented by the computer. They use a quantum adiabatic algorithm based on the Ising model. When I realized that their algorithm was based on the Ising model I couldn't help but marvel at the powerful simplicity. The Ising model is a statistical mechanics model of ferromagnetism where the atoms (vertices or variables) in a metal (crystal lattice or graph) are discrete variables that take on spin-up or spin-down values, and each spin interacts with its nearest neighbors. It is a simple model that leads to beautiful complexity, especially when you allow the interaction of each spin with its neighbors to be finely controlled or when you allow the connectivity of the vertices to be varied. The Ising model is easily extended to more abstract problems. For example, we can connect every single vertex to every other vertex; it wouldn't look like a crystalline structure any more, but it makes sense on paper or with wires on a chip.
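As a minimal sketch (my own toy Python, with hand-picked numbers), the Ising energy of a configuration on an arbitrary graph is just a sum of per-spin biases and pairwise couplings. A fully connected triangle with positive couplings already shows frustration: no assignment can satisfy every bond at once.

```python
def ising_energy(spins, h, J):
    """E(s) = sum_i h[i]*s[i] + sum_(i,j) J[(i,j)]*s[i]*s[j], spins = +/-1."""
    energy = sum(h[i] * s for i, s in enumerate(spins))
    energy += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return energy

# Three mutually coupled spins with positive couplings: a frustrated
# triangle, since at least one pair must always end up aligned.
h = [0.0, 0.0, 0.0]
J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}
print(ising_energy([+1, -1, +1], h, J))  # -1.0, the best achievable here
print(ising_energy([+1, +1, +1], h, J))  # 3.0, the worst
```

Frustration like this is why finding the lowest-energy spin assignment on a general graph is a genuinely hard optimization problem rather than a simple relaxation.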

The quantum adiabatic algorithm borrows ideas from physics, such as the process of annealing and spin states in the Ising model, to solve a generalized optimization problem. During my tour of D-Wave we kept talking about the algorithm and what was possible, and the whole concept slowly crystallized for me, but it is not immediately obvious why they designed the computer the way they did, because their implementation would not create a universal quantum computer. Why the quantum adiabatic algorithm?
  • Quantum annealing is a physically motivated method for quantum computation that is not easily thwarted by thermodynamics or decoherence.
  • Real-world optimization problems can be modeled using the Ising spin glass, and the hardware mirrors this.
  • More complicated architectures, such as a universal adiabatic quantum computer, will borrow from the quantum annealing approach.
D-Wave has not created a general purpose quantum computer. They have created a quantum computer which runs the adiabatic quantum algorithm, or equivalently solves an optimization problem. They use quantum annealing to find the global minimum of a given objective function with the form of... Wait, wait, let me have a kitten tell you instead (math warning in the next paragraphs):
a quantum kitten explains the adiabatic quantum algorithm:

E(s) = Σ_i h_i s_i + Σ_(i,j) J_ij s_i s_j
Here E is the value to be minimized over the total system state s, subject to the couplings J_ij (where J_ij < 1) acting between elements s_i and s_j (where every s = ±1). Each element s_i is weighted by the value h_i (where h_i > -1). Which ij pairs count as neighbors is determined by the connections between vertices in a physics application, or by the graph architecture of actual physical connections on the D-Wave microchip. The coupling between i and j is determined by J_ij, so J represents your knowledge of how each component of the system interacts with its neighbors. From here we extend this minimization parameterization to the physical implementation in quantum flux qubits.
In this new form the optimization problem is written as a Hamiltonian, which determines the interaction and evolution of the system. The variables are promoted to operators, s_i → σ^z_i, where σ^z_i is the Pauli z matrix at site i for a spin-1/2 qubit. A transverse field term drives transitions between the two spin states, ↑ and ↓, of each qubit, and K_ij is the weighting that defines the interaction between the qubits. The problem is to anneal the system as closely as possible into its classical ground state with the desired K_ij.

The D-Wave computer runs the quantum adiabatic algorithm by initializing the spins of the flux qubits in the ground state of a simple Hamiltonian. Initially the potential well for each qubit's spin is U-shaped; the ground state of the qubits in this configuration is a superposition of the |↑⟩ and |↓⟩ flux basis states. Then the qubits are adiabatically, or slowly, evolved toward the specific Hamiltonian which encodes the optimization problem to be solved; the potential evolves into a double-welled configuration, at which point the |↑⟩ and |↓⟩ states become the dominant basis. Strictly, the final configuration is not an exactly symmetric double well; there is some relative energy difference between the two states which biases the machine toward the encoded problem. Evolving the Hamiltonian can be thought of as modifying the energy barrier between the spin-up and spin-down states of each flux qubit. In a real system each potential well holds multiple energy levels besides the lowest-energy state, which is where the ideal calculation is performed. According to the adiabatic theorem the system remains in the ground state, so that at the end the state of the system describes the solution to the problem. However, in a real machine noise, such as ambient local heat, can still disturb the system out of the ground state. A key advantage of the D-Wave approach is robustness to noise in many situations. The slower the Hamiltonian is evolved, the more closely the process adheres to the ideal adiabatic calculation, and the smaller the chance of jumping out of the ground state. Adding more qubits makes the energy gap at the tipping point smaller, which is why engineering a machine with more qubits is hard.
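D-Wave's annealing is a physical quantum process, but its classical cousin, simulated annealing, conveys the flavor in a few lines (my own toy sketch, not D-Wave's software): start hot so the spins explore freely, cool slowly, and keep the best configuration seen, much as the real machine runs many anneals and keeps the best readout.

```python
import math
import random

def ising_energy(spins, h, J):
    e = sum(h[i] * s for i, s in enumerate(spins))
    return e + sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

def anneal(h, J, steps=5000, seed=0):
    """Classical simulated annealing on an Ising objective."""
    rng = random.Random(seed)
    n = len(h)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    best = spins[:]
    for step in range(steps):
        temp = 2.0 * (1 - step / steps) + 1e-3  # cool slowly from hot to ~0
        i = rng.randrange(n)
        flipped = spins[:]
        flipped[i] = -flipped[i]
        dE = ising_energy(flipped, h, J) - ising_energy(spins, h, J)
        # Always accept downhill moves; accept uphill moves less and less
        # often as the temperature drops (the Metropolis rule).
        if dE <= 0 or rng.random() < math.exp(-dE / temp):
            spins = flipped
        if ising_energy(spins, h, J) < ising_energy(best, h, J):
            best = spins[:]
    return best

# A four-spin ferromagnetic chain with small biases; its true ground
# state is all spins -1, with energy -3.2.
h = [0.1, -0.2, 0.3, 0.0]
J = {(0, 1): -1.0, (1, 2): -1.0, (2, 3): -1.0}
result = anneal(h, J)
print(result, ising_energy(result, h, J))
```

The analogy is loose: the quantum machine tunnels through barriers rather than hopping over them thermally, but the schedule-and-repeat structure is the same.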
Interestingly, because quantum machines are statistical, each computation carries uncertainties, which can be reduced either by running each calculation more slowly (and we are talking a few microseconds here) or by running the same calculation many times and comparing the answers that come up. As it turns out, it is usually faster to run the calculation many times and compare answers than to run one long calculation.

The theoretical minimization problem being solved is best understood separately from what the actual qubits are doing. Over at the D-Wave blog, Hacking the Multiverse, they liken the optimization problem to finding the best setting for a bunch of light switches with various weightings. Each light switch can be either on or off, can have a positive or negative weighting (the h_i term above), and can have a dependency on any other switch in the system (the J_ij term). It turns out to be a really hard problem: for just 100 switches there are 2^100 possible ways to arrange them.
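A sketch of the blow-up (the weights below are made up for illustration): brute force is perfectly fine for a handful of switches, but the number of settings doubles with every switch you add.

```python
from itertools import product

def best_setting(h, J):
    """Exhaustively try every +/-1 assignment of the 'switches'."""
    n = len(h)
    def energy(s):
        e = sum(h[i] * s[i] for i in range(n))
        return e + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return min(product([-1, 1], repeat=n), key=energy)

# Three switches: 2**3 = 8 settings to try. Easy.
h = [0.5, -0.5, 0.5]
J = {(0, 1): 1.0, (1, 2): -1.0}
print(best_setting(h, J))  # (-1, 1, 1)

# One hundred switches: 2**100 settings. Not happening.
print(2 ** 100)  # 1267650600228229401496703205376
```

Checking a million settings per microsecond, the 100-switch case would still take about 4 × 10^10 ages of the universe, which is the whole point of looking for a smarter machine.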

Traditionally the first program a coder writes in a new language is a simple print statement which says Hello world. On a quantum computer the first program you write says Hello multiverse! You could write this program on a D-Wave. Yes, you really could, because you can go out and buy one. Lockheed Martin bought one earlier this year for ten million dollars. The detractors of D-Wave would say you are not getting a real quantum computer, but then why did Lockheed Martin buy one? It is legitimate to ask: is D-Wave the first true quantum computer? This of course depends on your definition of a quantum computer. The answer is probably no if you want a universal quantum computer (which would belong to the BQP complexity class discussed earlier). Probably no here means that reasonable computer scientists studying quantum computers have excellent reason to believe the answer is no, but they lack rigorous mathematical proof. On the other hand, if you are looking for a computer which exploits quantum effects to implement a specific-purpose quantum algorithm, then I think you can safely say, yes, this is a quantum computer. I am just a naive astronomer though, so don't take my word for it. Let me clarify and say that just because a computer exploits quantum mechanics does not make it a quantum computer. All microchips today are small enough that the designers know something about quantum mechanics, and maybe they even have to account for it in the chip's design, but crucially the compilers and the code written for the machine have no knowledge of the quantum mechanics. The algorithms run on the machine assume nothing about quantum mechanics in our universe. A real quantum computer, however, would be programmed according to the rules of quantum mechanics. Indeed, the D-Wave computer is executing an algorithm which explicitly takes quantum mechanics into account.
Further, whether or not the D-Wave computer satisfies computer scientists' definition of a quantum computer is a moot point compared to asking whether it is useful. Currently D-Wave is running experiments to show that the speed scaling of their machine as a function of problem size is, hopefully, better than that of classical computers and algorithms. In the future they will have to show with carefully blinded experiments that their machine scales better than classical machines. If they can execute in a few microseconds calculations which take classical computers decades, I don't care if you call it the one true quantum computer or an oracle; I will just want one.


Harris, R., Johansson, J., Berkley, A., Johnson, M., Lanting, T., Han, S., Bunyk, P., Ladizinsky, E., Oh, T., Perminov, I., Tolkacheva, E., Uchaikin, S., Chapple, E., Enderud, C., Rich, C., Thom, M., Wang, J., Wilson, B., & Rose, G. (2010). Experimental demonstration of a robust and scalable flux qubit Physical Review B, 81 (13) DOI: 10.1103/PhysRevB.81.134510

Harris, R., Johnson, M., Han, S., Berkley, A., Johansson, J., Bunyk, P., Ladizinsky, E., Govorkov, S., Thom, M., Uchaikin, S., Bumble, B., Fung, A., Kaul, A., Kleinsasser, A., Amin, M., & Averin, D. (2008). Probing Noise in Flux Qubits via Macroscopic Resonant Tunneling Physical Review Letters, 101 (11) DOI: 10.1103/PhysRevLett.101.117003

Superluminal claims require super evidence

Neutrinos, those mercurial smidgens of the particle world, travel faster than the speed of light. That's the claim the OPERA collaboration makes in a paper subtly titled: Measurement of the neutrino velocity with the OPERA detector in the CNGS beam. This is a big claim that could have implications for particle physics and time travel. It has made the news, but what does it all mean? Let's talk about neutrinos.
faster than the speed of light
First, let me say that if neutrinos do travel faster than the speed of light then physicists have a lot of explaining to do. The repercussions of faster-than-light travel for any particle (also known as superluminal travel) would be revolutionary. So revolutionary that most physicists I spoke to this past week at a conference did not take the news too seriously: it was too extraordinary to comment on without further thought and details. The OPERA collaboration is actually very brave for putting this paper out there (i.e. on the arXiv) and asking for outside analysis. They don't even pretend to begin to consider the ramifications. The last line of the paper sums up their position:
We deliberately do not attempt any theoretical or phenomenological interpretation of the results.
So let me ignore the wild theoretical implications and discussions of tachyons and just talk about the experiment and an astrophysical constraint on the velocity of neutrinos.

Why are physicists so confident that neutrinos travel at (or just below) the speed of light? Well, start with the fact that no piece of credible data ever taken has seen anything, be it particle or information, travel faster than the speed of light. Given previous observations it is hard to understand how neutrinos could be any different. Of course neutrinos are very difficult to measure because they interact very weakly with regular matter. Consider that 60 billion neutrinos generated in the core of the sun pass through your pinky each second, and essentially none of them interact with you (nor do they interact with the Earth; they pass through you day and night).

The creation and detection of neutrinos is complicated. The process begins for the OPERA experiment over at CERN, where the Super Proton Synchrotron (SPS) accelerates protons to high energy (400 GeV/c) and collides them with a graphite target, producing pions and kaons which decay into muons and muon neutrinos. The neutrinos coming out of the SPS are almost purely muon-type, with an average energy of 17 GeV. The neutrinos travel unimpeded through the solid Earth in a straight path into a cavern below a mountain, Gran Sasso, in Italy. The OPERA experiment was designed to look for the direct appearance of tau neutrinos from muon neutrino oscillations (νμ → ντ), but its anomalous findings on the velocity of neutrinos are much more interesting.

The OPERA experiment found that the velocity of neutrinos was about 0.00248% faster than the speed of light. This measurement was made by precisely measuring the distance the neutrinos traveled and their time of flight, and the OPERA collaboration did a lot of work to pin down both parameters. The time of arrival of neutrinos at the detector was measured using atomic clocks, to a precision of a few nanoseconds. Wow, that is quick. Light only travels about a foot in a single nanosecond.

In order to measure the distance between CERN and Gran Sasso the OPERA team used very precise GPS systems. For example, they noticed that a 2009 earthquake in the area produced a sudden displacement of 7 centimeters. The exact distance the neutrinos traveled was 730534.61 ± 0.20 meters (or about 2.44 light-milliseconds); however, some have suggested that the GPS-based positioning they used has errors introduced by atmospheric refraction. Intriguing possibility.

In order to measure the time, what OPERA calls the time of flight measurement, they used cesium atomic clocks. But the time cannot be precisely measured at the single-interaction level, since the protons from the SPS source are extracted over a 10.5 microsecond window. They had to look at time distributions, from which the most likely time for a burst of neutrinos to be created was inferred to higher precision. Additionally, the actual moment when a meson produces a neutrino in the decay tunnel is unknown, but this introduces negligible inaccuracy in the time of flight measurement. So, these distance and time measurements are really important, but really subtle. I recommend reading the paper if you are a glutton for punishment.
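The headline number is easy to sanity-check from the figures quoted here. Using the 730534.61 m baseline and the roughly 60.7 ns early arrival OPERA reported, a few lines of Python recover a fractional speed excess of about 0.0025%, consistent with the 0.00248% figure above (the small difference comes from rounding and from exactly which baseline is used):

```python
# Back-of-the-envelope check of the quoted speed excess, using the
# baseline from the text and OPERA's reported ~60.7 ns early arrival.
c = 299_792_458.0          # speed of light, m/s
baseline = 730_534.61      # CERN to Gran Sasso, meters
early = 60.7e-9            # reported early arrival, seconds

light_time = baseline / c               # about 2.44 milliseconds
excess = early / (light_time - early)   # fractional speed excess (v - c)/c
print(f"{excess:.5%}")
```

Sixty nanoseconds against a 2.44 millisecond flight: the claimed effect is a part in forty thousand, which is why every systematic error in the clocks and the survey matters so much.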
There is a very interesting constraint on the speed of neutrinos that comes from astronomy: the neutrinos and photons released by the death of a star. Supernova 1987A (SN 1987A) exploded 168,000 years ago, when fusion in the core of an old star ceased and the weight of the star's outer layers caused the core to collapse. The protons in the core merged with the electrons present and were converted into neutrons and electron neutrinos. A mega amount of electron neutrinos, about 10^58, was generated, and they began their epic journey to Earth. Some of these neutrinos arrived one morning in February of 1987 in a burst lasting less than 13 seconds. Of those many neutrinos, about two dozen interacted with detectors on Earth.
Astronomers observed light from SN 1987A just three hours after the neutrinos arrived. Just such a delay is expected, as the fireball of the supernova needed some time to expand and become transparent to photons, whereas the neutrinos could escape much sooner. The explosion occurred at a known distance out in the Large Magellanic Cloud, creating photons and neutrinos in a timed race to the Earth. With these measurements in hand (the distance to the supernova and the arrival time of the photons compared to the neutrinos) we can determine the speed of the neutrinos from SN 1987A.

The accuracy and precision of measurements from SN 1987A are actually much greater than those taken at Gran Sasso, despite the three-hour gap between neutrino and photon arrival. It comes down to the fact that the distance between Earth and SN 1987A is roughly 10^15 times larger than the distance between CERN and Gran Sasso. This means that time measurements from SN 1987A can be extremely imprecise and still be much more precise than the OPERA measurements.

If the neutrinos from supernova 1987A had been traveling as fast as the neutrinos detected at Gran Sasso they would have arrived about four years sooner than the light from SN 1987A.
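That four-year figure is simple arithmetic, worth checking for yourself:

```python
# If SN 1987A's neutrinos had the same fractional speed excess OPERA
# reports, how many years early would they have beaten the light?
excess = 2.48e-5            # OPERA's reported (v - c)/c
travel_years = 168_000      # light-travel time from SN 1987A, years

head_start = travel_years * excess   # years of early arrival
print(round(head_start, 1))          # 4.2
```

A four-year head start versus the three hours actually observed is a discrepancy of more than four orders of magnitude, which is the strength of the supernova constraint.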

This supernova constraint on the velocity of neutrinos is very nice, but it doesn't answer every question, because the comparison may not be apples to apples. The OPERA neutrinos are muon type, not electron type. They are traveling through the Earth, not empty space. And they are much higher energy: the neutrinos from SN 1987A were only about 10 MeV, more than a thousand times lower in energy than the neutrinos in this study. Some may argue that higher energy neutrinos travel faster than lower energy neutrinos. However, a velocity-energy dependence should have stretched out the 13 second arrival window of the SN 1987A neutrinos. Further, part of the OPERA collaboration's analysis involved splitting the data into two bins with mean energies of 13.9 and 42.9 GeV; a comparison between the two bins indicated no energy dependence of the velocity. Thus, while it may still be true that GeV neutrinos move faster than MeV neutrinos, the theoretical wiggle room is shrinking.

This experiment may be a signal of new physics or a case of systematic errors. Yet, even physicists who have developed theories that allow for superluminal velocities are doubtful so I would not bet on proof of hidden extra dimensions or time travel to come from this experiment. Much more extraordinary evidence is necessary to confirm such an extraordinary claim as breaking the speed limit of our Universe.

Turtles all the way down

The beginning was heralded by an elephant's trumpet.

The universe is carried on the back of an ancient turtle.

There were once ten suns embodied by crows. All but one crow was shot by an archer.

The moon is a decapitated head. Her face is painted with bells.

The stars are your ancestors eyes worth remembering.

In time you too will have nine tails and be older and wiser.

All the things which you do not know are vague. Drift clouds.

Having come so far is a matter of vagueness.
I wrote this poem because even modern cosmology faces infinite-regression paradoxes with respect to the initial impetus of the Universe. The various creation stories independently formed in different cultures create some stunning mental images for me. The funniest idea for me is that the Universe is resting on the back of a giant turtle. What is the turtle resting on? Why, it is turtles all the way down. Oh, and I almost forgot: the best blog posts always have a picture, so here is a picture of a turtle.

You've been Westinghoused, Mr. Edison

Recently, while glancing through an old physics text, I found a line I had underlined, Westinghouse Electric Corporation, and I remembered a little phrase that I used to use with other physics students. The phrase was: you've been Westinghoused. Let me explain. There is a curious episode in history known as the war of the currents, wherein the early pioneers of electricity were trying to commercialize the transmission of electricity. Nikola Tesla, with the financing of George Westinghouse, supported alternating current (AC) against Thomas Edison, who supported direct current (DC). Edison tried to discredit the idea of AC transmission by showing how dangerous it was, with shenanigans like electrocuting an elephant in public, but in the end practicality and economics prevailed. AC transmission is much more viable than DC transmission for pure physical reasons: without transformers to step the voltage up for efficient long-distance transmission, DC would have required either copper wires as thick as your arm or a power station every block or so. It was probably a combination of physics and the shrewd business sense of Westinghouse that caused Edison to lose the war of the currents. This history, like the story of Bohr and Heisenberg, has interesting characters and a certain mystique that lends itself to historical plays and documentaries.

Tesla was a modern Prometheus. Some say that history overlooked Tesla; however, there is a current (pun intended) revival of interest in Nikola Tesla, if not always for his science, then for his eccentric personality. This documentary about Tesla talks about his life and work. The part about the war of the currents begins at 18:35.

Now, as Edison fought against AC he tried to be really clever: he wanted to brand death by electrocution as being Westinghoused. However, Edison's electric empire faded, and history shows that he was bested by Tesla and Westinghouse. Scientists are a competitive bunch, so I propose that when one colleague bests another in an academic pursuit, we proclaim that the defeated has been Westinghoused. It isn't the worst thing to be Westinghoused; it just means you were bested in that pursuit. Edison was a great inventor and is still famous to this day, but he surely got Westinghoused.


There was an amazing article up on Wired today about the America's Cup. It reminded me of just how cool competitive sailing is. I wrote about sailing upwind in 2009, before the last America's Cup race, and I mentioned a revolutionary solid-wing multihull boat created by team Oracle. That boat was in fact as fast as promised; it won the race, and by doing so team Oracle won the right to dictate the rules of the next America's Cup. What they did was create the America's Cup World Series of standardized fixed-wing catamaran sailing boats (you can read more about the entire thing in the Wired article). These boats are super fast and super intense. The America's Cup World Series is the water equivalent of Formula 1, but instead of crashes there are capsizes. Well, actually there are crashes too. Here is a hectic highlight reel of these boats racing in the first ever event a few days ago in Cascais, Portugal.
Modern sailing is a paradoxical mix of elements. The boats are designed with advanced knowledge of physics and constructed of carbon fiber, yet they are powered by the simplicity of the wind. I think there is an appeal to working with nature to accomplish work rather than fighting against it. Working with nature always seems to be the most graceful option. In space travel, rather than firing rockets to propel ships, it is advantageous to use gravitational assists by swinging by planets. And then of course there are solar sails in space too. The Japanese IKAROS satellite recently successfully unfurled itself in space and is now being pushed by photons on a unique journey. If you think about it, astronomy and sailing go together.

A Cubic Millimeter of Your Brain

Are there more connections in a cubic millimeter of your brain than there are stars in the Milky Way? We are going to answer that question in a moment, but first take a look at this image of hippocampal neurons in a mouse's brain. It is an actual color image from a transgenic mouse in which fluorescent protein variations are expressed quasi-randomly in different neurons. This kind of image is known as a brainbow. It is aesthetically awesome, and furthermore it may be one way to empirically examine a cubic millimeter of the brain (neuron tomography).
Hippocampus brainbow
by Tamily Weissman, Harvard University
In reality, mapping even a single cubic millimeter of the brain is an extremely daunting task, but we can still answer my original question. First, I know that there are different kinds of neurons that vary in size, and that some neurons can have a soma (the big part containing the nucleus, from which the dendrites extend) spanning a millimeter. Thus, if you picked a random cubic millimeter of brain, you could run right into the heart of a neuron and find very few connections. Given this fact, we could very easily answer the question with a resounding no; however, that seems like an unsatisfyingly trite approach. So I looked up some numbers on how many neurons are in the brain, how many connections are in the brain, and how many stars are in the Milky Way. Let's answer the question using the 'average' number of connections per cubic millimeter.

How many neurons and connections are there in the brain? This is kind of a tricky question, and I am not a neurobiologist, so I have gone to several resources for the answer. Sebastian Seung, Professor of Computational Neuroscience at MIT, says in a TED talk:
your brain contains 100 billion neurons and 10,000 times as many connections
Stephen Smith, Professor of Molecular and Cellular Physiology at Stanford, says in a press release on brain imaging that
In a human, there are more than 125 trillion synapses just in the cerebral cortex alone
René Marois from the Center for Integrative and Cognitive Neurosciences at Vanderbilt Vision Research Center states in a recent paper [1]
The human brain is heralded for its staggering complexity and processing capacity: its hundred billion neurons and several hundred trillion synaptic connections can process and exchange prodigious amounts of information over a distributed neural network in the matter of milliseconds.
I have enough expert sources now to confidently say that these experts agree the human brain has some 100 billion neurons (10^11). The number of connections seems less precise, but it is at least several hundred trillion (10^14) as judged by Marois and Smith, and as much as 10^15 as judged by Seung.

The number of connections in the brain is tricky to define. We may define a synaptic connection as each place a neuron touches another neuron and a synapse is present; it doesn't seem to make sense to simply count incidental contact. Further, there is the question of whether we should count redundant contacts between neurons. We can obtain an upper bound on the number of connections in the brain by considering the case in which every neuron is connected to every other neuron. Coincidentally, the operation of connecting every node in a network with every other node is a process I am familiar with from cross-correlating radio signals. Anyway, the formula we are looking for is N(N-1)/2, where N is the number of nodes in the network. Thus, for our N = 10^11 neurons the maximum number of non-redundant connections is about 10^22. This upper bound is huge! But how huge is it really? Hilariously, while searching for an answer to my original question I found a message board pondering the grand statement
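As a sanity check, the pair-counting arithmetic is easy to verify in a couple of lines (the neuron count is the 10^11 figure quoted above):

```python
# Maximum non-redundant connections among N neurons: N(N-1)/2.
N = 10**11
max_connections = N * (N - 1) // 2

print(f"{max_connections:.1e}")  # about 5.0e+21, i.e. on the order of 10^22
```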
There are more connections in the brain than atoms in the Universe.
A really clever person pointed out that
Theoretically, if we took all the atoms in the universe; wouldn't that include the atoms within the brain?
People have this feeling that the number of connections between items can be much larger than the number of items in the collection, and while this intuition is true, the idea that there are more connections in the brain than there are atoms in the universe is absurd. Let's put it in perspective: a few grams of any substance, like water, is measured in moles. A mole is a standard unit of measurement corresponding to 6.02 × 10^23 items. Thus even a drop of water contains more atoms than there are connections in the brain.
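To make the drop-of-water claim concrete, here is the arithmetic; the 0.05 g drop mass is my own round figure, while Avogadro's number and the 10^15 connection estimate come from the text:

```python
# Rough check: atoms in one small drop of water vs. synapses in the brain.
AVOGADRO = 6.02e23                # items per mole
drop_mass_g = 0.05                # a small drop of water (my own round figure)
molar_mass_water_g = 18.0         # grams per mole of H2O
atoms_per_molecule = 3            # H2O = 2 hydrogens + 1 oxygen

atoms_in_drop = drop_mass_g / molar_mass_water_g * AVOGADRO * atoms_per_molecule
brain_connections = 10**15        # the high estimate quoted above

print(atoms_in_drop > brain_connections)  # True: ~5e21 atoms vs ~1e15 connections
```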

Now we need to know how many neurons and connections are in an average cubic millimeter of the brain. How big is the brain? John S. Allen of the Department of Neurology at the University of Iowa stated in a recent paper that [2]
The mean total brain volumes found here (1,273.6 cc for men, and 1,131.1 cc for women) are very comparable to the results from other high-resolution MRI-volumetric studies.
We can take the volume of the brain as 1,000 cc as a low estimate (which will only overestimate the density of connections).

The final thing we need to know to answer the question at hand is the number of stars in the Milky Way. Like every other number we have been working with, it is rather uncertain. Even if we define a star as only those spheres of gas which are large enough to fuse hydrogen at some point in their lifetime, we don't know the answer, because we can't see the multitudes of dim stars. There are probably at least 500 billion star-like objects in the Milky Way. Let's take 100 billion as the number to be conservative.

Finally, let's bring all the numbers together. One cubic millimeter is 1/1,000 of a cubic centimeter and 1/1,000,000 (10^-6) of the entire volume of the brain. Scaling the total number of connections in the brain (using the high estimate of 10^15), we find that there are about 10^9 connections in a cubic millimeter of the brain. That is two orders of magnitude smaller than even a low estimate of the number of stars in the Milky Way. No, on average there are not more connections in a cubic millimeter of your brain than there are stars in the Milky Way.
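The scaling can be spelled out explicitly, using the same numbers as in the text:

```python
# Connections per cubic millimeter, using the estimates from the text.
brain_volume_cc = 1000
mm3_per_cc = 1000
brain_volume_mm3 = brain_volume_cc * mm3_per_cc   # 10^6 cubic millimeters

total_connections = 10**15                        # high estimate for the whole brain
connections_per_mm3 = total_connections // brain_volume_mm3

stars_in_milky_way = 10**11                       # conservative estimate

print(f"{connections_per_mm3:.0e}")               # 1e+09 per cubic millimeter
print(stars_in_milky_way // connections_per_mm3)  # 100: two orders of magnitude more stars
```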

My first response to this question was: bullshit! This question (or rather statement) is made by David Eagleman here at a TEDx talk and here on the Colbert Report. Colbert also called out Eagleman when he dropped this factoid, but it didn't stop the interview. I have contacted some actual neuroscientists to see what they thought of this statement, and they agree with me that it is not true. Maybe there is a special part of the brain with a much higher density of connections than the brain on average, but that would be misleading, like saying the density of the Milky Way is that of water because, you know, certain parts of the Milky Way are water. The better statement would be to say that there are more connections in the brain than there are stars in the Milky Way. As Colbert would say, I am putting you on notice, Eagleman.

While we are on the subject I want to mention my favorite talk about the brain which mixes just the right amount of wonder and fact. It is the TED talk I mentioned earlier by Sebastian Seung on what he calls the connectome - the network of connections in your brain between neurons which physically dictates how you think. In the video he discusses another volume tomography technique in the brain using a cube of mouse brain tissue just 6 microns on a side. It is another great visualization for what is actually in a cubic millimeter of your brain.


[1] Marois, R., & Ivanoff, J. (2005). Capacity limits of information processing in the brain. Trends in Cognitive Sciences, 9(6), 296-305. DOI: 10.1016/j.tics.2005.04.010

[2] Allen, J., Damasio, H., & Grabowski, T. (2002). Normal neuroanatomical variation in the human brain: An MRI-volumetric study. American Journal of Physical Anthropology, 118(4), 341-358. DOI: 10.1002/ajpa.10092

On Replications

Repetition is ubiquitous and has many different meanings in education, art, literature, science, and life. Ideas replicate and mutate; cultural memes spread through culture seamlessly. Manufactured goods are produced as nearly identical as possible: deviations from the mold are discarded and parts are interchangeable. Digital data is almost limitlessly replicable. Any data or idea committed to the digital world is perfectly copied (sparing the occurrence of a flipped bit) until it is intentionally modified. This characteristic of digital ideas presents a unique challenge for creators of content, distributors, and bored people on the internet. And of course animals and plants on Earth have the ability to replicate themselves with minor variations. What do we make of all of this?

I am keen on the intersection of art and science on this matter. I like making collages and have highlighted repeated images before with 35 images of space helmet reflections and 100 images of macchiatos. Through repetition and distortion images may be amplified or diminished. It depends on perspective. Generally in artistic endeavors, as in life, the slight variations of a repeated theme are aesthetically pleasing. On the other hand technical work such as engineering, data analysis, or manufacturing requires precise replication. I work in radio astronomy where each radio telescope in the array is nearly identical and the need for precision trumps all other considerations. I find that randomness is never particularly interesting, but neither is absolute order. Somewhere in between these extremes we have something really beautiful.

Looking back and looking forward

This photo was taken during the 1975 Apollo-Soyuz mission, the last US manned space flight before the first shuttle launch in 1981. Portrayed is the historic handshake between Tom Stafford and Alexey Leonov through the open hatch between the American Apollo and Russian Soyuz ships. Today the Atlantis shuttle lifted off for the last shuttle mission. It is the end of an era. Alas, all good things must end.

Mars Rover Curiosity

This animation depicts what will happen in August 2012 if all goes as planned for Curiosity, NASA's next Mars rover. This rover is much larger and more capable than the previous rovers. It is about the size of a small car and has an entire suite of experiments on board. During entry it uses a series of thrusters to maneuver to the designated landing area. Once the ship has slowed down to Mach two (keep in mind that the atmospheric pressure at the surface of Mars is less than 1% of Earth's), a parachute is deployed. As the vehicle slows, the heat shield comes off and a radar measures how quickly the surface is approaching in order to slow for a smooth landing. The last daring step is a so-called 'sky crane' which lowers the rover on a long cable from the rocket-thrusted descent stage above. Eventually Curiosity will begin roving, but it won't be limited to roving only during the day by solar panels as the previous rovers were. The large tilted box on the back of the rover contains 4.8 kg of plutonium dioxide, whose heat serves as the power source of the rover. The power should keep flowing for much longer than the minimum specified science mission of two Earth years. The rover will seek out promising rocks in places such as ancient Martian riverbeds or canyons where evidence of early environments on Mars can be found. The ability to navigate to these areas is an important science requirement for the rover and is one of the reasons for the rover's large size and nuclear battery, which should allow it to travel at least 20 kilometers during its lifetime. Geologists and astrobiologists also want to know if certain conditions, such as those necessary for organic molecules, are present. In the video a laser and a drill are shown performing experiments. The laser is ChemCam, which will fire at hard-to-reach rocks and detect the resulting light in order to discern their chemical composition. The drill is about a centimeter in diameter and will extract the dust from the holes it creates to run experiments in mineralogy (the laser device inside the rover shown in the video) or to detect organic molecules. All of these experiments aim to answer the question: could Mars have had an environment capable of supporting life at one time?

If the sky crane works we may soon know the answer to this question. Curiosity has a launch window from November 25 to December 18, 2011, from Kennedy Space Center in Florida. And in other news, NASA's James Webb Space Telescope is being threatened with the axe in a budget bill in the U.S. House of Representatives today. NASA will never run out of adversaries pulling it down: gravity and the budget.

Gödel's Proof

There is an idea of reason in the Universe, an abstraction with which mathematicians have never been content. Given that scientists rely on logic (or mathematical reasoning) for theories and experiments, it is of incredible importance to know the limits of logic. It turns out that the study of math itself, metamathematics, has amazing insights into what is knowable.

In 1931 an unassuming paper was published in a German mathematics journal; the title of the paper (translated to English) was 'On Formally Undecidable Propositions of Principia Mathematica and Related Systems I'. It is a confusing title, and the kind of paper which I would not understand. The author was a 25-year-old Austrian named Kurt Gödel, and he had just created a revolutionary idea, but as with so many great ideas it was not simple, and it took great minds to fully appreciate it.

The ideas he put forth have been extremely influential, and the collective name for the theories that grew from them is Gödel's incompleteness theorem. This theorem is a revolutionary outcome in mathematical logic that has implications not only for the philosophy of mathematics, but for philosophy in general. It is thus surprising how relatively unknown the theorem is to the general public and even to many scientists. I recently read the book Gödel's Proof by Nagel and Newman in just a few sittings at a coffee shop. It is a short and concise explanation of the proof that incrementally brought me closer to understanding the intricacies of Gödel's work.

Gödel's incompleteness theorem is a massive mountain of ideas that I will not attempt to conquer, but I think it is important that everyone at least gets a view. Gödel basically found that no solid guarantee is possible that mathematics is entirely free of internal contradiction. However, Gödel was not out to trash mathematics; on the contrary, he used mathematics itself to temper the reach of mathematics and place constraints on what it is possible to know through mathematics, much like a physicist theorizing that a black hole's event horizon places a limit on the spaces the physicist could actually go and measure. Gödel created a new technique of analysis and introduced new probes for logical and mathematical investigation.

The specifics of Gödel's proof, even as outlined on Wikipedia, are extremely complicated for anyone without (or with) extensive mathematical training (the entire proof is long, and there are on the order of 200 links in the article, so by the time you were done reading all of the prerequisite mathematical definitions you would have read thousands of pages). I must admit that I don't understand it completely, nor have I attempted to read the actual paper. I want to present here the shortest definition of Gödel's theorem, not the most rigorous.

The key to Gödel's incompleteness theorem is the concept of mapping. In the information age the concept of mapping, or coding, is familiar to many, as in the case of mapping Morse code dots and dashes to letters. In the explanation that follows, take it as a given that all logic systems can be shown to be equivalent to, or mappable to, the operators we will be using; this assumption is vital, and I can't quite explain it without detail, so I refer the inquisitive mind to Nagel and Newman's book.

Let us construct a simple logic system using the arbitrary operators P, N, ⊃, and x, which have certain properties that we take as given by the table below. In the left column a combination of operators is given, and in the right column a definition in English of the operators' meaning is given.

P⊃x      'print x'
NP⊃x     'never print x'
PP⊃x     'print xx'
NPP⊃x    'never print xx'

We can combine these statements and create more complicated statements. For example P⊃y where y=P⊃x would mean 'print P⊃x' (note that implicitly I am also using the equals operator). Crucially then NPP⊃x would mean 'never print xx', and this statement could also be written NP⊃xx.

Next, ponder what the last statement means when applied to itself. The statement NPP⊃y where y=NPP⊃ would mean 'never print NPP⊃NPP⊃', but this strange statement could also be written NPP⊃NPP⊃.

So either our system prints NPP⊃NPP⊃ or it never prints NPP⊃NPP⊃; it must do one or the other. If our system prints NPP⊃NPP⊃ then it has printed a false statement, because the statement contradicts itself by self-reference once it is printed. On the other hand, if the system never prints NPP⊃NPP⊃ then we know that there is at least one true statement our system never prints.

So either there are logic statements which may be printed which are false statements, or there are true statements which are never printed. Our system must print some false statements if it is to print all true statements. Or our system will print only true statements, but it will fail to print some true statements.

In the example above I have taken arguments very similar to those in Raymond Smullyan's book Gödel's Incompleteness Theorems in order to create an extremely concise, but hopefully accurate, description of what lies at the core of Gödel's insight. In Nagel and Newman's book they explain Gödel's proof in much more detail by working out the details of the mapping. For example, in the explanation above I mapped mathematical statements to the idea of printing, but printing could equivalently be provability. Further, Nagel and Newman argue, as Gödel did, that all formal axiomatic systems can be mapped in some way such that even the most complicated mathematical systems, using the common operators +, -, =, x, 0, (, ), ⊃ and so on, can be shown to be incomplete.

Gödel's incompleteness theorem has many forms and implications. Briefly, I will demonstrate an analogous but weaker form of Gödel's incompleteness theorem by analogy to the halting problem. I believe this demonstration is of importance to those of us immersed in the information age, and it is perhaps easier to grasp, or at least more applicable, than Gödel's work.

The halting problem is to decide, given a computer program and some input, whether the program will ever stop or will continue computing forever. The key to the halting problem is the concept of computation and algorithms. In the original proof by the enduring Alan Turing, specific meanings for the concepts of algorithm and computation were defined. He used a computational machine now known as a Turing machine. The definition of what constitutes a computer is to the halting problem what the mapping of symbols is to Gödel's theorem: it is at the heart of the problem, and thus actually one of the harder points to define, so I will again leave that task as an exercise for the reader.

So let's look at two pseudocode programs, and let's imagine that we also have a very special program, written by a genius scientist, called Halting. The scientist claims that Halting(P,i) can correctly tell whether program P halts on input i, so we can wrap it in our own code B(P,i).

Program B(P,i)
  if Halting(P,i)==true then
    return true // the program halts
  else
    return false // the program does not halt

Now here is the important part. The genius scientist claims we can analyze any kind of computer program; this is indeed the crux of the halting problem, since we want to know if any and every program stops. Now imagine a program E that takes any program X as an argument.

Program E(X)
  if B(X,X)==true then
    while(true) // loop forever
  else
    return true

The first thing E does is call B, passing X for both arguments. Program E will get back from B either true or false. If it receives true it will enter an infinite loop, and if it receives false it will terminate.

So suppose I take B and feed it E for both arguments. What answer will B(E,E) give? Think about it.

Feeding E to itself as E(E) will run the program B(E,E), whose answer will be either true or false. If the result is false, the program E actually returns true and halts immediately; if the result is true, then Halting thinks our program halts, but E throws itself into a loop upon this condition and will never halt. Either way, Halting gives the wrong answer. E was written very craftily to break B on purpose, but nonetheless the damage is done: B cannot be made reliable even in principle. It matters not how clever you are or how powerful your computer is. There is simply no reliable computer program that can determine whether another program halts on an arbitrary input. The incompleteness problem may have seemed a little bit distant and philosophical, but if you have read this far it should be evident that the halting problem has deep implications for computing.
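The argument above can even be made runnable. In this sketch (the names build_E and oracle are my own), any function claiming to be a halting oracle is defeated by the contrarian program E; here we expose one particular, necessarily wrong, oracle that claims nothing ever halts:

```python
def build_E(halting):
    """Construct the contrarian program E from a claimed oracle halting(P, i)."""
    def E(X):
        if halting(X, X):    # the oracle says X(X) halts...
            while True:      # ...so loop forever to contradict it
                pass
        return True          # the oracle says X(X) loops, so halt instead
    return E

def oracle(P, i):
    return False             # a (wrong) oracle: claims no program ever halts

E = build_E(oracle)
# The oracle claims E(E) never halts, yet it returns immediately:
print(E(E))                  # True -- the oracle was wrong about E
```

Swap in any other terminating oracle and E defeats it the same way: whatever the oracle answers about E(E), E does the opposite.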

What does Gödel's theorem mean for the real world, experimental verification, and the deductive sciences? Take for example the Banach-Tarski paradox, which states that a solid ball in three-dimensional space can be split into a finite number of non-overlapping pieces and then put back together in a different way to yield two identical copies of the original ball. This process violates sensible physical notions of conservation of volume and area. It turns out that Banach and Tarski came to this conclusion based upon deductions from the axiom of choice. Now, whether or not we know anything at all about the axiom of choice, we do know that the deductive conclusions drawn from it are in violation of physics. Thus, a physicist could argue that the axiom of choice is not a valid axiom for our Universe. Within mathematics it is unprovable whether or not we should accept the axiom of choice, because it is, after all, an axiom; the argument for whether a given axiom is to be accepted must be made outside the confines of the logic structure one is arguing about. It turns out that the axiom of choice is important for many other really important mathematical proofs which are used in physics all the time. I don't know what to make of it really; perhaps a mathematician out there should weigh in on this question.

Another important theorem that goes along with Gödel's theorem is Tarski's undefinability theorem. Tarski's undefinability theorem makes a more direct assertion about language and self-referential systems: basically, any language sufficiently powerful to be expressively satisfying is limited. In summation we have two vital points to the concept of incompleteness.
  1. If a system is consistent, it cannot be complete and is limited.
  2. The consistency of the assumptions or axioms cannot be proven entirely within the system.
The repercussions of this meta-analysis of logic are profound and subtle. Gödel really has thrown us for a loop. It is unclear if we should draw the line and say this is just a mere curiosity of mathematics or a deep truth about the Universe. It has been proposed by Douglas Hofstadter (author of Gödel, Escher, Bach) that consciousness itself comes from a kind of 'strange loop' induced by a self-referential system in our minds. Primarily, I think we can conclude that Gödel's incompleteness theorem implies that in most situations the tools science has to analyze the world are more than adequate, because the situations are not self-referential. However, I do see a limit to what we can know about the Universe. As physicists forge forward, generally quite successfully, in understanding the Universe, it appears that there really is some consistent mathematical basis for it. Many physicists are searching for this mathematical basis, the so-called theory of everything. But does Gödel's result imply that this mathematical basis cannot be both complete and self-consistent?

Consider this scenario. One day our most powerful, successful, and comprehensive theory ever will predict something that experiment cannot verify, or worse, that experiment patently disagrees with. Some will argue that the theory must be thrown out, because classically the scientific method states that a theory disagreeing with experiment, or making nonsense predictions, is untenable. Another theory will be introduced that makes consistent and testable predictions; however, this theory will not be able to predict the most intricate traces of nature. Actually, that first case kind of sounds like string theory. Perhaps one day we will have to start talking about theories being incomplete instead of wrong.


Wow, there really is something new to learn every day.

I Hate Astrology

Perhaps it is cruel to snuff out the shining gleam in the eyes of a person who, upon hearing that I am an astronomer, exclaims, "Oh, I love astrology!" to which I reply, "No, I study ASTRONOMY." But they don't understand. The subtle difference in syllables belies the vast gulf in empirical tendencies between the two endeavors, and it is too much to explain. I simply walk away.

I hate astrology, and I hate when people get astronomy and astrology mixed up. I could be more understanding, but I have to choose my battles. I meet a lot of interesting people in coffee shops, bars, airplanes, parties, and wherever else life takes me, and when someone gets excited about the fact that I study astronomy it means they have a deep curiosity about the skies above. That curiosity is occasionally deeply misguided by astrology, and their questions are so fundamentally misconceived that I struggle to answer them with candor and accuracy (for example, they ask, 'Do the planets affect our daily lives?' and I hesitate to answer honestly that we must consider their gravitational pull, so the answer must be yes). On the other hand, I meet people who are genuinely interested in massive collections of gravitationally bound glowing gas, and I am very happy to answer their questions.

There is a real danger when logic, or pseudologic, is applied to astrology. Recently there was an uproar about the shifting of the zodiac that made it into some news headlines. Briefly, I shared the frustrated sentiments of astrologers, because the shifted zodiac has been well known for some time; why is the public just now hearing about it? The book in the image above is from the seventies and claims right there on the cover that 'Most astrology is unscientific and inaccurate', and goes on to explain the shifted zodiac and how to have a movie-ending romance. The ideas in this book are the apotheosis of dangerous thought. A little bit of knowledge is a very dangerous thing when combined with pseudologic in the guise of rigorous proof. It has also not escaped my observation that many of the people I have known who believe in astrology also believe in God, as if to demonstrate the utter confusion and inconsistency of their minds. I don't mean to badger defenseless people here; this is simply an honest expression of how I feel. I have summed up my sentiments into a paragraph which I think would be nice to place on a card to hand out to people who confuse astrology with astronomy:

I cannot rightly conceive of a logic which would allow one to study such disparate phenomena as love, planets, and stars and come to see any connection. Perhaps, desperate for meaning, people find it wherever they look; conclusions are forged before the data have been taken. Those who would apply science to astrology may as well attempt to apply science to whom to love. Sociologists do study what makes a lasting relationship, and neurobiologists study which chemicals are active in the brain during feelings of love, but no scientist will claim that Romeo shouldn't love Juliet. I believe science and an understanding of natural phenomena add to the beauty of life, but pseudologic and lies, even when propagated with good intentions, ultimately lead to pain and suffering. The human mind has the ability to find patterns anywhere, indeed often where they do not exist.

Making sense of a visible quantum object

Quantum mechanics predicts that nature is fundamentally uncertain. Particles are in multiple states at once; particles are here and there. As we extrapolate these properties of nature to macroscopic objects, the results are counterintuitive. The counterintuitive predictions of quantum mechanics should be an observable phenomenon, and indeed they are. In this talk the intuition is examined, and in this paper by the same physicist, Aaron O'Connell, the physics is examined.

So I haven't been posting lately. I have been drowned in opportunities, hit by funding woes, and frankly it feels like my mind is melting. I will probably post more aggregated non-original content, like this post, but I also have lots of ideas and new things I am working on.

Flying is Unsustainable

Today I am crossing Australia, the Pacific, and then the West Coast by airplane, and I feel guilty. You see, everything that I do in my daily life to be environmentally friendly is nullified by my airplane travel. Even if I were completely carbon neutral in my daily life, the excessive amount of airplane travel that I partake in each year would place me in the same ranks as the worst polluters in America. According to a green manifesto (also see this description of 'low-energy astrophysics') by astrophysicist P.J. Marshall and others, the average energy consumption of a person in the U.S. is 250 kWh/day/person. An astrophysicist uses an extra 133 kWh/day/astronomer, and the vast majority of that additional energy usage, 113 kWh/day/astronomer, comes from flying. The key message of the manifesto is that while astronomers are not actually a significant energy consumer in the U.S. (they use 0.001% of the national total energy production), we are high-profile scientists who must set an example. Astronomers believe global warming is real, and thus must act.
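The manifesto's numbers make the point with simple division (the values are the ones quoted above):

```python
# Energy budget from the quoted figures, in kWh/day.
average_person = 250     # average US consumption per person
astronomer_extra = 133   # extra consumption per astronomer
flying = 113             # portion of the extra due to flying

print(f"flying is {flying / astronomer_extra:.0%} of an astronomer's extra energy use")
print(f"an astronomer totals {average_person + astronomer_extra} kWh/day")
```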

Astronomy in Western Australia

Murchison, AUSTRALIA - Building a radio telescope is nothing like working on an optical telescope, except that both bring you to remote areas. Western Australia reminds me of the Texas hill country. I grew up in Texas, and as simply as I can describe it, Western Australia is like an upside-down Texas. And the people are nearly the same: they have thick accents, more land than they know what to do with, and national pride. It is hard to describe everything, so here are a few pictures of what I have seen out here.

This is a massive 12 meter radio dish from the experiment next door. Western Australia is perfect for radio astronomy. There are very few people and no radio stations to interfere with the data. Several other projects, most importantly the Australian Square Kilometer Array Pathfinder (ASKAP), are very near the Murchison Widefield Array (MWA). Australia is pushing to develop an infrastructure and technical knowledge base in a bid to host the Square Kilometer Array, which will be the ultimate next-generation radio telescope.

A Bungarra has come to visit the MWA! A Bungarra, or Sand Goanna, is a type of monitor lizard common in this part of Western Australia. They are big critters with a swaggering gait and a curious yet skittish attitude. This one was wandering around our site for some time. I think he was as curious about what we were doing as I was about what he was doing. The big white box is a receiver that takes input from the antennas we were testing.

The self-reliant mindset is necessary out here. I am on what an American would call a ranch, but what they call a station. The stations muster, or as I would say drive, thousands of head of cattle and sheep to turn a profit. We were driving along the road one day on our 40 kilometer commute from the station to the antennas when we came upon this airplane. The station manager flies it around to help spot and muster the livestock because it is one of the only ways to find anything on 900,000 acres of land. There are actually lots of subtle differences between the stations around here and what I would expect in Texas. For example, there are kangaroos instead of deer, and they don't use horses to muster; they use dirt bikes.

This image is a track left by a Bungarra marching off into the distance. Long before the scientists, engineers, or even the ranchers converged onto this remote land, an indigenous population known as the Wajarri lived here. Bungarra and their eggs were, or rather still are, a source of food for these people. The Wajarri, like other Aboriginal peoples in Australia, have a different cultural background which is hard for me and many other Westerners to comprehend. What is clear to me is that ancient wisdom still matters in this modern world because humans have a tendency to overreach; technology allows us to do many things, but what should we choose to do? The Wajarri people seem to agree that we should do astronomy, as they have allowed us the use of their land for radio astronomy. Perhaps a desire to understand our place in the Universe is a shared cultural value.

A Primer on Radio Astronomy from Australia

Murchison, AUSTRALIA - I am seemingly in the middle of nowhere, and yet I do not doubt that the Murchison Widefield Array (MWA) is at the center of the Universe. Australia is beautiful out here. The area is surprisingly green because of recent rains, and the sunsets are a mix of pastel reds and blues. At night the sky is filled with shooting stars, and the Milky Way cuts through the sky so bright that dust lanes and nebulae, like the Coalsack Nebula, seem to have been painted in black on top of the band of stars in our galactic plane. The radio sky as the MWA sees it would look very different. In order to grasp what the MWA does, we will have to first explore what radio astronomy and interferometry are.
Radio astronomy is the alchemy of astronomy: shrouded in secrecy and perpetuated by false claims of being able to transmute raw data into gold. There was a time when radio astronomy was really hard, and that time is always, but technology is making new things possible. The Murchison Widefield Array that I am working on here in the outback is only one of many next-generation low-frequency radio telescopes coming online or planned, such as LOFAR, the LWA, and others.

Modern astrophysicists can observe the Universe using light, particles, or (hopefully) gravity waves. Classically astronomers observed light through a telescope, but today we don't look through telescopes and we don't just vaguely see light; we precisely count photons from every part of the electromagnetic spectrum. Light is made of photons, but a photon can be thought of as a wave and a particle. Indeed, a photon is a wave and a particle at once. Longer wavelength photons have lower energy and lower frequency compared to short wavelength photons. In the radio regime of the electromagnetic spectrum the particle view of light is not very helpful; in fact, many radio astronomers and engineers neglect to ever think about photons and only consider wavelength or frequency. Radio astronomers view light as electromagnetic waves impinging upon our patient antennas like waves on a beach.
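The wavelength and frequency views are interchangeable through c = λν, which a couple of lines make concrete; the 1.4 GHz example here anticipates the 21 cm hydrogen line that comes up below.

```python
# Converting between wavelength and frequency via c = lambda * nu.
C = 299_792_458.0  # speed of light, m/s

def wavelength_m(freq_hz):
    """Wavelength in meters for a given frequency in Hz."""
    return C / freq_hz

def frequency_hz(wavelength):
    """Frequency in Hz for a given wavelength in meters."""
    return C / wavelength

print(wavelength_m(1.420e9))  # ~0.21 m: the neutral-hydrogen 21 cm line
```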

Long wavelength photons come from some very interesting sources in the sky. Radio waves certainly come from the Sun, because the Sun emits some energy at just about every wavelength. Radio waves are also emitted by galaxies, pulsars, and neutral hydrogen (through the 21 cm line). However, the wavelength of photons is not constant: it increases as the photons traverse the Universe due to cosmological redshift, such that more distant objects are seen at progressively longer wavelengths. In my research I am particularly interested in studying the distribution of matter in the Universe at the largest of scales and at the earliest epochs, when there was an abundance of neutral hydrogen. Radio waves are perfect for studying these phenomena, but it is difficult to build a telescope that can see a wide field of view, can see a wide range of frequencies at once, and has good resolution.
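The redshift stretching is a simple factor of (1 + z), which is why low-frequency arrays chase the early Universe. A minimal sketch, with the redshift values chosen purely as illustrations of the early epochs mentioned above:

```python
# Cosmological redshift: an observed wavelength is stretched by (1 + z),
# so an observed frequency is divided by (1 + z).
REST_FREQ_MHZ = 1420.4  # rest frequency of the 21 cm hydrogen line

def observed_freq_mhz(z):
    """Frequency at which the 21 cm line from redshift z arrives."""
    return REST_FREQ_MHZ / (1.0 + z)

for z in (0, 7, 9):
    print(f"z = {z}: {observed_freq_mhz(z):.0f} MHz")
```

At redshifts around 7 to 9 the line lands near 140 to 180 MHz, squarely in the low-frequency regime these new arrays target.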

As radio waves arrive at our antennas we can either immediately detect them or we can reflect them to a receiver. The Arecibo telescope in Puerto Rico is a reflector-type telescope, as are the antennas of the Very Large Array. The antenna on your car directly receives the electromagnetic wave: the wave induces an oscillation in the metal of the antenna, and once you hook up a transistor and a speaker you have a radio, like the one in your car.

The problem with detecting radio wavelengths is that they are not easy to catch and they act very much like waves. Waves have strange properties such as interference and diffraction. It can be shown from wave theory that a telescope of diameter D receiving light of wavelength λ has a fundamental angular resolution limit proportional to λ/D. For example, the colossal 300 meter diameter Arecibo telescope can only resolve objects down to about 3 arc minutes (roughly a twentieth of a degree) at a wavelength of 0.21 meters (1.4 GHz), and that resolving power only gets worse as we move to longer wavelengths. So if you want to see small things in the sky you had better have a huge radio telescope. But wait, there is more. The field of view that a radio telescope can see is also proportional to λ/D. For example, at a wavelength of 0.21 meters it would take Arecibo about 10 separate observations to make an image of the full Moon, which spans about half a degree on the sky.
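The λ/D limit is easy to evaluate directly; this sketch uses the bare proportionality from the text (a circular aperture would add a factor of about 1.22, omitted here).

```python
import math

def resolution_arcmin(wavelength_m, diameter_m):
    """Diffraction-limited angular resolution ~ lambda / D, in arc minutes."""
    theta_rad = wavelength_m / diameter_m
    return math.degrees(theta_rad) * 60.0

# An Arecibo-like dish: 300 m diameter observing at 0.21 m (1.4 GHz)
print(f"{resolution_arcmin(0.21, 300.0):.1f} arcmin")
```

The result is a couple of arc minutes, and doubling the wavelength doubles that blur, which is exactly the long-wavelength squeeze described above.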

So if you want to look at the radio sky at high resolution you had better use a huge telescope, but if you want to look at the radio sky in huge swaths, as in a survey, you had better use a small telescope. It would seem that we are at an impasse in finding a radio telescope with both decent resolution and a decent field of view. Enter radio interferometry.
The diagram above is a pictorial representation of the principles of radio interferometry. In box A we have a big radio dish like Arecibo and a radio wave incident upon it. The radio waves hit the dish and reflect to a receiver (not shown in the cartoon) at the focal point. In box B we have chopped our radio telescope into little bits; the pieces together still behave as the original dish and operate via the same principles, each part reflecting the radio waves incident upon it to a single focal point. In box C we have moved the pieces of the dish apart into several independent dishes and wired them together. Each dish now has its own focus, field of view, and angular resolution limit (this setup is similar to the VLA). Finally, in box D we have gotten rid of the dishes altogether. Instead of turning a dish to point at a particular object, we use the time delay due to the finite speed of light to 'point' the antennas. An object in the direction θ in the sky sends out radio waves that arrive at the antennas tilted. So to catch the same wave on the antennas on the left and right spanning the arrow in the diagram, we apply the time delay τ. In this setup there is no pointing of the dish, only electronic control of simple antenna elements; this is how the MWA works.

The most difficult part of pulling the telescope apart is reassembling the signals coherently. In the diagram this is the function of the box with the circle and x. In radio astronomy that box would be a complex supercomputer called a correlator. The computing power needed to operate a correlator scales as the square of the number of antenna elements, so it takes a truly powerful computer to operate an array with many antennas. The idea is that the signals from each pair of antennas are correlated together to determine the pattern of incident radio waves. This is the basic idea of radio interferometry; the beautiful thing is that you get the large field of view that each small antenna sees and the excellent resolution that the large effective diameter of the array provides. The description I have given here of radio interferometry is wildly simplified.
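The squared scaling comes from the pairwise correlation: N antennas give N(N-1)/2 unique pairs (baselines). A quick sketch, with antenna counts chosen only for illustration, not the MWA's actual configuration:

```python
def n_baselines(n_antennas):
    """Unique antenna pairs a correlator must process: N choose 2."""
    return n_antennas * (n_antennas - 1) // 2

for n in (4, 32, 128):
    print(f"{n} antennas -> {n_baselines(n)} baselines")
```

Quadrupling the antenna count multiplies the correlator's workload by roughly sixteen, which is why the correlator, not the antennas, is often the expensive part of these arrays.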
So here I am in a shed in Murchison. A generator is humming along, powering all our computers and equipment and, importantly, the air conditioner keeping me cool. Flies and strange insects pester us relentlessly the moment I step outside. There are a lot of great things about Australia, but it is also a very harsh environment out here. The array has not been cooperating perfectly: there are amplifiers, attenuators, analog-to-digital converters, correlators, and more that all have to act in symphony for the system to work. The last few days we have solved as many problems as we have created. The radio sky sends its night notes down upon us and waits for us to complete the instrument.