Field of Science

Earthquakes and Telescopes

On Friday, February 26 at 10:34 PM PST there was an 8.8 magnitude earthquake in Chile. This earthquake released some 500 times more energy than the Haitian earthquake, yet the death toll is only about 1/500 that of Haiti. Thankfully, because Chile was prepared and because the epicenter was more distant from urban areas, the destruction and human suffering were limited. My heart goes out to all those affected by the Chilean and Haitian earthquakes.
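That factor of 500 drops straight out of the Gutenberg-Richter energy relation, log10(E) = 1.5M + const. A quick sketch (taking 7.0 as the Haiti magnitude, as widely reported at the time):

```python
def energy_ratio(m1, m2):
    """Ratio of seismic energy released by earthquakes of magnitude m1 and m2.

    Uses the Gutenberg-Richter relation log10(E) = 1.5*M + const;
    the constant cancels when taking the ratio."""
    return 10 ** (1.5 * (m1 - m2))

# Chile (8.8) versus Haiti (7.0):
print(round(energy_ratio(8.8, 7.0)))  # roughly 500
```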

The astronomer in me immediately thought of the telescopes in Chile. Chile is host to many present and planned astronomical observatories. If the telescopes there were destroyed or even minimally damaged it could set back astronomical research for years. How did the telescopes in Chile manage? So far I am only certain that the two 8.1 meter Gemini telescopes are okay and will carry on observations as normal tonight. High in the Chilean Andes there are many more astronomical observatories, including the European Southern Observatory's Very Large Telescope, which is actually four separate 8.2 meter optical telescopes. The more I considered the region, the more I realized that it is prepared for exactly this kind of event. Anil Ananthaswamy on the Edge of Physics blog explains, with an excerpt from his new book, why the VLT is most likely perfectly fine:
The primary mirror is 18 centimeters thick. Because of its weight, the mirror’s precise shape can warp when it is tilted, so 150 actuators, upon which the mirror rests, continually push and pull at least once a minute to ensure that the optimal curvature is maintained. More impressive than the actuators are the clamps around the edges of the mirror, which can, at a moment’s notice, lift the entire mirror, all 23 tons of it, off the actuators and secure it to the telescope’s support structure in case of an earthquake (moderate quakes, of less than 7.75 Richter, are not uncommon here, thanks to the ongoing collision of the Nazca and South American plates). The entire telescope is designed to swing during an earthquake, and securing the primary mirror prevents it from rattling against the metal tubes that surround it.
A Very Large Telescope primary mirror. The support structure and clamps are visible. Image by Dirk Essel

Update: NASA also reports that the Chilean quake may have slightly shortened the length of Earth's day. The explanation is of course conservation of angular momentum: the Earth is like an ice skater pulling in its arms to spin faster.
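The ice-skater analogy can be made quantitative: with angular momentum L = Iω conserved, the rotation period T is proportional to the moment of inertia I, so ΔT/T = ΔI/I. A minimal sketch, using a purely illustrative value for the fractional change in Earth's moment of inertia:

```python
DAY_SECONDS = 86400.0

def day_length_change(fractional_dI):
    """Change in day length (seconds) for a fractional change dI/I in Earth's
    moment of inertia. Conserving angular momentum L = I*omega makes the
    rotation period T proportional to I, so dT/T = dI/I."""
    return DAY_SECONDS * fractional_dI

# A hypothetical contraction of Earth's mass distribution by 1.5e-11:
print(day_length_change(-1.5e-11))  # about -1.3 microseconds
```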

The Astronomist @ Field of Science

The Astronomist has moved to a new home at Field of Science. It is great to be here with a community of other diverse science blogs which you can browse at the top of the page. A few things have changed with the blog like the look and the URL, but all the changes are for the better. I would also like to take this chance to talk a little about The Astronomist.


The Astronomist is a blog about science, mostly astronomy, but also anything else that is interesting. I write this blog because I am deeply committed to staying curious. I estimate that 99.999% of people make their way through life thinking only of what happens on Earth, but some of us are concerned with what happens above. What matters, though, is that all of us (even those of you who say you don't) feel awe when looking up. Occasionally, when I explain what I do, I am asked why we would study the sky above when we haven't even figured out everything on Earth, but I think if you have to ask, then you aren't asking enough. Through this virtual venue I can share my thoughts about various curiosities and, just as importantly, I hope that I can receive feedback from other wise and curious folks. I would not be surprised, in fact I hope, that blog comments could be better than my original posts, because collectively the community knows far more than I will ever know. I don't always write about what I know, so the best part of my blog post could begin where the blog post ends. Just stay curious.

Astronomist is not a word

There are economists, agronomists, physicists, biologists, and scientists, but where are the astronomists? When I was a child I took a long time to start reading and was diagnosed with dyslexia. I think one of the difficulties I encountered was the inconsistency of language. In a language you can't be certain, without a priori knowledge, whether the patterns you know will be valid for a specific case. For example, when pluralizing certain animals you cannot just add an 's' and be done with the matter; you can have one goose, but there is no such thing as gooses, only geese (that nomenclature isn't so unexpected actually). However, when your geese are just generally around, a large group is a flock; when they are in flight they become a skein; and when they waddle on the ground they become a gaggle. How is anyone supposed to know that without having been told? I know a little of many languages, mostly just enough to get myself in trouble, because I can't predict the special cases of conjugation, spelling, or pronunciation. I think all of us who are learning a new language just take the general rule or pattern we observe and extrapolate to new cases. Thus those who are learning English may take science and make the word scientists, but when they take astronomy and make astronomists they are punished for their clever deduction! Thus I propose that astronomist is as good a word as any.
Image by raul-ruiz.

Sun Dog

The Solar Dynamics Observatory launched on February 11 on an Atlas V rocket. It is going to do great science. The launch was fantastic because the rocket's supersonic shock wave was visible in a sun dog!


The next perihelion of Halley's Comet will be 28 July 2061. The comet has a storied history, particularly its visit in 1066:

In 1066, the comet was seen in England and thought to be an omen: later that year Harold II of England died at the Battle of Hastings; it was a bad omen for Harold, but a good omen for the man who defeated him, William the Conqueror. The comet is represented on the Bayeux Tapestry as a fiery star, and the accounts that have been preserved represent it as having appeared to be four times the size of Venus, and to have shone with a light equal to a quarter of that of the Moon. Halley came within 0.10 AU of the Earth. This appearance of the comet is also noted in the Anglo-Saxon Chronicle. Eilmer of Malmesbury may have seen it in 989, as he wrote of the comet in 1066: "You've come, have you?…You've come, you source of tears to many mothers, you evil. I hate you! It is long since I saw you; but as I see you now you are much more terrible, for I see you brandishing the downfall of my country. I hate you!" The Irish Annals of the Four Masters recorded the comet as "A star [that] appeared on the seventh of the Calends of May, on Tuesday after Little Easter, than whose light the brilliance or light of the moon was not greater; and it was visible to all in this manner till the end of four nights afterwards." Chaco Native Americans in New Mexico may have recorded the 1066 apparition in their petroglyphs.
The comet also heralded the birth and death of Mark Twain. Twain said in his autobiography,

I came in with Halley's Comet in 1835. It is coming again next year, and I expect to go out with it. It will be the greatest disappointment of my life if I don't go out with Halley's Comet. The Almighty has said, no doubt: 'Now here are these two unaccountable freaks; they came in together, they must go out together.'
Indeed, he died two days after perihelion.
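The roughly 75-year gap between the 1835 and 1910 apparitions (and between 1986 and 2061) follows directly from Kepler's third law. A sketch, taking Halley's semi-major axis as roughly 17.8 AU:

```python
def orbital_period_years(a_au):
    """Kepler's third law for orbits around the Sun: T[yr]^2 = a[AU]^3."""
    return a_au ** 1.5

# Halley's Comet has a semi-major axis of roughly 17.8 AU:
print(round(orbital_period_years(17.8), 1))  # roughly 75 years
```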

An Upper Limit On Not Knowing What the F*** They're Doing

First, I should say that the Supernova Cosmology Project and others using type Ia supernovae as standard candles are very precise in their work, and I don't seriously doubt their results, as they have been very consistent with other observations. There is, though, one dark shadow looming over all their results: systematic error. Cosmologists use type Ia supernovae as lighthouses in the dark because we can assume that all lighthouses have the same intrinsic luminosity, and therefore any difference in observed luminosity is due solely to distance from us. Thus by observing distant supernovae and recording properties such as luminosity and recession velocity, we can plot velocity versus distance and learn about the expansion of our universe and the cosmological constant. However, we assumed that we knew their intrinsic luminosity, and of course there are always unknown unknowns:
As we know,
There are known knowns.
There are things we know we know.
We also know
There are known unknowns.
That is to say
We know there are some things
We do not know.
But there are also unknown unknowns,
The ones we don’t know
We don’t know.

—Donald Rumsfeld, Feb. 12, 2002, Department of Defense news briefing
Today I read two things online that I really enjoyed, and I realized that they are actually very connected. On The Blog of Steve Shwartz I read that No One Knows What the F*** They're Doing (or "The 3 Types of Knowledge") and couldn't agree more (for example, I certainly don't know what I am doing). And in Nature I read about An upper limit on the contribution of accreting white dwarfs to the type Ia supernova rate (the arXiv preprint is here), which raised questions about possible systematics in the use of supernovae in cosmology. The abstract from the Nature article:
There is wide agreement that type Ia supernovae (used as standard candles for cosmology) are associated with the thermonuclear explosions of white dwarf stars. The nuclear runaway that leads to the explosion could start in a white dwarf gradually accumulating matter from a companion star until it reaches the Chandrasekhar limit, or could be triggered by the merger of two white dwarfs in a compact binary system. The X-ray signatures of these two possible paths are very different. Whereas no strong electromagnetic emission is expected in the merger scenario until shortly before the supernova, the white dwarf accreting material from the normal star becomes a source of copious X-rays for about 10⁷ years before the explosion. This offers a means of determining which path dominates. Here we report that the observed X-ray flux from six nearby elliptical galaxies and galaxy bulges is a factor of ~30–50 less than predicted in the accretion scenario, based upon an estimate of the supernova rate from their K-band luminosities. We conclude that no more than about five per cent of type Ia supernovae in early-type galaxies can be produced by white dwarfs in accreting binary systems, unless their progenitors are much younger than the bulk of the stellar population in these galaxies, or explosions of sub-Chandrasekhar white dwarfs make a significant contribution to the supernova rate.
So, what the researchers found using Chandra data is observational evidence that type Ia supernovae are not simply explosions of Chandrasekhar mass white dwarfs, which would have been the simple case. The 'classic' picture is that when the amount of material accreted onto a white dwarf exceeds the Chandrasekhar mass the dwarf explodes:
The new Chandra results indicate that some type Ia supernovae probably originate from the merger of white dwarf binaries. The merger occurs because the stars radiate away gravitational waves and inevitably spiral closer. The result is an explosion of two stars with a combined mass near the Chandrasekhar mass, so the observed luminosity may not be so standard:
There is at least one caveat to the results and the explanation given above. The Chandra observations were focused on elliptical galaxies and on the center of one spiral galaxy because these areas have minimal amounts of gas and dust, which would block X-rays from reaching the detectors. To summarize the results: the dominant mechanism for type Ia supernovae in the elliptical, early-type galaxies Chandra observed is white dwarf mergers, not mass accretion. The take-away point is that cosmologists need to take galaxy type into account when using supernovae as standard candles, because elliptical and spiral galaxies have different supernova progenitors; the supernova cosmology surveys have only used a small fraction of supernovae from elliptical galaxies though, so it won't really change current results! So all that worry to discover nothing so troubling, but perhaps we gain assurance that soon even more distant standard candles can be trusted (like the GRB as a standard candle), despite the fact that we can never place anything more than an upper limit on unknown unknowns.
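The lighthouse logic above is just the distance modulus, m − M = 5 log10(d/10 pc). A sketch of how a systematic error in the assumed intrinsic luminosity propagates into every inferred distance; the 0.2 mag offset is hypothetical, and M ≈ −19.3 is the commonly quoted peak absolute magnitude of a type Ia:

```python
M_IA = -19.3  # approximate canonical peak absolute magnitude of a type Ia

def luminosity_distance_pc(m_apparent, M_absolute=M_IA):
    """Invert the distance modulus m - M = 5*log10(d / 10 pc) for d in parsecs."""
    return 10 ** ((m_apparent - M_absolute + 5) / 5)

# If a population of supernovae is intrinsically 0.2 mag fainter than assumed,
# every inferred distance is biased high by the same multiplicative factor:
d_assumed = luminosity_distance_pc(24.0)
d_true = luminosity_distance_pc(24.0, M_IA + 0.2)  # fainter => actually closer
print(d_assumed / d_true)  # 10**(0.2/5), roughly a 10% distance bias
```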

Marat Gilfanov & Akos Bogdan (2010). An upper limit on the contribution of accreting white dwarfs to the type Ia supernova rate. Nature, 18 February 2010, Vol. 463, p. 924. arXiv: 1002.3359v1

Imagine the next 60 years

Futurism is an endeavor fraught with speculation. This is an inescapable fact. Many people, myself included, like to think about the future and wild things like space travel or exotic spaceships, but this thinking is too often aimed at tangible objects and fiction. What we need is a framework for thinking about the future that involves the most important factor: people. Consider the future in terms of the human development index versus planetary health. Consider how the future might be and what we want the future to be like. This is a refreshing approach to futurism because, rather than an obsession with a singular aspect of the future, like the singularity, we are encouraged to make a plan for what to do with the wild technologies we may or may not obtain. Here is Dr. Chris Luebkeman with four predictions for Earth's future.


Imagine the world was just created a moment ago and will be gone the next
This is a philosophy experiment. The idea is not as absurd as it sounds (similarly, physicists are perplexed by the Boltzmann brain paradox, and you should be too). You will not need any materials to perform it other than your brain. Simply consider what it would mean if the universe was created just an instant ago, and will vanish again the next instant.

The Cosmos isn't strange, people are strange

The cosmos isn't strange, people are strange. The universe on the largest of scales is actually simple compared to the complexities of the human mind or even the weather. In a statistical sense all current observations indicate that the universe is homogeneous and isotropic everywhere. The best evidence for this statement is the cosmic microwave background (CMB) radiation, which is light from the big bang that has traveled unimpeded through the universe since recombination. A simple and consistent model for the universe is that just after the big bang an inflationary field with quantum fluctuations rapidly expanded. These fluctuations seeded the CMB with a Gaussian random field of temperature perturbations. The seventh-year Wilkinson Microwave Anisotropy Probe (WMAP) data are consistent with an inflationary ΛCDM model that specifies just six parameters (see Larson et al. 2010): the baryon density Ω_b, the cold dark matter density Ω_c, a cosmological constant Ω_Λ, a spectral index of scalar fluctuations n_s, the optical depth to reionization τ, and the scalar fluctuation amplitude Δ²_R. These results are not new. They are further refinements of previous WMAP data, which have all been consistent with the ΛCDM model, showing that the universe is flat, with a nearly (but not exactly) scale-invariant fluctuation spectrum seeded by quantum fluctuations during inflation, with Gaussian random phases, and with statistical isotropy over the entire sky. When WMAP data are combined with additional cosmological data, the ΛCDM model remains robust, and stronger constraints are placed on the allowed parameters.

However, if you keep looking closer you can find surprises in the data. The human mind is adept at recognizing patterns, so a visual inspection of the WMAP image is always a good idea. You can find many statistically unlikely events in the WMAP sky map, but because the human mind is naturally a poor estimator of probabilities, strange observations more often mark the strange patterns of human thought than fundamental inconsistencies of the cosmos. Asking exactly what is hidden in the cosmic microwave background is like asking what you see in the clouds, but some claim to see the secret masters of the universe at work. In fact you can see Stephen Hawking's initials in the WMAP image!

This seems like an outrageous claim so the first thing I did when I heard this was to look at my desktop, which of course is an image of the WMAP sky, and indeed this is no ruse. All I have done in the above image is to outline what was already present; to corroborate this I encourage you to observe the original images of the microwave sky from the WMAP collaboration.

There are many other strange occurrences in the WMAP sky which are not so easily observed by the casual observer or with the human eye. However, when you go bowling with the CMB, even statistics can lead you astray. You have to ask yourself: is physics cognitively biased? Some striking visual anomalies that cosmologists have pointed out include the extremely large cold spot at the center, the four blue ridges in the lower hemisphere, the 'SH' initials, etc. Despite these observations the standard cosmological interpretation (see Komatsu et al. 2010) is, not surprisingly, standard. There are no anomalies to account for, but many researchers are searching for them. The situation is similar to particle physics: if the LHC finds the Higgs boson at the expected energy range then the standard model is validated, but if something unexpected is found, new physics, answers to open questions, or a new direction for discovery beyond the standard model may be opened (indeed, some find the prospect of merely finding the Higgs boson a disappointment). For example, if there is statistically significant support for a hemispherical or dipole power asymmetry across the sky, this could point to evidence for a unique inflaton field and, out on a limb here, evidence for physics beyond our universe. The real question is whether or not there are anomalies in the data which are significant. In Bennett et al. 2010 the prospect of CMB anomalies is seriously addressed. An excerpt from the abstract:
In this paper we examine potential anomalies and present analyses and assessments of their significance. In most cases we find that claimed anomalies depend on posterior selection of some aspect or subset of the data. Compared with sky simulations based on the best fit model, one can select for low probability features of the WMAP data. Low probability features are expected, but it is not usually straightforward to determine whether any particular low probability feature is the result of the a posteriori selection or non-standard cosmology. Hypothesis testing could, of course, always reveal an alternate model that is statistically favored, but there is currently no model that is more compelling. We find that two cold spots on the map are normal CMB fluctuations. We also find that the amplitude of the quadrupole is well within the expected 95% confidence range and therefore is not anomalously low. We find no significant anomaly with a lack of large angular scale CMB power for the best-fit ΛCDM model. We examine in detail the properties of the power spectrum data with respect to the ΛCDM model and find no significant anomalies. The quadrupole and octupole components of the CMB sky are remarkably aligned, but we find that this is not due to any single map feature; it results from the statistical combination of the full sky anisotropy pattern. It may be due, in part, to chance alignments between the primary and secondary anisotropy, but this only shifts the coincidence from within the last scattering surface to between it and the local matter density distribution. This alignment has been known for years and yet no theory has replaced ΛCDM as more compelling. We examine claims of a hemispherical or dipole power asymmetry across the sky and find that the evidence for these claims is not statistically significant. We confirm the claim of a strong quadrupolar power asymmetry effect, but there is considerable evidence that the effect is not cosmological. The likely explanation is an insufficient handling of beam asymmetries. We conclude that there is no compelling evidence for deviations from the ΛCDM model, which is generally an acceptable statistical fit to WMAP and other cosmological data.
In case no one ever told you, the answer to any question asked in a paper's title is no. So if you make an arbitrary decision about how to run the statistical analysis of your data (something as trivial as a tuned bin size in a histogram), this imposes a posterior selection on the data which will likely affect your conclusion. In order to draw conclusions about such a complicated observation (I began by saying this was all simple, and I maintain that, but the instrument and detectors taking the observations are not simple) you must run Monte Carlo simulations to determine the expected deviations from your model. So, in conclusion, cosmic-variance-limited data will necessarily show probabilistically unlikely events, and the cosmos isn't strange, people are strange. This certainly isn't the end of probing the CMB, though, with the Planck mission currently flying and CMB polarization just being explored. We still want to know how closely nature matches our theory: exactly or not exactly? And I will give you a hint here, you can always measure again, so if you have to ask...
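The a posteriori selection problem can be demonstrated with a toy Monte Carlo: scan enough independent pixels of pure Gaussian noise and a "3-sigma anomaly" becomes all but guaranteed. A sketch (the map size and trial counts are arbitrary):

```python
import random

random.seed(42)

N_PIXELS = 3000   # toy "sky map" of independent standard Gaussian pixels
N_SIMS = 200      # Monte Carlo realizations of the null (no-anomaly) model

def most_extreme(n):
    """Largest |value| among n standard Gaussian draws."""
    return max(abs(random.gauss(0, 1)) for _ in range(n))

# A 3-sigma pixel looks "anomalous" in isolation (p ~ 0.003), yet nearly
# every simulated map contains one once you scan all pixels a posteriori:
maps_with_3sigma = sum(most_extreme(N_PIXELS) > 3.0 for _ in range(N_SIMS))
print(maps_with_3sigma / N_SIMS)  # close to 1: such "anomalies" are expected
```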

C. L. Bennett, R. S. Hill, G. Hinshaw, D. Larson, K. M. Smith, J. Dunkley, B. Gold, M. Halpern, N. Jarosik, A. Kogut, E. Komatsu, M. Limon, S. S. Meyer, M. R. Nolta, N. Odegard, L. Page, D. N. Spergel, G. S. Tucker, J. L. Weiland, E. Wollack, & E. L. Wright (2010). Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Are There Cosmic Microwave Background Anomalies? ApJ. arXiv: 1001.4758v1

The Cosmological Constant and the Dark Sector

What is the phenomenology of the dark sector? That is my question. The dark sector refers to dark energy and dark matter, two distinct phenomena which seem to have no direct connection other than in name. In this post I am going to talk about the cosmological constant and dark energy, and look at some landmark literature on the subject. I am going to show the origin of the 10¹²⁰ order-of-magnitude discrepancy between the quantum field theory prediction and cosmological observation. I am going to outline the physicist's theoretical case and the astronomer's observational case, and we will see how deceiving the cosmos can be.

Dark energy is a form of energy attributed to the nature of empty space which increases the rate of expansion of the universe; that is, if you observe a distant galaxy, not only is it moving away from you, but the rate at which it recedes is accelerating. Over the last few decades a wide range of observations have corroborated a model of the universe wherein a majority of the energy content is attributed to the dark sector. The current consensus is that a dark energy component representing about 2/3 of the entire energy content of the universe explains the observed cosmic acceleration. Dark energy can lead to other strange phenomena such as repulsive gravity and, ultimately, a universe that tears itself apart.
The composition of the cosmos.
The classic and simplest explanation for dark energy is the cosmological constant. The cosmological constant was originally introduced by Einstein as a term in his gravitational field equations in order to allow a steady-state, non-empty universe solution to his equations. The cosmological constant introduces a non-zero vacuum energy into the universe. This vacuum energy acts as a negative pressure (conversely, a negative vacuum energy would result in a positive pressure), and this vacuum energy is known as dark energy. The idea of a vacuum containing energy is very much expected by physicists; it is the observed value of the vacuum energy that is surprising, as we will see. The cosmological constant represents the particularly simple case of constant vacuum energy and is denoted by the Greek character lambda (Λ). A seminal paper (also see Weinberg 1989, or for more recent general reviews see Carroll 2000, Frieman et al. 2008, and Peebles & Ratra 2003) on the topic was published in 1992 by Carroll, Press & Turner. The abstract:
The cosmological constant problem is examined in the context of both astronomy and physics. Effects of a nonzero cosmological constant are discussed with reference to expansion dynamics, the age of the universe, distance measures, comoving density of objects, growth of linear perturbations, and gravitational lens probabilities. The observational status of the cosmological constant is reviewed, with attention given to the existence of high-redshift objects, age derivation from globular clusters and cosmic nuclear data, dynamical tests of ΩΛ, quasar absorption line statistics, gravitational lensing, and astrophysics of distant objects. Finally, possible solutions to the physicist's cosmological constant problem are examined.
Roughly following Carroll et al. (1992), I will explore the origin of the cosmological constant and the question of why the observed vacuum energy is so small in comparison to the scales predicted by fundamental physics. We start with the Friedmann equation, derived from Einstein's field equations. It relates the Hubble parameter, H, to the scale factor, a, and other basic quantities.
H^2 = \left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho_M + \frac{\Lambda}{3} - \frac{k}{a^2}
where the dot denotes a time derivative, G is the gravitational constant, ρ_M is the cosmological density of matter, and k is the curvature parameter, which takes the values -1, 0, and +1 corresponding to open (hyperbolic), flat, and closed (spherical) universe geometries respectively. The Friedmann equation can be viewed in terms of the contributions from matter (ρ_M), curvature (k), and vacuum energy (Λ). It is customary to parametrize these quantities in terms of their fractional value at the current epoch, that is, today; we denote current values with a subscript 0. For example, the current value of the Hubble constant is H_0 ≈ 70 km/s/Mpc.
In total, then, we have reduced the problem to the statement that Ω_M + Ω_k + Ω_Λ = 1 for consistency with the Friedmann equation. The astronomer's cosmological constant problem is whether a nonzero Ω_Λ is required to achieve consistency.
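The parametrized Friedmann equation can be sketched directly: H(z) = H_0 √(Ω_M(1+z)³ + Ω_k(1+z)² + Ω_Λ), with the density parameters summing to one. The Ω values below are illustrative, chosen close to the concordance budget:

```python
import math

H0 = 70.0  # km/s/Mpc, roughly the current value quoted above

def hubble(z, omega_m=0.27, omega_lambda=0.73, omega_k=0.0):
    """H(z) from the Friedmann equation, written in terms of the present-day
    density parameters; they must sum to 1 (the flatness consistency check)."""
    assert abs(omega_m + omega_lambda + omega_k - 1.0) < 1e-9
    return H0 * math.sqrt(omega_m * (1 + z) ** 3
                          + omega_k * (1 + z) ** 2
                          + omega_lambda)

print(hubble(0.0))  # recovers H0 = 70 km/s/Mpc today
print(hubble(1.0))  # expansion was faster at z = 1 in this model
```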

The physicist's cosmological constant problem begins with the statement that virtual states are present in the vacuum due to the Heisenberg uncertainty principle. For example, consider a relativistic field as a collection of harmonic oscillators of all possible frequencies, ω. The vacuum has a zero-point energy E_0 (for a scalar field φ of spinless bosons with mass m) which is the sum of contributions over all possible modes of the field, i.e. over all wave vectors k.
E_0 = \sum_{\mathbf{k}} \tfrac{1}{2}\hbar\omega_k
We perform the sum by considering the system in a box of side length L and letting L tend to ∞. Periodic boundary conditions imply λ_j = L/n_j for some integer n_j, with wave vector k_j = 2π/λ_j = 2πn_j/L. In the range (k_j, k_j + dk_j) there are L dk_j/(2π) discrete values of k_j, such that the sum becomes the integral
E_0 = \frac{L^3}{(2\pi)^3} \int \tfrac{1}{2}\hbar\omega_k \, d^3k .
The energy density of the vacuum is simply this ground-state energy divided by the volume, L³. To properly obtain an answer we must use ω_k² = k² + m²/ħ² and, most importantly, impose a cutoff at a maximum wave vector k_max » m/ħ. The result is then
\rho_{\mathrm{vac}} = \frac{E_0}{L^3} \approx \frac{\hbar\, k_{\mathrm{max}}^4}{16\pi^2}
The vacuum energy density approaches infinity as k_max approaches infinity (the physicist will recall the ultraviolet catastrophe), therefore it is expected that there is some cutoff value near the Planck energy (E_Planck is about 10¹⁶ ergs). The logical choice is then k_max = E_Planck/ħ. The resultant prediction for the vacuum zero-point energy density is ρ_vac ≈ 10⁷⁴ GeV⁴ ħ⁻³ ≈ 10⁹² g/cm³ (the net cosmological constant is more nuanced, in that it can be viewed as the sum of a number of disparate contributions including potential energies from scalar fields, zero-point fluctuations, as well as a pure cosmological constant, so only the dominant term has been addressed here). However, observational cosmology has constrained ρ_obs ≈ 10⁻⁴⁷ GeV⁴ ħ⁻³ ≈ 10⁻²⁹ g/cm³. The difference between the predicted and observed values is 120 orders of magnitude. This discrepancy is incomprehensibly large and can only be described as an EPIC FAIL. However, it may be misleading to characterize the discrepancy this way, since energy density can be expressed as a mass scale to the fourth power: writing ρ_vac = M_vac⁴, we find the difference in mass scale is only a factor of ~10³⁰. The theoretical predictions of quantum field theory have been sound in predicting vacuum effects such as the Casimir force, so there is no a priori reason to doubt predictions in the cosmological realm. The unsolved problem in physics is why the vacuum energy doesn't produce a very large cosmological constant.
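The order-of-magnitude bookkeeping above is easy to verify numerically. A sketch in natural units (ħ = c = 1), using the Planck energy as the cutoff and the round-number observed density quoted above:

```python
import math

E_PLANCK_GEV = 1.22e19   # Planck energy, used as the cutoff k_max (natural units)
RHO_OBS_GEV4 = 1e-47     # observed vacuum energy density, ~10^-47 GeV^4

# Zero-point energy density with a hard cutoff: rho ~ k_max^4 / (16 pi^2)
rho_qft = E_PLANCK_GEV ** 4 / (16 * math.pi ** 2)

print(math.log10(rho_qft))                 # ~74: the predicted 10^74 GeV^4
print(math.log10(rho_qft / RHO_OBS_GEV4))  # ~121: the famous ~120-order gap
# Comparing mass scales instead, (rho_qft/rho_obs)^(1/4) shrinks it to ~10^30:
print(math.log10((rho_qft / RHO_OBS_GEV4) ** 0.25))
```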

Sean Carroll recently posted on the Cosmic Variance blog a list of 24 Questions for Elementary Physics for the next 100 years. Number six is: what is the phenomenology of the dark sector? Indeed, while current observations have demonstrated that the expansion of the universe is accelerating, questions about its exact nature are among the most challenging problems in physics. The cosmological constant is only one form of dark energy; what cosmologists really want to know is the dark energy equation of state.

Observational support for accelerated expansion is strong, coming from type Ia supernovae, standard rulers, the cosmic microwave background, gravitational lensing, etc. (the interested reader is directed to Percival et al. 2009 for results using the baryon acoustic oscillation standard-ruler technique and Riess et al. 2004 for the type Ia supernova technique), but some methods could be biased by unknown systematics. The baryon acoustic oscillation technique is very promising; the introduction to Percival et al. 2009 explains:
Distinguishing between competing theories will only be achieved with precise measurements of the cosmic expansion history and the growth of structure within it. Among current measurement techniques for the cosmic expansion, Baryon Acoustic Oscillations (BAO) appear to have the lowest level of systematic uncertainty (Albrecht et al. 2006).
The strategies for distinguishing between a cosmological constant and other forms of dark energy all revolve around precision astrophysics measurements. Measuring the dark energy equation of state will provide a check on fundamental physics and general relativity; however, it is of note that, as Peebles & Ratra 2003 state:
the empirical basis [for dark energy] is not nearly as strong as it is for the standard model for particle physics: in cosmology it is not yet a matter of measuring the parameters in a well-established theory.
The gravity of that statement is that particle physics is right and observational cosmology is still just grasping in the dark. I am not a particle physicist, but to me the standard model does seem like a prediction machine, and observational cosmology is just beginning to hold its own. The resolution of the dark sector may come from strange new physics like modified gravity, brane worlds, and so on (for discussions of various solutions see Carroll et al. 2005 for the cosmology of generalized modified gravity models, Deffayet 2002 for brane worlds, or Ishak et al. 2006 on measuring the cosmological equation of state). So it is possible that the accelerated expansion of the universe is an illusion of our position in the universe, a misunderstanding of fundamental physics, or an unsatisfying tautology. No matter what it is, though, everyone agrees that we need more data. That is where I come in. Consider, for example, that if we live in a non-homogeneous region of the universe then the solutions to Einstein's equations of general relativity would not result in the standard Friedmann-Lemaître-Robertson-Walker metric from which we obtained the Friedmann equation. Hence our entire model would be wrong. Soon, much more powerful precision astrophysics experiments (like BOSS) will give us the observational data we need to achieve precision cosmography. I think these statements from Célérier 2009 sum up the fundamental issue:
It is commonly stated that we have entered the era of precision cosmology in which a number of important observations have reached a degree of precision, and a level of agreement with theory, that is comparable with many Earth-based physics experiments. One of the consequences is the need to examine at what point our usual, well-worn assumption of homogeneity associated to the use of perturbation theory begins to compromise the accuracy of our models. It is now a widely accepted fact that the effect of the inhomogeneities observed in the Universe cannot be ignored when one wants to construct an accurate cosmological model. Well-established physics can explain several of the observed phenomena without introducing highly speculative elements, like dark matter, dark energy, exponential expansion at densities never attained in any experiment (i.e. inflation), and the like.
In conclusion, we can only say that the cosmological constant or dark energy may be an unexpected component of our universe. Currently, the best observational constraints on dark energy come from the cosmic microwave background (CMB) and type Ia supernova measurements, and the data are largely consistent. We must continue to gather data in order to quantify the large-scale structure of our universe, its geometry, and its energy budget. Ultimately, in determining the energy budget of the universe, it helps to have a monetary budget here on Earth. The NSF Dark Energy Task Force has taken an interest in this fundamental question, so soon we may know: what is the phenomenology of the dark sector?

Carroll, Sean M., Press, William H., & Turner, Edwin L. (1992). The cosmological constant ARA&A, 30, 499-542

Marie-Noëlle Célérier (2009). Inhomogeneities in the Universe with exact solutions of General Relativity. Invisible Universe International Conference arXiv: 0911.2597v1

Peebles, P., & Ratra, B. (2003). The cosmological constant and dark energy Reviews of Modern Physics, 75 (2), 559-606 DOI: 10.1103/RevModPhys.75.559

Will J. Percival, Beth A. Reid, Daniel J. Eisenstein, Neta A. Bahcall, Tamas Budavari, Joshua A. Frieman, Masataka Fukugita, James E. Gunn, Zeljko Ivezic, Gillian R. Knapp, Richard G. Kron, Jon Loveday, Robert H. Lupton, Timothy A. McKay, Avery Meiksin, Robert C. Nichol, Adrian C. Pope, David J. Schlegel, Donald P. Schneider, David N. Spergel, Chris Stoughton, Michael A. Strauss, Alexander S. Szalay, Max Tegmark, Michael S. Vogeley, David H. Weinberg, Donald G. York, & Idit Zehavi (2009). Baryon Acoustic Oscillations in the Sloan Digital Sky Survey Data Release 7 Galaxy Sample MNRAS arXiv: 0907.1660v3

Einstein’s blackboard

Einstein’s blackboard as used in a lecture in Oxford on 16 May 1931:
At that time Einstein’s theories of relativity were being combined with astronomical data to explain the shifts towards the red in the spectra of distant galaxies, which indicated that the universe was expanding. In his lecture Einstein outlined a fairly simple model to explain this apparent expansion. In the first line on the blackboard, D, the measure of expansion in the universe, is defined in terms of the expansion factor P. The expression for the density of matter in the universe, given by ρ in the third line, is derived from the field equations. The last four lines contain numerical data, giving values for density, radius and age of the universe, where ‘L. J’ stands for ‘Licht Jahr’ (light year) and ‘J’ for ‘Jahr’ (year). According to the last line, the age of the universe is about 10, or perhaps 100 billion years (the bracket indicates an alternative figure, not a product of two figures).
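The age figures on the blackboard come from arithmetic of roughly this kind: the Hubble time 1/H₀ sets the characteristic age scale of an expanding universe. A back-of-envelope sketch, with illustrative H₀ values (around 500 km/s/Mpc was typical of early-1930s measurements; around 70 km/s/Mpc is the modern figure):

```python
# Hubble-time estimate t ~ 1/H0, in billions of years (Gyr).
# The two H0 values below are illustrative, not taken from the blackboard.
KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
SEC_PER_YEAR = 3.156e7   # seconds in one year

def hubble_time_gyr(h0_km_s_mpc):
    """Rough age estimate 1/H0, converted to Gyr."""
    seconds = KM_PER_MPC / h0_km_s_mpc
    return seconds / SEC_PER_YEAR / 1e9

print(hubble_time_gyr(500.0))  # early-1930s H0 -> roughly 2 Gyr
print(hubble_time_gyr(70.0))   # modern H0 -> roughly 14 Gyr
```

The large H₀ of the era is why early expansion-age estimates came out uncomfortably small, and it shows how sensitive the blackboard's bottom line is to the measured expansion rate.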
From Bye-bye blackboard... from Einstein and others