Thursday, December 31, 2009

Most viewed posts

2009 is the year I started blogging and I thought it would be interesting to see which posts on this blog have attracted the most viewers, partly because I sometimes find the results of such an analysis surprising. Thanks to Google Analytics this is easy to find out. The top 10 posts (and number of views) are below:
I found the results really encouraging: both the overall volume of traffic and the fact that the posts I think were important and/or original and high in scientific content [except for 8.] attracted the most attention. I was surprised that some of the career advice and "better science" posts did not attract more attention, since they potentially have a much wider readership.

So I will keep going in 2010....
Best wishes for the New Year!

Wednesday, December 30, 2009

Emergence in economics

Paul Krugman was awarded the Nobel Prize in Economics in 2008. He is also a New York Times columnist. Why mention him here? He is mentioned in Steven Johnson's book Emergence (p. 89-91) for a mathematical model which describes how spatially segregated business centres emerge. This is described in Krugman's book, The Self-Organizing Economy.

Here are Krugman's "rules for research", taken from a talk, "How I work"

1. Listen to the Gentiles

2. Question the question

3. Dare to be silly

4. Simplify, simplify

1. means "Pay attention to what intelligent people are saying, even if they do not have your customs or speak your analytical language."

Emergence is universal

Last Christmas holidays I started reading Emergence: The Connected Lives of Ants, Brains, Cities, and Software, by Steven Johnson.
But the holidays ended.... and so I just got back into it.
It is a really nice book and he is a very gifted writer. I also find it helpful and fascinating because although it is about emergence it virtually never mentions physics or chemistry! Johnson's background is in software but he does a really nice job connecting emergence in software (e.g. genetic algorithms and SimCity) with emergence in ant colonies, slime moulds, formation of neighbourhoods in cities, ....

Johnson says (p. 77) there are five fundamental principles to follow "if you're building a system where macro-intelligence and adaptability derive from local knowledge"
  • More is different
  • Ignorance is useful
  • Encourage random encounters
  • Look for patterns in the signs
  • Pay attention to your neighbours

Monday, December 28, 2009

I am in more than one mind about this...

Howard Gardner is a Professor of Education and Psychology at Harvard. He is best known for developing and promoting the concept of Multiple Intelligences.
His latest book, Five Minds for the Future, defines five specific cognitive abilities that he claims will be sought and cultivated by leaders. Here is my rough paraphrase of each of the minds, as applied to scientific research.
  • The Disciplinary Mind: You need to master a specific discipline or research area. This takes about ten years.
  • The Synthesizing Mind: You need to learn to integrate ideas from different disciplines into a coherent whole and to communicate that integration to others.
  • The Creating Mind: You need to develop the capacity to uncover and clarify new problems, questions and phenomena.
  • The Respectful Mind: You need to be aware of and appreciate different approaches and values within your discipline and between disciplines.
  • The Ethical Mind: You need to fulfill your responsibilities as a worker within your institution, your discipline, and as a citizen.

Thursday, December 24, 2009

Embrace failure?

Seth Olsen alerted me to a thought-provoking article in Wired about the importance of failure in science.

Wednesday, December 23, 2009

Think before you calculate, measure, fabricate,....

What will be your New Year's resolution for your research in 2010?

Write more papers? Get more students? Apply for more grants? Get more lab space? Get a paper published in Nature or Science? Learn a new technique?

I think mine is going to be:

Spend the first half hour of each day thinking and writing in a notebook about the important science questions I am interested in and want to try and answer. And, specifically coming up with multiple alternative hypotheses and devising ways to distinguish them.

Where did this come from?

Previously I wrote a post about a beautiful Science paper about Strong Inference by John R. Platt. He references a book he published in 1962, The Excitement of Science. Unfortunately, it is out of print and only two universities in Australia have a copy in their libraries. I got a copy on interlibrary loan and just finished reading it. I have scanned a copy of chapter 7 and chapter 8, which I found the most helpful and challenging.

Monday, December 21, 2009

Electronically driven hula dancers

Over the past 25 years Robert Liu [from the University of Hawaii!] has emphasized that the
"hula twist" is ubiquitous in photoisomerisation reactions [where after a molecule absorbs a photon it undergoes a structural change]; this is beautifully summarised in the figure below.

This short review by Liu and Hammond [whose address is Aloha, Oregon!] documents this and also argues that the hula twist is driven by steric considerations because it is "volume conserving".

However, it turns out that this geometrical change can be preferred purely by an electronic mechanism and does not require steric hindrance. Seth Olsen and I show this [amongst many other results, some of which I have discussed in a previous post] in a paper just published in Journal of Chemical Physics.

We refer to the "hula twist" as disrotatory motion and discuss it briefly on page 12 of our paper.

Saturday, December 19, 2009

Trust, but verify

Earlier this year a Nature paper reported the data below [black squares with error bars] for the spectrum of high-energy cosmic-ray electrons. The peak was interpreted as evidence for 500 GeV particles (dark matter) predicted by generalisations of the standard model that include extra dimensions.

However, more recent data [shown as red points] from NASA's Fermi Gamma-ray space telescope was recently published in PRL. The data has much better statistics and shows no peak. More details can be found here.

This is a good cautionary tale. There is no substitute for two or more independent experiments to test a hypothesis.

I don't think Ronald Reagan was a good U.S. president, but his signature phrase "Trust, but verify" has merits.

Thursday, December 17, 2009

Talks that go pear shaped....

Every now and then you go to a seminar which goes horribly wrong for the speaker. Someone asks a question that the speaker answers poorly or cannot answer. Then other people start asking questions or offering critical comments and it gets worse.....

Why does this happen? How can the speaker prevent it?

I think it may be because the speaker violates the important principle:

Never offer indefensible ground.

I.e., do not make claims that you cannot back up.

Speakers will sometimes make claims that are not necessary for the actual talk but will irritate members of the audience, particularly senior people. I think students who "parrot" lines they have learnt from their advisor to justify their work are especially prone to this.

A random set of sample claims which you may hear variants of include:

-our results will allow the design of new materials
-silicon based electronics has no future
-density functional theory cannot describe electronic correlation effects
-molecular dynamics simulations of biomolecules tell us nothing
-the Hubbard model oversimplifies the true Hamiltonian
-BEC's allow us to tune parameters in a manner that is not possible in traditional condensed matter systems
-quantum information processing is going to revolutionise computing
-our theory agrees with all the experimental results
-everyone else's theory is wrong

So if you don't want your talk to go pear shaped, don't claim anything you won't be able to defend.

Wednesday, December 16, 2009

How big a Hilbert space do you need?

How big a Hilbert space do I need to describe the electronic properties of a molecule?
Specifically, suppose in the molecule there are N valence electrons.
One must then decide how many spatial orbitals and how many Slater determinants are required. This issue is brought out in this review paper. Benzene is an illuminating case. McWeeny shows that a "brute force" approach based on molecular orbital theory requires hundreds and sometimes thousands of Slater determinants to obtain results of accuracy comparable to that obtained with a valence bond description. The latter uses just six localised orbitals and two Slater determinants (corresponding to the two Kekulé structures).
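The combinatorics behind this growth can be sketched in a few lines of Python (an illustration of mine, not a calculation from the review): for N electrons in M spatial orbitals with S_z = 0, one chooses orbitals for the spin-up and spin-down electrons independently.

```python
from math import comb

def fci_dimension(n_orbitals, n_electrons):
    """Number of Slater determinants with S_z = 0: choose orbitals for
    n/2 spin-up and n/2 spin-down electrons independently."""
    n_up = n_electrons // 2
    return comb(n_orbitals, n_up) ** 2

# Benzene pi system: 6 electrons in a minimal basis of 6 pi orbitals
print(fci_dimension(6, 6))   # 400

# Double the orbital space and the count explodes
print(fci_dimension(12, 6))  # 48400

# compared with just 2 determinants (the Kekule structures)
# in the valence bond description
```

Even a modestly enlarged basis quickly produces the hundreds to thousands of determinants mentioned above, while the valence bond count stays fixed at two.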

Why does this matter? First, chemical insight favours the valence bond description over the "black box" approach embodied in molecular orbital theory. The issues are described nicely in this Nature paper, which emphasises that the delocalised molecular orbitals are physically misleading. Second, the simpler description may also be more computationally efficient.

A key question is whether one can codify these issues in a systematic way (perhaps with ideas from quantum information theory) to develop quantitative criteria to decide what is the minimal number of orbitals and Slater determinants.

Thanks to Anthony Jacko for providing the cartoon.

Sunday, December 13, 2009

When business people think they can run a scientific organisation...

Articles from Nature, ScienceInsider, and The Age newspaper summarise recent problems concerning the strained relationship between the board (chaired by a corporate lawyer) and scientists at the Australian Synchrotron. The International scientific advisory committee has resigned in protest.

Saturday, December 12, 2009

Listening to referees

It does not take long in science to get a negative referee report that makes one's blood boil. However, as frustrating (and silly) as some reports are, I think we can gain a lot by reading them carefully and reflecting on why the referee expressed the views they did.

This was brought home to me recently when, within a week, I had two papers rejected outright. As painful as it is to acknowledge, I can now see there is some basis for some of the referees' criticisms. I still claim that in both papers the science was both valid and important. However, I now see that the papers were written in such a way that a quick reading (which I do not begrudge, since I do it too) could frustrate a referee and lead to a negative report. So I am now rewriting both papers. I think the end result will be better papers.

So, try and put your shoes in the referee [wow what a Freudian slip! ]
I mean put yourself in the shoes of the referee and see if you can see what they said and why.

Thursday, December 10, 2009

Networking with Nobel laureates

My family continues to enjoy watching the TV show The Big Bang Theory. In the episode we watched tonight Sheldon unsuccessfully tried to give Nobel Laureate George Smoot (appearing as himself) a copy of his latest paper at a conference.

Non-equilibrium Green's functions also got a mention, probably for the first (and only) time in the history of prime time TV!

Wednesday, December 9, 2009

When does the wavefunction collapse in nuclear collisions?

I had some great discussions today at ANU with Cedric Simenel and David Hinde about decoherence in nuclear collisions. One of the key issues became clearer to me. Suppose a projectile nucleus in its ground state |P> collides with a target nucleus in its ground state |T>. After the collision one observes the projectile to be in state |P> with probability |a|^2 and in state |P*> with probability |b|^2.
Simple scattering theory would say that the state of the whole system is
|Psi> = a |P>|T> + b |P*>|T*>
and the reduced density matrix for P has non-zero off-diagonal terms which only disappear after the measurement is made by the detectors.
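This can be made concrete with a toy numerical sketch of mine (not taken from any paper): give the two target states an overlap parameter s = &lt;T|T*&gt; and obtain the projectile's reduced density matrix by a partial trace.

```python
import numpy as np

def reduced_density_matrix(a, b, s):
    """Reduced density matrix of the projectile for
    |Psi> = a|P>|T> + b|P*>|T*>, where the two target states
    have overlap s = <T|T*> (s = 0 means orthogonal)."""
    P, Pstar = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    T = np.array([1.0, 0.0])
    Tstar = np.array([s, np.sqrt(1.0 - s**2)])
    psi = a * np.kron(P, T) + b * np.kron(Pstar, Tstar)
    rho = np.outer(psi, psi.conj())
    # partial trace over the target degrees of freedom
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

a = b = 1 / np.sqrt(2)
print(abs(reduced_density_matrix(a, b, 0.8)[0, 1]))  # ~0.4: coherence survives
print(abs(reduced_density_matrix(a, b, 0.0)[0, 1]))  # 0.0: orthogonal target states kill it
```

For a target with many internal degrees of freedom the relevant overlap of environment states is generically exponentially small, which is one way of phrasing the suspicion that the collision itself decoheres the superposition.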

However, I suspect that if the nuclei are large enough (i.e., have enough internal degrees of freedom) then the collision itself will decohere the superposition.

So, which is the correct picture? Presumably there is a "quantum-classical" crossover as the nuclei get heavier? Are there smoking gun experiments (e.g., Mott scattering of identical particles) to distinguish the two pictures?

Tuesday, December 8, 2009

Friction in nuclear collisions

Heavy nuclei are complex quantum many-body systems with many degrees of freedom. The observation of deep inelastic heavy-ion collisions (DHIC) in the 1970s led to the notion of friction in nuclear physics. This concept can be used to describe the transfer of energy from the relative motion of the nuclei to the internal degrees of freedom.

I am eager to quantify this friction because it will also enable us to say something about the role of decoherence in nuclear physics.
Today at ANU, Cedric Simenel brought to my attention a recent paper that is relevant.
For several nuclear collisions the position-dependent friction was recently evaluated from the Dissipative Dynamics version of TDHF (Time-Dependent Hartree-Fock) theory.

It was found that the friction increases monotonically with decreasing inter-nuclear separation, reaching a value of about 10^{21} s^{-1} at the barrier. For the 40Ca + 40Ca reaction at a centre-of-mass energy of 100 MeV, about 10 MeV of energy is converted to internal excitation energy as the nuclei move beyond the barrier.
These numbers show that decoherence/friction may be important in tunneling because the dissipation rate is comparable to the barrier frequency.
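A quick unit conversion (my own check, with an illustrative barrier curvature that is not from the paper) makes the comparison explicit: converting the friction coefficient to an energy scale via hbar shows it is a sizeable fraction of a typical barrier frequency.

```python
HBAR_MEV_S = 6.582e-22  # hbar in MeV.s

def rate_to_energy_mev(rate_per_s):
    """Convert a rate/angular frequency in s^-1 to an energy hbar*omega in MeV."""
    return HBAR_MEV_S * rate_per_s

gamma = 1e21  # friction at the barrier (s^-1), the value quoted above
print(rate_to_energy_mev(gamma))  # ~0.66 MeV

# For an assumed barrier curvature hbar*omega_B ~ 4 MeV
# (an illustrative number, not taken from the paper):
omega_B = 4.0 / HBAR_MEV_S
print(gamma / omega_B)  # ~0.16: within an order of magnitude of the barrier frequency
```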

Monday, December 7, 2009

In the wrong place?

This week I am in Canberra at ANU working with nuclear physicists on quantum decoherence in nuclear collisions. Discussions today inspired the previous post.

But, it is ironic that I am quoting Zurek because he is at UQ right now, and giving a seminar tomorrow.

Decoherence and irreversibility

How long does it take for Schrödinger's cat to live or die?

For a quantum system interacting with an environment (containing many degrees of freedom) what causes decoherence? What sets the time scale on which it occurs?

It turns out that decoherence and dissipation are intimately connected. Decoherence arises from fluctuations in the phase of the system quantum state due to its interaction with the environment.

An important calculation was performed by Zurek in the 1980s and is elegantly summarised in an expanded and updated version of his famous 1991 Physics Today article. Consider a free particle which is in a superposition state consisting of two Gaussian wave packets a distance Delta x apart.
Let gamma be the damping rate associated with friction from interaction with the environment. Then the decoherence time (i.e., the time scale on which the off-diagonal parts of the density matrix decay) is given by

tau_D = (1/gamma) (lambda_dB / Delta x)^2

where lambda_dB = hbar / sqrt(2 m k_B T) is the thermal de Broglie wavelength of the particle.
Thus we see that the same physics that causes friction (energy flow from the system to the environment) also causes decoherence. But they occur on completely different timescales.

Note that once Delta x is larger than the de Broglie wavelength [which it must be if we want to think about superpositions of semiclassical states] the decoherence time is much shorter than the energy dissipation time 1/gamma.
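Zurek's estimate gives the ratio of the decoherence time to the relaxation time 1/gamma as (lambda_dB / Delta x)^2, with lambda_dB the thermal de Broglie wavelength. Plugging in his classic example of a 1 g object at room temperature in a superposition separated by 1 cm (the script below is my own) shows just how extreme the separation of timescales is:

```python
import math

HBAR = 1.0546e-34  # J.s
KB = 1.381e-23     # J/K

def decoherence_ratio(mass_kg, temperature_k, delta_x_m):
    """Ratio tau_D / (1/gamma) = (lambda_dB / Delta x)^2, with
    lambda_dB = hbar / sqrt(2 m k_B T) the thermal de Broglie wavelength."""
    lambda_db = HBAR / math.sqrt(2 * mass_kg * KB * temperature_k)
    return (lambda_db / delta_x_m) ** 2

# 1 g object at 300 K, wave packets 1 cm apart
print(decoherence_ratio(1e-3, 300.0, 1e-2))  # ~1e-41
```

So even for a very weakly damped macroscopic object, coherence between the two wave packets is destroyed some forty orders of magnitude faster than the energy is dissipated.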

This connection between decoherence (fluctuations) and dissipation is another manifestation of the fluctuation-dissipation theorem.

Sunday, December 6, 2009

Mental health issues for researchers

In my talk on academic career advice I mentioned the importance of mental health issues, which struck a chord with many. The anecdotal evidence is that this is a significant problem among people in academia at all career stages, but particularly among graduate students.

Academics (on average) tend to be highly gifted, driven, creative, critical, introspective, and sensitive. This makes them more vulnerable to mental illness, particularly depression, than the average person.

Here are just a few prominent examples, at the more extreme end of the spectrum.

John Nash (A Beautiful Mind) was a young faculty member at MIT when he was afflicted with schizophrenia. He never returned to any form of employment. In 1994, he received the Nobel Prize in Economics for foundational work in game theory, completed in his Princeton Ph.D. in mathematics.

Ludwig Boltzmann and Paul Ehrenfest both suffered from depression and committed suicide.

In 2007, the Times Higher Education Guide listed Michel Foucault as the most cited intellectual in the humanities. He suffered from severe depression as an undergraduate. He became famous for Madness and Civilization, an abridged version of Folie et déraison: Histoire de la folie à l'âge classique, which originated in his doctoral thesis.

So, protect your mental health. Here are some questionnaires to evaluate whether you are clinically depressed.

Saturday, December 5, 2009

Quantum limited detectors

Yesterday John Clarke (University of California, Berkeley) gave a nice colloquium at UQ on Applications of SQUIDs (Superconducting Quantum Interference Devices).

Clarke is arguably the "father" of the development of SQUIDs in both science and technology. Many of the people currently leading the development of superconducting qubits were at one time his students or postdocs. Tim Duty is to be thanked for bringing Clarke to UQ to give this fascinating talk.

The two key physical effects on which the SQUID
is based are Josephson tunneling and magnetic flux quantisation.

In a DC SQUID the I-V curve is modulated by magnetic flux, and so the device is basically a flux to voltage transducer with noise that can approach the quantum limit. They can detect magnetic fields as small as a femtotesla!
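The flux periodicity underlying this flux-to-voltage transduction can be sketched with the textbook expression for a symmetric DC SQUID with negligible loop inductance (my own illustration; the junction critical current I_0 is an assumed, illustrative value):

```python
import math

PHI0 = 2.068e-15  # flux quantum h/2e in Wb

def squid_critical_current(flux_wb, i0):
    """Critical current of a symmetric DC SQUID with negligible loop
    inductance: I_c = 2 I_0 |cos(pi * flux / Phi_0)|."""
    return 2 * i0 * abs(math.cos(math.pi * flux_wb / PHI0))

i0 = 5e-6  # per-junction critical current (illustrative)
print(squid_critical_current(0.0, i0))       # maximum (2 I_0) at integer flux quanta
print(squid_critical_current(PHI0 / 2, i0))  # ~0: fully suppressed at half a flux quantum
```

Biasing the SQUID just above its critical current turns this periodic modulation into the flux-dependent voltage that the device reads out.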

Roger Highfield's book, The Science of Harry Potter describes how the sorting hat that Harry used in the first book is based on SQUIDs. [When I told my son that, he said this is not correct because the hat is based on magic and that is the whole point!]

Clarke described two nice projects he has been involved in, building highly sensitive detectors to answer fundamental questions in cosmology.

The first involves the search for dark matter, specifically looking for clusters of galaxies, the largest bound objects in the universe. This uses the Sunyaev-Zel'dovich effect, involving shifts in the Cosmic Microwave Background because CMB photons are scattered by hot gas bound to clusters.

This involves a transition-edge sensor, which is limited by photon shot noise, read out with a 100-SQUID array, a multiplexer, a current-summing circuit, a comb of frequencies....
This is now installed at the South Pole Telescope.

The second project involves the search for cold dark matter, specifically the axion, a particle proposed in 1978 to explain the magnitude of the electric dipole moment of the neutron, which is three orders of magnitude smaller than the standard model predicts. [This was news to me. I thought the standard model explained everything, more or less!]

How can axions be detected? Via Primakoff conversion (Sikivie, 1983) of an axion to a photon. I did not follow how this worked.

The noise temperature of the amplifiers needs to be reduced; ones quieter than HEMTs (high electron mobility transistors, based on GaAs) are wanted.
With SQUIDs the noise temperature becomes about 30-40 times lower than with HEMTs.
The required measurement time then decreases from 270 years to 100 days, so graduate students may be interested in working on this project!

Clarke then discussed Microtesla magnetic resonance imaging.
A standard clinical MRI machine uses 1.5 Tesla, costs 2$M, and you sometimes need to reinforce floor. Inspired by Michael Crichton novel (1999) to use earths magnetic field for imaging. [Was this tongue in cheek?]

Clarke showed results from a PNAS paper in 2004, which imaged a red pepper [capsicum to some?] with a resolution of 0.7 mm. He also showed 3D images of an arm with a field of 132 microtesla. He went on to discuss T1-weighted contrast imaging (the contrast agent is a Gd salt). I then had to leave to go to my son's futsal game....

And to think that all this technology and science has come about because a young Ph.D student about 40 years ago was challenged to think about physical signatures of spontaneous symmetry breaking in superconductors....

Friday, December 4, 2009

Images of condensed matter physics

Irina Bariakhtar, webmaster for the Division of Condensed Matter Physics of the American Physical Society, has established a gallery of images that are fascinating and may be useful for talks and teaching.
Some of these images are used in a 2010 calendar.

Deconstructing charge transport in complex materials II

Following up on my previous post on impedance spectroscopy I came across two helpful reviews.
The first is a "Colloquium" article by D.L. Sidebottom in a recent Reviews of Modern Physics. It shows how in glasses which have conducting ions the rms motion of the ions can be extracted directly from the frequency dependence of the conductivity.

The second article is a tutorial review applying impedance spectroscopy to conducting polymers, especially when they are used as electrodes in biomedical applications.

Thursday, December 3, 2009

Read the ad, answer the ad

If you are applying for a job:

Read the advertisement very carefully and note answers to the following questions:

What sort of person are they looking for?

Is the job to work
- with a specific person, with a specific group, or independently?
- on a specific project?
- in a specific research area?

Is the job supported by a specific grant or program?

After you have gleaned answers to these questions answer:
Do I want this specific job?
Why should they be specifically interested in me?

Then if you apply make sure your cover letter specifically tells them why you want THIS specific job and why they should interview specifically YOU.

Wednesday, December 2, 2009

Model Hamiltonian parameters from electronic structure theory

An important step in modelling complex materials is writing down effective Hamiltonians which capture the essential physics. One approach to estimating model parameters for a specific material is to calculate them using methods from electronic structure theory.

About ten years ago, following earlier work by Kino and Fukuyama, I wrote a review arguing that the relevant Hamiltonian for the kappa-(BEDT-TTF)2X family of superconducting organic charge transfer salts was a Hubbard model on the anisotropic triangular lattice at half filling.

A recent PRL by a group from Goethe Universitat Frankfurt is of particular interest to me because it describes state-of-the-art calculations based on density functional theory. For three different anions X and two different pressures, the authors calculate the parameters t and t' which define the band structure. Figure 4 from the paper gives a nice summary of the results.
[Similar results were obtained at the same time by a group in Japan and reported here.]

The ratio t'/t has a significant effect on the ground state of the system, as can be seen in the proposed phase diagram from a PRL by Ben Powell and me.

A few specific things that stood out to me about the results:

1. The ratio t'/t does vary significantly with anion X.

X              t'/t          experimental ground state at ambient pressure
Cu[N(CN)2]Cl   0.44 ± 0.05   antiferromagnetic Mott insulator
Cu(SCN)2       0.58 ± 0.05   superconductor
Cu2(CN)3       0.83 ± 0.08   spin liquid Mott insulator

This variation can explain why one sees a "chemical pressure" effect, i.e., why the anion has a significant effect on the ground state.

2. The level of DFT used (LDA vs. GGA) does not seem to have a large effect (less than ten per cent) on the results, increasing confidence that the results are somewhat robust.

3. The calculated magnitudes of t~0.05-0.07 eV are comparable to those estimated from comparison of the experimental optical conductivity to that calculated using dynamical mean-field theory (see this PRL).

4. The values of t and t' can be used to estimate the bare density of states at the Fermi energy. Comparison with measurements of the cyclotron effective mass and the specific heat coefficient gamma provides a means to estimate the renormalisation of these by many body effects. This is discussed in great detail in this paper.

5. The value of t1 (the intradimer hopping) ~ 0.2 eV is comparable to that estimated from the intradimer transition in the optical conductivity.

6. The authors estimate U ~ 2t1, where t1 is the intradimer hopping, following what I did a decade ago. However, this is only accurate in the limit U0 >> 2t1, where U0 is the Hubbard U for a single molecule. This turns out not to be justified and is discussed in this review.

7. The authors calculate that t varies by as much as 30 per cent as the pressure increases to 0.75 GPa ( 7.5 kbar). This can explain why the ground state does change with pressure (e.g., from Mott insulator to superconductor to metal). When I wrote my review a decade ago this was an unsolved problem.

8. The value for X=Cu2(CN)3 of t'/t = 0.83 +- 0.08 means in the relevant Heisenberg model that describes the spin degrees of freedom in the Mott insulator, J'/J ~0.6, a value comparable to that at which magnetic order disappears, as calculated in this paper.
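The conversion from hopping ratios to exchange ratios is simple to check (a two-line calculation of mine): at strong coupling the Heisenberg exchange is J = 4t^2/U, so J'/J = (t'/t)^2.

```python
def heisenberg_ratio(tp_over_t):
    """Strong-coupling exchange: J = 4 t^2 / U, so J'/J = (t'/t)^2."""
    return tp_over_t ** 2

# t'/t values from the table above
for anion, r in [("Cu[N(CN)2]Cl", 0.44), ("Cu(SCN)2", 0.58), ("Cu2(CN)3", 0.83)]:
    print(anion, round(heisenberg_ratio(r), 2))
```

For t'/t = 0.83 this gives J'/J ≈ 0.69; taking the lower end of the quoted error bar (0.75) gives ≈ 0.56, bracketing the J'/J ~ 0.6 value quoted above.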

One question I have is:
What is the physical basis of the chemical pressure effect?