Tuesday, May 23, 2017

How should undergraduate quantum theory be taught?

Some of my colleagues and I have started an interesting discussion about how to teach quantum theory to undergraduates. We have courses in the second, third, and fourth years. The three courses have evolved independently, depending on who teaches each. Some material gets repeated and other "important" topics get left out. One concern is that students seem not to "learn" what is in the curriculum for the previous year. The goal is to have a cohesive curriculum. This might be facilitated by using the same text for both the second- and third-year courses.
This has stimulated me to raise some questions and give my tentative answers. I hope the post will stimulate lots of comments.

The problem that students don't seem to have learned what they should have in prerequisite courses is not unique to quantum. I encounter second-year students who can't do calculus and fourth-year (honours) students who can't sketch a graph of a function or put numbers into a formula and get the correct answer with meaningful units. As I have argued before, basic skills are more important than detailed technical knowledge of specific subjects. Such skills include relating theory to experiment and making order of magnitude estimates.

Yet, given the following, should we be surprised?
At UQ typical lecture attendance probably runs at 30-50 per cent for most courses. About five per cent watch the video. [University policy is that all lectures are automatically recorded]. The rest are going to watch it next week… Only about 25 per cent of the total enrolment in my second-year class are engaged enough to be using clickers in lectures. Exams are arguably relatively easy, similar to previous years, usually involve choosing questions/topics, and a mark of only 40-50 per cent is required to pass the course.
I do not think curriculum reform is going to solve this problem.

Having the same textbook for second and third year does have advantages. This is what we do for PHYS2020 Thermodynamics and PHYS3030 Statistical Mechanics. But some second-years do struggle with it... which is not necessarily a bad thing. The book is Introduction to Thermal Physics, by Schroeder.

Another question is which approach you take for quantum: Schrodinger or Heisenberg, i.e. wave or matrix mechanics? The mathematics of the former is differential equations; that of the latter is linear algebra. Obviously, at some point you teach both, but what do you start with? It is interesting that the Feynman lectures really start with and develop the matrix approach, beating the two-level system to death...
At what point do you solve the harmonic oscillator with creation and annihilation operators?
When do you introduce Dirac notation?
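To make the matrix-mechanics option concrete, here is a minimal numpy sketch (my own illustration, not taken from any particular text) that builds the annihilation and creation operators in a truncated number basis and checks that the harmonic oscillator spectrum falls out of pure linear algebra:

```python
import numpy as np

# Harmonic oscillator via matrix mechanics (units where hbar = omega = 1).
N = 20                                        # size of truncated number basis
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator
adag = a.conj().T                             # creation operator

# H = a†a + 1/2; its eigenvalues should be E_n = n + 1/2
H = adag @ a + 0.5 * np.eye(N)
energies = np.sort(np.linalg.eigvalsh(H))
print(energies[:4])                           # -> [0.5 1.5 2.5 3.5]

# The commutator [a, a†] = 1 holds except in the last basis state,
# an artefact of the truncation
comm = a @ adag - adag @ a
print(np.allclose(np.diag(comm)[:-1], 1.0))   # -> True
```

The truncation is the only approximation here, and the failure of the commutator in the highest state is itself a nice talking point about doing matrix mechanics in a finite basis.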

I would be hesitant about using Dirac notation throughout the second-year course. I think this is too abstract for many of our current students. They also need to learn and master basic ideas and techniques of wave mechanics: particle in a box, hydrogen atom, atomic orbitals, … and connecting theory to experiment... and making order-of-magnitude estimates for quantum phenomena.
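On the wave-mechanics side, even the particle in a box can be used to connect theory to numbers. A finite-difference sketch (again, just my own illustration) that students can check against the exact result E_n = n²π²ħ²/2mL²:

```python
import numpy as np

# Particle in a box of width L, solved by finite differences
# (units where hbar = m = 1; exact levels are E_n = n^2 pi^2 / (2 L^2)).
L = 1.0
M = 500                          # number of interior grid points
dx = L / (M + 1)

# Kinetic-energy matrix with hard-wall (Dirichlet) boundary conditions
off = np.ones(M - 1)
T = -(np.diag(np.full(M, -2.0)) + np.diag(off, 1) + np.diag(off, -1)) / (2 * dx**2)

E = np.sort(np.linalg.eigvalsh(T))[:4]
E_exact = np.array([1, 4, 9, 16]) * np.pi**2 / (2 * L**2)
print(E / E_exact)   # close to [1, 1, 1, 1], up to small discretisation error
```

Comparing the numerical ratios to 1 gives a concrete feel for discretisation error, which is itself a useful lesson.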

What might be a good text to use?

Twenty years ago (wow!) I taught second (?) year quantum at UNSW. The text I used, by Sara McMurry, is very well written. I would still recommend it as it has a good mix of experiment and theory, old and new topics, wave and matrix mechanics….
It also had some nice computer simulations. But it is out of print, which really surprises and disappoints me.

Related to this, there is a discussion on a Caltech blog about what topics should be in undergraduate courses on modern physics. Currently, most "modern" physics courses actually cover few discoveries beyond about 1930! Thus, what topics should be added? To do this one has to cut out some topics. People may find the discussion interesting (or frustrating…). I disagree with most of the discussion, and even find it a little bizarre. Many of the comments seem to be from people pushing their own current research topic. For example, I know it is Caltech, but including the density matrix renormalisation group (DMRG) does seem a little advanced and specialised...
There is no discussion of one of the great triumphs of "modern" physics, biophysics! I actually think every undergraduate should take a course in it.

What do you cut out?
I actually think the more you cut the better, if the result is covering a few topics in greater depth in a way that develops skills, creates a greater understanding of foundations, and leads to a greater love of the subject and a desire and ability to learn more.
In teaching fourth-year condensed matter [roughly Ashcroft and Mermin] it is always a struggle to cut stuff out. Sometimes we don't even talk about semiconductor devices. This year I cut out transport theory and the Boltzmann equation so we could have more time for superconductivity. This is all debatable... But I hope that the students learned enough that, if they ever need these topics, they have the background to learn them easily.

A key issue that will divide people concerns the ultimate goal of a physics undergraduate education. Here are three extreme views.

A. It should prepare people to do a PhD with the instructor.
Thus all the background knowledge needed should be covered, including the relevant specialised and advanced topics.

B. It should prepare people to do a physics PhD (usually in theory) at one of the best institutions in the world.
Thus, everyone should have a curriculum like Caltech.

C. It should give a general education that students will enjoy and that develops skills and knowledge that may be helpful when they become high school teachers or software engineers.

What about Academic Freedom?
This means different things to different people. In some ways I think that the teacher should have a lot of freedom to add and subtract topics, to pitch the course at the level they want, and to choose the text. I don't think department chairs or colleagues should be telling them what they "have" to do. Obviously, teachers need to listen to others and take their views into account, particularly if they are more experienced. But people should be given the freedom to make mistakes. There are risks. But I think they are worth taking in order to maintain faculty morale, foster creativity, maintain standards, and honour the important tradition of academic freedom. Furthermore, it is very important that faculty are not told by administrators, parents, or politicians what they should or should not be doing. Here, we should spare a thought for our colleagues in the humanities and social sciences, particularly in the USA, who are under increasing pressure to act in certain ways.

I welcome comments on any of the above.
My colleagues would particularly like to hear any text recommendations. Books by Griffiths, Shankar, Sakurai, and Townsend have been mentioned as possibilities.

Tuesday, May 16, 2017

A radical procedure for evaluating applicants: read one of their papers!

Like most faculty I have to evaluate the scientific "performance" and "potential" of applicants for jobs, promotion, prizes, and grants. This is a difficult task because we are often asked to evaluate people whom we don't know, whose work in our (somewhat related) field of expertise we are unfamiliar with, or who are working in completely different fields we know nothing about. For example, I have been on a committee where I had the ridiculous task of evaluating people in fields such as veterinary medicine, geography, and agriculture! This is one reason why metrics are so seductive and deceptive.

Here I want to focus on evaluating applicants who work in an area that is close enough to my own expertise, according to the following criterion: I can read one of their papers and make a reasonably informed assessment of its value, significance, and validity. This is important because it is easy to lose sight of the fact that there is only ONE measure that really matters: the ability of a person to produce valuable scientific knowledge. All the metrics, invited talks, grants, hype, slickly presented grant applications, enthralling presentations, .... are not what really matters. They are not research accomplishments. This measure can only be assessed from the actual content of papers.

I am embarrassed to admit that I am only now trying to work harder at ``practising what I preach.'' When I need to assess a credible application I try to identify just one paper that the applicant identifies as significant, or, for a grant application, a paper that is central to the proposed project. I then look at this paper before going back to the (copious) paperwork of the submitted application. You might think that this takes more time. But, actually, it may save time because it can be so decisive for my view (positive, neutral, or negative) that I read and agonise less over my assessment.

I have always done this when assessing applicants for postdocs to work directly with me, but much less consistently in other situations.

Ideally, this should not be necessary. Rather, a good applicant would be someone whose papers we have already read because we wanted to. However, that is not the world we live in. 

Is this a reasonable approach? Do you do something like this? Any other recommended approaches?

Friday, May 12, 2017

How should you engage with Trump and Brexit supporters?

Trump's election win surprised many, including me. Most people underestimated the level of resentment towards "elites" and the "establishment", particularly among working class white voters. This was also a factor in Brexit.

This post is about how scientists and university faculty might engage more effectively with such groups. Some of the concerns I have are similar to those I have about Marching for Science.

Over the past six months I have read some articles and had some interesting conversations with people in both the USA and Australia who either voted for Trump or would have. What surprised and shocked me, even among some ``well educated'' people, was the level of distrust and resentment towards the "elites". [``You should not believe anything in the New York Times... Trump is telling the truth about crime statistics... We aren't safe... All these liberals had it coming....'']
I also found helpful the book, Hillbilly Elegy.

My wife is a US citizen, and one of her relatives in the USA works at a state university and sent us a popular newspaper article, written by a local faculty member, that aims to open a dialogue with a particular demographic that voted heavily for Trump. I largely agree with the author and have some similar political and religious views. I particularly liked the sincerity of the attempt to open a dialogue on several specific issues. However, there were several aspects of the article that I thought illustrate the problem we face, rather than moving towards a real dialogue. I have made the debatable decision not to link to the actual article, because of the negative comments I make about the author, and because I am worried some of the content and issues discussed may distract readers from my points, which I think are part of broader challenges.

We are the elite.
The author claims they are not elite because they have a working class background and never studied at, or worked at, an Ivy League university.
I disagree. The author is a full Professor and has an annual salary of US$100 K [At that state university the salary of all employees is public and can be looked up in minutes.] In addition, they and their family may receive significant health care benefits and college tuition subsidies. They will keep their job until they choose to retire in their sixties (or even later) on a generous pension.
A survey found that the average American thinks that $122K per year makes someone "rich".
I would think the author (like me) is in the global one per cent. Furthermore, they have a job where they have fun teaching and researching subjects [in the social sciences and humanities] that many (but not me) would consider "soft" or "left wing", and of no "economic" benefit.
Let me be clear, I think the remuneration, job security, and academic freedom of faculty is generally appropriate and important. We may not be Wall Street investment bankers or highly paid political consultants. But, we should acknowledge that we are part of the elite and privileged, regardless of our backgrounds.

Why should people trust us?
The author says that Trump voters should trust and respect the expertise of faculty on scientific, social,  economic and policy matters. "You don't seem to understand that our work goes through the extremely rigorous process of peer review."
Seriously!
In physical sciences, a lot of nonsense still gets published, especially in ``high impact'' journals. The social sciences and humanities are arguably even worse.
Hype from scientists, soft action by universities on scientific fraud (this New York Times article includes the photo below of the relevant professor with his private art collection!), and excessive university administrator salaries are not helping create trust and respect.

Don't talk down to people who are less educated.
Although I felt the author tried hard to engage Trump supporters, I could not help but feel (perhaps unfairly) that they were talking down to their audience. ``We know what is right and what is best for you. You really aren't smart enough to understand...''
This problem can also occur when scientists interact with groups such as climate change ``skeptics'' and young earth creationists.
I know that at times I am not as patient or as diplomatic as I could be.


What do you think? Are these general concerns part of the problem?

Tuesday, May 9, 2017

Is this a reasonable exam?

I struggle to set good exam questions. One wants to test knowledge and understanding in a way that is realistic within the constraints of students' abilities and backgrounds.

I do not have a well-defined philosophy or approach, except for often recycling my old questions...
I think I do have a prejudice towards two goals.

A. Testing higher level skills [e.g. relating theory to experiment, putting things in context, ...] as much as specific technical knowledge [e.g. state Bloch's theorem or solve the Schrodinger equation for a charged particle in constant magnetic field].

B. Testing general and useful knowledge. For basic undergraduate courses [e.g. years 1 to 3] the question should be one that another faculty member could do, even if they have not taught the course. Sometimes, colleagues write questions that I cannot do: you have to have done the problem before, e.g. in a tutorial. We then seem to be testing whether someone has done this course, not "essential" knowledge.

However, I am not sure I really go anywhere near reaching these goals.
Here is a recent mid-semester exam I set for my solid state class of fourth-year undergraduates.
Is it reasonable?

How do you set exam questions?
Do you have a particular approach?

Friday, May 5, 2017

Talk on "crackpot" theories

At UQ there is a great student physics club, PAIN. Today they are having a session on "crackpot" theories in science. Rather than picking on sincere but misguided amateurs I thought I would have a go at "mainstream" scientists who should know better. Here are my slides on quantum biology.

A more detailed and serious talk is a colloquium that I gave six years ago. I regret that the skepticism I expressed then seems to have been justified.

Postscript.
I really enjoyed this session with the students. Several gave interesting and stimulating talks, covering topics such as the flat earth, Last Thursdayism, and The Final Theory of gravity [objects don't fall to the earth but rather the earth rises up to them...]. There were good discussions about falsifiability, Occam's razor, Newton's flaming laser sword, ...
There was an interesting mixture of history, philosophy, humour, and real physics.

I always find it encouraging to encounter students who are so excited about physics that they want to do something like this on a Friday night.

Wednesday, May 3, 2017

Computational density functional theory (DFT) in a nutshell

My recent post, Computational Quantum Chemistry in a nutshell, was quite popular. There are two distinct computational approaches: those based on calculating the wavefunction, which I described in that post, and those based on calculating the local charge density [the diagonal of the one-particle density matrix of the many-body system]. Here I describe the latter, which is based on density functional theory (DFT). Here are the steps and choices one makes.

First, as for wave-function based methods, one assumes the Born-Oppenheimer approximation, where the atomic nuclei are treated classically and the electrons quantum mechanically.

Next, one makes use of the famous (and profound) Hohenberg-Kohn theorem, which says that the total energy of the ground state of a many-body system is a unique functional of the local electronic charge density, E[n(r)]. This means that if one can calculate the local density n(r) one can calculate the total energy of the ground state of the system. Although this is an exact result, the problem is that one needs to know the exchange-correlation functional, and one does not. One has to approximate it.

The next step is to choose a particular exchange-correlation functional. The simplest one is the local density approximation [LDA], where one writes E_xc[n] as an integral over f(n(r)), where f(x) is the corresponding energy density for a uniform electron gas with constant density x. Kohn and Sham showed that if one minimises the total energy as a functional of n(r) then one ends up with a set of eigenvalue equations for some functions phi_i(r) which have the identical mathematical structure to the Schrodinger equation for the molecular orbitals that one calculates in a wave-function based approach with the Hartree-Fock approximation. However, it should be stressed that the phi_i(r) are just a mathematical convenience and are not wave functions. The similarity to the Hartree-Fock equations means the problem is not just computationally tractable but also relatively cheap.
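In equations, the Kohn-Sham scheme described above looks like this (schematically, for a spin-unpolarised system, in standard notation):

```latex
\begin{align}
  \left[-\frac{\hbar^2}{2m}\nabla^2 + v_{\rm eff}[n](\mathbf{r})\right]\phi_i(\mathbf{r})
    &= \varepsilon_i\,\phi_i(\mathbf{r}),\\
  v_{\rm eff}[n](\mathbf{r}) &= v_{\rm ext}(\mathbf{r})
    + \int\frac{e^2\,n(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d^3r'
    + \frac{\delta E_{xc}[n]}{\delta n(\mathbf{r})},\\
  n(\mathbf{r}) &= \sum_{i\,\in\,{\rm occ}}|\phi_i(\mathbf{r})|^2.
\end{align}
```

Because v_eff depends on the density n, which is in turn built from the phi_i, these equations have to be solved self-consistently.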

When one solves the Kohn-Sham equations on the computer one has to choose a finite basis set. Often they are similar to the atomic-centred basis sets used in wave-function based calculations. For crystals, one sometimes uses plane waves. Generally, the bigger and the more sophisticated and chemically appropriate the basis set, the better the results.
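The self-consistent loop one implements has a simple structure. Here is a toy sketch on a 1D real-space grid, with the exchange-correlation piece replaced by a simple local mean-field g·n(x) purely for illustration; this shows the structure of a Kohn-Sham calculation, not a real functional:

```python
import numpy as np

# Skeleton of a Kohn-Sham-style self-consistent loop on a 1D grid
# (units where hbar = m = 1). The exchange-correlation piece is replaced
# by a toy local mean-field v[n](x) = g * n(x) -- a stand-in, not real LDA.
M, Lbox, g = 400, 10.0, 0.5
x = np.linspace(-Lbox / 2, Lbox / 2, M)
dx = x[1] - x[0]
v_ext = 0.5 * x**2                    # external (harmonic) potential

off = np.ones(M - 1)
T = -(np.diag(np.full(M, -2.0)) + np.diag(off, 1) + np.diag(off, -1)) / (2 * dx**2)

n = np.zeros(M)                       # initial guess for the density
for it in range(200):
    H = T + np.diag(v_ext + g * n)    # effective one-particle Hamiltonian
    eps, phi = np.linalg.eigh(H)
    phi0 = phi[:, 0] / np.sqrt(dx)    # normalised lowest orbital
    n_new = 2 * phi0**2               # two electrons in the lowest orbital
    if np.max(np.abs(n_new - n)) < 1e-8:
        break                         # self-consistency reached
    n = 0.7 * n + 0.3 * n_new         # simple linear mixing for stability
print(round(n.sum() * dx, 6))         # total electron number -> 2.0
```

The mixing step is the crudest of a family of tricks (Anderson, Broyden, DIIS) that real DFT codes use to coax the loop to converge.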

With the above uncontrolled approximations, one might not necessarily expect to get anything that approximates reality (i.e. experiment). Nevertheless, I would say the results are often surprisingly good. If you pick a random molecule, LDA can give a reasonable answer (say within 20 per cent) for the geometry, bond lengths, heats of formation, and vibrational frequencies... However, it does have spectacular failures, both qualitative and quantitative, for many systems, particularly those involving strong electron correlations.

Over the past two decades, there have been two significant improvements to LDA.
First, the generalised gradient approximation (GGA), whose exchange-correlation functional allows for the spatial variations in the density that are neglected in the LDA.
Second, hybrid functionals (such as B3LYP), which contain a linear combination of the Hartree-Fock exchange functional and other functionals that have been parametrised to increase agreement with experimental properties.
It should be stressed that this means that the calculation is no longer ab initio, i.e. one where you start from just Schrodinger's equation and Coulomb's law and attempt to calculate properties.

It should be stressed that for interesting systems the results can depend significantly on the choice of exchange-correlation functional. Thus, it is important to calculate results for a range of functionals and basis sets, and not just report results that are close to experiment.

DFT-based calculations have the significant advantage over wave-function based approaches that they are computationally cheaper (and so are widely used). However, they cannot be systematically improved [the dream of Jacob's ladder is more like a nightmare], and become problematic for charge transfer and the description of excited states.

Thursday, April 27, 2017

Is it an Unidentified Superconducting Object (USO)?

If you look on the arXiv and in Nature journals there is a continuing stream of people claiming to observe superconductivity in some new material.
There is a long history of this and it is worth considering the wise observations of Robert Cava, back in 1997, contained in a tutorial lecture.
It would have been useful indeed in the early days of the field [cuprate superconductors] to have set up a "commission" to set some minimum standard of data quality and reproducibility for reporting new superconductors. An almost countless number of "false alarms" have been reported in the past decade, some truly spectacular. Koichi Kitazawa from the University of Tokyo coined these reports "USOs", for Unidentified Superconducting Objects, in a clever cross-cultural double entendre likening them to UFOs (Unidentified Flying Objects, which certainly are their equivalent in many ways) and to "lies" in the Japanese translation of USO. 
These have caused great excitement on occasion, but more often distress. It is important, however, to keep in mind what a report of superconductivity at 130K in a ceramic material two decades ago might have looked like to rational people if it came out of the blue sky with no precedent. That having been said, it is true that all the reports of superconductivity in new materials which were later confirmed to be true did conform to some minimum standard of reproducibility and data quality. I have tried to keep up with which of the reports have turned out to be true and which haven't. 
There have been two common problems: 
1. Experimental error- due, generally, to inexperienced investigators unfamiliar with measurement methods or what is required to show that a material is superconducting. This has become more rare as the field matures. 
[n.b. you really need to observe both zero resistivity and the Meissner effect].
2. "New" superconductors are claimed in chemical systems already known to have superconductors containing some subset of the components. This is common even now, and can be difficult for even experienced researchers to avoid. The previously known superconductor is present in small proportions, sometimes in lower Tc form due to impurities added by the experimentalist trying to make a new compound. In a particularly nasty variation on this, sometimes extra components not intentionally added are present - such as Al from crucibles or CO2 from exposure to air some time during the processing. I wish I had a dollar for every false report of superconductivity in a Nb containing oxide where the authors had unintentionally made NbN in small proportions.
There is also an interesting article about the Schon scandal, where Paul Grant claims
During my research career in the field of superconducting materials, I have documented many cases of an 'unidentified superconducting object' (USO), only one of which originated from an industrial laboratory, eventually landing in Physical Review Letters. But USOs have had origins in many universities and government laboratories. Given my rather strong view of the intrinsic checks and balances inherent in industrial research, the misconduct that managed to escape notice at Bell Labs is even more singular.

Monday, April 24, 2017

Have universities lost sight of the big questions and the big picture?

Here are some biting critiques of some of the "best" research at the "best" universities, by several distinguished scholars.
The large numbers of younger faculty competing for a professorship feel forced to specialize in narrow areas of their discipline and to publish as many papers as possible during the five to ten years before a tenure decision is made. Unfortunately, most of the facts in these reports have neither practical utility nor theoretical significance; they are tiny stones looking for a place in a cathedral. The majority of ‘empirical facts’ in the social sciences have a half-life of about ten years.
Jerome Kagan [Harvard psychologist], The Three Cultures Natural Sciences, Social Sciences, and the Humanities in the 21st Century
[I thank Vinoth Ramachandra for bringing this quote to my attention].
[The distinguished philosopher Alasdair] MacIntyre provides a useful tool to test how far a university has moved to this fragmented condition. He asks whether a wonderful and effective undergraduate teacher who is able to communicate how his or her discipline contributes to an integrated account of things – but whose publishing consists of one original but brilliant article on how to teach – would receive tenure. Or would tenure be granted to a professor who is unable or unwilling to teach undergraduates, preferring to teach only advanced graduate students and engaged in ‘‘cutting-edge research.’’ MacIntyre suggests if the answers to these two inquiries are ‘‘No’’ and ‘‘Yes,’’ you can be sure you are at a university, at least if it is a Catholic university, in need of serious reform. I feel quite confident that MacIntyre learned to put the matter this way by serving on the Appointment, Promotion, and Tenure Committee of Duke University. I am confident that this is the source of his understanding of the increasing subdisciplinary character of fields, because I also served on that committee for seven years. During that time I observed people becoming ‘‘leaders’’ in their fields by making their work so narrow that the ‘‘field’’ consisted of no more than five or six people. We would often hear from the chairs of the departments that they could not understand what the person was doing, but they were sure the person to be considered for tenure was the best ‘‘in his or her field."
Stanley Hauerwas, The State of the University, page 49.

Are these reasonable criticisms of the natural sciences?

Wednesday, April 19, 2017

Commercialisation of universities

I find the following book synopsis rather disturbing.
Is everything in a university for sale if the price is right? In this book, the author cautions that the answer is all too often "yes." Taking the first comprehensive look at the growing commercialization of our academic institutions, the author probes the efforts on campus to profit financially not only from athletics but increasingly, from education and research as well. He shows how such ventures are undermining core academic values and what universities can do to limit the damage. 
Commercialization has many causes, but it could never have grown to its present state had it not been for the recent, rapid growth of money-making opportunities in a more technologically complex, knowledge-based economy. A brave new world has now emerged in which university presidents, enterprising professors, and even administrative staff can all find seductive opportunities to turn specialized knowledge into profit. 
The author argues that universities, faced with these temptations, are jeopardizing their fundamental mission in their eagerness to make money by agreeing to more and more compromises with basic academic values. He discusses the dangers posed by increased secrecy in corporate-funded research, for-profit Internet companies funded by venture capitalists, industry-subsidized educational programs for physicians, conflicts of interest in research on human subjects, and other questionable activities. 
While entrepreneurial universities may occasionally succeed in the short term, reasons the author, only those institutions that vigorously uphold academic values, even at the cost of a few lucrative ventures, will win public trust and retain the respect of faculty and students. Candid, evenhanded, and eminently readable, Universities in the Marketplace will be widely debated by all those concerned with the future of higher education in America and beyond.
What is most disturbing is that the author of Universities in the Marketplace: The Commercialization of Higher Education is Derek Bok, former President of Harvard, the richest university in the world!

There is a helpful summary and review of the book here. A longer review compares and contrasts the book to several others addressing similar issues.

How concerned should we be about these issues?

Thursday, April 13, 2017

Quantum entanglement technology hype


Last month The Economist had a cover story and large section on commercial technologies based on quantum information.

To give the flavour here is a sample from one of the articles
Very few in the field think it will take less than a decade [to build a large quantum computer], and many say far longer. But the time for investment, all agree, is now—because even the smaller and less capable machines that will soon be engineered will have the potential to earn revenue. Already, startups and consulting firms are springing up to match prospective small quantum computers to problems faced in sectors including quantitative finance, drug discovery and oil and gas. .... Quantum simulators might help in the design of room-temperature superconductors allowing electricity to be transmitted without losses, or with investigating the nitrogenase reaction used to make most of the world’s fertiliser.
I know people are making advances [which are interesting from a fundamental science point of view] but it seems to me we are a very long way from doing anything cheaper [both financially and computationally] than a classical computer.

Doug Natelson noted that at the last APS March Meeting, John Martinis said that people should not believe the hype, even from him!

Normally The Economist gives a hard-headed analysis of political and economic issues. I might not agree with it [it is too neoliberal for me] but at least I trust it to give a rigorous and accurate analysis. I found this section to be quite disappointing. I hope uncritical readers don't start throwing their retirement funds into start-ups that are going to develop the "quantum internet" because they believe that this is going to be as important as the transistor (a claim the article ends with).

Maybe I am missing something.
I welcome comments on the article.

Tuesday, April 11, 2017

Should we fund people or projects?

In Australia, grant reviewers are usually asked to score applications according to three aspects: investigator, project, and research environment. These are usually weighted by something like 40%, 40%, and 20%, respectively. Previously, I wrote how I think the research environment aspect is problematic.

I struggle to see why investigator and project should have equal weighting. For example, consider the following caricatures.

John writes highly polished proposals with well defined projects on important topics. However, he has limited technical expertise relevant to the ambitious goals in the proposal. He also tends to write superficial papers on hot topics.

Joan is not particularly well organised and does not write polished proposals. She does not plan her projects but lets her curiosity and creativity lead her. Although she does not write a lot of papers she has a good track record of moving into new areas and making substantial contributions.

This raises the question of whether we should forget the project dimension of funding altogether. Suppose you had the following extreme system. You just give the "best" people a grant for three years and they can do whatever they want. Three years later they apply again and are evaluated based on what they have produced. This would encourage more risk-taking and save a lot of time in the grant preparation and evaluation process.

Are there any examples of this kind of "no strings attached" funding? The only examples I can think of are MacArthur Fellows and Royal Society Professorships. However, these are really for stellar senior people.

What do you think?

Thursday, April 6, 2017

Do you help your students debug codes?

Faculty vary greatly in their level of involvement with the details of the research projects of the undergrads, Ph.D students, and postdocs they supervise. Here are three different examples based on real senior people.

A. gives the student or postdoc a project topic and basically does not want to talk to them again until they bring a draft of a paper.

B. talks to their students regularly but boasts that they have not looked at a line of computer code since they became a faculty member. It is the sole responsibility of students to write and debug code.

C. is very involved. One night before a conference presentation they stayed up until 3 am trying to debug a student's code in the hope of getting some more results to present the next day.

Similar issues arise with analytical calculations or getting experimental apparatus to work.

What is an appropriate level of involvement?
On the one hand, it is important that students take responsibility for their projects and learn to solve their own problems.
On the other hand, faculty can speed things along and sometimes quickly find "bugs" because of experience. Also a more "hands on" approach gives a better feel for how well the student knows what they are doing and is checking things.
It is fascinating and disturbing to me that in the Schön scandal, Batlogg confessed that he never went into the lab and so did not realise there was no real experiment.

I think there is no clear cut answer. Different people have different strengths and interests (both supervisors and students). Some really enjoy the level of detail and others are more interested in the big picture.
However, I must say that I think A. is problematic.
Overall, I am closer to B. than C, but this has varied depending on the person involved, the project, and the technical problems.

What do you think?

Tuesday, April 4, 2017

Some awkward history

I enjoyed watching the movie Hidden Figures. It is based on a book that recounts the little-known history of the contributions of three African-American women to NASA and the first manned space flights in the 1960s. The movie is quite entertaining and moving while raising significant issues about racism and sexism in science. I grimaced at some of the scenes. On the one hand, some would argue we have come a long way in fifty years. On the other hand, we should be concerned about how the rise of Trump will play out in science.


One minor question I have is how much of the math on the blackboards is realistic?



Something worth considering is the extent to which the movie fits the too-common white savior narrative, as highlighted in a critical review, by Marie Hicks.

Saturday, April 1, 2017

A fascinating thermodynamics demonstration: the drinking bird

I am currently helping teach a second year undergraduate course Thermodynamics and Condensed Matter Physics. For the first time I am helping out in some of the lab sessions. Two of the experiments are based on the drinking bird.



This illustrates two important topics: heat engines and liquid-vapour equilibria.

Here are a few observations, in random order.

* I still find it fascinating to watch. Why isn't it a perpetual motion machine?

* Several things are surprising:
a. it operates on such a small temperature difference,
b. there is a temperature difference between the head and the bulb at all,
c. it is so sensitive to perturbations such as warming with your fingers or changes in humidity.
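On point (a), a back-of-envelope Carnot bound shows just how little work is available. This is my own illustration; the temperatures below are rough assumptions, not measurements from the lab.

```python
# Upper (Carnot) bound on the efficiency of the drinking bird, treated as a
# heat engine running between the bulb (at room temperature) and the
# evaporatively cooled head. Temperatures are assumed, illustrative values.
T_bulb = 295.0   # K, room temperature (assumed)
T_head = 293.0   # K, a couple of degrees cooler (assumed)

eta_carnot = 1.0 - T_head / T_bulb
print(f"Carnot efficiency bound: {eta_carnot:.1%}")
```

The bound comes out well under one per cent, which is why the bird moves so slowly and is so easily disrupted.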

* It took me quite a while to understand what is going on, which makes me wonder about the students doing the lab. How much are they following the recipe and saying the mantra...

* I try to encourage the students to think critically and scientifically about what is going on, asking some basic questions, such as "How do you know the head is cooler than the bulb? What experiment can you do right now to test your hypothesis? How can you test whether evaporative cooling is responsible for cooling the head?" Such an approach is briefly described in this old paper.

* Understanding and approximately quantifying the temperature of the head involves the concepts of humidity, wet-bulb temperature, and a psychrometric chart. Again, I find this challenging.

* This lab is a great example of how you don't necessarily need a lot of money and fancy equipment to teach a lot of important science and skills.

Thursday, March 30, 2017

Perverse incentives in academia

According to Wikipedia, "A perverse incentive is an incentive that has an unintended and undesirable result which is contrary to the interests of the incentive makers. Perverse incentives are a type of negative unintended consequence."

There is an excellent (but depressing) article
Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition
by Marc A. Edwards and Siddhartha Roy

I learnt of the article via a blog post summarising it, Every attempt to manage academia makes it worse.

Incidentally, Edwards is a water quality expert who was influential in exposing the Flint Water crisis.

The article is particularly helpful because it cites a lot of literature concerning the problems. It contains the following provocative table. I also like the emphasis on ethical behaviour and altruism.


It is easy to feel helpless. However, the least you can do is stop looking at metrics when reviewing grants, job applicants, and tenure cases. Actually read some of the papers and evaluate the quality of the science. If you don't have the expertise then you should not be making the decision, or you should seek expert review.

Tuesday, March 28, 2017

Computational quantum chemistry in a nutshell

To the uninitiated (and particularly physicists), computational quantum chemistry can seem a bewildering zoo of multi-letter acronyms (CCSD(T), MP4, aug-cc-pVDZ, ...).

However, the basic ingredients and key assumptions can be simply explained.

First, one makes the Born-Oppenheimer approximation, i.e. one treats the positions of the N_n nuclei in a particular molecule as classical variables [R is a 3N_n-dimensional vector] and the electrons as quantum. One wants to find the energy eigenstates of the N electrons. The corresponding Hamiltonian and Schrodinger equation is
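A standard way to write this (my reconstruction, assuming the usual form of the electronic Hamiltonian with fixed nuclei) is:

```latex
\left[ -\sum_{i=1}^{N} \frac{\hbar^2}{2m} \nabla_i^2
  + \sum_{i=1}^{N} V_{\mathrm{nuc}}(\mathbf{r}_i;\mathbf{R})
  + \sum_{i<j} \frac{e^2}{4\pi\epsilon_0 |\mathbf{r}_i-\mathbf{r}_j|} \right]
\Psi_n(\mathbf{r}_1,\ldots,\mathbf{r}_N;\mathbf{R})
  = E_n(\mathbf{R})\, \Psi_n(\mathbf{r}_1,\ldots,\mathbf{r}_N;\mathbf{R})
```

Here V_nuc is the Coulomb attraction of the electrons to the nuclei at positions R, and the nuclear coordinates enter only as parameters.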


The electronic energy eigenvalues E_n(R) define the potential energy surfaces associated with the ground and excited states. From the ground state surface one can understand most of chemistry! (e.g., molecular geometries, reaction mechanisms, transition states, heats of reaction, activation energies, ....)
As Laughlin and Pines say, the equation above is the Theory of Everything!
The problem is that one can't solve it exactly.

Second, one chooses whether one wants to calculate the complete wave function for the electrons or just the local charge density (one-particle density matrix). The latter is what one does in density functional theory (DFT). I will just discuss the former.

Now we want to solve this eigenvalue problem on a computer and the Hilbert space is huge, even for a simple molecule such as water. We want to reduce the problem to a discrete matrix problem. The Hilbert space for a single electron involves a wavefunction in real space and so we want a finite basis set of L spatial wave functions, "orbitals". Then there is the many-particle Hilbert space for N-electrons, which has dimensions of order L^N. We need a judicious way to truncate this and find the best possible orbitals.
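To make the size of that truncated Hilbert space concrete, here is a minimal sketch (my own illustration, not from the post) counting the number of Slater determinants for N electrons in L spatial orbitals, i.e. 2L spin orbitals:

```python
from math import comb

def n_determinants(n_electrons: int, n_spatial_orbitals: int) -> int:
    """Number of Slater determinants for n_electrons distributed among
    2 * n_spatial_orbitals spin orbitals."""
    return comb(2 * n_spatial_orbitals, n_electrons)

# Water has 10 electrons; take a modest basis of 25 spatial orbitals:
dim = n_determinants(10, 25)
print(dim)  # about 10^10 determinants
```

Even for a molecule as simple as water in a modest basis, the dimension is of order 10^10, which is why clever truncations are essential.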

The single particle orbitals can be introduced
where the a's are annihilation operators to give the Hamiltonian
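In second-quantised form the Hamiltonian reads (a standard reconstruction in chemists' notation; I am assuming the usual conventions):

```latex
H = \sum_{ij,\sigma} h_{ij}\, a^\dagger_{i\sigma} a_{j\sigma}
  + \frac{1}{2} \sum_{ijkl} \sum_{\sigma\sigma'} (ij|kl)\,
    a^\dagger_{i\sigma} a^\dagger_{k\sigma'} a_{l\sigma'} a_{j\sigma}
```

where the h_{ij} are one-electron integrals and the (ij|kl) are two-electron integrals over the orbital basis.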

These are known as Coulomb and exchange integrals. Sometimes they are denoted (ij|kl).
Computing them efficiently is a big deal.
In semi-empirical theories one neglects many of these integrals and treats the others as parameters that are determined from experiment.
For example, if one only keeps a single term (ii|ii) one is left with the Hubbard model!

Equivalently, the many-particle wave function can be written in this form.

Now one makes two important choices of approximations.

1. atomic basis set
One picks a small set of orbitals centered on each of the atoms in the molecule. Often these have the traditional s-p-d-f rotational symmetry and a Gaussian dependence on distance.

2. "level of theory"
This concerns how one solves the many-body problem, or equivalently how one truncates the Hilbert space (electronic configurations), or equivalently which approximate variational wavefunction one uses. Examples include Hartree-Fock (HF), second-order perturbation theory (MP2), a Gutzwiller-type wavefunction (CC = Coupled Cluster), or Complete Active Space (CAS(K,L)), where one uses HF for the highest and lowest energy orbitals and exact diagonalisation for a small subset of K electrons in L orbitals.
Full-CI (configuration interaction) is exact diagonalisation. This is only possible for very small systems.

The many-body wavefunction contains many variational parameters: both the coefficients in front of the atomic orbitals that define the molecular orbitals, and the coefficients in front of the Slater determinants that define the electronic configurations.

Obviously, the larger the atomic basis set and the "higher" the level of theory (i.e. the treatment of electron correlation), the closer one hopes to move to reality (experiment). I think Pople first drew a diagram such as the one below (taken from this paper).


However, I stress some basic points.

1. Given how severely the Hilbert space of the original problem is truncated, one would not necessarily expect to get anywhere near reality. The pleasant surprise for the founders of the field was that even with 1950s computers one could get interesting results. Although the electrons are strongly correlated (in some sense), Hartree-Fock can sometimes be useful. Such success is far from obvious.

2. The convergence to reality is not necessarily uniform.
This gives rise to Pauling points: "improving" the approximation may give worse answers.

3. The relative trade-off between the horizontal and vertical axes is not clear and may be context dependent.

4. Any computational study should have some "convergence" tests. i.e. use a range of approximations and compare the results to see how robust any conclusions are.

Thursday, March 23, 2017

Units! Units! Units!

I am spending more time with undergraduates lately: helping in a lab (scary!), lecturing, marking assignments, supervising small research projects, ...

One issue keeps coming up: physical units!
Many of the students struggle with this. Some even think it is not important!

This matters in a wide range of activities.

  • Giving a meaningful answer for a measurement or calculation. This includes canceling out units.
  • Using dimensional analysis to find possible errors in a calculation or formula.
  • Writing equations in dimensionless form to simplify calculations, whether analytical or computational.
  • Making order of magnitude estimates of physical effects.
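As one concrete illustration (my own, with assumed input values), an order-of-magnitude estimate where writing out the units explicitly is what catches errors:

```python
import math

# Order-of-magnitude estimate: rms speed of a nitrogen molecule at room
# temperature, v = sqrt(3 k_B T / m). Tracking units:
#   [k_B T / m] = J / kg = (kg m^2 s^-2) / kg = m^2 s^-2, so v comes out in m/s.
k_B = 1.38e-23          # J/K, Boltzmann constant
T = 300.0               # K, room temperature
m_N2 = 28 * 1.66e-27    # kg (28 atomic mass units)

v_rms = math.sqrt(3 * k_B * T / m_N2)
print(f"v_rms ~ {v_rms:.0f} m/s")  # a few hundred m/s
```

A student who mixed up grams and kilograms, or Celsius and Kelvin, would get an answer off by orders of magnitude, and checking the units is the fastest way to spot it.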

Any others you can think of?

Any thoughts on how we can do better at training students to master this basic but important skill?

Tuesday, March 21, 2017

Emergence frames many of the grand challenges and big questions in universities

What are the big questions that people are (or should be) wrestling within universities?
What are the grand intellectual challenges, particularly those that interact with society?

Here are a few. A common feature of those I have chosen is that they involve emergence: complex systems consisting of many interacting components produce new entities and there are multiple scales (whether length, time, energy, the number of entities) involved.

Economics
How does one go from microeconomics to macroeconomics?
What is the interaction between individual agents and the surrounding economic order?
A recent series of papers (see here and references therein) has looked at how the concept of emergence played a role in the thinking of Friedrich Hayek.

Biology
How does one go from genotype to phenotype?
How do the interactions between many proteins produce a biochemical process in a cell?


The figure above shows a protein interaction network and is taken from this review.

Sociology
How do communities and cultures emerge?
What is the relationship between human agency and social structures?

Public health and epidemics
How do diseases spread and what is the best strategy to stop them?

Computer science
Artificial intelligence.
Recently it was shown how Deep learning can be understood in terms of the renormalisation group.

Community development, international aid, and poverty alleviation
I discussed some of the issues in this post.

Intellectual history
How and when do new ideas become "popular" and accepted?

Climate change

Philosophy
How do you define consciousness?

Some of the issues are covered in the popular book, Emergence: the connected lives of Ants, Brains, Cities, and Software.
Some of these phenomena are related to the physics of networks, including scale-free networks. The most helpful introduction I have read is a Physics Today article by Mark Newman.

Given this common issue of emergence, I think there are some lessons (and possibly techniques) these fields might learn from condensed matter physics. It is arguably the field which has been the most successful at understanding and describing emergent phenomena. I stress that this is not hubris. This success is not because condensed matter theorists are smarter or more capable than people working in other fields. It is because the systems are "simple" enough, and (sometimes) have a clear separation of scales, that they are more amenable to analysis and controlled experiments.

Some of these lessons are "obvious" to condensed matter physicists. However, I don't think they are necessarily accepted by researchers in other fields.

Humility.
These are very hard problems, progress is usually slow, and not all questions can be answered.

The limitations of reductionism.
Trying to model everything by computer simulations which include all the degrees of freedom will lead to limited progress and insight.

Find and embrace the separation of scales.
The renormalisation group provides a method to systematically do this. A recent commentary by Ilya Nemenman highlights some recent progress and the associated challenges.

The centrality of concepts.

The importance of critically engaging with experiment and data.
They must be the starting and end point. Concepts, models, and theories have to be constrained and tested by reality.

The value of simple models.
They can give significant insight into the essentials of a problem.

What other big questions and grand challenges involve emergence?

Do you think condensed matter [without hubris] can contribute something?

Saturday, March 18, 2017

Important distinctions in the debate about journals

My post, "Do we need more journals?" generated a lot of comments, showing that the associated issues are something people have strong opinions about.

I think it important to consider some distinct questions that the community needs to debate.

What research fields, topics, and projects should we work on?

When is a specific research result worth communicating to the relevant research community?

Who should be co-authors of that communication?

What is the best method of communicating that result to the community?

How should the "performance" and "potential" of individuals, departments, and institutions be evaluated?

A major problem for science is that over the past two decades the dominant answer to the last question (metrics such as Journal "Impact" Factors and citations) is determining the answer to the other questions. This issue has been nicely discussed by Carl Caves.
The tail is wagging the dog.

People flock to "hot" topics that can produce quick papers, may attract a lot of citations, and are beloved by the editors of luxury journals. Results are often obtained and analysed in a rush, not checked adequately, and presented in the "best" possible light with a bias towards exotic explanations. Co-authors are sometimes determined by career issues and the prospect of increasing the probability of publication in a luxury journal, rather than by scientific contribution.

Finally, there is a meta-question in the background. It is actually more important but harder to answer.
How are the answers to the last question being driven by broader moral and political issues?
Examples include the rise of the neoliberal management class, treatment of employees, democracy in the workplace, inequality, post-truth, the value of status and "success", economic instrumentalism, ...

Thursday, March 16, 2017

Introducing students to John Bardeen

At UQ there is a great student physics club, PAIN. Their weekly meeting is called the "error bar." This friday they are having a session on the history of physics and asked faculty if any would talk "about interesting stories or anecdotes about people, discoveries, and ideas relating to physics."

I thought for a while and decided on John Bardeen. There is a lot I find interesting. He is the only person to receive two Nobel Prizes in Physics. Arguably, the discoveries associated with the two prizes (the transistor and BCS theory) are of greater significance than the average Nobel. Then there is his difficult relationship with Shockley, who in some sense became the founder of Silicon Valley.

Here are my slides.


In preparing the talk I read the interesting articles in the April 1992 issue of Physics Today that was completely dedicated to Bardeen. In his article David Pines, says
[Bardeen's] approach to scientific problems went something like this: 
  • Focus first on the experimental results, by careful reading of the literature and personal contact with members of leading experimental groups. 
  • Develop a phenomenological description that ties the key experimental facts together. 
  • Avoid bringing along prior theoretical baggage, and do not insist that a phenomenological description map onto a particular theoretical model. Explore alternative physical pictures and mathematical descriptions without becoming wedded to a specific theoretical approach. 
  • Use thermodynamic and macroscopic arguments before proceeding to microscopic calculations. 
  • Focus on physical understanding, not mathematical elegance. Use the simplest possible mathematical descriptions. 
  • Keep up with new developments and techniques in theory, for one of these could prove useful for the problem at hand. 
  • Don't give up! Stay with the problem until it's solved. 
In summary, John believed in a bottom-up, experimentally based approach to doing physics, as distinguished from a top-down, model-driven approach. To put it another way, deciding on an appropriate model Hamiltonian was John's penultimate step in solving a problem, not his first.
With regard to "interesting stories or anecdotes about people, discoveries, and ideas relating to physics," what would you talk about?

Wednesday, March 15, 2017

The power and limitations of ARPES

The past two decades have seen impressive advances in Angle-Resolved PhotoEmission Spectroscopy (ARPES). This technique has played a particularly important role in elucidating the properties of the cuprates and topological insulators. ARPES allows measurement of the one-electron spectral function, A(k,E), something that can be calculated from quantum many-body theory. Recent advances include the development of laser-based ARPES, which makes synchrotron time unnecessary.

A recent PRL shows the quality of data that can be achieved.

Orbital-Dependent Band Narrowing Revealed in an Extremely Correlated Hund’s Metal Emerging on the Topmost Layer of Sr2RuO4 
Takeshi Kondo, M. Ochi, M. Nakayama, H. Taniguchi, S. Akebi, K. Kuroda, M. Arita, S. Sakai, H. Namatame, M. Taniguchi, Y. Maeno, R. Arita, and S. Shin

The figure below shows a colour density plot of the intensity [related to A(k,E)] along a particular direction in the Brillouin zone.  The energy resolution is of the order of meV, something that would not have been dreamed of decades ago.
Note how the observed dispersion of the quasi-particles is much smaller than that calculated from DFT, showing how strongly correlated the system is.

The figure below shows how with increasing temperature a quasi-particle peak gradually disappears, showing the smooth crossover from a Fermi liquid to a bad metal, above some coherence temperature.
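A toy picture of this crossover (my own sketch, not the paper's analysis) is a Lorentzian quasiparticle peak whose scattering rate grows with temperature; the peak height collapses as the width grows:

```python
import math

def spectral_function(omega, eps_k, gamma):
    """Lorentzian approximation to the one-electron spectral function
    A(k, omega) for a quasiparticle at energy eps_k with scattering
    rate gamma (assumed to grow with temperature)."""
    return (gamma / math.pi) / ((omega - eps_k) ** 2 + gamma ** 2)

eps_k = -0.01  # eV, quasiparticle energy (illustrative value)
cold = spectral_function(eps_k, eps_k, gamma=0.002)  # coherent: sharp peak
hot = spectral_function(eps_k, eps_k, gamma=0.05)    # incoherent: washed out
print(cold > hot)  # the peak height falls as 1/gamma with warming
```

This is only a cartoon: in a real bad metal the lineshape is not a simple Lorentzian, but it captures why the quasiparticle peak fades above the coherence temperature.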
The main point of the paper is that the authors are able to probe just the topmost layer of the crystal and that the associated electronic structure is more correlated (the bands are narrower and the coherence temperature is lower) than the bulk.
Again it is impressive that one can make this distinction.

But this does highlight a limitation of ARPES, particularly in the past. It is largely a surface probe and so one has to worry about whether one is measuring surface properties that are different from the bulk. This paper shows that those differences can be significant.

The paper also contains DFT+DMFT calculations which are compared to the experimental results.

Monday, March 13, 2017

What do your students really expect and value?

Should you ban cell phones in class?

I found this video quite insightful. It reminded me of the gulf between me and some students.



It confirmed my policy of not allowing texting in class. Partly this is to force students to be more engaged. But it is also to make students think about whether they really need to be "connected" all the time.

What is your policy on phones in class?

I think that the characterisation of "millennials" may be a bit harsh and too one-dimensional, although I did encounter some of the underlying attitudes in a problematic class a few years ago. Reading a Time magazine cover article then was helpful.
I also think that this is not a good characterisation of many of the students who make it as far as an advanced undergraduate or Ph.D program. By then many of the narcissistic and entitled have self-selected out. It is just too much hard work.

Friday, March 10, 2017

Do we really need more journals?

NO!

Nature Publishing Group continues to spawn "Baby Natures" like crazy.

I was disappointed to see that Physical Review is launching a new journal Physical Review Materials. They claim it is to better serve the materials community. I found this strange. What is wrong with Physical Review B? It does a great job.
Surely, the real reason is APS wants to compete with Nature Materials [a front for mediocrity and hype] which has a big Journal Impact Factor (JIF).
On the other hand, if the new journal could put Nature Materials out of business I would be very happy. At least the journal would be run and controlled by real scientists and not-for-profit.

So I just want to rant two points I have made before.

First, the JIF is essentially meaningless, particularly when it comes to evaluating the quality of individual papers. Even if one believes citations are some sort of useful measure of impact, one should look at the distribution, not just the mean. Below the distribution is shown for Nature Chemistry.


Note how the distribution is highly skewed, being dominated by a few highly cited papers. More than 70 per cent of papers score less than the mean.
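The point can be illustrated with a lognormal toy model of citation counts (synthetic, not the Nature Chemistry data): for any heavily skewed lognormal, well over half the papers fall below the mean.

```python
from statistics import NormalDist

# Toy model: citation counts ~ lognormal(mu, sigma). For a lognormal,
# the mean is exp(mu + sigma^2 / 2), so the fraction of papers cited
# less than the mean is Phi(sigma / 2), independent of mu.
sigma = 1.5  # a heavily skewed distribution (assumed value)
frac_below_mean = NormalDist().cdf(sigma / 2)
print(f"{frac_below_mean:.0%} of papers fall below the mean")
```

With this (assumed) skewness about three-quarters of papers sit below the mean, which is exactly why the JIF, a mean, says almost nothing about a typical paper.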

Second, the problem is that people are publishing too many papers. We need fewer journals, not more!
Three years ago, I posted about how I think journals are actually redundant and gave a specific proposal for how to move towards a system that produces better science (more efficiently) and more accurately evaluates the quality of individuals' contributions.

Getting there will obviously be difficult. However, initiatives such as SciPost and PLOS ONE, are steps in a positive direction.
Meanwhile those of us evaluating the "performance" of individuals can focus on real science and not all this nonsense beloved by many.

Wednesday, March 8, 2017

Is complexity theory relevant to poverty alleviation programs?

For me, global economic inequality is a huge issue. A helpful short video describes the problem.
Recently, there has been a surge of interest among development policy analysts about how complexity theory may be relevant in poverty alleviation programs.

On an Oxfam blog there is a helpful review of three books on complexity theory and development.
I recently read some of one of these books, Aid on the Edge of Chaos: Rethinking International Cooperation in a Complex World, by Ben Ramalingham.

Here is some of the publisher blurb.
Ben Ramalingam shows that the linear, mechanistic models and assumptions on which foreign aid is built would be more at home in early twentieth century factory floors than in the dynamic, complex world we face today. All around us, we can see the costs and limitations of dealing with economies and societies as if they are analogous to machines. The reality is that such social systems have far more in common with ecosystems: they are complex, dynamic, diverse and unpredictable. 
Many thinkers and practitioners in science, economics, business, and public policy have started to embrace more 'ecologically literate' approaches to guide both thinking and action, informed by ideas from the 'new science' of complex adaptive systems. Inspired by these efforts, there is an emerging network of aid practitioners, researchers, and policy makers who are experimenting with complexity-informed responses to development and humanitarian challenges. 
This book showcases the insights, experiences, and often remarkable results from these efforts. From transforming approaches to child malnutrition, to rethinking processes of economic growth, from building peace to combating desertification, from rural Vietnam to urban Kenya, Aid on the Edge of Chaos shows how embracing the ideas of complex systems thinking can help make foreign aid more relevant, more appropriate, more innovative, and more catalytic. Ramalingam argues that taking on these ideas will be a vital part of the transformation of aid, from a post-WW2 mechanism of resource transfer, to a truly innovative and dynamic form of global cooperation fit for the twenty-first century.
The first few chapters give a robust and somewhat depressing critique of the current system of international aid. He then discusses complexity theory and finally specific case studies.
The Table below nicely contrasts two approaches.

A friend who works for a large aid NGO told me about the book and described a workshop (based on the book) that he attended where the participants even used modeling software.

I have mixed feelings about all of this.

Here are some positive points.

Any problem in society involves a complex system (i.e. many interacting components). Insights, both qualitative and quantitative, can be gained from "physics" type models. Examples I have posted about before include the statistical mechanics of money and the universality of probability distributions for certain social quantities.
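The statistical mechanics of money can be demonstrated in a few lines. This is my own sketch of a Dragulescu-Yakovenko-style random-exchange model (all parameters are assumed): money is conserved in pairwise exchanges, and the stationary wealth distribution approaches an exponential (Boltzmann-Gibbs) form, for which the mean and standard deviation are equal.

```python
import random

random.seed(0)

# Random pairwise exchange model: N agents start with equal money; at each
# step a random amount moves from one random agent to another, with no debt
# allowed. Total money is conserved throughout.
N = 1000
money = [100.0] * N
for _ in range(1_000_000):
    i, j = random.randrange(N), random.randrange(N)
    amount = random.uniform(0, 10)
    if money[i] >= amount:  # no debt allowed
        money[i] -= amount
        money[j] += amount

mean = sum(money) / N
std = (sum((m - mean) ** 2 for m in money) / N) ** 0.5
print(round(std / mean, 2))  # should approach 1 for an exponential distribution
```

The resulting inequality emerges purely from random exchange, with no differences in skill or effort, which is one of the model's provocative qualitative insights.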

Simplistic mechanical thinking, such as that associated with Robert McNamara in Vietnam and then at the World Bank, is problematic and needs to be critiqued. Even a problem as "simple" as replacing wood-burning stoves turns out to be much more difficult and complicated than anticipated.

A concrete example discussed in the book is that of positive deviance, which takes its partial motivation from power laws.

Here are some concerns.

Complexity theory suffers from being oversold. It certainly gives important qualitative insights and concrete examples in "simple" models. However, to what extent complexity theory can give a quantitative description of real systems is debatable. This is particularly true of the idea of "the edge of chaos" that features in the title of the book. A less controversial title would have replaced this with simply "emergence", since that is a lot of what the book is really about.

Some of the important conclusions of the book could be arrived at by different, more conventional routes. For example, a major point is that "top down" approaches are problematic. This is where some wealthy Westerners define a problem, define the solution, then provide the resources (money, materials, and personnel) and impose the solution on local poor communities. A more "bottom up" or "complex adaptive systems" approach is one where you consult with the community, get them to define the problem and brainstorm possible solutions, give them ownership of implementing the project, and adapt the strategy in response to trials. One can arrive at this same approach if one's starting point is simply humility and respect for the dignity of others. We don't need complexity theory for that.

The author makes much of the story of Sugata Mitra, whose TED talk, "Kids can teach themselves", has more than a million views. Mitra put some computer terminals in a slum in India and claims that poor uneducated kids taught themselves all sorts of things, illustrating "emergent" and "bottom up" solutions. It is a great story. However, it has received some serious criticism, which is not acknowledged by the author.

Nevertheless, I recommend the book and think it is a valuable and original contribution about a very important issue.

Monday, March 6, 2017

A dirty secret in molecular biophysics

The past few decades have seen impressive achievements in molecular biophysics, based on two techniques that are now commonplace.

Using X-ray crystallography to determine the detailed atomic structure of proteins.

Classical molecular dynamics simulations.

However, there is a fact that is not as widely known and acknowledged as it should be. These two complementary techniques have an unhealthy symbiotic relationship.
Protein crystal structures are often "refined" using molecular dynamics simulations.
The "force fields" used in the simulations are often parametrised using known crystal structures!

There are at least two problems with this.

1. Because the methods are not independent of one another, their agreement in a particular case does not demonstrate anything, particularly not "confirmation" of the validity of a result.

2. Classical force fields are classical and do not necessarily give a good description of the finer details of chemical bonding, something that is intrinsically quantum mechanical. The active sites of proteins are "special" by definition. They are finely tuned to perform a very specific biomolecular function (e.g. catalysis of a specific chemical reaction or conversion of light into electrical energy). This is particularly true of hydrogen bonds, where bond length differences of less than 1/20 of an Angstrom can make a huge difference to a potential energy surface.

I don't want to diminish or put down the great achievements of these two techniques. We just need to be honest and transparent about their limitations and biases.

I welcome comments.

Friday, March 3, 2017

Science is told by the victors and Learning to build models

A common quote about history is that "History is written by the victors". The over-simplified point is that sometimes the losers of a war are obliterated (or at least lose power) and so don't have the opportunity to tell their side of the story. In contrast, the victors want to propagate a one-sided story about their heroic win over their immoral adversaries. The origin of this quote is debatable but there is certainly a nice article where George Orwell discusses the problem in the context of World War II.

What does this have to do with teaching science?
The problem is that textbooks present nice clean discussions of successful theories and models that rarely engage with the complex and tortuous path that was taken to get to the final version.
If the goal is "efficient" learning and minimisation of confusion this is appropriate.
However, we should ask whether this is the best way for students to actually learn how to DO and understand science.

I have been thinking about this because this week I am teaching the Drude model in my solid state physics course. Because of its simplicity and success, it is an amazing and beautiful theory. But, it is worth thinking about two key steps in constructing the model; steps that are common (and highly non-trivial) in constructing any theoretical model in science.

1. Deciding which experimental observables and results one wants to describe.

2. Deciding which parameters or properties will be ingredients of the model.

For 1. it is Ohm's law, Fourier's law, Hall effect, Drude peak, UV transparency of metals, Wiedemann-Franz, magnetoresistance, thermoelectric effect, specific heat, ...

For 2. one starts with only conduction electrons (not valence electrons or ions), no crystal structure or chemical detail (except valence), and focuses on averages (velocity, scattering time, density) rather than standard deviations, ...
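Given these few ingredients, the model's central result, sigma = n e^2 tau / m, already lets one extract a microscopic time scale from a tabletop measurement. A rough sketch (the copper values are standard textbook figures):

```python
# Drude-model estimate of the electron scattering time in copper,
# from sigma = n e^2 tau / m  =>  tau = m * sigma / (n * e^2).
e = 1.602e-19    # electron charge (C)
m = 9.109e-31    # electron mass (kg)
sigma = 5.9e7    # conductivity of copper at room temperature (S/m)
n = 8.5e28       # conduction electron density of copper (m^-3)

tau = m * sigma / (n * e**2)
print(f"scattering time ~ {tau:.1e} s")  # of order 10^-14 s
```

That a time of a few tens of femtoseconds comes out of two macroscopic numbers is part of the model's beauty, and of its economy: no crystal structure or chemistry is needed.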

In hindsight, it is all "obvious" and "reasonable" but spare a thought for Drude in 1900. It was only 3 years after the discovery of the electron, before people were even certain that atoms existed, and certainly before the Bohr model...

This issue is worth thinking about as we struggle to describe and understand complex systems such as society, the economy, or biological networks. One can nicely see 1. and 2. above in a modest and helpful article by William Bialek, Perspectives on theory at the interface of physics and biology.

Tuesday, February 28, 2017

The value of vacations

This is the first week of classes for the beginning of the academic year.

In preparation for a busy semester, I took last week off work (my last four posts were automated) and visited my son in Canberra (where I grew up) and spent some time hiking in one of my favourite places, Kosciusko National Park. One photo is below. This reminded me of the importance of vacations and down time, of the therapeutic value of the nature drug, and of turning off your email occasionally.


Above Lake Albina on the main range.

Friday, February 24, 2017

Excellent notes on the Quantum Hall Effect

In the condensed matter theory group at UQ we regularly run reading groups, where we work through a book, review article, or some lecture notes. This is particularly important as our PhD students don't take any courses.

Currently we are working through some nice lecture notes on the Quantum Hall effect, written by David Tong. They are very accessible and clear, particularly in putting the QHE in the context of topology, edge states, Berry's phase, Chern insulators, TKNN, ...

On his website he also has lectures on a wide range of topics from kinetic theory to string theory.

Wednesday, February 22, 2017

Desperately seeking Weyl semi-metals. 2.

Since my previous post about the search for a Weyl semimetal in pyrochlore iridates (such as R2Ir2O7, where R = rare earth), I have read two more interesting papers on the subject.

Metal-Insulator Transition and Topological Properties of Pyrochlore Iridates 
Hongbin Zhang, Kristjan Haule, and David Vanderbilt

Using a careful DMFT+DFT study they are able to reproduce experimental trends across the series, R=Y, Eu, Sm, Nd, Pr, Bi.

They show that when the self-energy due to interactions is included, the band structure is topologically trivial, contrary to the 2010 proposal based on DFT+U.

They also find that the quasi-particle weight is quite small (about 0.1 for R=Sm, Nd and 0.2 for Pr). This goes some way towards explaining the fact that the infrared conductivity gives an extremely small Drude weight (about 0.05 electrons per unit cell), a puzzle I highlighted in my first post.

Field-induced quantum metal–insulator transition in the pyrochlore iridate Nd2Ir2O7 
Zhaoming Tian, Yoshimitsu Kohama, Takahiro Tomita, Hiroaki Ishizuka, Timothy H. Hsieh, Jun J. Ishikawa, Koichi Kindo, Leon Balents, and Satoru Nakatsuji

The authors make much of two things.

First, the relatively low magnetic field (about 10 Tesla) required to induce the transition from the magnetic insulator to the metallic phase. Specifically, the relevant Zeeman energy is much smaller than the charge gap in the insulating phase.
However, one might argue that the energy scale one should be comparing to is the thermal energy associated with the magnetic transition temperature.
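For a rough sense of the numbers (the g-factor and the transition temperature below are assumed, illustrative values, not taken from the paper):

```python
# Order-of-magnitude comparison of the relevant energy scales (in meV).
mu_B = 5.788e-2   # Bohr magneton (meV/T)
k_B = 8.617e-2    # Boltzmann constant (meV/K)

B = 10.0     # field scale of the transition (Tesla)
T_N = 30.0   # assumed magnetic transition temperature (K), illustrative

E_zeeman = 2.0 * mu_B * B   # assuming g ~ 2; rare-earth moments can have larger g
E_thermal = k_B * T_N

print(f"Zeeman energy  ~ {E_zeeman:.1f} meV")
print(f"Thermal energy ~ {E_thermal:.1f} meV")
# The two are within a factor of a few of each other,
# whereas a charge gap of tens of meV is much larger.
```

On this estimate the Zeeman and thermal scales are comparable, which makes a field-induced transition at 10 Tesla look much less surprising than a comparison with the charge gap would suggest.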

Second, the novelty of this transition.
However, in 2001 a somewhat similar transition was observed in the organic charge transfer salt lambda-(BETS)2FeCl4. It is even more dramatic because that material undergoes a field-induced transition from a Mott insulator to a superconductor. The physics is also quite similar in that it can be described by a Hubbard-Kondo model, where local moments are coupled to interacting delocalised electrons.

Monday, February 20, 2017

Senior faculty position in Experimental Condensed Matter available at UQ

My department has just advertised a faculty position. 

I will be interested to see how many applicants want to escape Trumpland for sunny Queensland [which BTW has excellent gun control and national health care...].


Friday, February 17, 2017

A new picture of unconventional superconductivity

Three key ideas concerning unconventional superconductors are the following.

1. s-wave and p-wave pairing (in momentum space) are associated with spin singlet and spin triplet pairing, respectively. This can be shown with minimal assumptions (no spin-orbit coupling and spatial inversion symmetry).

2. If superconductivity is seen in proximity to an ordered phase (e.g. ferromagnetism or antiferromagnetism) with a quantum critical point (QCP) then the pairing can be "mediated" by low energy fluctuations (e.g. magnons) associated with the ordering.

3. Non-fermi liquid behaviour may be seen in the quantum critical region about the QCP.
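The minimal argument behind 1. is just Fermi statistics: the pair wavefunction must be antisymmetric under exchange of the two electrons, which ties the parity of the orbital part to the symmetry of the spin part. A sketch:

```latex
% Antisymmetry of the gap function under exchange of the two electrons:
\Delta_{\alpha\beta}(\mathbf{k}) = -\Delta_{\beta\alpha}(-\mathbf{k})
% Even parity, \Delta(-\mathbf{k}) = \Delta(\mathbf{k}) (s-wave, d-wave),
% forces antisymmetry in the spin indices: a spin singlet.
% Odd parity, \Delta(-\mathbf{k}) = -\Delta(\mathbf{k}) (p-wave, f-wave),
% forces symmetry in the spin indices: a spin triplet.
```

Note that this chain of reasoning assumes inversion symmetry and no spin-orbit coupling; relaxing either assumption allows parity and spin symmetry to mix.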

However, an interesting paper shows that none of the above is necessarily true.

Superconductivity from Emerging Magnetic Moments 
Shintaro Hoshino and Philipp Werner

They find spin triplet superconductivity with s-wave symmetry. This arises because there is more than one orbital per site and, due to the Hund's rule coupling, spin triplets can form on a single site.

They also find the pairing is strongest near the "spin freezing crossover" which is associated with the "Hund's metal", i.e. the bad metal arising from the Hund's rule interaction, and has certain "non-Fermi liquid" properties.

The results are summarised in the phase diagrams below, which have a striking similarity to various experimental phase diagrams that are usually interpreted in terms of 2. above.
However, all the theory is DMFT and so there are no long wavelength fluctuations.