Wednesday 23 March 2016

Philip Morse's imaginary glass tube

I've been reading a lot of acoustics textbooks to try to overhaul my ideas about how acoustics can and should be taught in the twenty-first century, and today I settled down with Philip Morse's Vibration and Sound (2nd ed. McGraw-Hill, 1948). Morse was really driving the subject: Leo Beranek's description of his first research project at MIT was that Morse had developed the idea of impedance as a way of characterising walls and wanted someone to measure it for the first time (sorry, I don't have the exact quote from his memoirs to hand; they're well worth reading too). No wonder Morse's own memoirs were called In at the Beginning, though I've not had a chance to read them yet.

There's lots about Vibration and Sound that shows how the attempt to explain flows from the attempt to understand; even though it's avowedly mathematical it's also full of thought-experiments. One that particularly struck me introduces wave motion on a stretched string, something I talk about a lot when lecturing on Musical Instrument Acoustics, both to engineers and musicians. What he imagined was a string passing through a glass tube bent into the shape of a plane curve with its ends co-axial. The string is under tension and is being wound from one spool to another at a constant speed, and it travels through the tube without friction (as in all the best mechanics textbook problems). He works out the centrifugal force on a short curved section of string (which varies with the square of the string's speed) and the force on the side of the tube on the inside of the curve due to tension (which is independent of speed). He then shows that when the string is spooling at a particular speed the resultant force between string and tube will vanish and:
"...we can carefully break away the tube from around the string and leave the string moving with velocity c, still retaining the original form of the tube, a wave form standing still in space."
Thoughts:
  1. Before we had tape recorders we had wire recorders; my father used one when he did his national service with the Royal Signals. Could they have inspired this? Their 'brief heyday', according to the article I just linked, was from 1946 to 1954. My copy's a 2nd edition from 1948; the first was published in 1936. If anyone has a first edition, could they tell me if this thought-experiment is in it?
  2. The tone implies that the calculation of the forces on the tube is an elementary matter. Maybe so for students drilled on endless mechanics problem sheets, but maybe less so now. Much as I like this example, I don't know that it would help my students as much. In those days acoustics was a graduate subject, often for students who'd learned electronics in the army and were, like Philip Doak, studying on the GI Bill. It might make a nice exam question though (the balance itself is sketched below this list).
  3. Someone's got to do this experiment; it would be so cool. Not with a glass tube, but the wire could run between rollers which could be withdrawn once the wire's up to speed. Then you could adjust the tension and move the wave back and forth.
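
For the record, here is the balance Morse describes, in my notation rather than his: take a short element of string of length $ds$ lying in a part of the tube whose local radius of curvature is $R$, with tension $T$, mass per unit length $\mu$ and speed $v$ along the tube. Then

$$\text{outward (centrifugal) force on the tube} = \frac{\mu\,ds\,v^2}{R}, \qquad \text{inward force due to the tension} = \frac{T\,ds}{R},$$

and both act along the local normal, so the resultant force between string and tube vanishes for every element at once when $\mu v^2 = T$, i.e. when $v = \sqrt{T/\mu}$, which is precisely the transverse wave speed $c$ on the string.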

Sunday 28 February 2016

Slowness limits

A stretch of road that I regularly drive along recently had its speed limit reduced from 60 miles per hour to 50. No complaints from me, but I would like to know how much extra time to allow for my journey. A glance at the odometer tells me that the section with the new limit is two miles long so that means it'll take an additional ... err ... well ... an amount of time that I found rather tricky to work out in my head while driving.

Perhaps I'm just lousy at mental maths? For a week or so I annoyed my friends with this puzzle. Some worked it out sooner than others but none got it straight away and quite a few couldn't be persuaded/shamed into working it out without using pen and paper, or in some cases at all. Why is such a simple question so hard to answer?

The problem is speed, which is distance per unit time. If you're travelling at a speed of x 'distance measures per time measure' then, if you keep doing so for a whole time unit, you'll have travelled x distance units. Hardly clear, I realise, and it doesn't help that I'm using 'unit' in two senses: in 'per unit time' it means with a value of one; in 'a whole time unit' it means the amount we use to measure time, be it hours, seconds or whatever.

The size of those measurement units determines the numerical value of the quantity - the Young's modulus of a material is an enormous number in SI units because it is the stress necessary to cause unit axial strain (i.e. doubling of axial length) and stress is force per unit area (in SI units a square metre). So the Young's modulus of steel is the force that would have to be applied to opposite faces of a one metre steel cube to stretch it till it was two metres long. No wonder it's a big number.
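
To put a number on it, take the standard textbook value for steel:

$$E = \frac{\sigma}{\varepsilon} \approx 2\times10^{11}\ \text{Pa},$$

so the imagined force on the one-metre cube would be about $2\times10^{11}$ N, the weight of roughly twenty million tonnes (setting aside the inconvenient fact that real steel would yield long before reaching anything like unit strain).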

The measurement units might determine the numerical value, but the definition determines the meaning. With stress and strain there's a clear cause-and-effect: the stress causes the strain and the higher the material's modulus the more stress will be needed to cause the same strain. Is that the case with speed?

How often do you get in a car knowing how much time you intend to drive for but with an open mind about how far you're going to travel? In that strange situation knowing your speed is useful, because it tells you how many more miles each of the remaining seconds will add to your total distance. But what we almost always do is travel a set distance and want to know how long it will take. You might have a time constraint as well as a distance constraint but the distance constraint always wins: "Sorry, I couldn't drive fast enough so I got to the meeting five minutes late" might annoy your colleagues, but "Sorry, I couldn't drive fast enough so I stopped a mile away when it was time for the meeting to start" will annoy them more.

So we adjust speed in order to control arrival time, hopefully subject to speed limits. When you're running late it's tempting to go faster in order to get there sooner, but do you have a clear idea of how much journey time you save for a given increase in speed? I'd say no for two related reasons: as we've seen above it's a hard calculation to do, and the effect on journey time depends on how fast you're already going. As a result I suspect that some drivers who break speed limits do so because they overestimate the time it will save them. That's a testable proposition and I'd be interested in knowing if anyone's tested it, but for now I'm just going to assume it's true.

We could avoid the whole situation if we switched from speed to slowness, the reciprocal of speed. A speed of 60 miles per hour is a slowness of one sixtieth of an hour per mile, or 60 seconds per mile. If I tell you that a two-mile stretch of road has had its slowness increased from 60 seconds per mile to 72 seconds per mile the additional journey time will be obvious, and it'll also be clear that it's not a big deal; the additional 24 seconds is down in the noise compared to the uncertainties in my journey time.

The slownesses corresponding to our current speed limits mostly come out to whole numbers of seconds per mile because the number of seconds in an hour, 3600, is highly composite (thanks, Babylonians). So 10, 20, 30, 40, 50, 60 and 70 mph correspond to 360, 180, 120, 90, 72, 60 and 51.43 spm (oh well Babylonians, you did your best). Let's round that last one up to 52 spm, or 69.23 mph, rather than down to 51 spm, or 70.59 mph.
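
If you'd like to check the arithmetic, here's a minimal Mathematica sketch (mine, nothing official about it) that converts speeds to slownesses and redoes the two-mile example:

slowness[mph_] := 3600/mph  (* slowness in seconds per mile *)
N[slowness /@ {10, 20, 30, 40, 50, 60, 70}]
(* {360., 180., 120., 90., 72., 60., 51.4286} *)
2 (slowness[50] - slowness[60])  (* extra seconds over the two-mile stretch *)
(* 24 *)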

If we're going to argue that using slowness could reduce the perceived advantages of going faster when you're already going fast then we should consider what the effect will be when you're going slowly. Schools on main roads, which already have 30 mph speed limits, quite rightly have signs saying "Twenty is Plenty" - might motorists be less willing to increase their slowness from 120 spm to 180 spm than they are to reduce their speed from 30 mph to 20 mph? They might be, but the remedy is to address what they care about, journey time, with a sign saying "Take an extra 10 seconds [say] to go past our school at 180". OK, it's not as snappy as "Twenty is plenty", nor does it rhyme, but someone can work on that.

You might be wondering what this has to do with acoustics: when we use rays to find approximate solutions to short-wavelength wave problems it's very common to calculate the slowness along a ray, which can be integrated over the ray's path to give its total travel time. This usage started in seismology (i.e. low-frequency short-wavelength solid acoustics) but shows up in plenty of other areas of acoustics.
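
In symbols (my paraphrase of the standard ray-tracing bookkeeping, not a quotation from any particular book): writing $s(l) = 1/v(l)$ for the slowness at position $l$ along the ray path, the travel time is

$$T = \int_{\text{path}} s(l)\,dl = \int_{\text{path}} \frac{dl}{v(l)},$$

so time accumulates stretch by stretch, exactly as it does for the car.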

Rays, however, have an important difference from cars: they don't stop at traffic lights. If we're calculating journey time by integrating slowness with respect to distance and the car stops we'll have a singularity, where slowness is infinite for an infinitely short section of the journey. Fortunately most cars have clocks and time proceeds untroubled when the car stops, so the time spent stopped can just be added to the journey time. As a bonus your slow-ometer (the old speedometer with a new dial) would have an infinity symbol on it which would a) look cool and b) encourage mathematical literacy.

Unlikely though this is to happen there's a more serious point here: the fact that speed and slowness are mathematically equivalent does not mean that they are psychologically, sociologically or politically equivalent, and there are plenty of similar choices to be made. Sticking with cars, do you measure a car's mileage or its fuel consumption? Which map projection should you use when you plan an international trip? There are arguments to be made in each such case; I don't intend to make them here but my overarching point is that engineering is, or should be, a person-centred discipline and engineers have to think about these things - getting the maths right is necessary but not sufficient.

Postscript: There's an important case in acoustics where reciprocal quantities are less equivalent than they seem. When I first learnt that admittance was the reciprocal of impedance my reaction was to wonder why I was being asked to, effectively, remember a new name for something I already knew, and the fact that the real and imaginary parts of each had special names didn't change my mind. It wasn't until much later, when I encountered the multi-channel case in which the admittance matrix is the inverse of the impedance matrix, that I appreciated the value of having both concepts, so I try to explain that when introducing them to my students.
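
A tiny Mathematica illustration of the point, with numbers invented purely for the example: in the multi-channel case the admittance matrix is the matrix inverse of the impedance matrix, which is not the same thing as taking the reciprocal of each element, though the two do coincide for a single channel.

Z = {{2, 1}, {1, 3}};  (* an invented 2 x 2 impedance matrix *)
Inverse[Z]             (* the admittance matrix: {{3/5, -1/5}, {-1/5, 2/5}} *)
1/Z                    (* element-by-element reciprocals: {{1/2, 1}, {1, 1/3}} - not the same *)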

Saturday 26 September 2015

Memories of Geoff Lilley

On Monday morning I got a phone call telling me that Professor Geoffrey M Lilley had died the previous day at the age of 95. The news, though sad, wasn't entirely unexpected; he'd been quite frail for some time and several planned excursions to visit him earlier in the year had had to be put off as he hadn't been feeling up to it. This obituary, posted by the University of Southampton, refers to his 'wit and skill as a raconteur', and when I emailed some of his former friends and associates I wrote "Doubtless lots more will be written about him in due course - everyone who met him came away with a story". I thought I ought to write down my 'Geoff stories' while I remember them, and here seems as good a place as any to make a start - he takes a while to make his appearance in this one, I'm afraid.

Non-Standard Analysis

My first paid work (as opposed to study) at the ISVR was on boundary-layer suction. I needed to brush up my knowledge of boundary-layer theory - the fluid dynamics lectures I'd had from the formidable Professor P O A L Davies as a BEng Engineering Acoustics and Vibration student in the late 80s, while fascinating and challenging, hadn't given me as solid a grounding as our current MEng Acoustical Engineering students get nowadays.

Around the same time I read The Problems of Mathematics by Ian Stewart and was intrigued by the chapter about Abraham Robinson's Non-Standard Analysis. During my PhD Joe Hammond, my supervisor, had encouraged me to make contact with David Chillingworth in the Maths department and take his course on Advanced Calculus with Applications. Another fascinating course, this one from a pure mathematician (the 'application' turned out to be that if we were to cut out two particular cardboard shapes, glue them together with cork spacers, stand the result on its side and persuade a heavy enough beetle to walk along one of the perpendiculars, the structure would topple over when the beetle crossed a particular curve), and it introduced me to rigorous methods while showing me how little I, an engineer, knew about that whole area. Non-Standard Analysis seemed to offer a way to formalise the way engineers thought about infinitesimals and, Stewart suggested, allowed results to be obtained that would be much harder to derive by standard methods - 'canards' for instance.

The chapter's last section was called Logic for engineers (no offence, eh?) and mentioned some areas of perturbation theory where it had been applied, one of which was boundary layer flow! This was exciting - perhaps I'd stumbled across a skeleton key that would enable me to unlock wonderful new results in boundary layer theory that couldn't be found any other way? I had to find out more, but all the references for the chapter seemed to be mathematical expositions of the method rather than applications, none more so than Robinson's original book on the subject, which I flicked through but found very dense after Stewart's gentle introduction. I asked David Chillingworth if he knew who had applied Non-Standard Analysis to boundary layers. He didn't know, but asked Ian Stewart, who couldn't remember where he'd got the boundary layer story from. I searched for references to Non-Standard Analysis in the engineering literature and found that Feri Farassat at NASA had been using it for infinitesimal shock thicknesses, which looked promising, but when I next met Feri and asked him about it, not only did he not know the boundary layer reference, he warned me against using Non-Standard Analysis for perturbation problems at all.

I'd run out of leads when I met Geoff in the staff dining room. (In those less crowded days every table had paper napkins arranged alternately white and coloured - it was understood that the coloured ones were absorbent and were for mopping up spills, while the white ones were for sketching graphs and equations.) Geoff asked what I was up to and I told my tale, somewhat surprised by the delight he seemed to be taking in it, as I really wouldn't have expected him to have much interest in that sort of thing. His broad grin made it clear he knew something I didn't. The story soon unfolded: when he was at Cranfield College of Aeronautics Geoff had been friends with the inventor of Non-Standard Analysis, Abraham Robinson, who in the 1940s, despite his main field being logic and analysis, had thrown himself into aerodynamic theory as a contribution to the war effort and become a senior lecturer there. Geoff explained that they used to have endless arguments about the importance of rigour, and that he had teased Robinson that his ideas were all very well but irrelevant to anything he, Geoff, was interested in. So when 'Abie' published his comprehensive book Non-Standard Analysis he took great delight in giving Geoff a copy and telling him "I've even put in a boundary layer example, just for you!". I went back to the Library and there it was, hidden away at the back: a derivation of the basic equations in one paragraph. And if it hadn't been for Geoff it wouldn't even have been there at all.

Sunday 11 August 2013

An engineer's approach to the Instant Insanity puzzle

Last month we had the pleasure of a visit from my old friends Colm Mulcahy and Vicki Powers, both US maths professors. Vicki had been attending a meeting at the Isaac Newton Institute and Colm was giving talks based on his new book Mathematical Card Magic including one, at my suggestion, at Winchester Science Festival where I was speaking on "How Science Shaped Music". It was great to catch up and for my family to meet them, and Colm kindly brought several puzzles for David, including the one I want to write about here. 

Instant Insanity is the usual name for a set of four cubes with each face having one of four colours. The challenge is to pile them up so that each side of the stack shows each colour once. When I'd first heard of it as a child I'd mistaken it for Charles Hinton's four-colour cubes, mentioned in Martin Gardner's Mathematical Carnival, which Hinton claimed helped develop intuition about four-dimensional shapes. Gardner quotes a correspondent who claims Hinton's cubes drove him to the edge of madness; I must have heard that a four-colour cube puzzle was called Instant Insanity and concluded that this was it. But it isn't.

There's an analysis here (ppt) by Patrick K Asaba that uses a decomposition principle and graph theory to find a solution and show that it's unique for a particular set of cubes (how many different sets are out there, I wonder?). I've nothing against it, but not being a graph theorist I preferred to attack the puzzle with more basic tools and, possibly, more of an engineering approach.

First, how hard is the puzzle? As Asaba observes, each cube has 24 symmetries, though rotating the stack about its axis doesn't destroy a solution so I'd count the configurations to be chosen from as 82,944 rather than 331,776. That's still a lot, and suggests that brute force won't be any use so something elegant (like graph theory) will be needed. On the other hand, imagine that each cube had been drilled through, and a rod inserted through them. From that point, it would be fairly easy to either turn the cubes so that they solved the puzzle or to conclude that the wrong faces had been drilled and that no solution is possible. In fact here's a short piece of Mathematica code that allows you to 'turn' the (unfolded) cubes, and to reverse them, since they can be slid onto the rod either way up.

s = {{0, 0}, {0, 1}, {1, 1}, {1, 0}};  (* corners of a unit square, one drawn per face *)
cols = {Red, Blue, Green, Yellow};     (* colour 1 = Red, 2 = Blue, 3 = Green, 4 = Yellow *)
f = {{2, 3, 1, 4}, {3, 2, 4, 1}, {1, 4, 4, 2}, {2, 3, 1, 3}};  (* the four exposed faces of each drilled cube *)
Grid[Table[
  Module[{q = j},  (* q remembers which cube this row of controls belongs to *)
   {Button["<", f[[q]] = RotateLeft[f[[q]]]],   (* turn the cube one face to the left *)
    Graphics[
     Dynamic@Table[{cols[[f[[q, i]]]],
        Translate[Polygon[s], {i, 0}]}, {i, 1, 4}]],  (* draw the faces as coloured squares *)
    Button[">", f[[q]] = RotateRight[f[[q]]]],  (* turn it one face to the right *)
    Button["<>", f[[q]] = Reverse[f[[q]]]]}],   (* slide it off and back on the other way up *)
  {j, 1, 4}]]


So how many ways of drilling and sliding the cubes are there? Each cube can be drilled three ways; the first cube's way up doesn't matter (turning the whole stack over is just another symmetry), but each subsequent one can be slid on either way up, so there are 3 x 6 x 6 x 6 = 648 combinations. Still more than I'd care to try individually but a lot less daunting than 82,944. Here are the four sets of three choices for our set of cubes:


Cube 1:
B G R Y
G G R Y
B G R R

Cube 2:
Y B Y R
Y G Y Y
G B Y R

Cube 3:
R Y G B
R R G Y
R Y Y B

Cube 4:
B B R Y
B G R G
G B G Y

(The set Colm gave us had yellow faces rather than white ones.) Our task, then, is to pick one line from each group; the chosen strips can then be flipped and rotated until a solution is found. There are only 81 ways of doing this; can we exclude some at this stage?

Once we've chosen our four rows there have to be four of each colour, and that's by no means guaranteed with a random choice. We can exclude the Y G Y Y row of the second cube from further consideration: it contains three of the four Y's, so the other three rows can only contain one Y between them, and since every row of the third cube has at least one Y, the first and fourth cubes would have to show their Y-free rows, B G R R and B G R G. Those three rows between them already contain all four G's, so the remaining row from the third cube would have to have exactly one Y and no G, which none of them do.

We can tabulate the number of times each colour appears on each row:

            R B G Y
Cube 1:
  B G R Y   1 1 1 1
  G G R Y   1 0 2 1
  B G R R   2 1 1 0

Cube 2:
  Y B Y R   1 1 0 2
  Y G Y Y   0 0 1 3
  G B Y R   1 1 1 1

Cube 3:
  R Y G B   1 1 1 1
  R R G Y   2 0 1 1
  R Y Y B   1 1 0 2

Cube 4:
  B B R Y   1 2 0 1
  B G R G   1 1 2 0
  G B G Y   0 1 2 1

We can't choose four lots of 1 1 1 1 because the fourth cube doesn't have any. We could interpret rows of numbers as the digits of single numbers and try the 54 remaining ways of adding them up to see how many give 4444. As it happens there are five. This might be a little too much brute force for your tastes but the chances you'd go insane before you found them are reasonably slender.
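
If you'd rather let the computer do the adding up, here's a minimal Mathematica check, with the count tables above typed in by hand; it picks one row from each cube and keeps the selections whose colour totals come to four of each (it lazily tries all 81 selections rather than 54, but the ones containing Y G Y Y contribute nothing, as argued above):

counts = {
   {{1, 1, 1, 1}, {1, 0, 2, 1}, {2, 1, 1, 0}},   (* cube 1 *)
   {{1, 1, 0, 2}, {0, 0, 1, 3}, {1, 1, 1, 1}},   (* cube 2 *)
   {{1, 1, 1, 1}, {2, 0, 1, 1}, {1, 1, 0, 2}},   (* cube 3 *)
   {{1, 2, 0, 1}, {1, 1, 2, 0}, {0, 1, 2, 1}}};  (* cube 4 *)
Length[Select[Tuples[counts], Total[#] == {4, 4, 4, 4} &]]
(* 5 *)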

[to be continued]

Friday 19 October 2012

Skeptics in the Pub: forum or echo chamber?

Unless you've been living under a rock you'll have heard of Skeptics in the Pub; it's a brilliant idea where (as the name implies) a bunch of skeptics meet once a month or so in a pub or other convivial location for an address by a guest speaker. Mobility and other commitments haven't let me get to nearly as many sessions as I'd like, and no one seems to have invited me to speak at one, though I've given skeptical talks elsewhere in the past, based on my Short History of Bad Acoustics as well as more cafe scientifique type stuff. But most of those I've been to I've greatly enjoyed, and I'm grateful to their organisers for doing something I wish I'd done when I was younger and had the energy. I just think they might be missing a trick.

All the speakers I've heard of being invited to SITP sessions are skeptics themselves, and many of them are fine speakers with important things to say. A few, and I won't name names, seem to be there to tell skeptics to be skeptics, which strikes me as 'preaching to the choir' (though as an atheist ex-choirboy that's not as pointless as it sounds).

Recently a SITP group announced that one of their future speakers would be Rupert Sheldrake, proponent of 'morphic resonance', the idea that you can tell when you're being looked at, and that dogs can tell when their owners are coming home. Not many people take these ideas seriously, and disabusing those who do doesn't seem to me to be the world's most important job. But for the record I disagree with all his conclusions and dispute the reasoning that leads him to them. So is his appearance at a SITP meeting a good thing or a bad thing?

Several twitterers made it clear that they thought it was a bad thing. I'm not so sure, but don't find it easy to condense my reasons into twitter-length, hence this blog-post. The thing is, regardless of how wrong I think his conclusions are, I find the process of identifying and articulating the flaws in his reasoning useful. I'm also aware that I'm not as good as I'd like to be at calmly and lucidly expressing and explaining my opposition to some ideas.

It's a sadness to me that the last time I saw an old friend before his death it was over a cheeseboard, and our conversation went from Shropshire Blue to organic farming to homeopathy, where it became clear we had a difference of opinion that I allowed to escalate into an unproductive slanging match. I don't suppose I had much if any chance of changing his mind, but I could have expressed my reasons better than I did, and maybe influenced some of the other people present if any of them were on the fence. As it happened neither of us was much of an advertisement for our viewpoint, not helped by the fact that we were both in wheelchairs, so a bystander wouldn't have seen either of us as an example of healthy living.

So I'm prepared for the possibility that, as predicted by my twitter-chums, Rupert's SITP session will descend into a 'slagging match'. But I hope it doesn't, because if we Skeptics can't disagree with someone without losing our individual or collective rags then we've got a problem. And frankly I think we can do with the practice, myself included. Some have suggested that this is like the 'false balance' that programmes like Today are often accused of. Maybe it is, but sometimes false balance is all you've got: suppose Today asked you to come on and discuss Rupert's views with him and a presenter who imagines that the truth must lie somewhere between your viewpoint and Rupert's. Opt to stay in bed and he'll go unchallenged. Appear and employ all the withering scorn you like at whichever preposterous idea he's pushing this week but I guarantee that he'll come across as more reasonable and persuasive than you will. Are you quite sure you don't wish you'd come to his SITP session and tried out a few counter-arguments before getting in the Radio Car?

Dog telepathy and so on is all very well, but homeopaths who provide malaria 'prevention' are potential killers; that should make any self-respecting skeptic's blood boil, shouldn't it? No disagreement from me, but remember when Simon Singh took them on on Newsnight: each time he was firm and eloquent, but he was also calm and respectful. I know too many skeptics who just couldn't manage it, and could do with some practice.

Has the SITP group that invited Sheldrake inadvertently endorsed his views by inviting him to speak? I don't think so, but it would be a lot easier to counter that claim if he weren't seemingly the only non-skeptic ever to be invited to such an event. By the way, I'm struggling to find a word for the type of speaker I mean; 'woo' is nice and short for twitter purposes but doesn't really capture it, and 'non-skeptic' has the drawback that everyone considers themselves to be skeptical. Anyway, whatever you call them, I'm not for a moment suggesting that they'd all make appropriate SITP speakers. Many are so incoherent and/or deluded that debating with them is impossible and an attempt would probably just exacerbate mental health problems. The only place I want to see 'Jasmuheen' is in prison, possibly a psychiatric one.

But that's not the case with all of them, and there's a difference between someone with whom debate is impossible and someone whose opinion it's impossible to change by debate. I don't for a minute think that Rupert Sheldrake is going to change his mind during his SITP session, or that any skeptics are going to come around to his views, but debate is worthwhile even so. I'd suggest that my University of Southampton colleague Professor George Lewith would be an ideal candidate for an SITP invitation. I don't agree with his conclusions but I'm sure he can respond to counter-arguments without blowing his top, and I wish I could be surer that the same could be said of a SITP audience.

One last point: you might think that this class of invitee would be unlikely to accept such an invitation (though apparently Sheldrake did). Fine. Being able to say "We invited X to present his case for alien abduction/crop triangles/etc to an audience of skeptics but he/she declined" is not without value. In the meantime, debate is too important to be left to the Institute for Unspeakable Ideas.

Wednesday 30 November 2011

US National Academy of Sciences announces new patron: Jenny McCarthy

 For immediate release

Jenny McCarthy to be next Patron of US National Academy of Sciences

Washington DC: At a crowded press conference today a spokesperson for the US National Academy of Sciences confirmed that its next patron would be noted actress, author and activist Jennifer McCarthy. Reading from a prepared statement the spokesperson explained that the role of patron is principally that of a ceremonial figurehead and is traditionally given to a figure from the entertainment industry whose public profile and connections would allow them to showcase the work of the Academy. The spokesperson continued:
"Ms McCarthy is Hollywood Royalty, and will attract the sort of attention that we could never hope to on our own. We look forward to strengthening our relationship with her, which began when she was awarded honorary Membership of the Academy under the special rules that allow us to admit selected showbusiness legends, though without voting rights."
Responding to questions from reporters the spokesperson dismissed as 'malicious gossip' the suggestion that Ms McCarthy's record of statements claiming a link between vaccination and autism and sustained criticism of the scientific community might conflict with the Academy's stated aims, stating that "As I already said, the role is ceremonial, her views are her own, and anyone who thinks they disqualify her from playing a role in the Academy's mission is obviously nursing some kind of anti-showbusiness grudge." Asked to confirm that the appointment was for life the spokesperson reminded reporters that Ms McCarthy would only take up the post after the death of the current incumbent, Shirley MacLaine.

 Ends

[The above press release is, of course, both fictitious and absurd. I wish the Royal Society (the UK Academy of Science) the best of luck when the time comes for them to issue an announcement that, while equally absurd, will sadly be all too real.]

EPSRC: "Hope you like our new direction (but if not we don't care)"

EPSRC, the Engineering and Physical Sciences Research Council, which has funded most of the research I've done, and is one of only a few bodies likely to fund the research I want to do, has made what seems like a radical change of strategy over the last few years, though hints toward it go further back than that. The thumbnail version is that they used to fund whichever ideas the STEM community deemed best and now they fund the areas they've decided are most important to the UK economy. In reality it's not quite that black-and-white but it's certainly true that the Delpy Axe is cutting back vast swathes of STEM research that might once have been seen as viable in the same way that the Beeching Axe closed so many branch lines on the British railway system in the 1960s. There's a lot to say about this, but I've only got a day to get a lot done so what follows is more a series of marginal notes than the surgical dissection the issue deserves.

The bellwether for this was the e-Science initiative of the early 2000s, which ring-fenced a portion of EPSRC's budget for whatever could be passed off as e-science, though this tended not to include new ways to exploit the base of natural logarithms. For this we can thank Gordon Brown who, as Chancellor, was persuaded by then Science Minister Lord Sainsbury that this stuff, whatever it was, was obviously too good to risk the possibility that the STEM community might think there was anything better, and had to have the money regardless. Wands were waved and it happened, though not without some disconcerting dislocations, such as the Connections cover story in which e-science czar Tony Hey simultaneously announced that e-science would be judged on its achievements, and that e-science mustn't be judged on its achievements because they were in it for the long haul. Or the EPSRC Fellows' Seminar I attended where a retiring official gave us a remarkably patronising harangue in the course of which he twitted us for (he assumed) not knowing about the Haldane Principle while defending the e-science initiative that made a mockery of it. He also opined that science funding gave the best return of any investment and that the government ought to do less of it, and gave us a distorted printout of a Monet painting and ordered us to hang it in our toilets. We shall not see his like again, I dare say.

So what about those results? Some very interesting work has been done with e-science funding, but only because, in the words of one Professor of my acquaintance "It became apparent that EPSRC were utterly determined to piss a vast amount of money away, and the only thing to do was to grab a bucket and get in position." He was highly successful at this but many weren't; I heard about so many attempts to recast ideas that involved computer networks, however tangentially, as e-science and it was all time wasted. I mentioned this to another official at the fellows' seminar, in between the country dancing, and she demanded to know where I'd heard of this happening. The first three places that came to mind were Southampton, Cambridge and Edinburgh. "Well, they're all e-science Universities" she said, "so that's not a problem". As for the planned outcomes, I talk to lots of people who use high-performance computing and very few of them care about e-science. The idea that giving a percentage of our science budget to computer scientists makes computing faster is akin to the idea that carrying this stone around prevents tiger attacks; it should be tested by seeing if the effects abate when the purported cause is removed.

In the mean time, this approach has spread and spread, so that now EPSRC have a portfolio of priority areas and a timetable for how they see them growing and shrinking, so woe betide you if you, a mere scientist, have had what you fondly imagine is a good or important idea in what they deem to be a shrinking area. And the specificity is scary. To take an example that's close to home for me, sustainable energy is a priority area, one of the least contentious ones by common consent. The UK needs, according to the IMechE 2050 Energy Plan, a 40-fold increase in wind energy capacity to meet its targets. The UK also has a much lower rate of installation of onshore capacity than comparable EU countries; instead we're sending it all offshore at considerable extra expense per delivered kilowatt-hour. The reasons for this appear to be a fascinating confluence of political, sociological, psychological, acoustical and other engineering issues which a number of colleagues and I think we have a real chance of untangling and addressing. But EPSRC have determined that onshore wind does not count as the kind of sustainable energy they want to prioritise research into, though offshore wind does. It won't stop happening, but it might not get any quieter. Remember that when planning permission is requested for a wind farm near your retirement cottage.

But they don't just want to decide what should be researched but where it should be researched, a circumstance that allows me to drag in not one but two (near) folksong references. Not only should all academics be like the Vicar of Bray, ready to subvert their vision of what needs to be done to that of the Monarch/Council, they should be prepared to up stumps and travel to wherever the centre for that activity is. A significant number of distinguished academics are what are tastefully called 'trailing spouses'. From (the Universities of) Hull and Halifax and Hell, good Lord deliver me.

In these straitened times it might be said that the market will supply ample academics, as long as the policy is good for the country as a whole. Is it? Evidence is really hard to find; instead we tend to fall back on anecdotes. You might say that without a national programme we wouldn't have gone to the moon. I might reply "who's this 'we'? I still haven't been." I could quote Samuel Broder, former director of the US National Cancer Institute, who said
If you had demanded that the NIH solve the problem of polio not through independent, investigator-driven discovery research but by means of a centrally directed program, the odds are very strong that you would get the very best iron lungs in the world - portable iron lungs, transistorized iron lungs - but you wouldn't get the vaccine that eradicated polio.
I could also observe that in the 1970s physicists were widely criticised for wasting their time playing with toys when there was an economic crisis going on and they could have been doing something useful. The toys in question were lasers. You might like to pause and count how many are in the room you're reading this in.

Of course a concentrated push can achieve a lot; just look at the Manhattan Project. That was a wartime effort, and you could argue that we are at war with global warming. Are we at war with the Digital Economy? With Nanotechnology, and Complexity? In the 1970s there was great excitement about Catastrophe Theory, which was going to take advanced mathematics out of the ivory tower and embed it in the social sciences. It was going to explain anorexia, the fight-or-flight response and cold-war escalation. This didn't happen, and the word 'explain' in the previous sentence was found to be an overambitious substitute for 'be a bit like'. In time the fanciful stuff and the bandwagon jumpers washed away and what was left became bifurcation theory, part of chaos, the next big hype. Now it's complexity, some of which is amazing and some of which is nonsense, and it's still a little early to say which is which in many cases. I've got some ideas that could be framed as complexity, and will be pathetically grateful for whatever scraps of funding I can get for them. But imagine if there'd been a Doctoral Training Centre in Catastrophe Theory back then? We'd know no more than we do now, we'd have spent a lot more, the blind alleys would have been exhaustively mapped and a large number of researchers would have specialised in something they couldn't find much use for later.

Notice that I said 'could be framed' above. Whether they will be won't depend on scientific merits, but on funding likelihood. Every University department/school/faculty will have one or more academics whose role, on top of their own research, is to make sure that all opportunities for research funding are being exploited to the maximum extent possible, so it may not be just the individual's choice. In fact, the department's research income depends on both the overheads on its grant income and its REF score. As both EPSRC and HEFCE become more prescriptive about what they do and don't want to see, this becomes an increasingly tricky optimisation problem, not helped by the large element of double-counting: EPSRC like to give money to people they've given money to before, presumably on the basis that they imagine they rarely fund bad work or fail to fund good work.

This arrangement certainly isn't good for scientists, but is it good for the country? Imagine two professors in adjacent offices. Prof A applies for and gets a million pounds worth of grants, employs five research assistants and publishes ten papers in reputable journals. Prof B thinks very hard and publishes ten papers in reputable journals. What should we say about these two? I wouldn't for a moment say that all academics should be like B; apart from anything else A has helped train a number of young scientists. But as taxpayers shouldn't we be a little pleased about the million pounds that B has saved us, and a little perturbed that, from the point of view of his employers, the University, that 'saving' is seen as nothing but a failure?

Both those Professors will increasingly find that they are trying to serve two masters. EPSRC have made no bones about wanting to move from funding research to sponsoring it, though they don't yet go as far as, say, the MRC who directly employ researchers. EPSRC's previous vision statement (which I can't dredge up at this moment) made it sound like they already did, and back when I was doing my PhD on a SERC quota studentship their published guidance to reject all paid employment until we'd written up certainly sounded as though they thought they'd bought us wholesale, and pretty cheaply too, given the size of maintenance grants in those days. Nowadays EPSRC have the power to shut down successful departments with a small change of policy. Is it unreasonable to wonder whether things could be better managed?

Anyway, I'd like to make a couple of modest proposals, not about policy, good heavens no, far be it from me etc., just about the dialogue we have about it:

  1. Please could we have an immediate moratorium on the use of the phrase "hard choices"? If you're going to kill someone's career by making the research their life has been leading up to impossible then it's rather poor form to ask for their sympathy because it was so difficult for you.
  2. Please could EPSRC collectively recognise that the path that they have chosen in the last few years is, while clearly a response to undeniable conditions, not the only possible response to those conditions, and that many eminent people are unconvinced that it is the best one, and accordingly 'wind their necks back in'. I've seen a mid-level EPSRC functionary scowl and wag his finger at a roomful of seriously heavy-hitting engineers and tell them "you have to live in the real world" when one of them dared point out a possible negative effect of the strategy he was describing. That's completely inappropriate; you're not talking to relativist postmodern philosophers, and if you don't want a dialogue stay at home.
  3. Please, if you're going to kill responsive mode (as the rumours claim) just announce it so we can plan accordingly. The 'non-denial denials' are driving us mad.
One last thing: I hope you don't need to be told that the Beeching Axe utterly failed to achieve the savings it set out to.