12.27.2013

Forget Smarm and Snark. This is bigger than being ironic.

Note: Started this post a few weeks ago; now I've just come back and completed it.

I just finished eating lunch with Rich Brown, a faculty member in UW-Madison's Department of Family Medicine and the speaker for today's Neuroscience and Public Policy (NPP) seminar, and I have a curious case of the warm fuzzies. I'll be seeing him talk at 4.

For the past 24 hours -- that is, since walking out of a discussion section on the last seminar in the series, delivered by UW Life Sciences Communication professor and wunderkind Dietram Scheufele -- I've been preoccupied with dark and gloomy thoughts on the state of the American political system. Research done by Scheufele, his colleague Dominique Brossard, and others on science communication has revealed some depressing implications for how we think informed dialogue and debate should go in the U.S. Basically, that body of work has exposed shortfalls in the so-called "deficit model" of public engagement in science: the idea that if only people were more familiar with the facts and figures, if only they knew what we who study an issue know, they would see things as we do and shake themselves loose of deeply-entrenched ideological positions.

As it turns out, that's not true. At best, we've overestimated the persuasiveness of scientists or the open-mindedness of Americans (not to mention the possibility of being wrong in the first place). At worst, all the people who listen to evidence have already joined the team, and the people who remain are recalcitrant or simply subscribe to a different view on how facts matter in determining what is right and good. The extent to which new information changes their opinions tends to decrease with stronger ideological and religious commitment, though I'm not interested in falling head-over-heels into an ad hominem attack. The point is, we can't just sit people down in a room, show them the graphs, make sure they understand, and send them on their way to building a brighter, more rational world. We have probably been arrogant all along to believe academia has (or deserves?) that kind of power, at least to the extent the findings hold up.

You may have also heard some hubbub about comment sections, especially in science reporting, and how flamers and trolls really do distort people's perceptions of the science so they walk away with the wrong ideas. Again, that was Scheufele and Brossard.

At the end of Scheufele's lecture a couple weeks ago, the cameraman recording the talk for online access piped up during the Q&A session, saying that of the hundreds of lectures he'd thus recorded, this one was the most depressing. If the evidence suggests that many people just don't listen to facts and reason after all, how the heck are we going to solve the increasingly American problem of policy and politics divorced from reality-based solutions?

Scheufele, cheery and charming though he is, said only that his work identified a problem rather than proposing a solution... and furthermore, he grimaced apologetically, he might be wrong?

Don't worry, I'll get back to Rich Brown and my warm fuzzies. Hang in there, I want you to get there with me.

------------------

So yesterday's discussion section, concerned with Scheufele's talk, was a lightning round of political philosophy, legal analysis, and fruitless efforts to pin the cause of the problem on anything or anyone in particular. My friend in the Neuroscience and Law program, Joe Wszalek, quite defensibly felt there existed enough precedent and latitude in the framers' views on tyranny of the majority to see our current gridlock as an unusually bad case of democratic flu, rather than the first symptoms of something more sinister. I adopted the view that, to extend the conclusions of Hacker and Pierson's Winner-Take-All Politics (also mentioned in my last post), our system is approaching the state of a "solved game" such as Connect 4, in which there exists an opening-move strategy that can never lose. I argued that in the current situation, that solution stems from the convergence of three things:
  1. The American political system is designed so intently to check power that it is much more disposed to resist radical change than it is to adopt it, especially over the protests of a minority. It thus takes a lot less political will to keep things as they are than it does to change them.
  2. Our electoral mechanisms ensure the survival of a two-party political system, in which gridlock is basically inevitable as party territory will forever shift to accommodate changes in political attitudes until it reaches a sort of Nash equilibrium. It's no coincidence that Presidential elections are routinely won 51-49.
  3. One party has as its primary policy agenda the simple preservation of the status quo.
If that party chooses to use the leverage afforded them by our inertia-favoring system to enact inertia, they will rarely have to work harder than the other party or expend more political capital to win every fight they enter. This blog is not a book, at least not yet, so I will not go into a very long discussion on the particulars of my position, though I encourage you to research more on your own with Hacker & Pierson, and their skeptics, as a jumping-off point.
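For the game-theory-curious, the Nash-equilibrium claim in point 2 can be sketched with a toy median-voter simulation. This is entirely my own illustration, not a model from Hacker & Pierson: two vote-maximizing parties on a one-dimensional spectrum, each inching toward the median voter, end up nearly indistinguishable and split the electorate close to 50-50.

```python
import random

def simulate(n_voters=10001, n_rounds=50, step=0.1, seed=1):
    """Toy median-voter model: two parties repeatedly nudge their
    platforms toward the median voter to pick up crossover votes."""
    rng = random.Random(seed)
    # Voters' ideal points on a one-dimensional left-right spectrum
    voters = sorted(rng.gauss(0, 1) for _ in range(n_voters))
    median = voters[n_voters // 2]
    a, b = -1.5, 1.5  # starting platforms: one left, one right
    for _ in range(n_rounds):
        a += step * (median - a)
        b += step * (median - b)
    # Each voter votes for whichever platform is closer
    votes_a = sum(1 for v in voters if abs(v - a) < abs(v - b))
    return a, b, votes_a / n_voters
```

After enough rounds the platforms are nearly identical and the vote splits close to even -- the toy-model version of those perennial 51-49 elections.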

Then yesterday evening, I read this article in Gawker which introduced into the memetic lexicon a new (or at least, revitalized) buzzword, smarm, whose insidious spectre looms large over the internet's other favorite accusation, snark. In it Tom Scocca argued that, for all the legitimacy of complaints against needless snark in criticism, smarm -- a form of bulls**t characterized by hollow calls for civility and indignation at pettiness -- is the deadlier disease, for it can never be well-intentioned. It is "cynicism in practice," not in attitude, says Scocca, and he has a point. We all hate political talking points, even as we all acknowledge that in the current paradigm they're an inevitable part of playing the game. I, more than most progressives I hang around, insist we mustn't dehumanize politicians and instead should blame the perverse incentives of this great big Connect 4 game we keep playing, but it doesn't change the fact that campaigns are more about (metaphorically, and maybe literally) gauging whether the median voter prefers McDonald's or Burger King than about what policies will best help the poor even eat there.

I even ended up reiterating with my girlfriend a previous debate we'd had about the character of politicians, the extent to which we should hate the players or the game, and whether it was even possible to run a nationally-visible campaign for public office primarily on evidence-based policy platforms.

Finally, this morning while putzing around waiting to meet Brown for lunch, my NPP colleague Princess Ojiaku bumped into me, and we talked about the implications of Scheufele's work and our fears that all this was intractable, despite my convictions that Americans are smart, deserve the smartest policies and politicians in the world, and would gladly bring all of it about given the chance. Even a follow-up to the discussion from our program's advisor, noting that researchers were working on ways to expose social media users to new ideas while still taking their preferences into account in generating newsfeeds, seemed like a consolation prize; a footnote.

I was genuinely worried, for the first time in a long time, that nothing I or anyone else could do would stand a chance of changing all that. Princess and I left the building (which is dedicated to public and industry research cooperation), crossed University Ave. (on the campus of a school dedicated to sifting and winnowing for the truth and putting that truth to work for its state), and without remembering any of that, trudged down the hall towards the conference room for lunch.

----------------------

Richard Brown is a very nice guy, in a way that is both stereotypical and specific. Not a generic, neighbor-cuppa-sugar nice, but an I-bet-you're-interesting nice. He asked us about our research interests, and then he told us about his while we ate sandwiches. He's primarily interested in promoting the use of screening for mental health and substance use issues as a means of entry to interventions, which his research, done in collaboration with the La Follette School's Dave Weimer, has found to be a very effective way of ensuring that people get care they don't realize they need. He also told us about his favorite thing in the world, I'd judge: a style of medical support called Motivational Interviewing (MI). I'd heard of it, but I wasn't as familiar with it as I was with better-known tools in the therapist's toolkit -- Cognitive Behavioral Therapy, mindfulness therapy, etc. I remembered it had something to do with giving the patient more credit for their successes.

He explained that MI was a means of interacting with patients that focused on maintaining the patient's agency. Instead of hunting for poor health behavior, harping on potential adverse outcomes, and hammering the desired changes, MI is structured to help patients volunteer useful information, form discrete goals, and analyze their own health problems. In particular, it requires the clinician to be supportive, non-adversarial, and above all empathetic towards the patient. The doctor is there to be a guided sounding board, a source of information, and affirmatively supportive. Ultimately, the objective is to help patients resolve ambivalence, arrive at a goal, and then commit to the changes they need to make to reach that goal.

Here's why this is relevant. MI has been shown to be more effective than traditional clinical practice in a number of different settings since it was formulated in the early 80's by Miller & Rollnick. The man on whose work MI is based, Carl Rogers, developed the humanistic approach to psychological counseling, and was eventually nominated for a Nobel Peace Prize because of the applications of his methods to cross-cultural conflict resolution in places like South Africa and Northern Ireland.

In other words, MI represents the evolution of a rhetorical style whose central tenet is that perspective-taking and empathy are the keys to persuasion. Now, I've been lucky enough to study the mechanisms of empathy, and its role in psychopathy, in a terrific lab, and I've been treated to some of the most cutting-edge thinking in the academic world on the subject by collaborators. Empathy alone is not the only thing that separates most people from psychopaths, but it's a big part, and I freely admit it's an idea at the center of a lot of my personal pet theories. With that disclaimer out of the way, in my view MI provides a certain measure of validation for empathy as a fulcrum in facilitative social transactions. If it's true that the "deficit model" fails to predict the stubbornness of many Americans in the face of scientific evidence, the role of empathy in MI may explain what science communication lacks, and naturally suggest how we should change our approach.

If the "deficit model" were accurate, then any amount of accurate information a doctor provided a patient would cause the patient to behave in a more health-promoting way. The success of MI relative to a more confrontational approach indicates that's not the case. Similarly, whenever scientists or organizations antagonize people who seem to either lack or ignore evidence, perhaps they are unthinkingly making their own task harder. The distortion of understanding caused by vitriolic comment sections, which would seem to suggest there's power in the adversarial approach, could just as easily be interpreted as polarization in response to overzealous troll-hunting by defenders of science. If we really want to change people's minds, we cannot afford to frame science communication as an us-versus-them decision. Despite every temptation to do so, the evidence here seems to indicate that, at least at the mass level, and given the tools of social media we have at our disposal, the high road really is the only road. And in particular, starting with an understanding of where those we disagree with are coming from is an important technique for changing minds.

____________________

Here's the thing. Scocca's point in the Gawker article is that snark, while somewhat destructive, is at least honest, while smarm is disingenuous and amounts to misdirection in the pretended name of civility. Rhetoric is not a new field, nor is persuasion a new technique, and I realize that by choosing to focus on the work of a few current science policy experts I'm omitting literally millennia of thought on how to reach people who refuse to face the truth when it's available. But all I'm really trying to say is the following.

  • Ideological debate often precludes the implementation of science-based policy.
  • Freedom of religion and a commitment to independent thought make it unlikely science will simply overrun ideologues in the near future, when a lot of science-based policy would be most effective to implement.
  • Studies predict that convincing religious and ideological people of scientific fact is next to impossible.
  • But maybe we've just been doing it wrong, and doing it right means treating our ideological opponents with the respect we want for ourselves.
I know that extremists piss scientists off all the time. I know they even put people's lives in danger. I know they make town-hall meetings hair-raising experiences, and we've been trying to take the high road a long time. But if the contrast between internet flame wars and cooperative doctor-patient interactions provides any insight at all, it's that difficulty is no excuse to compromise on our good-faith efforts to do science outreach not just diplomatically, but empathetically.

That's where the warm fuzzies came from. Maybe the quest to build society around shared knowledge isn't as hopeless as we thought, and maybe all that cheesy stuff that gives you the warm fuzzies is a necessary part of the solution after all.

12.08.2013

Why Income Inequality Isn't About Preserving Jobs

A college friend of mine posted this article in the Guardian, by the creator of The Wire, on the drivers of economic inequality. It's a flash flood of an argument, a passionate and unintimidated review of why we can't expect capitalism to solve our social problems, even as it solves many of our economic ones. I enjoyed reading it and parts of it were terrific.

I think the bit that I liked the best is this:
If you watched the debacle that was, and is, the fight over something as basic as public health policy in my country over the last couple of years, imagine the ineffectiveness that Americans are going to offer the world when it comes to something really complicated like global warming. We can't even get healthcare for our citizens on a basic level. And the argument comes down to: "Goddamn this socialist president. Does he think I'm going to pay to keep other people healthy? It's socialism, mother****er."
What do you think group health insurance is? You know you ask these guys, "Do you have group health insurance where you …?" "Oh yeah, I get …" you know, "my law firm …" So when you get sick you're able to afford the treatment.
The treatment comes because you have enough people in your law firm so you're able to get health insurance enough for them to stay healthy. So the actuarial tables work and all of you, when you do get sick, are able to have the resources there to get better because you're relying on the idea of the group. Yeah. And they nod their heads, and you go "Brother, that's socialism. You know it is."
And ... you know when you say, OK, we're going to do what we're doing for your law firm but we're going to do it for 300 million Americans and we're going to make it affordable for everybody that way. And yes, it means that you're going to be paying for the other guys in the society, the same way you pay for the other guys in the law firm … Their eyes glaze. You know they don't want to hear it. It's too much. Too much to contemplate the idea that the whole country might be actually connected.
Granted, I think most people have a slightly more sophisticated register of complaints -- the idea that encouraging people to overuse the healthcare system will disincentivize good behavior, that dependence on the wealth of others means people won't allocate the correct part of their resources to preserving their own health, etc. etc. -- even if all of it washes out when you do the longhand math. I don't think it's outright obliviousness. But it's got punch.

So as I said, this is in many ways a really good article. What I think would make it better is a more nuanced view of the relationship between labor, equity and growth -- in particular, there's some incomplete reasoning in the part of the article my friend quoted, which reads as follows (preceding paragraph included):
I'm utterly committed to the idea that capitalism has to be the way we generate mass wealth in the coming century. That argument's over. But the idea that it's not going to be married to a social compact, that how you distribute the benefits of capitalism isn't going to include everyone in the society to a reasonable extent, that's astonishing to me.
And so capitalism is about to seize defeat from the jaws of victory all by its own hand. That's the astonishing end of this story, unless we reverse course. Unless we take into consideration, if not the remedies of Marx then the diagnosis, because he saw what would happen if capital triumphed unequivocally, if it got everything it wanted.
And one of the things that capital would want unequivocally and for certain is the diminishment of labour. They would want labour to be diminished because labour's a cost. And if labour is diminished, let's translate that: in human terms, it means human beings are worth less.
Okay, this is where I think we can make the biggest correction. The first paragraph? Absolutely. Couldn't agree more, as I'll expand on later. But in the second and third bits, I think we lose sight of the mechanics by which we actually grow.


apologies to un-self-aware patriots.
via knowyourmeme
The thing to remember is that in a political climate where Job Creation! is the only thing anyone's talking about, there is a headlong rush to artificially inflate industries that are no longer the most effective ways to, as Simon puts it, "create mass wealth." This is the knee-jerk policy reaction to the idea that labor should be diminished because it's a cost: no! we say, it should be promoted! We should build a stronger labor base by creating a bunch of jobs!

The problem is this: the improvement in our quality of life is in most ways a result of technological production multipliers. According to Robert Solow's Nobel Prize-winning work, economic growth in the early 20th century was due more to sci/tech innovations than to capital investment or growth in the workforce. The machines that build our cars faster than people can -- it's true that they put auto workers out of work, but they also make cars cheaper, which gives more people cars and the auto industry a broader customer base; that in turn allows for more investment in the auto industry, which may not fully offset the loss in labor, but it means the same amount of money in American pockets can acquire more wealth. (Econ nerds: I'm not gonna get into inflation and interest rates here, because even taking those into account we still see growth in whatever Consumer Price Index we choose. Correct me if I'm wrong.) Economic growth always looks a little like this: while we obviously care about the employment rate, we might not want to be primarily concerned with the number of people working in a particular industry, but rather with the participatory leverage afforded the people working in that industry. And anyway, it increasingly looks like the minimum wage doesn't affect employment rates much in the U.S., so we're already set to lift the floor in that regard.
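To make the Solow point concrete, here's a back-of-the-envelope growth-accounting sketch. The numbers are invented for illustration (they are not Solow's estimates): with a standard Cobb-Douglas production function, a big jump in technology lifts output per worker even when the workforce shrinks.

```python
# Toy Solow-style growth accounting with a Cobb-Douglas production
# function: Y = A * K^alpha * L^(1-alpha). All numbers are invented
# for illustration -- they are not Solow's actual estimates.

def output(A, K, L, alpha=0.3):
    """A = technology (total factor productivity), K = capital, L = labor."""
    return A * K**alpha * L**(1 - alpha)

# "Year 0": baseline economy
y0 = output(A=1.0, K=100.0, L=100.0)

# "Year 20": technology up 50%, capital up modestly, labor DOWN 10%
y1 = output(A=1.5, K=110.0, L=90.0)

per_worker_0 = y0 / 100.0
per_worker_1 = y1 / 90.0
# Total output and output per worker both rise even though
# employment fell -- the technology multiplier dominates.
```

The design choice here is the whole point: in this functional form, a multiplier on A scales everyone's output, which is exactly the "participatory leverage" I'm talking about.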

This is why Bangladesh is almost entirely employed by the garment industry right now -- it's a better economic engine than subsistence farming, but they don't yet have access to the more lucrative industries that are available here and elsewhere in the developed world. They may soon, however, as the increased income due to the greater workforce production makes Bangladeshis able to seriously contemplate education, entrepreneurship, and the greater opportunities that follow. South Korea looked in the 70's like Bangladesh does today, and now look at them.

The point is this: as economies advance, they (in theory) should replace the harder, more dangerous, more repetitive jobs with healthier, safer, more fulfilling and self-directed ones. We ABSOLUTELY need a better social compact, because as the economy advances and that turnover occurs, people working in jobs at the "bottom" will be more vulnerable than people working in jobs in the middle or the top. There's some (inconclusive) reason to believe income inequality is not just unjust, it actually hurts economic growth (or at least, it doesn't help), and forgetting either point means we do ourselves and our most needy a huge disservice. However, the social compact also can't freeze the economy in place by refusing to shift toward more humane, more productive opportunities for fear of losing the Triangle Shirtwaist Factories of the world -- it just has to ensure that, as we produce more growth and make possible a higher standard of living, everyone has the opportunity to REACH that standard of living, even when they're at a disadvantage. Otherwise, we'll create a more equitable society, but we'll stop making progress as a country.


These countries had similar growth over the same period. So.
Hacker & Pierson (2010), via MotherJones

By the way, that increased wealth should also give us the opportunity to engage in work lifting the rest of the world out of poverty, as well as those Americans who have been left out of the social compact up to this point. You don't need to be a Marxist to see that. The social compact is about fairness, but fairness is also about what works. Unfair systems eventually either collapse or get brought down by Simon's proverbial riot-brick -- monarchy, slavery, segregation at every step of the fight for civil rights -- and that's because people who are oppressed do have value. They continue to be worth not less, but more. An economy that's fair is an economy that harnesses all its members' potential, that gives them the greatest part of the world they can get, and that includes mechanisms that ensure we help each other to do so. That's how we grow.

So to Simon I would say this: fairness matters, both morally and practically. Labor unions are important, as is advocacy for the disadvantaged and disenfranchised in every city and state, and in every year. But part of the reason a social compact, a supportive safety net, is so necessary is that economic growth isn't equally fair to everybody. We need to recognize that if we want to grow, we need to be sure we are fair to the people who lose out, and help them get back where they want to be.

So I want policymakers to stop obsessing over job creation, or fearing backlash when jobs are lost to technological advancement, because in many ways artificial job creation is just a totally inadequate substitute for a deteriorating social safety net, and the connectedness and mutual understanding that it represents. Don't save or create already-obsolete jobs for poor people -- help poor people live well and get good jobs.


***


As a footnote, check out the work my former classmates did on the "skills gap" in Wisconsin. It turned out that the dominant political narrative in the state, that there weren't enough technically skilled people for a 21st century labor market, may be backwards -- we may have such a top-heavy group of skilled people that everyone is working in a job they're overqualified for, and people who are actually best suited for lower-skill jobs just get pushed out of the economy entirely. So we also need to be careful to research where those people who are screwed by economic turnover are, what their lives look like, so that we can put our support where it needs to go.

As a second footnote, if you want to REALLY go down the rabbit-hole, you can read some thoughts about what happens when you let ideas with a capitalist flavor spill out of economics into social policy. This is a review of a book by a scholar I'm a fan of, Bernard Harcourt, in the Harvard Law Review; but not only is it a neat summary of Harcourt's theories on how free-market capitalism encourages high levels of incarceration (no, seriously!), but it's an even-keeled look at the related literature as well. I did not know this was a thing, and I'm even more surprised that it doesn't sound completely crazy when it gets into the nitty-gritty of labor markets.

11.27.2013

Google Earth for ya Head

Brain iz confusing place.
ichcb, via flickr


My friend Katherine called my attention to this fun little exercise: the brain represented as a subway map, as imagined by artist Miguel Andres and picked up by Know More, an offshoot of the Washington Post's much-loved Wonkblog. Unfortunately, I told her, I have to disavow it completely. (Sorry if you just spent 5 minutes memorizing it.) It's a darn cool idea, but it's not a teaching tool.




I'd say (somewhat generously) that it's about 30% right on anatomy, 20% on functional localization, and -- most damningly -- less than 10% right on how the brain actually works. It's more misleading than informative.

I don't know whether Andres ever hoped it would be used at a place like Wonkblog; it could have been just a creative work, by which standard it's cool enough. But since it was, and now it's going around the web, I'll try to 'splain what it does wrong.


Anatomically, it's a crapshoot. For one thing, it seems to say that analogous systems on opposite sides of the head are doing totally different things. This isn't true. Now granted, while the old "left brain vs right brain person" thing is deeply exaggerated, it's true that the two halves of the brain do subtly different, but coordinated, things. However, those processes are usually complementary -- for instance, the area on the left that does language production, called Broca's area (hence the reference on the map), has an analog on the right that does pitch inflection, and other "non-verbal" communication stuff.

But then half the time, this map's routes are just totally unrelated to how the brain is actually set up. Your eyes don't route straight to separate functions; input from your eyes goes ALL THE WAY TO THE BACK OF THE BRAIN, with a stop or two in the middle, to do visual processing. The image we reconstruct is then passed forward into the brain to do things like physical spatial awareness, object recognition, etc., and then further forward still to do things like emotional associations, decision-making, etc.

Where this map will get you.
Peter Ward, via geograph

The biggest problem is that looking for a particular instantiation of something, like "aggression," isn't gonna get you anywhere. You want to look at "mood," or "social cognition"? Well we're still arguing about it, but at least we believe there are areas within networks that might underlie stuff like that. Pointing at a brain region and saying "aggression" is like looking at a computer motherboard, pointing at an area and going, "PDF." It's like what, no.

Also, doing a 2D brain anatomy lesson is hella hard, cuz... it's not a 2D organ. Imagine doing a subway map, only instead of stops being at intersections, they're at offices. ("The next stop is: Lexington and 53rd. And the 26th floor.") Not the easiest thing to stick on a poster or a t-shirt.


Disappointingly, scientists are often pretty bad at this kind of thing, surprise. Arguably the best free, lay-centered thing you can get on neuro right now is the Brain Facts book published by the Society for Neuroscience (SfN), a professional organization. But it's not exactly "multimedia," and while BrainFacts.org in general is a great idea, it seems less like a centralized learning resource than a feed of relevant articles.

This little feature, on the other hand, is kind of fun and to the point -- but it's talking about the project neuro researchers are currently tackling, not delivering the latest approximation of their results in an intelligible or interesting way. And it also brings up kind of an interesting analogy -- Google Earth.

Google Earth is an official product, obviously, but the dorkier among us remember when it had alpha and beta stages, and a lot of that was available to the public. They release funky little plugins now and again, like last year when they made an ancient Rome map you could overlay on the modern-day area. And when we look at 3D Manhattan and the buildings are wonky and the textures don't load, we're slightly peeved but much more amused. We want to play. And play we did, to the point where Google collected a lot of feedback by farming their testing out to interested people.

Making something similar for the brain would be a great outcome for neuro in the next decade; it's just harder because a) scientists are more afraid of being wrong than app developers, and b) people know what Manhattan looks like without Google Earth. We can't really say the same for, you know, the left inferior parietal lobule. Plus, a road is pretty easy to interpret; the brain's function is way less obvious a consequence of its structure.

...

HOWEVER! A Google search revealed that we do kind of have something like this now! Much excite! It's called the BigBrain, and it was rolled out in June of this year thanks to folks at Research Centre Jülich and Heinrich Heine University Düsseldorf in Germany, and the esteemed Montreal Neurological Institute at McGill University in Montreal. I'm going to be playing with it a lot. As for the functional part -- you know, getting off the brain train at "social cognition," etc. -- we've got a ways to go. Thanks in part to the BRAIN Initiative, however, which I'll discuss in another post soon, we might be just years away.

My only reservation is that a physical reconstruction, while hugely important and useful, isn't that interpretable a map (especially to non-neurogeeks). Cartographers, demographers, etc. are huge, lucky nerds because they get to fiddle around with how to present geographic information in the most novel and informative ways; to them, Diffusion Tensor Tractography is a map they'd want to delve into, whereas the BigBrain is more like satellite images of a mountain range -- nothing's highlighted for you. However, I'd bet the tractographic equivalent is right around the corner.

Now THAT'S worth taking for a spin, amirite?
AFiller via wikimedia
Anyway. I'm pumped for the neuro community to come together over the next few years and democratize this knowledge, even if, as the subway map demonstrated, it won't always be easy. But hey, everybody should be able to have the same fun we do -- taking a hike in unexplored terrain, and getting wonderfully, confoundingly lost.

11.12.2013

Significance is... Significant. But Also, Not Everything.

A post in which I write about statistics for non-scientists, and then stick it to the man. Gently.

Fun things you'll learn to impress friends at parties:

  • statistical power
  • effect size
  • my true place in the academic food chain
  • that you kno nuthin, Jon Snuuuu

The Incubator, a science blog at the Rockefeller University in New York, just posted a link to this paper by Texas A&M statistician Valen Johnson about the somewhat foggy standards for statistical significance in science. PSA: you should check out the Incubator, it's great; and my friend and former classmate Gabrielle Rabinowitz writes and edits for it!


Another XKCD, because look at it.
Personally, I think that if we could teach only one science/math class to all Americans (heaven forbid it came to that), it would have to be statistics. Since stats is the study of things we are too dumb, too big, too small, too slow, etc. to do perfectly -- e.g. make accurate predictions, snag individual molecules, measure the economy, or anticipate a dice roll -- it is one of the most powerful and simple ways of becoming smarter than one person's worth of day-to-day experiences can make you. Simply put, most people can't gather enough accurate data to reliably know what's happening outside the bubble of what they can see, hear, and touch.

An even more important lesson stats teaches is that everything we "know" is really only known with some degree of certainty. We may not know what that degree is exactly, but we can still get a sense of how likely we are to be wrong by comparison: for example, I am much more confident that I know my own name than that my bus will show up on time. However, as a lifetime's worth of twist endings to movies can tell us, there's still a teeny-tiny chance that my birth certificate was forged, or I was kidnapped at birth, or whatever. Years of hiccup-free experience as myself provide a lot of really good evidence to believe it's true, and if my family and I were to get genetic tests, that would make me even more confident -- say, I'd go from 99.99999% sure to 99.9999999999%. May seem arbitrary, but my odds of being wrong just dropped a hundred-thousand-fold. And even though I can't say anything so precise about my bus, I'm still more confident that it will show up on time than that most political pundits can predict the next presidential election better than a coin toss. (Come back, Nate Silver.)
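To put numbers on that jump -- remember, those confidence levels are my arbitrary picks, not measurements -- the arithmetic is just a ratio of error odds. A quick Python sketch:

```python
# Odds of being wrong at each (made-up) confidence level from above
p_wrong_before = 1 - 0.9999999         # 99.99999% sure -> ~1 in 10 million
p_wrong_after = 1 - 0.999999999999     # 99.9999999999% sure -> ~1 in a trillion

# The ratio is what matters: the genetic test would shrink my odds of
# being wrong by a factor of roughly a hundred thousand.
print(p_wrong_before / p_wrong_after)  # ~100,000
```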

When scientists conduct significance tests, we're basically doing the same thing -- we want to know the truth, but instead of saying "when this happens, that other thing happens," we want to say, "this reliably precedes that," and if possible, "this reliably causes that." The last one is a lot harder; but as for what constitutes "reliable" or "meaningful," the word we use is "significant," and by convention we say an effect is significant when it would only happen one out of twenty times by chance alone.
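To make that one-in-twenty convention concrete, here's a toy simulation in Python (my own illustration, not anything from Johnson's paper): suppose we flip a coin 100 times and get 60 heads, and we ask how often a fair coin would produce something at least that lopsided.

```python
import random

random.seed(1)  # reproducible toy example

def simulated_p_value(observed_heads, n_flips=100, n_simulations=10000):
    """Estimate how often a fair coin produces a result at least as
    lopsided as the one observed (a Monte Carlo, two-sided p-value)."""
    observed_excess = abs(observed_heads - n_flips / 2)
    extreme = 0
    for _ in range(n_simulations):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        if abs(heads - n_flips / 2) >= observed_excess:
            extreme += 1
    return extreme / n_simulations

# 60 heads in 100 flips happens by pure chance roughly one run in
# twenty -- hovering right at the conventional significance cutoff.
print(simulated_p_value(60))
```

Push the observed count up to 70 heads and the estimated p-value collapses toward zero; drop it to 52 and chance explains it easily.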

Now, you don't have to be a scientist to see both the pros and cons of this strategy. Obviously, one such experiment on its own doesn't so much prove anything as make a statement about how confident we are in our conclusions. The lower the odds of something happening just by chance, the more we feel like we know what we're talking about. For example, rather than use the 1 in 20 cutoff, physicists working on the Higgs boson had enough data to use a benchmark closer to my confidence in my own name. And in recent months and years, the science community, especially in the life and social sciences, has become more and more suspicious that our confidence is too high -- or put another way, that things we say could only happen by chance one in twenty times could actually happen a lot more often. That maybe the things we think are real are sometimes wrong.

Scary, innit?

Well, it seems reasonable, then, to do what that paper is proposing and move the goal-posts farther away, so only stuff we're reeeeeaaaally confident in will pass for scientific knowledge. But there are big hurdles to this -- some practical, some theoretical. First of all, just as we can calculate how likely something is to happen by chance, we can calculate how likely a given study is to detect a real effect, if one exists. If all buses are late sometimes, and vary in how much, how many buses would I have to take before I could say that the 28 line is more likely to be late than the 80 -- and that I didn't just catch the 28 on a rough week? Even if I'm right, can I show it? (Right now, all I have is a feeling, but just you wait.) The odds that I'll be able to detect a significant difference where such a difference really exists are called statistical power. And power is one of scientists' oldest adversaries.
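Power is easiest to see by simulation. Here's a hypothetical Python sketch of the bus example -- the lateness rates (30% for the 28, 15% for the 80) and ride counts are numbers I invented for illustration:

```python
import math
import random

random.seed(2)  # reproducible toy example

def detects_difference(n_rides, p_late_28=0.30, p_late_80=0.15):
    """Simulate n_rides on each line, then run a two-proportion z-test;
    True if the difference clears the one-in-twenty (p < .05) bar."""
    late_28 = sum(random.random() < p_late_28 for _ in range(n_rides))
    late_80 = sum(random.random() < p_late_80 for _ in range(n_rides))
    pooled = (late_28 + late_80) / (2 * n_rides)
    se = math.sqrt(2 * pooled * (1 - pooled) / n_rides)
    if se == 0:
        return False  # no variation observed; call it a non-result
    z = abs(late_28 - late_80) / n_rides / se
    return z > 1.96  # two-sided 5% cutoff

def power(n_rides, n_simulations=2000):
    """Fraction of simulated studies that detect the (real) difference."""
    hits = sum(detects_difference(n_rides) for _ in range(n_simulations))
    return hits / n_simulations

print(power(20))   # low: most 20-ride "studies" miss the real effect
print(power(150))  # high: with enough rides we nearly always catch it
```

Note the punchline: even with a real two-to-one difference in lateness, a twenty-ride experiment usually fails to reach significance. That's the position a lot of small labs are in.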

See, most science labs are pretty small, consisting of a handful to a few dozen dedicated, variously accomplished nerds, under the command of one or two older, highly decorated nerds. (Grad students are whippersnapper nerds who have only demonstrated we have potential, though collectively we do a lot of the legwork.) Most labs don't have all that much money, depending on the equipment we have to use, and we don't have much time before the folks who control the money expect us to publish our results somewhere. It's not a perfect system -- that critique is for another time -- but it works okay. Yet with the exception of the really big operations the public is familiar with -- projects like the Large Hadron Collider and the Human Genome Project -- or labs whose subject matter lends itself to really high "subject" counts, like cell counts or census data, it's really hard to get enough rats, patients, elections or what-have-you to guarantee you'll detect a tiny difference that is really there. A lot of fields, including and especially neuroscience, are slaving away for months and years in the lab on experiments where, even if they're right, the odds are they won't be able to tell.

So the idea of moving those goalposts way out there, while in many ways very necessary, also necessitates a huge shift in the way science is funded and organized. Studies would need to be much larger, there would be fewer of them (which would restrict individual labs' ability to explore new directions or foster competing views), and money would tend to be pooled in really big spots. We know -- exactly because of successes like the LHC and HGP -- that this can work, and indeed might be the only way to ensure that certain parts of the controversial, if dialed-down, BRAIN Initiative from the White House will yield anything concrete. There's no question, though, that some disciplines would be hit harder than others by such a change.

But that pales in comparison to questions about the role p-values -- those odds that a result happened by chance -- should play in how science is published and reported. They may be the gold standard to which science has aspired for the better part of a century, but I think they can only paint a complete picture with some help.


***


Last year I had the privilege of working on a project with classmates at the La Follette School of Public Affairs, part of UW-Madison, that tried to estimate how much value would be generated by a non-profit's efforts to provide uninsured kids with professional mental health services, right there in their schools. In order to estimate that, we needed to know not just whether counseling helped kids, but how much it helped. So in looking through the literature on different kinds of mental health interventions and how well they treated different mental illnesses, we often focused on effect size, which is a measure of how big a difference is. It sounds related to significance, and it is, but here's where they diverge. Let's say that we want to know whether Iowans are taller than Nebraskans. We go and take measurements of thousands of people in both states, giving us really good power, so if there's a difference we'll probably see it. We find that a difference exists -- say, that Iowans really are taller. We also know that, based on our samples, the odds are less than one in a thousand that we just happened to pick some unusually tall Iowans. Great job, team!

But... what if Iowans are, on average, less than a quarter-inch taller? Even if we're right, who cares?

That's what we wanted to know for our research -- if these kids see counselors regularly enough for therapy to work, how much better will they get? Once we'd read the work of countless other researchers, we had a pretty good idea, and we used that in our calculations. (As a side note, we found that the program probably saves the community about $7 million over the kids' lives for every year it runs at a cost of about $200,000 -- in other words, it's almost definitely a good call.)

But effect size isn't treated as what makes your work important. In most cases I've seen, it's not even reported as an actual number. In fact, as a graduate student with several statistics courses under my belt, I was never formally taught how to calculate it in class. I figured it out, and applied it to datasets and published results, for the first time for that project.


What different effect sizes look like.
via Wikipedia.
For those who are wondering, briefly: effect size (at least, as expressed by Cohen's d) is how big the difference is, expressed in standard deviations of the variable's distribution. In other words, if people in Iowa are 5'11 plus or minus four inches, and people in Nebraska are 5'10 and 3/4, the effect size is 1/4 inch divided by four inches = 1/16, or 0.0625. In contrast, the effects of therapy on mental illness are on the order of 0.5 - 1, or about ten times larger relative to the underlying variation.
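For anyone who wants to compute it themselves, the arithmetic fits in a few lines of Python. The function below is the standard pooled-standard-deviation form of Cohen's d; the quarter-inch line just reuses the made-up height numbers from my example.

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: the gap between two group means, expressed in units
    of the groups' pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = (((n_a - 1) * statistics.variance(group_a)
                   + (n_b - 1) * statistics.variance(group_b))
                  / (n_a + n_b - 2))
    gap = statistics.mean(group_a) - statistics.mean(group_b)
    return gap / pooled_var ** 0.5

# The shortcut when you already know the gap and the spread:
# a quarter-inch mean difference against a four-inch standard deviation.
print(0.25 / 4)  # 0.0625 -- tiny, no matter how "significant" the test
```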

Significance is what makes differences believable; effect size is what makes them meaningful. And power, the other number I think should be estimated and reported, shows how well-prepared a study was to find a real effect -- which, especially for studies that fail to confirm their hypotheses, would provide a measure of rigor and value to their publication. While science has, correctly, always strived to prove its best guesses wrong before declaring them right, it's about time we also asked whether the status quo is right -- and whether either answer matters.

However, the scientific establishment, despite much wailing and gnashing of teeth, is, like any large institution, having a hard time moving forward with such sweeping normative changes. It's taken Nobel Laureates, brilliant doctor-statisticians with axes to grind, and dramatic exposés of mistaken theories and sketchy journals to make our systems of measurement a real issue in the science community. I feel strongly about this, but I'm just a grad student: a foot-soldier of science training to become an accredited officer. So I'm glad people, like the author of the article that kicked off this post, are continuing to publish seriously about it and propose real changes in the community's expectations.


I'm just here to say, I think most of the changes on the table are only part of the picture, and wouldn't succeed on their own. We need standards for reporting effect-size and power, so we can see for ourselves what the truth really looks like.

10.09.2013

Dogs are (like) people. Blobs are not feelings.

I saw this article in the New York Times, by Emory researcher Gregory Berns, come up on my newsfeeds in the last several days, and I've gotten emails about it from friends and family. After thinking it over, I couldn't let it pass without comment.

As anyone I know could tell you, I am a dog person. To an unhealthy extent.
Don't look, Vincent...
via Lostpedia
  • The only time I cared about anyone in Lost was when the dog Vincent tried desperately -- and ultimately in vain -- to follow his human friend Walt, who was leaving the island on a raft, into the ocean.
  • If there is a dog at a party, I will begrudgingly leave it for a few minutes at a stretch to interact with my human friends. I would rather just lie on the floor with it, in whatever clothes I'm wearing, and pick up its vibe.
  • I will also play with it until parts of my body stop working, and I'll usually stop only when somebody points that out, out of concern for my continued health.
So when I initially saw Berns' article, and the subject it broached, I was pleased! Yes, I say, let's consider the question of whether dogs are people. Personally, I think that's too simple a proposition to capture the truth, but we'll get to that.

The bottom line of my reaction is this: I appreciate what the article was trying to do -- and unlike a lot of scientists I know I don't say this next bit a lot, because popularizing science isn't always bad -- but I found it really inappropriate in tone, scope and scientific content.

The elements of neural activation Berns was citing basically correspond to evidence of pleasure, reward and motivation. (My friend Ryan, who works on animal behavior in rats, points out that regardless of what they actually can do, the dogs in question weren't demonstrating "love and attachment," they were demonstrating "preference.") The caudate, which is part of a structural assembly called the striatum, interfaces with some of the evolutionarily oldest structures in the mammalian brain. As per the experiments on drugs and rats in a recent post, these areas -- while varied in their exact purposes and connectivity profiles -- largely support the dopamine-powered "reward" circuit, technically called the mesocorticolimbic pathway. I'm not an expert on it, and I'm fuzzy on the caudate's specific role within the circuit, but Ryan agrees Berns' attribution to it of such nuanced emotions (much less personhood) is an overreach.

Yes, it's swirly. Brains are weird. via Brainposts
In a very crude sense this circuit is the reason we do anything -- without integrating a sense of motivation and anticipation of reward into our value judgments, we'd be so apathetic we wouldn't bother to eat, and we'd just die. It's also the circuit implicated in drug addiction, or for that matter, *anything* addiction. In many cases, the kinds of things that circuitry is responsible for are the impulses we actually need to fight to be considered persons, at least in the conventional moral sense. If you've heard the phrase "he was behaving like an animal," there's reason to believe the culprit was, colloquially speaking, letting his striatum drive the car unsupervised.

If anything, what we need to show dogs are people is indication of "higher" functions, or whatever you want to call them.

Side note: personally, I think the moral significance of living beings occurs on a sliding scale, where a goldfish registers and its well-being is worth something, but isn't equal to a person; and a dog is closer to people but generally comes up slightly short (though sometimes very slightly... and maybe there's some overlap, with dogs I'd choose to keep alive at somebody else's expense). In some other blog post I'll explain how I see this as consistent with Giulio Tononi's work on consciousness. Other scientists, including my one-time boss Julian Paul Keenan and his mentor Gordon Gallup, have considered the value of using self-consciousness (as determined by the ability to understand the significance of one's own reflection in a mirror) as one of the criteria of "personhood." The point is there are a lot of ways of approaching this problem, and few of them are simple, which stems from the simple fact that people are complex. (*snap*)
Complexity. Deal with it.
via WiffleGif
Anyway... if we hypothetically bought Berns' reasoning, i.e. that evidence of comparable activation patterns in comparable cognitive paradigms is evidence of comparable function, and thus personhood (which can get a bit fishy), I argue we'd need a lot more and better benchmarks. We'd need dorsolateral prefrontal cortical activity, indicating self-control and abstract thinking. We'd need ventromedial prefrontal activity, and insula, maybe anterior cingulate -- or their homologues, anyway -- corresponding with complex emotional responses, especially social ones like guilt and empathy, being part of dogs' decision-making processes. After all, these are among the things people say separate us from animals.
All four areas -- ventral striatum (VS), ventromedial prefrontal cortex (vmPFC), insula (INS), and anterior cingulate cortex (ACC) in one handy, murderous image.
Front of head is to the right. via PLoS ONE, h/t Al Fin.
But we probably shouldn't buy Berns' reasoning. That's because:
  1. If you're going to try to build homologues, most of the time mammalian brains -- which are, after all, like a series of iPod models with incremental improvements -- will roughly line up, so you'll have something to compare. It's really a question of how developed and interconnected those areas are. Or at least, we know the real differences will probably come down to a combination of tricky things like that.
  2. Dogs are generally going to be able to do, and show activation in, a lot of the simple tasks we use in humans, simply because making really sophisticated experimental designs to probe thinking and feeling is complicated. Neuroscientists and psychologists have to constantly refine and revise experiments to ask more and more specific questions.
  3. Dogs don't have parts of their brains that don't work. NOTHING does. Brains don't have areas that can't be made to "light up" under the right circumstances. If they did, those traits would be selected against in evolution, because they'd be burning calories on useless brain matter that could've gone to something else. (Next time somebody says we only use 10% of our brains, hit 'em with that.)

Judges? Bzzzzzzzzzzzt.

So ultimately, this is like... well, it fails to expand on anything we don't already know about dogs from just hanging out with them, for the most part. Except that there are some commonalities in what areas do what things, which we always would have expected to see. And on top of that, Berns used one of the relatively few structures, outside of sensory areas, that we couldn't use to persuasively argue that dogs share some of the most important aspects of personhood.

And just to wax advocate for a moment here, the sentence "by looking directly at their brains and bypassing the constraints of behaviorism..." makes Ryan, generally a very peaceful person, very inclined to violence. Without behavioral research, we wouldn't know (or continue to learn) about what makes animals AND people behave the way we do, and how our brains work that magic. Nobody is saying we should all think of people like Skinner thought of rats. The only thing imaging provides in this context is a more global perspective on neural activation patterns during that behavior. And I'm an imaging person saying that!

Look, lots of people criticize neuroimaging researchers for vastly overreaching in their claims based on relatively fuzzy, and difficult to interpret, imaging data, and this is a perfect example of that. So while I in large part agree with the premise of the article, I completely disagree with how Berns arrived at it, and how he depicted the science that brought him to that conclusion. I'm pretty disappointed with the article as a high-profile outreach on behalf of the science community.


Next time, maybe I won't have to be so ruff on him.

Yes, I'm here all week.

10.05.2013

Rat Park: Science Caper or Curio?

caruba via flickr, h/t to Joe Kloc
Funny that this was the thing that got me back to my blog after an extended hiatus (though I've got like 4 drafts 80% done in the pipeline, waiting for polish). Procrastination can work wonders.

My friend Ryan, a fellow neuroscience grad student working on reward, motivation, and addiction using a rat model, pointed me to this blog post. The writer is Tom Stafford, one of the two authors of the popular science book Mind Hacks, and a researcher at the University of Sheffield. The post recapped a series of studies conducted in the '70s by Bruce Alexander and colleagues at Simon Fraser University, in which they built their test rats a large, well-furnished playpen, then tried to replicate classic drug addiction studies.


What they found, as Stafford describes, surprised them -- the rats living socially in the open, enriching pen actually avoided drinking water laced with morphine, instead of consuming the drug to the exclusion of nearly all other behavior. Stafford also points readers to this comic by Stuart McMillen that illustrates the story. Stafford then concludes the post by musing, among other things, that "even addictions can be thought of using the same theories we use to think about other choices, there isn’t a special exception for drug-related choices."


Okay, so. We've got a lot of crazy ideas kicking around here, so let's take this slow.


First off -- while I haven't read Mind Hacks, Stafford appears to be a pretty accomplished researcher, and my default position is to be glad that people are writing fun science books unless and until I have reason to suspect they're doing more harm than good. After all, I wouldn't have heard about this if not for his post. McMillen's comics likewise seem fun and clever, and remind me of a more serious counterpart to nerdy staples like The Oatmeal or Saturday Morning Breakfast Cereal. (If you haven't seen these -- not to even speak of XKCD -- run, don't walk, to way funner and smarter nerd porn than anything I've done yet. Go ahead, close this tab.)


With that out of the way, I'm... not placated by the story here. There are a number of reasons for this. In order of increasing discomfort, here they are:


(Note: apologies for linking to academic papers. I know many would-be readers don't have access to them. If you're affiliated with a University, try searching for them using your library's website; if not, try to reach out to somebody you know who is.)

  1. While the role of context and social structure in drug abuse is still not a big enough issue, it isn't anything new either. Drug addicts who enter rehab clinics, get clean, then go right back to their old stomping grounds are surrounded by people and paraphernalia that remind them of their temptation, which can undo all their progress. Going beyond addiction, behavior of all kinds can be triggered almost automagically just by finding yourself in familiar settings -- as anybody who's moved and then started to drive home to the wrong house can attest. And I can't even begin to broach the social science literature on the influences of socioeconomic status, education, access to healthcare, etc. on propensity for drug use (here's just one example out of hundreds). So implying the ideas derived from this study should rewrite the rules on addiction is a sizable overstatement.
  2. The biggest question-mark in this article, and the statement that I think needs the most gratuitous linkage, reads "There have been criticisms of the study’s design and the few attempts that have been made to replicate the results have been mixed" [emphasis added]. The fact that literally the next sentence hand-waves that away -- "Nonetheless the research does demonstrate that the standard “exposure model” of addiction is woefully incomplete" -- seems, to me at least, way off-base; that's awfully strong language for criticizing a whole field. You don't say a model (even a simple one constituting only part of prevailing theories on drug use) is "woefully incomplete" because of a 40-year-old paper with replicability issues, and you certainly don't say that without at least pointing readers to those attempts to replicate.*
  3. To say that rats could be put in any environment where they'd stop taking drugs entirely, given a choice and even encouragement, is a seriously bold claim; or at least it is today. In the decades since those studies were conducted, rats have been -- beg pardon, for folks who have concerns about the ethics of animal research -- tested on every drug under the sun and given practically any task imaginable, and a huge portion of those results have translated pretty well into human findings. An entire literature has emerged on the science of reward and addiction, a field in which Ryan and his mentors, Brian Baldo and previously Kent Berridge, are participants. And we've got pretty solid ideas about the systems in the brain where these behaviors are generated. So it would require more than a little replication to validate those claims.
  4. Most unsettlingly, there seem to be some oddities about the experiments and the way they're being presented. Correct me here, dear readers, if I missed something.
    • In McMillen's comic, he writes that Alexander et al. "covered the floor with fragrant cedar shavings for the rats to nest in". When Ryan and I read that, we Macaulay Culkin'd so hard: cedar is toxic.
    • quicheisinsane via flickr
    • Here's an example of one such finding. A 1997 review of bedding materials in labs around the world found that pine shavings were extremely cytotoxic compared to corncob, straw and other materials. In fact, a paper published as early as 1968 -- almost a decade before the Rat Park experiments -- found cedarwood to be a bad bedding environment. If it's true, then, that the bedding was cedar: Houston, we have a problem.
    • Even more confusing, the paper linked to in Stafford's post said the floor of the pen was sawdust -- different, but still linked with a few major respiratory problems. So was McMillen reading a different paper? Did the research team use different bedding systems in their different studies? It looks like the latter, based on this 1981 study that mentions cedar shavings. (Once again, though, since I'm not an expert I don't know how these problematic conditions would affect the results; they just add a lot of uncertainty, and speak to the possibility that there were unaddressed or as-yet-unknown problems in their methods.)
    • *For the interested, here's a thesis published in 1985 that suggests that "during a colony conversion the supplier inadvertently introduced strain differences making the present rats more resistant to xenobiotic consumption." It's only one non-replication, but even so I didn't have an easy time finding it.

So in general, the post made me a little uncomfortable at times, and the study did too. When it comes to a topic as stigmatized as drugs, there are always people with pet opinions looking for validation; so while nobody should be hushed, everybody should try to speak carefully. The idea that drug use is a totally non-compulsive act, and therefore carries the same moral culpability as any other action, is exactly the notion the neuro and psych communities have spent years and years trying to overcome. Drug courts, which have experienced so much success as alternatives to prison, were only made possible by thinking about addiction as a problem to fix, not a sin to punish.

To say otherwise -- to put all that progress at risk -- should not be done lightly.