Thursday, December 10, 2009

A PLoS ONE Success Story--Taxol Crystals Masquerading as Microtubules


Andy Maloney, a Ph.D. student in our lab, recently read and summarized a very interesting paper in his open lab notebook. The paper, "Taxol Crystals Can Masquerade as Stabilized Microtubules," was published in PLoS ONE in January of 2008 by Margit Foss, Buck W. L. Wilcox, G. Bradley Alsop, and Dahong Zhang [1]. Since our lab is now heavily involved in experiments involving kinesin and microtubules, and because the paper addresses something that had been a mystery to us, it really caught my interest. I'll explain more about that below. But before doing that, I wanted to talk about something probably of more general interest: a success story for publishing in PLoS.

Andy noticed that in their methods they defined BRB80 as having 4% glycerol. Glycerol is used to promote tubulin polymerization, and I've never seen it included in the BRB80 (aka PEM) definition. It could also affect the solubility of Taxol, so it's an important detail whether or not a substantial amount of glycerol was in their standard BRB80 buffer. I strongly suspected that this was just an oversight by the authors...and I could easily have assumed this and moved on. But what about future readers of the article? Was there any way to correct that article? For most journals today, even in the year 2009, the answer would have been, "no." However, this is no ordinary journal, this is PLoS ONE! All I had to do was select the text in question, and then click to add a note. After adding my note, an icon appeared in the article, allowing any future reader to see the question.
[Image: the comment note added to the PLoS article]

I don't know whether authors are notified when their article is commented on. (If not, it would be an important feature for PLoS to add.) So, I sent an email to the corresponding author of the paper (D. Zhang) pointing out the question. In less than a day, D. Zhang wrote back saying that he'd asked M. Foss to look into the issue. And then, again in less than a day, Margit wrote me back to say that she'd looked at the original lab notes and indeed they'd made a bit of a typo in how they described BRB80 in their report. She added a very clear response to my note. She also went out of her way to point me to two subsequent papers that have extended their Taxol microcrystal research [2,3]. These authors deserve a lot of praise for responding to this question so quickly! A few months ago, I received a similarly rapid response from authors of another PLoS article...only two data points, but I wonder if PLoS authors are indeed more likely to respond quickly to questions from readers?

Now, why am I so happy and why do I think this is a success story for PLoS? It's because now, for the rest of time, when readers of this excellent paper do look into the methods, they will be able to see the corrected definition of the buffer used. Given how many times I've been burned by incomplete or incorrect methods, I do believe this will save a substantial amount of time for at least a couple people down the road. (Will the PDF version of the article ever incorporate this note? As it stands now, I don't think it does...it would be very valuable if technology could be worked out to include links to these comments in future PDF downloads.) One more thing: I just noticed that Margit Foss today also posted a new comment on her article. She links to the two papers she'd told me about in her email, as "Relevant references on Taxol crystals." This is a great service to readers, especially since the newer reports [2,3] support a different mechanism for Taxol microcrystal / fluorescent tubulin binding. In summary, many thanks to PLoS for this wonderful journal and to these authors for their dedication to excellent science!

Now, if you're still reading, I'd like to also comment on the very interesting science in their report. Taxol (generic name is paclitaxel, I think) is a drug used in cancer chemotherapy. Its proposed mechanism of action is to inhibit mitosis by stabilizing microtubules in the spindle apparatus. In vitro, Taxol dramatically reduces the rate of microtubule depolymerization. Many people, including kinesin researchers in our lab, leverage this microtubule-stabilizing effect by adding Taxol to microtubule-containing solutions. What I learned from the Foss et al. paper is that the concentration of Taxol typically used in microtubule gliding assays (10-20 micromolar) is far above the solubility limit of Taxol (somewhere around 0.8 micromolar in aqueous solutions). Furthermore, they show that Taxol forms microcrystals above this solubility limit (even at 0.92 micromolar) and that often these microcrystals bear a striking resemblance to microtubule bundles and asters! DIC images of these microcrystals (formed in the absence of tubulin) are shown in these images from Foss et al. [1]:

(scale bar 10 microns)


The final piece of crucial information provided by this article is: these Taxol microcrystals rapidly bind fluorescently-labeled tubulin! (Later reports indicate that it's the fluorophore, not the tubulin, that is binding to Taxol [2,3].) This means that many kinesin researchers (including me) likely have Taxol microcrystals in their samples, and because they become coated with fluorescent tubulin, there is a huge risk of misidentifying these structures as microtubule structures. Indeed, here is a recent fluorescence microscopy video that Andy took of something that at the time was a mystery but which we now know is likely a Taxol microcrystal decorated with rhodamine-labeled tubulin!

Likely Taxol microcrystal in kinesin / microtubule gliding motility assay (using rhodamine-labeled tubulin). Andy Maloney data.

In the past, I've also often seen these structures, which I attributed to "clumpy" or "weird" microtubule structures. For example, I often noticed very bright, thick, and stick-like structures that I called "microtubule logs." It never occurred to me that they were Taxol crystals! (Also, I remember that these structures were much less prone to photobleaching. I wonder if that's because (a) there are buried fluorophores inside the crystals, protected from oxygen, or (b) even on the surface of the crystals, Taxol somehow protects fluorophores from photobleaching?)

Foss et al. go further and speculate on whether this has important implications in vivo (i.e. in cancer chemotherapy). I can't really comment on that, but it's interesting to think about. What's most important for us is that we now know we have a problem with our buffers (too much Taxol!) and we may be able to fix it. The concentration of tubulin that we typically use is about 0.4 micromolar of tubulin dimers. Thus, for a 1:1 ratio of Taxol to tubulin dimers, we'd need a 0.4 micromolar starting concentration of Taxol, which is below the solubility limit. There are at least two things I don't know: (a) What is the binding affinity of Taxol for microtubules? and (b) Do we need a 1:1 ratio to get significant stabilization? If the answer to (a) is something like a few nanomolar, then we may be OK with something around 0.5 micromolar (500 nanomolar) Taxol. If not, then we may have to hope the answer to (b) is "no."

A quick search just now yielded a paper from 1994 that says the binding constant for Taxol to microtubules is 10 nM. That'd be good, except that they also seem to say that they only get stabilizing effects when the concentration is in the micromolar range [4]. Dang! Well, it shouldn't be too hard to try out 500 nM Taxol and to see whether MTs are reasonably stable. It's possible our MTs may be more stable than those used in the Caplow et al. study. It's also possible that the Taxol microcrystals are not affecting the kinesin activity in our system, and that we can do our studies at high Taxol concentration. Even if so, it's great to know about this issue so we can keep on the lookout for Taxol problems.
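Just to make the back-of-the-envelope reasoning concrete, here's a quick sketch (my own, not from any of the papers) that assumes simple 1:1 equilibrium binding with one Taxol site per polymerized tubulin dimer and the 10 nM Kd from Caplow et al.; it ignores their finding that micromolar Taxol may be needed for real stabilization, so treat it only as a rough plausibility check:

```python
import math

def fraction_sites_bound(taxol_total_uM, sites_total_uM, kd_uM):
    """Exact 1:1 binding with ligand depletion (all concentrations in uM).

    Solving Kd = [T_free][S_free]/[TS] for the bound complex gives
    bound = ((T + S + Kd) - sqrt((T + S + Kd)^2 - 4*T*S)) / 2,
    and we return the fraction of sites occupied.
    """
    s = taxol_total_uM + sites_total_uM + kd_uM
    bound = (s - math.sqrt(s * s - 4.0 * taxol_total_uM * sites_total_uM)) / 2.0
    return bound / sites_total_uM

# Numbers from the discussion above: 0.5 uM total Taxol (below the ~0.8 uM
# solubility limit), 0.4 uM tubulin dimers, Kd = 0.01 uM (10 nM).
print(fraction_sites_bound(0.5, 0.4, 0.01))  # ~0.93, i.e. ~93% of sites occupied
```

If this naive picture held, 500 nM Taxol would still occupy most of the sites even after accounting for depletion by the 0.4 micromolar of dimers; the catch, again, is the Caplow et al. observation that stabilization seems to require micromolar concentrations, so the only way to settle it is to try the 500 nM condition.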

References

1. Foss M, Wilcox BWL, Alsop GB, Zhang D (2008) Taxol Crystals Can Masquerade as Stabilized Microtubules. PLoS ONE 3(1):e1476. doi:10.1371/journal.pone.0001476

2. Castro, J. S., Deymier, P. A., Trzaskowski, B., & Bucay, J. (2009). Heterogeneous and homogeneous nucleation of Taxol crystals in aqueous solutions and gels: Effect of tubulin proteins. Colloids and Surfaces B: Biointerfaces. doi:10.1016/j.colsurfb.2009.10.033

3. Castro, J. S., Trzaskowski, B., Deymier, P. A., Bucay, J., Adamowicz, L., Hoying, J. B., et al. (2009). Binding affinity of fluorochromes and fluorescent proteins to Taxol™ crystals. Materials Science and Engineering: C, 29(5), 1609-1615. doi:10.1016/j.msec.2008.12.026

4. Caplow, M., Shanks, J., & Ruhlen, R. (1994). How taxol modulates microtubule disassembly. The Journal of Biological Chemistry, 269(38), 23399-23402. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/7916343


Link to FriendFeed discussion thread.

Tuesday, June 9, 2009

My first rating and commenting of a PLoS article in my own field (Scary!)

SJK 6/9/09: Here is a link to related friendfeed discussion.

I just finished reading and commenting on a PLoS ONE article that is near my own field of research. The article is titled, "Dissection of Kinesin's Processivity." The authors are: Sarah Adio, Johann Jaud, Bettina Ebbing, Matthias Rief, and Günther Woehlke. You can see my rating and overall comments here. (Since I'm not sure if that link will work, I'll also repost my comments below.)

Throughout the process of reading and commenting on this article, I learned a lot more about my fears and barriers to PLoS commenting. I discussed some of these in my prior post about my first PLoS rating. In contrast to my first rating, this article is smack in the middle of my field of interest (the kinesin molecular motor). I deliberately chose the most relevant PLoS article I could find. I'd estimate that my fear of placing comments was at least 10 times higher than for an article outside my field. I definitely felt like my comments were piping directly into the authors' email inboxes, ready to enrage them at any misunderstanding or criticism I posted. I still feel this way and am a bit worried. My worries are probably justified to some extent, since I am very new to this field. Thus, I could easily be seen as an ungrateful newcomer who hasn't paid his dues. And of course the people who wrote the article could end up anonymously reviewing my own papers and grants.

Given those worries, I came close to deciding not to post my rating. However, after much reading and thinking about their results, I felt compelled to make a serious comment about the error analysis supporting one of their conclusions (not their major conclusion). I was confident that my criticism was fair, and convinced myself that posting the comment was the right thing to do--perhaps I can save another reader a lot of time, or even help the authors out if they read it. I posted my criticism directly in the article, along with several typo corrections. After doing that (late last night), I realized that if/when the authors DO see my comments, they'll see a string of petty typo corrections and then this criticism, but nothing positive at all. That's a problem!!! Because of this, I decided to sleep on it, and compose an overall rating with positive comments today. I was busy most of the day, but finally tonight was able to finish my rating. In all honesty, though, without having travelled that slippery slope of commenting, I don't think I would have posted this rating tonight. I would have balked at the risk of angering the authors, sticking my neck out, and possibly being wrong. I probably would have convinced myself that these risks outweighed any meager potential gain that the world of science would get from my remarks.

I'm a bit worn out now. Hopefully in the comments here, or more likely on FriendFeed, we can talk about these things. I hope in the next couple days to expand on my review of the paper in my research blog, and to include it as my first Research Blogging attempt.

Reposting of my rating and overall comments on the article

This is what I submitted to PLoS as my rating:

Insight: 4 stars, Reliability: 3 stars, Style: 4 stars.

The authors recently characterized NcKin3, which is the first known naturally dimeric but non-processive plus-end motor. In this report, they are leveraging this discovery to study chimeric constructs between NcKin (a dimeric, processive Kinesin-1 motor in the same organism) and NcKin3. They make two different chimeric constructs: one with the head of NcKin and the neck of NcKin3, and the other with the head of NcKin3 and the neck of NcKin. Importantly, the head included the core motor domain AND the neck linker region.

I congratulate the authors on a lot of very nice work that must have been very difficult! The results they report come from an impressive array of difficult assays spanning single-fluorophore position tracking, single-molecule bead motility assays with optical tweezers, gliding assays, and a variety of ensemble biochemical assays.

Study of the two chimeric constructs, in comparison with the NcKin and NcKin3 wildtypes, allowed the authors to gain insight into which parts of the kinesin motor are important for conferring processivity onto dimeric constructs. (And also, which parts are important in NcKin3 for inactivating one of the heads.) As far as I know, these are the very first two chimeras created between these two kinesins and thus open the door for many more investigations into how processivity is regulated in the motor domain, neck-linker, and neck regions. The results here indicate that many more chimeric constructs and site-directed mutagenesis studies will be necessary and valuable. Of course, that is a lot of work, but the results here open the door for those further studies.

For me, the most fascinating result was point (iii) on page 4. The authors show that the Head3/Neck1 construct seems to get stuck in a "kinetic dead end." As they say, the kinesin-1 neck appears to confer some elements of processivity, but not all. Combined with the missing elements (which the kinesin-3 head lacks), the motor is actually a bit more handicapped, as shown by a gradual decrease in gliding velocity as the concentration of motors is increased.

I also had a couple questions about the paper that I noted previously (see prior article comments):

* Statistical significance of processivity measurements.

* Lack of discussion and comparison with previous Ncd/Kinesin-1 chimera results

DISCLOSURE: Our lab (http://openwetware.org/wiki/Koch_Lab) has recently obtained major funding to study kinesin. I do not think we have competing interests with these authors or the work they've presented here, but I thought it worth mentioning.

Thursday, May 28, 2009

Pondering our Kochlab graduate student compass...how about "Always Contribute?"

Last weekend, my friend Richard Yeh posted a couple essays by Paul Graham onto Facebook. I loved the essays and linked to one of them on friendfeed. Michael Nielsen, in turn, pointed me to another essay by Graham that he thought I'd like, "How to Do What You Love." Michael was completely right, I loved the essay. If you have not read that essay, I command you to stop reading my blog and to go read that article! You'll get much more out of his essay than out of this blog.

OK, now that I have you defiantly reading my blog, intent on garnering something useful from it, to spite me, let me continue. The "Do What You Love" essay resonated with me very strongly. It reminded me of the discussion of talents in "First Break All the Rules" by Buckingham and Coffman. I think Graham, Buckingham, and Coffman are all talking about the same thing: that finding work you love is a key to happiness (and productivity), but that finding out what you love is a very difficult task worth working very hard on. The language of Buckingham and Coffman is to talk about finding one's "talents." I've been talking with my graduate students about this a lot for the past six months. (In fact, it's time for me to have another awkward talent-finding session with them, I do believe!) I also preach to all of my undergraduate students about the importance of finding their talents and I give them an end-of-semester assignment to think about their talents. I'm delighted to have been shown the Graham essay, because I think it is yet another way of presenting this argument to students, and a very eloquent one.

Since I loved the essay so much, I sent it to the person who gave me the First Break All the Rules book. He wrote back to me and keyed in on the "always produce" part of the Graham essay:

"Always produce" is also a heuristic for finding the work you love. If you subject yourself to that constraint, it will automatically push you away from things you think you're supposed to work on, toward things you actually like. "Always produce" will discover your life's work the way water, with the aid of gravity, finds the hole in your roof.

While reading the note, the "Do What You Love" essay finally clicked with another essay I read by Graham last weekend, "How to Make Wealth."[1] It's another fantastic essay that I feel like commanding you to read. One premise in that essay is that people in start-up companies can be 20-30 times more productive than they can in an ordinary 9 to 5 job. Thus, a small group of people can create a tremendous amount of wealth by working really hard for a few years. They can also get financially rich as a reward for their production of wealth for the world. The thing that clicked for me is that you cannot make the world a better place without producing. Most people are producing at a rate at least 20 times less than they could be producing, if they found what they loved and were able to do it all the time. I garner great optimism from the fact that we're on average so incredibly inefficient. It means most people are not even close to any absolute point of diminishing returns, and with the right kinds of changes, they could easily multiply their productivity and impact on the world manyfold.

So, then I started thinking about our research lab and the students in our lab. I thought over things that each student has done in the past year that made me profoundly happy. As I thought over all these things, I realized they had a common theme: I was recalling instances of those students being unusually productive. Furthermore, my favorite recollections involved those where the students had shared their work on our public wiki, or our blog, or in some other fashion open to the world. This made me think that it is now very easy for me to summarize my main expectation and goal for my students: "always produce," borrowed from Paul Graham, of course. My students like to make Kochlab slogans, so I thought of "Kochlab: Produce" or "Kochlab: Always Produce," but if you pronounce the "Koch" correctly ("Cook"), then it has the problem of making one think of produce the noun, e.g. apples and bananas. Thus, I am thinking something like "Kochlab: Contribute" or "Kochlab: Always Contribute."

In some sense, "always contribute" means the same thing as "always produce." The point of the producing, in regards to Graham's essays is that you're creating wealth, and therefore contributing something to society. However, I like "contribute" much better, because it has much more clarity in the science world. "Contribute" automatically points the way towards open science (aka Science 2.0). Whereas, production in the traditional scientific world ("closed science") can be done with a very limited amount of contribution.

I have mulled it over for a couple days now, and I think I really like this as the main piece of advice and constant guidance to give to our students: "always contribute." Does this work? Let me try it out in a few ways:

1. By the time the students get their Ph.D.s, I want them to have learned a tremendous amount about what their talents are. I want them to clearly see what the next step in their career should be in order to leverage those talents and help them be successful and happy. A compass of "always contribute" will lead the students towards finding ways of being productive instead of spinning their wheels. These activities will be the means by which the students and I discover what their talents are. This is the point of Graham's "always produce" advice. Check.

2. By the time the students get their Ph.D.s, I want them to have a strong and large professional network of people that know them and the work they have done. "Always contribute" tells them that Open Notebook Science is a good thing to do. Sharing code, design drawings, personal summaries of research papers, tips and tricks on protocols -- these are all ways to contribute. In our limited experience in our lab, we have received validation after validation after validation that open contributions get attention. We can see this vaguely via page views or Google search rank or quite vividly via positive feedback from people that we admire and people whom we've helped. Combined with traditional publishing (also a contribution) and attending scientific meetings (contributing), I think "always contribute" will make building a powerful professional network almost automatic. Check.

3. I want our lab to produce innovative, exciting, and high-impact scientific results. Will "always contribute" point us in the right direction for this goal? Does it point in any direction? I need to think about this one some more. I feel like it must point in the right direction--for example, innovations are contributions. But there's some risk that focusing on contributing could lead to a lack of overall production. Basically I am thinking of the standard arguments against open science -- increased likelihood of scooping, which in turn reduces chances of funding and publishing. I fundamentally don't believe that those arguments are strong enough to tip the balance, but I don't think that's been proven yet. Another example of how "always contribute" may be counter to our lab's scientific productivity: some students may discover that they are wickedly talented at contributing in ways that do not advance their research projects. That's a great thing to discover! An example that hasn't happened in our lab yet would be for a student to discover that they're fantastically talented at writing popular science articles and want to do so at the expense of doing any research. I want students to discover something like this. It fits perfectly with items 1 and 2 above. However, it is clearly a problem in regards to item 3. This is not a new problem, though. My job as a research professor is both to mentor students and to ensure production of research results. The way the system is set up, those goals are not always aligned, and are sometimes in conflict. It's possible to be rewarded with research grants even by abandoning the best interests of your graduate students. Most people in the system know this and have seen the devastating results it has for too many Ph.D. students. I am absolutely against doing that and despise many people who have chosen that route. On the other hand, I don't have a good idea about what to do if "always contribute" turns into "I can't do my research." That's definitely going to happen eventually. In many cases, it will be possible for the student to discover their true calling in life, but then re-focus on making the research contributions necessary to finish the Ph.D. that they've invested so much time in. Will there come a time when the student should rightly choose to abandon the Ph.D.? Ugh, this is a tough one: Item #3 gets an "almost check / needs more thinking."

I've painted myself into a corner now. If I were Paul Graham, I would figure out a way to backtrack. But I'm not, and I don't really know how to end this blog post, so I think I'm going to end it by linking to another Graham essay about writing essays. This one was linked to me by Kartik Agaram on friendfeed. It's an essay that explains why high school and college writing assignments sucked so badly. If you hated those assignments but never quite knew why, you'll love this story. Plus, you'll feel vindicated and it will give you one more reason to trust your gut in the future. For example, if your gut were telling you that "always contribute" is a fantastic compass to present to your graduate student mentees.

Footnote:
[1] I'm taking a bunch of liberty here with my own story. It wasn't until I started writing this blog that I realized the two essays had clicked. But I think subconsciously this is what was happening. Also, I probably had the Gin, Television, and Social Surplus essay by Clay Shirky in my head, as Joelle Nebbe had linked to it recently.

Tuesday, May 26, 2009

My first PLoS comment: High rating of an article on TSLP being the cytokine link between eczema and asthma

5/27/2009 SJK Note: After I wrote this, Bora Zivkovic sent me links to the PLoS community blog where he talks about commenting and rating PLoS articles. Both are very much worth reading! Bora is the Online Discussion Expert for PLoS.


Recently, William Gunn composed an excellent article discussing online identity and the making of public comments in scientific circles. Without immediately spiraling into a stream of ridiculous conversation, I can't really comment on his post, or the ensuing friendfeed thread. Suffice it to say that Mr. Gunn and others on friendfeed inspired me to be a lot bolder in commenting on PLoS articles.

So, tonight I made my first comment on a PLoS article. Previously, I had viewed commenting on the actual article site as a very formal procedure that required attaining the highest level of understanding of the article before submitting a comment. Essentially, I was viewing commenting on an online article the same way I viewed submitting an official comment to an article published in Science or Nature (or other journals). Published comments in those journals are almost always refutations of the article that seemingly without fail lead to concomitantly published rebuttals by the original article authors. Thus, the culture of commenting on articles is fraught with nastiness and putting one's scientific reputation on the line. This could be the reason that so far "official" online commenting on peer-reviewed articles has been very limited, whereas "unofficial" or off-site commenting has been more common. By "unofficial," I am loosely referring to comments made anywhere that is at least one link removed from the actual published article site. For example, an external blog, friendfeed discussion, or notes left on article managing services such as citeulike.

It occurred to me while laughing and crying my way through the recent friendfeed discussions (OK, fine, here's a link to perpetuate the madness) that this culture may be relatively easy to change. (Aside from any questions of whether it's necessary to change.) In my opinion, PLoS has already made one innovation that vastly increases the odds of a user making a public "comment." They have separated the article ratings into three categories: Insight, Reliability, and Style. From my personal experience, that opens the door almost all the way in terms of inviting some kind of reader feedback. Rating an article on "Style" does not carry much professional risk from my viewpoint. Rating on "Insight" requires understanding of the possible impact of the article, and is thus much more weighty than the "Style" rating. However, I personally feel I can rate an article on "Insight" without assessing the quality or reliability of the methods and data. I recently did this with a PLoS ONE article I saw on single-cell sequencing of uncultured organisms. Rating an article on "Reliability," I feel, requires the kind of in-depth understanding that would be required for me to send a formal letter to the editor of Science or Nature that could be published. Thus, the barrier for me to rate on "Reliability" is quite high. Especially since if I'm going to put in enough effort to feel completely justified in rating, it's likely to be less than a 5-star rating. (I guess I'm feeling like I spend more time reading articles that I disbelieve than those I do believe?)

Another reason that placing online comments does not have to be as formal and negative as with traditional published comments is that the comments are published without a delay waiting for the original authors to compose a response. This then reduces the expectation that the publishing authors must respond, and therefore takes the formality down a bunch of notches in my opinion. Also, in terms of PLoS, the whole mission of the journal is to make research more broadly and rapidly available--and thus I think there is an expectation that the comments should also come from a broader base of readers.

So, that is what inspired me to take the time to read a PLoS Biology article and compose my first online comment tonight. I was also inspired by the belief that we're still very early in the process of dictating the culture of online discussions of peer-reviewed research--and thus a concerted effort can make an impact on what ends up happening. This inspiration was combined with the coincidence that my wife sent me an article from BabyCenter today that caught my interest because it was discussing the recent PLoS Biology article. In the end, the thing that finally tipped the balance and convinced me to take the leap and make my first PLoS comment was a healthy dose of "WTF." So I stopped worrying and took the leap. :)

Monday, May 25, 2009

Time for more blogging! Warming up... You can't believe what you see...

I'm just now coming out of the end-of-semester fog. I've been through three academic years so far as an assistant professor. That's six semesters, and in five of them I've taught a course. In all five of those semesters, I ran out of steam and could not keep up with all the things I'd have liked to have done in my areas of research, mentoring, teaching, and family. In my opinion, 16 weeks is too long for a semester...I notice myself and my students beginning to burn out after 8 weeks.

It's a pattern for me that I take on too much at the beginnings of semesters and then have to cast things aside as I get overwhelmed. Before this semester I took a big leap into communicating with scientists on the internet, via blogging and discussions with a bunch of new friends on friendfeed. As the semester engulfed me, I ended up casting aside blogging, but actually was able to maintain a lot of dialogue on friendfeed (e.g. in the Science 2.0 and The Life Scientists rooms). So, even though I was disappointed not to keep up the blogging, I can look back and see that overall I made a huge amount of progress in terms of scientific communication and meeting (virtually) many great people around the globe. I'm very happy with that, and I'm even happier that a couple of my graduate students came along with me. They have made their own connections with other scientists and made substantial progress in open science and open scientific communication.

I've been excited for the last week or two to ramp up the blogs again.  In particular, I'm excited about a couple things.  One is to try out Research Blogging.  This was suggested to me by Michael Nielsen in a friendfeed thread in which I learned a lot, but which I started by spouting off way too ignorantly.  My apologies to Richard once again!  As I understand it, the service will allow me to write up blog reviews of specific research papers and then label my posts as suitable for listing in Research Blogging.  For me, it will be a step up from what I started doing this semester, which is trying to make a few notes on every paper that I add to citeulike.  You can see my citeulike RSS feed, with my comments added, on this Yahoo pipe.  It will be a lot more work to write what I think is a worthy blog report about a paper, but I'm excited about testing the waters.  The way citeulike is set up, I feel like my comments there are pretty much wasted as far as benefit to others goes.  In the future, I'm expecting my group to communicate more via citeulike (or another service) as a form of "journal watch."  But as it stands now, I'm pretty sure nobody reads the comments I add to my citeulike library.

The other thing that should be exciting is to write a guest blog for Lisa Green at NextBio.  We've talked about this a bit, and I think it may happen in the next few weeks.  What I will blog about that is worthy of a "guest blog," I don't know...but it should be a fun experience!

Finally, I needed some inspiration to come in here and start dusting off my blogs. I had been a bit depressed at their dormancy, which was a positive feedback loop preventing me from blogging. A blog warm-up idea came to me earlier tonight as I was staring at the ceiling fan. I recalled something I'd noticed maybe 6-10 years ago in graduate school, which made me recall something else I'd noticed at the same time. They're two "illusion" kind of things that I think are fun, and which I'm going to describe here without actually researching them scientifically. Hopefully someone who knows something will pipe in and tell us something about them!

1. Blurry motion seems slower with peripheral vision


I first noticed this illusion when I was trying to make large graphs in Origin back in graduate school. Whenever I made the mistake of clicking on very large matrix plots, the graph would flash for like a minute before opening the graphing preferences window. I would look aside in frustration each time. And then I noticed that when I looked at the flashing graph (which was more like quick vertical scrolling of a dark patch), it seemed to scroll much slower in my peripheral vision. A few years later, I noticed the same thing when looking at a ceiling fan, an experiment much easier to reproduce.

Here is the experiment you can try

Find a ceiling fan that's not too far away, and spinning at about 2 hertz. When you look at it directly, if you're like me, the default is to ignore the individual fan blades and perceive a blur. You can change the image dramatically by following an individual blade with your eyes. For me, that motion seems "slower" than the blurry motion, but it's not the illusion I'm talking about here. The illusion that I see comes when you quickly switch from looking at the fan with your central vision to using your peripheral vision. When doing that, the motion seems to slow down substantially. For me, it is substantial and repeatable.

I couldn't resist checking into this a bit on wikipedia, and I found something called "flicker fusion threshold" in this wikipedia article on peripheral vision.  In the flicker fusion article, it is said, "so flicker can be sensed in peripheral vision at higher frequencies than in foveal vision."  Given this, I wonder if the effect I am seeing has to do with some part of the visual system normalizing a given flicker relative to the maximum possible perceived flicker?  The fan produces a constant rate of flicker...but it is a larger percentage of the fastest possible flicker when looked at with foveal vision?

2.  I think you can hear individual splashes in the roar of a waterfall.
(Photo by Ant J on Flickr.)
This is an experiment I can't replicate easily now that I live in Albuquerque.  We do have waterfalls in the mountains, but it's nothing like when I lived in Ithaca, NY during graduate school.  Almost every day, I would cross the bridge over the waterfall at the end of Beebe Lake.  This was on my way between Clark Hall (the physics building) and A-Lot, the parking lot the bus dropped off at. I would often stop and stare at the falling water. I noticed one day that if I followed with my eyes a particular part of the broken stream from the lake to the point at which it hit the rocks, I could audibly hear it "splash" within the "blurry" roar of the waterfall.

Here is the experiment you can try

If you have access to a waterfall, this experiment isn't too difficult to try.  As I mentioned above in item 1, you can track the blades of a ceiling fan with your eyes, and this is the same thing you need to do with the waterfall.  Track a "patch" of water all the way from the point at which it starts falling to when it hits the rocks or water below.  Keep doing this repeatedly in a cycle, and you should "hear" the individual splashes--or at least I do.  It's also fascinating to just look at the water as it breaks into pieces on the way down.

I just attempted 30 seconds of lazy google research for this phenomenon and was not successful at uncovering a wikipedia article to give insight into this effect.  I am 50% sure it's an auditory illusion, and the other 50% of me thinks, "why not...maybe the sound isn't as 'blurry' as it seems?"  It would be possible to check this with an objective audio/visual recording system.

3. The Blue Field Entoptic Effect -- Mystery Solved!

Also known as Scheerer's phenomenon. As opposed to the above two illusions, I now know what this one is. I first noticed it on airplanes when flying in a really bright sky. In Albuquerque, we have bright blue skies frequently, and I can see the effect. According to wikipedia, most people can see what I see in these conditions: countless point-like bright white things that travel in squiggly paths in the field of vision. It turns out that these are white blood cells flowing through the capillaries that cover the non-foveal parts of the retina. On a blue background (e.g. the sky), those capillaries produce very dark lines all across the field of vision--due to red blood cells absorbing blue light very well. Somewhere in the visual processing system, these dark lines are "edited" out, so we don't perceive them. However, when a white blood cell travels through, it is mostly transparent, and the increased light is perceived as a tiny white thing in the field of view (which it is, I guess). One of the most fascinating things to me is that it allows you to actually visualize your own blood cells flowing. According to the article, some doctors have tried to leverage this as a diagnostic technique.

How to try out the experiment:

This one is easy.  Wait for a nice sunny day.  Pick a big blue patch of the sky and stare at it for a couple minutes.  Keep your attention focused on looking for bright dots appearing and traveling in squiggly paths.  You won't be able to follow individual ones, but by trying to use your peripheral vision,  you can see hundreds or thousands of them.  They are quite different than "floaters."  They are smaller (point-like), quicker, and more fleeting.

4. The McCollough Effect -- An amazing optical illusion

I'm including this one because I realized that I ended up having a common theme of "can you believe your eyes?"  The answer is "no!" apparently.  So, I'm including this final, amazing illusion that is well worth trying out.

How to try out the experiment:

If you have 10 minutes, you should go try out this optical illusion: The McCollough Effect. Spend the 5 minutes they recommend and then test it out for yourself. You can read a description of it on wikipedia. I found this illusion mind-boggling, because for me it persisted for DAYS after I'd spent the five minutes training whatever part of my visual system is being trained. Just an amazing demonstration of an ability to unwillingly and semi-permanently "program" part of your brain (or visual system at least) just by staring at some images for a few minutes.

Friday, March 27, 2009

The value of an open-access publication record for an academic job search and tenure & promotion.



11:22 am update

Steve says: I just read Gideon Burton's excellent post about "Intellectual Apartheid." One of his recommended steps for administrators is "Update promotion and tenure policies to favor open access publications and to accommodate evolving scholarly genres (such as data sets, software, and scholarly tools that build the cyberinfrastructure)."


Earlier this week, my department chair sent our department a link to an article in The Chronicle of Higher Education about MIT's open-access policy announcement. (I believe there is the standard irony that the article from the Chronicle is limited-access, but you may be able to find freely available stories on Google news.)

Without being an expert on open-access or doing much background research, I decided to send the following email to my department. I'll let you know what happens (if anything)!

Email to Physics & Astronomy faculty:

This got me to thinking. Our department could adopt a simple & public policy, such as: "Regarding new faculty hires and promotion & tenure decisions, we highly value an open access publication record. We place a value on open access publishing comparable to the value we place on publishing in top-tier scholarly journals which may have limited access." I don't know whether we could agree on such a statement, but if we could, I think it would place a positive light on our department, similar to how the MIT and Harvard statements below do for those universities.

As a tenure-track faculty member in our department, I do feel that open-access publishing will be viewed positively by the voting faculty. It would be good to know that more formally, but I'm not worried. A much more worrisome thing for me is how open-access publishing will affect my Ph.D. students. Will they lose out in job searches or will they stand out? Our own department's stand on this issue won't help our own students. But maybe by taking a public stand, we can set an example that other departments can follow.

I think it'd be worth spending a bit of time discussing at an upcoming faculty meeting.

--Steve

Tuesday, February 24, 2009

Assistant to Robot, Promoted to Robot

I was telling my grad students this story last week, and they liked mocking me so much as "assistant to a robot" that I thought I should post the story on here so more people can mock me. My first job in a research lab was the summer before starting my undergraduate career at the University of Michigan as a physics major--1992. I was really lucky to get a summer job in one of Francis Collins' labs at UM. Yes, I am name dropping. The name I just dropped was that of Francis Collins, who was leader of the NHGRI from 1994-ish to 2008. Prior to that he was at the University of Michigan, with primary roles of hiring me as a work-study student and also leading teams that found the genes for cystic fibrosis, Huntington's disease, neurofibromatosis, and other diseases.

I have been lucky so often in my life, and in particular in my career "planning." I'll tell you some other day how lucky I got in obtaining my current job at UNM. This is how I obtained my first job in a research lab: I was friends with Dr. Collins' daughter, and I liked science. I knew he had a research lab because he had visited our classroom in Junior High to tell us about cystic fibrosis and genetics. So, I asked my friend if I could work in her Dad's lab. A few days later, she told me, "he says yes," or something along those lines. I was 17 years old at the time. But when writing this story, it seems like I was younger, as I recognize this strategy as the same one I used for obtaining a rollerskating "skate" with a girl in the 6th grade. I think the song was "Manic Monday."

I actually worked in a lab led by Chandra Sekharappa, who I think now has this lab. He was such a great guy and I am eternally grateful to him, Dr. Collins, and the other people in that lab who welcomed the unusual physics undergraduate to their lab. As I am writing this blog entry, a flood of memories are coming back to me. I learned so many things from working in this lab, and now, 17 years later, they are still coming back to me and helping me in my research (which coincidentally, or probably not coincidentally is tending towards genomics applications). In this lab is where I learned to pipette. I learned what PCR was. I unfolded paper towels for Northern blots. I "stuffed tips" (FYI: I could use each hand independently on two different boxes). I helped with "rows and columns." I washed dishes. Wow, did I wash dishes. I became obsessed with: -80 freezers; dry ice; vacuum-bake ovens; centrifuges; liquinox; reverse-osmosis water; latex gloves; latex gloves filled with water and frozen in the -80C freezer; and latex gloves in the vacuum-bake oven.

I cannot even come close to expressing how important this experience was to my career. Being immersed in this environment was so valuable -- whether I knew it at the time or not. The lab was focused on cloning the gene for early-onset familial breast cancer. (I believe another lab ultimately beat them by identifying BRCA1, but I'm not sure.) There was such a palpable excitement about the race to find this gene and I loved watching it. I distinctly remember that Dr. Collins welcomed me into group meetings, where the postdocs or grad students (I'm not sure what they were) would pass around these developed images of gels with the faintest of bands that proved something about their PCR reaction. I distinctly remember that they'd let the ignorant physics undergraduate stare at the film and then tolerate it when I said, "I think you're crazy, there's no band there." Somehow they kept inviting me, and they kept trying to explain to me "gene jumping" or "chimerisms" or "FISH" or other topics. The collaborative atmosphere in Chandra and Francis's lab is something I'm striving to replicate in our lab at UNM.

OK, now onto the good part. Of course, being the undergraduate in the lab made me a target for the grunt work. More than that, I wasn't even a biologist! So, it happened that the lab (or someone on the floor) had gotten their hands on a robotic system that could essentially print microarrays on filter paper. Or perhaps the predecessor to microarrays. The "robot" could print media from sixteen 96-welled plates onto a single filter paper. Then, these 4x4 arrays could be used for some kind of hybridization assays. This was a big deal, and the robot cost something in the 100's of thousands of dollars. Basically, you'd put a stack of 16 microtiter plates in the holder next to the robot. You'd set up the filter paper, and then the robot would proceed to: grab a plate; take lid off plate; put plate down; stick pins in plate; stick pins on filter paper; clean pins; put lid on plate; put plate away; repeat with new plate.

The problem was, the robot was controlled by some kind of SGI machine that nobody knew how to program. It cost a whole bunch of money to have the tech rep come out and program the thing. Everything about the robotic system worked well. Except, after taking the lid off the plate, it accelerated too fast, and media would splash from one well to another. This was terrible. I know what you're thinking: ask the physics undergrad to reprogram the robot! This is what I was thinking too when the grad students (or postdocs) explained to me the problem. I'm pretty sure I could have figured this out, no matter how obscure and proprietary the programming language. But, this was not my fortune. Instead, what they had figured out was that my $5.50 / hour salary was a perfect solution. I could perform the first part of the robotic sequence (grab plate; take off lid) and then at the appropriate time, hand the plate to the robot. So, this is when I took on my esteemed position as "assistant to the robot." I don't know how many days this lasted ... probably not too many, I think maybe for a few hundred plates or so. I do remember how utterly boring it was. I actually tried to read a book in 20 second increments while I tag-teamed with the robot.

Perhaps during my time as Assistant to the Robot, I impressed people enough to get my first promotion in the lab: to actual robot. (I previously mentioned my prowess at stuffing tips and unfolding paper towels, which probably factored into this promotion.) This job took most of a summer (1993 maybe?) and actually I'm pretty proud of it. My task was to copy the Washington University YAC (yeast artificial chromosome) library. I think it was about 200 96-welled plates and it took me most of a summer to make two copies. I became the most efficient plate-pourer of all time (in my own mind), and discovered that you can actually pour them so thin that even yeast can't grow. I wonder if these YAC libraries are still around nowadays?

Well, that's the anti-climactic ending to my story. I don't have a coherent point, and I know this goes against all of the how-to-be-a-good-blogger advice. My points are: (1) I collaborated with a robot in the past because it was cheaper than fixing the robot and (2) I had an awesome undergraduate research experience that has profoundly impacted my career. In regards to (2), there are so many lessons I can learn to help me in my current position as a research mentor. The main thing I have been thinking is that undergraduate research can be valuable for the lab, and incredibly valuable for the undergraduate. I feel like we're not even coming close to achieving what we could at UNM in regards to undergraduate research, and I would like to change this over the next couple years. I routinely meet Junior-level physics majors who are interested in research, but haven't yet been in a lab. We are next door to Sandia National Labs, and only 2 hours away from Los Alamos National Lab...both of which have amazing resources and opportunities for undergraduate scientists. And of course, we have plenty of our own labs at UNM. One of my goals over the next few years is to help our students find research jobs earlier in their careers...perhaps even before they start at UNM. Whether their jobs can be as prestigious as my own assistant to robot jobs, I don't know, but I can definitely strive for that!

SJK Note 4/2/09: I found a picture in my garage of the completed robot project. That's me admiring my 400? or so microtiter plates, all nicely stacked and labeled.


Wednesday, February 11, 2009

A science outreach idea, what do you think?

I live on a cul-de-sac where we are lucky enough to know and enjoy hanging out with many of our neighbors. Many have kids who play with our kids. It's like when I was a kid, and I thought those days had passed, but they haven't. (As an aside, I love hyphen-ating words, but it bothers me that cul-de-sac is hyphen-ated.)

Many of my neighbors really enjoy hearing about the science we're doing in the lab, and I really enjoy talking about it with them. This actually led to a very fun event we did during Winter Break where I brought in some neighbors to our Junior Physics lab course so they could get hands-on experience with some very cool physics. You can check out our OpenWetWare page (unfinished) for the event (sorry the facebook page seems to be private). A brief summary is that I only had to invest a few hours of time, and I think the attendees really enjoyed it. I know I did.

Recently I had an idea for science outreach that I'd like your opinion on. The idea is that I (or a student) will explain our research to one of our biggest neighbor fans. Then, we'll record an interview with him describing our science, what we do, its importance, etc., from his point of view. Or we could do it with a couple neighbors talking. But the main point is that the non-scientists will be explaining the science to the (mostly) non-scientist audience on youtube.

There are a few reasons I think this may be useful and fun. First, I always find it informative and fun to hear people "re-describe" our research to someone else after I've described it (unless it's printed in a magazine). Second, I have an inkling that it would be effective for communicating to non-scientists. Third, the people I have in mind for this project are very good at picking out the essence of what I'm telling them, and distilling it down to the exciting parts in layman's terms.

So, do you think this is a good idea? Maybe it's been tried before many times, and if so, please send me the links. If we do give this a whirl, there are a couple of things I'd like to figure out how to do:
  • Record video with two cameras (for example, one on me, one on him)
  • Splice and edit the video to make a good video for posting to youtube
I'd really appreciate advice on software and hardware to use for those purposes. Thanks!

Saturday, February 7, 2009

Personal open science challenges

There was recently a very interesting thread regarding open notebook science in the Science 2.0 friendfeed room. This was in response to Michael Nielsen announcing that Tobias Osborne had begun doing open notebook quantum information theory. I think this is fantastic, and my kudos go to Tobias (whom I don't know). The friendfeed debate had to do with whether Tobias's work can be called open notebook science, which has a specific definition.

The debate got me thinking again about something that's been bothering me recently. I've been having a hard time getting my thoughts straight, and that's still true. I'll quote myself and then try to clarify:

A really good motto for a scientist who wants to be open could be this: "Be as open as I personally want to be." This is very different than "be as open as possible." What I am specifically thinking is that young scientists (i.e., not yet beaten-down) seem to usually have very natural tendencies towards open science. But the overall level of natural talent for openness may vary enough that "open notebook science" may just not be the best method of openness for some people. But everyone can strive to "be as open as they want to be", and resist pressure to be closed coming from outside (fear of scooping; lack of technical means; resistance from colleagues). In contrast to these external pressures, I think it may be legitimate for someone to want to be open, but also maintain some privacy so they can get a personal reward of doing something all by themselves, for example. Perhaps posting all of their electronic notes 6 months or a year down the line.

"Be as open as I want to be." I don't know if that has value for anyone else, but it a very powerful mission statement for me right now. It's powerful, because I really believe in it, but I am not achieving it. I'll talk about that later in the post. But, first I want to talk about it in a more positive light.

What kind of openness should be required?


I am starting to decide that I'm not going to try to force my lab members to do specific kinds of open science. I am thinking instead that my goal will be to remove as many barriers as possible so that my lab members can achieve the level of openness they desire. I believe that adults have unchangeable natural talents, and I think that scientists will be cut out for different kinds of openness. For example, Anthony in our lab has recently started doing open notebook science, true to its definition. I am really excited about this. He is a natural for ONS. I don't think that he has any problem writing anything in public. In fact, I think his notebook being open is a motivator for him to make it even better than he would a private notebook. This is the way he's wired, and it's not surprising if you know him. In contrast, I think some people would find that their creativity and drive are seriously hampered by doing ONS. For example, me as a graduate student. I don't know whether doing ONS would have worked or not. I actually kept what I think is a very good electronic lab notebook. But it was private, and I don't know whether I would have taken as many notes (and dropped as many F-bombs) if I knew it was public. I also don't know if I would have reacted well to someone posting a suggestion to me when I was immersed in trying to figure out something by myself. I do know that I would have been fine posting my notebook in public with some time delay. In fact, if anyone posts a comment to this blog asking me to post my grad school notebook in public, I'll go ahead and do that...f-bombs and all.

So, while I don't think I'll require ONS for all lab members, I may have other requirements, such as delayed notebook publishing. What I am worried about is hampering creativity and productivity of young scientists by striving for inappropriately selected open science goals. I do want my students (and postdocs in the future) to strive for open science, but I want them to do it in the way that best leverages their talents.

I am failing at my own principles

"Be as open as I want to be." I and our lab have made some great strides in the past few months towards this principle. For me, I think the transformation was fueled by a strong belief in the power and even morality of open science. But it did take a heavy dose of "what the fuck" to spark the flurry of steps I took this past winter break. (I think that may be my first f-bomb while blogging; I feel alive.) I'm happy and excited about what we're doing. But I'm also not achieving openness as much as I'd like. And I'm confused. Two themes dominate my struggles with openness:
  • The students in my labs and their scientific careers
  • My collaborators, their careers, and my gratitude for their assistance and mentoring
I'm not trying to sound altruistic here. One of my talents is that I get genuine happiness out of feeling like I've helped other people succeed. You can see that both of the items above feed that desire in me, and I do think those two items are what is confusing me. In contrast, the issue of being scooped, in itself, does not impact my thinking. I do worry about being scooped, but I have already concluded that being open does not increase the chances of it happening. I believe being open substantially decreases the chances of being accidentally scooped. Furthermore, I even believe that being completely open would reduce the chances of being purposefully scooped, because the published track record would make it easier to shame the person who did the scooping.

Being scooped would be emotionally devastating. This is true. And it would have an impact on my lab and my students. This is what my students and I have been discussing the past couple years, and I think we've developed a collective (perhaps unspoken) understanding that we'll be OK even if that does happen. I think I can protect and rescue my students from that scenario. The collaborator issue is so much more complicated.

The collaborator issue is what is bothering me quite a bit right now, and I really don't have an answer. Most of the scientists I know personally are "traditional." The ones I am trying to collaborate with are outstanding and highly respected by everyone, including me. The ones I am thinking about right now have put a huge amount of effort into helping me throughout various stages of my career. These traditional scientists, of course, are not Scientists 2.0, but they are fantastic scientists. I suspect, and in some cases know directly, that they would not approve of my open-science practices. So, I don't know how to deal with this external pressure towards closed science. The "what the fuck" strategy seems so disrespectful to people who've put energy into my career. But the "try to convince them" strategy is futile. "Showing them the way" will work...but at the risk of looking like "what the fuck" along the way and angering them. If we do get scooped, my students and I will be OK. But our mentors may never forgive us?

OK, I'm going to stop now...those are the challenges that are really bothering me this weekend.

Monday, January 26, 2009

Update on Our New Open Science Activities

I think it's been about a month since I started these blogs and joined FriendFeed. It's been a whirlwind, really. I've e-met dozens of scientists around the globe in that short time, most of them much farther along than I and my lab are in terms of open science. The community of scientists out "here" is incredibly welcoming and helpful, and I want to send a thank you out to those of you who read this post.

It's also been a time of huge change in terms of our lab's open science activities. We have started a lot of new open activities, and I thought I would make a list of them here. I'll only list those things that are new since mid-December, and I think it's quite a lot.

Blogging

Of course, I started blogging. We also started a lab blog for me and our lab members to contribute to. So far, only Anthony and I have contributed to it, but that will evolve over the year, I think. So far for me, blogging is a treat, and I've been able to rationalize doing it based on potential synergies with the activities I'm supposed to be doing :)

Posting grants in public


We started posting our grants on Scribd. I chose this site based on advice from Jean-Claude Bradley and Cameron Neylon. So far, I've liked it as a place for sharing grants and other documents, and it seems to work well. As an example, here is the grant we submitted last week. Posting grants has been really helpful so far, and I expect it to continue to be. We've received helpful comments from a couple of people and also made some science connections because of it. For example, Cameron and I realized we have a lot of science interests in common!

Paper preprint

A big step we took is that we drafted the first paper out of our lab and posted it on Nature Precedings. Larry Herskowitz is the lead author. We posted it a week ago, and we immediately received very helpful comments, questions, and suggestions from Richard Yeh. We're using OpenWetWare to talk about the paper with Richard and anyone else who wants to join. My opinion right now is that OpenWetWare is a better place for these kinds of detailed lists of questions and suggestions, because we can easily break the discussion into different topics, create sub-pages, and post supporting data, figures, etc. (In contrast, we found that trying to write the paper itself on a wiki just did not work for us at all.)

Open research projects

We have also taken some big steps towards "open notebook science" in the past month. We've been using a private wiki for about two years now, hosted by OpenWetWare. As I understand it, providing us with a private wiki was part of an experiment to see if it could draw in more users and lead them towards open science. You can't scientifically extrapolate from our single experience, but then again, you don't have to approach it scientifically...so, my opinion is that providing the private wiki worked out beautifully for OWW's mission. I think they should continue to provide private wikis, including for select new users on a trial basis. It's quite possible (though impossible to prove) that none of the open science activities I'm describing in this post would have been started had Jason Kelly not offered me the private wiki two years ago. Thank you Jason & all the OWW founders! I'd also like to thank Bill Flanagan, who has helped me tremendously in many areas of the public and private wikis.

Having warmed up on the private wiki, many of the students in my lab have begun taking open notebook steps in the past few weeks. You can find links to these in our open-research projects section. Anthony Salvagno has started doing real open-notebook science, keeping his daily notes on OWW using the Lab Notebook system that Ricardo and others developed. Anthony is about to start learning molecular biology in our collaborator's lab, with guidance from Kelly Trujillo. That lab is not accustomed to e-notebooks, so it's going to be really tough for Anthony to avoid being pushed back to a paper notebook. We'll see how it goes; I'm hoping he can show them the way!

Caleb Morse is embarking on some MediaWiki projects, and we're trying to do our communication via OWW. There are many interesting things he might pursue this semester, many of them improvements to OWW and/or MediaWiki that could make conducting open research much easier. For example, he's currently working on modifying a MediaWiki extension that uses cookies to prevent data loss when the browser crashes or closes while you're editing a page. This would be a huge plus for OWW.
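To give a flavor of the idea, here is a minimal sketch of the concept in TypeScript. This is just my own illustration, not Caleb's actual extension code: it uses the browser's localStorage as a stand-in for the cookie approach, and the function and key names are made up. The only MediaWiki-specific assumption is that the edit form's textarea conventionally has the id wpTextbox1.

  // Minimal sketch (not the real extension): keep backing up the edit box
  // so a crash or accidental tab close doesn't destroy an in-progress edit.
  // localStorage is used here as a stand-in for cookies; names are invented.

  const EDIT_BOX_ID = "wpTextbox1"; // MediaWiki's conventional edit-textarea id
  const BACKUP_KEY = `editBackup:${window.location.pathname}`; // one backup per page

  function installEditBackup(): void {
    const box = document.getElementById(EDIT_BOX_ID) as HTMLTextAreaElement | null;
    if (!box) {
      return; // not on an edit page
    }

    // If a backup survives from a previous (crashed) session, offer to restore it.
    const saved = window.localStorage.getItem(BACKUP_KEY);
    if (saved !== null && saved !== box.value) {
      if (window.confirm("Unsaved edit text found from a previous session. Restore it?")) {
        box.value = saved;
      }
    }

    // Stash a copy every few seconds while the user is editing.
    window.setInterval(() => {
      window.localStorage.setItem(BACKUP_KEY, box.value);
    }, 5000);

    // On a normal save or preview, the backup is no longer needed.
    box.form?.addEventListener("submit", () => {
      window.localStorage.removeItem(BACKUP_KEY);
    });
  }

  installEditBackup();

However it ends up being implemented, the core trick is the same: keep a recent copy of the unsaved text somewhere that survives a browser crash, and offer it back to the editor when they return to the page.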

Finally, Andy Maloney joined our lab in October and has learned to use the wiki very quickly. He recently took his first leap into the public wiki by posting his incredible instructions on how to build a laser diode control system from OEM parts. I'm also pushing him to post some of his earlier research accomplishments on OWW, including a custom microscope he built for imaging ultrasonic fields via sonoluminescence. His Google SketchUp drawings and fly-by animations of the thing are amazing, and I want you all to see them!

Open teaching

I've also started posting teaching material on Scribd. I'm trying to be careful about copyrighted material, so I'm not sure whether I can keep that up. One of the things I try to do after lecturing is to "debrief" to help with next year's lecture. So, combining blogging with Scribd is a good way to do that.

Back Pat

Looking back over that list of new things we started doing only recently has made me feel great about our lab. Obviously that rate of science "opening" can't continue. But I really do think we'll be able to keep up most of the things we've started, and I'm excited about that. On our private wiki, we use a template that Anthony wrote for giving yourself a pat on the back. You just add {{BP}} to a page to use the template, and then you get an electronic pat on the back and feel good about yourself. Or at least some of us do. So, I'm going to put a {{BP}} on this article, and it's for me and all the students in our lab for these accomplishments. It's also a {{BP}} for all of the scientists I've been talking to recently who have helped us take all these steps. They've provided very valuable advice and examples about how to do it, as well as encouragement and feedback for the steps we've taken. Thank you!

Friday, January 16, 2009

Talents in the Lab

So, I just got back from a vacation with no internet access for a week! Ugh. I think some people find getting away from it all rejuvenating, but that is not me at all. In fact, this would be a complete non-talent for me. "Non-talent" is terminology from a book that I re-read while on vacation: "First Break All the Rules..." by Buckingham and Coffman. I first read this book several years ago, when I was immersed in the misery of being mismanaged. The concepts in this book are not complicated and to me even seem obvious (now that I've thought about them), but it still seems to be true that most managers ignore them. Re-reading the book last week was even more illuminating, now that I've had a couple of years of being a manager myself. I can now reflect on my own strengths and weaknesses as a lab manager and plan small changes that may have a big effect on outcomes for our students and the science we're doing.

One of my talents is the ability to read these kinds of management and leadership books without getting too hung up on the fact that they're not perfect science. Though I do like this one especially, because it is founded on a whole bunch of research (the authors are out of Gallup Consulting) and objective analysis. Via hundreds of thousands of interviews with employees and managers across all types of industries, they tried to determine the common qualities of managers whose groups far outperform the others. In other words, they tried to find out what great managers do differently than good and bad managers. The main points they found ring very true for me, and really, THE main point is that great managers recognize the following things:

  • By the time people are grownups, their brains have been "wired" in unique ways because of their genetics and their experiences growing up. These genetics and experiences produce a set of talents and "non-talents" for any individual. The authors refer to this as the unique "filter" each individual uses for their everyday experiences. The key point is that these talents are not teachable to grownups and great managers recognize this. Skills and knowledge contrast with talents, in that they are teachable. Learning skills and knowledge is easy for someone with underlying talent in that area. Learning skills and knowledge without the underlying talent is a constant uphill battle.
  • In order to succeed and be happy in a particular job, a person needs certain talents. Great managers figure out what those talents are, and try to assess those talents when hiring people. Contrast this with the system most of us are familiar with, where people are assessed based on resumes and interviews which focus on skills and knowledge. Determining what talents a job requires is not easy. It's even more difficult for a manager to assess someone's talents. And possibly, for many people, the most difficult thing is for an individual to assess their own personal talents--I know it is very difficult for me.
  • A great manager spends time helping his people discover their own talents and helps them make career decisions based on those talents.

I think reading this book in graduate school was the first time I consciously considered that talents exist for things besides athletics, music, and acting. I easily accepted, for example, that professional musicians had innate, "hard-wired" abilities that enabled them to enjoy the hard work it took them to achieve that kind of excellence. I knew I didn't have those talents and didn't entertain any notion that I could just "work really hard" to become a professional musician or athlete. But I don't think I ever considered the vast array of other ways people could be talented, or non-talented, and I think I probably subscribed to the popular notion that I could succeed at any kind of job I landed just by forcing myself to work hard. And if I was failing, it was my own fault for not working hard or smartly.

While considering that last paragraph, I think it's not quite correct. Back then, I probably did recognize the existence of many talents, but I had not sensibly defined them. For example, I may have thought I had a "talent for science," since from 1st grade through graduate school I had received good grades and succeeded in science "things." Therefore, I would have deduced that I have a talent for any kind of career related to science: graduate student, professor, R&D, science writer, elementary school science teacher, science policy advisor, etc. In fact, I think I entertained the notion of all of those careers at some point. I probably made the reasonable step of considering whether I'd like those careers, but I did not even come close to considering that it was my own talents and non-talents that would determine whether those careers would be thrilling or miserable.

Ever since reading that book, I've been wondering about my own talents. Ironically, I'm untalented at discovering my own talents. I suppose some people are very talented at this. When I preach to my students about this topic, I'm often asked, "What are your talents?" I know that I have many talents, and some I know specifically. For example, I know that I absolutely love computer programming. I can work on data analysis applications for 16 hours straight, days at a time, and love it. This is a talent for me, and considering that in 3rd grade I was spending 10 hours a day playing around with BASIC on my Commodore 64, it is not a surprising one. But it's one I don't get to use very much anymore, due to my career choice. I also know that I have a lot of non-talents. I think it's just as important to discover these, and for me they are easier to find. I'm still not quite sure what it's called, but I lack at least one talent that would be required for easy writing of scientific papers and grant applications. In contrast, I find these blog posts fun and fairly easy to spew out; I enjoy this kind of writing, and I probably have some kind of talent that is being used by blogging. But there is something about the precision or brevity or efficiency or whatever of formal papers, and in particular grant writing, that gives me serious writer's block. I have been writing grants for two years now, and it is always unpleasant and very difficult. I feel like I produce good documents, but it is very far from easy.

That last point is the key: it's not easy. (And I don't enjoy most parts of it.) This is a great way of discovering talents. In the book, they cite a manager (anonymously, unfortunately) who developed the "Sunday Night Blues Test." He asked his employees to stop and think on a Sunday night whether they were happy the weekend was over, or whether they were a little depressed. (Assuming a five day work week.) The employees then were to consider what specific things they had planned to do the next day. Their level of happiness / unhappiness about their activities the next day would be a way of understanding what talents or non-talents they possessed. I like this test, and it's helped me quite a bit in assessing myself.

I've used a variation of the test in the courses I've taught, in the hopes that my students will learn something about themselves far earlier than I ever did. On the last day of class in each of the four semesters I've taught, I've presented them with this last ungraded homework assignment. You can view it on this OpenWetWare page. I ask them to reflect back on the semester that's just ended and to ask themselves which courses they're most sad to see end and which they are elated to be done with. I ask them to think about specific assignments that were fun and others that were dreaded. I don't have any kind of evidence, but I feel like there's enough variety in the things students are asked to do that they may be able to discover talents and non-talents this way. A few of my students "turn in" this assignment via email or WebCT, and I always find it fascinating and pleasurable to read what they have to say. Usually these are students I've come to know a bit, so I can give them a little feedback on it too, which I enjoy. (This probably indicates a talent I have for getting true pleasure out of students' successes.)

I realize this blog is getting long. I think I did say above that brevity is a non-talent of mine. I'm considering breaking this into two posts, but instead, I think I'm going to leave it as one post glued together by this added paragraph.

So, as I mentioned above, I re-read this book a couple of weeks ago, and I must have marked up every other page and written down several dozen ideas it gave me for how to better teach, manage our lab, and be a better person. One of those ideas, which I've followed through on, is to work on identifying the talents of the students in our lab. I have a few reasons for wanting to do this. The most important reason is that I want to maximize the success of every student who comes through our lab. I really believe that the more they can understand about themselves and their talents, the happier and more successful they will be in their next career step. The next reason is that I can manage the lab much more effectively if I know what talents and non-talents my students have. I'm not sure I have the talent to do this, but I am sure that it can't hurt for me to know more.

Believe it or not, I actually had a "talents" meeting with all three of my graduate students this week...between 2 and 3 hours with each person. I even went so far as to use the interviewing questions from the book. This was really cheesy, but my students trust me enough to have followed through honestly with the process. The questions are designed to reveal talents. I left the book in my office, so I can't quote any of them directly now, but some of the questions I found most revealing were:

  • What keeps you working here? (in the lab)
  • What is the best kind of praise you have ever received? What made it so good?
  • What is a productive partnership or mentorship you've had? Why did it work so well?
  • What are your current goals and what is your timeline for achieving them?
  • How often do you want to meet with me to discuss your progress?

All the questions are good, but those are coming to mind now as having elicited responses that pointed towards talents or non-talents.

I don't want to get into any specific results here, because my students and I haven't really discussed yet whether this process will be open or not. Actually, what I'm hoping is that through this continuing dialog, it will become a habit of our lab to point out obvious talents and even non-talents to each other. I think we have a lot of respect and trust in each other, so it's likely we can achieve this kind of productive openness. In any case, without being too specific, I can tell you that I was really surprised at how much I learned from these meetings, even though I already expected them to be productive. Again, I think I am lucky to have very good and trusting students, so our dialog was very open. In addition to the questions from the book, I also asked each student to talk about their most productive time(s) in the lab so far. (Another variation on the Sunday Night Blues Test.) I found this incredibly useful.

One of the most humorous, surprising, and potentially useful results was what I learned about my students' and my own talent for competitiveness. If someone has this talent, they are driven to compete against and beat other people. A different kind of talent is a need for achievement. This is different because it is not relative to other people; it's an internal measuring stick and a desire to constantly improve. It turns out that I and one of my students have a strong competitive streak. This didn't surprise me too much. What did surprise me was that the other students do not have this quality. One common flaw of managers is to follow the golden rule of treating others the way you'd like to be treated. I have this flaw too. Up until this week, I think I'd pretty much assumed that everyone was competitive. But what I discovered is that that's completely not true. I had also downplayed my own competitiveness, and I now realize it's an important part of me and my motivations. So, what can I do with this information? I'm not sure, but given how surprising some of it was, I don't see how it can make me a worse manager. Just a simple example: if you try to motivate an achiever and a competitor by having them compete against each other, it's not going to work well. I don't think I've tried to do that, but as a manager I'm always doing something, whether or not I'm trying to.

I'm going to try to wrap up this wandering post now. If you have a talent for reading management or self-help books, I strongly recommend you read the book I've linked above, "First Break All the Rules." If you don't have that talent, I still recommend trying out different ways of discovering your own talents. During the few weeks I've been on FriendFeed, I've already seen a few people making or considering career moves. I think talents are the number one thing that will determine whether those moves produce success and happiness. A perfect example is the ineffective way in which university faculty are chosen. Successful graduate students and postdocs are evaluated for professorships based on their achievements, skills, and knowledge as researchers in the lab. The job, however, is not at all similar: I am now a manager, teacher, grant-writer, leader, and I don't know what else...but I'm rarely, if ever, doing research in the lab. In light of the selection process (and the utter lack of training), it's not surprising that there is so much struggle in this career path. Luckily, I think I do have enough of the talents required for my job, and I can ultimately succeed.
 
This work is licensed under a Creative Commons Attribution-Share Alike 3.0 Unported License.