When I was learning proteomics, I grappled with the then-burgeoning field of proteome informatics. With the help of others, I installed the newer algorithms and started to get comfortable operating in a command window (Cygwin, to be exact). Then I met the hurdles of extracting different types of spectral data into a format for searching FASTA files. Sheesh… as an analytical chemist, it was enough to make my head spin. Fast-forwarding through a few years of post-doctoral fellowships, I got comfortable making box plots, running k-means clustering analyses, and producing other statistical calculations, graphs, and figures in the software package R. I was 'doing biology, however it needed to be done,' as C. Titus Brown blogged about this past week.
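To give a flavor of the kind of exploratory work I mean, here is a minimal sketch in R of a box plot and a k-means clustering step. It uses the built-in iris dataset purely as a stand-in for a table of quantified measurements; it is not my actual analysis code.

# Built-in example data standing in for quantified sample measurements
data(iris)

# Box plots of one measurement, grouped by sample class
boxplot(Sepal.Length ~ Species, data = iris,
        main = "Example box plot", ylab = "Measurement")

# k-means clustering on the numeric columns (k = 3 chosen arbitrarily here)
set.seed(42)                       # make the clustering reproducible
km <- kmeans(scale(iris[, 1:4]), centers = 3)
table(km$cluster, iris$Species)    # compare clusters to the known groups

Nothing fancy, but getting comfortable with even this much R was a real step up from point-and-click tools.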
Enter Sean Eddy's recent talk, "High Throughput Sequencing for Neuroscience" (transcript here). It's a great talk that raises several good points, though I won't go into all of them here. I do, however, want to highlight its most contentious claim: "Scripting is a fundamental lab skill, like pipetting." Here's a counterargument from Iddo Friedberg: "Why scripting is not as simple as… scripting."
All of these scholars make great arguments, but here's my take, or rather my addition to their discussion. Some of us know just enough to get into trouble (like me under a bathroom sink with a plumber's wrench), but all of us should know what we don't know. Whether it's choosing the workflow tool for your analysis, designing appropriate control experiments, knowing the limitations of a particular technology, or applying the correct statistical tests to determine significance, ask for help. It's OK not to know everything, learn everything, or do everything. Outsource to trusted parties. And finally, make every experiment count and never throw out any data.