Part 6: Experimental work
...or modelling, as the case may be. And I'm not talking about the "make love to ze camera" kind of modelling, but rather the application of mathematical methods to extract and predict trends. Most likely, your Ph.D. is going to entail massive amounts of experimental work, computer modelling, or a linear combination thereof. Moreover, proper planning and execution of lab work will probably be crucial in your future job, so you might as well get good at it. While there are many ways of becoming a good experimentalist, I personally consider the following to be of utmost importance:
Don't take any shortcuts
The quality of your data is directly correlated to the quality of any publications or presentations derived from your work. Without rigorous data collection, you've got nothing. This also means reproduction, as a single data point or a trend means very little unless it's reproducible. To paraphrase my former co-advisor, there's a reason they call it re-search and not just search. If at all possible, you should also try to confirm your results via a complementary technique.
Statistical methods are your friends
Never exclude data points unless you are certain they are outliers, which means either that you know something went wrong with that particular experiment, or (more likely) that you've performed the proper statistical tests to ascertain whether said data point actually belongs with the rest of the set.
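To make that concrete, here's a minimal sketch of one such test: a two-sided Grubbs' test for a single suspect point. Which test is appropriate depends entirely on your data (Grubbs assumes the measurements are roughly normally distributed), and numpy/scipy are simply my hypothetical tool choices here, not a prescription.

```python
# A minimal sketch of a two-sided Grubbs' test for one suspected outlier.
# Assumes approximately normally distributed data -- check that first.
import numpy as np
from scipy import stats

def grubbs_test(values, alpha=0.05):
    """Is the point farthest from the mean a statistical outlier?"""
    x = np.asarray(values, dtype=float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)
    idx = np.argmax(np.abs(x - mean))            # most extreme point
    g = abs(x[idx] - mean) / sd                  # test statistic
    # Critical value from the t-distribution at significance level alpha.
    t_crit = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))
    return bool(g > g_crit), float(x[idx])

# Five hypothetical replicate measurements; the last one looks suspicious.
replicates = [4.98, 5.02, 5.01, 4.99, 6.30]
print(grubbs_test(replicates))  # (True, 6.3) -> exclusion is defensible
```

The point isn't this particular test; it's that "that point looks weird" becomes "that point fails a significance test at alpha = 0.05", which is something you can actually defend in a paper or at a defence.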
Know Thy Experimental Techniques
Never, ever, ever, ever just use black box technology. ALWAYS know the working principle of the experimental techniques you use. Each and every technique comes with its own set of possibilities and limitations, and this counts double for mathematical models. Don't be the guy (or gal) who accepts instrument output as truth without any form of scrutiny. In principle, you could be using a metal detector to look for unicorns in your sock drawer and never see the problem unless you really know your instruments. As instrumentation gets more advanced and data collection becomes increasingly automated, more data can be collected within a short period of time. Which is awesome. However, this places more responsibility on the individual researcher, both with respect to data treatment and with respect to trusting the instrument output. 'Cause the instrument is gonna spit out a number or a data matrix no matter what; whether it means anything, and if so what, is for you to assess.
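In practice, the first line of defence is embarrassingly simple: sanity-check the raw output before you analyse anything. Below is a minimal sketch of what I mean; the file name, the expected range, and the specific checks are all hypothetical and should be adapted to whatever your own instrument actually exports.

```python
# A minimal sketch of pre-analysis sanity checks on a raw data matrix.
# File name and expected_range are hypothetical placeholders.
import numpy as np

def sanity_check(data, expected_range=(0.0, 1e6)):
    """Flag obvious red flags in raw instrument output before any analysis."""
    problems = []
    if np.isnan(data).any():
        problems.append(f"{int(np.isnan(data).sum())} NaN entries")
    if np.isinf(data).any():
        problems.append("infinite values present")
    lo, hi = expected_range
    out_of_range = (data < lo) | (data > hi)
    if out_of_range.any():
        problems.append(f"{int(out_of_range.sum())} values outside [{lo}, {hi}]")
    # A column stuck at one value often means a dead channel or a saturated detector.
    if data.ndim == 2 and (data.std(axis=0) == 0).any():
        problems.append("at least one constant (possibly dead) column")
    return problems

raw = np.loadtxt("instrument_export.csv", delimiter=",")  # hypothetical export
for issue in sanity_check(raw):
    print("WARNING:", issue)
```

None of this tells you the data are *right*, of course, but it catches the cases where the instrument was happily spitting out garbage while you weren't looking.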
Plan your data collection
Walk into the lab with a hypothesis and a planned set of experiments to put it to the test. Look at your data as you go, and change the experimental set-up according to the outcome. Data collection is an iterative process, and a plan of research is just that: a plan, not a set of commandments. Start the data collection with a goal in mind.
Since I started grad school, I've never started an experiment unless I was reasonably certain the data would end up in a publication, and I'm also inclined to plan the research with a specific journal in mind. This has worked out pretty well for me, but there have also been cases where it failed spectacularly. Most notably, experience has taught me that if a study is expected to a) be quick and b) have a predictable outcome, it's very likely to be neither.