RE: productivity standards (long)
From: "Morken, Tim" <tim9@cdc.gov>
Bert wrote:
<I disagree that simply looking at labor utilization versus workload may be
used for staffing decisions because such a decision making process invites
technicians and supervisors to "game" the system.>
If you count minutes to show workload I guess people could claim more time
spent than actually was used. That's why, for staffing justification, I
always used only the hard numbers of work produced by the lab (never by
individual tech). I kept databases of all countable work done - blocks made,
H&E slides, IHC slides, unstained slides, and even wasted cuts that were
thrown out (they did work to make those too!), whatever could be counted.
That sort of data cannot be argued with, and when I can show that we did
30 percent more work with the same number of people, it is much easier to ask
for more positions, especially if you can document yearly increases with no
plateau in sight.
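The counting approach described above can be sketched in a few lines. This is an illustrative toy, not Tim Morken's actual database; the categories and counts are hypothetical, chosen so the totals show the kind of 30 percent year-over-year increase he mentions.

```python
# Hypothetical sketch: tally countable lab output per year, then compute
# the year-over-year change used to justify staffing requests.
from collections import defaultdict

# (year, category) -> count of countable work items
work_log = defaultdict(int)

def record(year, category, count):
    """Record countable output: blocks, H&E slides, IHC slides, etc."""
    work_log[(year, category)] += count

# Made-up yearly totals for illustration
record(2000, "blocks", 18_000)
record(2000, "H&E slides", 22_000)
record(2001, "blocks", 23_400)
record(2001, "H&E slides", 28_600)

def yearly_total(year):
    """Sum every countable category for one year."""
    return sum(n for (y, _), n in work_log.items() if y == year)

increase = (yearly_total(2001) - yearly_total(2000)) / yearly_total(2000)
print(f"Workload change 2000 -> 2001: {increase:.0%}")  # prints "30%"
```

The point of keeping lab-level rather than per-tech counts, as the text argues, is that these totals are hard numbers nobody can inflate by working differently.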
From my experience, no tech is going to cut extra blocks or slides, or do
extra special stains, or whatever, to boost the lab output - they are busy
enough with the work requested!
As to whether it is worthwhile to do this, nobody is going to listen to
complaints about being overworked unless you can prove it, so in that sense,
doing this helps your lab, and helps the people working in your lab.
Tim Morken
Atlanta
-----Original Message-----
From: Bert Dotson [mailto:amdj@duke.edu]
Sent: Thursday, February 22, 2001 4:33 PM
To: Histonet (E-mail)
Subject: Re: productivity standards (long)
One must first ask the obvious questions: (1) what is the purpose of the
standards? (2) how will they be used? (3) how will performance relative to
the standards be measured?
There may be a number of purposes behind the decision to use some sort of
standard that I am unaware of but in my experience they really come down to
two. Standards are set up to measure laboratory performance or to measure
individual performance. The first instance should be less controversial so
I will tackle it first.
Measuring laboratory performance can be used for cost accounting,
setting staffing levels, benchmarking against peers/competitors or any
combination. In these instances, it is very important that the standard be
tied to some relatively universally acknowledged measurements or the data
are meaningless.
On this continent we have two such systems that are up-to-date and
widely used and one that is hopelessly out-of-date and still frequently
used. The two current standards are the LMIP measurements from the CAP and
the Canadian workload measurement system. The hopeless one is the old CAP
workload values that are still used by some but reflect none of the
technological and market-place changes (automation, wider availability of
ready-made reagents, shift from pathologist gross to technician or PA
gross...) that have occurred over the past 10 years. The LMIP is a
subscription system that carries a healthy price-tag but roughly measures
the amount of output (slides) that can be expected from an average FTE over
a period of time. This works out to about 1000 slides per FTE per month
inclusive of a certain percentage of special stains etc. The Canadian
system assigns a number of units of effort (minutes) to each technical
task. The numbers of each task performed are then counted up and multiplied
by the unit and you have an approximation of the level of effort required
for a given lab. This system is much more versatile for cost accounting and
for laboratories that perform a lot of specialty tests or have a
significantly different mix of workload from those in the LMIP (all
research labs). The Canadian value to produce one finished H&E slide from
one cut and blocked tissue (but not processed) is nine minutes.
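The Canadian-style calculation described above can be sketched as follows. The 9-minute value for an H&E slide comes from the text; the other unit values, the task counts, and the productive-hours assumptions are made up for illustration.

```python
# Hypothetical sketch of a unit-value workload calculation: each task
# carries a unit value in minutes; multiply counts by units and sum to
# approximate the level of effort required for a given lab.
UNIT_MINUTES = {
    "H&E slide": 9,       # from the text: one finished H&E from cut, blocked (unprocessed) tissue
    "special stain": 15,  # hypothetical unit value
    "IHC slide": 25,      # hypothetical unit value
}

def workload_minutes(task_counts):
    """Approximate total effort, in minutes, for a set of task counts."""
    return sum(UNIT_MINUTES[task] * n for task, n in task_counts.items())

# A made-up month of work
month = {"H&E slide": 4000, "special stain": 300, "IHC slide": 200}
minutes = workload_minutes(month)

# Rough FTE conversion, assuming ~7 productive hours/day, 21 workdays/month
ftes = minutes / (7 * 60 * 21)
print(f"{minutes} unit-minutes is roughly {ftes:.1f} FTEs")
```

This is what makes the unit system versatile for cost accounting: any specialty test can be added by assigning it its own unit value, rather than hoping it resembles the average slide mix behind a single slides-per-FTE figure.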
No system can be perfect and there will be justifiable variations from
lab to lab. Canadian standards are based on actual time-studies in
laboratories and contain small increments of overhead activities and
step-function costs such as supervision and processor changing. Larger
laboratories can perform better because they spread these activities over a
larger specimen volume. LMIP is simply not suitable for research settings
because it is based on actual clinical laboratories performance over longer
time-spans (thus accounting for some fluctuations in daily workload). So as
Tim Morken pointed out, these measures must be put into the context of the
specific lab and its performance versus these measures over time. I
disagree that simply looking at labor utilization versus workload may be
used for staffing decisions because such a decision making process invites
technicians and supervisors to "game" the system. Academia and the
government are notorious havens for such "gaming behavior."
The use of "standards" for assessing individual performance is a real
problem. A government study (post office or census, I can't find the
reference) in the '50s examined the performance of card punch operators
when given various performance expectations. The study found that those
that were given specific expectations in terms of the number of cards to be
processed seldom reached the expectations (regardless of the actual number)
and experienced significant stress when they did so. Those who were not
given expectations soon surpassed the productivity of those that were and
experienced no stress--go figure.
If you must provide quantitative measures of employee productivity (as I
must, thanks to policies beyond my control), be VERY careful. From almost six
years of experience with this I can tell you there is no good way to do
it. The best method I have used is a multivariable regression that allowed
me to identify individuals whose presence in the lab significantly impacted
total laboratory productivity. The powers that be found that method too
complicated (they didn't understand it). Currently each individual keeps
track of the number of blocks they cut and embed and these are monitored
over long periods of time to control for differences in daily assignments
and workloads. This is less than adequate but it does tend to give a more
realistic picture and stifle some of the negative behaviors.
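The regression idea mentioned above can be sketched like this. Bert's actual model is not given, so this is only one plausible reading: regress total daily lab output on 0/1 indicators of which techs were present, so each coefficient approximates that tech's marginal contribution. All data here are fabricated.

```python
# A minimal sketch of a multivariable regression that flags individuals
# whose presence significantly impacts total laboratory productivity.
import numpy as np

rng = np.random.default_rng(0)
techs = ["A", "B", "C"]
true_rates = np.array([60.0, 55.0, 30.0])  # blocks/day each tech adds (fabricated)

days = 200
# 0/1 matrix: which techs were present on each day
present = rng.integers(0, 2, size=(days, len(techs)))
# Daily lab output = sum of present techs' contributions plus noise
output = present @ true_rates + rng.normal(0, 5, size=days)

# Least-squares fit: each coefficient estimates one tech's contribution
coef, *_ = np.linalg.lstsq(present.astype(float), output, rcond=None)
for name, c in zip(techs, coef):
    print(f"tech {name}: ~{c:.0f} blocks/day contribution")
```

Over long periods this kind of model controls for differing daily assignments (tech C's low coefficient stands out even though no one counted C's blocks directly), which may be why the simpler per-person block counts had to be monitored over long spans to give a realistic picture.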
I can provide a few pointers for those establishing a new system or
modifying an old one:
Do Not set a standard of "X blocks per hour." There is no way to properly
monitor this unless you intend to stand over the techs with a stopwatch
and an abacus all the time. If you only monitor sometimes then performance
when you monitor will be significantly different from when you don't. You
will be measuring ability and not productivity. I disagree to some extent
with the statement someone made that some techs are more talented. Many
poor performing techs are capable of cutting at rates close to those of the
better techs. They just don't.
Do Not count blocks over short periods of time (days or weeks). You do not
have an unlimited supply of blocks. Those one person cuts are those another
person doesn't. You will create block hogs.
Do have behavior and quality measures in addition to productivity. If
productivity is the primary basis for deciding compensation or continued
employment, poor quality and counter-productive behaviors will abound.
I've tried to be brief but given the amount of time I have wrestled with
this issue it is difficult. And it is silly. Everyone in the lab knows who
is not pulling their weight--just introduce peer evaluation and avoid all
this busy work.
Bert Dotson