I recently came across a piece on fisheries management—more specifically,
Gulf of Maine cod management—that was written by a former National Marine
Fisheries Service biologist, who spent a decade of his life as the lead
scientist performing the assessment of Gulf of Maine cod, which might just be
one of the most frustrating and thankless jobs on the East Coast.
I have been involved in the fisheries management process and
conservation advocacy for a very long time, and I’m not sure that I have ever
before read anything that engaged me in the same way that “Bankers’ Hours to
Bankruptcy” did. It speaks with the voice of someone who labored within the
management system for many years, someone who cared very much about getting
things right for both fish and fishermen, and it touches on just about every
aspect of why fisheries management efforts, and particularly efforts to manage
cod, so often go wrong.
He begins by describing how surface appearances can often
be deceiving.
“Boats [from Massachusetts and New Hampshire] were sailing at
reasonable hours, towing close to home, coming back with what, on the surface,
looked like solid trips. The joke among
Gloucester fishermen was that cod fishing had turned into “bankers’ hours”;
no more brutal all-nighters chasing scattered fish over the horizon. Cod seemed thick near the western Gulf of
Maine ports, and for a little while the mood—if not jubilant—was at least
cautiously hopeful…
“In 2008, a federal stock assessment…concluded that Gulf of
Maine cod had rebuilt to about 58 percent of its target spawning biomass, with
projections that the stock might be fully rebuilt by 2010. After decades of decline and increasingly
strict regulations, it was the storyline everyone wanted: sacrifice, recovery,
vindication.”
But there was a problem.
Many of the assumptions underlying the 2008 assessment were too
optimistic. Thus, when Michael Palmer
became the lead scientist for the 2011 stock assessment, those
assumptions were revisited.
“Much of what we changed would have sounded like housekeeping
to anyone outside the room. We fixed how
we converted between estimated fish numbers and weights—reshaping our picture
of how much cod biomass we thought was out there. We stopped pretending every survey number was
equally solid; some estimates were clearly noisier than others, so the model
let them tug less on the final answer.
And we gave the model a bit more room in how it followed the catch
history. On paper, it was just a
different way of reading the same history—in practice, it was better science.”
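To make one of those “housekeeping” changes concrete, the idea that noisier survey estimates should “tug less on the final answer,” here is a minimal sketch of inverse-variance weighting in Python. It is purely illustrative; the numbers and the combine_indices function are hypothetical and are not drawn from the actual Gulf of Maine cod assessment model.

```python
# Minimal illustrative sketch (not the actual assessment model): combine
# several survey abundance indices, weighting each by the inverse of its
# variance so that noisier estimates pull less on the final answer.

def combine_indices(indices, variances):
    """Return the inverse-variance-weighted mean of the survey indices."""
    if not indices or len(indices) != len(variances):
        raise ValueError("need exactly one variance per index")
    weights = [1.0 / v for v in variances]
    weighted_sum = sum(w * x for w, x in zip(weights, indices))
    return weighted_sum / sum(weights)

# Hypothetical example: two fairly precise surveys and one very noisy one.
indices = [42.0, 40.0, 90.0]    # relative abundance estimates
variances = [4.0, 5.0, 400.0]   # the third estimate is far less certain

print(round(combine_indices(indices, variances), 1))  # about 41.4
# A simple average would be about 57.3; the weighted combination stays near
# the two precise surveys because the noisy one carries very little weight.
```

Real assessments handle this inside a statistical catch-at-age framework rather than as a simple weighted mean, but the weighting intuition is the same.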
Such changes occur in many stock assessments, not
just the one for Gulf of Maine cod, and they are largely unseen by the public. The
recent Atlantic menhaden stock assessment, which corrected a previous error in
the calculation of natural mortality and, as a result, reduced the estimated size
of the menhaden stock by about 37 percent, was a well-publicized exception.
The changes included in the 2011 assessment reduced the
estimated size of the Gulf of Maine cod stock, too, and by a far greater percentage,
for
“once the new model was fully wired up and the data was
pushed through, the stock we thought was more than halfway rebuilt suddenly shrank. Cohorts we’d been counting on all but
vanished, and historical biomass estimates fell by more than 70 percent. The recovery narrative that had been built
over the previous decade—sacrifice, rebound, vindication—collapsed in a few
pages of output.”
At that point, the scientists had done their job. They had improved the model used to estimate
Gulf of Maine cod abundance, and they had developed a more accurate stock
assessment as a result. But what the
scientists couldn’t do, what no fisheries scientist can do, regardless of the
species involved, was translate the stock assessment into effective
regulations. That job falls to the regional
fishery management councils, to NMFS and, in the case of many species (but not
in the case of cod), to the Atlantic States Marine Fisheries Commission and/or
state regulators.
And the regulatory folks weren’t very happy with the results
of the 2011 stock assessment.
“My lane in all this was narrow but well defined: assemble
and vet the data, choose and run the models, and explain what the results did
and didn’t mean. I didn’t vote on
quotas; I handed managers the best picture we could produce, uncertainty and
all, and they decided what to do with it.
“The people holding the levers of management didn’t like what
they saw. Neither did much of the
industry. The assessment was criticized
from every angle—data inputs, model choice and structure, reference
points. Under that pressure, the big
cuts implied by the 2011 results were softened and delayed, and instead of
fully acting on them, the system asked for a do-over.”
Because there’s a funny thing about stock assessments: If an assessment comes out that requires a
reduction in landings—and often, the reduction doesn’t have to be all that
large—we hear members of the commercial, for-hire, and, more and more in recent
years, the recreational fishing industries complain about “bad science,” and
allege that “the numbers are wrong,” but if the assessment allows landings to
increase, no one questions the science or the data at all.
It's all good if it allows them to bring home more fish, and
all bad if it restricts their landings.
And there is rarely much discussion of what would make the most
sense from a policy or management perspective.
And then there are the politicians who get involved. In the case of Gulf of Maine cod, Michael
Palmer wrote,
“Years earlier, Congress had written the law that said we
would base catch limits on science and rebuild depleted stocks. We were just doing the work the statute
required. But when the results pointed toward
painful cuts, some of the same elected officials who had helped pass that
framework into law turned around and attacked the science and the policies that
flowed from it.
“As Senator John Kerry wrote to the Secretary of Commerce on
December 14, 2011: ‘This GOM cod situation is further proof that the entire
research and data process needs to be completely overhauled. Therefore, in conjunction with the new
assessment for GOM cod, I ask that you undertake an end to end review of the
stock assessment process that includes the analysis and recommendation of outside
parties.’”
That was not a unique occasion. It is routine for politicians, who might
have, at best, a rudimentary understanding of fisheries management, to try to
undercut the fishery management process and impeach fisheries scientists just
so their constituents can kill more fish than science or good judgment would
allow.
That sort of political interference may have reached its
peak in the recreational red snapper fisheries of the Gulf of Mexico and the
South Atlantic. A decade ago, former Congressman Garret Graves (R-LA) introduced
H.R. 3094, the Gulf States Red Snapper Management Authority Act, after “anglers’
rights” organizations headquartered in the region actively opposed the
science-based measures needed to rebuild the red snapper stock. Although never
passed, the bill would have stripped NMFS of its authority to manage Gulf of
Mexico red snapper and vested that authority in a new management body composed
of fisheries managers from the five Gulf states.
In both regions, the goal has been the same: to block science-based
efforts to manage the recreational red snapper fishery.
Not surprisingly, in the case of Gulf of Maine cod, the
fishing industry worked hard to impeach the science.
“For years, some in the industry argued that the surveys were
simply missing cod. Their skepticism was
understandable. If you can still fill
your hold in your best spots but the survey index is falling, it’s tempting—almost
irresistible—to believe the survey must be wrong.
“And there were, to be fair, plenty of technical questions to
point to. The survey trawls weren’t the
same as commercial gear. Their doors
spread differently; their nets fished a little higher or lower; their tows were
shorter, slower, more standardized…
“Those were real, worthwhile scientific questions. The problem is how they were used.
“A small but influential set of voices in the management
process—industry representatives, academic consultants, and a few advisors—leaned
hard on those uncertainties. They
highlighted every potential bias that might make the surveys look too
pessimistic and treated them as proof that the stock was healthier than the
assessments suggested. Questions about
gear efficiency, selectivity, calibration coefficients, and survey design
became a kind of fog. Whether
intentionally or not, the effect was to keep attention focused on what might be
wrong with the warning lights, rather than on the very real possibility that
the engine itself was failing.”
When merely questioning the methodologies used in the cod
assessment failed to impeach its conclusions, the fishing industry went a step
further.
“When official assessments warned that cod were in deep
trouble, segments of the industry increasingly responded by commissioning their
own analyses. Outside consultants—often respected
quantitative academic scientists—were hired to critique government models,
reanalyze data, or generate alternative population estimates.
“Sometimes those critiques caught real problems. No assessment is perfect; outside eyes can be
invaluable. But over time, a pattern
emerged that was hard to ignore: industry-funded science almost always bent in
one direction. It emphasized
uncertainties and alternate interpretations that could justify higher catches
or delay cuts, rarely the reverse.
“In public debates, phrases like ‘science for hire’
started to surface. In council meetings,
dueling narratives about stock status became weapons rather than tools.
“The erosion of precaution wasn’t abstract. You could see it in the model choices. Industry consultants often pressed for
strongly domed selectivity in the assessment models—telling the models
that mid-sized cod were easy to catch while the biggest, oldest fish mostly
slipped through. On paper, it turned the
absence of large fish into ‘cryptic biomass’ lurking just out of view…
“You could see it again in the population projections built
off those consultant runs. The
rebuilding deadline stayed the same on paper, but the bar for what counted as ‘rebuilt’
moved. By swapping in different
recruitment assumptions that said cod could hit peak production of young fish
at a smaller stock size, it made it easier to claim we were on track without
actually putting more cod in the water.
“From my seat at the science table, I watched the
uncertainties I saw as reasons for caution repurposed as excuses for
inaction. If surveys might be
missing cod, if models might be biased low, if a consultant could spin
up an alternative set of numbers with a higher biomass line—there was always an
argument for waiting one more year before making the really hard cuts.”
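To make the “domed selectivity” point from the passage above concrete, here is a minimal sketch comparing an asymptotic (logistic) selectivity curve with a dome-shaped one. The functions and parameters are hypothetical, chosen only for illustration, and are not taken from any actual cod assessment.

```python
import math

# Illustrative selectivity-at-length curves; all parameters are made up.

def logistic_selectivity(length_cm, l50=50.0, slope=0.2):
    """Asymptotic selectivity: the largest fish remain fully catchable."""
    return 1.0 / (1.0 + math.exp(-slope * (length_cm - l50)))

def domed_selectivity(length_cm, peak=65.0, width=15.0):
    """Dome-shaped selectivity: catchability falls off for the largest fish."""
    return math.exp(-((length_cm - peak) / width) ** 2)

for length in (40, 65, 90, 110):
    print(length,
          round(logistic_selectivity(length), 2),
          round(domed_selectivity(length), 2))

# At 110 cm the logistic curve still reports roughly 1.00 (fully catchable),
# while the domed curve reports roughly 0.00. Under the domed assumption, the
# absence of very large fish in the catch can be explained away as "cryptic
# biomass" the gear never sees, rather than as fish that are simply gone.
```

Which shape is biologically justified is a legitimate scientific question; the problem Palmer describes is that the domed assumption was pressed because of the answer it produced.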
And once again, such industry actions were not unique to
Gulf of Maine cod.
In
the Gulf of Mexico, there was the so-called Great Red Snapper Count, an effort
to impeach NMFS’ red snapper data through what was touted as an “independent”
study not conducted by federal fisheries scientists, although funded with about
ten million taxpayer dollars. The Count did find far more red snapper in the
Gulf than NMFS believed were there, primarily fish widely scattered over
low-profile bottom, where surveys didn’t expect the structure-loving snapper to
be. But when biologists considered that data, it didn’t lead to a large increase
in the total allowable catch, largely because of the high level of uncertainty
surrounding the Count’s findings. One conservation group, The Ocean Conservancy,
noted that
“Invited reviewers from the Center for Independent Experts,
who performed the first external peer review of the Great Red Snapper Count,
identified issues around methodology, calibration, sample sizes and uncertainty
that warrant further review, particularly given the magnitude of changes to red
snapper management being considered.”
“NOAA pledged to take the findings of [the Great Red Snapper
Count] and incorporate them into its next assessment of red snapper which was
scheduled to begin in 2021. While it
would have been reasonable to expect the results of the [Great Red Snapper Count]
to simply become the new benchmark, NOAA insisted that those findings would
have to be calibrated and synched up with the data streams and techniques it
had used in the past. The same data
streams and techniques that the [Great Red Snapper Count] had shown to be
inaccurate by a factor of at least three.”
Because in its efforts to impeach federal fisheries science,
the Coastal Conservation Association, which wants to see an increase in
recreational red snapper landings, naturally wants to see its preferred studies
prevail, regardless of the true accuracy of their conclusions.
Now, in the
South Atlantic, something called the South Atlantic Red Snapper Research
Program, utilizing scientists from various universities, is conducting a
similar “independent” study, which will almost certainly be used by various
recreational fishing organizations to challenge federal scientists’ findings IF
it develops data that seems more favorable to the recreational fishing
industry.
Perhaps the greatest tragedy to come out of the whole Gulf
of Maine cod affair wasn’t the failure to rebuild the cod stock, but rather
that the constant battle to develop and present the best possible science
ultimately wore down Michael Palmer and, despite his dedication to the effort,
convinced him to give up his scientific career.
“I never stopped believing in the work itself. I trusted the science, respected the skill
and hard-won knowledge of working fishermen, and believed in the colleagues in
the trenches with me—survey technicians, modelers, analysts, port samplers,
observers—doing their best to wrestle meaning from noisy data, not script a
convenient answer. What wore me down
wasn’t some grand conspiracy; it was seeing how, when uncomfortable results
landed, uncertainty could be amplified while what we did know slipped into the background. Support from senior leadership often felt
thin, and the hardest conversations fell to the people closest to the
work. In that kind of environment, the
science often felt like background noise instead of the basis for decisions.”
We can only surmise how many other scientists, dedicated to
the truth as Michael Palmer was, have chosen the same route rather than see
their work constantly derided by industry spokesmen who, seeking to put more
dead fish on the dock regardless of the long-term cost to the public and to the
resource, claim that the science-based federal fishery management system is “broken”
and needs to be replaced by state fishery managers. Such spokesmen know that
state managers are much more susceptible to political pressure and, unlike
federal fishery managers, are generally not legally bound to prevent
overfishing or to rebuild overfished stocks.
We can only guess how many stocks of fish (not only Gulf of
Maine cod, but winter flounder, striped bass, red snapper, and others) have fallen
victim to the sort of obstructionism that hindered the implementation of
effective, science-based rebuilding plans, and, as a result, became overfished
or, in the case of winter flounder, collapsed altogether.
Michael Palmer’s writing gives us a look into the real world
of the professional fisheries scientist, a world where scientists are castigated,
rather than rewarded, for doing their jobs well, and where science is too often
shunted aside when political and industry forces combine to suppress the science
that should be the guiding principle of fisheries management.
In this blog, I often make a special effort to recognize, and
to show respect for, the fisheries scientists who seek to rebuild and
maintain healthy fish stocks. Michael
Palmer’s story helps to explain why.