There is no such thing as the typical angler. Some fish for food, some fish for fun, most seek a little of both. Some anglers fish only lures, some stick with bait, some use whatever works at the time. Some fish offshore. Some stay in the bays. Some fish despite rain and wind, some wait for fair weather.
Some…
But that’s enough. I hope I’ve already made the point that even anglers fishing in the same place can be very different; anglers fishing for different species, along different parts of the coast, are likely to be even less alike.
Yet despite all those
differences, there seems to be one common thread connecting them all. Just about no one has anything good to say
about the
Marine Recreational Information Program (MRIP), the survey that the National Marine Fisheries
Service (NMFS) uses to estimate recreational effort, catch, and landings.
The hostility toward MRIP is, in
many ways, understandable.
After all, MRIP is the tool that both
NMFS and the Atlantic States Marine Fisheries Commission use to set
recreational fishing regulations, and people usually dislike anything that
restricts or regulates their actions. Speed
limits, customs inspections, and MRIP all attract the same sort of
disdain.
That’s particularly obvious when
it comes to MRIP, because every time a bag limit or season is cut, or a size
limit upped, as the result of MRIP data, you hear all sorts of folks crying
that MRIP is flawed, that “the numbers are bad” or that, for some other reason, MRIP data is faulty and
should not be used. But when the
opposite happens, and the MRIP data allows managers to increase a bag limit,
lengthen a season, or let anglers take home smaller fish, you never hear a single angler complain
that such data is wrong and should not be believed, or that regulations should
remain more restrictive.
It is almost a cliché that "bad data" reduces landings, while "good numbers" let anglers kill more fish.
Which is enough to tell you where
a lot of the hostility toward MRIP comes from.
But then, it comes from other
places, too.
People tend to distrust the unfamiliar and the things they don’t understand, and the fact is that MRIP is a complex construct involving multiple surveys and intricate statistical calculations. Most anglers don’t really understand
how MRIP works, and few in the angling community (myself very much included)
have the mathematical background needed to understand its statistical basis.
So they fall back on arguments
like “No one ever asked me what I caught” and “I don’t know anyone who was
surveyed,” write off the math as the next thing to witchcraft, and label MRIP a
fraud.
Unfortunately, the angling press
is often complicit in such misdeeds. One
of the most recent examples of that appeared in an article in Delaware’s Cape
Gazette, titled “The problems with the Marine Recreational Information
Program.”
There, the author, a local
outdoor writer, led off with the somewhat remarkable statement that
“The Marine Recreational Information
Program is the basis of all regulations made by federal and state agencies, and
these regulations are what we have to live with. The regulations are formed by scientists, many
of whom do not know a striped bass from a sea bass, and then these regulations
are reviewed by the Scientific and Statistical Committee that is made up of
people who are supposed to know what’s going on in the field. But after looking at the numbers, I must
conclude they do not.”
He then goes on to pick a few
specific estimates which, in his view, demonstrate that MRIP is badly flawed.
In reality, he only demonstrated his
own ignorance about MRIP and about the greater management process.
It would be easy to write off the first line—that the biologists who manage East Coast fisheries “do not know a striped bass from a sea bass”—as mere hyperbole, and not something that readers were meant to believe, if we didn’t hear something very similar at most management meetings. There, someone from the recreational or commercial fishing industry will get up and announce that the scientists are mere pencil-pushers who only view fisheries through their data sets, and have no idea of “what’s really going on out on the water.”
There are far too many people,
including those who ought to know better, trying to convince us that such things are true.
The problem is that such folks, to the extent that they actually believe what they're saying, likely just don’t know enough fisheries scientists.
Over the course of my life, I’ve
had the privilege of knowing, conversing with, and even fishing with,
quite a few fisheries scientists, people who range from
fledgling biologists still working on their graduate degrees to experienced
researchers who have been in the field for decades, and can assure anyone
reading this that every one of them can tell the difference between a striped
bass and a sea bass—and distinguish between the various species of skates,
herrings, and hakes, too, which is something that I doubt the author of the
piece in question could manage.
The plain truth is that many
fisheries scientists are attracted to the career because of their early
appreciation of the outdoors, whether as anglers, scuba divers, or members of a commercial fishing family. Even as
students, they participate in trawl surveys and other activities that require them to sort, count, and identify a host of fish species as part of their day-to-day activities.
Suggesting that they can’t tell common fish apart is absurd.
Yet the writer’s comment does more than demean fishery scientists; it demonstrates his ignorance of the fisheries management system.
Regulations and underlying fishery management measures, at least at the federal level, where MRIP plays the greatest role, are not
determined by scientists, but by regional fisheries
management councils dominated by members of the fishing community. While such councils are guided by scientific
advice, the management measures themselves are determined by council vote; by
law, NMFS may only approve, disapprove, or partially approve such council
actions. Except under very unusual
circumstances, the agency itself is not authorized to initiate management
actions.
And when regulations and management measures are adopted, they are not “reviewed by the Scientific and Statistical Committee,” for that’s not the SSC’s role.
Instead,
Magnuson-Stevens states that
“Each scientific and statistical committee
shall provide its Council ongoing scientific advice for fishery management
decisions, including recommendations for acceptable biological catch,
preventing overfishing, maximum sustainable yield, and achieving rebuilding
targets, and reports on stock status and health, bycatch, habitat status,
social and economic impacts of management measures, and sustainability of
fishing practices.”
The SSC might play many roles, but
reviewing regulations is definitely not one of them.
That said, MRIP is far from perfect. All of its estimates are just that—estimates—and all include some degree of uncertainty. Sometimes that uncertainty can be very large. And sometimes, as in the case of the recently discovered error in the Fishing Effort Survey, there can be flaws in its methodology that need to be fixed.
But that doesn’t mean that MRIP
is as badly flawed as its detractors insist.
Often, the biggest flaws are in its detractors’ notions of how MRIP
ought to be used.
The Cape Gazette article illustrates that very clearly, when it tries to impeach the overall accuracy of MRIP by citing particular estimates, limited to a single state and a single sector, that may seem wildly inaccurate, even though MRIP already warns users about such estimates’ flaws.
MRIP is no different than any
other statistical survey, in that the probable level of error is reduced when
the number of samples is increased. Estimates that, by their nature, reflect very few samples are likely to include
a high degree of uncertainty. Thus, when
the author of the Cape Gazette article questions the data related to summer
flounder landings by party boats based in Delaware, Maryland, and Virginia, he
reveals his ignorance about how the survey actually works.
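To make the sample-size point concrete, here is a minimal Python sketch. It uses invented, heavily skewed “per-trip landings” and a textbook estimate of the relative error of a sample mean—what MRIP calls “percent standard error,” discussed just below. The real survey’s design-based estimator is far more elaborate; this is only an illustration of the principle.

```python
import random
import statistics

def pse(samples):
    """Percent standard error of a simple sample mean:
    100 * (standard error / estimate). A bare-bones stand-in for
    the PSE that MRIP reports."""
    se = statistics.stdev(samples) / len(samples) ** 0.5
    return 100 * se / statistics.mean(samples)

random.seed(1)
# Invented per-trip landings -- lognormal, i.e. heavily skewed,
# as recreational catch data tend to be.
population = [random.lognormvariate(0, 1) for _ in range(100_000)]

for n in (10, 100, 1_000):
    draws = random.sample(population, n)
    print(f"n = {n:>5}: estimate = {statistics.mean(draws):5.2f}, "
          f"PSE = {pse(draws):5.1f}%")
```

The error term shrinks roughly as one over the square root of the sample size, which is why an estimate built on a handful of dockside intercepts carries so much more uncertainty than one built on hundreds.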
Uncertainty in MRIP estimates is measured by a statistic called “percent standard error.” The percent standard errors of the estimates of Delaware, Maryland, and Virginia party boat landings of summer flounder in 2022 are 89.3, 57.8, and 91.9, respectively; the NMFS website on which such estimates appear warns, in bold red type, that
“MRIP does not support the use of estimates with a percent standard error above 50 and in those instances, recommends considering higher levels of aggregation (e.g., across states, geographic regions, or fishing modes).”
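The arithmetic behind that recommendation is simple enough to sketch. Assuming the three state estimates are independent, their variances add, and the pooled estimate’s PSE falls well below any individual state’s. The poundage figures below are invented for illustration only—MRIP’s online query tool reports the real ones—and only the PSEs come from the estimates cited above.

```python
import math

# PSEs cited above, paired with purely hypothetical landings (lbs).
states = [
    ("Delaware",  5_000, 89.3),
    ("Maryland", 20_000, 57.8),
    ("Virginia",  8_000, 91.9),
]

total = sum(est for _, est, _ in states)
# Treating the state estimates as independent, their variances add.
combined_var = sum((est * pse / 100) ** 2 for _, est, pse in states)
combined_pse = 100 * math.sqrt(combined_var) / total

print(f"Combined estimate: {total:,} lbs, PSE = {combined_pse:.1f}%")
```

With those made-up weights, the pooled PSE comes out near 44—under the 50-percent threshold—which is exactly the point of aggregating across states or modes.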
The website also warns that, because of the estimates’ high percent standard error, such estimates are not significantly different from zero, effectively warning that they are meaningless. And in answer to the question “Does Harvest…Total Weight (lbs) Meet MRIP Standard,” it states “NO” in all three cases.
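The “not significantly different from zero” warning follows directly from the PSEs themselves. Under a simple normal approximation (not NMFS’s actual method), any estimate with a percent standard error above roughly 51 has a 95 percent confidence interval that reaches below zero:

```python
Z95 = 1.96  # two-sided 95% critical value, normal approximation

# The cited PSEs; each estimate is expressed as 1.0 (a fraction of
# itself), since only the PSE matters for this check.
for state, pse in [("Delaware", 89.3), ("Maryland", 57.8), ("Virginia", 91.9)]:
    low = 1.0 - Z95 * pse / 100
    high = 1.0 + Z95 * pse / 100
    print(f"{state}: 95% CI runs from {low:+.2f} to {high:+.2f} "
          f"times the estimate; includes zero: {low <= 0}")
```

All three intervals include zero, which is the statistical way of saying that the estimates, standing alone, tell us nothing reliable at all.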
Yet the author of the Cape Gazette article, who has been warned by NMFS that the MRIP estimates he cites are not precise enough for management use, nonetheless uses them as examples of why MRIP ought not to be trusted.
In view of such a clear misrepresentation of MRIP’s precision, it could easily be argued that it isn’t MRIP, but the author, that ought not to be trusted.
Yet such misrepresentation of MRIP data, and of how it ought to be used, occurs often in the angling press, in articles and editorials that either reflect the ignorance of their authors or, more darkly, seek to impugn MRIP as part of a greater effort to undermine the federal fishery management system and push landings beyond prudent levels for the sake of short-term economic gain.
Next Sunday, we'll look into the latter situation in greater detail in "Maligning MRIP for Fun and Profit Part II: For Profit, Full Coolers and Political Gains."