If fisheries managers want to have any hope of properly regulating recreational landings, they have to know just how many fish anglers actually kill. That’s a hard thing to do.
Estimating commercial landings is relatively easy. The universe of commercial fishermen is fairly small, they are generally required to have licenses and, in most fisheries, have to fill out vessel trip reports, often before their boat returns to shore. Shoreside fish buyers and packing houses generally have to fill out and file their own reports of the fish that pass through their hands, and who they purchase them from, which provides a sort of ground-truthing for the commercial fishermen’s filings.
Yes, some fish fall through the cracks. There are illegal landings, and a few fish sold directly from the boat to end users, which don’t make it into the reported landings, but those instances probably make up a very small proportion of the commercial landings. Sometimes fishermen and fish docks conspire to hide or misreport landings, as was the case with Carlos Rafael up in New Bedford, and with some folks right here on Long Island who were engaged in the summer flounder fishery. Those instances can substantially distort the commercial landings picture, but they don’t occur often.
Generally, the commercial landings estimates are pretty good.
Getting accurate recreational fishing data is a lot harder.
There are far more anglers than there are commercial fishermen. They fish from for-hire boats, and from private boats that might be docked at one of the many marinas scattered all along the coast, or tied up in an obscure creek or canal that abuts someone's back yard. They fish from boats that might be cartopped or trailered, and not kept in the water at all. They fish from rental boats, kayaks and paddleboards, and from every bit of accessible land that touches saltwater. And with very few exceptions, they’re not required to report what they catch; they can just take it home, with no one the wiser.
For many years, the National Marine Fisheries Service has tried to figure out the best way to estimate recreational landings. The earliest surveys, conducted prior to 1981, were slapdash and badly constructed affairs that didn’t produce meaningful data. In 1981, NMFS rolled out the Marine Recreational Fishery Statistics Survey, which was light years ahead of what existed before, but was still badly flawed. A National Academy of Sciences report, Review of Recreational Fisheries Survey Methods, released in 2006, found many problems with the MRFSS program, and said, in part, that
“The MRFSS (as well as many of its component or companion surveys conducted either indirectly or independently) should be completely redesigned to improve its effectiveness and appropriateness of sampling and estimation procedures, its applicability to various kinds of management decisions, and its usefulness for social and economic analyses. After the revision is complete, provision should be made for ongoing technical evaluation and modification, as needed, to meet emerging management needs…”
In response to that advice, NMFS replaced MRFSS with the Marine Recreational Information Program, which was intended to address and avoid the problems inherent in MRFSS.
On the whole, the new MRIP represented a marked improvement over the old program. In early 2017, the National Academy of Sciences released its Review of the Marine Recreational Information Program. That report didn’t say that MRIP was perfect—there is always room for improvement—but it did say that
“Work to redesign the National Marine Fisheries Service’s recreational fishery survey program (now referred to as the Marine Recreational Information Program) has yielded impressive progress over the past decade in providing more reliable data to fisheries managers. Major improvements to the statistical soundness of the survey designs were achieved by reducing sources of bias and increasing sampling efficiency as well as through increased coordination with partners and engagement of expert consultants.”
Overall, it was a solid endorsement.
Yet, despite the National Academy of Sciences’ general endorsement of the MRIP survey, it is still often disdained by anglers and the angling industry, who tend to address it in a negative and dismissive manner. For example, the Mid-Atlantic Fishery Management Council’s Summer Flounder, Scup, and Black Sea Bass Fishery Performance Report, from the August 2019 Advisors’ meeting, notes that
“Multiple advisors said they had no faith in the data from the Marine Recreational Information Program (MRIP), which they see as inaccurate and fundamentally flawed,”
and lists a number of complaints that the advisors had about the program.
Some of those complaints undoubtedly arose because the MRIP survey returns data, and leads to management measures, that some advisors just plain don’t like, regardless of the data’s validity, and others seem to have arisen out of advisors’ misunderstanding of how the MRIP program operates. Even so, there is no question that MRIP-based management measures often fail to constrain recreational landings to prescribed levels, and thus are deserving of criticism. Striped bass in Maryland’s portion of Chesapeake Bay, and black sea bass in the northeast, are two examples that immediately come to mind.
However, a closer examination demonstrates that MRIP-based measures normally fail when MRIP is used in inappropriate ways, something that most often happens when MRIP is linked with the Atlantic States Marine Fisheries Commission’s concept of “conservation equivalency.”
Boiled down to its essentials, conservation equivalency is a policy that allows a single state, or perhaps a group of states, to adopt management measures for a species that differ in one or more respects from the measures specified in the relevant ASMFC fishery management plan. The idea is that conservation equivalency allows a state to craft measures that best suit its local fishery, without doing harm to the stock.
In theory, it should work quite well. In reality, it hasn’t been so successful.
One of the reasons is that, by allowing states to adopt measures that differ from the coastwide standard, conservation equivalency degrades the precision of the MRIP estimates that managers must rely on.
There are two things to think about when considering the accuracy of an MRIP estimate. One is that, as NMFS tells us,
“The more samples you draw, the more precise your estimate.”
The other is that, no matter how many samples you draw, there will still be some level of error in the resulting estimate. As NMFS explains,
“Because a sample does not include all members of a population, an estimate based on a sample is likely to differ from the actual population value. Indeed, sampling error is inherent in all sample statistics…

“The most common measure of sampling error is precision, which measures the spread of independent sample estimates around a true population value. This is sometimes understood as the standard error or confidence interval. We account for standard error in our recreational fishing estimates by ensuring these estimates are made up of two parts: a point estimate, which represents our estimate of total recreational catch, and a percent standard error, which represents our confidence in this value and is similar to the margin of error used in polling. The lower the percent standard error, the higher our confidence that the estimate is close to the actual population value.”
If we apply that to, say, the 2018 black sea bass fishery, we find that the percent standard error for black sea bass caught by anglers between the Canadian border and North Carolina is 5.7, which suggests a high degree of precision, while the PSE for all black sea bass harvested by those anglers in 2018 is 7.7, which is still very good. If fishery managers base recreational management measures on that coastwide data, then absent any extraordinary events such as an unusually stormy season, those recreational management measures would have a good chance of keeping landings right around the recreational harvest limit.
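To put those PSE figures in concrete terms, here is a minimal sketch, in Python, of how a point estimate and its percent standard error combine into a rough 95% confidence interval. The harvest figure used below is purely hypothetical; only the 7.7 PSE comes from the coastwide estimate discussed above.

# Minimal sketch: turning an MRIP point estimate and its percent standard
# error (PSE) into an approximate 95% confidence interval.
# The point estimate below is hypothetical; the 7.7 PSE is the coastwide
# harvest figure cited above.

def confidence_interval(point_estimate, pse, z=1.96):
    # PSE is the standard error expressed as a percentage of the estimate,
    # so the standard error itself is point_estimate * pse / 100.
    standard_error = point_estimate * pse / 100.0
    return (point_estimate - z * standard_error,
            point_estimate + z * standard_error)

low, high = confidence_interval(3_000_000, 7.7)
print(f"95% CI: {low:,.0f} to {high:,.0f} fish")
# With a PSE of 7.7, the interval stays within roughly 15% of the estimate.

An estimate that tight gives managers something they can reasonably regulate against.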
However, the Mid-Atlantic Council decided that the recreational black sea bass fishery could be managed on a conservation equivalency basis, with the ASMFC approving each state’s proposals. So at its February 2018 meeting, ASMFC’s Summer Flounder, Scup and Black Sea Bass Management Board decided to allocate 61.35% of the recreational harvest to New York and New England, where most of the recreational harvest is caught. It grouped New Jersey with the southern states between it and North Carolina (even though New Jersey arguably shares most of its black sea bass fishing grounds with New York, and fishes on the same sub-stock of black sea bass as western Long Island and New York City during the summer, and on the same sub-stock as New England and eastern Long Island during the winter), gave New Jersey and the other southern states 38.65% of the recreational landings, and gave New Jersey 78.25% of the fish awarded to the entire southern region.
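A quick bit of arithmetic, using nothing but the percentages above, shows what that allocation means for New Jersey’s effective share of the coastwide recreational harvest:

# Back-of-the-envelope arithmetic from the allocation percentages above.
northern_region_share = 0.6135   # New York and New England
southern_region_share = 0.3865   # New Jersey through North Carolina
nj_share_of_southern = 0.7825    # New Jersey's slice of the southern region

nj_share_of_coastwide = southern_region_share * nj_share_of_southern
print(f"New Jersey's effective coastwide share: {nj_share_of_coastwide:.1%}")
# Roughly 30.2% of the coastwide recreational harvest.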
So far, so good, but then conservation equivalency sets in.
Addendum XXX to the Summer Flounder, Scup, and Black Sea Bass Fishery Management Plan set a basic 15-fish limit, 12 ½-inch minimum size and May 15-December 31 season for the states between Delaware and North Carolina. North of that, conservation equivalency held sway. New Jersey was permitted to adopt a complex set of regulations that included four different combinations of size limit, bag limit and season, interrupted by two different season closures, while New York and the New England states were provided with a standard set of rules, but were permitted to choose conservation-equivalent regulations so long as such regulations did not
“exceed a difference of more than 1” in size limit and 3 fish in possession limit from the regulatory standard.”
That set the stage for a situation where no state between New Jersey and Massachusetts—the states that account for more than 90% of the black sea bass catch—shared the same regulations, even though their boats might fish side by side when targeting sea bass in federal waters.
It also illustrated the wrong way to employ MRIP estimates.
Consider New Jersey, which provides the most extreme example.
Just going from a coastwide estimate of black sea bass harvest to a New Jersey-specific estimate significantly reduced the precision of the landings estimates, with the percent standard error increasing from 7.7 to 16.3; even so, a PSE of 16.3 is good enough to support reasonably effective management measures, provided those measures are calculated on an annual basis.
But that’s not what New Jersey did.
Instead, the state adopted an intricate set of rules that saw the season begin on May 15, with a 12 ½-inch size limit and a 10-fish bag limit, only to close on June 22. The season remained closed for 8 days, then reopened on July 1 with the same size limit, but a bag limit reduced to just 2 fish. Those rules remained in place through the end of August. The season then closed again for 37 days, reopening on October 8, when the bag limit returned to 10 fish. Finally, on November 1, the bag limit increased to 15 fish, the size limit increased to 13 inches, and those regulations remained in force through the end of the year.
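For readers trying to keep track, that schedule boils down to four distinct regulatory periods, restated here as a simple data structure. The dates and limits come straight from the description above; the assumption that the 12 ½-inch size limit carried through the October reopening follows from the fact that the size limit did not change until November 1.

# New Jersey's 2018 black sea bass schedule, restated from the description above.
nj_2018_seasons = [
    {"open": "May 15", "close": "Jun 22", "size_limit_in": 12.5, "bag_limit": 10},
    # season closed Jun 23 - Jun 30 (8 days)
    {"open": "Jul 1",  "close": "Aug 31", "size_limit_in": 12.5, "bag_limit": 2},
    # season closed Sep 1 - Oct 7 (37 days)
    {"open": "Oct 8",  "close": "Oct 31", "size_limit_in": 12.5, "bag_limit": 10},
    {"open": "Nov 1",  "close": "Dec 31", "size_limit_in": 13.0, "bag_limit": 15},
]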
In order for New Jersey anglers to come close to, but not exceed, their share of the coastal black sea bass harvest, all of those measures would have to depend on one of two things—precise data, or pure luck.
So first, let’s look at the data. The first of New Jersey’s four different sets of black sea bass regulations fell during Wave 3, May and June. The PSE for New Jersey’s 2017 black sea bass landings during that wave (we have to look at 2017, because that’s what the 2018 regulations were based on) was 41.1, a level of precision far worse than that of the annual estimate, and far, far worse than the coastwide figure. The PSE for Wave 4, July and August, at 23.5, was much closer to the precision of the annual estimate, although still far from stellar. Things went downhill from there. For Wave 5, September and October, the PSE was 58.7—so bad that NMFS highlighted it in red as a warning—and for Wave 6, November and December, the PSE fell back out of the red zone to a still-imprecise 36.9.
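To see just how loose those wave-level estimates are, each PSE can be translated into the rough width of its 95% confidence interval. This sketch uses only the PSEs quoted above; no landings figures are needed, because the width is expressed relative to the point estimate.

# Rough relative width of the 95% confidence interval implied by each
# wave-level PSE for New Jersey's 2017 black sea bass landings.
wave_pses = {
    "Wave 3 (May-Jun)": 41.1,
    "Wave 4 (Jul-Aug)": 23.5,
    "Wave 5 (Sep-Oct)": 58.7,
    "Wave 6 (Nov-Dec)": 36.9,
}

for wave, pse in wave_pses.items():
    half_width = 1.96 * pse / 100.0
    print(f"{wave}: 95% CI spans roughly +/- {half_width:.0%} of the estimate")
# Wave 5's interval runs from well below zero to more than twice the point
# estimate, which is hardly a firm foundation for wave-specific regulations.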
Numbers like those provide little reason to believe that regulations based on such imprecise landings estimates will come close to achieving their goals.
Yet, while New Jersey provides the worst example of regulations based on imprecise estimates, the states to its north, most of which also changed regulations from wave to wave and/or from mode to mode, also based rules on data of dubious precision. New York’s 2017 Wave 3 estimate had a PSE of 73.1; its Wave 6 estimate, with a PSE of 66.5, almost certainly seriously overestimated that state’s landings, and had regulatory implications not just for the state, but for the whole northern region.
Things don’t have to be that way. There are two very obvious ways to rein in conservation equivalency run amok, and the related misuse of MRIP data.
The first is so obvious that it’s surprising that the ASMFC hasn’t adopted it: more effective regional management.
In 2004, the ASMFC adopted Addendum XI to the Summer Flounder, Scup and Black Sea Bass Management Plan. That addendum grouped the four states then responsible for 97% of the recreational scup landings—Massachusetts, Rhode Island, Connecticut and New York—into a single region. All states in the region would share the scup resource, and would be required to adopt the same size limit, bag limit and season (although they had some flexibility in when they would set the special 60-day “bonus season” that gave anglers fishing from for-hire boats a higher bag limit than others). It was still a form of conservation equivalency, but one that did not compromise the precision of landings estimates by permitting states in the region to adopt their own, supposedly equivalent regulations.
As a result, the recreational scup fishery has been stable for well over a decade, with regulations changing to adapt to stock health, but no longer whipsawing from year to year.
The ASMFC also had brief success with recreational management of summer flounder, but that quickly died when New Jersey was allowed to exit the region it once shared with Connecticut and New York, in order to accommodate the supposedly different fishery in Delaware Bay, and subsequently went out of compliance not only with regional regulations, but with the ASMFC’s summer flounder management plan.
The other solution to the problems created by conservation equivalency would be to hold states accountable for the effectiveness of their supposedly “equivalent” regulations. As recently noted in a blog that appeared on the Saltwater Edge website, there should be
“No [conservation equivalency] proposals without payback provisions the following year.”
It just makes sense.
If a conservation equivalency proposal really is equivalent to the measures in a management plan, no overage should ensue. But if the conservation equivalency plan is somehow wanting—if, say, it was based on imprecise data—then it’s only right that the state that proposed it make restitution for any harm to the stock.
I can hear state representatives complaining now, saying that the state-level data isn’t good enough to support payback plans. And I agree.
But if that’s the case, the same state-level, wave-level and/or mode-level data isn't good enough to support conservation equivalency, either.