Thursday, August 24, 2023

THE PROBLEM WITH ANGLER-PROVIDED INFORMATION


I’ve attended a lot of meetings related to recreational fisheries management.  One of the hot issues has always been the need to improve the precision of recreational fishing data, and one of the proposed solutions, at least over the past decade or so, has been supplementing, or even replacing, the existing catch and effort surveys with data gathered directly from anglers.

To many, it seems like a good idea.  In the age of computers and smartphones, it would seem a simple task to create an application that would compile angler-provided information, and provide fishery managers with a much clearer picture of where, when, and how anglers fish, as well as the size, numbers, and composition of their catch.

The idea gained enough support that the Modernizing Recreational Fisheries Management Act, which was signed into law on the last day of 2018, calls on the Secretary of Commerce to prioritize

“the evaluation of electronic data collection, including smartphone applications, electronic diaries for prospective data collection, and an internet website option for…the public.”

It all sounds very modern and appropriately high-tech, but there is just one glaring problem:  There is no evidence that any such angler-based program would work.  

Looking back at past efforts to obtain information directly from anglers, there is plenty of evidence that suggests that any such effort, if applied to the broad community of recreational fishermen, is more likely to fail than succeed.

It all comes down to the fact that recreational fishermen venture out on the water seeking—not surprisingly—recreation.  They are not scientists who record the size, weight, time, and location of every fish caught; while some keep logs of their trips, most are far more casual.  Any information they have is collected without any sort of statistical rigor, and is generally recorded only in fallible memory.

The result is “data” that is unreliable at best, and very possibly completely worthless.

The latest example of that came earlier this month, when the National Marine Fisheries Service announced that it had found a problem with the Fishing Effort Survey, an essential component of the Marine Recreational Information Program.  It seems that anglers responding to the Survey fell victim to “telescoping error,” misremembering when trips were taken, and thus claiming more trips during the relevant period than they actually took.
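To see how even innocent recall errors inflate an effort estimate, consider a minimal simulation.  Every number in it (the angler count, the true trips per wave, and the telescoping rate) is hypothetical, chosen only to illustrate the mechanism; none of it comes from the Fishing Effort Survey itself.

    import random

    random.seed(1)

    N_ANGLERS = 10_000        # hypothetical number of anglers surveyed
    TRUE_TRIPS_PER_WAVE = 2   # trips each angler actually took this wave
    P_TELESCOPE = 0.15        # hypothetical chance of recalling an older trip as recent

    reported = 0
    for _ in range(N_ANGLERS):
        trips = TRUE_TRIPS_PER_WAVE
        # Telescoping: a trip taken before the two-month wave is
        # misremembered as having fallen within it.
        if random.random() < P_TELESCOPE:
            trips += 1
        reported += trips

    true_total = N_ANGLERS * TRUE_TRIPS_PER_WAVE
    print(f"True trips:     {true_total}")
    print(f"Reported trips: {reported}")
    print(f"Overestimate:   {100 * (reported - true_total) / true_total:.1f}%")

Even a modest recall error, affecting a minority of respondents, pushes the estimated effort well above reality, and nothing in the reported numbers themselves flags the problem.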

It’s not that the anglers were intentionally misrepresenting the number of trips that they took; NMFS even speculated that some were so eager to help that they reported a non-existent trip rather than admit that they hadn’t fished at all—and so couldn’t help with the Survey—during the past two-month “wave.”

But it’s just that sort of bias, which NMFS statisticians can’t easily detect, that makes angler-generated data suspect.

At least in the case of the Fishing Effort Survey, the errors were not intentional.  There’s plenty of reason to believe that in other cases, anglers intentionally skew their responses.

Sometimes that might be because anglers are contemptuous of either the management process or, at least, of the surveys and surveyors.  When members of one popular striped bass-oriented website discussed fishery surveys earlier this year, it drew responses such as

“I’ve been interviewed a few times over the years.  Always some find [sic] of college program worked in conjunction with NYS and interviewers were usually set up (blocking) the point of entry/exit.  So yes there were ‘catch reports’ but in reality few and far between and were interviewers told the truth in the first place?  Never by anyone I ever knew.”

A follow-up comment to that one, written by a different angler, read

“A few times at boat launches and at a few state parks we were interviewed.  Always some college student always in the middle of the day and we always lied about types and size of fish caught.”

No one commenting on the issue ever suggested that such misreporting was wrong.

Such dishonesty seems to arise out of a general antipathy toward MRIP, or perhaps toward the management process itself, but some members of the recreational fishing community also tailor their responses to those that, they believe, are most likely to tilt the management process in their favor.

There used to be a website called Nor’East Salt Water, which was popular among party boat crews and the rest of the pirate wing of the recreational sector.  Its forums often contained lengthy discussions of what folks ought to tell callers from the Coastal Household Telephone Survey—the badly flawed predecessor to today’s Fishing Effort Survey—to best assure the least restrictive recreational fishing regulations.

Speaking of the for-hire fleet, it’s no secret that some of their members intentionally understate the number of fish they release, in order to minimize regulators’ allowance for discard mortality, again to achieve the least restrictive regulations possible. 

Someone I know once filed a Freedom of Information request with New York’s Department of Environmental Conservation, seeking the number of striped bass caught and released by the state’s for-hire fleet.  The boats’ identities remained confidential, but the results were telling.  A number of for-hire boats reported landing well over one hundred bass over the course of a season, yet never reported catching a single undersized fish that had to be released (this occurred during the days of the 28-inch minimum size, rather than the slot limit).  Other boats landing the same hundred-something fish reported releasing at least as many bass as they retained, and some of the vessels landing the largest numbers of striped bass reported releasing two or even three times as many fish as they kept.

So any report of a vessel landing, say, 150 striped bass without releasing a single short is “fishy” data at best.  But when there is no way to ground-truth what anglers or boatmen report, that sort of thing is going to happen.

And then there are anglers who don’t provide bad data, but instead provide no data at all.

It’s not an unusual problem, despite supposedly “mandatory” reporting requirements.  Non-reporting is rife in the recreational bluefin tuna fishery, where it has been estimated that only about 20 percent of the fish landed are reported to the National Marine Fisheries Service. 

Down in Alabama, where recreational red snapper management has been a hot issue for a decade or more, state officials have complained that, once Alabama’s “Snapper Check” system was put in place, anglers failed to comply with its mandatory catch reporting.  Alabama fishery officials believed that only 31 percent of red snapper anglers reported their catch in 2016; that figure fell to just 22 percent a year later.  While we can only hope that the compliance rate has increased since then, such rates of noncompliance clearly demonstrate why anglers can’t be trusted to provide accurate catch numbers.

And then there’s the matter of bias.

We can debate the ideal form of a recreational data program, but there’s one thing that we ought to agree upon:  The data gathered must come from a representative sample of the angling community, or it has little value.

Thus, when folks talk about implementing a smartphone app to replace MRIP, the first question should be, “Do all anglers have smartphones?”  Or would data collected by such an app underreport effort, catch, and landings by older or less tech-savvy anglers, perhaps by less affluent anglers, and maybe by anglers who fish in remote areas or miles offshore, where cell phone reception is, at best, problematic?

There is also the issue of avidity.  That is, some people fish a lot more, and a lot more intently, than others.  Will the casual angler remember to log their catch with the same consistency as the angler who fishes harder and more often?  Will the hard-core “sharpie” report all of their fish, or hedge on landings and/or releases in an effort to skew the regulatory process?
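A back-of-the-envelope simulation shows how an opt-in app skews the picture when avid anglers log their trips more reliably than casual ones.  The population mix and reporting rates below are hypothetical, chosen only for illustration.

    import random

    random.seed(1)

    # Hypothetical population: 80% casual anglers (3 trips a year),
    # 20% avid anglers (30 trips a year).
    population = [("casual", 3)] * 8_000 + [("avid", 30)] * 2_000

    # Hypothetical reporting rates: the avid log far more reliably.
    P_REPORT = {"casual": 0.10, "avid": 0.60}

    reported = [trips for kind, trips in population
                if random.random() < P_REPORT[kind]]

    true_mean = sum(trips for _, trips in population) / len(population)
    app_mean = sum(reported) / len(reported)

    print(f"True mean trips per angler:      {true_mean:.1f}")   # 8.4
    print(f"App-based mean trips per angler: {app_mean:.1f}")    # roughly 19

The app doesn’t lie about any individual trip; it simply hears from the wrong mix of anglers, and the resulting average bears little resemblance to the fleet-wide truth.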

Finally, there is the issue of, for lack of a better term, literacy.  Will the willingness to report, and the rate of reporting, be influenced by education, the ability to read and write English, or an understanding of the scientific process?  Will reporting truly be representative, or will it miss the folks who fish from the bulkheads and broken-down piers, and from under roadways and bridges, and only reflect the activities of the more educated, computer-literate, and largely native-born angler?

We are a very long way from the day when electronic self-reporting will replace a well-designed survey.  I strongly suspect that such a day will never come.

At the same time, there is plenty of room for voluntary angler involvement in the recreational fishing process. 

I’ve participated in the NMFS Cooperative Shark Tagging Program since the late 1970s; it’s the largest program of its kind in the world.  But my role is merely to tag sharks that I catch and report the date and place of the catch, along with the sex, length, and condition of the fish, so that biologists can compare that information with similar data obtained if the shark is recaptured.  I don’t have to tag every shark that I catch, I don’t have to tell anyone about unsuccessful trips, and I don’t have to remember information about trips taken six or eight weeks before.  Everything is written down on a card that is mailed to the Program.

Things like that are well within most anglers’ competencies.  And no one is forced to take part.

Similar voluntary programs exist for billfish, tuna, and various inshore species, and provide real benefits to fishery managers. 

Many states also sponsor logbook programs, in which anglers volunteer to provide information on their trips for various species.  Such programs are not without bias, as the fishermen who participate are typically more experienced anglers, and it’s far from certain that their actions parallel those of the angling community as a whole.  Even so, they provide insight into catch per unit effort (provided that anglers also report trips where nothing is caught) and how it changes from year to year, into the size and proportion of fish released (which became a hot topic in the bluefish debate a few years ago), and into similar matters.
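That zero-catch caveat matters more than it might seem.  Catch per unit effort is just total catch divided by total trips, so if anglers never log blank trips, the denominator shrinks and the index inflates.  A hypothetical ten-trip log makes the point:

    # Hypothetical trip log: fish caught on each of ten trips.
    trips = [4, 0, 2, 0, 0, 1, 3, 0, 0, 2]

    cpue_true = sum(trips) / len(trips)            # zero-catch trips included
    nonzero = [t for t in trips if t > 0]
    cpue_inflated = sum(nonzero) / len(nonzero)    # blank trips never logged

    print(f"CPUE, all trips reported:    {cpue_true:.2f} fish per trip")      # 1.20
    print(f"CPUE, blank trips left out:  {cpue_inflated:.2f} fish per trip")  # 2.40

Half the trips in that log caught nothing; drop them, and the apparent catch rate doubles.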

Recently, we’ve seen the release of smartphone apps that can act as virtual logbooks, provide additional environmental data and, because no pen or paper is required, are simpler for anglers to use.  Such apps have been embraced by some states as a boon to fisheries research.

After considering the successes and failures of angler-sourced data in fisheries management, one conclusion seems clear:  The key is accepting such data’s limitations.

Relying on large-scale collection of recreational fishing data, which is either requested or required from the general angling population but cannot be ground-truthed by managers, will almost inevitably take managers down a road paved with unintentional errors, purposeful misdirection, noncompliance, and bias.  Provided that enough samples are taken, MRIP’s Access Point Angler Intercept Survey provides relatively accurate landings figures, but only because surveyors can physically identify, count, and measure the fish filling anglers’ coolers.  Once surveyors are forced to rely only on anglers’ representations, inaccuracies are virtually assured.

Voluntary tagging and logbook programs, involving participants who choose to support the management process, can provide valuable information, provided that managers understand and account for sources of bias that will inevitably arise.

Ultimately, the problem with relying on angler-supplied information is that, in the end, managers are asked to accept fish stories as truth, and that’s seldom a good idea.
