We present a computer vision method for identifying and reading hatchery marks in salmon otoliths. Synthetic
otolith marks are used at hundreds of hatcheries throughout the Pacific Rim to record the release location of
salmon. Each year, human readers examine hundreds of thousands of otolith samples to identify the marks in
captured salmon. The data guide hatchery investments and inform dynamic management practices that maximize allowable catch while preserving wild-hatched populations. However, the method is limited by the time required to process otoliths, by the inability to distinguish between wild and unmarked hatchery fish, and, in some cases, by the subjective classification decisions of human readers. Automated otolith reading using computer vision has the potential to address all three of these limitations.
We tested the classification accuracy of transfer learning using previously published deep neural networks pretrained on the ImageNet database and compared it to the accuracy achieved by shallow networks developed specifically for otolith reading. With the available training and test sets, the shallow networks achieved better classification accuracy.
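As a concrete illustration of the transfer-learning baseline, a minimal PyTorch sketch follows. The pretrained architectures tested are not named in this summary, so ResNet-18, the frozen backbone, and all hyperparameters below are illustrative assumptions rather than the study's actual configuration.

    import torch.nn as nn
    import torch.optim as optim
    from torchvision import models

    N_CLASSES = 5  # assumption: four hatchery marks plus an unmarked class

    # Load a deep network pretrained on the ImageNet database.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

    # Freeze the pretrained backbone so only the new head is fine-tuned.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the 1000-way ImageNet head with an otolith classification head.
    model.fc = nn.Linear(model.fc.in_features, N_CLASSES)

    # Optimize only the replacement head.
    optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)

A shallow, task-specific alternative of the kind this comparison favors appears in the second sketch below.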
In particular, we report a novel otolith classification algorithm that uses two neural networks trained with an adversarial algorithm to achieve 93% accuracy in classifying otoliths among four hatchery marks and an unmarked class. The algorithm relies exclusively on hemi-section images of the otolith: no additional biological data are needed.
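The two-network adversarial procedure is not detailed in this summary, so the sketch below shows only one generic way such a pairing can work: a shallow, task-specific classifier and a second network that learns bounded image perturbations to make the classifier's training examples harder. Every architecture, name, and hyperparameter here is a hypothetical placeholder, not the published algorithm.

    import torch
    import torch.nn as nn

    N_CLASSES = 5  # four hatchery marks + unmarked (per the text above)

    def shallow_cnn(n_classes: int = N_CLASSES) -> nn.Module:
        # A small task-specific CNN of the kind called a "shallow network" above.
        return nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.LazyLinear(128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    class Adversary(nn.Module):
        # Second network: proposes bounded perturbations of the otolith image.
        def __init__(self, eps: float = 0.03):
            super().__init__()
            self.eps = eps
            self.net = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                nn.Conv2d(8, 1, 3, padding=1), nn.Tanh(),
            )

        def forward(self, x):
            return x + self.eps * self.net(x)

    classifier, adversary = shallow_cnn(), Adversary()
    opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-4)
    opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    def train_step(images, labels):
        # Adversary ascends the classification loss on perturbed images...
        opt_a.zero_grad()
        (-loss_fn(classifier(adversary(images)), labels)).backward()
        opt_a.step()
        # ...while the classifier descends it on the (re-perturbed) images.
        opt_c.zero_grad()
        loss = loss_fn(classifier(adversary(images).detach()), labels)
        loss.backward()
        opt_c.step()
        return loss.item()

In a min-max scheme of this kind, the adversary maximizes the loss the classifier is minimizing, which tends to harden the classifier against ambiguous mark patterns.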
Our work demonstrates a technique with modest training requirements that achieves unprecedented accuracy. The method can be adopted easily in existing otolith labs and scaled to accommodate additional marks, and it does not require tracking additional information about the fish from which each otolith was retrieved. Future work should determine the value of expanding the training set and applying the algorithm
to a more diverse set of otolith marks.