Joseph Smith

Computer Vision Researcher | Open to Research Scientist roles


✍️ Blog

This section collects practical, implementation-oriented write-ups and reflections from my PhD research. I focus on robust computer vision and deep learning methods that transfer to real deployment settings: handling image quality variation, adapting models as data evolves, and improving segmentation and fine-grained classification under distribution shift.

Posts are written to be readable by both researchers and engineers. I include method framing, concrete lessons, failure cases, related papers, and occasional career-stage reflections on what this work means outside the thesis.

Latest Posts

[Post image: mobile phone image-based anti-copy pattern detection framework]

Fourier Image Similarity: From Paper to Open-Source Package

March 27, 2026 · 10 min read

Tags: Frequency Domain · Image Similarity · Open Source

A walkthrough of the Fourier-domain image comparison pipeline extracted from my anti-copy pattern detection paper, now available as a standalone Python package.

  • Why periodic structure is better compared in frequency space than pixel space
  • How peak detection, local prominence, and entropy combine into a similarity score
  • Tuning the pipeline for a new image domain
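To make the peak/prominence/entropy combination concrete, here is a minimal toy sketch of the idea: compare two images by the overlap of their dominant Fourier peaks, discounted by a spectral-entropy difference. This is my simplified stand-in, not the package's actual API; the function name, the top-k "peak detection", and the way the terms are combined are all illustrative assumptions (the real pipeline also uses local prominence).

```python
import numpy as np

def fourier_similarity(img_a: np.ndarray, img_b: np.ndarray, top_k: int = 16) -> float:
    """Toy frequency-domain similarity: peak overlap / (1 + entropy gap).

    Illustrative only; the published pipeline's scoring differs.
    """
    def spectrum(img):
        # Centered log-magnitude spectrum, where periodic print
        # structure shows up as isolated bright peaks
        f = np.fft.fftshift(np.fft.fft2(img))
        return np.log1p(np.abs(f))

    def peak_set(mag, k):
        # Crude "peak detection": indices of the k largest bins
        flat = np.argsort(mag, axis=None)[-k:]
        return set(map(tuple, np.column_stack(np.unravel_index(flat, mag.shape))))

    def spectral_entropy(mag):
        # Entropy of the normalised spectrum: low for strongly
        # periodic images, high for noise-like content
        p = mag.ravel() / mag.sum()
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    sa, sb = spectrum(img_a), spectrum(img_b)
    peaks_a, peaks_b = peak_set(sa, top_k), peak_set(sb, top_k)
    # Jaccard overlap of dominant peaks: 1.0 for identical structure
    overlap = len(peaks_a & peaks_b) / len(peaks_a | peaks_b)
    # Penalise images whose spectral "spread" differs
    ent_diff = abs(spectral_entropy(sa) - spectral_entropy(sb))
    return overlap / (1.0 + ent_diff)
```

The point of working in frequency space is visible here: a small translation of a periodic pattern barely moves its magnitude spectrum, so the peak sets stay aligned even when a pixel-wise comparison would fail.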
[Post image: printed thesis cover, "Robust and Explainable Deep Learning for Fine-Grained Visual Inspection in Anti-Counterfeiting Applications"]

Passing My PhD Viva and What Comes Next

March 14, 2026 · 6 min read

Tags: PhD · Career · Robustness

A personal reflection on passing my viva, the thesis contributions I care about most, and the kind of Research Scientist work I want to do next.

  • Why robustness and explainability mattered more to me than benchmark-only gains
  • The three contributions that best capture the PhD
  • What I want to work on after finishing minor corrections
[Post image: augmentation-based model re-adaptation framework overview]

Model Re-Adaptation Under Domain Shift

March 11, 2026 · 13 min read

Tags: Robustness · Segmentation · Transfer Learning

A practical framework for preserving performance while adapting to evolving data distributions over time.

  • How to choose between retraining and sequential fine-tuning
  • Avoiding catastrophic forgetting while absorbing new classes
  • Why evolving augmentation pools improve segmentation robustness
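The two ingredients above, a retrain-vs-fine-tune decision and an augmentation pool that grows with the data, can be sketched as follows. Everything here is a hypothetical illustration of the idea, not the framework's actual API: the class name, the `drift_score` inputs, and the thresholds are all made up for the example.

```python
import random

class EvolvingAugmentationPool:
    """Sketch of an 'evolving' pool: later adaptation rounds sample from
    a superset of earlier augmentations, so old perturbations are never
    forgotten while new ones mimic the observed shift."""

    def __init__(self, base_augs):
        self.pool = list(base_augs)

    def evolve(self, new_augs):
        # Absorb augmentations that mimic newly observed shift,
        # skipping duplicates so the pool only ever grows
        for aug in new_augs:
            if aug not in self.pool:
                self.pool.append(aug)

    def sample(self, k=2, seed=None):
        # Draw k augmentations for the next training batch
        rng = random.Random(seed)
        return rng.sample(self.pool, min(k, len(self.pool)))

def choose_strategy(drift_score, new_class_fraction,
                    drift_thresh=0.5, class_thresh=0.3):
    """Toy decision rule: full retraining for large distribution shift
    or many new classes, cheaper sequential fine-tuning otherwise.
    Thresholds are illustrative, not tuned values."""
    if drift_score > drift_thresh or new_class_fraction > class_thresh:
        return "retrain"
    return "sequential_fine_tune"
```

The design point the post argues for is visible in `evolve`: by only ever adding to the pool, each fine-tuning round still rehearses the perturbations earlier rounds were trained against, which is one cheap defence against catastrophic forgetting.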
[Post image: quality effects on deep neural network classification]

Image Quality Filtering for Fine-Grained Classification

March 11, 2026 · 12 min read

Tags: Robustness · FGVC

Lessons from no-reference quality scoring and cut-off point selection in FGVC pipelines.

  • Why training quality distribution matters for deployment reliability
  • How to derive quality cut-off points from confidence-quality trends
  • When quality pruning helps more than architecture changes
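Deriving a cut-off from the confidence-quality trend can be illustrated with a simple heuristic: bin images by their no-reference quality score and return the left edge of the lowest-quality bin whose mean model confidence reaches a target. This is my toy approximation of the idea, not the paper's exact procedure; the function, bin count, and target are assumptions for the example.

```python
import numpy as np

def quality_cutoff(quality, confidence, n_bins=10, target_conf=0.8):
    """Pick a quality cut-off from the confidence-vs-quality trend.

    Images scoring below the returned cut-off would be filtered out
    before fine-grained classification. Illustrative heuristic only.
    """
    quality = np.asarray(quality, dtype=float)
    confidence = np.asarray(confidence, dtype=float)
    # Equal-width quality bins across the observed score range
    edges = np.linspace(quality.min(), quality.max(), n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (quality >= lo) & (quality <= hi)
        # First (lowest-quality) bin the model is still confident on
        if mask.any() and confidence[mask].mean() >= target_conf:
            return float(lo)
    # No bin is reliable enough: everything falls below the cut-off
    return float(edges[-1])
```

On data where confidence rises with quality, the cut-off lands where the trend crosses the target, which is the "cut-off point selection" behaviour the post discusses.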