Allergic Rhinitis

Segmentation · Completed · Sep 2022 – Jan 2023 · Project Participant

Highlights

Results

  • F1 (RGB): 93.10
  • F1 (Lab): 92.13
  • F1 (Ab): 89.13

Details

Allergic rhinitis diagnosis from nasal endoscopic images.

Problem

Allergic rhinitis diagnosis from nasal endoscopic images requires accurate localization and classification of disease patterns under significant variability in illumination, orientation, and anatomical structure.
This project explored deep learning–based image analysis pipelines to evaluate the feasibility of automated segmentation and classification under such conditions.

My Contribution

I contributed during the early phase of the project, shortly after joining the lab, with a focus on model exploration and data-centric analysis.
My work primarily involved:

  • Systematic evaluation of multiple CNN backbones (VGG, ResNet, Inception, Xception, DenseNet) under a unified training protocol
  • Comparative analysis of aligned (rotated) vs non-aligned (original) image inputs
  • Hyperparameter sensitivity studies (learning rate, dropout) using 5-fold cross-validation
  • Investigation of preprocessing strategies, including cropping, rotation artifacts, and color correction effects
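The backbone and hyperparameter sweeps above can be sketched as a simple configuration grid. The backbone family names come from this write-up and the learning rates from the Training section; the specific model variants and dropout values below are illustrative assumptions, not the exact settings used in the project.

```python
from itertools import product

# Backbone families evaluated under the unified protocol (variants assumed)
backbones = ["VGG16", "ResNet50", "InceptionV3", "Xception", "DenseNet121"]

# Learning rates from the Training section; dropout values are placeholders
learning_rates = [1e-3, 5e-4]
dropout_rates = [0.3, 0.5]

# Cartesian product yields every configuration to run through 5-fold CV
grid = list(product(backbones, learning_rates, dropout_rates))

for backbone, lr, dropout in grid:
    # each configuration would be trained and evaluated on all 5 folds here
    print(f"{backbone}: lr={lr}, dropout={dropout}")
```

Enumerating the grid up front makes it easy to log every run consistently and compare backbones under identical conditions.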

These experiments informed subsequent architectural and preprocessing decisions adopted in later stages of the project.

Approach

Data

  • Endoscopic images with both aligned (rotated) and non-aligned variants
  • Standard 80:20 train–validation split, followed by 5-fold cross-validation
  • Data augmentation: rotation (0–20°), zoom (0–0.15), shift (0.2), shear (0.15), horizontal flip
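Given the TensorFlow stack listed at the bottom of this page, the augmentation ranges above map directly onto Keras's `ImageDataGenerator`. The parameter values below are taken from the list above; the choice of generator and the `rescale` normalization are assumptions about tooling, not confirmed details.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation ranges mirror the list above; rescale to [0, 1] is a common
# convention and is assumed here, not stated in the write-up
train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,       # rotation 0–20°
    zoom_range=0.15,         # zoom 0–0.15
    width_shift_range=0.2,   # shift 0.2
    height_shift_range=0.2,
    shear_range=0.15,        # shear 0.15
    horizontal_flip=True,
)
```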

Model

  • Transfer learning–based CNN classifiers with a shared head:
    • Base CNN backbone
    • Global pooling
    • Fully connected layers with ReLU
    • Dropout
    • Softmax output
  • Input resolution fixed at 224×224×3 across all models
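The shared head above can be sketched in Keras as follows. Xception is used here as one of the stronger backbones noted in this write-up; the dense-layer width, dropout rate, and `weights=None` (which skips the pretrained-weight download) are illustrative assumptions rather than the project's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(num_classes=2, dropout=0.5):
    # Base CNN backbone at the fixed 224×224×3 input resolution
    backbone = tf.keras.applications.Xception(
        include_top=False, weights=None, input_shape=(224, 224, 3)
    )
    model = models.Sequential([
        backbone,
        layers.GlobalAveragePooling2D(),              # global pooling
        layers.Dense(256, activation="relu"),         # FC + ReLU (width assumed)
        layers.Dropout(dropout),
        layers.Dense(num_classes, activation="softmax"),  # softmax output
    ])
    model.compile(
        # learning rate chosen from the 5e-4 to 1e-3 range used in training;
        # the Adam optimizer itself is an assumption
        optimizer=tf.keras.optimizers.Adam(learning_rate=5e-4),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

model = build_classifier()
```

Keeping the head identical across backbones is what makes the backbone comparison a fair, unified protocol.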

Training

  • Optimizer configured with learning rates in the 5e-4 to 1e-3 range
  • Batch size: 8
  • Epochs: up to 200
  • Binary cross-entropy and alternative loss functions evaluated

Evaluation

  • Accuracy and F1 score across cross-validation folds
  • Comparative analysis between preprocessing pipelines
  • Error inspection to identify artifacts introduced by rotation and color normalization
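For reference, per-fold F1 reduces to precision and recall over true-positive, false-positive, and false-negative counts; a minimal pure-Python version, independent of any particular library, with toy labels standing in for real fold predictions:

```python
def f1_score(y_true, y_pred):
    """F1 = 2PR / (P + R), computed from TP/FP/FN counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Mean F1 across cross-validation folds (toy (y_true, y_pred) pairs)
fold_predictions = [
    ([1, 1, 0, 0], [1, 0, 0, 0]),
    ([1, 0, 1, 0], [1, 0, 1, 1]),
]
mean_f1 = sum(f1_score(t, p) for t, p in fold_predictions) / len(fold_predictions)
```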

Notes

  • Rotated (aligned) images introduced subtle interpolation noise that measurably affected model performance
  • Square cropping based on edge/contrast information improved stability compared to rotation-based alignment
  • Color correction degraded performance by suppressing discriminative cues relevant for feature extraction
  • Xception and Inception architectures consistently outperformed other backbones in early experiments

Although I was not involved in the final model integration or manuscript preparation, this project marked my first hands-on exposure to medical image analysis research and helped establish foundational practices in experimental rigor, ablation design, and result interpretation.

Stack

TensorFlow · CNNs · Transfer Learning · Medical Imaging