Development of Components for Barrier-Free Pupillometry

Date

2023

Abstract

Investigating the pupillary light reflex (PLR) in response to a predefined light stimulation protocol remains a common ophthalmological practice, as it indicates the state of the retina and the function of its photoreceptors. Beyond diagnosis, the PLR can also reveal therapeutic progression and monitor its influence on inherited retinal degenerations (IRDs). However, the classical form of this examination presumes a minimum level of cooperation from the patient. Because some genetic mutations manifest early in life, a successful PLR-based investigation therefore requires adapting the examination to patients from infancy onward.
In classical pupillometers, automatic extraction of the pupil size is achievable with classical image processing methods because the acquired eye-centered images provide a straightforward environment. Extending pupillography systems to a broader spectrum of patients requires equipping the processing pipeline with tools that tolerate non-cooperation. The degree of cooperation is measured by the patient's ability to keep the head and eyes still during a measurement session. Non-cooperative patients, such as infants, therefore introduce a new image space with several additional sources of variability arising from their free head pose and eye behavior.
In the last decade, deep learning (DL) has emerged as a powerful alternative to classical computer vision methods, because DL models can extract the necessary features from images with complex backgrounds. In this work, the problem of extending pupillometry to very young patients is approached with DL techniques, providing an end-to-end solution that automates the collection of PLR information from non-cooperative patients. First, a Convolutional Neural Network (CNN) is employed to reduce the complex image space to the traditional eye-centered one. Second, to achieve accurate PLR measurements, a novel post-processing algorithm is proposed that utilizes depth information to determine the pupil size at the subpixel level. Third, a decision support tool is proposed to enhance the objectivity of the measurements: a second DL model provides gaze information for inferring the accommodation reflex, a major factor in degrading the objectivity of the PLR with respect to the predefined light stimulus.
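The depth-based sizing step rests on simple pinhole-camera geometry: an object of physical size s at distance z from the camera projects to s·f/z pixels for a focal length of f pixels. A minimal sketch of that conversion (function name and numbers are illustrative assumptions, not the thesis' actual algorithm):

```python
def pupil_diameter_mm(diameter_px: float, depth_mm: float,
                      focal_length_px: float) -> float:
    """Convert an apparent pupil diameter in pixels to millimeters.

    Pinhole-camera geometry: an object of size s at distance z projects
    to s * f / z pixels, so s = pixels * z / f.
    """
    return diameter_px * depth_mm / focal_length_px

# Example: a 35 px pupil seen at 300 mm with a 3500 px focal length
# corresponds to a 3.0 mm physical diameter.
print(pupil_diameter_mm(35.0, 300.0, 3500.0))
```

The same relation explains why depth information matters: without z, a change in apparent pixel size cannot be separated from a change in viewing distance.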
It is worth noting that each of these components was tested on publicly available datasets and exhibited performance adequate for the intended use case. The pupil region extraction model achieved a Normalized Mean Error (NME) of around 4% on the 300-W and WFLW datasets.
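The NME reported here is the standard landmark metric on 300-W and WFLW: the mean Euclidean point-to-point error divided by a normalizing distance (commonly the inter-ocular distance). A minimal sketch:

```python
import numpy as np

def normalized_mean_error(pred, gt, norm_dist):
    """NME: mean Euclidean landmark error divided by a normalizing
    distance, e.g. the inter-ocular distance on 300-W / WFLW.

    pred, gt: arrays of shape (n_landmarks, 2).
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    per_point = np.linalg.norm(pred - gt, axis=-1)  # error per landmark
    return per_point.mean() / norm_dist

# Two landmarks, each off by a (3, 4) offset -> error 5 px each;
# with a 100 px normalizing distance the NME is 0.05, i.e. 5%.
print(normalized_mean_error([[3, 4], [13, 4]], [[0, 0], [10, 0]], 100.0))
```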
The pupil size estimator was evaluated on three different datasets, reaching an accuracy of 76.04% and a precision of 81.75% on the Swirski dataset.
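Accuracy in pupil-detection benchmarks is commonly reported as a detection rate: the fraction of frames whose predicted pupil centre falls within a pixel-error threshold of ground truth. A sketch of that convention (the 5 px threshold is an assumption for illustration, not necessarily the one used in this work):

```python
import numpy as np

def detection_rate(pred_centers, gt_centers, threshold_px=5.0):
    """Fraction of frames whose predicted pupil centre lies within
    `threshold_px` of the ground-truth centre.

    pred_centers, gt_centers: arrays of shape (n_frames, 2).
    """
    errors = np.linalg.norm(
        np.asarray(pred_centers, dtype=float)
        - np.asarray(gt_centers, dtype=float),
        axis=1,
    )
    return float((errors <= threshold_px).mean())

# Frame 1: error 5 px (hit at threshold); frame 2: error 10 px (miss).
print(detection_rate([[0, 0], [20, 0]], [[3, 4], [10, 0]]))  # -> 0.5
```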
For the objectivity enhancement solution, the employed DenseNet achieved a mean error of 7.6° on the EVE dataset.
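Mean angular error on gaze benchmarks such as EVE is the angle between the predicted and ground-truth gaze vectors, averaged over all samples. A per-sample sketch:

```python
import numpy as np

def angular_error_deg(pred, gt):
    """Angle in degrees between a predicted and a ground-truth
    gaze direction vector (3D); magnitude of the vectors is ignored."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    cos = np.dot(pred, gt) / (np.linalg.norm(pred) * np.linalg.norm(gt))
    # Clip guards against tiny floating-point overshoot outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Orthogonal gaze directions differ by 90 degrees.
print(angular_error_deg([1, 0, 0], [0, 1, 0]))  # -> 90.0
```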
By integrating the aforementioned components into a pupillometry framework, greater flexibility in accommodating the patient's behavior can be achieved. The structure of this thesis revolves around three key components: pupil detection, pupil size estimation, and objectivity enhancement. These components are presented in an alternating manner: their foundations in the literature in Chapter 1, the methods employed in Chapter 2, and the performance measurements in Chapter 4. Furthermore, Chapter 3 provides detailed information about the materials used in the implementation and evaluation processes. To give readers more comprehensive information, Appendix A contains additional material on various aspects discussed in the main body of the text.
