Otolaryngology (ENT) is a surgical speciality concerned with pathologies
of the head and neck. These vary from highly aggressive malignancies to
benign diseases that are nonetheless vastly detrimental to patients'
function. ENT has classically been seen as one of the surgical
specialities quickest to adopt newer technologies into routine clinical
practice. Interventional Technology and Imaging is a broad topic; in
particular I will explore the current and potential use of artificial
intelligence, as well as new technologies and equipment involving
artificial intelligence, within ENT by exploring case studies of
particular pathologies.
Artificial intelligence (AI) refers to machines demonstrating human-like
intelligence and being able to perform ‘human-intelligent’ tasks. It
revolves around the concepts of learning, problem solving and decision
making. By definition, such machines should be able to perform tasks to
the same standard as, or better than, humans. (1) Despite this notion
proving extremely popular in science fiction, AI has been relatively
slow to enter current clinical practice. Clinical medicine is
evidence-based, and there remains a distinct lack of wide-scale,
reliable evidence that AI is more effective than existing clinical
practice. This is one of the main challenges to its widespread
adoption. Two fundamental principles of AI
are machine learning and deep learning. (2) Machine learning involves
learning via a feedback loop: the machine receives large volumes of
data, identifies patterns and relationships within it, and feeds the
results back to refine its model. Deep learning, by contrast, involves
algorithms structured in layers. Each layer analyses a different aspect
of the data and passes its output to the next, allowing the machine to
automatically analyse progressively more complex features of the
initial data.
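The feedback loop described above can be sketched in a few lines of code. The perceptron below is a minimal, illustrative example (the toy task, learning rate and epoch count are assumptions for illustration, not drawn from any cited study): it predicts, compares its prediction with the true label, and feeds the error back into its weights.

```python
# Minimal sketch of machine learning as a feedback loop (illustrative only):
# predict, measure the error, feed the error back into the model's weights.

def train_perceptron(examples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input feature
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), y in zip(examples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred              # the feedback signal
            w[0] += lr * err * x1       # adjust weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy task: learn logical AND, which is linearly separable
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for x1, x2 in X]
# preds == [0, 0, 0, 1]: the feedback loop has converged on the pattern
```

Deep learning, by contrast, would stack many such units in layers, each layer's output feeding the next.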
Benign paroxysmal positional vertigo (BPPV) is by nature a benign
condition, but it can be severely debilitating for patients. In fact,
it is the most common cause of dizziness in the primary care
setting. (3) In elderly patients especially, the risk of falls
secondary to BPPV can result in significant morbidity and mortality, so
it is important that it is appropriately diagnosed and managed.
Diagnosis depends on eliciting position-dependent nystagmus. In
clinical practice, however, this can often be difficult to detect, and
patients may undergo unnecessary imaging involving radiation. Using
deep learning, an algorithm was developed as a diagnostic decision
support system. (4) 91,778 nystagmus videos, annotated by four
otological experts, were fed into the network to train and establish
the algorithm. It yielded a valid model that was both sensitive and
specific (each > 80%) for the diagnosis of BPPV. This holds huge
potential implications. As a diagnostic aid, it could lead to earlier
detection and potentially fewer sequelae of the disease. It could save
costs within the NHS, as many of
these patients are treated repeatedly in primary care before referral to
otologists. Primary care doctors in particular receive little exposure
to ENT pathologies in their training (5); such a tool could therefore
ease their diagnostic uncertainty and double as a learning aid. It
could also serve as a screening tool in rural areas where ENT
services are not readily available. There is potential for this program
to be embedded into handheld technology, massively increasing
accessibility, especially as it tracks eye movements, which in theory
any camera can capture.
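For context, the sensitivity and specificity quoted for the BPPV model (both > 80%) are computed directly from counts of correct and incorrect classifications. The sketch below uses hypothetical validation counts, not figures from the study (4).

```python
# Sensitivity and specificity from a confusion matrix.
# All counts below are hypothetical, for illustration only.

def sensitivity(tp, fn):
    # fraction of true BPPV cases the model correctly flags
    return tp / (tp + fn)

def specificity(tn, fp):
    # fraction of non-BPPV cases the model correctly clears
    return tn / (tn + fp)

# Hypothetical results on 200 validation videos
tp, fn = 85, 15   # 100 BPPV videos: 85 detected, 15 missed
tn, fp = 88, 12   # 100 non-BPPV videos: 88 cleared, 12 false alarms

sens = sensitivity(tp, fn)   # 0.85 -> "sensitive (> 80%)"
spec = specificity(tn, fp)   # 0.88 -> "specific (> 80%)"
```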
AI diagnostic aids face many challenges to their adoption. Indeed, a
report released by the Royal College of Surgeons (RCS) highlighted some
key concerns around widespread adoption (6): both doctors and patients
fear medical
personnel being replaced by machines, ones that lack the sensitive
personal interaction associated with a doctor-patient
relationship. (7) The ethics and governance of data remain topical
issues, as huge volumes of confidential patient data are potentially
exposed to breach (1). The generated algorithm is also heavily reliant
on the data it is fed, and its internal decision-making is opaque, the
so-called black-box conundrum. This can lead to learned bias, for
example over gender or ethnicity. The complexity of AI decision making
can potentially lead to an over-reliance on the system, despite it
being only an aid. Committing to address these concerns can help allay
public and clinician fear. For example, transparency over the use of
data by private companies, further
training and awareness of AI for all clinicians and strict guidelines
and protocols regarding its use can all help smooth the transition to
AI in healthcare, and in ENT in particular.
AI also holds huge potential elsewhere in ENT. Beyond the diagnostic
process, treatment options can also be optimised using AI. A
deep learning model was used to predict the hearing outcomes for
patients who had suffered sudden sensorineural hearing loss (SSNHL)(8). This is a multifactorial disease, and as such
outcomes typically vary widely from permanent hearing loss to complete
resolution with no residual symptoms. There is currently no
well-accepted and accurate predictive system. 149 predictive variables
were identified amongst 1,220 patients in this study. The AI model
proved far more accurate than a conventional logistic regression (LR)
analysis. However, this only held true with the large variable set. In
fact, once the three key predictive variables had been identified
(initial level of hearing, audiogram, and time between symptom onset
and study entry), the AI model actually proved less accurate than the
standard LR method. In clinical practice, it is far easier to gather
three key pieces of information than to gather data for 149 variables.
Herein lies a fundamental limitation of AI: to be truly accurate it
requires a large dataset, and in practical terms much of the gathered
data is inaccurate, incomplete or unobtainable.
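Logistic regression, the conventional comparator in that study, is simple enough to sketch directly. The sketch below mirrors the three named predictors, but the weights and intercept are invented for illustration; they are not the study's fitted coefficients.

```python
import math

# Illustrative logistic regression over the three key SSNHL predictors.
# Weights and intercept are hypothetical, not fitted values from the study.
def predict_recovery(initial_hearing_db, audiogram_score, days_to_entry,
                     weights=(-0.04, -0.3, -0.1), intercept=3.0):
    # linear combination of predictors...
    z = (intercept
         + weights[0] * initial_hearing_db
         + weights[1] * audiogram_score
         + weights[2] * days_to_entry)
    # ...squashed by the sigmoid into a 0-1 probability of recovery
    return 1 / (1 + math.exp(-z))

p = predict_recovery(initial_hearing_db=60, audiogram_score=1.0, days_to_entry=3)
# with these hypothetical weights, p is approximately 0.5
```

With only three inputs, such a model is transparent and needs comparatively little data to fit, which is exactly the practical advantage the study observed over the deep model.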
Extrapolating this even further in the treatment algorithm, AI also has
a role to play in hearing aids. It can lead to quicker and more accurate
fitting of cochlear implants (9). An algorithm can be
derived from large audiological data on hearing thresholds and audiogram
results, and then used to automate the fitting process. An added
advantage is that, because the process is automated, it can be done remotely and
allows a much more tailored service for the patient. Furthermore, a
device has been developed for use in remote rural regions. Building
upon an automated otitis media diagnosis system (10), AI has been incorporated into a smart-phone
device attachment called ‘HearScope’ to reliably diagnose common
pathologies of the ear. (11) In fact, the neural
network used has an accuracy of 86%, which compares favourably to GPs
and paediatricians using normal otoscopes. In developed countries, the
fact that it doesn’t hold a clear advantage over current practice is a
hindrance, but the potential for use in developing countries,
especially in communities with limited access to healthcare is immense.
The other major branch of ENT which has had advances in AI is head and
neck oncology. Machine learning in particular is able to help with clinical
research as it can process large amounts of data, especially genetic
data, and identify patterns. For example, it has already helped identify
a 40 gene profile for predicting nodal disease in HPV-related
oropharyngeal cancers. (12) The implication of this is
that it can identify high risk patients and then tailor treatment for
them personally before they are found to have nodal disease clinically.
AI involving imaging also holds huge promise. A machine learning
algorithm was able to visually identify oropharyngeal cancers accurately
using a combination of white light and narrow band imaging. Furthermore,
examining pre-operative CT images, a neural network was able to identify
nodal metastasis and extranodal extension. The aim of both of these was
early detection. A classification algorithm has been developed that is
able to distinguish between normal oral mucosa and malignant tissue on
hyperspectral imaging alone. (13) The consequence of this is that it
can be applied to tumour margins ‘on the table’, detecting when
malignant tissue has been left behind and thereby preventing
unnecessary re-operations.
Clearly the implications of AI within ENT are huge. However, few of
these advances have yet translated into real clinical benefit. The
majority have emerged only within the last few years, and many more
studies demonstrating benefit will be needed before AI can be
incorporated into today’s evidence-based clinical medicine. Ethical
considerations remain a challenge to the adoption of AI. (14) Respect
for autonomy is a key ethical principle in medicine. There is a severe
lack of understanding of how an algorithm derives its output from the
data provided, in which case doctors will be unable to accurately
disclose all the necessary information about diagnosis and treatment to
patients. Likewise, patient autonomy is at serious risk because patient
confidentiality may be compromised, with large multi-national
corporations responsible for these AI systems. Doctors, too, may no
longer be able to deviate from established algorithms, removing
decision subjectivity entirely. An AI system is entirely dependent on
the data it receives, so if poor-quality data is fed in, there is a
real risk of maleficence towards patients.
Also, patient-specific factors are potentially infinite, and there is no
way a system will be able to account for all of those. No two patients
may want the same treatment despite the machine determining that a
specific treatment is best for both of them. Doctors can advocate for
the treatments most beneficial to their patients, something a machine
will be unable to do. Finally, the cost and resources required to
implement AI are a significant hindrance to its adoption, and there is
no way to ensure that location and hospital trust do not play a role:
some hospitals and regions will have more access to money, hence better
AI services, and in theory will provide a better standard of treatment
to patients.
In summary, artificial intelligence has a huge role to play in surgery.
Within ENT particularly, the scope is immense: Applications can vary
from diagnostic aids, imaging and risk stratifying patients to real-time
operative applications. However, multiple barriers to the widespread
adoption of AI within ENT still remain. As technology advances and AI
becomes more common in society, there will inevitably be a push to
integrate it within surgery. It stands to reason, then, that ENT
surgeons should proactively support and participate in AI projects at
this current, highly formative stage, as this work will ultimately
influence future clinical practice.