AI is a tool that will not replace sonographers but will help them be more efficient.
Artificial intelligence (AI) is emerging as a key component in diagnostic medical imaging, including echocardiography. AI with deep learning has already been used with automated view labeling, measurements, and interpretation. As the development and use of AI in echocardiography increase, potential concerns may be raised by cardiac sonographers and the profession. This report, from a sonographer's perspective, focuses on defining AI, the basics of the technology, identifying some current applications of AI, and how the use of AI may improve patient care in the future.
Visit www.ASELearningHub.org to earn free continuing medical education credit through an online activity related to this article. Certificates are available for immediate access upon successful completion of the activity. Nonmembers will need to join the ASE to access this great member benefit!
Artificial intelligence (AI) is evolving into a major focus in medicine that can be applicable to echocardiography in addressing problems of inconsistency and inter- and intraobserver variability during image acquisition and interpretation.
Compared with other modalities, such as computed tomography, nuclear imaging, and magnetic resonance imaging, echocardiography is often affected by interobserver variability and a strong dependence on experience level. Cardiovascular imaging, and echocardiography in particular, is increasing in demand and complexity. It is imperative to find ways to decrease variability among echocardiographers, improve efficiency, and decrease acquisition time. This is where AI could be extremely advantageous for the benefit of patients, sonographers, and cardiologists. The latest American Society of Echocardiography guideline on performing a comprehensive transthoracic examination recommends obtaining >100 images.
It would be beneficial to have AI assist with adhering to these American Society of Echocardiography guidelines for standardizing views and measurements in echocardiography.
AI: Origins and Definitions
One of the earliest articles on machine learning was published in 1959 by Arthur Samuel, who coined the phrase “machine learning” when he published an article titled “Some Studies in Machine Learning Using the Game of Checkers.”
Starting in the early 1980s, advances in computer technology and the creation of advanced neural networks led to a rapid evolution in AI. Biological neural networks are circuits of neurons; the human brain has approximately 100 billion neurons. Artificial neural networks are computing systems that mimic the human brain by recognizing relationships in vast amounts of data.
“AI” is a broad term that covers any computer program (algorithms and models) that mimics human logic and intelligence. Many terms are used to describe different subfields and techniques within AI. Machine learning and deep learning are two subfields that serve as the basis of most AI functions.
Machine learning, a subset of AI, uses statistical techniques to learn from data and experience in order to make predictions about new data. A machine learning program stores, learns from, and analyzes input data, predicts an outcome, and improves its own knowledge of how to respond the next time it is presented with similar data. Machine learning has many applications today, including speech and natural language processing, image and video processing and recognition, autonomous vehicles, and game-playing computers.
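This learn-then-predict loop can be illustrated with a deliberately tiny sketch (not from this report; the data, labels, and function names are invented for illustration). A one-nearest-neighbor classifier “learns” simply by storing labeled examples, then predicts the label of new data by finding the most similar stored example:

```python
# Minimal 1-nearest-neighbor classifier: "learning" is storing labeled
# examples; prediction finds the closest stored example to the new input.
# All measurements and labels below are invented for illustration only.

def distance(a, b):
    # Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(training_data, new_point):
    # Return the label of the stored example nearest to new_point
    nearest = min(training_data, key=lambda ex: distance(ex[0], new_point))
    return nearest[1]

# Hypothetical features: (wall thickness in mm, chamber diameter in mm)
training_data = [
    ((9, 48), "normal"),
    ((10, 50), "normal"),
    ((18, 40), "hypertrophied"),
    ((20, 38), "hypertrophied"),
]

print(predict(training_data, (19, 39)))  # prints "hypertrophied"
```

Real systems replace this toy distance rule with far richer models, but the principle is the same: performance on new data comes entirely from the examples the program was given.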
An easy-to-understand example in echocardiography is a machine's ability to identify structures within an image and accurately label them, such as identifying an image as a parasternal long-axis view rather than an apical long-axis view.
Machine learning can be developed by either supervised or unsupervised learning. The simple difference between the two is that unsupervised learning uses unlabeled data, whereas supervised learning uses labeled or known data. In unsupervised learning, the machine takes unlabeled data sets and detects previously unknown patterns. Supervised learning, the most common type of machine learning used today, is the process of giving algorithms known or labeled inputs along with the desired outputs. This method enables a machine to classify or predict objects, problems, or situations on the basis of the labeled data fed into it. The goal of supervised learning is for the machine to receive new (untaught) input variables and correctly predict the output variables.
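The unsupervised side of this distinction can also be sketched with a toy example (again invented, not from this report): the algorithm receives measurements with no labels at all and must discover groupings on its own. A minimal two-cluster k-means does exactly that:

```python
# Minimal k-means with k = 2: no labels are given; the algorithm discovers
# two groupings in the data by itself. Toy 1-D values for illustration.

def kmeans_2(values, iterations=10):
    # Initialize the two cluster centers at the data extremes
    c1, c2 = min(values), max(values)
    for _ in range(iterations):
        # Assign each value to its nearest center
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        # Move each center to the mean of its assigned values
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

# Hypothetical wall-thickness measurements (mm), given with no labels
values = [9, 10, 11, 18, 19, 21]
print(kmeans_2(values))  # prints ([9, 10, 11], [18, 19, 21])
```

The algorithm is never told which values are “normal” or “abnormal”; it simply reports that the data fall into two groups, which a human must then interpret.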
An example is a program that can analyze two apical echocardiographic views and produce an accurate value of ejection fraction (EF) without drawing borders and assessing ventricular volumes. Because the program was “trained” on thousands of images with known EFs, it is able to produce an EF value that is as accurate as that of an expert cardiologist with >20 years of experience.
Deep learning is a subset of machine learning used when a very large amount of data must be processed. Deep learning networks are based on multilayered configurations called artificial neural networks (Figure 2). These networks, often hundreds of layers deep, can train themselves from large data sets and make accurate predictions on newly input unknown data. They also have the ability to learn from their experiences but require that the initial “training” data set be accurate and without bias. Training without bias is critically important and means that the initial data are free of outside influences that might cloud the information, such as demographic or additional medical information that may suggest a certain diagnosis. Without bias also means that the “training” data set is large enough to contain a wide variety of patients; for echocardiography, such a data set would include studies from both sexes along with a range of body mass indexes, ages, and image qualities. This is imperative for the machine to learn patterns on the basis of the images alone. The components of an artificial neural network are loosely modeled on biological neurons and are arranged in multiple layers in which the data are processed consecutively until the final output is achieved.
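The layer-by-layer processing described above can be seen in a minimal forward pass through a tiny two-layer network (a hypothetical sketch; the weight and bias values are invented, where a real network would learn them from training data):

```python
# Forward pass of a tiny two-layer artificial neural network: each layer
# transforms its input and hands the result to the next layer until the
# final output. Weights and biases here are fixed, invented numbers.

import math

def sigmoid(x):
    # Squashes any value into the range (0, 1)
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each artificial neuron: weighted sum of inputs plus bias, squashed
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Two inputs -> hidden layer of three neurons -> one output neuron
hidden = layer([0.5, -1.0],
               [[0.8, 0.2], [-0.4, 0.9], [0.3, 0.3]],
               [0.1, 0.0, -0.2])
output = layer(hidden, [[1.0, -1.0, 0.5]], [0.0])
print(round(output[0], 3))  # a single value between 0 and 1
```

A deep network simply stacks many such layers, and training consists of adjusting the weights so the final output matches the known answers in the labeled data set.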
We can train a deep learning algorithm on a set of labeled echocardiographic views from patients with hypertrophic cardiomyopathy and use that algorithm to predict hypertrophic cardiomyopathy from new images.
This is an example of AI, because the system recognizes a pattern an experienced sonographer might recognize: seeing a parasternal long-axis view of the heart and being suspicious that the patient might have hypertrophic cardiomyopathy. Table 1 lists the main differences between machine learning and deep learning.
Table 1. Machine learning versus deep learning

| Machine learning | Deep learning |
| --- | --- |
| Can perform with smaller amounts of data | Benefits from large amounts of data |
| Can work on low-end machines | Requires a high-end machine to train but can be run on low-end machines (e.g., smart phones) |
| Depends on specific functions to reach a conclusion | Can learn very complex functions to reach a conclusion |
| Important features must be identified by an expert | Neural network determines the most important features; they do not need to be identified by an expert |
| Short time to train | Long time to train |
| Uses statistical methods to improve with experience | Mimics the functionality of human brain neural networks |
Because of their ability to accurately perform pattern recognition, convolutional neural networks (CNNs) are a class of artificial neural networks often applied to analyze images. In a CNN, the multiple (deep) hidden layers (Figure 2), called convolutional layers, perform pattern analysis using a convolution operation. When an image passes through a layer of a CNN, it is divided into hundreds of small regions, and each region is analyzed by that layer of the CNN. Filters can detect characteristics of the image, like an edge or a corner. The image is scanned by each of the layers and passed on to the next layer.
Initial layers can detect simple geometric boundaries or edges; deeper layers will detect a more complex pattern, such as ventricular chambers. The final output is a summation of the individual analyses performed by each layer of the CNN.
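The convolution operation at the heart of a CNN can be sketched directly (a hypothetical toy, not vendor code; the 5 × 5 “image” and filter values are invented): a small filter slides across the image, multiplying and summing the overlapping values, and a vertical-edge filter responds strongly only where brightness changes sharply:

```python
# 2-D convolution of a tiny image with a 3x3 filter. The filter slides
# across the image; at each position the overlapping values are
# multiplied element-wise and summed into one output value.

def convolve(image, kernel):
    k = len(kernel)
    out_size = len(image) - k + 1
    return [[sum(image[r + i][c + j] * kernel[i][j]
                 for i in range(k) for j in range(k))
             for c in range(out_size)]
            for r in range(out_size)]

# A 5x5 "image": dark (0) on the left, bright (1) on the right
image = [[0, 0, 1, 1, 1]] * 5

# A simple vertical-edge filter: responds where brightness changes from
# left to right and is zero over uniform regions
kernel = [[-1, 0, 1]] * 3

result = convolve(image, kernel)
print(result[0])  # prints [3, 3, 0]: strong response at the boundary
```

In a trained CNN, the filter values are not hand-chosen as here; the network learns its own filters, with early layers arriving at edge-like detectors and deeper layers combining them into detectors for complex structures.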
Current State of AI and Echocardiography
AI and Image Acquisition
There are ways in which AI is already making cardiac imaging easier, faster, and more accurate. Examples that have already been validated include automated measurement features such as left ventricular EF, chamber dimensions, wall thickness, and Doppler measurements.
Figures 3 and 4 are examples of current software available on echocardiographic machines that can perform some of these functions. Automated measurement packages and image optimization can save sonographers time during the study and help standardize reproducibility of the echocardiographic results for patient data reporting.
In a study of 99 patients, automated left ventricular EF by machine learning was compared with the reference standard of conventional volume-derived EF measured by clinical readers; high consistency (r = 0.95) and excellent agreement (r = 0.94) were found.
Products have been developed to help guide a novice sonographer toward a technically correct image.
This software uses an algorithm to guide the user to capture a diagnostic image using real-time positional directions. Deep learning software guides the user by recognizing incorrect or off-axis views and providing guidance on how to move the probe in order to obtain diagnostic images. Once the images are deemed diagnostic by the software, they are acquired automatically.
Automated acquisition might help improve the ergonomic burden for sonographers, decreasing the frequency of awkward musculoskeletal movements and examination length. The current technology used in commercial echocardiography is only beginning to scratch the surface of what AI can bring to the field. These products are extremely helpful to work flow, but the real benefit of machine learning will be in the interpretation of echocardiograms.
AI and Image Interpretation
Echocardiography is skill dependent not only in the acquisition of images; the interpretation of echocardiographic images also relies heavily on the highly evolved process of pattern recognition by the human brain.
Pattern recognition develops with experience, yet interpretation of echocardiograms today can remain highly subjective. AI holds promise in echocardiographic interpretation, as it has the potential to extract information not readily apparent to the observer.
AI also has the potential to overcome the human limitations of fatigue or distraction, inter- and intraobserver variability, and the tedious, time-consuming interpretation of large data sets.
A rapidly growing area of AI is valvular modeling and segmentation. Computers are being used to help with the precise sizing and modeling of minimally invasive structural heart interventional devices, with an emphasis on real-time guidance.
Echocardiographic machines equipped with automated protocols can help students or newly hired employees learn complex protocols using algorithms designed to assist with correct image acquisition and measurements. Modern-day echocardiography simulators offer a wide variety of cases with cardiovascular disease to develop skill in recognizing pathology as well as the ability to accurately measure images. Software on these simulators and on ultrasound machines can train novice users and sonographer students.
The systems provide real-time feedback to imagers that they are obtaining diagnostic-quality images and instructions to correct acquisition if applicable. Medical schools have started to include point-of-care ultrasound into their curricula, even for first-year students, in order to better teach anatomy.
Using AI along with handheld point-of-care ultrasound systems could speed up the learning curve for these medical students. In fact, wherever cardiac ultrasound is used, whether it is in emergency situations such as on the battlefield or screening for rheumatic heart disease in developing countries, pairing AI to assist in obtaining images or diagnosis would be of great benefit.
At no other time has this been more apparent than in the current coronavirus disease 2019 pandemic. In an effort to reduce provider exposure, as well as limit the use of critical supplies, hospitals all over the country are trying to find ways to obtain real-time cardiac imaging by novice users on the front lines. Point-of-care ultrasound with AI is developing as a real-time solution to this complicated problem.
The Future of AI in Echocardiography
With the increasing use of three-dimensional echocardiography, carefully developed tools with supervised learning algorithms have the potential to determine whether structures can be seen and then guide acquisition for a more diagnostic clip. This would help produce better quality three-dimensional images and improved diagnostic value. Besides optimizing echocardiographic images, AI could identify in real time those patients needing additional views or the need for an image enhancement agent.
The future of AI in echocardiography may involve using cluster analysis, which combines both clinical and imaging data, to better characterize disease and predict outcomes. This combination of data may lead to the creation of therapies that are personalized for each individual patient, on the basis of a machine-created prediction of risk.
AI in echocardiographic interpretation offers the likelihood of increasing not only reading accuracy but also timeliness. A physician reading an echocardiogram that was obtained to assess mitral regurgitation before an intervention could ask a program to retrieve all views related to mitral regurgitation. This would save the physician time in sifting through a study with possibly hundreds of images by allowing the physician to quickly visualize all relevant information. This would be especially helpful in collaboration between echocardiographers and surgeons, interventionalists, and/or anesthesiologists.
Another potential application of AI could include comparing a real-time image with an image from a previous echocardiographic examination, allowing the interpreting physician to compare “apples to apples.” This would greatly facilitate laboratory accreditation requirements of comparing the current echocardiogram with previous studies.
Limitations and Challenges
There are many challenges that need to be considered before relying on AI for interpretation. Even in the setting of a perfect algorithm, if the data being input are of poor quality or are biased, the interpretation will also be of poor quality.
Additionally, emphasis must be placed on creating uniform standards that are consistent across vendors, thus allowing integration between the different algorithms and allowing the algorithms to run on different equipment. It would be helpful to have a set of standards for AI data management, similar to the picture archiving and communication system and the Digital Imaging and Communications in Medicine formats. These standards for AI management would include standardized nomenclature used to create a uniform system for data storage and retrieval.
Concerns of Sonographers
Cardiac sonographers take pride in obtaining high-quality diagnostic echocardiographic studies, and they recognize the difficulties and technical expertise required. These skills and techniques are learned over many years and vary greatly from patient to patient. It is understandable that sonographers may have some concerns when reading about the implementation of AI techniques in echocardiography.
They may wonder if echocardiographic examinations could become so automated that sonographers become obsolete. They may question if all health care providers could have transducers that connect to their cell phones and, after obtaining a few images, reach a diagnosis. They may be concerned that they could be replaced by remote scanning systems using robotics, similar to how some surgical procedures are now performed.
Although it is not clear what the future holds, it seems unlikely that AI will lead to the replacement of sonographers but rather will help them become more efficient while using their knowledge and skill sets to focus on more complex patients. In the world of gaming, although a computer is able to beat a human chess master, a chess master working together with a computer can beat the computer alone. AI is a tool that, once integrated into the daily work flow of the clinical echocardiography laboratory, will support our clinical work and improve laboratory work flow, laboratory quality, and echocardiographic interpretation, all resulting in better patient care.
AI has made its way into several aspects of health care, and echocardiography is no exception. There are many areas in echocardiography that have already witnessed AI's impact on imaging, measurements, and diagnosis. Although there are concerns among sonographers and echocardiographers, the likelihood remains that AI will improve the jobs of sonographers and decrease the variability currently present in echocardiography. It is important to remain steadfast in the use of critical thinking skills and understand that only in the combination of properly developed AI and experienced human eyes will this marriage of technology and humanity be successful.