Faster super-resolution ultrasound imaging with a deep learning model for tissue decluttering and contrast agent localization
Published in: Biomedical Physics & Engineering Express, 2021-10, Vol. 7 (6), p. 65035
Main authors:
Format: Article
Language: English
Online access: Full text
Abstract: Super-resolution ultrasound (SR-US) imaging allows visualization of microvascular structures as small as tens of micrometers in diameter. However, use in the clinical setting has been impeded in part by ultrasound (US) acquisition times exceeding a breath-hold and by the need for extensive offline computation. Deep learning techniques have been shown to be effective in modeling the two most computationally intensive steps: microbubble (MB) contrast agent detection and localization. Performance gains by deep networks over conventional methods exceed two orders of magnitude, and the networks can additionally localize overlapping MBs. The ability to separate overlapping MBs allows use of higher contrast agent concentrations and reduces US image acquisition time. Herein we propose a fully convolutional neural network (CNN) architecture that performs MB detection and localization in a single model. Termed SRUSnet, the network is based on the MobileNetV3 architecture, modified for 3-D input data, minimal convergence time, and high-resolution output via a flexible regression head. We also propose combining linear B-mode US imaging with nonlinear contrast pulse sequencing (CPS), which has been shown to increase MB detection and further reduce US image acquisition time. The network was trained with in silico data and tested on in vitro data from a tissue-mimicking flow phantom and on in vivo data from the rat hind limb (n = 3). Images were collected with a programmable US system (Vantage 256, Verasonics Inc., Kirkland, WA) using an L11-4v linear array transducer. The network exceeded 99.9% detection accuracy on in silico data. The average localization error was smaller than one pixel (i.e., λ/8). The average processing time on an NVIDIA GeForce 2080 Ti GPU was 64.5 ms for a 128 × 128-pixel image.
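For context on the localization step the abstract describes: conventional SR-US pipelines refine each detected microbubble peak to sub-pixel precision, typically with an intensity-weighted centroid or Gaussian fit, and this is the classical operation that a learned regression head replaces. The sketch below illustrates the weighted-centroid idea only; the function name, window size, and toy image are illustrative assumptions, not from the paper, and this is not the SRUSnet model itself.

```python
import math

# Hedged sketch: classical sub-pixel microbubble localization by
# intensity-weighted centroid over a small window around an integer
# peak. This is the conventional step that learned regression heads
# (as in networks like SRUSnet) aim to replace at much higher speed.

def weighted_centroid(image, row, col, radius=2):
    """Refine an integer-pixel peak (row, col) to sub-pixel precision
    using the intensity-weighted centroid of a (2*radius+1)^2 window."""
    total = r_acc = c_acc = 0.0
    for r in range(row - radius, row + radius + 1):
        for c in range(col - radius, col + radius + 1):
            if 0 <= r < len(image) and 0 <= c < len(image[0]):
                w = image[r][c]
                total += w
                r_acc += w * r
                c_acc += w * c
    if total == 0.0:
        return float(row), float(col)  # no signal: keep integer peak
    return r_acc / total, c_acc / total

# Toy 7x7 image: a Gaussian-blurred point scatterer centered near
# (row, col) = (3.0, 3.4), i.e. between pixel columns 3 and 4.
img = [[math.exp(-((r - 3.0) ** 2 + (c - 3.4) ** 2) / 2.0)
        for c in range(7)] for r in range(7)]

print(weighted_centroid(img, 3, 3))  # sub-pixel estimate near (3.0, 3.4)
```

The centroid recovers the off-grid column position to within a fraction of a pixel, which is the sense in which localization error can fall below the pixel size (λ/8 here); window truncation introduces a small bias that Gaussian fitting or a learned regressor can reduce further.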
ISSN: 2057-1976
DOI: 10.1088/2057-1976/ac2f71