OSCAR Celebration of Student Scholarship and Impact
Category: College of Engineering and Computing

Patient Specific Three-dimensional Biomechanical Eye Models from Magnetic Resonance Imaging

Author(s): SuJung Rodriguez

Mentor(s): Qi Wei, Department of Bioengineering

Abstract
The purpose of this research project was to create a three-dimensional, patient-specific biomechanical model of the eye from magnetic resonance imaging. Previously, a pipeline was developed which took two MRI datasets, data collected in the coronal and sagittal views, as inputs and created a model of the eyeball and extraocular muscles (EOMs) as the output. One limitation of this pipeline is that oftentimes, data in the sagittal view will be unavailable. This is because data collection is very lengthy and uncomfortable for patients. Therefore, this pipeline needs to generate a model when only coronal data is available, without sacrificing accuracy. To accomplish this task, we artificially generated sagittal data using data in the coronal view and fed it into our pipeline. We discovered that four of the EOMs, the rectus muscles, did not require sagittal data to be generated. The pipeline was modified to not expect sagittal data when modeling the rectus muscles. We then modeled the inferior oblique and found that it was only visible in a few slices of the artificially generated sagittal data. As a result, we could not see the origin point and had to hardcode it in based on what is documented in the literature. When trying to model the last muscle, the superior oblique (SO), we found that it was not visible in our artificially generated sagittal data. We will have to hardcode in where the SO inserts on the eyeball and interpolate a line that follows the path we expect the SO to take. Once all muscles have been modeled, we will use this model to simulate surgical outcomes and provide ophthalmologists with a quantitative way to assess them.
Audio Transcript
Hi! My name is SuJung Rodriguez, and I will be presenting my work on 3D biomechanical eye modeling.

So the objective was to create a patient-specific 3D biomechanical model of the eye and the corresponding extraocular muscles.

Some background on why we would want to model those muscles. Those muscles are responsible for eye movement, as you can see. Here you have 4 rectus muscles, which are responsible for moving your eyes up, down, left, and right. Then you have your oblique muscles, which are responsible for your eye being able to twist. So any deviations in those muscles from healthy physiology can cause vision disorders, a common one being strabismus. That’s a case where your eyes cannot look at the same target at the same time. So modeling those muscles can provide valuable insight when planning treatment. And currently, when ophthalmologists are trying to treat vision disorders, they have no quantitative measures. So they go in and decide which muscles to recess, and how much to recess them by, based on their own intuition and experience. So being able to actually model those muscles would give them a quantitative measure and improve surgical outcomes.

Some previous work that was done.

So Bassam Mutawak developed a pipeline which uses 2 MRI data sets to generate a patient-specific model of the eyeball and extraocular muscles. Basically, those 2 MRI data sets are collected in 2 different views. So you have what’s called the coronal view and the sagittal view. In each data set, those extraocular muscles are traced. The traces are sent into the pipeline, and an iterative closest point registration algorithm is used to register points from the 2 data sets. Basically what it does is it takes your points in the coronal view, then it looks for the points in the sagittal view that are closest to those points, and then it computes the amount of translation and rotation necessary to align those points, and with them the 2 data sets. The traces of the muscles are then plotted, their centroids are computed, and from there the muscle paths are generated.
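The registration step described above can be sketched as follows. This is a minimal, hypothetical point-to-point ICP in Python; the pipeline's actual implementation, language, and convergence criteria may differ. Each iteration matches every coronal point to its nearest sagittal point, solves for the best rigid rotation and translation with the Kabsch (SVD) method, and applies it.

```python
import numpy as np

def icp_align(src, dst, iters=20):
    """Rigidly align src (N,3) onto dst (M,3) with point-to-point ICP.

    Illustrative sketch only: src stands in for coronal points,
    dst for sagittal points.
    """
    src = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # 1. Match each source point to its nearest neighbor in dst.
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # 2. Solve for the rigid transform aligning the matched pairs
        #    (Kabsch method via SVD of the cross-covariance matrix).
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_d - R @ mu_s
        # 3. Apply the transform and accumulate it.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

With correct correspondences, a single Kabsch solve recovers the exact rigid transform; the iteration exists because the nearest-neighbor matches are only approximate at first.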

So here’s what those datasets look like. This is what the data in the coronal view looks like. As you can see, these dark bands are the extraocular muscles. Here’s what data in the sagittal view looks like. Again, those dark bands you see are the muscles. And here’s the model that gets generated.

So my task was to make this pipeline work when it only has one data set, the coronal view, available. The reason why is that data collection is actually very long and uncomfortable for patients. So most of the time we are only able to get one view, and we focus on the coronal view. The only issue is that the inferior oblique and superior oblique, their tendons are only reliably imaged in the sagittal view.

So we focus on one eye first, the oculus dexter, or OD. Because MRI data is actually a 3D volume, we can take our coronal data and re-slice it, essentially, so that we can artificially generate our own sagittal data and try to get the inferior oblique and the tendon of the superior oblique. We then model the rectus muscles and the oblique muscles, and while we’re doing that, we’re checking against our ground truth measure, which is the model that was generated when we had data from both the coronal and sagittal views available.
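The re-slicing idea can be illustrated with a small sketch. It assumes the coronal series is stored as a 3D array indexed (slice, row, column); that axis convention, and the function name, are assumptions for illustration, not the pipeline's actual layout.

```python
import numpy as np

def coronal_to_sagittal(volume):
    """Re-slice a coronal stack into sagittal views.

    volume: (n_coronal_slices, height, width) array.
    Returns: (n_sagittal_slices, height, depth) array, where each
    sagittal slice is one left-right column taken through every
    coronal slice -- i.e., the width axis is moved to the front.
    """
    return np.transpose(volume, (2, 1, 0))
```

One practical caveat: coronal slice spacing is typically much coarser than in-plane resolution, so the generated sagittal images are low-resolution along one axis, which is consistent with thin structures like the oblique tendons being hard to see in them.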

So to model the rectus muscles: the rectus muscles actually do not require sagittal data to be modeled. As you can see here, the green traces are the data from the coronal view, and the black traces you see here are from the sagittal view. So you just modify the code as needed, basically making sure that the pipeline isn’t expecting 2 data sets when it’s only going to be getting one.

Next is to model the inferior oblique. So here we trace it in the coronal view. It’s this muscle here. You see here the point where it’s connecting to the socket; that’s actually the origin point, which will be important for later. And here we also trace it in the sagittal view. As you can see, it’s this little black dot here. So we re-register the points from the generated sagittal data that we’ve created to the points in the coronal data. We hardcode in the origin point based on values found in the literature. The reason we do this is because, as you can see in our artificially generated sagittal data, the top of the eyeball gets cut off, and the inferior oblique is pretty far back. So it’s only visible in anterior slices, and it’s barely visible in our artificially generated sagittal data; it’s only there in 2 or 3 image slices. So, after we hardcode in our origin point, we plot the coronal traces of the inferior oblique muscle and compare them to the origin point that we hardcoded in. If the origin point that we hardcoded in is still very far from the point where we see it connecting to the socket, then we adjust it again.
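The adjust-if-too-far check described here can be sketched as a simple distance test. The helper name and the tolerance are illustrative assumptions, not values from the pipeline.

```python
import numpy as np

def origin_needs_adjustment(hardcoded_origin, coronal_trace_points, tol_mm=2.0):
    """True if the literature-based origin sits far from every traced point.

    hardcoded_origin: (3,) point taken from the literature.
    coronal_trace_points: (N,3) traced inferior oblique points, in the
    same coordinate frame and units (here assumed millimeters).
    """
    gaps = np.linalg.norm(coronal_trace_points - hardcoded_origin, axis=1)
    return gaps.min() > tol_mm
```

In the workflow above this check would run after each re-plot: if it returns True, the origin is nudged toward the observed attachment and the model is regenerated.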

So, as you can see here, in order to get our hard-coded origin point from the literature, we take one model that was created by Dr. Wei and Bassam Mutawak. That one was more generalized; it wasn’t data-driven or patient-specific. It was based on what we expect to find in the literature. And then we take the model that I’ve generated, as you can see here, which is data-driven, and line them up. So as you can see here, these big green asterisks are from Dr. Wei’s model, and these actual muscle paths are from mine. We line them up, and as you can see here, because there just was not a lot of data to build it off of, the inferior oblique isn’t going in the direction we need it to. It’s starting to point out. And we take this last point that you see here and hardcode that in as our origin point. We then rerun the model, plot the data on the coronal views, and see if it’s close to that origin point where we see it joining the socket, and make an adjustment. And this is the model that we finally generate. So, as you can see here, it matches up with what we see in our ground truth measure, the shape also looks as we expected it to, and when we check the muscle lengths, they line up with what we see in the literature.

Next step is to model the superior oblique. Because the superior oblique tendon is only visible in very anterior MRI images, it’s very difficult to model. When we artificially generate our sagittal data, it’s not visible in any slices. So a possible solution is to hardcode in where it inserts on the eyeball and fit a line that wraps around the eyeball, so that we’re still seeing the shape that we expect to.
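One way to realize "fit a line that wraps around the eyeball" is spherical linear interpolation between two anchor points projected onto the globe, treating the eyeball as a sphere. This is a hypothetical sketch of that idea, not the pipeline's actual method; the anchor points, sphere center, and radius would come from the model.

```python
import numpy as np

def arc_on_sphere(p0, p1, center, radius, n=21):
    """Points along the great-circle arc from p0 to p1 on a sphere.

    p0, p1: (3,) endpoint positions (e.g., insertion and a second
    anchor); they are projected onto the sphere of the given center
    and radius. Assumes the endpoints are distinct and not antipodal.
    """
    u = (p0 - center) / np.linalg.norm(p0 - center)
    v = (p1 - center) / np.linalg.norm(p1 - center)
    omega = np.arccos(np.clip(u @ v, -1.0, 1.0))  # angle between endpoints
    ts = np.linspace(0.0, 1.0, n)
    # Spherical linear interpolation keeps every sample on the sphere,
    # so the fitted path hugs the eyeball surface.
    pts = [(np.sin((1 - t) * omega) * u + np.sin(t * omega) * v)
           / np.sin(omega) for t in ts]
    return center + radius * np.array(pts)
```

Unlike straight-line interpolation between the two anchors, every sample stays on the globe, which preserves the wrapped shape the transcript describes.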

So our future directions are to wrap up the superior oblique modeling. Then the model is done, so we’ll send it to a dynamic simulation environment, and what we’re going to do is simulate how these patients were seeing before they had any surgery. And then, because we have the surgeons’ notes, we have the notes of which muscles they recessed, how much they recessed them by, and how their vision was after, we can, in that dynamic simulator, basically mimic what those surgeons did, and then see if the way the eye is moving is the same as what the surgeon documented. If that’s not the case and the model is not working, we would make adjustments as needed. But the end goal ultimately is to be able to send these models to a dynamic simulation environment and simulate surgical outcomes.

And for acknowledgments, thank you to my mentor, Dr. Wei, of the Department of Bioengineering; Dr. Demer for providing us with MRI images; Bassam Mutawak, who created the original pipeline and has helped throughout; and OSCAR at GMU for supporting this project.
