Author(s): Omar Abu-Khalifa
Mentor(s): Doaa Bondok, Civil and Infrastructure Engineering
Author(s): Jahayra Guzman-Rivas
Mentor(s): Qi Wei, Bioengineering
Abstract
Audio Transcript
It is important to first understand strabismus for this research project. Strabismus is the misalignment of the eyes. This condition occurs in 0.5 to 5 percent of the global population. Strabismus can be caused by abnormalities in any of the six extraocular muscles and their pulley systems. Five of these muscles are the medial rectus, lateral rectus, inferior rectus, superior rectus, and superior oblique.
When strabismus is examined in patients, magnetic resonance imaging, or MRI, has been implemented in clinical settings, as it captures the neuro-biomechanical factors of eye movements.
However, there are limitations to the use of MRI. When clinicians and trained experts segment the extraocular muscles and other ocular structures manually, it can be very time-consuming and labor-intensive.
In recent years, a specific field of study in Artificial Intelligence, specifically deep learning, has been applied to the process of segmenting the muscles in the eyes. Deep learning is a method in AI that instructs computers how to process data using neural networks. However, these techniques must be improved.
My research involves using deep learning methods to locate extraocular muscles using pixel-based labeling. I will be using MATLAB to implement deep learning methods. I will also use data collected from 13 patients. This data was collected at UCLA and intended for research purposes only.
Before I started using the deep learning methods, I conducted extensive literature reviews to further understand the anatomy of the eye and the utilization of deep learning methods.
I also confirmed which data were available and noted them in a summary sheet.
After noting the available data, I started preparing them for the deep learning methods. I looked at the images for each slice of each muscle for each eye for each patient and renamed them according to their slice and muscle using ImageJ. I then compiled all the slices of all the muscles of each eye of each patient in one folder. This process took about one month as I had 13 patients and 1,662 images to look at.
Since the code I obtained to create the masks required the slices for each eye for each patient to have a different naming format, I had to create new code in MATLAB that organized them into the right format. This took me about a week to complete.
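The renaming step described here, giving every slice one uniform name per muscle, per eye, per patient, can be sketched as follows. The author did this in MATLAB; this Python version is illustrative only, and the filename pattern, naming format, and function name are my own assumptions, not the project's actual conventions.

```python
import re
from pathlib import Path

def standardize_name(filename, patient, eye):
    """Build a uniform name like 'P03_L_MR_slice07.tif' from a loosely
    named slice file such as 'MR_7.tif' (both formats are illustrative)."""
    match = re.match(r"(?P<muscle>[A-Za-z]+)_(?P<slice>\d+)", Path(filename).stem)
    if match is None:
        raise ValueError(f"Unrecognized filename: {filename}")
    muscle = match.group("muscle").upper()     # e.g. 'MR' for medial rectus
    slice_no = int(match.group("slice"))
    return f"P{patient:02d}_{eye}_{muscle}_slice{slice_no:02d}.tif"
```

Batch-renaming the 1,662 images then reduces to looping this function over each patient's folder.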
I then used these stacks of images to create masks of the five muscles and have them shown in various colors.
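One common way to show muscle masks "in various colors" is to map each integer label in the mask to an RGB color. This is a generic Python illustration (the project itself used MATLAB); the palette and label numbering are arbitrary assumptions.

```python
import numpy as np

# Illustrative label -> RGB palette for the five muscles (colors are arbitrary).
PALETTE = {
    1: (255, 0, 0),      # medial rectus
    2: (0, 255, 0),      # lateral rectus
    3: (0, 0, 255),      # inferior rectus
    4: (255, 255, 0),    # superior rectus
    5: (255, 0, 255),    # superior oblique
}

def colorize_mask(label_mask):
    """Turn a 2-D integer label mask (0 = background) into an RGB image."""
    rgb = np.zeros(label_mask.shape + (3,), dtype=np.uint8)
    for label, color in PALETTE.items():
        rgb[label_mask == label] = color
    return rgb
```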
As for the next steps, I must implement them into the deep learning model to train it with masks for each patient for each eye, validate the model, test the model, and adjust the model as needed.
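For the validation and testing steps mentioned here, segmentation models are commonly scored with the Dice overlap coefficient between the predicted mask and the ground-truth mask. The transcript does not say which metric the project will use, so this Python sketch is a generic illustration only.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between a predicted binary mask and a ground-truth mask.
    1.0 means perfect agreement; 0.0 means no overlap at all."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total
```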
While I made a lot of progress on my project, I could not complete it within this semester. However, I was able to gain a lot from this experience. For example, I was able to enhance my coding skills in MATLAB. Additionally, I gained a better understanding of deep learning algorithms and their implementation in segmentation. I also learned that preprocessing the data before applying the deep learning methods is critical to the model’s training process.
I want to express many thanks to Dr. Lee and the George Mason University Office of Student Creative Activities and Research as they helped fund this research through the Undergraduate Research Scholars Program. I also want to thank my mentor, Dr. Wei, for guiding me throughout this process. I want to acknowledge Amad Quereshi for guiding me and providing the code needed for my research.
Thank you!
Author(s): Alexia De Costa
Mentor(s): Kasey Thomas, University Life
Abstract
Audio Transcript
Author(s): Abdalla Abdalla, Allan Justine Rowley, Yasmine El Messary
Mentor(s): Kirin Emlet Furst, Civil Engineering
Abstract
A systematic review of studies from three databases (PubMed, Web of Science, and Scopus) identified 50 peer-reviewed articles meeting inclusion criteria. Findings revealed that precipitation causes hydraulic overloading, groundwater infiltration, and physical damage to soil treatment units. High temperatures accelerate microbial digestion, increasing nutrient and solid discharges. Drought exacerbates clogging and reduces soil filtration efficiency. Contaminants identified included nitrate, phosphorus, E. coli, total coliforms, and emerging pollutants like pharmaceuticals and PFAS, with significant research gaps in low- and middle-income regions.
This review underscores the need for climate-adaptive management practices, including integrating green infrastructure for runoff control, advanced treatment units to enhance nutrient removal, and policies promoting regular system maintenance. Addressing vulnerabilities in septic systems is critical to mitigating contamination risks, protecting groundwater resources, and supporting public health. Further research on emerging contaminants and regional differences is essential for sustainable wastewater management in a changing climate.
Audio Transcript
Author(s): Andrew J Ryan, Zeinab Elahy
Mentor(s): Qi Wei, Bioengineering
Abstract
Audio Transcript
So “extraocular muscles” is just a fancy term for eye muscles. The eye muscle we focused on for this research was this muscle, the medial rectus muscle. And it’s the muscle that is closest to the nose in both eyes. So this right here is the right eye. Here we have, this would be the nose, and this muscle is basically in charge of moving this eye toward the nose.
So it’s in charge of moving the right eye to the left and on the opposite side same thing.
It’s in charge of moving the left eye to the right. And any problems with this muscle could result in double vision or being cross-eyed. So how would a doctor, an eye doctor, examine that, or examine the eyes in general?
Well there are different methods to take images of the eye, there’s retinal imaging, which is probably the one most people have had taken before. That provides information on the optic nerves and your vision. In serious cases, the doctor might want to look at the bone structure or muscles in the surrounding areas of the eyes. And that’s when they would use X-ray or CT or MRI imaging. These are all very innovative, but the problem is they provide still images. What if we want to somehow image or compile a video of the eye while the eye is moving or look at it in real time?
That’s when we turn to ultrasound imaging. A clinician or a technician will run a small probe on your eyelid or under the eyelid. The patient might keep their eye closed or open depending on the procedure. And that will get you this image right here. Ocular ultrasound imaging is used a lot. It can be used to find foreign bodies, tumors, and blood pooling, and to evaluate any trauma in the eye, the optic nerve, and any lens detachments. However, there are no studies on evaluating how the muscle changes.
And we can only see that change when the eye is moving. So that’s the topic we wanted to research.
What is the relationship between eye movement and the medial rectus muscle’s echo intensity? And by echo intensity, I mean the ultrasound image pixel intensity of the muscle,
which can be used to roughly quantify the general shape of it.
For my research, we conducted ocular ultrasound on multiple participants,
multiple trials. In order to process those movements and segment that muscle intensity,
we used MATLAB App Designer to trace the MRM on each ultrasound session, tracing every 5 to 10 frames. And then we used MATLAB to convert those tracings into a figure and quantify the results.
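The quantification step, taking the mean pixel intensity inside each traced muscle outline, could be sketched like this. The project did this in MATLAB; this Python version is purely illustrative, and the function names are my own.

```python
import numpy as np

def point_in_polygon(x, y, poly):
    """Even-odd ray-casting test: is point (x, y) inside the polygon
    given as a list of (x, y) vertices?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def mean_echo_intensity(frame, trace):
    """Mean pixel intensity inside a traced outline of the muscle.
    frame: 2-D grayscale array; trace: list of (x, y) vertices."""
    vals = [frame[y, x]
            for y in range(frame.shape[0])
            for x in range(frame.shape[1])
            if point_in_polygon(x, y, trace)]
    return float(np.mean(vals))
```

Running this on every traced frame yields the intensity-versus-frame series that the figures below the transcript describe.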
All right, so first we have one example of an ultrasound image stack that we took. This is the right eye. And here’s the pupil area right here. So we asked the participant to move their eye left and right while looking at a target. I know it looks like it’s going up and down,
but that’s just because of the position of the probe. And down here is the medial rectus muscle. And as you can see, you can already see the muscle changing. The echo intensity and the shape is changing while the participant is moving their eye left and right.
And just a screen recording of how I did this: as you can see, I have the image stacks loaded here in the MATLAB app. There are 224 frames in this one. I can move through them, and I traced every 5 frames or so. And to demonstrate, right here is the medial rectus muscle, and so I traced right around that outline. I normally do this zoomed in, but for demonstration purposes, I zoomed out. And then I would go to the next frame. Again, normally I would do it every 5 to 10 frames, but I’ll just trace this one right here. And then after that, I would save the annotations of all the traced muscles
as a MATLAB cell-struct file.
All right, so after processing that, here is one of the ultrasound sessions, and this is the figure that we were able to generate. The y-axis is the muscle intensity while the x-axis is the frame. And again, we can map frames to time, or to the eye’s movement. And so these blue dots represent the tracings that I did
and this red line is the periodic regression that we fitted to it. And to kind of illustrate that, right here you can see the beginning frames, the beginning segments that I did. They’re kind of similar. So the muscle was hardly moving, which means the eye was probably not moving. And so if I press play right here, we can see that is true. Okay, right now for those first few seconds, the eye was not moving. So we can say up to frame 50 or 60 or so, the eye wasn’t moving, and so the muscle wasn’t moving. The muscle wasn’t changing. You can see the shaded areas that I traced were probably not too different from each other.
And so right now, the eye moves to the left. It moves to the leftmost position, and this is probably where that is. And so the muscle echo intensity, you can see here, is a lot smaller. It’s almost really thin. That means the echo intensity was going down, which again is shown here. That makes sense. So much so that right here, when the eye just changes direction,
so probably around frame 80, right here, the eye starts to change direction. And so the muscle is getting bigger as the eye is moving to the right. And that again makes sense: the eye moves to the right, and the muscle echo intensity just gets bigger and bigger, up until this peak right here, where again the eye changes direction and starts to move to the left. And we have that cycle again; it comes back now. And again, notice the muscle echo intensity is very big and then it starts to get small again. So we can conclude that every peak represents the eye changing direction, because the muscle changes shape.
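The transcript does not specify the exact form of the periodic regression, so as one plausible illustration (in Python; the project used MATLAB), here is an ordinary least-squares sinusoid fit with the gaze cycle frequency assumed known:

```python
import numpy as np

def fit_periodic(frames, intensities, freq):
    """Least-squares fit of intensity ~ a*sin(wt) + b*cos(wt) + c,
    where the cycle frequency (cycles per frame) is supplied, e.g.
    estimated from the known left-right gaze cadence. Returns (a, b, c)."""
    w = 2 * np.pi * freq * np.asarray(frames, dtype=float)
    design = np.column_stack([np.sin(w), np.cos(w), np.ones_like(w)])
    coeffs, *_ = np.linalg.lstsq(design, np.asarray(intensities, dtype=float),
                                 rcond=None)
    return coeffs

def amplitude(a, b):
    """Peak deviation of the fitted sinusoid from its mean level c."""
    return float(np.hypot(a, b))
```

Writing the sinusoid as a sine plus a cosine keeps the fit linear, so a single least-squares solve recovers both amplitude and phase without iterative optimization.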
And so we can say that there is a relationship between gaze and muscle echo intensity. These results suggest it. They don’t necessarily say it, but they suggest it.
Now, there were some limitations to these methods. First off, it was very time consuming.
Tracing every 10 frames took about 10 minutes. Tracing every 5 frames took longer, about 15 to 20 minutes. Now, that doesn’t seem long, but every participant conducted 6 trials. So compiling all the tracings for just one participant takes anywhere between an hour and an hour and a half. Another limitation was that some of these images of the eyes were a bit unclear, which sometimes made me not trace them at all. So if you can see in this image right here, there’s a giant gap between these two frames. I didn’t take any tracings here because it was just hard to see on the ultrasound. And so that created this giant decrease in the regression, which would not produce accurate results. Another thing was that there were some problems with the coding and the apps.
And so we spent a lot of time troubleshooting and debugging. And finally, the results were very promising. However, we would need more participants, more data in order to reach a better conclusion.
And so our next steps would be to gather more data from more participants. And then again, because tracing is time-consuming, we would want to look into possibly making the tracing process automatic using machine learning. And then we can analyze how the maximum muscle thickness changes as the eye moves as well. And for the long term, we can examine the same relationship with the other five extraocular muscles.
All right, so to conclude, again, we found a periodic relationship between the MRM and gaze. In general, ultrasound is a very useful imaging technology. This type of research is very novel and innovative. Research on the eye and eye movement, though fairly understudied, has a promising future with these types of developments. And special thanks to OSCARS for their URSP funding, our participants of the study, the Department of Bioengineering, especially Dr. Wei and the Biomechanics Lab. Thank you very much for watching.
Author(s): Muhammad Sardar
Mentor(s): Shaghayegh Bagheri, Department of Mechanical Engineering
Abstract
Audio Transcript
Author(s): Erica King, Lauren Distad, Noelle Saine
Mentor(s): Margaret Jones, Sport Management
Author(s): Tiffany Nguyen
Mentor(s): Jeffrey Moran, Mechanical Engineering
Abstract
Audio Transcript
Author(s): Reva Hirave
Mentor(s): Antonis Anastasopoulos, Computer Science
Abstract
The review highlights the strengths and limitations of each approach, emphasizing the need for interdisciplinary methods that combine textual analysis, network modeling, and interactional dynamics. By synthesizing these perspectives, this review identifies gaps and future directions for developing comprehensive and adaptable metrics to better understand polarization and diversity in online conversations.
Audio Transcript
It’s not a secret that social media plays a huge role in shaping public opinion. It’s a place where people from all over can come together, exchanging ideas and perspectives. But it’s also where we see some of the sharpest divides, often in the form of ideological polarization. Understanding this is crucial not just for social science research but also for real-world issues like content moderation and the way online communities evolve. So what’s the problem? Online conversations don’t always have clear boundaries: conversations can veer off on tangents, topics shift super quickly, and these contexts are always changing. This makes it really hard for researchers to analyze what’s actually going on in these digital spaces. So the working research question for this project is: how can we develop robust and adaptable metrics to measure ideological polarization and viewpoint diversity in online conversations across different social media
platforms? To tackle these issues, researchers currently use three main methods to study these online interactions: content-based, network-based, and interactional. These approaches each focus on a different aspect of online communication. Content-based methods look at the actual words people are using, network-based methods track how people interact with each other, and interactional methods focus on the back-and-forth dynamics, so how people actually respond to each other in conversation. Let’s go a little deeper into these. First, content-based methods. These analyze the actual content of conversations: it’s all about the words people are using and what those words mean. Sentiment analysis is one example, where text is classified as positive, negative, or neutral, and emotion analysis takes this one step further, tagging text with emotions like anger, joy, or fear. Another useful tool is topic modeling, which helps identify themes in conversations. This lets us see how different user groups are talking about the same topics. But there are still some challenges here: social media data is messy and unstructured, and it can vary a lot across platforms. Plus, the annotated datasets used for training these models can often be biased, which can affect the accuracy of results. Next, let’s look at network-based methods. These focus on how users interact with each other, for example, how often users reply to each other, like posts, or share content. There are two key types of networks we analyze here. First, we have interaction networks, which show how users are connected through their actions. You might notice patterns where like-minded people tend to engage with each other and form their own clusters, and this can contribute to the ideological echo chamber effect. Second, co-occurrence networks look at how different vocabularies emerge within groups. If we track
which words tend to appear together, we can see if different groups are using different forms of language. However, as you might guess, these methods come with challenges of their own, as analyzing vast amounts of data can be pretty difficult, and interpreting these large networks is a whole other mess. Finally, we have interactional methods. These dive into the dynamics of conversation itself: are these interactions constructive or confrontational? Are users building on each other’s ideas or just attacking each other? A major concern here is that marginalized viewpoints often get disproportionately negative feedback. This can create a lack of inclusivity and deepen polarization further. We also look at argumentation: are people making fact-based points or emotional appeals? Another interesting trend is emotional escalation: negative emotions like fear and anger tend to spread, especially in response to confrontational interactions. Still, these methods have limitations of their own because they’re very context-dependent. What works on one platform and one topic might not work on another, and they can also be super computationally expensive to analyze, just because there are a lot of interactions that happen on these
platforms. So, to summarize: understanding polarization and viewpoint diversity is really important for understanding the broader impact that online discourse has on society and democracy. By using content-based, network-based, and interactional methods, we can get a much clearer picture of how people are communicating online. But there’s still a lot of work to be done. We need to develop more robust metrics to track polarization and viewpoint diversity across different online environments, and that’s why my spring 2025 URSP project will focus on creating a comprehensive, adaptable framework for measuring polarization across a variety of online contexts. Hopefully this will help us understand these dynamics better and ultimately improve social media platforms for everyone. Thank you for watching.
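As a toy illustration of the interaction-network idea discussed in this transcript (not code from the project), one very simple echo-chamber measure is the fraction of reply edges that stay inside an ideological group; the data shapes and group labels below are assumptions of mine.

```python
def echo_chamber_ratio(replies, group_of):
    """Fraction of reply edges that stay within one ideological group.
    replies: list of (replier, target) user pairs;
    group_of: dict mapping user -> group label (labels are illustrative).
    A ratio near 1.0 suggests clustered, echo-chamber-like interaction."""
    if not replies:
        return 0.0
    same = sum(1 for src, dst in replies if group_of[src] == group_of[dst])
    return same / len(replies)
```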
Author(s): Mercy Wolverton
Mentor(s): Collin Hawley, Honors College
Abstract
Studies such as Heinrich and Gerhart’s found that “while students express concern for their privacy when using mobile devices and express an intent to use additional privacy-enhancing technology, their behavior using mobile device protections does not change, even after an educational intervention” (1). Aivazpour’s study found that “both the Big Five variables and the impulsivity variables are significant predictors of information disclosure independent of each other” (1). Additionally, research pursues social theories such as the “lemming effect”, investigating how social and peer pressures shaped individuals’ conformity to data-protection norms (Synman et al. 1).
However, Synman’s research has only taken place in Australia (1). Furthermore, the “lemming effect” is not the exclusive social theory that could explain the influence of a group on an individual, and future research should also be done regarding other social theories (Synman et al. 1). My research is dedicated to unraveling the complexities of data privacy behaviors within the broader societal context. Employing a quantitative approach, I utilized a modified Likert scale-type questionnaire, drawing from the Human Aspects of Information Security Questionnaire (HAIS-Q), to gather insights from random GMU students. Through surveys and data analysis, I seek to uncover patterns and predictors of data privacy behaviors, with a particular focus on understanding the enduring ‘Privacy Paradox.’ Preliminary findings suggest a significant disparity between users’ stated concerns about privacy and their actual behaviors, highlighting the need for further exploration into the factors driving this phenomenon.
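The disparity at the heart of the privacy paradox, stated concern outrunning actual protective behavior, could be quantified from Likert responses as a simple mean attitude-behavior gap. This Python sketch is illustrative only; the field names and 1-to-5 scale are my assumptions, not the actual HAIS-Q items.

```python
import statistics

def attitude_behavior_gap(responses):
    """Mean difference between stated privacy concern and reported
    protective behavior, each on a 1-5 Likert scale. A large positive
    gap is consistent with the 'privacy paradox'. Keys are illustrative."""
    gaps = [r["concern"] - r["behavior"] for r in responses]
    return statistics.mean(gaps)
```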
Audio Transcript