OSCAR Celebration of Student Scholarship and Impact
Categories
College of Engineering and Computing OSCAR

Monitoring Water And Air Quality at Mason

Author(s): Chayanan Maunhan

Mentor(s): Viviana Maggioni, Department of Civil, Environmental, and Infrastructure Engineering

Abstract
Environmental quality plays a key role in both human health and campus sustainability. This
research project investigates air and water quality across George Mason University’s Fairfax and
Arlington campuses to better understand how campus operations, weather, and traffic contribute
to local pollution in suburban and urban settings.
The primary goal is to observe patterns and collect baseline environmental data that can support
long-term comparison efforts. While this is a short-term project, the findings will help identify
how differences in campus layout and activity, such as stream restoration at Fairfax versus dense
traffic at Arlington, affect air and water conditions throughout the year.
By sharing results publicly, this research will not only contribute to ongoing sustainability
planning at Mason but also provide students with hands-on experience and encourage data-driven
decisions for future campus and community environmental strategies.
Audio Transcript
Have you ever wondered where stormwater goes after it runs off campus sidewalks, or how construction might affect the air we breathe?
At George Mason University, I’m working as part of the Patriot EnviroWatch project to monitor how our everyday activities impact water and air quality, and ultimately, the health of our environment.

Hello! My name is Chayanan Maunhan and I am an undergraduate researcher in the Department of Civil, Environmental, and Infrastructure Engineering at George Mason University. Today, I’ll be presenting my research work, which is part of the broader Patriot EnviroWatch project.

My specific focus within the Patriot EnviroWatch project is monitoring water quality across Mason’s Fairfax campus, along with participating in preliminary air quality data collection.
The photos you see here show a few of the key sites where I collected samples under different weather and seasonal conditions.
Research like this is critical because stormwater runoff can carry pollutants that harm local streams, rivers, and eventually the Chesapeake Bay, while air pollution affects campus health and sustainability.
By measuring these indicators, we can evaluate the effectiveness of campus restoration efforts and help guide future environmental management.

In my research, I primarily focused on monitoring water quality across George Mason University’s Fairfax campus.
I used Vernier probes to measure key water quality parameters: pH, turbidity, conductivity, temperature, and dissolved oxygen concentration.
Chlorophyll levels, which provide insight into algae growth and nutrient enrichment, were measured using a Vernier spectrophotometer.
Although my main focus was water quality, I also contributed to preliminary air quality data collection at Mason’s Arlington campus using portable PurpleAir PM2.5 monitors.
The air quality data generally remained within the EPA's acceptable range, but a few readings exceeded 12 micrograms per cubic meter.
While still considered safe for most of the population, these elevated levels could pose some risk to sensitive groups, such as individuals with respiratory conditions.
These early results demonstrate the importance of continuing both water and air quality monitoring as part of Mason’s sustainability goals.

One important factor in environmental monitoring is that conditions constantly change.
After rainstorms, turbidity and nutrient levels often rise due to runoff carrying sediments and pollutants into streams.
During hot weather, dissolved oxygen levels can drop, stressing aquatic life.
In dry periods, conductivity often increases because of accumulated salts.

My research activities included collecting water quality field data during different seasons and weather conditions, contributing to preliminary air quality measurements, and analyzing trends in environmental conditions.
These efforts help support Mason’s broader sustainability goals, including improving stormwater management and protecting the Chesapeake Bay watershed.

I would like to sincerely thank my faculty mentor, Dr. Viviana Maggioni, the Patriot EnviroWatch research team, and Mason Facilities for their support and collaboration.
I would also like to acknowledge the OSCAR Undergraduate Research Scholars Program for providing funding and making this research opportunity possible.

Thank you for listening to my presentation.
Through this research, I’m gaining valuable experience in environmental monitoring and helping protect both Mason’s environment and the broader Chesapeake Bay watershed.

Categories
College of Science OSCAR

Validations of Estrogen Assays in Baleen of North Atlantic Right Whales (Eubalaena glacialis)

Author(s): Sarah Fenstermacher

Mentor(s): Kathleen Hunt, George Mason University Department of Biology & Smithsonian-Mason School of Conservation

Abstract
Whale baleen has proven to be an accurate medium for the retrospective longitudinal analysis of hormones. Baleen plates, the filter-feeding apparatus attached at the upper jaw in mysticete whales, grow continuously and represent a multi-year endocrine record that remains stable without undergoing post-mortem decomposition. While previous studies have quantified steroid and thyroid hormone concentrations in baleen from multiple species to evaluate different life-history events, the role of estrogens remains relatively understudied. Understanding reproduction in the critically endangered North Atlantic right whale (NARW), for example, is vital for accurate population estimate models. Therefore, archived baleen samples from two female NARW baleen plates were drilled every 4 centimeters using a Dremel and pulverized into a fine powder. Hormones were extracted from the baleen powder, and Arbor Assays enzyme immunoassays (EIA) were used to quantify hormone concentrations. Three estrogen hormones, estrone (E1), estradiol (E2), and estriol (E3), were all validated for NARW baleen through parallelism tests using a pooled sample from non-pregnant females. This demonstrated a sample curve that was parallel to the standard curve, both of which were serially diluted: estrone (F1,8 = 0.09058, P = 0.771, r2 = 0.99), estradiol (F1,8 = 4.482, P = 0.0671, r2 = 0.98), estriol (F1,8 = 0.9084, P = 0.3685, r2 = 0.99). These hormones were quantified and compared to previously collected progesterone, stable isotopes, and confirmed calf sightings to determine the behavior of these hormones during pregnancy, lactation, and resting periods. The data from these two females showed a spike in E2 at the end of pregnancy (after the progesterone (P4) spike) and stable levels before pregnancy, which was the expected result. These estrogens appear to provide valuable insight into the study of reproduction (including gestation length and inter-calving intervals) in baleen whales.
Audio Transcript
Hi everyone! My name is Sarah, and I will be presenting my project on Validations of Estrogen Assays for Baleen of North Atlantic Right Whales. The samples used for this research came from the two whales pictured here…
Their names are Stumpy and Staccato, and they're females who both died from vessel strikes in 2004. Ship strikes and entanglement are the two top killers of NARWs, and they are critically endangered with only about 370 individuals remaining. Of those, only about 70 are reproductively active females, meaning that the rate of population growth is limited by how often these females can have calves. Before these two were killed, they were part of the breeding population, so they had documented pregnancies from regular calf sightings, and Stumpy also died with a full-term fetus.
Previous research on their baleen also confirmed that certain pregnancy hormones were elevated at the same time as these two were assumed pregnant, and subsequently seen with calves.
So what is baleen?
Baleen is keratin (the same structure as your fingernails and hair), and it is what they use to filter feed. It’s arranged in vertical strips that hang from the upper jaw as shown in these photos.
My mentor, Dr. Hunt, was on the team that first determined that these baleen plates contain stable steroid and thyroid hormones, and that repeated sampling along the length of a baleen plate can represent an endocrine record that spans multiple years of a whale's life.
Because there is still debate in the large whale research community regarding length of gestation and exactly what happens during pregnancy, I was interested in re-examining these two females, this time, focusing on three estrogen hormones: estrone, estradiol, and estriol.

One of these hormones has been measured in NARW before (estradiol), but the other two (estrone and estriol) have never been measured in baleen whales before. We assumed that hormone extraction methods previously used would also work with these hormones, so we followed the protocol that Dr. Hunt developed.
Briefly, we measured the length of the baleen plate and used a Dremel to generate powder every 4 cm along the length of the plate, then weighed out 20 mg of powder. Hormones are then extracted from the powder using a MeOH-based protocol, followed by resuspension in assay buffer. Next, we performed enzyme immunoassays for each target hormone. This test allows us to calculate the target hormone concentration in each sample.

Because only one of these hormones has been previously validated for use in NARW baleen, my first objective was to ensure all three estrogen hormones could be reliably measured in these samples. Specifically, I ran a parallelism test for each estrogen, and these are my results. On the x-axis of each graph, you see the log of the relative dose, and on the y-axis, the percent of bound antibody. The goal for parallelism is for the standard curve to match the sample curve, both of which are made with serially diluted samples. I used a pooled dilution of non-pregnant samples from the two females (Stumpy and Staccato), and all three estrogens passed for parallelism. This means the sample curve was not significantly different from the standard curve (that is, they were parallel to one another). We can see that the sample curve for E3 (estriol) only has 3 points; we did test other samples, but it appears a dilution greater than 1:4 did not have a high enough concentration of the hormone to be detectable (though 1:1 to 1:4 is detectable).
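For readers curious what such a validation looks like computationally, here is a minimal sketch of a parallelism test: it compares a model where the two serially diluted curves share one slope against a model with separate slopes, using an F-test. The data points below are made up for illustration, and this is not the assay software actually used; note that 12 total points yields F(1,8) degrees of freedom, matching the statistics reported in the abstract.

```python
import numpy as np

def parallelism_f_test(x1, y1, x2, y2):
    """F-test for equality of slopes between two regression lines.

    Fits a full model (separate intercepts and slopes) and a reduced model
    (separate intercepts, shared slope), then compares residual sums of
    squares. Returns (F, df1, df2) with df1 = 1 and df2 = n - 4.
    A small F (large p-value) means the two curves are parallel.
    """
    x1, y1, x2, y2 = (np.asarray(a, dtype=float) for a in (x1, y1, x2, y2))
    n = len(x1) + len(x2)

    def rss(X, y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)

    g = np.concatenate([np.zeros(len(x1)), np.ones(len(x2))])  # group indicator
    x = np.concatenate([x1, x2])
    y = np.concatenate([y1, y2])
    ones = np.ones(n)
    rss_full = rss(np.column_stack([ones, g, x, g * x]), y)  # separate slopes
    rss_reduced = rss(np.column_stack([ones, g, x]), y)      # shared slope
    F = (rss_reduced - rss_full) / (rss_full / (n - 4))
    return F, 1, n - 4

# Two made-up, nearly parallel dilution curves (slopes both ~2): expect a small F
F, df1, df2 = parallelism_f_test(
    [0, 1, 2, 3, 4, 5], [0.1, 2.0, 3.9, 6.1, 8.0, 9.9],
    [0, 1, 2, 3, 4, 5], [1.0, 3.1, 4.9, 7.0, 9.1, 10.9],
)
print(F, df1, df2)
```

In this framing, "passing" parallelism simply means the slope-difference term adds no significant explanatory power.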

This project will continue into the summer, but I wanted to provide preliminary results of what we have seen so far. Because estradiol is typically a major pregnancy hormone, we wanted to assess it along the length of each baleen plate, providing longitudinal information during pregnancy, lactation, and non-pregnant (or resting) periods. We are working on continuing these assays along the length of the plate, so you will see some missing points, but we do have the results from one full pregnancy (in Staccato). Just to orient you on this graph, the x-axis provides the distance from the base (in cm), which really means time, and time moves forward from left to right (the very right side of the graph represents when the baleen plate was collected, meaning when she died).
On each of these graphs, the left y-axis, in blue, shows the concentration of estradiol, while the right y-axis, in green, shows the previously published longitudinal progesterone profile for each female. Stumpy, on the left graph (a), has roughly the second half of a pregnancy shown on the left side of her graph (earlier in time), while Staccato (graph b) has an entire pregnancy and the beginning of lactation shown. Though we are still working to fill in gaps, the results so far match what we expected. Estradiol (E2) was relatively stable before pregnancy but rose and peaked toward the end of pregnancy. Progesterone starts to elevate at the start of pregnancy and maintains higher levels through the majority of a pregnancy.

So to summarize, assay parallelism validations were successful for E1, E2, and E3, which means that I will be able to analyze all three hormones along the length of both Stumpy and Staccato’s baleen plates. This furthers our understanding of the relationship between progesterone and the estrogens before, during, and after pregnancy. Once this is established, we may find similar patterns in other baleen whales, which will be interesting upon further study. This type of research will contribute to our understanding of large whale reproductive cycles, which is generally unknown, and will hopefully aid in population models and conservation efforts for this endangered species.

This project was funded by the OSCAR Undergraduate Research Scholars Program at George Mason. I'd also like to give special thanks to my mentors, Dr. Hunt and Ms. Jelincic, for providing me with the guidance needed to complete this project. I would also like to acknowledge the Woods Hole Oceanographic Institution for letting us borrow these archived baleen plates.
Thank you so much for listening and I hope you enjoyed learning about these incredible females, Stumpy and Staccato.

Categories
College of Engineering and Computing OSCAR

Laser-Induced Graphene for Flexible Graphene-based Doppler Imaging

Author(s): Philip Acatrinei

Mentor(s): Pilgyu Kang, Mechanical Engineering

Abstract
While commercial blood fluid velocity sensors exist, many cannot be used on pediatric patients or require the child to have their chest open and exposed for sensing. If there were a smaller, flexible device that could be surgically attached to the aorta, the largest artery of the heart, it would be useful for bedside monitoring of pediatric patients as well as adults. This is achieved with a device utilizing porous laser-induced graphene as a flexible, high-surface-area electrode and PVDF-TrFE (poly(vinylidene fluoride-trifluoroethylene)) as a flexible piezoelectric polymer. The combination of these two materials increases sensitivity while retaining mechanical strength and flexibility. Unfortunately, the design of the device had to be changed halfway through testing, so there is not yet data on the central frequency of the Doppler device or how well it functions. With more testing, these figures will be known and the device can be properly tuned to achieve the performance numbers required by our collaborators at the National Children's Hospital.
Audio Transcript
The video has the transcript embedded in YouTube's closed captioning as well.

Hello everyone, my name is Philip Acatrinei. I am an undergraduate student in the Department of Mechanical Engineering at GMU, working with Dr. Pilgyu Kang to bring laser-manufactured 3D graphene to flexible graphene-based Doppler imaging. This video is part of OSCAR URSP's Spring 2025 Celebration. And without further ado, let's get into it!

So, a little bit about our lab: we have a background in 2D materials, micro- and nano-manufacturing mechanics, nano-biosensors, nano-photonics, opto-fluidics, optoelectronics, and plasmonics. We've done collaborative research in the past with Cornell, NSF, PARADIM, and CNF, and most recently some collaborative research with NASA. Our lab is located in the IABR building at the SciTech campus.

So, after a cardiovascular surgery, it's really important to have bedside monitoring of blood fluid velocity, mainly of the aorta, to determine the heart health of the patient. That's great for us adults, but in pediatric surgery, children have much smaller bodies, and the devices currently available for monitoring blood fluid velocity are made for adults, so for children they are usually too large and bulky to use properly. That's why we believe it is very important to have pediatric blood fluid velocity sensors for safe monitoring of post-surgery heart health in children.

Now, commercially available blood fluid velocity sensors have their advantages and disadvantages. Some advantages: they're common in hospitals around the world, and hospital staff are already trained on their use; they're reusable, which means multiple patients can use the same device over its lifespan; and they're accurate, with real-time data collection, data display, and data storage available. But, as touched on before, they do have some disadvantages. Because of their size, they increase risk: to use these devices in pediatric surgeries, the child's chest must stay open and exposed, which is not a good thing if you want safe monitoring of blood fluid velocity to determine heart health. And they are not conformable, meaning they're not flexible or conformable to the human body, so the chest must stay open and exposed to integrate these sensors and monitor blood fluid velocity.

Some state-of-the-art research tries to address this by using the PPG optical method, the DBUD method, or other methods, but most of them read blood fluid velocity unobtrusively through the skin and fat layers. This is great because it is unobtrusive, but it is also their greatest weakness: they must be placed very specifically, or they are only usable on specific parts of the body, say your fingertip or a specific artery, and you have to place them very carefully over that artery to make sure you're aiming for it. So they have their pros and cons too.

Now, our novel approach is a conformable device specifically developed for children, so we wanted it to be smaller and thinner to ensure flexibility and conformability. The materials need to be body-safe, robust, and flexible, and we want to utilize two materials. The first is PVDF, or polyvinylidene fluoride, a flexible piezoelectric polymer that is better for this application than traditional ceramic piezoelectric elements, which are not flexible. The second is laser-induced graphene, a flexible, high-surface-area electrode that interacts better with PVDF, and that better interaction increases device performance. To explain how our device works, I want to give a practical example.

So, everyone has experienced the Doppler effect in their life, whether you know it or not. As an example, take an ambulance. Everyone has heard an ambulance drive by: it sounds high-pitched when it's coming toward you, and the second it passes you it magically lowers in pitch. That difference between the frequency you hear, the higher or lower pitch, and the pitch the ambulance is constantly putting out is called the Doppler shift, and the Doppler shift is directly proportional to the speed the ambulance is going, or that you are going relative to the ambulance. We use this Doppler shift as our working principle.

So we have two Doppler devices, one an emitter and one a receiver. We emit ultrasound at a specific central frequency that we know; it bounces off a red blood cell and scatters, losing or gaining energy and either rising or lowering in pitch. By measuring the shift from the original central frequency, we are able to tell the speed of the red blood cells passing by.
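As a rough illustration of that working principle, the standard continuous-wave Doppler relation can be sketched in a few lines. The 10 MHz center frequency and 15-degree beam angle appear elsewhere in this talk; the 13 kHz shift and the 1540 m/s soft-tissue sound speed are assumed illustrative values, not measurements from this project.

```python
import math

def blood_velocity(doppler_shift_hz, center_freq_hz, angle_deg, c=1540.0):
    """Estimate blood velocity (m/s) from a measured Doppler shift.

    Continuous-wave Doppler relation:
        delta_f = 2 * f0 * v * cos(theta) / c
    solved for v. c defaults to ~1540 m/s, a typical speed of sound
    in soft tissue (assumed value, not from the talk).
    """
    return doppler_shift_hz * c / (2.0 * center_freq_hz * math.cos(math.radians(angle_deg)))

# Example: a 10 MHz emitter at a 15-degree beam angle measuring a 13 kHz shift
v = blood_velocity(13e3, 10e6, 15.0)
print(round(v, 3))  # ≈ 1.036 m/s
```

The cos(theta) term is why the beam angle matters: the sensor only measures the velocity component along the ultrasound beam.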

So, we have again an emitter and a receiver, and because we have two, we use continuous-wave Doppler. If we had one emitter that also acted as a receiver, we would get loss of information, since it could only receive or send, never both at the same time. But because we have a separate emitter and receiver, we have lossless information, which is really great. Our device is specifically tuned to an angle theta of 15 degrees, so that we target around 4 mm into the aorta, the center, where the velocity is fastest.

Now, a little about the materials I very briefly glossed over, starting with our 3D porous graphene. To manufacture it, we use a photothermal process via a CO2 laser, which we use to lase polyimide sheets, producing our laser-induced graphene. This makes the process simple, scalable, and cost-effective.

Its unique properties are great for our purpose in flexible electrodes. We used a four-point probe method to find the sheet resistance, which we found to be 5.35 ohms. This is very low, which is excellent for electronic applications, and because of its structure it is very mechanically flexible and strong. It also has high carrier mobility, which is great for high-speed electronics.
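For context, a collinear four-point probe converts a voltage/current reading into sheet resistance with a standard geometric factor. This is only a sketch: the V and I readings below are hypothetical numbers chosen to reproduce the ~5.35 ohm figure quoted above, not actual lab measurements.

```python
import math

def sheet_resistance(voltage_v, current_a):
    """Sheet resistance from a collinear four-point probe measurement.

    For a thin film much larger than the probe spacing, the standard
    geometric correction factor is pi / ln(2) ≈ 4.532.
    """
    return (math.pi / math.log(2)) * voltage_v / current_a

# Hypothetical readings chosen to land near the 5.35 ohm figure from the talk
print(round(sheet_resistance(1.18e-3, 1e-3), 2))  # ≈ 5.35
```

The four-point arrangement sources current through the outer probes and senses voltage on the inner pair, so contact resistance drops out of the measurement.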

Some more advantages are its increased surface area, so the interface with PVDF is increased and the electrochemical properties improve device performance, and, again, it is mechanically flexible.

A little more about PVDF, or polyvinylidene fluoride: it starts off as a liquid that we pour and then cure at 140 degrees Celsius to become a polymer. Because it starts off as a liquid, when we pour it on top of our laser-induced graphene, which is a porous structure, that structure acts almost like a sponge, sucking in the PVDF liquid. So when it turns solid, we have a really incredible interface between the PVDF and the laser-induced graphene, with that very high surface area. Not only do we have a cost-effective additive manufacturing process for depositing the PVDF, we can also adjust the central frequency, which is important for the human body, since 10 megahertz is the ideal central frequency for passing through skin and fat. By changing the thickness of the PVDF layer, which is very easy to do, we can adjust the central frequency to whatever we want. And PVDF is a very flexible piezoelectric polymer, perfect for the wearable and flexible electronics we're interested in.

And a little bit about PDMS, which I didn't touch on: it is the substrate we place our sensor on to keep it at the 15-degree angle theta. PDMS, also known as polydimethylsiloxane, is a sort of silicone. It starts off as a liquid in two parts, a base and a curing agent; you can pour it into a mold, and when we remove it from the mold we get a very flexible silicone. This is great because it is cost-effective, we can do whatever we want with molds, and it is incredibly mechanically flexible. But the most important thing for us is that it is optically clear: it does not affect our ultrasound waves in any way, shape, or form as they pass through, and it doesn't refract them or lose energy, which is incredible for us and for what we are trying to achieve.

So, to test sensor performance, we do either d33 characterization, using an LCR meter to determine piezoelectricity after poling, or we use a phantom heart model, which can set a known blood fluid velocity, so we can test our sensor readings against the known value to determine accuracy.

Now, some conclusions: we have made advancements in acoustic transducers via the laser-induced graphene and PVDF layers, and innovations in wearable electronics, all of these being flexible and conformable to the human body. I did want to say there were some setbacks with this project over the semester. In the first half of the semester, we worked with a design, finished it, and got it ready for testing, and then our collaborators at Children's National Research Institute told us it wasn't good enough and we had to redesign. So we spent the second half of the semester redesigning and producing the new design, and unfortunately we were not able to test this semester with the d33 characterization or the phantom heart, but we hope to do that very soon. For potential applications, we hope to see it used in pediatric surgeries and integrated with a wireless platform for bedside heart health and blood monitoring.

Some acknowledgements I wanted to make: thanks to Noemi Lily Umanzor, who helped make the CAD model of the new, updated device design; she also helped with some basic tasks and definitely made my life a little easier on this project. I want to acknowledge the Chitnis lab and thank Dr. Parag Chitnis, and especially Ehsan, for helping us pole the PVDF using their poling machine on the Fairfax campus. I also want to thank our collaborators at George Washington University and our contact at Children's National Research Institute, Dr. Kevin R. Cleary. And yeah, I think that is all. I can't take questions, unfortunately, because this is a video, but I hope you can find me on the day we are doing posters, which should be May 6th, and I'll see you there! Thank you.

Categories
College of Science OSCAR

Behavior of Estuarine Crab Hosts as Affected by Parasite Infection

Author(s): Kiersten Jewell

Mentor(s): Amy Fowler, Environmental Science and Policy

Abstract
Parasites are an understudied portion of ecosystems, considering the impacts they have on their host species. Marine invertebrates such as crabs serve as both primary and intermediate hosts for several different parasite species. In the Chesapeake Bay region, the white-fingered mud crab (Rhithropanopeus harrisii) has been shown to host entoniscid isopods (Cancrion and Cryptocancrion spp.) and a rhizocephalan barnacle (Loxothylacus panopaei). Given previous studies showing that parasites can change host population densities, alter predator-prey dynamics, and impact food web function, we sought to determine how parasite infection affects crab host behavior in the presence of a predator. These parasites are not trophically transmitted; if the host dies, they do too. Therefore, we hypothesize that infected crabs will spend more time hiding and resting compared to uninfected crabs. To test predator response, crabs were placed into an aquarium with open space and shelter habitat available. Their habitat use and behavior were recorded on video and quantified before and after the addition of a blue crab predator scent cue. Preliminary results show that uninfected crabs spend less of their time moving and more hiding and resting, as compared to their infected counterparts. This project will continue in the fall of 2026, expanding the sample size of crab hosts across all infection statuses.
Audio Transcript
Hello, my name is Kiersten and I am an undergraduate researcher in Dr. Fowler's aquatic biology lab here at the Potomac Science Center, and my project for OSCAR this semester is looking at parasite infections in crab hosts and how they affect behavior, specifically in the white-fingered mud crab, Rhithropanopeus harrisii, and two parasites that are found in it. One is Loxothylacus panopaei, a parasitic barnacle that is actually invasive to the Chesapeake Bay; it creates an externa on the outside of the crab's reproductive organs and has a lot of morphology-changing properties: it feminizes the male crabs and completely castrates all crabs. I'm also looking at a species of Cryptocancrion, which is an entoniscid isopod. The thing about both these parasite species is that they are not trophically transmitted, meaning that if the host dies, the parasite dies. This led me to hypothesize that an infected crab is going to spend less time doing bold activities: it will show an increase in hiding and resting and a significant decrease in moving around, especially in the presence of a predator.

So what does it look like for us to test this? We have an aquarium setup where we simulate conditions with and without a predator present. We use scent cues, which are frozen ice cubes: the predator-cue ice cube contains frozen water that a blue crab was marinating in, and the control ice cube contains plain water with no predator scent. The aquarium has water the crabs are acclimated to, a base layer of substrate along the bottom, a shelter made of PVC and tiles, a red light (because red is the go-to for crab behavioral studies), and a saran-wrap hammock where the ice cube can rest.

For each crab, the experiment starts with an acclimation period, where the crab is allowed to be in the tank for 20 minutes before it is videoed. Then we start the video and record a control period, with no scent ice cube, just the crab in the tank. Then we add either the control ice cube or the predator ice cube with the scent cue and record again. Afterwards, I analyze the video with an ethogram: I have a whole suite of behavior options and a suite of location options, and at 30-second intervals I record what the crab is doing and where it is.

For our preliminary results, we simplified these behaviors into three categories: resting, moving, and hiding. This graph shows the proportion of time crabs spent in these different activity levels by infection status, and as you can see, the uninfected crabs are actually showing less time moving and more time resting and hiding. This next graph again shows the proportion of time in the different activity bins, but across the different cue presences: the control, the predator, and no cue. We would expect there not to be a big difference between the no-cue and control conditions, because there's no scent on the control cube. But we are noticing on this percent-change graph that there is a difference, which indicates that maybe it is the ice cube itself that is impacting the crab's behavior, not so much the scent cue.

I am continuing this project in the fall as an independent research project, where I will increase the number of replicates across all infection statuses. Hopefully this will allow us to draw some cool conclusions about how parasite infection affects crab behavior, and it will culminate in a publishable unit. I want to thank you all for listening, and thank you, OSCAR, for funding this project for the spring of 2025.
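The scan-sampling analysis described above, scoring a behavior at every 30-second interval and reducing the record to proportions per category, can be sketched as follows. The trial data are made up for illustration; this is not the project's actual analysis code.

```python
from collections import Counter

def activity_proportions(scans):
    """Proportion of 30-second scan samples spent in each behavior category.

    `scans` is the sequence of behaviors recorded at each 30-second
    interval, already simplified into 'resting', 'moving', or 'hiding'.
    """
    counts = Counter(scans)
    total = len(scans)
    return {b: counts[b] / total for b in ("resting", "moving", "hiding")}

# Hypothetical 10-minute trial scored at 30-second intervals (20 scans)
trial = ["hiding"] * 9 + ["resting"] * 7 + ["moving"] * 4
print(activity_proportions(trial))  # {'resting': 0.35, 'moving': 0.2, 'hiding': 0.45}
```

Comparing these proportion vectors across infection statuses and cue treatments is what the graphs in the presentation summarize.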
Categories
College of Humanities and Social Science OSCAR

Changes in BDNF in a mouse model of Alzheimer’s Disease with APOE4 and Tau compared to wild type mice

Author(s): Jasmine Mendoza

Mentor(s): Jane Flinn, Psychology

Abstract
Alzheimer’s Disease (AD) is a neurodegenerative disease characterized by two biological components: an increase in both amyloid beta (Abeta) and hyperphosphorylated tau (p-tau). According to the dual pathway hypothesis, Abeta and p-tau accumulate independently yet simultaneously. A possible mechanism for the dual pathway hypothesis is Apolipoprotein E (APOE), a protein that plays a central role in neurodegenerative diseases such as AD. The E4 isoform of APOE is most commonly associated with a higher risk for late-onset AD, though recent research indicates APOE4 may play a neuroprotective role when not in the presence of Abeta. Another protein that plays a neuroprotective role is brain-derived neurotrophic factor (BDNF), which is essential for neural plasticity and therefore learning and memory. Decreased amounts of BDNF are correlated with more severe cognitive deficits associated with AD, and APOE4 and Tau have previously been found to reduce BDNF levels. However, there has yet to be a study that examines the effects of APOE4 and Tau on BDNF in the absence of Abeta. The present study aims to fill this research gap using Western blots on four different genotypes of mice (E4 x Tau, Tau, E4, and wild type) to target BDNF.
Audio Transcript
Hello everyone, my name is Jasmine Mendoza and my mentor is Dr. Jane Flinn, and my project is titled “Changes in BDNF in a mouse model of Alzheimer’s Disease with APOE4 and Tau compared to wild type mice.”

To begin, I’d like to define some key terms in my project. Alzheimer’s Disease is a neurodegenerative disease that affects learning and memory. Late-onset Alzheimer’s Disease is typically diagnosed in individuals age 65 and up, while early-onset Alzheimer’s Disease, which is much less common, is typically diagnosed in individuals below the age of 65.

Brain-derived neurotrophic factor, or BDNF, is an essential protein for learning and memory, as it promotes neuronal growth and plasticity. In contrast, Apolipoprotein E, or APOE for short, is a protein that helps metabolize fats, and there are three different alleles of the APOE protein. There’s APOE2, APOE3, and APOE4, with APOE4 being commonly associated with late-onset Alzheimer’s Disease, and it’s considered a high risk-factor for the disease.

Finally, the two key biological components of Alzheimer’s Disease are the peptide amyloid beta, or Abeta, which accumulates into plaques, and a version of the protein tau which has an excessive amount of phosphate groups attached to it, which is referred to as hyperphosphorylated tau, or p-tau, and this hyperphosphorylation causes it to accumulate into tangles.

The previous literature on Alzheimer’s Disease has largely found that APOE4 has detrimental effects on AD patients, usually exacerbating the cognitive deficits associated with the disease. For instance, the dual pathway hypothesis proposed by Small & Duff suggests that the accumulation of Abeta and p-tau happens independently but still parallel to each other, which eventually leads to neuronal death and cognitive decline, and that this accumulation may be facilitated by APOE4. Likewise, APOE4 has been found to reduce BDNF levels, and because BDNF is a neuroprotective protein, reduced amounts of it are associated with more severe cognitive deficits, as found by Laske et al. in 2011. However, a recent study done by a graduate student who is also in my mentor’s lab found results that suggest APOE4 and Tau may play a neuroprotective role when they are not in the presence of Abeta. The current literature has not yet examined how APOE4 and Tau interact with BDNF in the absence of Abeta, so this study aims to address that gap in the literature, and is also a continuation of that 2024 study by Booth in my mentor’s lab.

So, because this study is a continuation of the previous Booth study, the brains used come from mice that were separated into four different genotype groups and two different metal ion supplement groups, as those were the groups used in the Booth 2024 study. The four different genotype groups are those with APOE4, those with Tau, those with both APOE4 and tau, and wild type mice without either APOE4 or tau. The two metal ion supplement groups are those with zinc and those without any metal ion supplement. This study will use Western Blots, a technique that targets specific proteins and isolates them from other proteins in biological samples. The proteins are targeted through the use of antibodies which bind to the target protein and are prevented from binding to any other proteins by a process called gel electrophoresis. The primary antibodies that we’ll be using in this study are BDNF, of course, and Glyceraldehyde 3-phosphate dehydrogenase, or GAPDH, which will be used as a loading control. The secondary antibody will be mouse anti-rabbit, which will bind to the primary antibody and make it easier to visualize and quantify BDNF. The data will then be analyzed in SPSS. As for the possible implications of this study, the findings of Booth (2024) were unlike those of previous studies examining the effects of APOE4 in Alzheimer’s Disease, as most previous studies, as we’ve discussed, have found negative effects of APOE4, while the results of his study actually found a possible neuroprotective effect that APOE plays. In his study, Booth does suggest that the ages of the mice used may have been a mediating factor in this effect, because the mice were younger, so it’s possible that APOE4 only plays a neuroprotective role in younger individuals. Thus, a future study could possibly examine the effects of APOE4 on BDNF in different age groups.
Nonetheless, what we are hoping to find with this study is more information on the interaction of APOE4, Tau, and BDNF in Alzheimer’s Disease mice, which could eventually lead to new possible treatments involving APOE4 and Tau.

Finally, I would like to thank my mentor, Dr. Jane Flinn, as well as everyone in the Flinn lab, and I would also like to thank Dr. Karen Lee and URSP as a whole for providing funding and guidance throughout this project.

And here are my references. Thank you for your time, and I hope you enjoyed hearing about my study.

Categories
College of Science OSCAR

Assessing symbiont diversity in restored and wild coral populations in Honduras

Author(s): Karina Cabrera

Mentor(s): Jennifer Salerno, Environmental Science and Policy Department

Abstract
Elkhorn (Acropora palmata) and staghorn (A. cervicornis) corals are important reef builders on Honduran reefs, and their coverage has declined by >90% since the 1970s due to disease and bleaching. These corals form obligate symbioses with photosynthetic dinoflagellate endosymbionts, and different symbiont taxa provide the host coral with benefits that aid coral resilience, such as thermotolerance or disease resistance. Ongoing coral restoration projects in Honduras have not yet identified the symbiont taxa in their corals, information that is needed to ensure effective restoration. Here, we used restriction fragment length polymorphism analysis to screen and identify symbionts from 266 wild and restored corals across different reef sites. This information will be given to the restoration programs, enabling them to assess the genetic and symbiotic diversity of their restored corals and improve their approach to slowing the population decline of these important corals.
Audio Transcript
Hello, everyone. My name is Karina Cabrera, and I am a Junior here at GMU pursuing a BS in Geology and a minor in oceanography. Today, I will be talking about the work I have done this semester to develop a protocol for identifying coral symbionts in elkhorn and staghorn corals.

Corals are important ecosystem engineers that build up coral reefs and provide habitat for extremely diverse organisms to live in, supporting as many as 1/3 of marine species. They also benefit human communities near the coast by supporting ecotourism and reducing coastal erosion. In the Caribbean, staghorn and elkhorn corals were historically dominant reef-builders but have experienced over 90% decline in the past 4 decades due to bleaching and disease.

This unfortunate decrease not only puts reef ecosystems at risk but also threatens the organisms that depend on reefs for survival, including humans. One way to combat this decline is through coral restoration, and specifically a method called coral gardening, in which samples are taken from wild corals and then grown in controlled conditions so that the populations of staghorn and elkhorn corals are restored. Although this collection method is an easy and fast way to restore corals and helps increase population numbers, the process relies strictly on asexual reproduction, which means that coral host and symbiont diversity decreases over time.

These photosynthetic dinoflagellate symbionts form obligate symbiotic relationships with the corals, and different symbiont taxa provide the host coral with benefits that aid coral resilience, such as thermotolerance or disease resistance. Because of this, understanding the phylogenetic diversity of these symbionts will help improve the effectiveness of coral restoration efforts. I am working with four coral restoration programs in the Bay Islands of Honduras, seen on this map, but these restoration programs do not currently have the necessary molecular facilities or financial resources to perform molecular symbiont identification. To address this need, my URSP project focuses on developing a relatively cheap and efficient assay to identify the coral symbionts.

Samples were collected from wild and restored populations of the two coral species being restored in Honduras, staghorn and elkhorn corals. 100 wild corals were collected from sites all around the island of Roatan, and 166 restored corals were collected from the four different restoration programs on Roatan and Utila. To identify the symbionts in these samples, I developed a protocol based on polymerase chain reaction (or PCR) and restriction fragment length polymorphisms (RFLP), originally developed by Rowan and Powers. This protocol amplifies the 18S rRNA gene in the symbiont and then cuts up the DNA. These different length fragments from different DNA sequences are what cause different banding patterns. These different patterns then correlate to the taxonomic clades that the symbionts belong to. As you can see, these are the banding patterns for clades A, B, C, and D.

Getting into my results, I first optimized the PCR step. Based on the original protocol, which incorporated lower-quality DNA extractions, I was not getting good amplification of the target gene from most of the samples, as shown in this blank PCR gel. This is due to the DNA being too short for the banding to show up. Because this gene is very long, I switched the protocol to use higher quality DNA instead and received much better results. In this optimized gel there are clear bands due to the DNA being of higher quality and longer. I am now working to optimize the RFLP portion of the protocol. The restriction appears to be working: in the gel there is some banding appearing at 30 minutes, and there are some double banding patterns present, which is expected for these symbionts, but the bands were not separated enough, so I let the gel run for an hour and saw that it had become blurry.
Because of this, my next steps are to try optimizing the time in which the gel is run, since an hour seems too long, but 30 minutes is not enough for the bands to become clear, so hopefully reducing the time will give us clearer results. Once I have optimized this portion of the protocol, I will screen all the wild and restored corals and share my results and the protocol itself with the four restoration programs in Honduras. This will help them design outplanting schemes that maximize genetic diversity and ensure that the restored populations mimic the diversity found in the wild. This will help improve the effectiveness of restoration efforts in Honduras and help to build future reef resilience against ongoing climate change.
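As an aside, the fragment-length logic behind RFLP (different DNA sequences are cut in different places, so they produce different banding patterns on a gel) can be sketched in a few lines of Python. The sequences and the EcoRI-style recognition site below are invented for illustration and are not the actual 18S rRNA sequences or enzymes used in this protocol.

```python
# Toy model of a restriction digest: cut a DNA sequence at every occurrence
# of a recognition site and report the resulting fragment lengths, which
# correspond to band positions on a gel. Sequences and site are made up.
def fragment_lengths(seq: str, site: str) -> list[int]:
    cuts = [0]
    i = seq.find(site)
    while i != -1:
        cuts.append(i)          # model the cut at the start of the site
        i = seq.find(site, i + 1)
    cuts.append(len(seq))
    # fragment lengths are the gaps between consecutive cut positions
    return [b - a for a, b in zip(cuts, cuts[1:]) if b > a]

# Two different "symbiont" sequences give two different banding patterns
print(fragment_lengths("AAAAGAATTCAAAAAAGAATTCAA", "GAATTC"))  # [4, 12, 8]
print(fragment_lengths("AAGAATTCAAAAAAAAAAAAAAAA", "GAATTC"))  # [2, 22]
```

In the real assay it is these length differences, separated out by gel electrophoresis, that map each symbiont to clade A, B, C, or D.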

This research would not have been possible without the OSCAR URSP Program and the Environmental Science and Policy Department here at Mason. Thank you to Teagen Corpening, Jennifer Keck, and all of the RIMS interns who helped to collect samples and made this research possible. Finally, I acknowledge all the funders who supported this project. Thank you for your attention!

Categories
College of Science OSCAR

Navigating the Healthcare System: Barriers and Resources for Individuals from Low-Income and Immigrant Backgrounds.

Author(s): Kabir Toor

Mentor(s): Blake Silver, Department of Sociology and Anthropology

Abstract
This study investigates the barriers individuals from low-income and immigrant backgrounds face when navigating the U.S. healthcare system. While much existing research focuses on health outcomes, this project centers on the process of accessing care, including how individuals identify needs, seek services, and confront structural and cultural obstacles. The project originally involved an anonymous online survey featuring multiple choice and open-ended questions distributed through community organizations, hospitals, and doctors’ offices. Due to time limitations, primary data collection was minimal, and peer-reviewed scholarly sources were analyzed to identify trends aligned with the study’s goals. The literature revealed consistent barriers such as healthcare costs, limited insurance coverage, communication difficulties, transportation challenges, and fears related to immigration status. Facilitators of access, such as community health centers, family support, and bilingual social workers, were also commonly cited. Findings emphasize that even insured individuals often struggle to access care, illustrating a gap between insurance coverage and actual service use. These findings suggest a need for reforms that address not just insurance coverage but also cultural, logistical, and systemic obstacles, underscoring the importance of community-informed research and policy interventions that reflect the complex experiences of low-income and immigrant individuals across the healthcare landscape.
Audio Transcript
Hello, my name is Kabir Toor, and I’m a student in the Department of Biology at George Mason University.
Today, I’ll be presenting my research project titled “Navigating the Healthcare System: Barriers and Resources for Individuals from Low-Income and Immigrant Backgrounds.”

Accessing healthcare in the U.S. is challenging for many, but especially for individuals from low-income and immigrant backgrounds.
My research asks: How do individuals from these communities navigate the healthcare system, and what barriers and resources shape their experiences?
While much existing research has focused on health outcomes, this project focuses on the process of accessing care itself—how individuals recognize needs, seek services, and confront obstacles along the way.

Existing studies show that access to care is shaped by insurance status, financial barriers, language differences, and trust in healthcare institutions.
For example, DeVoe et al. (2007) found that having insurance doesn’t always guarantee actual access to services.
Further, Ngondwe et al. (2024) emphasized that immigrant communities often face additional bureaucratic and cultural hurdles.
Given limited primary data collection, I analyzed trends across multiple major scholarly sources to anticipate key themes my survey was designed to capture.

The original study design involved creating an anonymous online survey distributed through community centers, hospitals, and doctors’ offices.
The survey included multiple-choice and open-ended questions aimed at individuals identifying as low-income and/or immigrants.
Participants were asked about their experiences navigating healthcare, including barriers encountered and resources utilized.
Although direct survey responses were limited this semester, the survey framework was developed and approved for community distribution.

Using peer-reviewed studies as a reference, several consistent themes were identified. Barriers included high healthcare costs, insurance gaps, communication difficulties, and transportation challenges. Facilitators included access to community health centers, family support systems, and bilingual healthcare providers.
It is important to note that even individuals with insurance often struggled with actual access to needed services, showing that coverage alone is not enough.

Due to timing constraints, comprehensive primary data could not be collected during the allotted time.
The current findings are based on anticipated trends and literature synthesis rather than direct participant responses.
This limitation highlights the need for continued participant outreach to fully validate the study’s themes.

Moving forward, I plan to continue gathering survey responses through additional outreach at community centers and clinics.
Once a robust sample is collected, I will perform a qualitative analysis using open codebook methods.
This process will allow for the identification of emergent patterns directly from participants’ narratives, strengthening the study’s contributions to healthcare policy and access research.

So what are the implications? Well, the findings suggest that reforms must go beyond expanding insurance access to address cultural, logistical, and systemic barriers.
Community-driven solutions and culturally competent healthcare systems are critical to bridging gaps in access.
This project reinforces the importance of centering underserved voices in future healthcare policy discussions.

I would like to thank Dr. Silver, my mentor, for his ongoing support and guidance.
I would also like to thank the OSCAR URSP for funding this research, and I would like to thank the Department of Social Science at George Mason University. That concludes my presentation.
Thank you for your time and attention.

Categories
College of Engineering and Computing OSCAR

Novel 3D Bioprinting Method To Create Hydrogel Gradients

Author(s): Elizabeth Clark

Mentor(s): Remi Veneziano, Bioengineering

Abstract
The primary objective of this project is to utilize hydrogels, which are gels composed of polymer(s) suspended in water, to create a gradient (the change from one concentration of hydrogel to another). To address this, I used TEMPO-oxidized cellulose nanofibers (T-cnf), mixed with dye diluted with deionized water. The T-cnf was split into two portions and dyed two different colors and then placed into syringes which were heated to 70 degrees Celsius for at least five minutes to ensure smooth extrusion. By using a specialized nozzle, I could plug two syringes into one nozzle that uses a static mixer at the tip to mix the dyed T-cnf. Depending on how fast one syringe extruded versus the other, I could change the color and even mix them. Extruding the two syringes by hand, I was able to create gradients with the dyed T-cnf and recreate them with different colors. The results indicate that hydrogels can be manipulated to create gradients. Notably, this project uses a hydrogel and dye with a large range of temperatures under which they can be extruded. When recreating this with different hydrogels and fluorescent dyes that can be used with biological material, there is a need for more temperature control. This research draws on 3D printers that can do multicolored printing using multiple materials. Bioprinters are 3D printers that use bioinks, which are biologically compatible inks (often hydrogels). This research aims to further explore the potential of creating custom bioinks that can be printed in gradients for use in bioprinters in regenerative medicine.
Audio Transcript
Hello! My name is Elizabeth Clark. I’m a bioengineering student, and my research project was about creating a new method to 3D bioprint hydrogel gradients, as many cellular functions and processes rely on gradients within the human body.

So for some background, hydrogels are hydrophilic polymers which are primarily comprised of water. A gradient can be thought of as the change in concentration of a property across a material, in this case along a line.
So here, as mentioned on the previous slide, you can see the change in color from blue to pink, which can help us visualize the gradient, as seen in this picture.
In this project, the hydrogel used was TEMPO-oxidized cellulose nanofibers (I will refer to them as T-CNF). The T-CNF was loaded with printer ink, shown in the vials right here, to visualize the gradients as you had seen in the previous picture; in this case magenta and cyan ink was used. Two syringes, pictured here, were then filled with the dye-loaded T-CNF and heated using this blanket heater to 70 degrees Celsius, and then plugged into this specialized extruder, which allowed me to plug in two syringes at once. Those syringes would then extrude into this small chamber with a static mixer to evenly mix the hydrogel, and the mixture was extruded out of a 22-gauge blunt tip needle.
By changing the syringe’s extrusion rate, the color of the gradients could be changed. The syringes could be guided by hand to create different shapes and designs. Gradients were extruded onto weighing paper or on a glass dish and were approximately 8.5-9 cm long, and the color could be changed multiple times in one gradient line.
Throughout this project, consistent gradients were achieved over several weeks. This means future alterations to the dyes utilized and the hydrogel itself could be done to create more biocompatible gradients. It also means a bioprinter could be used to create 3-dimensional gradients that can be used in biomedical engineering and regenerative medicine.

Categories
College of Engineering and Computing OSCAR

3D-Printed Porous Scaffolds Application in Bone Implants

Author(s): Joelle Nguyen

Mentor(s): Ketul Popat, Bioengineering

Abstract
Bone defect treatments remain a clinical challenge as they require synthetic bone scaffolds with strong mechanical and biological properties. There is a need for adequate exchange of waste and nutrients between the implanted bone scaffold and surrounding tissue. 3D-printed polymeric scaffolds can be designed in such a way that ensures that there is a uniform distribution of pore sizes and wall shear stress for proper nutrient exchange along with consistent cell growth and differentiation. Polycaprolactone (PCL) is a thermoplastic polymer mostly employed in bone tissue engineering due to its biocompatibility, biodegradability, processability, and tunable mechanical properties [1]. PCL has proven to produce effective bone cell growth in the form of PCL-based composite scaffolds in combination with a variety of metal, polymer, and ceramic materials [2]. PCL alone has insufficient osteogenic ability and mechanical strength; therefore, it is often combined with Zinc metal alloy [3] or hydroxyapatite ceramic with polyethylene glycol polymer [4].

This project aimed to evaluate and demonstrate the effectiveness of 3D-printed porous bone scaffolds in supporting the proliferation of osteoblast cells. The primary goal was to assess how a scaffold design provided by 3D-Orthobiologic Solutions (3DOS) promotes cell proliferation and cell viability. This involved tracking how adipose-derived stem cells (ADSCs) populate the scaffold, differentiate, and interact with polycaprolactone (PCL) alone and in its composite form with an osteoconductive ceramic. After a series of experiments optimizing the porous scaffolds’ material composition, layer thickness, and chemical treatment, it was found that none of the 3D-printed scaffolds were cytotoxic, and the cells were able to grow and differentiate into osteoblast cells over a span of 28 days based on fluorescence imaging and assays relevant to osteogenic differentiation.

References:

[1] A. Fallah et al., “3D printed scaffold design for bone defects with improved mechanical and biological properties,” Journal of the Mechanical Behavior of Biomedical Materials, vol. 134, p. 105418, Oct. 2022, doi: 10.1016/j.jmbbm.2022.105418.
[2] M. Gharibshahian et al., “Recent advances on 3D-printed PCL-based composite scaffolds for bone tissue engineering,” Front Bioeng Biotechnol, vol. 11, p. 1168504, Jun. 2023, doi: 10.3389/fbioe.2023.1168504.
[3] S. Wang et al., “3D-Printed PCL/Zn scaffolds for bone regeneration with a dose-dependent effect on osteogenesis and osteoclastogenesis,” Mater Today Bio, vol. 13, p. 100202, Jan. 2022, doi: 10.1016/j.mtbio.2021.100202.
[4] C. C, H. P, P. A, P. Aj, A. F, and Y. J, “Characterisation of bone regeneration in 3D printed ductile PCL/PEG/hydroxyapatite scaffolds with high ceramic microparticle concentrations,” PubMed, 2021, Accessed: Sep. 26, 2024. [Online]. Available: https://pubmed.ncbi.nlm.nih.gov/34806738/
Audio Transcript
Hi everyone!

My name is Joelle Nguyen and I’m a senior majoring in bioengineering. I participated in OSCAR’s URSP this spring semester to conduct a research project on 3D-printed scaffolds for bone implants. Here’s a table of contents to capture what I will be going over today in this video presentation.

Starting with background, there is a clinical challenge for bone defect treatments, as bone implants require synthetic scaffolds with strong mechanical and biological properties. These scaffolds require an adequate exchange of nutrients between the implanted bone scaffold and surrounding tissue and should be able to withstand any external forces or wall shear stress which bone implants usually encounter. 3D-printed polymeric scaffolds pose a potential solution, more specifically 3D-printed scaffolds using polycaprolactone (PCL) composite blends. PCL-composite blends have proven effective in promoting bone growth according to multiple researchers. PCL is a commonly used polymer in bone engineering due to its biocompatibility and ease of manufacturing. Composite blends with PCL combine PCL with a variety of metals, polymers, or ceramic materials, since PCL alone has insufficient osteogenic ability and mechanical strength necessary for bone growth.

My project aims to evaluate and demonstrate the effectiveness of 3D-printed porous scaffolds made of a composite blend which combines PCL with an osteoconductive ceramic, and to optimize the scaffold design’s porosity and thickness for future applications. The porous scaffold designs were provided by 3D Orthobiologic Solutions, or 3DOS for short.

Now I’ll go more in depth on the methodology of my research project. First let me start with my research timeline. Back in February there were some preparations made for setting up the experimental design and gathering all the necessary supplies. Then I started with my first experimental trial in late February, where I wanted to compare how the thickness of the scaffold had an impact on cell proliferation. The next experimental trial started in mid-March, where I wanted to see how thickness and treating PCL-printed scaffolds with 5 M NaOH affected cell proliferation and differentiation. Based on the literature, NaOH works to enhance the hydrophilicity of PCL and create a rougher surface for improved cell attachment. The third experimental trial was to compare the osteogenic cell growth of porous scaffolds made with PCL to porous scaffolds made with the composite blend, after confirming which experimental conditions showed the most cell proliferation in previous trials. Finally, I have been working on the data analysis, which includes counting all the cells imaged under fluorescence microscopy and performing one-way ANOVA not only on the cell counts but also on the data from the multiple assays I’ve done throughout the semester.

The general procedure for each trial looked like the following: I’d sterilize the samples, seed the samples with 20,000 cells/mL of adipose-derived stem cells, stain the samples for fluorescence imaging, and perform assays on specific days. After all of that is done, I work on cell counting and statistical analysis using one-way ANOVA to compare between groups.
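As a sketch of that statistics step, a one-way ANOVA like the one described can be run in a few lines with SciPy. The cell counts below are invented placeholder numbers, not data from this study.

```python
# Hypothetical one-way ANOVA comparing mean cell counts between two
# scaffold groups; the numbers are made up for illustration only.
from scipy.stats import f_oneway

nonporous_counts = [120, 135, 128, 140, 131]
porous_counts = [125, 138, 130, 142, 129]

f_stat, p_value = f_oneway(nonporous_counts, porous_counts)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# If p > 0.05, the difference in mean cell count would not be called
# statistically significant at the usual threshold.
```

With more than two groups (e.g. several layer counts), the extra groups are simply passed as additional arguments to `f_oneway`.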

So I’ll only be discussing the results from Experiment #1. I have only a few fluorescence images here for Days 1, 4, and 7, where the top row is of the non-porous PCL scaffold and the bottom row is the porous PCL scaffold. The blue circles are cell nuclei, and the branches you may see are the cytoskeleton of the cell. These images give me a good idea of how many cells there are on each scaffold as well as how well they are spreading throughout the sample.

Next is another set of images for Days 14 and 28, except this time there is green staining done to depict the amount of osteocalcin released from the cells. Osteocalcin is a protein released by osteoblast cells during bone formation. After imaging all samples under the fluorescence microscope, the general trend I found was that there is an increase in average cell count as the number of layers increases for the scaffold designs. Through a one-way ANOVA analysis it was found that there wasn’t a significant difference in average cell count between the porous and nonporous scaffold designs with the same number of layers, which signifies that a similar number of cells grow between nonporous and porous samples.

However, cell count does not completely capture what’s happening on the sample. Therefore, I conducted additional assays to determine what kind of cells are present on my samples: are there still adipose-derived stem cells or have these stem cells differentiated into osteoblast cells? First I’d observe the osteocalcin area on the fluorescence images. You can see in this graph that a porous scaffold with 5 layers or 1 mm depth showed the highest osteocalcin area in comparison to other scaffold designs, indicating that the cells imaged may be osteoblast cells. This is reaffirmed by the calcium assay done on Day 14 and 28 as porous scaffolds made with 5 layers, specifically on Day 28, showed a statistically significant concentration of calcium on the sample in comparison to other samples.

Onto the conclusion! Based on Experiment #1, scaffolds printed with more layers showed an increase in cells, but there was no statistically significant difference between the number of cells on non-porous and porous scaffolds with the same number of layers. A statistically significant difference in calcium deposition was observed for the porous 5-layer scaffold compared to other samples, meaning that porous scaffolds showed earlier differentiation of stem cells into osteoblast cells. Next steps include completing a statistical analysis for Experiments #2 and #3, testing scaffold designs with smaller pores, and testing additional scaffolds printed from the composite blend filament. Much more information will be provided on my poster during the OSCAR celebration on May 6th. Be sure to stop by if you’d be interested to learn more about how my research turned out.

I’d like to end my presentation with acknowledgements. I’m so grateful for all the support I received throughout this semester. Thank you for listening to my presentation!

Categories
College of Science OSCAR

Digitizing colors of soils in mesocosm wetlands using Nix sensor

Author(s): Seung Han

Mentor(s): Changwoo Ahn, Environmental Science and Policy

Abstract
Forty mesocosms located at the Ahn Mesocosm Compound at George Mason University have been a part of legacy studies for 12 growing seasons since 2012. In a previous study, each mesocosm wetland was planted with a different level of species richness. After pre-COVID mesocosm studies halted, vegetation communities gradually changed. Soils collected from each mesocosm were scanned with the Nix color sensor to produce 15 unique color variables per mesocosm. Data was grouped by mesocosms that shared the original number of species planted. Preliminary results show that two out of five groups have similar color variable values. Further analysis will continue to see if any differences arise and whether the soil conditions have been altered.
Audio Transcript
Hello! My name is Seung Han, and my project is “Digitizing color of soils in mesocosm wetlands using Nix sensor.”
When we look at soil, the color tells us about soil health and properties such as the presence of organic matter, mineral contents, and changes from fluctuating water levels between the different layers of soil. Here we can see the differences between the soil colors, with the very grey wetland soil on the right versus the dark, fertile soil on the left. The standard method of analyzing soil color has been to use the Munsell soil color chart. This method requires training, is costly, and analysis can vary from person to person.
A new digital method of analyzing soil color is the Nix color sensor. Each Nix scan produces values for 15 color variables from 5 different color spaces. The color variables are L, a, b; C, h; R, G, B; X, Y, Z; and C, M, Y, K.
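These color spaces are mathematical transforms of one another. As a rough illustration (assuming sRGB input and a D65 white point, which may not exactly match the Nix’s internal calibration), an R, G, B reading can be converted to L, a, b values like this:

```python
# Convert an 8-bit sRGB color to CIELAB (D65 white point). This is the
# standard sRGB -> XYZ -> LAB chain, shown only to illustrate how the
# color spaces relate; the sensor's own calibration may differ.

def srgb_to_xyz(r: int, g: int, b: int) -> tuple[float, float, float]:
    def linearize(c: float) -> float:
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # sRGB-to-XYZ matrix (D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z

def xyz_to_lab(x: float, y: float, z: float) -> tuple[float, float, float]:
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white
    def f(t: float) -> float:
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

L, a, b = xyz_to_lab(*srgb_to_xyz(255, 255, 255))
print(L, a, b)  # pure white comes out near L = 100, a = 0, b = 0
```

The same chain, with further standard formulas, yields the C, h and C, M, Y, K variables as well, which is why the 15 Nix variables are highly correlated with one another.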
The study occurred at the Ahn Wetland Mesocosm Compound located in George Mason University’s West Campus. Back in 2012, forty mesocosms, or experimental ecosystems, were set up to study the effects of planting different amounts of species in each tub, ranging from no plants to 4 different wetland plants. The mesocosms were maintained by weeding out unwanted species and keeping the water levels above 5 cm. After pre-COVID studies were completed, the maintenance of these mesocosms stopped in 2019. Now for the big question. After studies and maintenance stopped, are the mesocosm wetland soils still wetland soils?
So now we begin. Each mesocosm had soil cores extracted from each of these five sections. The soil core was cut along the top to create a flat surface to scan, transferred on top of a white sheet, and the Nix was placed on top of a flat, smooth section of the soil for a scan to be performed. This was repeated at least 3 times for each core. Each scan’s data was saved in the Nix app on a smartphone, and the scans were exported into a CSV file for analysis.
Data were organized by grouping mesocosms by their original number of planted species. Preliminary analysis shows that mesocosms originally planted with two and four species have similar values for ten of the color variables: a, b, C, and h in these graphs, and Z, G, B, C, M, and Y in these graphs.
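The grouping step described above can be sketched in a few lines of Python. The column names and mesocosm IDs below are hypothetical, since the exact layout of the exported Nix CSV is not specified in the talk.

```python
import csv
from collections import defaultdict
from statistics import mean

def mean_color_by_richness(csv_path, richness_of_mesocosm,
                           variables=("L", "a", "b")):
    """Average selected color variables over all scans, grouped by
    the number of species originally planted in each mesocosm.

    richness_of_mesocosm maps a mesocosm ID (hypothetical
    "mesocosm_id" column) to its original planted richness.
    """
    groups = defaultdict(lambda: defaultdict(list))
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            richness = richness_of_mesocosm[row["mesocosm_id"]]
            for var in variables:
                groups[richness][var].append(float(row[var]))
    # Collapse each group's lists of scan values into means.
    return {r: {v: mean(vals) for v, vals in per_var.items()}
            for r, per_var in groups.items()}
```

The result maps each richness level to the mean of each color variable, which is the form needed for the group comparisons shown in the graphs.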
Data analysis continues to determine whether the differences can be explained by differences in vegetation communities, by identifying and counting species and calculating percent cover.
I would like to thank my mentor, Dr. Changwoo Ahn, for guidance, encouragement, and believing in me. Thank you to Dr. Stephanie Schmidt for feedback and support this semester. Thanks to Rylee for coming out on hot days to help with some cores and note taking. Special thanks to Trinity Lavenhouse for being a great lab mate, giving support, and being a positive presence. And thanks to OSCAR for funding this URSP project. Take care!
Categories
College of Engineering and Computing OSCAR

Machine Learning Aided Nanoindentation to discover Material Properties

Author(s): Jake Samuel

Mentor(s): Ali Beheshti, Mechanical Engineering

Abstract
Traditionally, identifying material properties requires specific and expensive tests, which usually destroy the material being tested. Machine learning (ML) models could be used with nanoindentation tests to establish a relation between macro properties and the microstructure without needing to understand the physical processes that take place. This work examines how neural networks, a type of ML model, can relate a material’s indentation data to its yield strength and Young’s modulus. After training, the model was able to perform predictions with an error of 0.14. It was concluded that, with the right hyperparameters, a neural network can relate indentation data to macro material properties.
Audio Transcript
Hello, my name is Jake Samuel, and today I will be talking about using machine learning and nanoindentation to discover material properties. First, let’s talk about traditional testing. Traditional strength testing requires a machine such as this one: you insert the material coupon you want to test, and the machine pulls it apart until it breaks. The problem with this setup is that the coupon can be expensive to make and is not reusable, since it is useless once broken, and it is impossible to test materials like the thin film you see here, which is only a couple of micrometers thick. The solution is nanoindentation. In nanoindentation, an indenter applies a force to your material and creates a small indent, producing this force-versus-depth chart. The advantages over traditional testing are that it does not require a large specimen and it is not destructive; it only leaves a small dent in your material. Now, this test gives us a clue as to how the material behaves at the nanoscale, but solving for the material’s nanostructure and its macro properties can be quite a challenge. This is called the inverse nanoindentation problem: trying to recover a strength graph from this nanoindentation graph. I’ve chosen to solve that problem using machine learning. To give a quick rundown, a machine learning model uses some kind of mathematical algorithm to guess your result, and if there is an error in the result, it tweaks the mathematical model in hopes of driving the error lower. There is some elegant linear algebra behind it. The pro is that it is much faster and easier than trying to find your relationships manually, but it comes at the cost of not understanding the physical process behind them.
The machine learning model I chose to implement is the neural network. Neural networks contain neurons like these, each with its own bias and weights, organized into what we call hidden layers. The neurons can apply any kind of mathematical function: a linear function, a ReLU (a cutoff function that zeroes out inputs below a threshold and passes those above it), or a mix of them. Things you can tweak include how the error is calculated, the number of layers, the number of neurons, and the learning rate. An example from the literature is a paper in which researchers used indentation and machine learning to predict the hardness of maize grains. As you can see in this picture, a maize grain is impossible to test on traditional testing machines, but it’s perfect for the nanoindenter, and they were able to successfully predict the hardness of maize grains. It’s a great demonstration of how the indentation technique can be applied to a wide range of materials. My work this semester involved learning how to use and implement neural networks in Python using the PyTorch library. I started with a sample model used to classify iris flowers based on their petal measurements and other data, and then altered it into a regression model that takes these inputs and outputs yield strength and Young’s modulus. Now, where do these inputs come from? If we take another look at our indentation chart, the loading curve, traced as the indenter goes in, can be modeled as a power function: some constant C times depth raised to some exponent n. That C is one of the inputs.
In the unloading curve, traced as the indenter is withdrawn, the slope of the tangent line right here, represented by this S, is another input. The third input is the work ratio: the area under the curve of this graph describes how much work, or energy, is lost in the process of the indentation. So we have 3 inputs and 2 outputs, with 2 hidden layers of 8 and 9 neurons. The number of neurons and the number of hidden layers are fairly arbitrary choices, and I used a linear function to drive my model. For results, I used simulation data from the literature, tested it in the model, and was able to get the error down to 0.14. Because PyTorch does all of its calculations with tensors, and the model has two outputs, the exact physical meaning of this number is hard to interpret. But I was able to demonstrate that neural networks can form the relationships needed to solve the inverse indentation problem. For future work, I plan to expand the scope of the model to take multi-fidelity inputs, meaning inputs from both simulations and real indentation experiments that I hope to conduct in the lab. The model will also be used to test for properties such as creep and fracture toughness, which are hard to test for otherwise. I’d like to acknowledge Dr. Ali Beheshti, my advisor, who was a great help to me this semester; Shaheen Mahmood, who taught me how to use the indenters and make samples for them and helped a lot in the lab; and Dr. Karen Lee, who taught me a lot about research practices and research ethics in my OSCAR class. Thank you for watching.
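The three model inputs described in this talk (the loading-curve prefactor C, the unloading stiffness S, and the work ratio) can be extracted from raw depth-force samples. The sketch below is a simplified illustration under stated assumptions, not the project's actual pipeline: C comes from a linear fit in log-log space, S from the slope between the first two unloading samples, and the work ratio from trapezoidal areas under the two curves.

```python
import math

def indentation_features(depth_load, depth_unload):
    """Extract (C, S, work_ratio) from (depth, force) sample lists.

    Illustrative sketch only; real nanoindentation analyses
    (e.g. Oliver-Pharr) are considerably more involved.
    """
    # Power-law fit of the loading curve P = C * h**n:
    # linear regression of log(P) on log(h).
    xs = [math.log(h) for h, p in depth_load]
    ys = [math.log(p) for h, p in depth_load]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    n = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    C = math.exp(my - n * mx)

    # Unloading stiffness: slope at the start of unloading,
    # approximated from the first two unload samples.
    (h0, p0), (h1, p1) = depth_unload[0], depth_unload[1]
    S = (p1 - p0) / (h1 - h0)

    # Work ratio: plastic (lost) work over total loading work,
    # with both areas computed by the trapezoid rule.
    def area(pts):
        return sum((pa + pb) / 2 * abs(hb - ha)
                   for (ha, pa), (hb, pb) in zip(pts, pts[1:]))
    w_total = area(depth_load)
    w_recovered = area(depth_unload)
    return C, S, (w_total - w_recovered) / w_total
```

Feeding synthetic data generated from a known power law (e.g. P = 2·h^1.5) recovers C exactly, which is a quick sanity check before using real indenter output.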
Categories
College of Engineering and Computing OSCAR

Multi-Expert Debate (MED): An LLM Framework for Analysis

Author(s): Jacob Sheikh

Mentor(s): Ozlem Uzuner, Information Systems and Technology

Abstract
In this work, we introduce Multi-Expert Debate (MED): an LLM framework for analysis. Analysis is an open-ended problem; given the same facts, different people draw different conclusions based on their background, personality, beliefs, and so on. In MED, LLM agents are each initialized with their own personas. Agents are all provided the same problem and the same knowledge and, after coming to their own individual solutions, debate with the other agents until the ensemble produces a singular, refined idea. We also present SumRAG: a summary-based retrieval method to augment LLM generation. We believe this work will establish a valuable baseline against which to measure other approaches to reasoning.
Audio Transcript
Hello. Today, I want to talk about explicit and implicit reasoning in language models. I want to guide this discussion with the question: How can you construct representations of the world in such a way that some agent can navigate those representations to solve problems? In other words, how can you construct an artificial general intelligence?
The first step in answering this question—and what we addressed in our work—was understanding how to navigate representations of knowledge, which is essentially how to reason. There are two approaches to reasoning in language models today: explicit reasoning and implicit reasoning. In our work, we focused on explicit reasoning.
Language models like ChatGPT have shown the ability to improve their responses through reasoning. Shown here is one example technique called chain of thought. On the left, the language model does not reason through its answer and simply outputs “11,” which is incorrect. On the right, the model verbalizes its thought process—explicit reasoning—and arrives at a more accurate answer. Explicit reasoning, therefore, is the process of articulating thoughts step by step to improve the final output. It has proven to be very effective.
The goal of our work was to synthesize explicit reasoning with other emerging techniques in language models, including retrieval-augmented generation (RAG)—querying a database—and multi-agent systems, where multiple LLMs interact. We aimed to combine all three into a unified framework to create the best of what’s currently available in LLM-based explicit reasoning.
Our work aims to produce a framework called Multi-Expert Debate (MED). In MED, we initialize multiple agents (LLMs), each with their own opinions and personas, given access to the same information via the same RAG setup. These agents debate and defend their decisions until they converge on a single, agreed-upon output.
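A toy version of this debate loop might look like the following. The prompts, personas, and convergence test are illustrative assumptions rather than the framework's actual design, and `ask_llm` is a stub standing in for a real LLM call.

```python
def ask_llm(prompt):
    """Stub LLM: replace with a real model call in practice.
    Returns a canned 'consensus' reply to revision prompts so the
    sketch terminates deterministically."""
    if "revise" in prompt.lower():
        return "consensus answer"
    return f"draft for: {prompt[:40]}"

def med_debate(problem, evidence, personas, max_rounds=3):
    """Each persona drafts an answer from the same shared evidence,
    then rounds of debate continue until every agent gives the same
    answer (or the round budget runs out)."""
    answers = {p: ask_llm(f"You are {p}. Using only this evidence:\n"
                          f"{evidence}\nAnswer: {problem}")
               for p in personas}
    for _ in range(max_rounds):
        if len(set(answers.values())) == 1:   # converged on one output
            break
        peers = "\n".join(f"{p}: {a}" for p, a in answers.items())
        answers = {p: ask_llm(f"You are {p}. Revise your answer after "
                              f"reading your peers' positions:\n{peers}\n"
                              f"Problem: {problem}")
                   for p in personas}
    return answers
```

With a real LLM, the convergence check would likely compare answers semantically (or use a judge model) rather than by exact string equality.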
This work was done in the context of medical care—specifically, decision support systems to assist clinicians in diagnosis. To support this, we implemented a summarization-based RAG pipeline using a dataset that includes foundational medical knowledge, case studies, and procedural guidance.
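The idea behind summary-based retrieval can be sketched as follows. A real pipeline would presumably use LLM-generated summaries and embedding similarity; the token-overlap score and the `retrieve_by_summary` helper below are hypothetical stand-ins for illustration.

```python
def retrieve_by_summary(query, corpus, top_k=1):
    """Summary-based retrieval sketch.

    corpus maps full document text -> short summary. Documents are
    ranked by how many query tokens their *summary* shares (a toy
    proxy for semantic similarity), and the full documents are
    returned for the generation step.
    """
    q_tokens = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_tokens & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc for doc, _ in scored[:top_k]]
```

Matching against summaries rather than full documents keeps the index small and focuses retrieval on each document's main point, at the cost of losing detail that only appears in the body text.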
While the system is still under development, we aim to compare its performance with models that use implicit reasoning. In implicit reasoning, the model reasons internally without verbalizing steps. For example, given the question “Find the capital of the state containing Dallas,” the model might internally reason: “Dallas is in Texas, the capital of Texas is Austin,” and output “Austin” without showing its steps. This form of reasoning has been observed but is not always reliable.
The broader objective of our research is to explore implicit reasoning further. For now, we are building a strong explicit reasoning framework as a baseline for future comparison. We’ve also found interesting connections with neuroscience, particularly regarding disentangled representations, which play a key role in how reasoning may be structured.
We are hopeful our work will provide a valuable foundation for evaluating and developing implicit reasoning approaches in the future.