OSCAR Celebration of Student Scholarship and Impact
Categories
College of Engineering and Computing OSCAR

Developing an Affordable Open-Source Bionic Hand

Author(s): Robert Haas

Mentor(s): Nathalia Peixoto, Electrical and Computer Engineering

Abstract
This project explores the feasibility of developing an affordable, open-source bionic prosthetic hand as an alternative for applications where traditional prosthetics are not an option. Developing a functional prototype based on proportions taken from a human hand provided a solid foundation for the rest of the project. The initial prototype served as a testbed for different methods of control, such as input from an electromyography amplifier and an accelerometer module, which, when processed by an Arduino microcontroller, can drive the servos that open and close the hand. The project aims to provide a functional prototype, design schematics, and CAD models as reference material for others pursuing the development and delivery of non-traditional prosthetic options.
Audio Transcript
INTRO:
Welcome to a brief overview of my Undergraduate Student Research Project, Developing an Affordable Open-source Prosthetic Hand.

BACKGROUND:
The initial prototype I designed was part of a group project for my introduction to engineering class. It was controlled by a mobile app that connected to a microcontroller via a Bluetooth module. This prototype had several shortcomings; most notably, the single-motor setup didn't deliver enough power to close the hand properly. This issue was compounded by the hand's lack of flexibility.

SECOND PROTOTYPE:
When developing the second prototype, I mitigated these problems by adding motors to the back of the hand and including a horizontal joint in the upper half of the palm, allowing the hand to flex.

FINGER DEVELOPMENT:
When designing the fingers, I developed multiple prototypes to test different tolerances for the hinge joints. I used dual pivoting joints for the fingers to increase flexibility and extended the supports on the back of each of the links to prevent them from flexing backwards.

MATERIALS
I tested multiple materials for this project, including PLA+, PETG, and resin. Resin provided the highest level of detail. However, the most practical option was PETG: it's stronger and more heat-resistant than PLA+, and unlike resin it can be printed on a standard FDM printer.

CONTROL METHODS
I experimented with multiple control methods, using an off-the-shelf electromyography (EMG) amplifier as well as a custom-built amplifier. I was able to control the hand using input from an accelerometer module: when the module detects a tilt, the motors wind monofilament line, closing the hand; when the module is tilted back, the motors spin in the opposite direction, unwinding the line and opening the hand. One area for improvement is the EMG amplifier, as I was unable to control the hand using its input.
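To make that control loop concrete, here is a minimal Python sketch of the tilt-to-servo logic described above. It is an illustration only: the threshold values and function names are assumptions, the sensor and servo calls are stubbed, and the actual prototype implements equivalent logic as an Arduino sketch.

```python
import time

TILT_CLOSE_DEG = 30.0    # assumed tilt angle that triggers closing
TILT_OPEN_DEG = -30.0    # assumed tilt angle that triggers opening

def read_tilt_deg() -> float:
    """Stub: return the accelerometer module's pitch angle in degrees."""
    return 0.0

def set_servo_speed(speed: float) -> None:
    """Stub: positive winds the monofilament line (close), negative unwinds (open)."""
    print(f"servo speed: {speed:+.1f}")

def control_step() -> None:
    tilt = read_tilt_deg()
    if tilt > TILT_CLOSE_DEG:
        set_servo_speed(+1.0)   # wind the line, closing the hand
    elif tilt < TILT_OPEN_DEG:
        set_servo_speed(-1.0)   # unwind the line, opening the hand
    else:
        set_servo_speed(0.0)    # hold the current position

if __name__ == "__main__":
    for _ in range(5):          # a few iterations of the polling loop
        control_step()
        time.sleep(0.1)
```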

CONCLUSION
There are some improvements that can be made to the project, specifically further testing of the EMG component and exploring other methods of control to provide diverse control options to potential users. Additionally, certain components of the hand, such as the cable guides and elastic retainers, could be redesigned to print as one piece, which would reduce and optimize the materials needed to construct it. I would also like to integrate a Bluetooth component into the hand to allow the user to configure gesture presets through an app.

Categories
College of Engineering and Computing OSCAR

DNA-origami-based biomimetic interfaces for T-cell activation

Author(s): Benjamin Safa

Mentor(s): Remi Veneziano, Bioengineering

Abstract
DNA origami nanoparticles (DNA-NPs) are nanostructures that take advantage of Watson-Crick complementary base pairing to create DNA nanostructures with precise, customizable size and shape within the 10- to 100-nanometer range. DNA-NPs provide a highly programmable interface for functioning as nanocarriers of proteins and are highly biocompatible, allowing them to be administered in vivo, as they can be broken down by natural enzymes within the body (such as DNase). DNA-NPs have also shown significant promise in T-cell activation, acting as nanocarriers for conjugated tumor antigens and antibodies, allowing for the activation of T-cells and recognition of target antigens (such as those overexpressed in tumor cells). Here, we aim to determine favorable nanoscale organizations of simple DNA-NP motifs (specifically ligand stoichiometry, protein orientation, and position) to determine the efficacy of different T-cell signaling pathways in CD8+ T-cell expansion and proliferation. Building on work done under previous URSPs, the existing wheel motif will be utilized to move forward with assembling such interfaces. Various protein conjugation methods will be tested with the motif (mainly copper-free click chemistry techniques, such as DBCO-azide, maleimide-thiol, and NHS/NH2), and conjugation efficiency will be assessed through surface plasmon resonance experiments. Motifs will then be tested with fluorophores and mock proteins (e.g., streptavidin) to verify the nanoscale control of antigen presentation afforded by the DNA origami technique, most likely through atomic force microscopy (AFM) and fluorescence resonance energy transfer (FRET) experiments. After validating optimal conjugation methods with the motifs, interfaces will be produced with attached cytokines and tested by our collaborator in vitro with CD8+ T-cells.
Audio Transcript
Hi everyone. My name is Ben Safa, and my project is DNA-origami-based biomimetic interfaces for T-cell activation.

A key problem in adoptive cell therapy, which is a form of cancer immunotherapy, is the immunosuppressive tumor microenvironment and the related cell exhaustion. Adoptive cell therapy is a form of cancer treatment in which we extract white blood cells from patients, modify them, and then re-administer them. These white blood cells are susceptible to a variety of stresses, but the main problem we're focusing on is cell exhaustion, which causes limited responses across cancers.

Enriching a memory phenotype is integral to solving this problem of cell exhaustion. Different types of platforms have been created for this, but DNA origami may provide a very efficient, scalable, and precise platform for solving this problem. DNA origami refers to DNA nanostructures created from a long, single-stranded scaffold and several short staple strands that help fold the structure into an arbitrary shape.

This research thus attempts to create such interfaces, namely 2D structures, to enrich memory-like qualities in T cells, in the context of improving adoptive cell therapy infusion products.

The methods we employed were, first, design and, second, synthesis. We started with a target geometry, in this case a wheel, and used design software to generate scaffold and staple sequences. We ordered the staple sequences from a manufacturer, synthesized the scaffold using DNA polymerase and materials such as dNTPs, and added the staple strands to get our final structure.

After that, we hope to attach any type of protein we want. Likely this will be stimulatory ligands such as 4-1BB, a common stimulatory ligand for memory T cells. We could also attach fluorophores to track our interfaces, and different types of beads to characterize these structures.

The results are as follows. At the top we have our scaffold synthesis: we started with a template, primers, and additives (dNTPs), which allow us to make the scaffold. We have a faint band at the bottom, which is our single-stranded DNA, the product we want to isolate, as well as our double-stranded DNA at the top. The single-stranded DNA was purified through centrifugation, and after that we folded it with our staple strands and performed a variety of characterizations. At the bottom you'll see a dynamic light scattering (DLS) graph on the left, then an atomic force microscopy (AFM) image, and then a gel electrophoresis image on the right. Our DLS graph shows the diameter of our particles: the design target was around 58 nanometers, and we measured something close to that value, so the result is consistent. Our AFM images showed some defects in our structure, which required us to go back to the staple sequence design; that is what we did and what we're currently working on. And our gel electrophoresis showed that we were able to successfully purify our structures.

Over the course of the spring, we verified the folding of our structures and modified our staple sequences so that we can obtain a structure without the defects you see in these AFM images. We've also proceeded to add overhangs, and we will be trying conjugation protocols over the coming summer. Thank you.

Categories
College of Engineering and Computing OSCAR

Comments to Communities: Modeling Online News-Related Discourse

Author(s): Reva Hirave

Mentor(s): Antonios Anastasopoulos, Computer Science

Abstract
This project presents the development of a lightweight, extensible tool designed to collect and aggregate user-generated commentary from online news platforms, including the New York Times, Fox News, and Reddit. By unifying disparate data formats into a standardized structure, this tool facilitates downstream tasks such as toxicity analysis, network modeling, and discourse comparison. To date, over 9,000 comments have been collected, and preliminary analysis using the Perspective API reveals cross-platform trends in toxicity and engagement. Additionally, early visualizations of user interaction networks explore the extent to which individuals engage outside their ideological or topical communities. Ultimately, this project aims to offer a command-line interface that enables customizable data harvesting across platforms, with filters for topic relevance, discussion tree structure, and other discourse features. This tool lays the groundwork for deeper investigations into online news discourse and community polarization.
Audio Transcript
Hi, I’m Reva, and this is Comments to Communities—a project about modeling how people talk about the news across online platforms.

The first rule of the internet: don't read the comments. Wreck-It Ralph wasn't really wrong. Comment sections are chaotic, often toxic, and sometimes considered the worst parts of the internet. But they're also where some of the most honest and unfiltered public discourse can happen. So instead of ignoring the comments, this project combs through thousands of them.

News sites and social media platforms host ideologically distinct communities. Commenters on The New York Times don’t necessarily talk anything like those on Fox News. Comments can reveal more than just interactions between strangers—they can reflect how online communities construct, challenge, or echo narratives around ongoing social issues.

But right now, existing tools for studying discourse fall short. They usually focus on one platform or treat comments as isolated utterances rather than parts of larger conversations. They also rarely combine toxicity metrics with network structure, which means they don’t fully capture complex social relationships.

This project addresses those gaps. These insights led us to two research questions:

How do different online communities talk about the news, and do they talk across ideological lines?

How can we measure the health of these discussions in ways that go beyond just likes or shares?

To answer these questions, we built a lightweight tool that scrapes and standardizes comments from three platforms: The New York Times, Fox News, and political subreddits. The tool unifies this data into a common format, adding metadata like reply structure and timestamps. This makes it easy to analyze both the content and the shape of these discussions—like who’s replying to whom, and how toxic the exchanges are.
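As an illustration of what that standardized structure might look like, here is a small Python sketch of a unified comment record; the field names and helper function are hypothetical, not the project's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Comment:
    platform: str             # e.g., "nyt", "fox", or "reddit"
    comment_id: str
    parent_id: Optional[str]  # None for top-level comments; encodes reply structure
    author: str
    timestamp: str            # ISO 8601, e.g., "2025-04-01T12:00:00Z"
    text: str
    toxicity: Optional[float] = None  # filled in later, e.g., by the Perspective API

def to_reply_edges(comments: list[Comment]) -> list[tuple[str, str]]:
    """Derive (child, parent) reply edges for downstream network analysis."""
    return [(c.comment_id, c.parent_id) for c in comments if c.parent_id]
```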

So far, we’ve collected about 4,000 comments from The New York Times, another 4,000 from Reddit, and around 1,000 from Fox News. That’s just under 9,000 posts total, and while it’s not enough yet, it’s already yielded some compelling visualizations.

These are reply networks for Fox News, and you’ll also see one for The New York Times. Each node is a comment, and the size corresponds to the number of replies it received. Every edge—or line—represents a reply relationship. Nodes are color-coded by toxicity: green for less toxic and red for more toxic. These toxicity labels were generated using Google’s Perspective API.
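Here is a minimal sketch of how such a reply network could be assembled and colored using the networkx library; the comments and toxicity scores are invented sample values, and the project's actual pipeline and styling may differ.

```python
import networkx as nx
import matplotlib.pyplot as plt

# (comment_id, parent_id or None, toxicity score in [0, 1])
sample = [("a", None, 0.1), ("b", "a", 0.2), ("c", "a", 0.8), ("d", "c", 0.9)]

G = nx.DiGraph()
for cid, parent, tox in sample:
    G.add_node(cid, toxicity=tox)
    if parent:
        G.add_edge(cid, parent)  # each edge is a reply relationship

sizes = [100 * (1 + G.in_degree(n)) for n in G]   # size grows with replies received
colors = [G.nodes[n]["toxicity"] for n in G]      # green (low) to red (high)

nx.draw(G, node_size=sizes, node_color=colors, cmap=plt.cm.RdYlGn_r,
        vmin=0.0, vmax=1.0, with_labels=True)
plt.show()
```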

These networks already hint at platform-specific dynamics and how polarized—or productive—these spaces can be. For example, if we look at the New York Times network, some circles are larger simply because they have more replies. One discussion concerns an op-ed about a former Kamala Harris skeptic, which is represented as the largest node in the middle. As expected, most comments have zero replies and cluster near the original post, but there are a few longer threads as well.

This opens up a range of further research questions, like: Do toxic comments produce toxic replies? Could we predict when a conversation will become toxic?

Why does any of this matter? Tools like this can help journalists, sociologists, and NLP researchers ask new kinds of questions—not just what people are saying, but how they’re saying it. If we want healthier discourse, we first need to understand how people talk. This project offers a step in that direction by making comment sections a little less mysterious and a lot more measurable.

This work builds on research from the 2024 Jelinek Workshop at Johns Hopkins and is supported by the OSCAR Program at George Mason University. I'd like to thank Dr. Antonios Anastasopoulos and the AI-Curated Democratic Discourse team from the JHU workshop.

Thank you so much for listening. Feel free to reach out by email if you want to explore the tool or just talk about the project.

Thank you again.

Categories
College of Engineering and Computing OSCAR

Multi-Expert Debate (MED): An LLM Framework for Analysis

Author(s): Jacob Sheikh

Mentor(s): Ozlem Uzuner, Information Systems and Technology

Abstract
In this work, we introduce Multi-Expert Debate (MED), an LLM framework for analysis. Analysis is an open-ended problem; given the same facts, different people draw different conclusions (based on their background, personality, beliefs, etc.). In MED, LLM agents are each initialized with their own personas. Agents are all provided the same problem and the same knowledge and, after coming to their own individual solutions, debate with the other agents until the ensemble produces a singular, refined idea. We also present SumRAG, a summary-based retrieval method to augment LLM generation. We believe this work will establish a valuable baseline against which to measure other approaches to reasoning.
Audio Transcript
Hello. Today, I want to talk about explicit and implicit reasoning in language models. I want to guide this discussion with the question: How can you construct representations of the world in such a way that some agent can navigate those representations to solve problems? In other words, how can you construct an artificial general intelligence?
The first step in answering this question—and what we addressed in our work—was understanding how to navigate representations of knowledge, which is essentially how to reason. There are two approaches to reasoning in language models today: explicit reasoning and implicit reasoning. In our work, we focused on explicit reasoning.
Language models like ChatGPT have shown the ability to improve their responses through reasoning. Shown here is one example technique called chain of thought. On the left, the language model does not reason through its answer and simply outputs “11,” which is incorrect. On the right, the model verbalizes its thought process—explicit reasoning—and arrives at a more accurate answer. Explicit reasoning, therefore, is the process of articulating thoughts step by step to improve the final output. It has proven to be very effective.
The goal of our work was to synthesize explicit reasoning with other emerging techniques in language models, including retrieval-augmented generation (RAG)—querying a database—and multi-agent systems, where multiple LLMs interact. We aimed to combine all three into a unified framework to create the best of what’s currently available in LLM-based explicit reasoning.
Our work aims to produce a framework called Multi-Expert Debate (MED). In MED, we initialize multiple agents (LLMs), each with their own opinions and personas, given access to the same information via the same RAG setup. These agents debate and defend their decisions until they converge on a single, agreed-upon output.
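As a schematic sketch of that loop (not the project's actual implementation), the Python below initializes persona agents, collects individual answers, and iterates rounds of debate until the answers agree; ask_llm is a hypothetical stand-in for a real model call, and convergence is simplified to exact string agreement.

```python
def ask_llm(persona: str, problem: str, context: str, peers: list[str]) -> str:
    """Stub: return this persona's current answer, given shared knowledge and peers' views."""
    return "diagnosis X"  # placeholder for a real LLM call

def med_debate(personas: list[str], problem: str, context: str, max_rounds: int = 3) -> str:
    # each agent first reaches its own individual solution
    answers = [ask_llm(p, problem, context, peers=[]) for p in personas]
    for _ in range(max_rounds):
        if len(set(answers)) == 1:  # the ensemble has converged on one refined idea
            break
        # each agent revises its answer after seeing the others' positions
        answers = [ask_llm(p, problem, context, peers=list(answers)) for p in personas]
    return answers[0]

print(med_debate(["cautious analyst", "contrarian", "domain expert"],
                 "case facts", "retrieved knowledge"))
```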
This work was done in the context of medical care—specifically, decision support systems to assist clinicians in diagnosis. To support this, we implemented a summarization-based RAG pipeline using a dataset that includes foundational medical knowledge, case studies, and procedural guidance.
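To show the general shape of a summary-based retrieval step, here is a hedged sketch: documents are indexed by short summaries, the query is matched against the summaries, and the full documents are returned as LLM context. The word-overlap scoring is a simple stand-in for a real embedding model, and the corpus entries are invented.

```python
def overlap_score(query: str, summary: str) -> float:
    """Crude relevance score: fraction of query words found in the summary."""
    q, s = set(query.lower().split()), set(summary.lower().split())
    return len(q & s) / (len(q) or 1)

corpus = [
    {"summary": "guidelines for diagnosing atrial fibrillation", "doc": "(full guideline text)"},
    {"summary": "case study of pediatric asthma management", "doc": "(full case study text)"},
]

def retrieve(query: str, k: int = 1) -> list[str]:
    ranked = sorted(corpus, key=lambda e: overlap_score(query, e["summary"]), reverse=True)
    return [e["doc"] for e in ranked[:k]]  # full documents become LLM context

print(retrieve("atrial fibrillation diagnosis"))
```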
While the system is still under development, we aim to compare its performance with models that use implicit reasoning. In implicit reasoning, the model reasons internally without verbalizing steps. For example, given the question “Find the capital of the state containing Dallas,” the model might internally reason: “Dallas is in Texas, the capital of Texas is Austin,” and output “Austin” without showing its steps. This form of reasoning has been observed but is not always reliable.
The broader objective of our research is to explore implicit reasoning further. For now, we are building a strong explicit reasoning framework as a baseline for future comparison. We’ve also found interesting connections with neuroscience, particularly regarding disentangled representations, which play a key role in how reasoning may be structured.
We are hopeful our work will provide a valuable foundation for evaluating and developing implicit reasoning approaches in the future.
Categories
College of Engineering and Computing OSCAR

Machine Learning-Aided Nanoindentation to Discover Material Properties

Author(s): Jake Samuel

Mentor(s): Ali Beheshti, Mechanical Engineering

Abstract
Traditionally, identifying various material properties requires specific and expensive tests, which usually destroy the material being tested. Machine learning (ML) models could be used with nanoindentation tests to establish a relation between macro properties and the microstructure without needing to understand the physical processes that take place. This work examines how neural networks, a type of ML model, could be used to relate a material's indentation data with its yield strength and Young's modulus. After training, the model was able to perform predictions with an error of 0.14. It was concluded that, with the right hyperparameters, a neural network can relate indentation data to macro material properties.
Audio Transcript
Hello, my name is Jake Samuel, and today I will be talking about using machine learning and nanoindentation to discover material properties. I'll talk about traditional testing first. Traditional strength testing requires a machine such as this one: you insert the material coupon that you want to test, and the machine pulls it apart until it breaks. The problem with this kind of setup is that the coupon can be expensive to make and is not reusable, because once you break it, it's useless. It is also impossible to test materials like thin films, like the one you see here, which is only a couple of micrometers thick. The solution is nanoindentation: an indenter applies a force to your material, creating a small indent and producing this force-versus-depth chart. The advantages over traditional testing are that you do not require a large specimen and the test is not destructive; it only leaves a small dent in your material. Now, this test gives us a clue as to how the material behaves at the nanoscale, but solving for the material's nanostructure and its macro properties can be quite a challenge. This is called the inverse indentation problem, because it involves trying to recover a strength graph from this nanoindentation graph, and I've chosen to solve it using machine learning. To give a quick rundown of what machine learning is: a model uses some kind of mathematical algorithm to guess a result, and if there is an error in the result, it tweaks the mathematical model in hopes of making the error lower. There is some nice linear algebra behind it. The pros are that it's much faster and easier than trying to find relationships manually, but this comes at the cost of not understanding the physical process behind them. The machine learning models I chose to implement are neural networks. Neural networks have neurons, each with a bias and weights, arranged in what we call hidden layers, and their activation functions can be any kind of mathematical function: a linear function, a ReLU (a cutoff function that zeroes values below a threshold and passes values above it, or vice versa), or a mix of them. Things you can tweak include how the error is calculated, the number of layers, the number of neurons, and the learning rate. An example of this in the literature is an impressive paper in which researchers used indentation and machine learning to predict the hardness of maize grains. As you can see in this picture, a maize grain is impossible to test on traditional testing machines, but it's perfect for the nanoindenter, and the authors were able to successfully predict the hardness of maize grains. It's a nice demonstration of how the indentation technique can be applied to a wide range of materials. The work I've done this semester involved learning how to use and implement neural networks in Python using the PyTorch library.
I started with a sample model used to classify iris flowers based on their petal measurements and other data, and then altered it into a regression model that takes these inputs and outputs yield strength and Young's modulus. Where do these inputs come from? If we take another look at our indentation chart, you can see that the loading curve, traced as the indenter goes in, can be modeled as a power function (a coefficient C times the depth raised to an exponent n), and that C is one of the inputs. In the unloading curve, traced as the indenter is withdrawn, the slope of the tangent line at the start of unloading, represented here by S, is another input. The work ratio comes from the area under the curve, which describes how much work, or energy, is lost in the process of the indentation; that's the third input. So we have 3 inputs, 2 outputs, and 2 hidden layers with 8 and 9 neurons. The number of neurons and the number of hidden layers are fairly arbitrary choices, and I used a linear function to drive my model. For results, I used simulation data from the literature, tested that data in the model, and was able to get the error down to 0.14. Because PyTorch uses tensors for all of its calculations, I'm not yet sure how to interpret this number physically; I'm not well versed in tensor math, and having two outputs complicates things. But I was able to demonstrate that neural networks can form the relationships needed to solve the inverse indentation problem. For future work, I'll probably expand the scope of the model to take multi-fidelity inputs, meaning inputs from both simulations and the real indentation experiments that I'm hoping to conduct in the lab. The model will also be used to test for properties such as creep and fracture toughness, which are hard to test for otherwise. I'd like to acknowledge Dr. Ali Beheshti, who was my advisor and a great help to me this semester; Shaheen Mahmood, who taught me how to use the indenters and make samples for them and helped a lot in the lab; and Dr. Karen Lee, who taught me a lot about research practices and research ethics in my OSCAR class. Thank you for watching.
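As an illustration of the setup described in the talk (three indentation-derived inputs, hidden layers of 8 and 9 neurons, and two outputs), here is a minimal PyTorch sketch trained on random placeholder data; it is not the project's actual code, and the loss it prints has no physical meaning.

```python
import torch
import torch.nn as nn

# 3 inputs (loading coefficient C, unloading stiffness S, work ratio)
# -> hidden layers of 8 and 9 neurons -> 2 outputs (yield strength, Young's modulus)
model = nn.Sequential(
    nn.Linear(3, 8),
    nn.Linear(8, 9),   # the talk describes linear activations; insert nn.ReLU()
    nn.Linear(9, 2),   # between layers to try a nonlinear variant
)

x = torch.rand(64, 3)   # placeholder indentation features
y = torch.rand(64, 2)   # placeholder (yield strength, Young's modulus) targets

loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print(f"final training loss: {loss.item():.3f}")
```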
Categories
College of Engineering and Computing OSCAR

3D-Printed Porous Scaffolds Application in Bone Implants

Author(s): Joelle Nguyen

Mentor(s): Ketul Popat, Bioengineering

Abstract
Bone defect treatments remain a clinical challenge, as they require synthetic bone scaffolds with strong mechanical and biological properties. There is a need for adequate exchange of waste and nutrients between the implanted bone scaffold and the surrounding tissue. 3D-printed polymeric scaffolds can be designed to ensure a uniform distribution of pore sizes and wall shear stress for proper nutrient exchange, along with consistent cell growth and differentiation. Polycaprolactone (PCL) is a thermoplastic polymer widely employed in bone tissue engineering due to its biocompatibility, biodegradability, processability, and tunable mechanical properties [1]. PCL has been shown to produce effective bone cell growth in PCL-based composite scaffolds combining it with a variety of metal, polymer, and ceramic materials [2]. PCL alone has insufficient osteogenic ability and mechanical strength; therefore, it is often combined with a zinc metal alloy [3] or hydroxyapatite ceramic with polyethylene glycol polymer [4].

This project aimed to evaluate and demonstrate the effectiveness of 3D-printed porous bone scaffolds in supporting the proliferation of osteoblast cells. The primary goal was to assess how a scaffold design provided by 3D-Orthobiologic Solutions (3DOS) promotes cell proliferation and viability. This involved tracking how adipose-derived stem cells (ADSCs) populate the scaffold, differentiate, and interact with polycaprolactone (PCL) alone and in its composite form with an osteoconductive ceramic. After a series of experiments optimizing the porous scaffolds' material composition, layer thickness, and chemical treatment, it was found that none of the 3D-printed scaffolds were cytotoxic, and that cells were able to grow and differentiate into osteoblast cells over a span of 28 days, based on fluorescence imaging and assays relevant to osteogenic differentiation.

References:

[1] A. Fallah et al., "3D printed scaffold design for bone defects with improved mechanical and biological properties," Journal of the Mechanical Behavior of Biomedical Materials, vol. 134, p. 105418, Oct. 2022, doi: 10.1016/j.jmbbm.2022.105418.
[2] M. Gharibshahian et al., "Recent advances on 3D-printed PCL-based composite scaffolds for bone tissue engineering," Frontiers in Bioengineering and Biotechnology, vol. 11, p. 1168504, Jun. 2023, doi: 10.3389/fbioe.2023.1168504.
[3] S. Wang et al., "3D-printed PCL/Zn scaffolds for bone regeneration with a dose-dependent effect on osteogenesis and osteoclastogenesis," Materials Today Bio, vol. 13, p. 100202, Jan. 2022, doi: 10.1016/j.mtbio.2021.100202.
[4] C. C, H. P, P. A, P. Aj, A. F, and Y. J, "Characterisation of bone regeneration in 3D printed ductile PCL/PEG/hydroxyapatite scaffolds with high ceramic microparticle concentrations," PubMed, 2021. Accessed: Sep. 26, 2024. [Online]. Available: https://pubmed.ncbi.nlm.nih.gov/34806738/
Audio Transcript
Hi everyone !

My name is Joelle Nguyen and I’m a senior majoring in bioengineering. I participated in OSCAR’s URSP this spring semester to conduct a research project on 3D-printed scaffolds for bone implants. Here’s a table of contents to capture what I will be going over today in this video presentation.

Starting with background: bone defect treatment is a clinical challenge, as bone implants require synthetic scaffolds with strong mechanical and biological properties. These scaffolds require an adequate exchange of nutrients between the implanted bone scaffold and the surrounding tissue, and they should be able to withstand the external forces and wall shear stress that bone implants usually encounter. 3D-printed polymeric scaffolds pose a potential solution, more specifically 3D-printed scaffolds using polycaprolactone (PCL) composite blends. PCL composite blends have proven effective in promoting bone growth according to multiple researchers. PCL is a commonly used polymer in bone engineering due to its biocompatibility and ease of manufacturing. Composite blends combine PCL with a variety of metals, polymers, or ceramic materials, since PCL alone has insufficient osteogenic ability and mechanical strength for bone growth.

My project aims to evaluate and demonstrate the effectiveness of 3D-printed porous scaffolds made of a composite blend that combines PCL with an osteoconductive ceramic, and to optimize the scaffold design's porosity and thickness for future applications. The porous scaffold designs were provided by 3D-Orthobiologic Solutions, or 3DOS for short.

Now I'll go more in depth on the methodology of my research project. First, my research timeline. Back in February, preparations were made to set up the experimental design and gather all the necessary supplies. I started my first experimental trial in late February, where I wanted to compare how the thickness of the scaffold affected cell proliferation. The next experimental trial started in mid-March, where I wanted to see how thickness and treating PCL-printed scaffolds with 5 M NaOH affected cell proliferation and differentiation. Based on the literature, NaOH enhances the hydrophilicity of PCL and creates a rougher surface for improved cell attachment. The third experimental trial compared the osteogenic cell growth of porous scaffolds made with PCL to porous scaffolds made with the composite blend, after confirming which experimental conditions showed the most cell proliferation in the previous trials. Finally, I have been working on data analysis, which includes counting all the cells imaged under fluorescence microscopy and performing one-way ANOVA on the data I've collected, not only the cell counts but also the multiple assays I've done throughout the semester.

The general procedure for each trial was as follows: I sterilized the samples, seeded them with adipose-derived stem cells at 20,000 cells/mL, stained them for fluorescence imaging, and performed assays on specific days. After all of that, I worked on cell counting and statistical analysis using one-way ANOVA to compare between groups.
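As a small illustration of the one-way ANOVA step, here is a scipy sketch with invented cell counts (not the experiment's data):

```python
from scipy.stats import f_oneway

# hypothetical cell counts per image for two scaffold groups
porous_counts = [120, 135, 128, 140]
nonporous_counts = [118, 130, 125, 133]

f_stat, p_value = f_oneway(porous_counts, nonporous_counts)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
print("significant difference" if p_value < 0.05 else "no significant difference")
```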

I'll only be discussing the results from Experiment #1 here. I have a few fluorescence images for Days 1, 4, and 7, where the top row shows the non-porous PCL scaffold and the bottom row shows the porous PCL scaffold. The blue circles are cell nuclei, and the branches you may see are the cells' cytoskeletons. These images give me a good idea of how many cells there are on each scaffold, as well as how well they are spreading throughout the sample.

Next is another set of images for Days 14 and 28, except this time green staining depicts the amount of osteocalcin released from the cells. Osteocalcin is a protein released by osteoblast cells during bone formation. After imaging all samples under the fluorescence microscope, the general trend I found was an increase in average cell count as the number of layers in the scaffold design increases. One-way ANOVA showed no significant difference in average cell count between the porous and non-porous scaffold designs with the same number of layers, which signifies that a similar number of cells grow on non-porous and porous samples.

However, cell count does not completely capture what's happening on the sample. Therefore, I conducted additional assays to determine what kind of cells are present: are they still adipose-derived stem cells, or have the stem cells differentiated into osteoblast cells? First, I observed the osteocalcin area on the fluorescence images. You can see in this graph that a porous scaffold with 5 layers, or 1 mm depth, showed the highest osteocalcin area in comparison to the other scaffold designs, indicating that the cells imaged may be osteoblast cells. This is reaffirmed by the calcium assay done on Days 14 and 28, as porous scaffolds made with 5 layers, specifically on Day 28, showed a statistically significant concentration of calcium in comparison to the other samples.

Onto the conclusion! Based on Experiment #1, scaffolds printed with more layers showed an increase in cells, but there was no statistically significant difference between the number of cells on non-porous and porous scaffolds with the same number of layers. A statistically significant difference in calcium deposition was observed for the 5-layer porous scaffold compared to the other samples, meaning that porous scaffolds showed earlier differentiation of stem cells into osteoblast cells. Next steps include completing the statistical analysis for Experiments #2 and #3, testing scaffold designs with smaller pores, and testing additional scaffolds printed from the composite blend filament. Much more information will be provided on my poster during the OSCAR celebration on May 6th. Be sure to stop by if you're interested in learning more about how my research turned out.

I’d like to end my presentation with acknowledgements. I’m so grateful for all the support I received throughout this semester. Thank you for listening to my presentation !

Categories
College of Engineering and Computing OSCAR

Novel 3D Bioprinting Method To Create Hydrogel Gradients

Author(s): Elizabeth Clark

Mentor(s): Remi Veneziano, Bioengineering

Abstract
The primary objective of this project is to utilize hydrogels, which are gels composed of polymer(s) suspended in water, to create a gradient (the change from one concentration of hydrogel to another). To address this, I used TEMPO-oxidized cellulose nanofibers (T-CNF) mixed with dye diluted in deionized water. The T-CNF was split into two portions, dyed two different colors, and placed into syringes, which were heated to 70 degrees Celsius for at least five minutes to ensure smooth extrusion. Using a specialized nozzle, I could plug two syringes into one nozzle with a static mixer at the tip to mix the dyed T-CNF. Depending on how fast one syringe extruded versus the other, I could change the color and even mix the two. Extruding the two syringes by hand, I was able to create gradients with the dyed T-CNF and recreate them with different colors. The results indicate that hydrogels can be manipulated to create gradients. Notably, this project uses a hydrogel and dye that can be extruded across a large temperature range; recreating this with different hydrogels and fluorescent dyes compatible with biological material will require more temperature control. This research draws on 3D printers that can do multicolored printing using multiple materials. Bioprinters are 3D printers that use bioinks, which are biologically compatible inks (often hydrogels). This research aims to further explore the potential of creating custom bioinks that can be printed in gradients for use in bioprinters in regenerative medicine.
Audio Transcript
Hello! My name is Elizabeth Clark. I'm a bioengineering student, and my research project was about creating a new method to 3D-bioprint hydrogel gradients, as many cellular functions and processes rely on gradients within the human body.

For some background, hydrogels are hydrophilic polymers primarily comprised of water. A gradient can be thought of as the change in concentration of a property across a material, in this case along a line.
Here, as mentioned on the previous slide, you can see the change in color from blue to pink, which helps us visualize the gradient shown in this picture.
In this project, the hydrogel used was TEMPO-oxidized cellulose nanofibers (which I will refer to as T-CNF). The T-CNF was loaded with printer ink (the vials shown here) to visualize the gradients, as you saw in the previous picture; in this case, magenta and cyan ink were used. Two syringes, pictured here, were then filled with the dye-loaded T-CNF, heated to 70 degrees Celsius using this blanket heater, and plugged into this specialized extruder, which allowed me to plug in two syringes at once. Those syringes would then extrude into a small chamber with a static mixer to evenly mix the hydrogel, which was extruded out of a 22-gauge blunt-tip needle.
By changing the syringes' extrusion rates, the color of the gradient could be changed. The syringes could be guided by hand to create different shapes and designs. Gradients were extruded onto weighing paper or a glass dish and were approximately 8.5-9 cm long, and the color could be changed multiple times in one gradient line.
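As a toy illustration of that principle, the local blend at any point along the line is the ratio of one syringe's flow to the total flow, roughly Q_A / (Q_A + Q_B); the sketch below uses invented flow-rate profiles, not measured values.

```python
import numpy as np

x = np.linspace(0, 9, 50)          # position along a roughly 9 cm gradient line
q_a = np.linspace(1.0, 0.0, 50)    # syringe A's extrusion rate ramps down
q_b = np.linspace(0.0, 1.0, 50)    # syringe B's extrusion rate ramps up

fraction_a = q_a / (q_a + q_b + 1e-9)  # local color fraction after static mixing
for xi, fa in zip(x[::10], fraction_a[::10]):
    print(f"x = {xi:4.1f} cm -> {100 * fa:5.1f}% dye A")
```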
Throughout this project, consistent gradients were achieved over several weeks. This means future alterations to the dyes and the hydrogel itself could be made to create more biocompatible gradients. It also means a bioprinter could be used to create three-dimensional gradients for use in biomedical engineering and regenerative medicine.

Categories
College of Engineering and Computing OSCAR

Laser-Induced Graphene for Flexible Graphene-based Doppler Imaging

Author(s): Philip Acatrinei

Mentor(s): Pilgyu Kang, Mechanical Engineering

Abstract
While commercial blood fluid velocity sensors exist, many cannot be used on pediatric patients or require the child's chest to be open and exposed for sensing. A smaller, flexible device that could be surgically attached to the aorta, the largest artery carrying blood from the heart, would be useful for bedside monitoring of pediatric patients as well as adults. This is achieved with a device utilizing porous laser-induced graphene as a flexible, high-surface-area electrode and PVDF-TrFE (poly(vinylidene fluoride-trifluoroethylene)) as a flexible piezoelectric polymer. The combination of these two materials increases sensitivity while retaining mechanical strength and flexibility. Unfortunately, the design of the device had to be changed halfway through testing, so there is not yet data on the central frequency of the Doppler device or how well it functions. With more testing, these figures will be known, and the device can be properly tuned to achieve the performance numbers required by our collaborators at Children's National Hospital.
Audio Transcript
The video has the transcript embedded in YouTube's closed captioning as well.

Hello everyone, my name is Philip Acatrinei. I am an undergraduate student in the Department of Mechanical Engineering at GMU, working with Dr. Pilgyu Kang to bring laser-manufactured 3D graphene to flexible graphene-based Doppler imaging. This video is part of OSCAR URSP's Spring 2025 Celebration. And without further ado, let's get into it!

So a little bit about our lab,
is, we have a background in 2D materials, micro and nano
manufacturing mechanics,
nano bio sensors, nano-photonics, opto-fluidics, optoelectronics, and plasmonics
We’ve done some collaborative research in the past with Cornell,
NSF, PARADIM, and CNF
but most recently we’ve done a little bit of collaborative research with NASA
and our lab is located at the IABR building
at SCI-TECH campus

So, after a cardio-
-vascular surgery, its really important
to have bedside monitoring of blood fluid velocity, mainly of the aorta
to determine heart health of the patient
that’s great for us adults
but in pediatric surgery, children have much smaller bodies.
and the devices that are currently available for monitoring blood fluid velocity
are made for adults
so for children, they are usually too large and bulky to properly use.
That’s why we believe that it is very important
to have pediatric
blood fluid velocity sensors
to have safe monitoring of post-surgery heart health for children

Commercially available blood fluid velocity sensors have their advantages and disadvantages. Some advantages: they're common in hospitals around the world, and hospital staff are already trained in their use; they're reusable, meaning multiple patients can use the same device over its lifespan; and they're accurate, with real-time data collection, data display, and data storage. But, as touched on before, they do have disadvantages. Because of their size, they increase risk: to use these devices in pediatric surgeries, the child's chest must stay open and exposed, which is not a good thing if you want safe monitoring of blood fluid velocity to determine heart health. And they are not conformable: they're not flexible or conformable to the human body, meaning the chest must stay open and exposed to integrate these sensors and monitor blood fluid velocity.

Some state-of-the-art research tries to address this by using the PPG optical method, the DBUD method, or other approaches, most of which read blood fluid velocity unobtrusively through the skin and fat layer. Being unobtrusive is their great strength, but it is also their greatest weakness: they must be placed very precisely, or they are only usable on specific parts of the body, say a fingertip or a specific artery, and you have to place them very carefully over that artery to make sure you're aiming for it. So they have their pros and cons too.

Our novel approach is a conformable device specifically developed for children. We wanted it smaller and thinner to ensure flexibility and conformability, and the materials need to be body-safe, robust, and flexible. We utilize two materials. The first is PVDF, or polyvinylidene fluoride, a flexible piezoelectric polymer that is better for this application than traditional ceramic piezoelectric elements, which are not flexible. The second is laser-induced graphene, a flexible, high-surface-area electrode that interacts better with PVDF; that better interaction increases device performance. To explain how our device works, I want to give a practical example.

Everyone has experienced the Doppler effect in their life, whether you know it or not. As an example, take an ambulance: everyone has heard an ambulance drive by, sounding high-pitched as it comes toward you and then, the second it passes you, magically dropping in pitch. The difference between the frequency you hear and the pitch the ambulance is constantly putting out is called the Doppler shift, and the Doppler shift is directly proportional to the speed the ambulance is going, or that you are going relative to the ambulance. We use this Doppler shift as our working principle.

We have two Doppler devices: one is an emitter and one is a receiver. We emit ultrasound at a specific central frequency that we know; it bounces off a red blood cell and scatters, losing or gaining energy and thereby decreasing or increasing in pitch. By measuring the shift from the original central frequency, we are able to tell the speed of the red blood cells passing by.
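For reference, the standard continuous-wave Doppler relation is f_shift = 2 f0 v cos(theta) / c; solving for velocity gives the small worked example below, which uses the device parameters mentioned in this talk and an invented shift measurement.

```python
import math

f0 = 10e6                  # central frequency: 10 MHz, ideal for skin/fat penetration
theta = math.radians(15)   # the device's beam angle
c = 1540.0                 # approximate speed of sound in soft tissue, m/s
f_shift = 2600.0           # hypothetical measured Doppler shift, Hz

v = f_shift * c / (2 * f0 * math.cos(theta))
print(f"estimated blood velocity: {v:.3f} m/s")
```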

Again, we have an emitter and a receiver, and because we have two, we use continuous-wave Doppler. If we had one emitter that also acted as a receiver, we would lose information, since it could only receive or send, never both at the same time. Because we have a separate emitter and receiver, we get lossless information, which is really great. Our device is specifically tuned to an angle theta of 15 degrees, so that we target around 4 mm into the aorta, the center, where the velocity is fastest.

A little about the materials I very briefly glossed over, starting with our 3D porous graphene. To manufacture it, we use a photothermal process via a CO2 laser, which we use to lase polyimide sheets, producing our laser-induced graphene. This makes the process simple, scalable, and cost-effective.

Its unique properties are great for our purposes in flexible electrodes. We used a four-point probe method to find the sheet resistance, which we found to be 5.35 ohms per square. That is very low, which is excellent for electronic applications; because of its structure, it is also very mechanically flexible and strong, and it has high carrier mobility, which is great for high-speed electronics.
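For reference, the standard four-point probe conversion for a thin film is R_s = (pi / ln 2)(V / I), about 4.532 V/I. The sketch below shows the arithmetic with illustrative readings chosen to land near the quoted 5.35 ohms per square; they are not our measured values.

```python
import math

V = 1.18e-3  # voltage across the inner probes, volts (illustrative)
I = 1.0e-3   # current through the outer probes, amps (illustrative)

R_s = (math.pi / math.log(2)) * (V / I)  # thin-film sheet resistance
print(f"sheet resistance: {R_s:.2f} ohms/sq")
```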

Some more advantages: it has an increased surface area, so the interface with PVDF is increased and the electrochemical properties that drive device performance are enhanced; and, again, it is mechanically flexible.

A little more about PVDF, or polyvinylidene fluoride. It starts off as a liquid that we pour and then cure at 140 degrees Celsius to become a solid polymer. Because it starts off as a liquid, when we pour it on top of our laser-induced graphene, which is a porous structure, the pores act almost like a sponge, sucking in the PVDF. So when it turns solid, we have a really incredible high-surface-area interface between the PVDF and the laser-induced graphene. Not only do we get a cost-effective additive manufacturing process for depositing the PVDF, we can also adjust the central frequency, which is important for the human body: 10 megahertz is the ideal central frequency for passing through skin and fat, and by changing the thickness of the PVDF layer, which is very easy to do, we can adjust the central frequency to whatever we want. And PVDF is a very flexible piezoelectric polymer, perfect for the wearable and flexible electronics we're interested in.

And a little bit about PDMS, which I didn't touch on. It is the substrate we place our sensor on to keep it at the 15-degree angle theta. PDMS, also known as polydimethylsiloxane, is a sort of silicone. It starts off as two liquid parts, a base and a curing agent, which you pour into a mold; when we remove it from the mold, we get a very flexible silicone. This is great because it is cost-effective, we can make whatever molds we want, and it is incredibly mechanically flexible. But the most important thing for us is that it is acoustically transparent: it does not affect our ultrasound waves in any way, shape, or form as they pass through, with no refraction or energy loss, which is incredible for what we are trying to achieve.

To test sensor performance, we do d33 characterization using an LCR meter to determine piezoelectricity after poling, and we use a phantom heart model in which the blood fluid velocity can be set, so we can test our sensor readings against the known blood fluid velocity to determine accuracy.

Now some conclusions. We have made advancements in acoustic transducers via the laser-induced graphene and PVDF layers, and innovations in wearable electronics, all of these being flexible and conformable to the human body. I did want to say there were some setbacks with this project over the semester. In the first half of the semester, we worked with a design, finished it, and got it ready for testing, and then our collaborators at Children's National Research Institute told us it wasn't good enough and we had to redesign. So we spent the second half of the semester redesigning and producing the new design, and unfortunately we were not able to test this semester with the d33 characterization or the phantom heart, but we hope to do that very soon. For potential applications, we hope to see the device used in pediatric surgeries and integrated with a wireless platform for bedside heart health and blood monitoring.

Some acknowledgements I wanted to make: Noemi Lily Umanzor, who helped make the CAD model of the updated device design, helped with tasks around the lab, and definitely made my life a little bit easier on this project. I also want to acknowledge the Chitnis lab and give thanks to Dr. Parag Chitnis, and especially Ehsan, for helping us pole the PVDF using the poling machine they have on the Fairfax campus. I want to thank our collaborators at George Washington University and our contact at Children's National Research Institute, Dr. Kevin R. Cleary. And that is all. I can't take questions, unfortunately, because this is a video, but I hope you can find me on the day we are doing posters, which should be May 6th. I'll see you there! Thank you.

Categories
College of Engineering and Computing OSCAR

Monitoring Water And Air Quality at Mason

Author(s): Chayanan Maunhan

Mentor(s): Viviana Maggioni, Department of Civil, Environmental, and Infrastructure Engineering

Abstract
Environmental quality plays a key role in both human health and campus sustainability. This research project investigates air and water quality across George Mason University's Fairfax and Arlington campuses to better understand how campus operations, weather, and traffic contribute to local pollution in suburban and urban settings.

The primary goal is to observe patterns and collect baseline environmental data that can support long-term comparison efforts. While this is a short-term project, the findings will help identify how differences in campus layout and activity, such as stream restoration at Fairfax versus dense traffic at Arlington, affect air and water conditions throughout the year.

By sharing results publicly, this research will not only contribute to ongoing sustainability planning at Mason but also provide students with hands-on experience and encourage data-driven decisions for future campus and community environmental strategies.
Audio Transcript
Have you ever wondered where stormwater goes after it runs off campus sidewalks, or how construction might affect the air we breathe?
At George Mason University, I’m working as part of the Patriot EnviroWatch project to monitor how our everyday activities impact water and air quality, and ultimately, the health of our environment.

Hello! My name is Chayanan Maunhan and I am an undergraduate researcher in the Department of Civil, Environmental, and Infrastructure Engineering at George Mason University. Today, I’ll be presenting my research work, which is part of the broader Patriot EnviroWatch project.

My specific focus within the Patriot EnviroWatch project is monitoring water quality across Mason’s Fairfax campus, along with participating in preliminary air quality data collection.
The photos you see here show a few of the key sites where I collected samples under different weather and seasonal conditions.
Research like this is critical because stormwater runoff can carry pollutants that harm local streams, rivers, and eventually the Chesapeake Bay, while air pollution affects campus health and sustainability.
By measuring these indicators, we can evaluate the effectiveness of campus restoration efforts and help guide future environmental management.

In my research, I primarily focused on monitoring water quality across George Mason University’s Fairfax campus.
I used Vernier probes to measure key water quality parameters: pH, turbidity, conductivity, temperature, and dissolved oxygen concentration.
Chlorophyll levels, which provide insight into algae growth and nutrient enrichment, were measured using a Vernier spectrophotometer.
Although my main focus was water quality, I also contributed to preliminary air quality data collection at Mason’s Arlington campus using portable PurpleAir PM2.5 monitors.
The air quality data generally remained within EPA’s acceptable range, but a few instances exceeded 12 micrograms per cubic meter.
While still considered safe for most of the population, these elevated levels could pose some risk to sensitive groups, such as individuals with respiratory conditions.
These early results demonstrate the importance of continuing both water and air quality monitoring as part of Mason’s sustainability goals.
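As a small illustration of screening such readings against the 12 micrograms-per-cubic-meter guideline mentioned above, here is a hedged Python sketch with invented sample values:

```python
readings = [8.4, 10.1, 12.7, 9.8, 13.2, 7.5]  # hypothetical PM2.5 values, ug/m^3

GUIDELINE = 12.0
exceedances = [(i, r) for i, r in enumerate(readings) if r > GUIDELINE]

print(f"{len(exceedances)} of {len(readings)} readings exceed {GUIDELINE} ug/m^3")
for i, r in exceedances:
    print(f"  sample {i}: {r} ug/m^3 (potential risk for sensitive groups)")
```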

One important factor in environmental monitoring is that conditions constantly change.
After rainstorms, turbidity and nutrient levels often rise due to runoff carrying sediments and pollutants into streams.
During hot weather, dissolved oxygen levels can drop, stressing aquatic life.
In dry periods, conductivity often increases because of accumulated salts.

My research activities included collecting water quality field data during different seasons and weather conditions, contributing to preliminary air quality measurements, and analyzing trends in environmental conditions.
These efforts help support Mason’s broader sustainability goals, including improving stormwater management and protecting the Chesapeake Bay watershed.

I would like to sincerely thank my faculty mentor, Dr. Viviana Maggioni, the Patriot EnviroWatch research team, and Mason Facilities for their support and collaboration.
I would also like to acknowledge the OSCAR Undergraduate Research Scholars Program for providing funding and making this research opportunity possible.

Thank you for listening to my presentation.
Through this research, I’m gaining valuable experience in environmental monitoring and helping protect both Mason’s environment and the broader Chesapeake Bay watershed.

Categories
College of Engineering and Computing College of Science OSCAR

The Dirty Consequences of Poor Sleep: Modeling Glymphatic Efficiency Across Diverse Sleep-Wake Cycle Quality

Author(s): Alvaro Olmo Jimenez

Mentor(s): John Robert Cressman, Department of Physics and Astronomy, Krasnow Institute for Advanced Studies

Abstract
The glymphatic system plays a vital role in maintaining brain health by facilitating the clearance of metabolic waste, a process most active during sleep. This clearance is facilitated by changes in extracellular space due to glial and neuronal shrinkage, enabling enhanced flow of interstitial and cerebrospinal fluid. The relation between the change in brain volume and the effectiveness of the glymphatic system has already been described. Despite the evidence linking sleep to brain clearance, the relationship between the quality of the sleep-wake cycle and glymphatic system efficiency remains unexplored, impeding our understanding of how disrupted sleep may increase vulnerability to neurodegenerative diseases by impairing brain waste clearance. This study investigates the relationship between sleep-wake cycle quality and glymphatic system effectiveness by utilizing an existing computational model of neural dynamics. We calibrated the model to replicate real brain activity (matching frequencies and activity with data collected through EEG) during healthy NREM and REM sleep. These cycles were modeled and the resulting change in brain volume examined to assess the performance of the glymphatic system. Then, parameters such as ionic conductance and vascular volume were modified to simulate poor- or high-quality sleep-wake cycles, and the glymphatic system's response was examined. Early findings suggest that high-quality sleep cycles induce larger volume changes and therefore better glymphatic performance. Nevertheless, further analysis is required to more fully assess the system's behavior across all sleep conditions.
Audio Transcript
Hello everyone. My name is Álvaro Olmo Jiménez, and today I’ll be presenting my research on The Dirty Consequences of Poor Sleep: Modeling Glymphatic Efficiency Across Diverse Sleep-Wake Cycle Qualities.
First, we will start by explaining why we are doing this research. Basically, we know that there are established links between sleep and brain clearance. The glymphatic system acts as the brain’s cleaning system, and during sleep, changes in glial and neuronal cell volume expand the extracellular space, which promotes convective fluid flow and waste clearance. Nevertheless, the specific impact of sleep quality on glymphatic function remains unexplored. This knowledge gap limits our understanding of how disrupted sleep may contribute to neurodegenerative disease risk.
Thus, this study aims to explain how the quality of the sleep-wake cycle affects the glymphatic system during sleep.
To do so, we first established brain volume change as our indicator for sleep quality. We chose this indicator because variations in extracellular and intracellular volumes during sleep enhance glymphatic performance, and because the release of sleep-promoting molecules like prostaglandin induces blood vessel dilation and further volume changes.
At this point, we could state that our main focus was to study how volume change is affected by varying sleep quality.
Once we had our indicator for good sleep, we used an existing model of neural dynamics, augmented with glial dynamics, whose behavior is determined by ion concentrations.
This model was calibrated to replicate real brain activity, matching frequencies and activity with data collected through EEG. For example, a frequency of 2.8 Hz was used to simulate NREM and 5.6 Hz to simulate REM.
Moreover, we used an electrical and a volume stimulation as parameters to determine sleep quality: the higher these parameters, the higher the simulated sleep quality. Therefore, a larger volume stimulation produces a larger volume change and thus better glymphatic performance.
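To make this setup concrete, here is a deliberately simplified Python caricature of the idea. It is not the actual neural/glial model used in the study, and every constant in it (stage length, decay rate, drive strength) is an arbitrary demonstration value; it only illustrates how halving the volume stimulation effort yields a smaller cumulative volume change over 20 cycles.

    import numpy as np

    def simulate_cycles(n_cycles=20, stage_s=60.0, dt=0.1, volume_drive=1.0):
        # Alternate NREM/REM stages. (The real model matches 2.8 Hz and 5.6 Hz
        # EEG activity; here the stages only modulate the volume drive.)
        n_steps = int(2 * n_cycles * stage_s / dt)
        volume = np.empty(n_steps)
        v = 0.0
        for i in range(n_steps):
            stage = int(i * dt // stage_s) % 2        # 0 = NREM, 1 = REM
            drive = volume_drive * (1.0 if stage == 0 else 0.5)
            v += dt * (-0.01 * v - 0.002 * drive)     # relax toward a drive-set baseline
            volume[i] = v
        return volume

    high = simulate_cycles(volume_drive=1.0)   # "high-quality" sleep
    low = simulate_cycles(volume_drive=0.5)    # stimulation effort halved
    print(high[-1], low[-1])                   # the larger drive yields the larger volume decrease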
In order to replicate regular sleep, we ran numerous simulations; however, only the most significant ones are shown here.
In this figure we can see three different simulations, all using the same electrical stimulation. The difference between the high and low volume stimulation is that the stimulation effort is halved. We can observe that over 20 cycles there is no significant brain volume change if we do not stimulate the volume, and that the final volume differs somewhat depending on the stimulation.
Now, we can see the transmembrane potential change for the high volume stimulation. One can see that the voltage decreases sharply with the volume stimulation. This makes sense because as the volumes vary, ion concentrations vary too. Thus, we can state that the alterations of pump dynamics and diffusion result in a decrease in the transmembrane voltage.
In this figure, which again shows the high volume stimulation output over the last cycle of the simulation, we can clearly appreciate the change in frequency from NREM to REM with the change in volume. The panel on the right provides further visualization, revealing the change in frequency.
From these figures, which show the change in concentration of intracellular sodium and extracellular potassium over the last cycle for electrical versus non-electrical simulation outputs, we can see how electrical stimulation is fundamental for correctly simulating sleep dynamics. Although it does not seem that important for volume change, in the simulation with electrical stimulation there is a balance between intracellular and extracellular potassium and sodium, while in the non-electrically-stimulated run there is no apparent difference. This happens because the ATP pump is shut off due to the low extracellular potassium and thus cannot transport these two ions correctly. Although these ionic effects may not seem that important, they can be highly significant, as they can alter the signaling properties of the neuron.

This figure shows how the overall volume change varies if sleep quality is disrupted. It is important to note that in the microarousals simulation, three random intervals ranging from 1 to 5 seconds were introduced in each cycle, during which volume stimulation was stopped. Something similar was done for the lower-quality sleep simulation: three random intervals ranging from 5 to 15 seconds were introduced in each cycle, during which the volume stimulation force was halved.
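For clarity, here is a small Python sketch of those disruption schedules as described above. The interval counts and duration ranges come from the transcript; the cycle length and the uniform placement of intervals within a cycle are my assumptions for illustration.

    import random

    def disruption_schedule(cycle_s, n_intervals=3, lo_s=1.0, hi_s=5.0, seed=None):
        # Return sorted (start, end) times, in seconds, within one sleep cycle.
        rng = random.Random(seed)
        intervals = []
        for _ in range(n_intervals):
            dur = rng.uniform(lo_s, hi_s)
            start = rng.uniform(0.0, cycle_s - dur)   # assumed uniform placement
            intervals.append((start, start + dur))
        return sorted(intervals)

    microarousals = disruption_schedule(120.0, lo_s=1.0, hi_s=5.0, seed=1)    # stimulation stopped
    lower_quality = disruption_schedule(120.0, lo_s=5.0, hi_s=15.0, seed=1)   # stimulation force halved
    print(microarousals)
    print(lower_quality)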

We can see the final values for each volume in this next figure.

Although it seems that the volume decrease is larger in the simulation with microarousals, suggesting better glymphatic performance than the varying-sleep-quality simulation, it is not. This is because the microarousals last for less time than the low-stimulation stages. Thus, the simulation with microarousals would show less volume decrease if both kinds of periods, microarousals and low-stimulation stages, lasted the same.

Now, from this data we can conclude that as sleep quality decreases, we observe a reduction in overall volume change and thus in glymphatic efficiency. This is consistent with previous findings that link slow-wave activity and stable sleep patterns with enhanced interstitial fluid movement and metabolic waste clearance.

Moreover, while volume stimulation contributes to mechanical shifts in brain tissue, electrical stimulation proves essential for preserving ionic balance. Without it, ATP-dependent pumps like the sodium-potassium pump become ineffective, leading to disrupted ion gradients and impaired homeostasis.

This underscores the critical role of electrical activity in maintaining proper cellular function, beyond just facilitating volume changes. The breakdown of ionic regulation in the absence of electrical stimulation highlights the interdependence of mechanical and electrophysiological processes in sleep. Together, these findings reinforce the complexity of accurately simulating sleep.

Ultimately, further research is needed in order to replicate sleep more faithfully, accounting not only for volumetric shifts and electrical rhythms, but also for how these elements dynamically interact over time, as well as for the metabolic rate of the pumps.

Categories
College of Engineering and Computing OSCAR

Hagley Library & Archive Management

Author(s): Soo Yoo, Nguyen Pham, Ayan Diraye, Matthew Walsh, Maha Kaleem, Matteen Mahfooz

Mentor(s): Gail Therrien, IST Department

Abstract
Our capstone project involved working with the Hagley Museum & Library in Delaware, U.S. We enhanced Hagley’s archive inventory management processes to enable smoother operations for pulling and returning research materials, as well as viewing statistical information. We carried out this project because we value Hagley’s commitment to supporting innovative research and hope to positively affect the processes for both archivists and researchers.
Audio Transcript
Categories
College of Engineering and Computing OSCAR Undergraduate Research Scholars Program (URSP) - OSCAR

Multiscale Indentation-based Mechanical Characterization for Advanced Alloys Suitable for Aeroengine Applications

Author(s): Mariah Tammera

Mentor(s): Dr. Ali Beheshti, Department of Mechanical Engineering and Shaheen Mahmood, Graduate Student Advisor

Abstract
Multiscale indentation is a reliable method used to extract basic mechanical properties from
materials, particularly structural metals and alloys. Knowing and understanding the mechanical
properties is critical for engineers to effectively and safely design structures and components
based on specific environments, applications, or loads the materials will be subjected to.
Although indentation techniques have been previously utilized to determine basic mechanical
properties, such as elastic modulus, extensive progress has not been made towards the ability to
employ multiscale indentation for extracting advanced mechanical properties (e.g. creep
parameters and fracture toughness properties) in a reliable manner that produces results closer to
the bulk of the material. This project aimed to evaluate creep and fracture toughness properties
for Inconel 718 by utilizing micro-indentation techniques at room temperature. Analysis of the
material microstructure occurred via the use of the Scanning Electron Microscope (SEM). Due to
the limitation of conducting research in one semester, the learning objectives fulfilled were
performing indentation tests to extract basic mechanical properties (i.e. hardness values and
elastic modulus values) and conducting SEM analysis on the indentation site to evaluate the
success of the indentation tests and note observations about the material. Moving forward, future
work will concentrate on building upon the exploration of indentation techniques at room and
elevated temperatures to improve the current ability to determine advanced mechanical properties of materials in an efficient and reliable manner.
Audio Transcript
Hi everyone, my name is Mariah Tammera. This fall, I was working under Dr. Ali Beheshti and
Shaheen Mahmood in the Tribology and Surface Mechanics Lab on Multiscale Indentation-based
Mechanical Characterization for Advanced Alloys Suitable for Aeroengine Applications.
Multiscale indentation is a reliable method used in the field to extract basic mechanical
properties from materials, such as the elastic modulus value, by understanding the relationship
between the indentation load versus penetration depth. However, extensive progress has not been
made towards the ability to employ multiscale indentation for determining advanced mechanical
properties, such as creep parameters and fracture toughness values, to acquire data that is reliable
and closer to the bulk of the material. This project intends to focus on evaluating creep
parameters and fracture toughness values for Inconel 718 by utilizing micro-indentation
techniques at room temperature. After the indentation tests are concluded, the Scanning Electron
Microscope will be utilized to analyze the indentation site. As you can see, Figures 1 and 2
showcase the equipment used in this project.
Before beginning any lab work, I worked on a literature review to learn about creep deformation and what the fracture toughness of a material is. I conducted a literature search with the guidance of Dr. Beheshti to learn about some of the commonly used experimental methods for extracting creep and fracture toughness properties. This preliminary search made clear that the literature contains limited research on utilizing multi-scale indentation techniques to determine creep parameters and fracture toughness properties at both room and elevated temperatures.
An indentation site matrix is a conventionally utilized technique to systematically map out
measurement locations on the sample surface. On the left, Figure 3 represents a 5×4 matrix that
was used to map out 20 places on the sample surface where the indentation tests will occur. As
noted on my slides, it is important that the locations chosen for these indentation tests be on a smooth, flat area, free of holes and pits, and away from the edges of the sample.
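As an aside, laying out such a site matrix is straightforward to script. The Python sketch below is illustrative only; the 200-micrometer pitch and grid origin are assumptions, and in practice the spacing must be large enough that neighboring indents do not interact.

    def site_matrix(rows=5, cols=4, pitch_um=200.0, x0_um=0.0, y0_um=0.0):
        # Return (x, y) stage coordinates, in micrometers, for each indentation site.
        return [(x0_um + c * pitch_um, y0_um + r * pitch_um)
                for r in range(rows) for c in range(cols)]

    sites = site_matrix()
    print(len(sites))   # 20 indentation locations, matching the 5x4 matrix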
Figure 4 represents an example of one of the micro-indentation curves obtained from one of the
20 indentation tests with indentation load on the y-axis and the penetration depth on the x-axis.
I’d like to point out the green box around the horizontal line in the top right, which indicates that the indentation load is held constant there for 5 seconds.
From the 20 indentation tests, an elastic modulus value and hardness value were derived using
the Anton Paar Indentation software. The average elastic modulus value found was
approximately 160.04 ± 2.04 GPa and the average hardness value found was approximately 2.46
± 0.09 GPa.
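The transcript does not detail the software’s internal calculation, but instrumented-indentation packages commonly implement the standard Oliver-Pharr analysis, sketched below in Python. The input numbers here are fabricated for illustration (chosen so the outputs land near the reported averages, with an assumed sample Poisson ratio of 0.3) and are not the project’s measured data.

    import math

    def oliver_pharr(P_max_mN, h_max_nm, S_mN_per_nm, nu_sample=0.3):
        # Standard Oliver-Pharr analysis for a Vickers tip.
        eps, beta = 0.75, 1.012                        # indenter geometry constants
        h_c = h_max_nm - eps * P_max_mN / S_mN_per_nm  # contact depth (nm)
        A = 24.504 * h_c**2                            # projected contact area (nm^2)
        H_GPa = P_max_mN / A * 1e6                     # hardness (mN/nm^2 -> GPa)
        E_r = (math.sqrt(math.pi) / (2 * beta)) * S_mN_per_nm / math.sqrt(A) * 1e6  # reduced modulus (GPa)
        E_i, nu_i = 1141.0, 0.07                       # diamond indenter properties
        E_s = (1 - nu_sample**2) / (1 / E_r - (1 - nu_i**2) / E_i)  # sample modulus (GPa)
        return E_s, H_GPa

    # Illustrative inputs only (max load, max depth, unloading stiffness):
    E, H = oliver_pharr(P_max_mN=500.0, h_max_nm=3009.0, S_mN_per_nm=2.46)
    print(round(E, 1), round(H, 2))   # roughly 160 GPa and 2.5 GPa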
After the indentation tests were successfully finished, the sample was taken and analyzed under
the SEM. Figure 5 is a close-up of one of the indentation sites, showcasing the square, pyramid-shaped impression of the Vickers tip; the precise diagonal lines across the indentation site show that the tip is sharp enough. Figures 5 and 6 are indicative of a successful indentation site due to minimal plastic deformation, as we see minimal surface features, like raised lines or deformation bands, around the indentation site. Lastly, there is no visible cracking along the edges and outer corners of the indentation site, which signifies that Inconel 718 is a relatively ductile material.
Due to the limitation of conducting research in one semester, the learning objectives fulfilled were performing indentation tests to extract basic mechanical properties and conducting SEM analysis on the indentation site. Despite not completely fulfilling the project objectives, the results tell us that exploring multi-scale indentation techniques is a promising method to determine advanced mechanical properties at room and elevated temperatures and to obtain values that are reliable and closer to the bulk of the material. The advanced mechanical properties gleaned will only benefit future researchers and engineers regarding material selection in a variety of fields, particularly aerospace.
I’d like to thank George Mason University’s Undergraduate Research Scholars Program at the
Office of Student Creative Activities and Research for the funding that allowed me to contribute
to this project, and I’d also like to thank Dr. Beheshti and Shaheen for all of the mentoring,
training, and support they have each given to me. They are both dedicated professionals, and it
was a pleasure to work with them.
Lastly, these are my references. Thank you very much for your attention.