OSCAR Celebration of Student Scholarship and Impact
Categories
College of Engineering and Computing OSCAR

Purification of DNA Origami Nanoparticles

Author(s): Mahin Chowdhury

Mentor(s): Remi Veneziano, Bioengineering

Abstract
The main purpose of this undergraduate research project is to find a more efficient and viable method of purifying DNA origami nanoparticles, with the intent of increasing product yield. Based on studies such as “DNA Origami Presenting the Receptor Binding Domain of SARS-CoV-2 Elicit Robust Protective Immune Response” by Veneziano et al., the optimal purification method currently available is filtration-based ultracentrifugation. The Amicon Ultra filters assist in separating the desired nanoparticle structures from residual particles, oligonucleotides, and byproducts. However, this method often leads to a low yield of DNA nanoparticles, which is not practical to reproduce in a clinical or industrial environment. For this reason, the objective of the project is to test multiple purification methods, including the current one, and identify the best among them. These methods include filters with different membrane compositions, column-based bead purification, and others. The expectation for this project is to observe and analyze the purification results from each method and determine the best method based on yield.
Audio Transcript
WEBVTT

1
00:00:00.830 –> 00:00:16.510
Mahin Chowdhury: Alright, hello! My name is Mahin Chowdhury, and this is ‘The Purification of DNA Origami Nanoparticles’. Firstly, I would like to thank Dr. Remi Veneziano for helping me with this project from start to finish, and I would also like to thank the OSCAR program for providing me with this wonderful opportunity.

2
00:00:17.650 –> 00:00:37.060
Mahin Chowdhury: So, the overview of the project: DNA origami nanoparticles are a new vaccination approach that represents the next generation of vaccine delivery to fight infectious diseases. The current method for purifying these nanoparticles relies upon ultracentrifugation using Amicon Ultra filters, which typically results in a low yield.

3
00:00:37.060 –> 00:00:38.860
Mahin Chowdhury: In a laboratory environment this is-

4
00:00:38.940 –> 00:00:40.710
Mahin Chowdhury: -standard, pretty all right.

5
00:00:40.760 –> 00:00:57.070
Mahin Chowdhury: However, if you were to scale this up to the industrial level, it becomes a lot more complicated, as this will increase the costs to manufacture and produce the nanoparticles. So the main objective is to find a way that could help reduce the cost or

6
00:00:57.100 –> 00:01:00.760
Mahin Chowdhury: purify the structures as much as possible.

7
00:01:02.340 –> 00:01:08.960
Mahin Chowdhury: The methods that we will be using: So we will be using 3 different DNA nanoparticles of

8
00:01:08.980 –> 00:01:22.690
Mahin Chowdhury: 3 different sizes: basically a 6-Helix Bundle, which is referred to as 6HB, a Pentagonal Bipyramid (PB), and a Pentakis Dodecahedron (PD). As I mentioned before, these particles are different from one another.

9
00:01:22.730 –> 00:01:33.020
Mahin Chowdhury: They have different properties and are shaped differently. There are some similarities between 6HB and PB in terms of size (base pairs).

10
00:01:33.360 –> 00:01:36.170
Mahin Chowdhury: Pentakis Dodecahedron, on the other hand, is a lot bigger.

11
00:01:36.460 –> 00:01:43.420
Mahin Chowdhury: So we will be trying to purify these 3 types of nanoparticles, using the methods shown below

12
00:01:44.740 –> 00:01:59.890
Mahin Chowdhury: The first method we used is the regenerated cellulose membrane filter. As you can see, we have the experimental concentrations retrieved from the trials and the theoretical concentrations that were calculated based on the volume retrieved at the end of the trial.

13
00:02:00.240 –> 00:02:10.600
Mahin Chowdhury: As you can see, there are some differences between the theoretical and the experimental concentrations. I encourage you to pause and look through these diagrams.

14
00:02:10.800 –> 00:02:18.500
Mahin Chowdhury: These diagrams and data can be compared to the agarose gel that we obtained via gel electrophoresis.

15
00:02:18.740 –> 00:02:32.360
Mahin Chowdhury: These bright bands are the nanoparticles. The bands below are the staple strands, or any byproducts that may come off during purification of the actual nanoparticles.

16
00:02:32.430 –> 00:02:36.960
Mahin Chowdhury: These are what we’re trying to get rid of essentially, as you can see.

17
00:02:37.010 –> 00:02:43.120
Mahin Chowdhury: The brightness indicates the concentration of both the nanoparticles and the staple strands.

18
00:02:43.570 –> 00:02:49.990
Mahin Chowdhury: So the brighter it is, the more concentrated it is. Meaning, there’s more of it; there’s a lot in the actual solution.

19
00:02:50.750 –> 00:03:06.030
Mahin Chowdhury: As you can see with the 30 kDa filter, for each structure, each band becomes fainter and fainter as you go up the scale of filters, which indicates that the filters

20
00:03:06.220 –> 00:03:13.570
Mahin Chowdhury: do filter out the staple strands or byproducts as much as possible.

21
00:03:13.870 –> 00:03:15.570
Mahin Chowdhury: Moving on to trial 2.

22
00:03:15.650 –> 00:03:21.770
Mahin Chowdhury: We can see that it is fairly consistent in comparison to trial 1, and

23
00:03:21.900 –> 00:03:25.660
Mahin Chowdhury: if you compare it to the gel from electrophoresis

24
00:03:25.700 –> 00:03:35.150
Mahin Chowdhury: you can see that these bands become faint or barely visible, which shows that it has become pretty pure, and

25
00:03:35.150 –> 00:03:52.780
Mahin Chowdhury: based on the comparison between the theoretical and the experimental concentrations, we can determine whether we lost the majority of the nanoparticles or retained them. In some cases we lose a lot, and in some we aren’t able to purify as much.

26
00:03:53.010 –> 00:03:57.120
Mahin Chowdhury: Moving on to the Spin-X polyethersulfone filters.

27
00:03:57.260 –> 00:04:05.080
Mahin Chowdhury: These filters are used in a similar manner to the cellulose filters. As you can see-

28
00:04:05.670 –> 00:04:25.000
Mahin Chowdhury: -you have the 6HB, which is similar in the theoretical and experimental concentrations. However, if you were to compare it to the agarose gel, you have a lot of staple strands, so you can’t always rely upon the numerical data alone. You have to use both the data and the visual gel.

29
00:04:25.190 –> 00:04:31.400
Mahin Chowdhury: In which case you can see that, for trial 1 at least, many of these are not that pure.

30
00:04:32.030 –> 00:04:39.690
Mahin Chowdhury: Moving on to trial 2, using the same gel, it’s pretty consistent, and not as pure as it would seem.

31
00:04:39.740 –> 00:04:43.120
Mahin Chowdhury: Especially for PD, since the bands are very bright.

32
00:04:44.190 –> 00:05:01.580
Mahin Chowdhury: Next are the Spin Kleen columns by Bio-Rad. They have a very high theoretical concentration but a low experimental concentration, which typically indicates that we lost a lot more of the nanoparticles. And, as you can see here,

33
00:05:02.160 –> 00:05:06.100
Mahin Chowdhury: we actually have a clear

34
00:05:06.320 –> 00:05:15.770
Mahin Chowdhury: band for the nanoparticles, but not a band for the staple strands, which means we were able to purify it. However, at that cost, we lost a lot of product.

35
00:05:16.660 –> 00:05:28.630
Mahin Chowdhury: The same holds for trial 2, which showed the same consistency as trial 1: we lost a lot more of the nanoparticles, but we were able to purify as much as we could.

36
00:05:29.440 –> 00:05:49.230
Mahin Chowdhury: Moving on to the Zeba spin desalting columns, we can see that the experimental concentration is really high compared to the theoretical concentration, and this is also backed up by the bands. The staple strand bands that we see here, especially for PD, are pretty bright, indicating that these are not as pure as they should be.

37
00:05:50.520 –> 00:05:54.140
Mahin Chowdhury: The same goes for trial 2, which shows the same consistency.

38
00:05:54.500 –> 00:06:01.490
Mahin Chowdhury: It just shows that these columns are not a viable method for purifying the nanoparticles.

39
00:06:02.640 –> 00:06:04.330
Mahin Chowdhury: So the conclusions

40
00:06:04.380 –> 00:06:22.040
Mahin Chowdhury: Based on the data, each of the different nanoparticles is best purified by a specific method. PD, for instance, is very well purified using the 100 kDa cellulose membrane filter.

41
00:06:22.050 –> 00:06:26.750
Mahin Chowdhury: There is still some excess material that you may have, but

42
00:06:26.880 –> 00:06:44.460
Mahin Chowdhury: you still have a good amount of nanoparticles. The same goes for the other nanoparticles. They all have a different purification yield, depending on whether or not the method, whether filter or column, can purify them based on their structure and size.

43
00:06:44.780 –> 00:06:56.690
Mahin Chowdhury: So, as stated, some methods would just result in a somewhat pure structure, but with a high yield. Other methods

44
00:06:56.770 –> 00:06:59.780
Mahin Chowdhury: give a very pure structure, but a low yield.

45
00:06:59.810 –> 00:07:02.570
Mahin Chowdhury: So is this optimal for industrialization?

46
00:07:03.150 –> 00:07:04.560
Mahin Chowdhury: Not as much.

47
00:07:04.630 –> 00:07:17.860
Mahin Chowdhury: It could be better, but that is mainly because we just don’t have a very viable method at this point. The current method is still the best method.

48
00:07:17.920 –> 00:07:24.710
Mahin Chowdhury: On the other hand, we were able to find that many different structures are purified better with different methods, so there may be a

49
00:07:24.740 –> 00:07:30.860
Mahin Chowdhury: better way to purify each structure based on the specific method that we use.

50
00:07:31.050 –> 00:07:44.890
Mahin Chowdhury: So what’s next? The next step is to continue searching for a better solution. This is not the end; this is merely the beginning of finding a better way to purify these nanoparticles,

51
00:07:45.020 –> 00:07:50.950
Mahin Chowdhury: depending on their size or shape. We just have to find something that works for that specific nanoparticle.

52
00:07:53.250 –> 00:08:06.870
Mahin Chowdhury: Thank you so much for listening to my presentation. Please leave any questions if you have any. Once again, thank you so much to Dr. Veneziano and the OSCAR program, and thank you again for listening to my presentation. I hope you have a great one.
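
Editor's note on the yield comparison discussed throughout the talk: the sketch below shows one way the theoretical concentration and percent recovery could be computed from the volume retrieved at the end of a trial. The talk does not give the lab's exact formula, so this formulation, the function names, and all numbers are illustrative assumptions.

# Illustrative sketch (an assumption, not the lab's exact calculation):
# compare the experimental concentration measured after purification with the
# theoretical concentration expected if no nanoparticles were lost.
# All numbers are made up for illustration.

def theoretical_conc(loaded_conc_nM, loaded_vol_uL, final_vol_uL):
    """Concentration expected in the final volume if every particle were recovered."""
    return loaded_conc_nM * loaded_vol_uL / final_vol_uL

def percent_yield(experimental_nM, theoretical_nM):
    """Fraction of nanoparticles actually recovered, as a percentage."""
    return 100.0 * experimental_nM / theoretical_nM

theo = theoretical_conc(20, 500, 50)   # 500 uL at 20 nM concentrated to 50 uL -> 200 nM expected
print(percent_yield(150, theo))        # 150 nM measured -> 75.0 percent recovered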

Categories
College of Humanities and Social Science OSCAR

The Effect of 3D/VR on the Perception of Robot Humanlikeness

Author(s): Min Ji Kim, Rydia Weiland, Sara Weiland

Mentor(s): Dr. Elizabeth Phillips, Psychology

Abstract
Humanlikeness is a quality possessed by social robots tasked for human interaction. This quality has remained ambiguously defined in literature, despite the notion that social robots with humanlike qualities tend to be better received by users. Currently, the Anthropomorphic Robot (ABOT) Database serves as a validated resource for robot designers and researchers to reference a robot’s humanlikeness score. However, one of the limitations of this database is that the scores are based on 2D images of the robots, rather than based on the actual embodied robots. To combat this limitation, the current study aims to research if having the robots in a 3D Virtual Reality (VR) space makes a difference in the humanlikeness scores. This current study focuses on collecting the humanlikeness scores of robots in VR, and further aims to compare the scores with the scores that are currently listed in the ABOT Database. To accomplish this, 16 robot models were put in a VR environment where participants rated the humanlikeness of each robot. All the robots were previously rated in the current ABOT database so the scores from the VR environment and the ABOT database could be compared. Results from 11 participants show no significant difference between 2D humanlikeness scores and 3D/VR humanlikeness scores. The results of this paper will validate the progression of continuing to build a 3D visualized database of humanlike robots while providing further context for how perceptions of humanlikeness differ throughout 2 and 3-dimensional spaces.
Audio Transcript
Hello, everyone. My name is Rydia Weiland and I’m an undergraduate at George Mason University studying Psychology for the purpose of understanding social robots and our perceptions of them, as well as how to make them better companions, teammates, and assistants.

>CHANGE SLIDE< Today I want to talk to you about humanlike robots. We're going to explore what a humanlike robot is, why they're important in modern society, and how our perception of them changes when they are presented in 3D, particularly virtual reality. To start, though, let's talk about social robots. >CHANGE SLIDE< First question: What are social robots and what is robot humanlikeness? >CHANGE SLIDE< A social robot is an autonomous bot that has been tasked with communicating and/or interacting with a human being or another autonomous agent. Oftentimes, these robots are considered to be embodied and are built with human interaction in mind. Many social robots are humanlike and some are not. Thus, we get a whole range of designs that possess a degree of humanlikeness, or in literal terms, how similar the robot physically looks to a human being.

>CHANGE SLIDE/SHOW EXAMPLES< Some examples include Jibo up here in the top left, which was a social robot intended to live with you in your home and provide entertainment and companionship for the whole family. Aibo, the dog robot on the bottom, was made by Sony for the purpose of, you guessed it, being a robot dog companion. And last but not least, Nao to our right is a social robot that can be programmed for a huge variety of tasks, but he is popular in settings such as healthcare centers and as an assistant for various companies to inform and entertain visitors.

>CHANGE SLIDE< So, something that these three and many other wonderful robots share is a quality called humanlikeness, or the degree to which a robot resembles a human being. As for why their humanlikeness matters, literature shows that a humanlikeness effect exists, wherein people's responses to more humanlike robots tend to be more positive. However, people's receptiveness towards the robot depends upon the context in which it is used (for example, an industrial robot with low humanlikeness scores versus a medical assistant robot with a high score). And of course, there is the issue of robots being perceived negatively once we hit the uncanny valley, which we want to avoid. So, it's important to have a way to systematically quantify humanlikeness in terms of design, because we may want to make certain adjustments depending on the task and context the robot will be in. Alright, so we've determined humanlikeness of social robots is important. How do we go about measuring it?

>CHANGE SLIDE< In the past, a systematic way to determine a robot's humanlikeness did not exist. Then the Anthropomorphic Robot Database showed up! The Anthropomorphic Robot (ABOT) Database stores over 250 humanlike robots with associated humanlikeness ratings based on each robot's appearance in 2D. The original ABOT study found that humanlike appearance is based on three dimensions (body manipulators, face, and surface), and as such, each robot has a unique overall humanlikeness score. From this data, a predictor model was made that intends to predict a robot's humanlikeness based on the image that has been input. This research is fabulous because it offers a way to systematically quantify the concept of humanlikeness. One limitation of this database, though, is that presently, data for these bots only exist in 2D. So, my goal with this project is to expand upon this work.
>CHANGE SLIDE< My project takes 16 robots from the Anthropomorphic Robot Database and visualizes them in 3D through representative models of the robots. The purpose of visualizing these robots in 3D is to understand if having a more holistic view of the robots changes how they are perceived, versus viewing a 2D image of the robots. VR has been established as an effective way of increasing a sense of immersion as well as realism. If we can bring these robots to 3D, we may be able to save a lot of money and time with a more lifelike example of how these robots would look before they're even produced.

So without further ado, let's take a look at the actual procedure. Prior to the experiment, participants took a basic demographics questionnaire, as well as a virtual reality sickness questionnaire to measure their propensity for experiencing VR sickness. This was to cover the concern that VR can cause some degree of discomfort for participants. Luckily, though, since our experiment was conducted entirely seated, most participants scored pretty low and were able to participate. >PLAY VIDEO< During the experiment, participants were asked to wear an HTC Vive Pro 2 headset and make selections onscreen with the controllers. Participants had the option of rotating the robot 360 degrees as well as adjusting its position relative to them. They read a short instructional segment and proceeded to rate robots on their physical humanlikeness from 0 to 100, with 0 being not humanlike at all and 100 being extremely humanlike. The robot models are scaled within the environment to their real-life size as closely as possible. After the experiment is over, the participant removes their headset and is thanked for their time.

So, let's talk about what that participant data showed. An independent samples t-test was performed to compare the humanlikeness scores between VR and the 2D ABOT database scores. The robot scores in 2D versus 3D were found to not be significantly different from each other. So, since robots rated in VR tended to be rated similarly to their 2D image counterparts, there are a few possible interpretations of these results. Firstly, this may contribute evidence for researchers and designers who were looking to confirm whether the original ABOT database's predictor tool is indeed an effective way to assess their design's humanlikeness. If VR represents a more realistic model but scores remain unchanged, it could be that using the 2D image predictor is sufficient and we do not need to go to the next stage of using VR with 3D modelling for a more accurate view. Another possible interpretation is that the perception of overall humanlikeness does not change whether the robot is presented in 3D versus 2D. As humanlikeness is still relatively new in the literature and what is known is limited, this result would help expand the breadth of what is known about the perception of humanlikeness in multiple dimensions and give way to potential follow-up studies to narrow the concept further.

>CHANGE SLIDE< To touch briefly on some limitations of this study, we were working with a low sample size of only 11 participants and robot models that were chosen based on availability, rather than an even distribution of low humanlike robots to high humanlike robots across the board. >CHANGE SLIDE< In conclusion, though we obtained a null result, we are nevertheless excited by the potential benefits of continuing to conduct research on robot humanlikeness in VR.
We plan to add more models in the future and continue participant data collection into the summer, and we hope that more companies will make their models available in 3D format so that we may visualize them and assess their humanlikeness in VR. Thank you so much.
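Editor's note: for readers curious about the analysis mentioned above, this is a minimal sketch of an independent-samples t-test comparing VR ratings with 2D ABOT scores, the test the speaker describes. The rating values are placeholders, not the study's data.

# Minimal sketch of the comparison described above: an independent-samples
# t-test between humanlikeness ratings collected in VR and the 2D ABOT
# database scores. The values below are placeholders, not study data.
from scipy import stats

vr_scores   = [34.2, 51.0, 12.5, 78.3, 45.1, 60.7, 22.9, 81.4]   # hypothetical
abot_scores = [30.8, 49.5, 15.0, 75.9, 47.2, 58.3, 25.6, 79.0]   # hypothetical

t_stat, p_value = stats.ttest_ind(vr_scores, abot_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a p above .05 would mirror the null result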

Categories
College of Public Health OSCAR

The Structural Determinants of Health: How Structural Racism Facilitates Community Violence in Washington, DC

Author(s): Nadia Altaher

Mentor(s): Debora Goldberg, Public Health

https://youtu.be/8ovMauNmhqE

Abstract
Racism is deeply embedded in social determinants of health, establishing racial health inequities in populations of color. Recent measures have been taken to address this issue in Washington, DC including the 2020 Racial Equity Achieves Results (REACH) Amendment Act, which focuses on racial equity, social justice, and economic inclusion. To further these efforts, there is a need to understand the relationship between structural racism, unemployment, poverty, and violence. This research explores the correlation between historic racism, social determinants of health, housing policies, and community violence in Washington, DC. Methods include mapping racial covenants from 1940 to 2010, neighborhood displacement, and social determinants of health. Current mortgage lending in the neighborhoods across the city was used to measure the housing market and lending discrimination. Demographic data, drawn from various sources, were used to measure social determinants of health across statistical neighborhoods. Findings indicate Wards 5, 7 and 8, in South and Eastern parts of DC, have the highest rates of crime, unemployment and concentrated poverty, lowest house lending rates, and experienced the most housing displacement from 1940-2010. The district’s racial dissimilarity index of 70.9 indicates that the city is still highly segregated and that zip codes play a significant role in individual health and exposure to violence. To achieve health equity, measures must be taken to dismantle structural racism that include community based participatory research and policies that incorporate a historical context of the problem as well as voices of community members.
Audio Transcript
Hello, everyone. My name is Nadia Altaher. And today I’ll be discussing my research project on the structural determinants of health, specifically how systemic racism facilitates community violence in Washington, DC.

So what is structural racism? Why is it important? It is a product of history but has adapted its contexts over time to create conditions that allow for worse health outcomes in racially marginalized populations. And we still see these worse health outcomes and disparities today, as we see higher rates of maternal mortality, lower life expectancy, and worse mental health outcomes among racially marginalized populations. It permeates the various social determinants of health, which are the environments in which people live, work, and play, and determines a lot of our health outcomes. As you can see, health equity and health equality are completely different, as health equity strives to provide greater resources and attention to disadvantaged populations. Structural racism is a driving force behind many of the health inequities we see today.

So why conduct this research? Well, previous research has shown that mental health problems are more prominent among populations experiencing racism or discrimination. While young individuals are disproportionately impacted by community violence, certain populations are more at risk than others, specifically those experiencing systemic racism, bias, and discrimination, which all impact adverse childhood experiences and the environment in which they grow up. So drivers for violence impact communities of color and place residents at greater risk for poor mental health outcomes, tying mental health and community violence and their social determinants together. So I wanted to look at this theoretical framework between historic racism, social determinants of health, and housing and lending policies in DC, as they have affected gentrification, community demographics, and community displacement over time. I did this by measuring racial covenants from 1940 to 2010, looking at current and previous housing and lending policies, as well as demographic data from the DC Department of Health.

So my results found that Wards 5, 7, and 8, historically Black neighborhoods according to the racial covenants, are characterized by the highest poverty rates, gross rent rates, and violence mortality rates. They also have the lowest life expectancy, educational attainment, and income levels.

These wards also make up the majority of the non-Hispanic Black community, as 93.7% to 98% of it is located in Southeast and Northeast DC, while the non-Hispanic White population is about 61.1% to 72.8% in Northwest and Southwest DC. Ward 3, which has the highest percentage of White Americans, has a life expectancy 16 years higher than Ward 8, which also has a 6 times higher rate of infant mortality, another indicator of systemic racism.

I also wanted to point out that the racial dissimilarity index in DC is 70.9, meaning that 70.9% of White residents would have to move to achieve complete White/Black integration in the city, concluding that the city continues to be highly segregated.
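
Editor's note: the dissimilarity index referenced here is conventionally computed from neighborhood-level population counts; the talk does not give the formula, so the standard definition is reproduced below, where \(w_i\) and \(b_i\) are the White and Black populations of neighborhood \(i\) and \(W\) and \(B\) are the citywide totals.

\[ D \;=\; \frac{1}{2}\sum_{i}\left|\frac{w_i}{W}-\frac{b_i}{B}\right| \]

A value of \(D = 0.709\) corresponds to the 70.9 figure cited above: roughly 70.9% of either group would have to change neighborhoods to even out the two distributions.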

The DC homicide rate is 16.0 per 100,000. That might seem low, but it is three times higher than the national average of 5.3. 74% come from firearms and 26% from suicides. But the most important thing to look at is that 94% of the victims were Black and 88% were males living in Wards 5, 7, and 8. As you guys can see the browner the color, the higher the violence mortality rate.

So this is a table I created looking at the population makeup of these wards and their current social determinants, such as violence mortality rate, life expectancy, median income, and unemployment, and how drastically they change from Wards 1-3 to Wards 5-8.

So I wanted to look at the DC housing and redlining policies, as they have impacted the placement of housing that people currently have in the city. Historic redlining comes from the 1933 New Deal project, which was used to further segregation efforts by refusing to lend mortgages in neighborhoods of color while ensuring mortgages and reinvestment in White neighborhoods. It led to lasting impacts on generational wealth and property ownership, as Black individuals were less likely to own a property of value and more likely to live in disinvested communities that were deliberately maintained by racial segregation. According to a study done on DC housing in 2005, less than 10% of lending applicants in underserved census tracts were denied loans inequitably. And many public housing projects have been slated for private redevelopment, adding to existing waitlists for rent vouchers to assist underserved populations with private housing.

So what does all this mean? Why is it important? Why should we address this? So drivers for structural determinants of health have a long-standing impact on community violence in Washington, DC. Community trauma from adverse childhood experiences stemming from concentrated poverty, low-quality housing, and community segregation from redlining practices during the Jim Crow Era play a significant role in perpetuating community violence to this day. It was found that the availability of affordable housing shapes families’ choices on where they live and has the potential to relocate low-income families to substandard housing in neighborhoods with higher rates of poverty and crime, and fewer health care services.

So my conclusion from this research project was to look more into anti-racist housing policies to understand gentrification and violence in low-income DC neighborhoods, and to understand the historical contexts that allowed for current community demographics and individual displacement. I also wanted to look more into how we can prevent crime and invest in community safety: what changes must be made to community infrastructure, involving diverse approaches with youth that are multi-sectoral, include public-private partnerships, and have multiple stakeholders in the community.

Thank you for listening! I would like to acknowledge my mentor, Dr. Debora G. Goldberg, the College of Public Health, the GMU Undergraduate Research Scholars Program, and the Office of Student Scholarship, Creative Activities, and Research for funding this project. Thank you, everyone. I hope you have a great day.

Categories
College of Humanities and Social Science OSCAR

Assessing reproductive hormones in adult female blue whales (Balaenoptera musculus) by analyzing historic baleen samples from the 1940’s

Author(s): Nadia Gray

Mentor(s): Kathleen Hunt, Biology

Abstract
Monitoring reproductive parameters such as gestation period and calving intervals in blue whale (Balaenoptera musculus) populations is difficult due to limitations in hormone data collection methods for large whales. The most significant hindrance is that it is impossible to collect blood samples from these animals due to their size; therefore, alternative methods for hormone analysis must be investigated. Recent studies have shown that hormones like progesterone can be detected in baleen powder of other mysticete whale species like bowhead (Balaena mysticetus) and North Atlantic right (Eubalaena glacialis) whales. I hypothesized that baleen from female blue whales would contain regions of high progesterone indicating prior pregnancies. To test this, I ran enzyme immunoassays to quantify progesterone from serial samples taken along the length of historic baleen plates of four female blue whales, two of which were known to be pregnant upon capture. All of the specimens in this study were originally captured by post World War II era Japanese commercial whaling vessels and are archived at the Smithsonian National Museum of Natural History. This time period overlaps a pause in commercial whaling globally, and predates significant impacts of climate change, implying a relatively low stress environment for this population compared to recent years. Most baleen plates had several broad regions of high progesterone, as predicted, with patterns suggesting a 2-year calving interval for all but one female. A possible reason for this outlier could be due to the age of the individual, as she was the smallest individual in the sample size and may have not yet been reproducing. Two individuals known to be pregnant at death had high progesterone in the most recently grown baleen, as predicted. These findings may clarify historic norms of reproduction in blue whales, and could be helpful for comparisons to modern populations.
Audio Transcript
Hi, everyone! My name is Nadia Gray. I’m an Environmental and Sustainability Studies student, and this semester I’ve been working with Dr. Kathleen Hunt of the Biology department here at George Mason. My project is assessing reproductive hormones in adult female blue whales by analyzing historic baleen samples from the 1940s. So let’s talk about the blue whale. This is a marine mammal, and it is the largest animal ever to have lived, as far as we know. It is about as big as 3 school buses lined up bumper to bumper, and as heavy as 25 African elephants in one room, and on average they can live anywhere between 80 and 90 years. The reason why we’re talking about the blue whale is because, despite the fact that commercial whaling has been banned since 1986, this animal is still considered endangered by the IUCN Red List. This is due to other stressors, such as climate change, and although their population is increasing, we have yet to see stable numbers. That being said, we also don’t know that much about blue whale reproduction, and that is because these animals are too large to collect blood samples from, which is how hormone data is generally generated. There have been some studies trying to use other methods, such as fecal samples, respiratory vapor, and blubber collections. However, just due to the nature of these animals, these are very difficult methods. For this project we’re going to be looking into using baleen as a method, and baleen is the keratin-based apparatus you see growing inside of a whale’s mouth, which is used to filter feed on krill. For this study we’re also going to be looking at the steroid hormone progesterone, because in high amounts we believe that progesterone can indicate pregnancy. For our research questions, we’re going to be looking at: Is progesterone detectable in historic baleen? How long is the gestation period of a female blue whale? And what was the number of calving intervals during the mid to late 1940s? The sample size that we looked at in this project consists of 4 adult females, 2 of which were pregnant upon capture, and 3 of which had an intact root. That is an important piece of data, because baleen grows similarly to how teeth grow, where it has a root that anchors into the gums but isn’t necessarily visible. So if you were to cut at the gum line, you’d be losing that last little piece of data. The first step is to collect your samples. This was done by drilling along the length of each baleen plate and collecting powder samples every centimeter. Next up is to weigh out the powder from each centimeter of the baleen plate; we measured out around 20 milligrams for each specimen. Then you can move on to the extraction process, which is where you add methanol to the powder, and that is what pulls out the hormones. The next step is to remove the methanol that has been sitting in the powder and separate it from the powder pellet that you see there. Next, you move on to the reconstitution phase, which comes after you have dried down that previous methanol sample; you are essentially rehydrating it so you can create samples with it. Then you have to dilute those samples, because the progesterone is sometimes too high in specimens to be able to actually read it. For my specimens, all of them were 1 to 4 dilutions, except for one that was 1 to 40. And then finally, you can move on to the assay portion.
So each sample that you have created is going to go into each of these wells, and once you’re done with that, it’ll look something like this, and then you can place the assay tray into what is called a spectrophotometer. And this brings us to our results. So this is what the raw data looks like. Everywhere you see light colors is what is considered to be high progesterone, and everywhere you see dark is what we consider to be low progesterone, and the unit that is used in the spectrophotometer is called optical density. And then this is what the final data ends up looking like. These graphs are made through a program called Prism, and if you are interested in a more in-depth description of what is going on with each of these graphs, you are going to have to look at my poster of the same name. But essentially, anywhere you see a yellow bar is what we consider to be high progesterone, and we found that these whales were experiencing, on average, a 2-year calving interval, with the exception of
Figure D, which only had 2 high progesterone points, and prior to that was all low progesterone. We later found out this was the smallest whale in the sample, so this could potentially indicate that she had not yet been reproducing, because she might have been younger than the other specimens in this sample. And then the next steps: the purpose of this research is to generate baseline data for future hormone studies in blue whales. So essentially we are creating a control group, and by testing whales from other time periods we can use this to determine what is considered normal reproduction. This knowledge can also be applied toward future management plans for the species. And this brings us to the end of the presentation. I would like to thank Dr. Michael McGowan and John Sosky from the Smithsonian National Museum of Natural History. I’d like to thank Allie Case and Dr. Kathleen Hunt over at SciTech at George Mason University, as well as Dr. Alyson Fleming and Malia Smith of the University of North Carolina, Wilmington.
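Editor's note: the dilution and plate-reader steps described above amount to interpolating each well's optical density on a standard curve and scaling by the dilution factor. The sketch below assumes a four-parameter logistic standard curve, which is typical for enzyme immunoassays but is not specified in the talk; all numbers, names, and settings are illustrative.

# Illustrative sketch (assumed workflow, not the lab's exact protocol):
# fit a four-parameter logistic (4PL) standard curve to known progesterone
# standards, back-calculate each sample from its optical density, and
# multiply by its dilution factor. All numbers are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, d, c, b):
    # a = response at zero dose, d = response at infinite dose,
    # c = inflection point, b = slope
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([0.05, 0.1, 0.4, 1.6, 6.4, 25.6])      # ng/mL standards (hypothetical)
std_od   = np.array([1.90, 1.75, 1.30, 0.80, 0.40, 0.15])  # optical densities (hypothetical)
params, _ = curve_fit(four_pl, std_conc, std_od, p0=[2.0, 0.1, 1.0, 1.0])

def od_to_conc(od, a, d, c, b):
    # invert the 4PL curve to recover concentration from an optical density
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

sample_od, dilution = 0.65, 4   # e.g. a baleen extract run at a 1:4 dilution
concentration = od_to_conc(sample_od, *params) * dilution
print(f"progesterone in extract: {concentration:.2f} ng/mL (dilution-corrected)")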
Categories
College of Science OSCAR

Analyzing the Effect of an HIV Protease Inhibitor Drug, Ritonavir, in Women Undergoing Surgery for Newly Diagnosed Breast Cancer

Author(s): Naomi Alemayehu

Mentor(s): Emanuel Petricoin, Center for Applied Proteomics and Molecular Medicine

Abstract
Introduction: Ritonavir is a protease inhibitor that is currently used to treat HIV. This project aimed to identify changes in expression levels of proteins associated with G1/S checkpoint and inhibition of apoptosis.
Methods: Pre-treatment (n=6) and post-treatment (n=6) breast tissue sections, from patients who received Ritonavir treatment, and control patients (n=7), who did not receive Ritonavir treatment, were obtained. Reverse Phase Protein Array (RPPA) was performed to investigate the expression of selected endpoints.
Result/Conclusion: Rb (S780) was found to be significantly lowered by Ritonavir (p = 0.0047). Ritonavir also decreased the expression of several proteins involved in the inhibition of apoptosis pathway, such as Signal transducer and activator of transcription 3 (Stat3), Stat3 (Y705), and Steroid receptor coactivator (Src). Our results demonstrate that Ritonavir is effective in inducing apoptosis in breast cancer tumors by interfering with the inhibition of apoptosis pathway and the G1/S checkpoint.
Audio Transcript
I am Naomi, and I will present my research project, which focuses on analyzing the effect of an HIV drug on breast cancer patients.

Ritonavir is a drug that is currently exclusively used to treat HIV patients. However, recent research has shown that Ritonavir also has anti-tumor properties. Ritonavir is believed to stop the progression of tumors and induce cell death in breast cancer. This project aimed at characterizing the effects of Ritonavir on the inhibition of the apoptosis pathway and the G1/S checkpoint. To accomplish this purpose, a core biopsy was taken from 14 newly diagnosed breast cancer patients. These patients were then divided into a control and a treatment group. The control patients received conventional therapy, while patients in the treatment group received Ritonavir twice a day for 5 days. After this, tumor sections were removed from all of the patients by the type of surgery of their choice.

The endpoints analyzed by this project were mainly from the G1/S checkpoint, NF-κB signaling, apoptosis regulation, and inhibition of apoptosis pathways.

The tissue sections from each patient were deparaffinized, dehydrated, and stained with hematoxylin. The stroma from each sample was collected by Laser Capture Microdissection and lysed using an extraction buffer. In preparation for Reverse Phase Protein Array analysis, the lysates were printed onto nitrocellulose slides and probed with 35 antibodies. The slides were scanned and analyzed with Microvigene software.

These scatterplots represent the expression values of Cyclin E1 and CDK2 in the treatment and control groups. The P values were obtained by comparing the fold changes in the treatment and control groups. The results demonstrated that there was a decrease in the expression of Cyclin E1 and CDK2.
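
Editor's note: the talk states that p-values came from comparing fold changes between the treatment and control groups. The sketch below shows one plausible version of that comparison, a per-patient post/pre fold change compared across groups with a Welch t-test; the choice of test and all values are assumptions for illustration, not the study's actual pipeline.

# Illustrative sketch (assumed analysis, not the study's exact pipeline):
# compute a post/pre expression fold change per patient for one RPPA endpoint
# and compare the treatment fold changes against the control fold changes.
import numpy as np
from scipy import stats

# Hypothetical RPPA intensities (arbitrary units) for one endpoint, e.g. Cyclin E1
pre_treated  = np.array([1.10, 0.95, 1.30, 1.05, 0.88, 1.20])
post_treated = np.array([0.70, 0.60, 0.95, 0.66, 0.58, 0.80])
pre_control  = np.array([1.02, 0.97, 1.12, 0.91, 1.05, 0.99, 1.08])
post_control = np.array([1.00, 1.01, 1.05, 0.95, 1.00, 1.03, 1.02])

fc_treated = post_treated / pre_treated   # values below 1 mean expression decreased
fc_control = post_control / pre_control

t_stat, p_value = stats.ttest_ind(fc_treated, fc_control, equal_var=False)  # Welch t-test
print(f"mean fold change, treated: {fc_treated.mean():.2f}; p = {p_value:.4f}")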

As you see from these graphs, the same trend was observed from ATM (S1981) and CDK2. These proteins also decreased significantly in the treatment group when compared to the control groups.

Overall, Ritonavir decreased the expression of proteins in the inhibition of apoptosis and G1/S checkpoint. Our data also showed that, in the treatment group, patients exhibited similar protein expression patterns to each other before Ritonavir treatment as well as after ritonavir treatment. Our data strongly suggest that further investigation of the effect of Ritonavir on breast cancer will be useful in the treatment of breast cancer.
Thank you for watching.

Categories
College of Engineering and Computing OSCAR

Effect of Inflammation on lipid nanoparticle performance

Author(s): Peter Touma

Mentor(s): Caroline Hoemann, Bioengineering

Abstract
Characterization of nanoparticles was conducted on various samples for the purpose of analyzing the concentration of the samples and their particle counts. The samples studied were liposomes composed of phosphatidylcholine and phosphatidylserine (PSPC), and platelet microparticles. Nanoparticle characterization was conducted on these samples in order to better understand the results of surface plasmon resonance (SPR) analyses that were previously conducted on the same samples to test their binding ability to the Lox-1 protein. The data collected by SPR analysis showed that the PSPC sample had the greatest binding to the SPR sensor chip. However, the particle count or concentration of a sample is known to influence the accumulation of the sample. To measure particle count, samples were diluted in filtered deionized water and analyzed using a ZetaView instrument. The ZetaView is equipped with a camera that collects up to 11 images at high magnification to analyze particle size and density. Results showed that PSPC had a mean concentration of 7E+13 particles per mL with a mean diameter of 213.7 nm and a polydispersity range of 209.5 nm – 217.1 nm. Platelet microparticles were found to have a mean concentration of 9.93E+10 particles per mL with a mean diameter of 176.9 nm and a polydispersity range of 176 nm – 178.5 nm. These results indicate that the PSPC liposomes had a 1000-fold higher particle count than the platelet microparticles. Future experiments will require ZetaView analysis of the particle count and subsequent SPR tests with several dilutions to calculate the binding constant of each sample type.
Audio Transcript
Hello, my name is Peter Touma, and today I’ll be sharing my undergraduate research with you all. I hope you enjoy. My research from this semester consisted of a nanoparticle analysis using a ZetaView instrument in one of the labs at the Manassas SciTech campus. Microparticles are nano-sized extracellular vesicles released from cells that are known to carry procoagulant surfaces with an anionic charge due to the presence of phosphatidylserine within them. The long-term objective of the project is to measure the binding affinity of these microparticles to scavenger receptors, such as lectin-like oxidized lipoprotein receptor one, or simply Lox-1. To this end, the goal of the experiment was to measure the diameter and concentration of synthetically made lipid nanoparticles that contain phosphatidylserine and phosphatidylcholine, as well as platelet-derived particles from a human donor. To prepare the samples, liposomes containing phosphatidylcholine and phosphatidylserine were prepared, and platelet microparticles were collected from a healthy consenting donor. The lipids were suspended in PBS, extruded using varying filter sizes to form liposomes, and stored at 4 degrees Celsius until use in the experiment. The platelet microparticles were previously generated from another experiment using, again, a human donor. A citrated blood sample was centrifuged at 200 xg for five minutes at room temperature to first clear the sample’s red and white blood cells. Then the supernatant was centrifuged again, this time at a higher speed of 1500 xg for 10 minutes, to pellet the platelets. These platelets were then washed in citrated isotonic saline and resuspended in half the original volume of isotonic saline with 2 millimolar CaCl2. The platelet samples were then pipetted into half-milliliter aliquots and incubated for 30 minutes at 37 degrees Celsius in the presence of thrombin. The resulting platelet clot was then centrifuged again, and 25 microliters of the supernatant containing platelet microparticles was frozen for use in this experiment. To conduct the experiment, the samples were first diluted in filtered water before injection into the ZetaView machine. This is to ensure proper analysis, and it follows the standard operating procedure of the machine. Four cycles were conducted using the microparticles, one of which was at first omitted but then later added back into the study to see a comparison with and without it. And three cycles were carried out using the PSPC liposomes. The initial cycle was recorded and used in the data, but the data was interpreted both with and without it to see any possible difference that may arise. The data we collected were the concentration and particle size of the particles, which together give the overall particle count of our samples. Here we have three scatter plots, one depicting the platelets with cycle A, or the initial cycle, one of the platelets without the initial cycle, and one of the PSPC liposomes alone. As seen in the scatter plots, the PSPC liposomes had an overall larger diameter or size than the platelets. And the average particle size of the platelets seemed to increase when we removed the initial cycle from the analysis. Here in this table, we see, again, the mean particle size of our samples, as well as the concentration of our samples.
And it’s followed by the dilution factors for each cycle, as well as the particles per frame and the number of channels included for analysis. The number of channels included is important because the ZetaView uses 11 total channels, and not all of them are typically readable by the machine. Here we see that the liposomes actually had a 1,000-fold higher concentration than the platelet microparticles, at 7 times 10 to the 13th particles per milliliter. And we also see that the concentration of platelet microparticles increased, as did the particle size, when removing the initial cycle. So, using the ZetaView analysis, we found that the particle size and concentration of the PSPC liposomes were greater than those of the microparticles. The platelet microparticles also showed an increase in size and concentration when omitting the first cycle from the data, showing that the data collected in the initial cycle may be quite different from that of the other cycles. With the measurements taken in this study, it will be beneficial to conduct further analysis and further analyze results from a previous SPR, or surface plasmon resonance, experiment so that we can better understand the binding affinities of both of these particles to the Lox-1 protein. Typically, it’s best to conduct SPR analysis with similar particle counts and concentrations. So nanoparticle tracking analysis, such as in this experiment, should usually be done prior to SPR and will be done prior to SPR for future experiments. That is all I had. I hope that you enjoyed it. Thank you for your time.
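
Editor's note: because every sample was diluted before injection, the stock concentrations reported above are recovered by multiplying the measured value by the dilution factor. The sketch below shows that correction and the resulting particle-count ratio; the dilution factors and diluted readouts are placeholders, chosen only so the stock values match the figures reported in the abstract.

# Illustrative sketch of the dilution correction behind the reported values:
# the ZetaView measures the diluted sample, so the stock concentration is the
# measured concentration times the dilution factor. The dilution factors and
# diluted readouts below are placeholders, not the actual run settings.
measured_pspc     = 7.0e9    # particles/mL in the diluted PSPC sample (hypothetical)
dilution_pspc     = 10_000
measured_platelet = 9.93e7   # particles/mL in the diluted platelet MP sample (hypothetical)
dilution_platelet = 1_000

stock_pspc     = measured_pspc * dilution_pspc          # 7.0e13 particles/mL
stock_platelet = measured_platelet * dilution_platelet  # 9.93e10 particles/mL

ratio = stock_pspc / stock_platelet
print(f"PSPC : platelet microparticle concentration ratio = {ratio:.0f}")
# ~7e2, i.e. on the order of the ~1000-fold difference described above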
Categories
College of Science OSCAR

Calving intervals inferred from progesterone patterns in historic baleen of female fin whales (Balaenoptera physalus)

Author(s): Piper Thacher

Mentor(s): Kathleen Hunt, Biology

Abstract
Reproductive patterns were once a cryptic topic in whale physiology due to the inaccessibility of obtaining endocrine data from mysticete whales. However, recent advanced approaches in measuring hormones in baleen reveal the possibility of obtaining multi-year reproductive hormone profiles from various species of mysticete whales. Hormone analysis of baleen thus offers a tool to evaluate how ecological and anthropogenic pressures impact whale physiology. We investigated progesterone patterns in historic WWII-era baleen of four female fin whales (Balaenoptera physalus) to (1) develop the first longitudinal hormone profiles of fin whales and (2) evaluate gestation periods and calving intervals in a time period when global climate change was minimal and there was a cessation of commercial whaling. Historic baleen plates from the Smithsonian National Museum of Natural History were drilled every other centimeter to obtain a pulverized powder for hormone extraction. Enzyme immunoassays were run to measure progesterone concentrations of each whale (4 females, pregnant at the time of capture) over ~3-4 years. Results indicate a likely one- to two-year calving interval in fin whales, but with notable individual variation. This pilot study helps provide a baseline of female fin whale progesterone patterns, thus offering data that can be compared to modern-day whales that are subjected to the effects of climate change. Future research can assess the influence of modern anthropogenic stressors by identifying abnormal reproductive patterns which can influence recovery efforts and management strategies.
Audio Transcript
Hi, my name is Piper Thacher, and I am a senior majoring in Environmental Science and Policy. For my URSP project this semester, I was mentored by Dr. Kathleen Hunt to determine possible calving intervals of female fin whales from the late 1940s. This was done by creating the first-ever longitudinal profile of progesterone using historic baleen samples. So, for a brief background on fin whales, they are classified as vulnerable on the IUCN Red List, there are around 100,000 left, and their main threats are vessel collisions and entanglement. We currently have a knowledge gap on the hormones of fin whales, and because of that, we cannot determine if modern-day whales who are subjected to climate change are having altered reproductive cycles. So, this project aims to fill that gap. Typically, blood samples are taken to measure hormones; however, that is not possible for a whale. Scientists can use feces, earwax, and blubber to look at hormones, but that only tells us the hormones of the individual for that specific day, and the samples do not remain stable over time. Luckily, within the last decade, advanced approaches to measuring hormones in whales were discovered. This new approach involves baleen. So, for those who do not know what baleen is, it is a filter-feeding apparatus in the mouths of certain whales, and here is a close-up of one baleen plate that was actually used for my project. These baleen plates are made from keratin and grow over time, similar to our own fingernails. So as the baleen grows, hormones are accumulated within it, so it essentially provides a time series of the whale’s hormones for a said number of years. Now, this is a more recent finding; however, it was confirmed that different steroid and thyroid hormones such as cortisol, progesterone, and thyroxine are detectable in baleen, which leads us to the core of my project. We hypothesized that there would be periodic spikes of progesterone over the length of the baleen plates that may indicate past pregnancies. So, my first goal was to create longitudinal progesterone profiles of four female fin whales that were captured by Japanese commercial whaling vessels. These four females were all pregnant upon capture in February of 1948. The second goal was to determine what the normal gestation periods and calving intervals were for fin whales. And what I mean by normal is what their reproductive cycles were like at a time when there was a cessation of commercial whaling and the effects of climate change were minimal. This goal is important because it really helps provide a baseline of female fin whale progesterone patterns, which can, in the future, offer comparable data to modern-day whales that are subjected to those anthropogenic factors. So, there are three major steps in the methods for my project. The first is the preparation and extraction. Baleen was pulverized into a powder every odd centimeter from the base to the tip. Then we take 20 milligrams of powder from every odd centimeter, add methanol, decant, and evaporate every sample to get pure dried hormone extracts. And here is what my setup looks like, and in those tubes is the weighed-out baleen powder. The next step is to reconstitute the samples and make a 1 to 4 dilution, because the hormones are so highly concentrated. Now the last step is to put the samples into an enzyme immunoassay, and when we compare the readings from the plate reader to a standard curve, we can determine the amount of progesterone in the sample.
So, this is a little bit of what the process looks like. I have added a stop solution here to end the 30-minute colorimetric changing stage. And then we take the plate and record the optical density. So, for my results, these are the longitudinal progesterone profiles of the four female fin whales. These results show high progesterone spikes that may indicate a possible one to two-year calving interval with notable individual variation, and we are assuming each baleen plate has a 3 to 4-year timeline. We can see that our prediction of progesterone spikes indicating pregnancies is confirmed, because all of these whales were pregnant when captured and we can see a spike leading up to their death, which is at 0 cm from the base. The one oddball at the top right had its baleen cut from the base rather than being pulled, so we are missing the most recent growth, which may be why we don’t fully see a spike there. There is dramatic individual variation when it comes to the average progesterone levels of each whale: some have spikes that go as high as 300 nanograms of progesterone per gram of baleen, while another goes up into the 3000s. So, for the next steps, we first want to use stable isotopes to determine the actual timeline of the baleen, and this can help solidify when the breeding seasons were and whether they match the progesterone spikes on these figures. We also want to expand the sample size, so I will be continuing this work so we can get a larger population size. In the future, we also plan to run stress hormones like cortisol and corticosterone to map any correlations between reproductive and stress hormones. So, I wanted to take a moment to thank some amazing people who made this project possible. My mentor Dr. Kathleen Hunt, I am so privileged to have worked with you and I am excited to continue working with you. Allie Case, you essentially were a second mentor to me and I’m so thankful for you and your guidance. Also, all of my co-authors who made this research possible. The Smithsonian National Museum of Natural History for letting us use their baleen archives, and lastly, George Mason’s College of Science and the OSCAR program for supporting this research; I am so appreciative of you all. I included a QR code if anyone would like to look at references or my other work. Thank you so much, everyone, for listening; this project was an amazing experience.
Categories
College of Humanities and Social Science OSCAR

Warmth and Competence Signaling in Job Interviews

Author(s): Renee McCauley

Mentor(s): Afra Ahmad, Psychology

Abstract
Previous research has indicated that non-native-accented individuals are discriminated against in a variety of contexts, but especially during the interview process. Initial research has begun exploring how speaker gender interacts with accent, but scholars agree that more study is needed. Additionally, few candidate empowering interventions have been tested. The current study is a pilot study for a larger study on accent and gender discrimination in hiring. This within-subjects experiment explores how increasing verbal signals of warmth and competence during the interview process affects perceptions of warmth and competence. The scripts used failed to influence perceptions of candidate warmth and competence as desired. Inaccurate competence signaling strategies and the confound of candidate assertiveness are believed to have contributed to our unexpected findings.
Audio Transcript
There’s a lot of research that suggests that we—all humans—judge people on two traits: warmth and competence. Competence is things like skill, efficiency, and capability; warmth is things like friendliness, trustworthiness, and sincerity.
For my study, we were asking the question: if we change how someone talks about themselves in a job interview, can we increase perceptions of their warmth or competence?
A job interview is a high-stakes interaction where someone is making a bunch of judgments about who you are, and based on those judgments they either offer you a job or they don’t.
So we came up with three scripts that could all reasonably be the way the same person would answer the same question. One was a control script where the candidate presents themselves as qualified enough, but doesn’t present a lot of information that hints at their warmth or competence. And then we made a warmth script that indicated warmth, and a competence script that indicated competence.
Our participants listened to the same person reading each script and after each script, they answered a few questions about how warm and competent the job candidate was.
And as you can see on this graph here, we found out that there was no statistically significant difference in the competence among the three scripts. We did a little better on differentiating warmth, but it still wasn’t what we wanted. The only statistically significant difference was between the control script and the competence script, with the control script being warmer than the competence script.
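Editor's note: since every participant heard all three scripts, the comparisons described here are within-subjects. The sketch below shows one plausible way to run the pairwise script comparisons with paired t-tests; the rating values are placeholders, and the study's actual analysis is not specified beyond the pattern of results.

# Illustrative within-subjects sketch (assumed analysis, placeholder data):
# each participant rated the same candidate under the control, warmth, and
# competence scripts, so script conditions are compared with paired t-tests.
from scipy import stats

# Hypothetical warmth ratings, one value per participant, same order in each list
control_script    = [5.2, 4.8, 5.5, 5.0, 4.6, 5.3, 4.9, 5.1]
warmth_script     = [5.4, 5.0, 5.6, 5.1, 4.9, 5.5, 5.0, 5.2]
competence_script = [4.5, 4.2, 4.9, 4.4, 4.0, 4.7, 4.3, 4.6]

for label, a, b in [("warmth vs control", warmth_script, control_script),
                    ("competence vs control", competence_script, control_script)]:
    t_stat, p_value = stats.ttest_rel(a, b)
    print(f"{label}: t = {t_stat:.2f}, p = {p_value:.3f}")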
So what went wrong? I dove back into the literature to see why we hadn't signaled warmth and competence as intended. I found some experiments that suggest that humans are better at intentionally signaling warmth (we try a bunch of strategies that really do signal warmth), but when it comes to competence, we think some things positively affect perceptions of competence, but they either don't matter or they hurt competence perceptions. I suspect that our better ability to signal warmth is partially responsible for our results. The other factor we think affected our results was that in the warmth and competence scripts, the job candidate was very proactive, which could have been perceived as overreaching, presumptuous, or too assertive.
With all these lessons learned, we’re going to go back and rewrite the scripts, submit them to the IRB, rerun the experiment, and then analyze the results. Hopefully we’ll get the results we wanted this time around!
Once we crack the code on intentionally signaling warmth and competence, we’ll use these scripts in a larger study I’m conducting on the interaction between gender and accent on hireability in job interviews.
The judgments of warmth and competence that we make about other people, they’re not just limited to individuals, they’re actually the basis of stereotypes.
Current research indicates that non-native-accented individuals are perceived as less hireable than their native-speaking counterparts.
We basically suggest that part of what’s happening is that based on a person’s speech, an interviewer can make an educated guess about their ethnicity and gender which activates stereotypes in their mind which affect how hireable they think the candidate is. We’re going to see if changing the way the candidate talks about themselves could influence some of these judgments and hopefully counteract them.
Categories
College of Humanities and Social Science OSCAR

‘Go F*** Yourself with Your Atomic Bomb’: Variegated Patriotism in the Principal Beat Works

Author(s): Aaron Aadahl

Mentor(s): Eric Eisner, English

Abstract
This undergraduate senior thesis is a comparative study of patriotism in the key works of the 1950s Beat Generation. The essay argues that the writers are too frequently collated into a single ideology and that each demonstrated a unique perspective regarding their relationship to America. As such, I offer a taxonomic system that delimits their methods, foregrounding both distinctiveness and permeability while relying on Schatz et al.'s paradigm of critical vs. blind patriotism as a baseline. First, Lawrence Ferlinghetti's poem "I Am Waiting" and selections from his novel Her provide a control for comparison by positing a critical patriotism I term the constructive critical approach. Next, I examine the three most famous Beat works: Allen Ginsberg's poem Howl, Jack Kerouac's travel narrative On the Road, and William S. Burroughs' surreal sketch novel Naked Lunch. Ginsberg and Burroughs each practiced critical patriotism, offering two very different ways in which an individual can cope with or subvert America's norms and institutions, defined here as the spiritual critical and individual radical approaches, respectively. Finally, Kerouac differed from his peers as the only writer to subscribe to blind patriotism. Though he offered rare and brief criticism, his work is ultimately informed by his belief in American exceptionalism and his affinity for the 1930s; Kerouac's patriotism utilized a nostalgic passive approach. Providing a more complex understanding of the Beat Generation's relationship to America, this essay supports future scholarship of the Beats and beyond.
Audio Transcript
I. Hello, my project is titled "'Go F Yourself with Your Atomic Bomb': Variegated Patriotism in the Principal Beat Works." It was written in Spring 2023 under the advisement of Dr. Eric Eisner. My research was spurred by a statement I often read, applied to the Beats in various forms, that would say something like "they were disillusioned with post-World War II America." How, I wondered, could an entire generation with such a diverse literary output be encapsulated by a single understanding of their relationship to country? Utilizing some of the most popular works, I developed a taxonomy that delimits their unique approaches to patriotism while leaving room for overlap, exception, and nuance.
II. Drawing from the early works of Lawrence Ferlinghetti, Allen Ginsberg, Jack Kerouac, and William S. Burroughs, I propose four unique approaches to patriotism in literature. In his 1958 poem "I Am Waiting," Ferlinghetti practices an optimistic version of patriotism I term the constructive critical. Throughout the poem, he reimagines history in order to achieve a desired future. His "reconstructed Mayflower" suggests the country should have been founded for more than just white people. His "sweet desegregated chariot" suggests that it is not too late to change America's ethos. The approach is critical because he's clear-eyed about both past and present. It's constructive because he is optimistic about the possibilities of future change.
III. Allen Ginsberg's poem Howl posits a patriotism that I term the spiritual critical. He mythologizes America, imagining the country in terms of an internal spiritual struggle of good versus evil. In this line he defines America by its economic system. The "who" of the first word are he and the other Beats, what he calls "the best minds of his generation." They struggle helplessly against being inundated by cigarettes, products of capitalism. They burn cigarette holes in their arms in symbolic protest, but also, I suggest, they use the physical pain to distract from the emotional and existential pain they feel the country has wrought upon them. Ginsberg is pessimistic about the Beats being able to win any ideological battle, but he finds catharsis in keeping record and indicting the evil that he sees.
IV. Jack Kerouac's approach to America is the sole patriotism of the Beats that I term to be uncritical, or what's sometimes called "blind patriotism." In his own biography, he idealized the 1930s and espoused American exceptionalism; his writing reflects this in a mode that I call the nostalgic passive. In this passage from his travelogue On the Road, he's willing to overlook the obvious unhealthiness of apple pie in what works as a metaphor for his own willingness to overlook the country's flaws. To Kerouac, how can apple pie or America be bad for him when both provide him with opportunities for self-gratification? Nostalgic passivity, then, fails to critique America unless some aspect of the country negatively affects the practitioner personally and immediately.
V. Lastly, William S. Burroughs' disposition toward America exemplifies an approach I term the radical individual. Burroughs considered all existence to be determined by an invisible control paradigm. America and all other countries function as parts of that paradigm. In this line from Naked Lunch, Dr. Benway's description of an "arduous and intricate bureaucracy" mirrors Burroughs' own feelings about America. To him, country is just another means of control. Acts of subversion using drugs, sex, or violence become patriotic because they temporarily reclaim control for the individual American. Like the spiritual critical, the radical individual concerns itself with temporary relief from an otherwise inescapably oppressive country. In conclusion, while it may be fair to say that all the Beats were disillusioned with America, it is more useful to examine how they differed in their concepts of what precisely constitutes America and their resulting relationships to that country.
VI. By creating a comparative system that recognizes these similarities and differences, I hope to contribute to the scholarly conversation in a meaningful way. Thank you.
Categories
College of Engineering and Computing OSCAR

Mechanical and Surface Integrity of 3-D Printed PLA-HA Composite

Author(s): Alexander Stuart

Mentor(s): Shaghayegh Bagheri, Mechanical Engineering

Abstract
The materials currently used to construct replacement hip and knee joints for humans are stainless steel and titanium. Whilst these materials are strong and fairly biocompatible, they are expensive and can cause adverse long term side effects such as stress shielding. The aim of this project was to construct and evaluate an alternative to these materials, specifically 3-D printed Polylactic Acid and Hydroxyapatite composite (PLA-HA). PLA-HA is a polymer composite of Polylactic Acid (PLA) and Hydroxyapatite (HA). PLA-HA is relatively cheap, very biocompatible, and can be easily adapted for use in a 3-D printer. This vastly simplifies the process of manufacturing unique parts with complex geometries such as a replacement joint. Raw PLA-HA was created in-lab using two different manufacturing methods. These manufacturing methods were Dry Speed Mixing and Magnetic Stirring. The raw PLA-HA was then converted into filament for use in a 3-D printer using a uniaxial filament press. The PLA-HA filament was then used to 3-D print three different samples for mechanical testing for both manufacturing methods. These samples were subjected to different tests to characterize their mechanical and surface properties. These tests included Energy Dispersive Spectroscopy (EDS) and micro-indentation. These mechanical and surface properties were then evaluated to determine if they are sufficient for a joint replacement application. The different manufacturing methods will also be compared against one another to determine which method produces samples with the desired mechanical properties and distribution of hydroxyapatite throughout its matrix.
Audio Transcript
Slide 1:

Hello! My name is Zan Stuart, I’m a Senior Mechanical Engineering Student and today I’ll be talking about my project, which is a mechanical and surface characterization study of 3-D printed PLA-HA Composite.

Slide 2:

So the motivation for this project is that the materials currently used for hip and knee joint replacements, such as stainless steel and titanium, are adequate, but they have some significant issues, such as the fact that they're expensive, they're difficult to machine into the complex shapes that are necessary for this application, and they can cause some adverse long-term side effects such as stress shielding, which basically wears away the bone material around the replacement joint.

A composite of polylactic acid (PLA) and hydroxyapatite (HA) is a potentially viable alternative. PLA is a very common plastic used for 3-D printing, and hydroxyapatite is a mineral that's commonly found in your bone structure.

PLA-HA is pretty cheap, it’s biocompatible, and it’s easily adapted for 3-D printing.

Very few studies exploring the properties of PLA-HA in general exist, and fewer, if any, study the properties of 3-D printed PLA-HA.

Slide 3:

To briefly state the objectives, we wanted to manufacture the PLA-HA composite using different manufacturing methods, we wanted to process that material into a form that is usable with a 3-D printer, and we wanted to 3-D print some samples, test their properties, and study how the manufacturing method affects the properties of 3-D printed PLA-HA.

Slide 4:

Two different manufacturing methods were used. One is dry speed mixing, which basically involves placing the raw PLA pellets in a container, mixing them with hydroxyapatite powder, spinning it very, very fast in a dry film until the mixture is roughly homogeneous, and then turning that into filament.

The other is magnetic stirring, where you basically dissolve the PLA pellets in a solvent and then put them in a magnetic stirrer, which mixes them up, as you can imagine, combining that viscous dissolved PLA mixture with hydroxyapatite powder and then a silane coupling agent, and then forming that into raw PLA-HA after it cures, which you can see in the middle of the slide.

Slide 5:

That raw material is then placed into a filament extruder which you can see on the right side of the slide here, where you put the raw material in the hopper up top, it is then compressed and fed through a heated nozzle at the end where it spits out something resembling a plastic wire which is then fed to a 3D printer.

Slide 6:

Here is just a basic schematic of how 3-D printing actually works. Like I said, the plastic string once again gets fed through a heated nozzle, and the bed on which the part you're building is formed moves around along with the nozzle so you get the geometry you desire.

Slide 7:

Here are a couple of examples of the completed samples I made for this project. On the right is a magnetic stirred sample as well as a failed one, and here’s an example of a dry speed mixed sample.

Slide 8:

Two different testing methods were used. One was microindentation, which I won't cover in extreme depth here, but basically it involves pressing a very small diamond-tipped pin into the surface of the material, and you use the data collected from that to observe the mechanical properties. In this case we wanted to find the elastic modulus and contact creep performance.

Creep performance is a measure of how much a material deforms over time under a constant load. As you can imagine, that's something you want to keep to a minimum. And the SEM image here is just an example of one of the indentations that was performed on a magnetic stirred sample.

Slide 9:

The other method used was Energy Dispersive Spectroscopy, or EDS. Here's the basic working principle: you're basically shooting an electron beam at the surface of the sample, which excites the atoms and displaces electrons at lower energy states. Following the principle of conservation of energy, when an outer-shell electron moves to fill that vacancy, because it's at a higher energy state, that excess energy has to go somewhere; in this case it goes into X-rays. These X-rays can then be detected. Each element emits a unique X-ray during this process, so by detecting the X-rays and figuring out their wavelengths, we can figure out what the composition of the sample is.

Slide 10:

Here are the results of EDS that you can actually see: the highlighted areas, or the colored areas, indicate the presence of the element given on the left. Hydroxyapatite is composed of calcium, oxygen, phosphorus, carbon, and hydrogen. Calcium is not present in the base material of PLA, so if we see calcium or phosphorus we know we're seeing HA.

We can see that HA was actually integrated successfully using both manufacturing methods. As far as the concentration goes it’s hard to say much, as the area that we imaged was very small. However, this had to be done because the hydroxyapatite powder used is only visible on the nanoscale, so it’s hard to make a solid observation other than the fact that we know that HA is present in the PLA material.

Slide 11:

And here are the results of the mechanical testing, you can see that the contact creep was significantly higher for the magnetic stirred samples which is less desirable, and the elastic modulus of the dry speed mixed samples was significantly higher than that of the magnetic stirred samples.

A two-sample t-test was conducted to determine the significance of manufacturing method on these two properties, and you can see that yes, the manufacturing method significantly influenced both properties.
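To make that comparison concrete, here is a minimal Python sketch of a two-sample t-test of the kind described; the modulus values are placeholders rather than the measured data, and SciPy stands in for whatever statistics package was actually used.

```python
# Hypothetical sketch of a two-sample t-test comparing a property (e.g. elastic
# modulus) between the two manufacturing methods. The numbers below are
# placeholders, not measured data from the project.
import numpy as np
from scipy import stats

# Placeholder elastic-modulus readings (GPa), one per indentation
dry_speed_mixed = np.array([3.6, 3.8, 3.5, 3.9, 3.7])
magnetic_stirred = np.array([2.9, 3.1, 3.0, 2.8, 3.2])

# Welch's two-sample t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(dry_speed_mixed, magnetic_stirred, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Manufacturing method has a statistically significant effect.")
```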

Slide 12:

In conclusion, the manufacturing method significantly altered both the elastic modulus and creep performance, and hydroxyapatite was successfully integrated into the PLA matrix using both manufacturing methods.

If I were to continue this project in the future, I would produce and test more samples and use different manufacturing methods such as wet speed mixing and re-extrusion.

I would also conduct more and different mechanical tests, such as wet and dry wear tests to see what the friction properties of the material are, as well as tensile tests to more thoroughly evaluate the elastic modulus.

Slide 13:

Thank you for your time and attention, and I hope that you have a great day!

Categories
Honors College OSCAR Schar School of Policy and Government

The Legal History of Post-Emancipation and Farm Labor and Plantation Related Mortality from 1880 to 1950

Author(s): Amanda Magpiong

Mentor(s): Rick Smith, Anthropology

Abstract
With the end of the American Civil War, the legal practice of plantation slavery ended, but different forms of extractive farm labor, such as farm tenancy, emerged in its wake. This study is focused on the history of state and federal farm law, how it transformed after emancipation, and the health impacts of this legal history for farmworkers. While there are many public health studies investigating health disparities across the rural/urban divide, less attention has been given to the legal and structural factors driving negative health outcomes in rural communities. To assess the link between farm governance after emancipation and health outcomes on rural farmlands, we focused on the Blackland Prairies Ecoregion of Texas, one of the densest agricultural regions in the US which witnessed a disproportionately rapid growth of the farm tenancy system after emancipation. We first analyzed legal archives at the federal and state levels to trace the governance of agricultural labor over time. Next, we compiled publicly available vital records data for 2,544 individuals born in four Blackland counties (Dallas, Ellis, Hill, and Navarro) between 1880-1900. The effects of farm labor status on lifespan were evaluated using Kruskal-Wallis tests (p=4.238e-07) and ANOVA (p=2.64e-07). Results indicate that the legal landscape of farm labor after emancipation helped drive a more widespread and racially diverse decline in life expectancies in the Blacklands region. These findings extend our understanding of how federal and state farm law helped reproduce losses of life on rural farmlands after emancipation.
Audio Transcript
My name is Amanda Magpiong, and I am presenting "The Legal History of Post-Emancipation and Farm Labor and Plantation-Related Mortality from 1880 to 1950."

As a government and anthropology major, I was very excited to be able to incorporate both fields into this one study.

Where we start: our project is talking about the structural violence of the plantation as a foundational form of colonial violence in the Americas, which peaked in the 1920s in Texas. Violence on plantations began with the dispossession and enslavement of Indigenous and African peoples in the early colonial period. In 1865, we reach emancipation for those in Texas who finally heard the news, and following that, states shifted legal and regulatory systems for plantation labor and developed systems of farm tenancy, where laborers lived and worked on land owned by others. This connects to the current model used to explain health outcomes across the rural and urban divide, called the Rural Mortality Penalty, which is the belief that if you live in a rural area you have more negative health outcomes than if you were to live in an urban area.

Our ethical approach is that we really wanted this to be a community-driven project. Dr. Rick Smith is a descendant of the community, and we've been working with the Ellis County Rural Heritage Farm, located in Waxahachie, Texas, for the entirety of the project, which mine is simply a part of.

So the objective of our project is to evaluate the link between farm labor deregulation and regulation, plantation labor, and mortality.

The area which we are studying is Dallas County, as well as Ellis, Hill, and Navarro counties within the Blackland region of Texas.

For our methods, we compiled a legal history of state and federal actions that impacted the lives of tenant farmers in the South. We evaluated mortality across the rural and urban divide by compiling data on individuals who were born and died in the Blackland counties, which are Ellis, Hill, and Navarro, and the urban county of Dallas, born between 1880 and 1900. We collected 2,536 individuals, and with those individuals we conducted a Kruskal-Wallis test as well as a survival analysis in R.
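The tests named here were run in R on the compiled vital records; as a hedged illustration of the same kind of comparison, the Python sketch below runs a Kruskal-Wallis test and a one-way ANOVA on placeholder age-at-death values grouped by labor category (the numbers are invented purely for illustration).

```python
# Minimal Python equivalent of the group comparison described above (the study
# itself used R). The age-at-death values below are illustrative placeholders,
# not the project's vital-records data.
import numpy as np
from scipy import stats

urban = np.array([68, 72, 65, 70, 74, 66])
rural_non_plantation = np.array([67, 69, 71, 64, 70, 68])
rural_plantation = np.array([55, 58, 52, 60, 57, 54])

# Kruskal-Wallis: non-parametric test that the groups share a distribution
h_stat, kw_p = stats.kruskal(urban, rural_non_plantation, rural_plantation)

# One-way ANOVA: parametric test that the group means are equal
f_stat, anova_p = stats.f_oneway(urban, rural_non_plantation, rural_plantation)

print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {kw_p:.3g}")
print(f"ANOVA:          F = {f_stat:.2f}, p = {anova_p:.3g}")
```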

Now, this is our chart that shows the number of people we collected from the different counties, and how they were categorized.

Now, what we found in the policy is that financial aid to farmers was an attempt the government put in place once it found out about the system of tenant farming, which had formed without any laws and without any form of cash; it was a simple rollover from the plantation, from slavery, directly into the tenant farming system. So once the federal government became aware of this post-emancipation, it attempted to enact laws such as the Homestead Act, the Bankhead-Jones Act, and the Farm Security Administration, all of which were supposed to uplift these communities and provide financial aid so they could buy land and gain access to land that they hadn't had. However, all of these acts failed because they didn't provide enough money for people to actually purchase the land, or to afford the things they needed to farm and live on this land, or to improve their conditions. There is even a court case, Block v. Hirsh, in which tenants were barred from improving their living quarters, because if they did, the improvements belonged to the landowner rather than to themselves. And the Southern Tenant Farmers' Union is something that arose out of the Agricultural Adjustment Act; this community was fighting for better treatment and better conditions, all because the federal government wasn't giving this community the things it needed to survive.

And what we found is that these populations really were suffering. So, when we look at the data, this is our average age of death. You can clearly see the rural and urban divide: urban people living longer than those who live in rural areas. However, when you break the rural population down between those who were not working on the plantation and those who were, you can see that those who owned the land instead of working it were living just as long as those living in urban areas, and it's those who were forced to work the land, stuck in this tenant farming system, who were suffering and losing years of their lives as a result.

However, that data can sometimes be skewed by population booms, which were also occurring at this time. So we then turned to a survival analysis, in which you can see that those same plantation workers were less likely to survive from one year to the next than a person living in any other area.
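The survival analysis itself was done in R; the Python sketch below shows the underlying idea with placeholder ages at death and no censoring, in which case the survival curve is simply the empirical fraction of each group still alive at each age.

```python
# A minimal sketch of the survival comparison, assuming complete (uncensored)
# ages at death from vital records. With no censoring, the Kaplan-Meier curve
# reduces to the empirical survival function computed here. Ages are placeholders.
import numpy as np
import matplotlib.pyplot as plt

def empirical_survival(ages_at_death):
    """Return (age grid, fraction of the group still alive at each age)."""
    ages = np.sort(np.asarray(ages_at_death))
    n = len(ages)
    grid = np.arange(0, ages.max() + 1)
    surviving = np.array([(ages > a).sum() / n for a in grid])
    return grid, surviving

groups = {
    "urban": [68, 72, 65, 70, 74, 66],
    "rural, landowner": [67, 69, 71, 64, 70, 68],
    "rural, plantation/tenant": [55, 58, 52, 60, 57, 54],
}

for label, ages in groups.items():
    grid, surviving = empirical_survival(ages)
    plt.step(grid, surviving, where="post", label=label)

plt.xlabel("Age (years)")
plt.ylabel("Fraction surviving")
plt.legend()
plt.show()
```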

So clearly from this we can see that it's not the fact that you're living in a rural area; it's the fact that you're working on these former plantations and existing within this tenant farming system, with the consistent form of violence that comes along with that.

So our conclusion from this is that the lack of regulation and of federal and state protections for tenant farmers likely influenced these health disparities and exploited these farmworkers, and that the systematic violence on plantations is a more explanatory factor for disparities in mortality than the rural and urban divide alone. Going forward, greater attention to legal histories and systematic violence is necessary to evaluate the root causes of these rural health disparities.

Finally, to acknowledge the people who have helped produce this work and the project as a whole: we would like to acknowledge the lives and labor of Texas farmworkers in the past, and we thank the members of the Board of the Ellis County Rural Heritage Farm, whose generational knowledge of Texas farmers and farmlands helped shape our thinking in this work. We are also grateful to the members of the critical molecular anthropology lab at George Mason and to Daniel Temple and Charles Rosen for their work on this project. I would also like to thank my mentor, Dr. Rick Smith, for all of his help and for inviting me onto this project, as well as OSCAR for helping me to produce this research.

Thank you.

Categories
College of Engineering and Computing OSCAR

Whitening Filter Testbed

Author(s): Anita Knighton

Mentor(s): Kathleen Wage, Electrical and Computer Engineering

Abstract
Low frequency acoustic waves travel great distances underwater and are the primary means of underwater communication signaling. However, hydrophone arrays used in such signaling collect large amounts of unwanted ambient noise. Whitening filters turn ambient noise into identically distributed, independent frequencies, among which desired signals are better discerned. To create whitening filters, the characteristic distribution of noise to be filtered must be known. This can be estimated from real-world data, but availability of ocean data is limited by financial and practical collection costs. This project developed tests by which to establish a baseline minimum amount of input data needed for effective whitening, with the goal of improving whitening filter design and performance. The research process included generating normally distributed data using MATLAB and correlating the data with both sinc and Bessel functions using the eigenvalues and eigenvectors method. Whitening filters were developed by taking the inverse of correlation matrices. Matched filters were tested on correlated data. Eigenvalues of the decomposed correlation matrices of filtered data for various snapshot sizes were plotted. 1000-trial histogram results show that eigenvalues of correlation matrices of filtered data converge to a single value as snapshot size increases. Future study will include more analysis of the filtered data and also testing of mismatched filters.
Keywords: whitening filters, eigenvectors, eigenvalues, correlation matrices
Audio Transcript

Hi, my name is Anita Knighton, and today I am going to present my project about creating a whitening filter testbed for underwater acoustics.

First, a little background:
Acoustic waves are useful for underwater communication and research because they travel much farther than other waves. For example, radio waves travel less than 10 m underwater, but acoustic signals can travel over 100 km.
However, a major problem faced in processing underwater acoustic signals is dealing with unwanted noise.
One way to overcome this is to use electronic filters to turn ambient noise into independent, identically distributed frequencies, otherwise known as "white noise." After this, the desired signal stands out.

That is a simple idea, but whitening ambient ocean noise is difficult.
One challenge is: to create an effective filter, the characteristic distribution of the signals to be filtered must be known. This can be estimated from real-world data. But ocean data has limited availability, and, also, the soundscape of a particular location may vary over time.

With this in mind, I wanted to find a way to determine a minimum amount of input data needed to make an effective whitening filter.

The plan was to create and test whitening filters using the following method:
• First, generate correlated data.
• Process it through an ideal whitening filter.
• Decompose the correlation matrix of filtered data into eigenvalues and eigenvectors (more on those later).
• Plot the eigenvalues, and then observe the results.


In more detail:
Step 1 is simulating correlated data.
• I started with MATLAB’s random normal generator—the uncorrelated data from this function represents white noise.
• Then I created two correlating functions—a sinc function for noise coming from three dimensions and a Bessel function for noise coming from two dimensions.
• I built correlation matrices for each.
• I broke those matrices down into Eigenvalues and Eigenvectors, which in this case can be thought of as convenient tools for doing matrix operations.
• Then I used these tools to transform the data.

And at that point in the process, the data represented correlated ambient ocean noise.
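For readers who want to reproduce this step, here is a minimal NumPy sketch of the correlated-data simulation (the project itself used MATLAB); the array size, element spacing, and snapshot count are illustrative assumptions rather than the project's actual parameters.

```python
# Minimal NumPy sketch of the correlated-noise simulation described above
# (the project itself used MATLAB). Array size, element spacing, and snapshot
# count are illustrative assumptions, not the project's actual parameters.
import numpy as np
from scipy.special import j0  # Bessel function of the first kind, order zero

def sinc_correlation(n_sensors, kd):
    """Correlation matrix for 3-D isotropic noise on a uniform line array.
    kd = wavenumber * element spacing."""
    sep = np.abs(np.subtract.outer(np.arange(n_sensors), np.arange(n_sensors)))
    return np.sinc(kd * sep / np.pi)  # np.sinc(x) = sin(pi*x)/(pi*x)

def bessel_correlation(n_sensors, kd):
    """Correlation matrix for 2-D isotropic noise on the same array."""
    sep = np.abs(np.subtract.outer(np.arange(n_sensors), np.arange(n_sensors)))
    return j0(kd * sep)

def color_noise(R, n_snapshots, rng):
    """Transform white Gaussian snapshots so their covariance is R."""
    lam, V = np.linalg.eigh(R)          # R = V diag(lam) V^T
    lam = np.clip(lam, 0.0, None)       # guard against round-off negatives
    white = rng.standard_normal((R.shape[0], n_snapshots))
    colored = V @ (np.sqrt(lam)[:, None] * (V.T @ white))
    return colored, V, lam

rng = np.random.default_rng(0)
n_sensors, kd = 32, 1.5 * np.pi         # e.g. 0.75-wavelength element spacing
R3 = sinc_correlation(n_sensors, kd)
colored, V, lam = color_noise(R3, n_snapshots=200, rng=rng)
```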

As an example of correlation vs. non-correlation, here on the left is a scatterplot of two vectors (one on the horizontal axis; one on the vertical axis) that are correlated. You can see that as the values for one of them increase, the values for the other one increase as well, and that’s why you get kind of a slanted plot. On the other hand, on the bottom right you have a more circular pattern, and that is a scatter plot of two vectors of uncorrelated data. And then finally on the top right is a plot of one column of the correlation matrix of sinc-correlated data, which is actually what I used in my project.


After correlating the data, I tested it using the inverses of the transformation matrices, which are actually the ideal whitening filters. I passed the 3D data through a 3D whitening filter and the 2D data through a 2D whitening filter.
Here are graphs of correlation matrix eigenvalues of a single trial for various sample sizes for each data type. So, to understand these graphs, you need to know that eigenvalues represent the weights—or amounts—of different spatial frequencies present. You can see that the graphs flatten out as sample size increases. And this means the spatial frequencies are becoming more evenly distributed, which is the goal of a whitening filter.
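A hedged sketch of this whitening-and-checking step, reusing the `sinc_correlation` and `color_noise` helpers from the NumPy example above, might look like the following; the snapshot counts are arbitrary and chosen only to show the flattening described here.

```python
# Continuation of the earlier NumPy sketch (reuses sinc_correlation and
# color_noise). Snapshot counts are illustrative.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n_sensors, kd = 32, 1.5 * np.pi
R3 = sinc_correlation(n_sensors, kd)

# Ideal whitening filter: inverse of the coloring transform, i.e. R^(-1/2)
lam, V = np.linalg.eigh(R3)
W = V @ np.diag(1.0 / np.sqrt(lam)) @ V.T

for n_snapshots in (32, 128, 512, 2048):
    colored, _, _ = color_noise(R3, n_snapshots, rng)
    whitened = W @ colored
    R_hat = whitened @ whitened.T / n_snapshots     # sample correlation matrix
    eig = np.sort(np.linalg.eigvalsh(R_hat))[::-1]
    plt.plot(eig, label=f"{n_snapshots} snapshots")

plt.axhline(1.0, color="k", linestyle=":")
plt.xlabel("Eigenvalue index")
plt.ylabel("Eigenvalue")
plt.legend()
plt.show()
```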


Finally, to evaluate my findings, I wanted to take a closer look at the eigenvalues of the correlation functions of the filtered data.
Here I have plotted 1000-trial histograms of 3D data passed through a 3D filter showing the distributions of Eigenvalues with varying numbers of data snapshots. You can see that the Eigenvalues for small numbers of snapshots are spread out. This means the data is not effectively whitened. But as more snapshots are added, the eigenvalues cluster more closely around one. That is the desired outcome for a whitening filter.
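Continuing the same sketch, a 1000-trial Monte-Carlo loop like the one below would produce histograms of the kind described here, with the eigenvalues tightening around one as the number of snapshots grows (parameters again illustrative).

```python
# Continuation of the earlier sketch: a 1000-trial Monte-Carlo run collecting
# the eigenvalues of the sample correlation matrix of the whitened data for
# several snapshot counts. Parameters remain illustrative.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
n_sensors, kd, n_trials = 32, 1.5 * np.pi, 1000
R3 = sinc_correlation(n_sensors, kd)
lam, V = np.linalg.eigh(R3)
W = V @ np.diag(1.0 / np.sqrt(lam)) @ V.T           # ideal whitening filter

snapshot_counts = (32, 128, 512)
fig, axes = plt.subplots(1, len(snapshot_counts), sharey=True)

for ax, n_snapshots in zip(axes, snapshot_counts):
    eigs = []
    for _ in range(n_trials):
        colored, _, _ = color_noise(R3, n_snapshots, rng)
        whitened = W @ colored
        R_hat = whitened @ whitened.T / n_snapshots
        eigs.append(np.linalg.eigvalsh(R_hat))
    ax.hist(np.concatenate(eigs), bins=60)
    ax.set_title(f"{n_snapshots} snapshots")
    ax.set_xlabel("Eigenvalue")

axes[0].set_ylabel("Count")
plt.show()
```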

In conclusion:
I can see there’s a threshold where the eigenvalues begin to converge around a single value, but I am not yet certain where it is. There’s more to study, and I plan to continue this project.
Hopefully continued research on this topic will lead to improved whitening filter performance.

Here are references I consulted during this research and my acknowledgments. Thank you very much for watching.