Abstracts

All abstracts for the 2015 AGS Symposium can be found below. To see a condensed list of speakers and the titles of their talks, please visit the SCHEDULE tab at the top of the page.


Session 1


Coordination of pattern and growth in developmental biology

Abed Alnaif
Biomedical Engineering
Co-author: Arthur D. Lander, Professor in Developmental & Cell Biology, UCI
aalnaif@uci.edu

The development of multicellular organisms involves the differentiation of genetically identical cells into distinct cell types (e.g., liver or heart cells). However, as one can easily imagine, a correctly patterned organism would not result if each cell were to differentiate into a cell type at random. Rather, each cell must differentiate into the type appropriate to its position, and thus biological pattern formation depends on the ability of cells to measure their positions.

Since cells aren’t equipped with rulers, the conveyance and measurement of positional information constitutes a central problem in developmental biology. Typically, positional information is communicated via intercellular signaling molecules, most of which have been well characterized and are remarkably conserved evolutionarily. These signaling molecules are typically produced only at specific locations within tissues, and it is the concentration gradients that form as they spread from these locations that provide positional information to cells.
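For intuition, the textbook picture of such a gradient (a generic reaction-diffusion sketch, not the specific model used in this work) is a morphogen produced at one boundary, diffusing with coefficient D and degraded at rate k; at steady state, the concentration decays exponentially with distance from the source:

\[
\frac{\partial C}{\partial t} = D\,\frac{\partial^2 C}{\partial x^2} - kC
\;\;\Longrightarrow\;\;
C(x) = C_0\, e^{-x/\lambda}, \qquad \lambda = \sqrt{D/k}.
\]

A cell reading local concentration c can, in principle, infer its position by inverting the gradient, x = λ ln(C₀/c).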

In my research, I use genetic experiments on fruit flies and mathematical modeling to study how patterning is coordinated with growth of the organism, in order to ensure that differently sized individuals maintain correct proportions. Specifically, most of my experiments are performed in the fruit fly wing, one of the best understood tissues in terms of pattern formation. The intercellular signaling molecules involved in patterning the fly wing are well conserved evolutionarily, and thus are also involved in patterning humans. One striking discovery I made is that the intercellular signaling molecules are not so much providing positional information to cells as instructing them how much to grow. Furthermore, the patterns are actually controlling the concentration gradients of the signaling molecules, rather than just responding to them. Although still preliminary, both of these discoveries, if confirmed, would suggest a significant departure from the conventional model of how intercellular signaling orchestrates the formation of biological pattern.


Enrichment of Circulating Tumor Cells using Lateral Cavity Acoustic Transducers

Neha Garg
Biomedical Engineering
gargn1@uci.edu

The American Cancer Society estimated 10,450 new cancer cases and 1,350 deaths among children, and 5,330 new cases and 610 deaths among adolescents, for 2014. Millions of dollars are spent on cancer research and treatment every year, so knowing a patient’s cancer type is critically important for diagnosis. However, since the concentration of circulating tumor cells (CTCs) in the body is low (1-10 per mL), enrichment is needed. Hence, the goal of my research is to design a microfluidic device to enrich CTCs using Lateral Cavity Acoustic Transducers (LCATs) developed in Prof. Lee’s lab. LCATs are microfluidic devices that utilize an array of acoustically actuated air/liquid interfaces generated using dead-end side channels. By analyzing the genes of progenitor CTCs, the cancer type can be determined; the effectiveness of drugs can also be tested using CTCs. Currently, CellSearch is the only commercial kit intended for the immunomagnetic selection, identification, and enumeration of CTCs of epithelial origin in whole blood. The CellSearch technology is a bench-top process that requires large quantities of costly reagents and expensive equipment, both of which can be eliminated by the employment of microfluidic technologies.

Furthermore, microfluidic technology requires small sample volumes and reduces reagent consumption. Additionally, microfluidic platforms are amenable to integration with upstream and downstream process steps, which is essential for developing a complete Lab on a Chip (LOC) system. To date, most cell-sorting microfluidic devices use external pumps, which leads to a reduction in cell concentration; a microfluidic device that performs enrichment without any external pumps would therefore be highly valuable. Moreover, the length scales of microfluidic device features are of the same order of magnitude as the cells and particles themselves. LCAT technology lends itself well to the development of point-of-care devices because it forms self-contained microfluidic platforms capable of pumping the sample, separating cells based on size, DNA shearing, cell-particle sorting, and enrichment. My goal is to design a portable device that integrates sample introduction, cell sorting, and downstream biological assays.


A Microfluidic Single-cell Analysis Chip For Cancer Diagnosis

Xuan Li
Biomedical Engineering
Co-author: Do-Hyun Lee, BioMint Lab, Department of Biomedical Engineering, UCI
xuanl14@uci.edu

Single-cell analysis provides precise metabolic and genetic information about individual cells, whereas traditional bulk tests neglect the heterogeneity and stochastic effects within cell populations, which are essential in determining key cellular activities. Clonal populations of cancer cells exhibit distinct fate outcomes in response to uniform chemotherapy because of heterogeneity in regulatory-protein expression and apoptosis regulation. Therefore, single-cell analysis is a more powerful and effective way to characterize cancer.

Here we fabricate a microfluidic diagnostic chip integrating cell separation and single-cell trapping arrays with an open interface, enabling external micromanipulation instruments to enter individual cells for analysis. In particular, single-cell mRNA extraction is performed by atomic force microscope (AFM). The device is fabricated by standard PDMS soft lithography from a silicon mold made by negative photolithography.

Slanted obstacles and filtration obstacles are used for hydrophoresis, a size-based, label-free method of cell separation, with a resolution higher than 90%. Slanted obstacles in the channel drive helical recirculations, and along the transverse flow, cells are focused to the sidewall. Cells larger than the gaps of the filtration obstacles are blocked and diverted through the filtration pore, whereas smaller ones pass through the gaps and remain in the focused position.

Cells are then trapped by hydrodynamic flow in conjunction with grooves arrayed on a serpentine cell-delivery channel (arranged in a 5-column format, with 20 traps per column). An array of cross-flow channels connects each section of the serpentine channel. Unlike traditional microfluidic chips with a closed interface, the top of the array is sealed by a 10μm thick PDMS film, so external devices, e.g. AFM tips and micromanipulators, can punch through the membrane and enter individual cells.

The AFM tip is modified into a dielectrophoretic nanotweezer (DENT) to extract mRNA from the nucleus. Applying an AC electric field between the inner and outer electrodes of the DENT creates a large electric field gradient, resulting in a dielectrophoretic force that extracts mRNA. Selective extraction is achieved by decorating the tip with oligonucleotide probes that hybridize to the target mRNA. This approach is highly specific, fast, and nondestructive, and requires no cell lysis or mRNA purification. The mRNA is then released and tested by q-PCR.
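For background, the time-averaged dielectrophoretic force on a small polarizable particle is given by the standard expression (a textbook formula, not specific to the DENT geometry):

\[
\langle F_{\mathrm{DEP}} \rangle = 2\pi \varepsilon_m r^3\, \mathrm{Re}[K(\omega)]\, \nabla |E_{\mathrm{rms}}|^2,
\qquad
K(\omega) = \frac{\varepsilon_p^* - \varepsilon_m^*}{\varepsilon_p^* + 2\varepsilon_m^*},
\]

where r is the particle radius, ε*_p and ε*_m are the complex permittivities of the particle and medium, and K(ω) is the Clausius-Mossotti factor. The force scales with the gradient of the squared field, which is why the sharp tip geometry, with its strongly concentrated field gradient, is effective at pulling mRNA toward the electrodes.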

This device addresses the need for clinically compatible single-cell analysis in cancer diagnosis. It will be tested in melanoma characterization: on the chip, melanoma cells extracted from skin biopsies are separated from surrounding cells (fibroblasts, keratinocytes), and then tyrosinase mRNA is extracted and tested by q-PCR to determine the cancer stage of a given cell.


Microfluidic Device for Mechanical Dissociation of Tumor Tissues into Single Cells

Xiaolong Qiu
Biomedical Engineering
Co-authors: Trisha Westerhof (School of Biological Sciences), Edward Nelson (School of Medicine), and Jered Haun (School of Engineering)
xiaolonq@uci.edu

Cancer is the second leading cause of death in the Western world, and thus there is a tremendous need for new approaches and technologies that will help us better detect and treat this deadly disease. Nearly all cancer types form solid tumors. Tumors are highly heterogeneous, consisting not only of cancer cells but also stroma, immune cells, and blood vessels. These ubiquitous tumor characteristics have driven intense interest in replacing general chemotherapies with molecularly-targeted agents to achieve personalized therapies for cancer patients. However, realizing this goal will only be possible if molecular measurements can be made rapidly, cost-effectively, in multiplexed fashion, and at the resolution of single cells. Satisfying all of these needs is extremely challenging for solid tumors because clinical specimens are procured as tissues. Therefore, the very first step in approaching single-cell molecular analysis is tumor dissociation. The clinical gold standard of tissue dissociation relies on a combination of enzymatic treatment and mechanical disruption. Manual mechanical disruption hinders the speed of clinical diagnostics, while enzymatic treatment ultimately destroys certain biomarkers of diagnostic interest. Thus, new diagnostic approaches and technologies are needed for solid tumor tissue specimens to usher in the era of molecular medicine.

Microfabrication technologies have advanced the fields of biology and medicine by miniaturizing devices to the scale of cellular samples. In particular, microfluidic systems have enabled precise manipulation of cells and other reagents to achieve systems with high throughput, cost efficiency, and point-of-care operation. Nevertheless, little attention has been given to processing tissues. To advance and automate mechanical dissociation of tumor tissues, we have developed a novel microfluidic device with cross-sections that are gradually reduced through a series of bifurcating stages. We also introduced constriction and expansion regions on the device to induce flow disturbances that help mix tissues and to generate fluidic jets at different length scales that provide shear forces.

Using cultured tumor spheroids and, recently, clinical tumor biopsies, we have demonstrated that our microfluidic dissociation device significantly augments single-cell yields compared to the current clinical gold standard while maintaining viability. Most importantly, all results were obtained in less than ten minutes of total processing time. In conclusion, we have created a microfluidic device that is capable of dissociating a tumor into single cells at the point of care for downstream analysis. We envision our device serving as one of the core components of more personalized cancer therapies in the future.


Automation of Serial Dilution by Microfluidic Digital Logic

Manasi M. Raje
Biomedical Engineering
Co-authors: Siavash Ahrar, Elliot E. Hui
mraje@uci.edu

Serial dilution is a fundamental laboratory procedure that produces a logarithmic array of concentrations from an initial sample. Since this procedure is common to a number of laboratory protocols, its automation would be a powerful contribution to lab-on-a-chip systems. We previously reported a microfluidic strategy for serial dilution employing valve-driven circulatory mixing: the serial dilution ladder [1]. While the diluter provides an effective architecture to obtain a series of dilutions from a small amount of sample, it is operated using a vacuum-driven solenoid valve array under computer control. We envision eliminating such unwieldy machinery around the diluter by automating it using on-chip pneumatic logic circuitry. The heart of this system is a four-stage serial diluter, which is a structure with four loops arranged in the form of a ladder. Each loop carries out a 1:1 dilution through circulatory mixing by peristaltic pumping. Peristaltic pumping is established by the coordinated opening and closing of three valves within a loop. We achieved the automation of this pumping pattern by integrating the diluter with an on-chip oscillator [2]. The oscillatory signal provided by the oscillator is routed to the appropriate valves on the active loop via a decoder circuit, a network of pneumatic valves and channels surrounding the dilution ladder. This circuit receives a four-bit input signal which is logically decoded to select the appropriate loop. Once a loop is selected, the oscillatory signal for peristaltic pumping is routed to that loop. Finally, automated selection of the loops can be achieved by using an on-chip four-bit finite state machine to generate the four-bit signal that controls the decoder circuit. We have succeeded in integrating the diluter, oscillator, and decoder circuits, achieving a four-stage serial dilution in approximately one minute. The entire device requires a total of only 4 control inputs. With the integration of the four-bit finite state machine, the device will be fully automated, requiring no external control inputs to function, and powered by a single static vacuum source. This compact and automated serial dilution system is appropriate for point-of-care applications and can be applied to a variety of assays.
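To make the ladder’s behavior concrete, the sketch below (hypothetical names, not the authors’ control code) mimics the decoder and dilution logic in software: a one-hot four-bit input selects the active loop, peristaltic pumping is routed there, and each 1:1 mixing stage halves the previous concentration.

```python
# Minimal sketch of the serial dilution ladder logic. Function and variable
# names are hypothetical illustrations, not the actual device interface.

def decode(select_bits):
    """Map a one-hot 4-bit input to the index of the active loop."""
    assert sum(select_bits) == 1, "exactly one loop may be selected"
    return select_bits.index(1)

def run_ladder(c0, n_stages=4):
    """Return the concentration series produced by a 4-stage 1:1 dilution ladder."""
    concentrations = [c0]
    for stage in range(n_stages):
        select = [1 if i == stage else 0 for i in range(n_stages)]
        active_loop = decode(select)       # decoder routes the oscillator here
        concentrations.append(concentrations[-1] / 2)  # 1:1 circulatory mixing
    return concentrations

print(run_ladder(1.0))  # [1.0, 0.5, 0.25, 0.125, 0.0625]
```

Each stage yields half the previous concentration, producing the logarithmic array of concentrations described above.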

References

1. S. Ahrar, M. Hwang, P. N. Duncan, E. E. Hui, Analyst, 2014, 139, 187-190.

2. P. N. Duncan, T. V. Nguyen and E. E. Hui, Proc. Natl. Acad. Sci. U.S.A., 2013, 110, 18104-18109.


Biodegradable Anti-inflammatory Material: A Novel Protein-Conjugated Polymer with Applications in the Implantable Medical Device Field

Crystal Rapier
Biomedical Engineering
Co-author: Esther Chen (from Dr. Wendy Liu’s lab)
crapier@uci.edu

The human immune system is designed to recognize and fight “foreign” objects, pathogens, or diseased domestic cells. Part of the immune system’s defenses is an inflammatory response in which the area surrounding damaged or infected tissue becomes red and swollen (inflamed), due in part to white blood cell activation in the presence of a foreign object or laceration. Inflammation poses a major problem for the successful function of essentially “foreign” implanted medical devices such as vascular stents, heart valves, and hip or knee replacements. Conventional implants are permanent and made from surgical-grade stainless steel, which does not provoke an immune response. However, there is a need for non-permanent implant solutions.

Biodegradable/bioabsorbable materials are a relatively new movement in the multibillion-dollar implantable medical device field. These materials break down into smaller absorbable subunits in the body after the implant has served its purpose. Despite the promise of non-permanent implants, the subunits of degraded bioabsorbable materials can induce a localized inflammatory immune response. This can be seen with poly(lactic-co-glycolic acid) (PLGA), a commonly used bioabsorbable material currently employed in degradable stents and sutures, among other things. PLGA degrades into nontoxic lactic and glycolic acid subunits. Although PLGA is a safe material for implantation into the body, accumulation of its acidic subunits in a localized area can cause irritation, thereby invoking an inflammatory response.

In previous studies, an immunomodulatory protein called CD200 coated on polystyrene surfaces was found to reduce the activation of white blood cells and their subsequent secretion of inflammatory factors. Our work aims to combine the bioabsorbable property of PLGA with the anti-inflammatory property of CD200. We are using a microfluidic system to create PLGA material that displays CD200; this system is designed to conjugate functionalized CD200 to PLGA to form protein-polymer hybrid particles. We are assessing whether CD200 can be effectively displayed throughout the particle lifetime and effectively inhibit white blood cell activation. In the future, we hope to create vascular stents from this anti-inflammatory CD200-PLGA material, which would finally give patients a non-permanent, biodegradable alternative.


Session 2


A New Visual Teaching Aid: DanceChemistry

Gidget Tay
Chemistry
Co-author: Kimberly D. Edwards (Chemistry Lecturer, UCI)
tayg@uci.edu

A visual teaching aid, the DanceChemistry video series, has been developed to teach fundamental chemistry concepts through dance. These educational videos present chemical interactions at the molecular level using dancers to represent molecules. The DanceChemistry videos help students visualize chemistry ideas in a new and memorable way. This project involves students, staff, and instructors in both the chemistry and dance departments and provides a platform for a diverse set of people to work together. Students who participate in these videos play an active role in their own education while providing a visual teaching aid for their peers to use. These videos also give graduate students who are interested in pursuing a career in teaching an opportunity to create an educational tool for their own future use. I surveyed 1,200 undergraduate chemistry students who watched my videos in class; the students who watched the videos scored 30% higher on a short quiz than their classmates who did not see the video. More than 75% of the students said they would like to learn chemistry concepts using these videos. The DanceChemistry videos are broadly disseminated for free on YouTube; this broad distribution enhances the infrastructure for education at secondary schools and provides communities underserved in science with free instructional videos that can be used to improve scientific understanding from a creative viewpoint.


Enhancing Working Memory Training with Concurrent Transcranial Direct Current Stimulation

Jacky Au
Education
Co-authors: Susanne Jaeggi (School of Education), Martin Buschkuehl (MIND Research Institute), Steven Small (School of Medicine), Susan Duncan (School of Social Sciences), Julian Quintanilla (School of Biological Sciences), Claire Arakelian (School of Social Sciences), Kimberly Bunarjo (School of Social Sciences)
jwau@uci.edu

Electrical stimulation of the brain is an ancient technique that dates as far back as the first century, when Roman physicians wrapped electric fish around their patients’ heads as a remedy for headache. Though the methods of the ancient Romans leave something to be desired, modern-day science validates the use of electrical stimulation to modulate neural activity in the brain. The goal of the present research is to explore the use of one specific stimulatory method, transcranial direct current stimulation (tDCS), in improving working memory and executive control in healthy adults.

The tDCS is administered by attaching a stimulating electrode over the left dorsolateral prefrontal cortex, a region known to subserve working-memory performance, and a grounding electrode over the contralateral supraorbital region (above the right eyebrow). The stimulating electrode is thought to increase the resting membrane potential of target neurons and thereby increase brain plasticity. By introducing computerized working-memory training (WMT) during this period of heightened plasticity, we hope to augment the effects of WMT.

Participants come into the lab and are randomized into one of two conditions. The first undergoes a week-long intervention of concurrent tDCS and WMT. The second undergoes the same protocol but with sham tDCS instead of active stimulation. In the sham condition, current is administered only for a brief ramp-up period at the beginning and is then shut off without the participant’s knowledge. Sensation is often imperceptible after the initial ramp-up of current, and therefore participants are usually unable to differentiate between sham and active conditions. Pre- and post-tests are conducted one day immediately before and after the intervention, assessing performance on untrained working-memory tasks as well as broader executive abilities such as response inhibition and interference control. Previous work using an identical WMT regimen (Buschkuehl et al., 2014; Hsu et al., 2013) has shown significant improvements in these domains.

Data collection is still ongoing, but we hypothesize that our sham+WMT group will replicate previous findings, and that our active+WMT group will show significant improvements above and beyond those of the sham group. As our progress as a society becomes increasingly bottlenecked by the ability of each of its members to handle information overload effectively and efficiently, studies such as this one become increasingly pertinent. Though modest, our results will pave the way for future studies of brain stimulation and cognitive enhancement that may literally revolutionize the way we think.


Can Everybody Be a Math Person? The Links Between Competitive Classrooms and Academic Identity

Peter McPartlan
Education
Co-author: Meeta Banerjee (Ph.D., UCI Department of Education)
pmcpartl@uci.edu

Increasing the number and quality of U.S. STEM graduates has become a national priority (National Center for Education Statistics, 2013). In order to do so, research has begun to focus on providing students with opportunities to develop interest in math and science at a young age. The task of identifying classroom conditions that optimize interest and effort has been aided by the Eccles Expectancy-Value model, which takes into account how future utility, personal importance, and cost of succeeding influence perceptions of value and importance in different subjects (Eccles, 2005). Notably, this model has been used to show that among early adolescents, more competitive, performance-oriented classrooms are associated with declines in math value (Anderman et al., 2001). However, it is critical to understand how the competition and ability comparisons typical of performance-oriented classrooms affect individual components of an academic identity, specifically in the math domain.

In the current study, we hypothesize that in highly competitive classrooms, social comparisons between students will be more salient, which will amplify the importance of math within the academic identities of high-achieving middle-school students while degrading the importance of math within the identities of low-achieving students. This would suggest that competitive, performance-oriented classrooms are limited in their capacity to promote math value for all students. The study uses longitudinal survey data from the Michigan Study of Adolescent and Adult Life Transitions, examining a sample of approximately 3,250 adolescent students in the spring of 6th grade. The math classroom competition measure was created from student perceptions of competition. Math social comparisons as indicators of math identity were formed from questions such as “Knowing how well I do in math compared to other kids my age has helped me decide that I’m good at math.” Academic identity comprised student ratings of value, importance, and utility.

Preliminary analyses show a positive correlation between classroom competition and social comparisons. Results also showed that students who use social comparisons as indicators of a positive math identity have greater math importance and math utility, but lower math anxiety. This suggests that classrooms high in competition may be making social comparisons more common as well as more salient indicators of math identity. Such a result stands to contribute new evidence to existing research suggesting that mastery-oriented and performance-oriented classrooms produce significant differences in student motivation and achievement outcomes. Implications for the conditions under which competition may facilitate academic identity will be discussed.


Beyond the Bell: Findings from an Afterschool STEM Learning Initiative

Rahila Simzar
Education
Co-authors: Deborah Vandell, Pilar O’cadiz, Valerie Hall, Andrea Karsh
rmunshi@uci.edu

In 2009 the Obama Administration launched the Educate to Innovate initiative to improve American students’ math and science achievement. Bolstered federal investment in science, technology, engineering, and math (STEM) education resulted in an increased awareness and dedication to improving students’ STEM learning opportunities. Out-of-school time (OST) learning has surfaced as an untapped resource in advancing these goals. Recent efforts have exposed the potential for STEM learning in OST contexts. These added STEM learning opportunities positively impact students’ interest in science in the real world, which ultimately motivates STEM career choices. Thus, recent initiatives have focused on improving students’ STEM OST learning opportunities.

This study took place during the 2013-14 implementation of the Power of Discovery: STEM2 learning initiative–a large systemic effort to improve STEM teaching and learning in California afterschool programs. We use pre- and post-initiative survey, interview, and observation data to characterize the effectiveness of the initiative. Mixed methods analyses examine the relations between STEM professional development experiences, program staff beliefs, and student outcomes over an academic year. During the initiative, staff reported increases in staff discussions about STEM activities, staff discussions with classroom teachers about STEM activities, and increased participation in STEM-related events with parents. This network building among program staff and among program staff and classroom teachers was linked to gains in students’ math and science efficacy, as well as students’ beliefs about their likelihood of future success. Time spent in STEM activities also was linked to gains in math efficacy. Interviews with program staff suggested that providing staff opportunities to build STEM networks with each other, with classroom teachers, and with parents enhanced their efforts to improve STEM learning. Observations conducted at sites that received initiative-based treatment offer rich vignettes characterizing high and low quality STEM activities.

This study offers a multi-faceted view of what occurred during a unique initiative. Findings point to the value of providing afterschool program staff with opportunities to connect with school administrators and classroom teachers and to build relationships with parents around the quality and purpose of the STEM learning activities they lead in the afterschool program. Further, both quantitative and qualitative evidence from this study highlights how the development of vibrant communities of practice, where staff share knowledge of promising practices and build networks to gain greater access to curriculum and professional development resources, can grow staff confidence in implementing STEM activities, resulting in positive student outcomes.


Students as Policymakers: Engaging Middle School Students in Youth Participatory Action Research

Christopher Stillwell
Education
cstillwe@uci.edu

Solutions to the countless crises middle schools face are typically proposed by policy makers with little actual classroom experience. Meanwhile, essential insights from students on the issues they experience firsthand every day are largely ignored. Youth participatory action research (YPAR) can help. In YPAR, social researchers collaborate with youth, training them to identify and research concerns in their schools and take leadership to address them. Yet numerous factors can pose obstacles, including factors related to the youth’s developmental capacity to partake in such activity, as well as facilitator factors that can inadvertently cause more harm than good.

My work offers a critical investigation of a case of YPAR in a university-middle school partnership at a school identified as in need of turnaround because of an entrenched culture deemed unsafe and unconducive to learning. The analysis of this undertaking answers a call for more transparent descriptions of tensions and contradictions that participants grapple with in such projects, providing potential YPAR facilitators with the insight to identify and prepare for problems long before they derail a project, helping to ensure that efforts do not go to waste or unintentionally cause harm to the communities they are meant to serve.

Data come from facilitators’ written responses to open-ended survey prompts, students’ responses to Likert scale survey items, and extensive field notes that were checked against the field notes of colleagues to gain multiple perspectives. Analysis began with review of all data in a preliminary round of in vivo coding intended to allow themes to emerge. Data were then revisited to further investigate patterns and recurring themes. In the final phase, blocks of theme-related data were examined in relationship to one another and also in relationship to the common tensions identified in existing literature on challenges of implementing YPAR.

This research shows that factors related to youths’ behavior, mismatched expectations, and lack of voice in society can put them at odds with YPAR’s approach to collaboration and research. Facilitators’ ways of engaging in YPAR can similarly complicate participation, such as facilitators’ tendency to presume adult norms for participation and to engage in discussions at a level that inadvertently reinforces the norm of youths’ lack of voice in adult society. This research points toward specific practices related to leadership and preparation that YPAR facilitators can follow in order to address these complicating factors effectively.


Language, Culture, and Teacher Identity: A Longitudinal Investigation of Teachers’ Development of Equity-Minded Mathematics Practices

Cathery Yeh
Education
catheryy@uci.edu

Spend some time in any major school district in America and you will notice a significant demographic gap between teachers and the students they serve. Today, nearly half of the students in the nation’s public schools are culturally and linguistically diverse. However, culturally and linguistically diverse (CLD) teachers represent only 18% of the teaching workforce. Studies have shown that teacher diversity reinforces teacher effectiveness: CLD teachers serve as advocates, cultural brokers, and role models of achievement, and they are more likely to foster culturally relevant teaching and to confront issues of racism through their teaching. These practices have been shown to lead to improved outcomes in test scores, attendance, high school completion, and college attendance.

However, very little research has examined the experiences and practices of CLD teachers in the beginning stages of teaching. The first few years are critical, particularly for CLD teachers: teacher turnover during initial practice is acute. Furthermore, this period is an intense identity experience in which new teachers engage in identity work as they strive to make sense of what it means to be a teacher and how to teach.

This study examines the factors that impact the retention and attrition of CLD elementary teachers, as well as the factors that support or impede their development and implementation of reform-based, equity-minded mathematics practices. Four CLD teachers, all of whom articulated an equity-minded vision of mathematics teaching at the end of their teacher preparation, will be followed from their year in the teacher education program through their first two years of teaching in high-needs, linguistically rich Latin@ communities. The purpose of this study is to understand the mechanism of elementary mathematics teachers’ learning: how and why elementary teachers develop practices of mathematics teaching while participating in multiple communities of practice. Elementary mathematics has become a highly politicized gatekeeper to high-level mathematics and science. Identifying the mechanisms for the development of equity-minded mathematics teaching, as well as the challenges, can support the development of a theory of beginning CLD teachers’ development that can be used to inform teacher preparation and other initiatives.


Engaging elementary school students through the arts and inquiry to improve scientific learning

Doron Zinger
Education
dzinger@uci.edu

What would happen if we ignited student scientific learning? What would it be like if students were excited about science? How can we improve on the thirty percent science proficiency rate of 8th graders? What if a science curriculum integrating art and inquiry could address these questions? The Equitable Science Curriculum for Integrating Arts in Public Education (ESCAPE) brings engaging inquiry-based and visual and performing arts (VAPA) approaches to science instruction through a partnership between researchers from UCI, curriculum designers from the Orange County Department of Education, and artists from the Segerstrom Center for the Arts. The project focuses on improving science learning for 3rd-5th grade students in high-need schools with large populations of English Language Learners, targeting 21,000 students over the course of three years. The ESCAPE program was launched this summer through a week-long institute attended by one hundred and fifty teachers from twenty-eight schools and nine districts in Orange County. Institute participants went back to their classrooms this school year equipped with tools to teach six engaging science lessons. These lessons focus on addressing common scientific misconceptions, as well as general grade-level scientific knowledge based on the new Next Generation Science Standards. Curriculum and instruction are designed to reduce student cognitive load to help make content more accessible. Three lessons will be delivered through an inquiry approach and three through a VAPA approach. Which will improve student learning more?

Students in the project will be pre- and post-tested for each set of lessons. The primary goal of the study is to determine which approach, VAPA or inquiry, engages students more and leads to greater student learning. Additionally, the data collected will be analyzed to determine whether there is a difference in student learning based on the sequence of instruction, that is, whether inquiry or VAPA comes first. Addressing students’ scientific learning and engagement in earlier grades can have a significant impact on later learning. Positive outcomes from this project can have significant broader impacts on science instruction on a larger scale through the development of an online certificate course based on the science curriculum developed. The online curriculum will initially be available to teachers in Orange County; however, the goal is to ultimately make it available on a larger scale.


Session 3


Governance in Virtual Worlds: How Developers Decree Permissible Virtual Citizenship

Daniel Gardner
Social Science – Medicine, Science and Technology Studies
dlgardne@uci.edu

Video game and virtual world developers and creators take on a role similar to that of “the state” in relation to their players or subscribers. This project looks at how these developers take on their governing role specifically in relation to the creation of avatars. Avatars are the embodiment of the player within these virtual settings, and their form may evolve over time. This case study of avatar creation brings together analysis of ten virtual avatar creation offerings to describe how developers limit who players are allowed to be within their sphere of governance. I look specifically at how the range of options, and in particular how available base options for embodiment may in turn offer ranges of secondary or tertiary customization of appearance, ability, and even mannerisms, can shape or determine the roles within a world that a player is given permission to take on. Bringing into concert the work of preceding virtual ethnographers like Tom Boellstorff and Celia Pearce, I describe how the choices developers make about what to allow players to be within their world can reflect, reproduce, or even enable resistance to physical-world social and cultural norms and limitations.


A Problem of Free Riders: Corporate Shuttles and Gentrification

Brian Asquith
Economics
basquith@uci.edu

Protests have rocked the San Francisco Bay Area since fall 2013 over private buses that ferry tech workers from the city to job sites throughout the region. Activists blame these buses and the workers they carry for rising eviction rates and rent increases that squeeze out the poor and middle classes. Actual evidence for this is sparse or anecdotal, and an investigation into what role the buses play in gentrification can serve the public interest in two ways.

The first is a better understanding of exactly how the success of the tech industry impacts the local economic ecosystem in which it operates. Few industries have reshaped society as much as the tech industry, and that influence extends from how we catch a ride to the effect on the social fabric of its community. Further, the initiation of the private bus system is an ideal setting in which to study the causal role of transit in gentrification, a problem economists have been working on for decades. I plan to measure the effect the shuttle services have had on rent increases and socioeconomic shifts in San Francisco.

Secondly, the shuttle system sheds new light on an old issue: rent control and affordable housing. San Francisco has one of the most restrictive rent and tenant control regimes in the country. Previous work in economics has shown that rent control creates more losers than winners in terms of access to affordable housing, but it remains unexplored whether rent control is a good policy for preventing gentrification, i.e., if the policy goal is to preserve the “character” of a neighborhood. Rent control may slow the rate of socioeconomic change in a neighborhood, but it may also increase the incentive for landlords to replace their tenants with higher-paying ones by evicting the old, lower-paying ones. Assessing which effect is stronger will be a major contribution to our understanding of the impact of rent control.

My work will contribute to the public’s understanding of gentrification and rent control, and to the economics literature, by studying rent control, evictions, and gentrification with a clearly identified source of variation. As cities all over the developed world grapple with the rise of wealthy “creative class” workers who want to live in urban environments, San Francisco’s policy responses to balance the needs of new and old residents and industries will be highly relevant and much studied.


After Occupy: Exploring the Personal and Cultural Outcomes of the Occupy Movement

Megan Brooker
Sociology
brookerm@uci.edu

This research project examines the personal and cultural consequences of the Occupy Movement, particularly its impact on individuals’ trajectories of subsequent movement participation and its influence on the broader social movement sector through movement spillover and diffusion. Scholarship on social movement outcomes often focuses on the political and policy implications of movements, but recent work has highlighted the importance of the personal and cultural consequences of social movements as well. My research contributes to this body of literature on social movement outcomes and explores the impact of the Occupy Movement on individual participants and social movement communities. Although the Occupy encampments were mostly ephemeral in nature, I hypothesize that the movement’s participatory democratic approach, confrontational tactics, and the high intensity of involvement that it compelled from participants may have led to more lasting effects and encouraged subsequent movement engagement. In addition, if Occupy activists diffused into other social movement organizations post-Occupy, this is likely to have resulted in movement spillover of personnel, ideas, and tactical and strategic repertoires.

This is a qualitative research study based on data collected through in-depth, semi-structured interviews with individuals who participated in the Occupy Movement in Oakland, Berkeley, Portland, and Honolulu. Data collection for the study is ongoing, with 29 interviews conducted to date. My preliminary findings indicate that the Occupy Movement has achieved persistent impacts via its influence on personal trajectories of participation, interpersonal and SMO networks, and social discourse. It also produced a notable effect on social movement communities by sparking important strategic debates among activists on issues related to the most effective movement goals, tactics, and organizational structures.

This case study of the Occupy Movement will expand our scholarly awareness of how social movements shape the lives of individuals and contribute to the vigor of a democratic civil society. This research will be beneficial to academic, activist, and political actors, as it will help develop a richer understanding of the personal and cultural impacts of social movements, which are often difficult to measure and easy to overlook but play a pivotal role in social change. It will also be interesting to a wider audience, as it will answer the question of what happened after the widely publicized Occupy encampments ended and whether participants have continued their involvement in other social change activities.


Interactional Social Capital: How the Influence of Parent, Peer and Teacher Support Varies Based on Structural Context

Alma Garza
Sociology
angarza@uci.edu

The theory of social capital has stimulated an enormous debate regarding the type of relationships that best facilitate the academic performance of students. Findings consistently indicate that students enjoy more positive educational outcomes when they count on supportive parents, have academically aspiring peers or hold good relationships with their teachers (Coleman 1988; Croninger and Lee 2001; Crosnoe, Cavanagh and Elder 2003). In recognizing the importance of how structural factors impact micro-level interactions, more recent literature has also traced how neighborhood or school contexts mediate or are mediated by the influence of social capital on students’ academic performance (Crosnoe, Cavanagh and Elder 2003; Roscigno 1998; Ainsworth 2002). However, these studies do not adequately capture how the context component of social capital interacts with micro-level processes because they study these factors separately or can only understand the impact of one factor while holding the others constant.

I draw on the restricted-use 2002 Educational Longitudinal Study (ELS:2002) to understand the conditions under which members of black, Hispanic, Asian, and immigrant groups are able to enroll in college. I take a holistic assessment of how parent, peer, and teacher social capital interact with neighborhood and school context in order to determine the specific pathways or conditions under which these sources of capital enable enrollment in college for underrepresented youth. Fuzzy Set Qualitative Comparative Analysis (fs/QCA) is the primary analytic method employed in this study because it is uniquely suited to this objective: it operates on set-theoretic principles and allows me to discover which subsets of conditions produce a given outcome (Ragin 2008).

Preliminary findings (based on the public-use version of the dataset) indicate that white and Asian students depend on only one form of social capital to enroll in college, whereas black and Hispanic students need at least two to successfully make the transition from high school to college. Furthermore, first-generation immigrant students also depend on more social capital than their second- and third-generation counterparts. Ongoing analyses will reveal how these combinations are affected by the socioeconomic status of students’ neighborhoods and the high schools in which they are enrolled. I argue that the social uplift predicted by increased social capital is also conditioned by proximate arrangements of resources and privileges. Hence, fruitful understandings of how social capital advantages particular groups must not simply be grounded within relevant structural contexts but also studied as interactional components.


Communicating power in a male-dominated political system: What do the words of Hillary Rodham Clinton reveal?

Jennifer Jones
Social Sciences
jonesjj@uci.edu

Hillary Clinton is arguably the most prominent female figure in American politics today. How has she succeeded in a man’s world? Does Ms. Clinton talk more “like a man” (linguistically speaking) as her political ambitions have grown? This project uses Ms. Clinton’s speech over the course of her public career to discover (1) how her linguistic patterns vary according to her political role, (2) how her linguistic style compares to that of high-ranking male politicians, and (3) how her self-image and self-presentation have consciously positioned her for success. I analyze Ms. Clinton’s speech in more than 500 interviews and debates conducted between 1987 and 2013 and utilize a text analysis program, Linguistic Inquiry and Word Count (Pennebaker, Francis, & Booth, 2007), to uncover the linguistic patterns of Clinton’s speech over time. Results indicate significant shifts in her use of pronouns; cognitive, social, and emotional words; and function words (including I-words, prepositions, articles, verbs, and auxiliary verbs) from early interviews to later ones. The direction of these shifts supports the notion that Ms. Clinton’s language has become more masculine over time. Results also indicate strategic shifts in language during her successful Senate campaign in 2000, as well as her unsuccessful bid for the Democratic nomination for President in 2008. Clinton’s speech increasingly resembles that of the male Presidential candidates examined in prior research. In a male-dominated political arena, it is possible that female politicians conform to male speech patterns to be seen by voters as competent and trustworthy.
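The core of this method is dictionary-based word counting: each text is scored by the percentage of its words that fall in each category. A toy sketch of the idea (with abbreviated, hypothetical word lists, not the proprietary LIWC dictionaries):

```python
# Toy illustration of LIWC-style category counting. The word lists below are
# hypothetical stand-ins, not the actual LIWC dictionaries.
import re

CATEGORIES = {
    "i_words":      {"i", "me", "my", "mine"},
    "articles":     {"a", "an", "the"},
    "prepositions": {"in", "on", "of", "to", "with", "for"},
}

def category_percentages(text):
    """Return the percentage of words in each category for one text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1
    return {cat: 100 * sum(w in vocab for w in words) / total
            for cat, vocab in CATEGORIES.items()}

print(category_percentages("I believe in the promise of my country."))
```

Tracking such percentages across interviews ordered in time is what reveals the linguistic shifts reported here.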

Women continue to be disproportionately underrepresented in American politics, making up only 18.5% of Congress and 10% of state governors. Thus, the importance of this project, to reveal how a successful female politician compares and competes with her male counterparts and to uncover hidden biases in political life that may disadvantage women, cannot be overstated. Clinton’s career illustrates the contortions women undergo to compete in a profession still dominated by men and by a male model. Such insight has significance not only for women and members of other marginalized groups in American politics, but also for any citizen interested in promoting a more representative democracy in an age of new media.


The External Congestion Costs of Light-Duty Trucks

Kim Makuch
Economics
kmakuch@uci.edu

Traffic congestion is a large problem in the United States. Wasted time and fuel as well as increased pollution are all costs of congestion. Summing these costs, the Texas Transportation Institute found the annual cost of congestion to be $121 billion nationally (2010 Annual Urban Mobility Report).

Fundamentally, congestion occurs when freeway users are so numerous that drivers must slow down to travel safely. Economists have long advocated the use of Pigovian taxes to combat congestion. They argue that drivers impose a negative externality on other drivers by slowing them down. Therefore, the equilibrium number of freeway users will be larger than the socially optimal level because each driver considers only the private cost of travel — personal time and fuel costs. Drivers do not weigh the full social cost of freeway travel which includes the delays suffered by other drivers. A toll equal to the cost imposed by one driver on other drivers induces the efficient level of traffic because drivers then account for the entire social cost of travel (Brueckner 2011). Congestion tolls are beginning to gain public favor and have been employed internationally in London, Stockholm, Singapore and Milan and domestically in the form of High Occupancy Toll lanes.
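In the standard formulation (following Brueckner 2011), if T drivers each bear private travel cost c(T), total travel cost is T c(T), so the marginal social cost of one more trip is

\[
\frac{d}{dT}\bigl[T\,c(T)\bigr] = c(T) + T\,c'(T),
\]

and the optimal congestion toll equals the external term, τ* = T c′(T): the delay cost one driver imposes on all others.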

This paper investigates the congestion externality imposed by vehicles of different types. In recent years, the U.S. vehicle fleet has undergone a shift away from cars and toward pickup trucks, sport utility vehicles (SUVs), and vans. Together these vehicles are categorized as light-duty trucks. In 1994, light-duty trucks made up 28% of the vehicle fleet, but by 2012 the proportion of light-duty trucks had increased to 49% (U.S. DOT). Because light-duty trucks are heavier and taller than cars, they may require more freeway space and impose larger freeway capacity costs. As freeway capacity is exhausted, congestion develops. Therefore, vehicles that cause a larger reduction in available capacity impose larger congestion costs on other vehicles. Because vehicles of different types contribute differently to congestion costs, optimal congestion tolls will vary by vehicle type.

Using vehicle trajectory data from Interstate 80 in Emeryville, CA, I find that, on average, a light-duty truck uses 4% more freeway capacity than a car. The implications for congestion are explored. The results will help policymakers in setting optimal congestion tolls and in constructing alternative large-vehicle policies.

References
Brueckner, Jan K. Lectures on Urban Economics. United States: MIT Press, 2011. Print.


Social Movement Use of Strategies of Economic Disruption

Maneesh Arora
Political Science
maneesha@uci.edu

This project investigates the factors that lead social movements to achieve successful policy outcomes, paying particular attention to strategies that disrupt the economy. We build upon existing literature to create an empirical model for assessing the effectiveness of social movement organizations. Using empirical evidence from past and current American social movements, we investigate and measure the effect that strategies of economic disruption have on producing successful policy outcomes. We argue that a social movement’s ability to achieve policy goals is greatly enhanced by its ability to disrupt the relevant economic institutions. By affecting the bottom line of an economic institution, a social movement organization can exert influence over economic elites, who can then influence the relevant policy decision-makers. The effect of strategies of economic disruption has grown with the increasing influence of money in the political system and the increasing influence of economic elites over policy decision-makers. The ultimate goal of this project is to gauge the potential for the Black Lives Matter movement to achieve its intended policy outcomes, and to prescribe actions that the BLM movement, and other current and future social movement organizations, can take to maximize policy concessions.


Session 4


Kite Wind Energy Plant Ground Station Design and Implementation

Jing Guo
Electrical Engineering
Co-author: Florian, Electrical Engineering Ph.D
jingg3@uci.edu

Clean, alternative energy is more important and more commercially available than ever, but it still accounts for less than 5% of global energy production, owing to huge investment costs and low energy density. Over time, wind turbine structures have generated more power by becoming taller, heavier, and more expensive. In order for clean energy sources like wind to gain mass adoption, they must become as cheap and available as traditional fossil fuels. It’s time to rethink wind power.

An energy kite operates on the same principles as a conventional wind turbine, but it is tethered to the ground like a kite. A kite wind energy plant consists of a parachute winched to a Winch Generator System (WGS) on the ground. Energy is produced by a tethered airfoil that flies in large circles at up to 1,000 feet altitude, where the wind is much stronger and more consistent than the winds reached by conventional systems. As the kite flies in a yo-yo process, air moving across its rotors forces them to spin, producing electricity that travels down the tether and into the grid. Remarkably, an energy kite can eliminate 90% of the material used in conventional wind systems while generating the same amount of energy.
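The payoff of flying fast, crosswind circles in stronger wind can be seen in Loyd’s classic upper-bound estimate for crosswind kite power (a simplified idealization, not this project’s model):

\[
P \approx \frac{2}{27}\,\rho\, A\, v_w^{3}\, C_L \left(\frac{C_L}{C_D}\right)^{2},
\]

where ρ is air density, A the wing area, v_w the wind speed, and C_L/C_D the kite’s lift-to-drag ratio. The cubic dependence on wind speed is why reaching the stronger, steadier wind at altitude matters so much.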

However, the power kite has some inherent deficiencies to set against all of the advantages above. The energy generated by a power kite fluctuates strongly and is easily disturbed, whether by errors inside the system or by unpredictable atmospheric conditions outside. Furthermore, when generation mode is complete, the Kite Wind Energy Plant Ground Station must pull the kite back safely; during this pull-back mode, errors cannot be tolerated. The Kite Wind Energy Plant Ground Station must therefore keep the power kite stable, generate high-quality electricity, and reject all kinds of disturbance, both internal and external.

Fortunately, the OCC (One-Cycle Control) converter is well suited to this challenge. In recent years, the UCI Power Electronics Laboratory has proposed several new topologies for multi-level converters. These converters have many advantages over conventional converters, such as reduced input disturbance, constant power flow, and reduced energy storage requirements.
With the OCC converter, the capacity factor of kite wind energy is higher, so capital investment is much lower and, ultimately, the price of energy will be lower.


Timestamp-Free Synchronization for Wireless Body-Area Networks

Rohan Ramlall
Electrical Engineering
rramlall@uci.edu

Telemonitoring of biosignals is a growing area of research due to the aging world population. Telemonitoring utilizes a wireless body-area network (WBAN) consisting of wearable biosignal sensors (i.e., wearable technology) equipped with ultra low power radios. The measured data from each sensor on the patient is sent to a central communication node (e.g., smartphone or personal computer), which then sends the data to a healthcare provider via the internet. Thus, the patient’s health is monitored continuously and remotely in real-time without the need for the patient to visit their doctor.

One of the major constraints in WBANs is power consumption, since these sensors are meant to be used for weeks, months, and even years. The power consumed by wirelessly transmitting the data to the central communication node is orders of magnitude higher than the power consumed by any other operation, and thus, must be minimized.

To enable real-time monitoring of the biosignals, it is critical to have accurate timestamped data from the sensors in the WBAN. For example, if a sensor uses a typical low cost 32,768 Hz crystal oscillator with a frequency stability of 100 ppm, the time offset can be as high as 259 seconds after 1 month of use without any synchronization algorithm. Most of the synchronization algorithms presented in the literature require the exchange of dedicated timing messages containing digital timestamps on the network. However, this is not feasible for WBANs due to the high power cost associated with transmitting messages.
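The worst-case offset quoted above follows directly from the frequency-stability bound; a quick check (illustrative arithmetic only):

```python
# Worst-case clock offset of a 100 ppm oscillator after one month.
ppm = 100e-6              # frequency stability, as a fractional error
month_s = 30 * 24 * 3600  # one month in seconds
print(ppm * month_s)      # -> 259.2 seconds of accumulated drift
```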

The contribution of this research is a novel timestamp-free synchronization algorithm applicable to power-constrained networks such as WBANs. In the proposed algorithm, synchronization is embedded in existing network messages, so there is no additional overhead or power cost from exchanging dedicated timing messages.


Building energy: how to make a fuel cell and a battery work together to generate electricity effectively?

Gia Nguyen
Mechanical and Aerospace Engineering
Co-author: Jack Brouwer (UCI)
gia.nguyen@uci.edu

What is the best strategy for operating a fuel cell-battery system to generate electricity for a building? Traditionally, large central power plants produce the electricity used in buildings with good efficiency and reliability. However, with diminishing fossil fuel reserves and the environmental impact of energy production, it is important to produce electricity with higher efficiency and reliability while cutting harmful emissions. A different approach is to use small, efficient engines near buildings. This method can increase the reliability of the electricity supply and reduce pollutant emissions. To produce electricity for a building, an energy system needs to vary its power output quickly to accommodate the dynamic electrical demand.

Fuel cells are one candidate for distributed generation. Fuel cells can produce electricity with high efficiency and low pollutant emissions, but can only vary their power output slowly. In contrast, batteries can store and discharge electricity quickly. Batteries can act as buffers for fuel cells, storing electricity when demand is low and discharging it when demand is high. The combination of a fuel cell and a battery creates a system with high efficiency and the ability to vary its electrical output. My research studies the coordination needed for a fuel cell-battery system.

First, we collected high-resolution electrical demand data from buildings in Southern California. With these data, we can simulate how the fuel cell-battery system must operate to satisfy the electrical demand.

Second, we developed a controller for the fuel cell-battery system. Using weather data and the time of day, the controller forecasts the building's electrical consumption for every hour over a 24-hour period. With this forecast, the controller determines how to run the fuel cell with only slow variations in power output, using the electricity produced by the fuel cell either for the building directly or to charge the battery. The controller attempts to keep the fuel cell and battery within their operating regimes. The result of the optimization is a set of power set points for the fuel cell and battery for every hour over the 24-hour period.

At the next time step, the controller updates the forecast of electrical demand and reruns the optimization. By continually updating the forecast and rerunning the optimization, the controller developed in this research automatically coordinates the operation of the fuel cell-battery hybrid system, providing electricity for the building while keeping all equipment within its operating regime.
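This update-and-reoptimize loop is a receding-horizon (model predictive control) pattern. A minimal sketch of the idea follows; it is illustrative only, and the forecast model, ramp limit, and battery bounds are placeholder assumptions, not the authors' actual controller:

```python
# Receding-horizon dispatch sketch for a fuel cell-battery system.
# Illustrative only: forecast, ramp limit, and battery bounds are placeholders.

RAMP_LIMIT = 5.0                 # max fuel cell ramp per hour (kW), assumed
SOC_MIN, SOC_MAX = 0.0, 50.0     # battery state-of-charge bounds (kWh), assumed

def forecast_demand(hour):
    """Placeholder 24-hour hourly demand forecast (kW) from time of day."""
    return [20.0 + (10.0 if 8 <= (hour + h) % 24 <= 20 else 0.0)
            for h in range(24)]

def plan_set_points(demand_forecast, fc_power, soc):
    """Hourly set points: ramp the fuel cell slowly toward demand; the
    battery absorbs surplus or covers the deficit, within its bounds."""
    plan = []
    for demand in demand_forecast:
        step = max(-RAMP_LIMIT, min(RAMP_LIMIT, demand - fc_power))
        fc_power += step
        soc = max(SOC_MIN, min(SOC_MAX, soc + (fc_power - demand)))
        plan.append(fc_power)
    return plan

# Receding horizon: apply only the first set point, then re-forecast next hour.
fc_power, soc = 20.0, 25.0
for hour in range(48):
    plan = plan_set_points(forecast_demand(hour), fc_power, soc)
    fc_power = plan[0]
    soc = max(SOC_MIN, min(SOC_MAX, soc + (fc_power - forecast_demand(hour)[0])))
```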


Products of two Cantor sets and application to the Labyrinth model

Yuki Takahashi
Mathematics
takahasy@uci.edu

A Cantor set is a bounded subset of the real line which contains no interval and has no isolated points. Physicists and chemists long believed that such sets never appear in nature, but it was recently discovered that quasicrystals, materials that gave rise to a paradigm shift in materials science, have Cantor sets as their spectra. Investigating the spectral properties of quasicrystals has long fascinated physicists and chemists, but even numerical investigation is known to present a challenge, due to the aperiodic structure of quasicrystals.

Recently, it was found that many spectral properties long considered impossible to establish experimentally can be proven rigorously using mathematics. Nowadays, mathematics is one of the most crucial tools for understanding quasicrystals.

In our research, we consider the Labyrinth model, a two-dimensional quasicrystal model. Until now, almost nothing was known rigorously about this model, and all prior work centered on numerics. We prove several spectral properties of this model, offering physicists and chemists insight toward a further understanding of quasicrystals.
We first show that the spectrum of this model is given by the product of two Cantor sets, and that it is an interval in a certain parameter region. We then consider the so-called integrated density of states and show that it is absolutely continuous with respect to Lebesgue measure. These are the so-called scattering states; physically, this means that the particle eventually escapes to infinity.

Motivated by the fact that the spectrum of the Labyrinth model is given by the product of two Cantor sets, we then study products of two Cantor sets in more detail. It is known that to any Cantor set one can assign a real number called its thickness. In terms of this thickness, we obtain sharp estimates guaranteeing that the product of two Cantor sets is an interval. Surprisingly, it turns out that if the two Cantor sets have the same thickness, then the optimal value of that thickness is the golden mean, a number that appears throughout nature, art, and architecture; for example, it is the ratio between a diagonal and a side of a regular pentagon.
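For reference, the golden mean in question can be written down explicitly; the identities below are standard facts about the number itself, included only to illustrate the quantity, not the estimates from this research:

```latex
% The golden mean: closed form, defining identity, and the pentagon ratio.
\[
  \varphi = \frac{1+\sqrt{5}}{2} \approx 1.618, \qquad
  \varphi^{2} = \varphi + 1, \qquad
  \frac{d}{s} = \varphi
  \quad\text{for the diagonal } d \text{ and side } s \text{ of a regular pentagon.}
\]
```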

Lastly, we discuss a surprising connection between this problem and the “intersection of two Cantor sets” problem, which has been studied in many earlier papers.


Movement Anticipation and EEG: Implications for BCI-Contingent Robot Therapy

Sumner Norman
Mechanical and Aerospace Engineering
slnorman@uci.edu

Brain-computer interfacing (BCI) is a technology that can potentially be used to improve patient effort in robot-assisted rehabilitation therapy, leading to improved motor outcomes after stroke. Past studies have used BCI as a control system in which the patients' attempted movements are read at the cortical level and translated into robotic orthosis movement. For example, movement intention reduces mu (8-13 Hz) and beta (13-35 Hz) wave oscillation amplitude over the sensorimotor cortex, a phenomenon referred to as event-related desynchronization (ERD). In what can be called a BCI-contingent assistance paradigm, initial studies have most commonly used ERD as the trigger signal for providing robotic assistance to limb movement. There are, however, a few previous reports of ERD occurring in response to externally imposed movements in which the subject remained passive. This raises the possibility that the presence of ERD cannot be relied on to ensure active effort in movement training. Here we investigated how ERD changed as a function of audiovisual stimuli, overt movement by the participant, and robotic assistance. Eight unimpaired subjects played a musical computer game designed for rehabilitation therapy using the FINGER robotic exoskeleton. In the game, the participant and robot matched movement timing to audiovisual stimuli in the form of notes approaching a target on the screen, set to the consistent beat of popular music. The audiovisual stimulation of the game alone did not cause ERD, before or after training. In contrast, overt movement by the subject caused ERD, whether or not the robot assisted the finger movement. Notably, ERD was also present when the subjects remained passive and the robot moved their fingers to play the game. This ERD occurred in anticipation of the passive finger movement, with onset timing similar to that of the overt movement conditions. These results demonstrate that ERD can be contingent on expectation of robotic assistance; that is, the brain generates an anticipatory ERD in expectation of a robot-imposed but predictable movement. Therefore, in a predictable therapy environment, using ERD as an orthosis control signal does not necessarily require the patient's active engagement in the motor task, but merely the expectation of robotic movement. As such, ERD is a suboptimal measure of patient engagement in BCI-contingent robot therapy, which may limit motor outcomes after stroke. This caveat should be considered in designing a BCI-robot therapy system for enhancing patient effort in robotically assisted therapy.


Optical trapping of dielectric nanoparticles by plasmonically enhanced evanescent wave

Qiancheng Zhao
Electrical Engineering
Co-authors: Rasul Torun, Shah Rahman, Tuva Cihangir Atasever, Ozdal Boyraz
qianchez@uci.edu

We demonstrate the optical trapping of polystyrene nanoparticles by an evanescent wave that is plasmonically enhanced by gold bowtie antennas. The light intensity between the two antennas is 26 times larger than in the case without antenna enhancement, and the trapping force is boosted by two orders of magnitude with the help of plasmonic enhancement. The optical force is eight times larger than gravity. The nanoparticle shows a strong tendency to move toward the spot of highest light intensity, yielding a trapping phenomenon. Optical trapping begins by launching 785 nm light into a silicon nitride waveguide that is transparent in both the infrared and visible regions. The TE mode is preferred because of its significant tangential electric field with respect to the top surface of the waveguide, which facilitates coupling into the antennas. Due to the discontinuity of the electric and magnetic fields at the boundary, an evanescent wave exists near the waveguide surface. However, the evanescent wave decays exponentially and thus cannot interact with nanoparticles effectively. To increase the trapping efficiency, a pair of 20 nm thick gold bowtie antennas is deposited on top of the silicon nitride waveguide. By tuning the length of the antennas, resonance is achieved, and field enhancement occurs in the gap between the two antennas. Although a smaller gap leads to a stronger field, the gap is set to 30 nm for ease of fabrication. A polystyrene nanoparticle 20 nm in diameter is placed in the gap of the bowtie antennas. Due to the light intensity gradient, the particle experiences different forces at different locations, which can be characterized by the Maxwell stress tensor; the net force on the particle is then calculated by integrating over the particle surface. Simulation reveals that the vertical optical force pulls the particle down toward the waveguide, and that the transverse force pulls the particle to the center of the antenna gap; the nanoparticle is therefore trapped in the gap between the antennas. The simulation matches theoretical expectations well, because a high-intensity region attracts optically denser media but repels optically sparser media. Compared to prism-based optical trapping, the waveguide structure with antenna enhancement is more compact and robust. The lab-on-chip system can be fabricated in a CMOS-compatible way, making it promising for low-cost production.
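The force computation mentioned above is the standard Maxwell stress tensor procedure; for orientation, the textbook time-averaged form is shown below (this is the general formalism, not the authors' specific simulation code):

```latex
% Time-averaged optical force: integrate the Maxwell stress tensor over a
% closed surface S enclosing the particle (n is the outward unit normal).
\[
  \langle \mathbf{F} \rangle
    = \oint_{S} \langle \overleftrightarrow{T} \rangle \cdot \hat{\mathbf{n}} \,\mathrm{d}A,
  \qquad
  T_{ij} = \varepsilon E_i E_j + \mu H_i H_j
    - \tfrac{1}{2}\,\delta_{ij}\!\left(\varepsilon\lvert\mathbf{E}\rvert^{2}
    + \mu\lvert\mathbf{H}\rvert^{2}\right).
\]
```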


 
 

Session 5


Production of high specific activity radiolanthanides for medical application using the UC-Irvine TRIGA reactor

Leila Safavi-Tehrani
Chemical and Biochemical Engineering
Co-authors: Dr. Mikael Nilsson (PI, Department of Chemical and Biochemical Engineering), Dr. George Miller (Department of Chemistry)
lsafavit@uci.edu

Radioactive lanthanides have become an important imaging, diagnostic, and therapeutic tool in the medical field. For example, the neutron-rich samarium isotope Sm-153 has been shown to have desirable characteristics for the treatment of bone cancer. For medical purposes, however, the radioactive lanthanide isotope must be produced at high specific activity, i.e., with a low concentration of inactive carrier, so that it is beneficial for therapy and the concentration of metal ions does not exceed the maximum the human body can sustain. The objective of our research is to produce radioactive lanthanides with high specific activity in a small-scale research reactor using the Szilard-Chalmers method. The Szilard-Chalmers process separates radioactive ions, formed by neutron capture, from the bulk of non-radioactive ions. Our preliminary experimental results show a 34% decrease in the amount of lanthanide needed for a typical medical procedure. We propose an innovative experimental setup to instantaneously separate the radioactive recoil product formed during irradiation from the bulk of non-radioactive ions. The instant separation prevents the recoiled radioactive nucleus from reforming its original bonds with the target matrix and chemically separates it from the non-radioactive target matrix, resulting in a carrier-free radiolanthanide with increased specific activity. We will present methods for the preparation and synthesis of the material used for irradiations, along with the enrichment factors and extraction yields obtained in radioactive lanthanide solutions.
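Specific activity, as used above, has a simple definition worth keeping in view; the expression below is the standard one, included for clarity rather than taken from this work:

```latex
% Specific activity: activity per total mass of the element, where the mass
% includes both the radioactive product and the inactive carrier.
\[
  a = \frac{A}{m_{\text{active}} + m_{\text{carrier}}}
  \qquad \text{(e.g., Bq/g; removing carrier raises } a \text{ at fixed } A\text{).}
\]
```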
This research project is important due to its potential contribution to both diagnostic and therapeutic medicine. Developing and optimizing methods for producing high-specific-activity radioisotopes has a number of valuable outcomes and applications, such as (but not limited to): (1) local production of small amounts of radioisotopes for the local medical school and research labs to incorporate into research areas such as labeling studies and the synthesis of radiolabeled drugs; and (2) enabling larger facilities to scale up our methods and supply these radioisotopes for medical applications.


Elucidating High Resolution Structures of Amyloid-β (Aβ) Oligomers Implicated in Alzheimer’s Disease

Kevin Chen
Chemistry
kevinhc1@uci.edu

Alzheimer’s disease (AD) is a neurodegenerative disease currently affecting 5.2 million Americans. Almost all patients are elderly, age 65 and older, but about 5% of AD patients have early-onset AD, in which a genetic predisposition leads to the development of AD at a much younger age, decades earlier than in typical AD patients. Strikingly, even though AD is the 6th leading cause of death, there are no treatments, cures, or even preventive measures available. The severity of this lack of medical care is detailed in the latest 2012 World Health Organization report, in which AD is identified as a priority for global public health.

The hallmark of AD is the accumulation of amyloid plaques in the brain, which consist of fibrillar aggregates of amyloid-β (Aβ). Instead of these insoluble fibrils, the soluble aggregates of Aβ have been implicated as the toxic species of interest. The pathogenesis of these soluble Aβ higher-order assemblies (Aβ oligomers) is not well understood. Likewise, there is little structural information on these toxic Aβ oligomers. Understanding their structures is indispensable for elucidating Aβ’s pathogenesis. Unfortunately, their structural elucidation is complicated by Aβ’s innate propensity to self-associate and aggregate into multiple species. This property makes the isolation, purification, and subsequent characterization of these Aβ oligomers difficult. There is a pressing need to establish chemical models that can mimic the molecular interactions of Aβ oligomers, but are much easier to manipulate and study.

My Ph.D. thesis research in Dr. James Nowick’s laboratory in the Chemistry department utilizes a macrocyclic, peptide-based chemical model aiming to identify the atomistic structures of these elusive Aβ oligomers using high-resolution structure elucidation techniques. I design these chemical models to contain the Aβ sequence, so they can imitate Aβ’s molecular interactions. Most importantly, they also contain an N-methylated amino acid derivative that suppresses the aggregation caused by Aβ’s ability to self-associate. Specifically, I utilize my Aβ-derived chemical models to study oligomers of Aβ familial mutants. I aim to elucidate and then compare their structures with those of the native Aβ peptide to identify any structural differences that may account for the different physical properties exhibited by the Aβ familial mutants. My research will ultimately contribute much-needed structural information on Aβ oligomers. This information will generate a greater understanding of these oligomeric structures and will enable scientists in academia and industry to better devise strategies to develop therapies for Alzheimer’s disease.


Reactions with Light in the Atmosphere and Effects of Environmental Conditions

Mallory Hinks
Chemistry
Co-authors: Hanna Lignell (California Institute of Technology), Monica Brady (Georgetown University), Sergey Nizkorodov (UCI)
mhinks@uci.edu

Air pollution is a major contributor to human-driven climate change. Molecules introduced into the atmosphere from either man-made or natural sources may undergo changes as they are exposed to sunlight. Our research focuses on how atmospheric conditions such as temperature and relative humidity affect these changes. We are specifically interested in how long these pollutants remain in the environment, as well as how the molecules change as a result of interactions with sunlight. Pollutants with longer lifetimes in the atmosphere may survive long enough to be transported away from their sources (e.g., factories) and into “cleaner” areas where people may live. It is important to understand how environmental conditions affect pollutant lifetimes so that we can better predict where pollutants may end up and how they may affect human health. One type of pollutant we are particularly interested in is aerosol particles, which are composites of various molecules. Exposure of an aerosol to sunlight causes any light-sensitive molecules in it to degrade. These light-driven reactions are further complicated by the fact that aerosol particles are comparable in consistency to caramel: as the temperature and/or relative humidity changes, the consistency is altered, much as caramel becomes softer and thinner as it is heated. We hypothesize that molecules in a thick, caramel-like material such as an aerosol will move more slowly, resulting in slower reactions, including those induced by sunlight, because molecules must be able to move in order to react. We have found that as temperature decreases, molecules tend to have longer lifetimes. The implication of these results is that pollutant molecules associated with aerosol particles may remain in the atmosphere for longer periods on cold, dry days and thus travel further from their source.


Bridging the gap between structure and biological activity of beta-amyloid oligomers

Adam Kreutzer
Chemistry
akreutze@uci.edu

Neurodegenerative diseases, such as Alzheimer’s disease and Parkinson’s disease, have emerged as an epidemic among aging populations of developed nations. There are no cures or treatments to prevent or halt the progression of these diseases, which ultimately lead to death. Understanding the fundamental cause of these diseases is imperative to their prevention and to development of treatments.

The neurodegeneration observed in Alzheimer’s disease and Parkinson’s disease is thought to occur as a result of the accumulation of amyloid peptides and proteins in the brain. In Alzheimer’s disease, the peptide beta-amyloid (Aβ) aggregates to form fibrils that contain hundreds of thousands of Aβ peptides. Aβ fibrils form insoluble plaques in the brains of patients with Alzheimer’s disease and were originally targeted as the toxic aggregate that causes neurodegeneration. However, the poor correlation between plaque deposition and neuronal loss has shifted researchers’ attention to alternate assemblies of Aβ that exist en route to fibril formation, termed ‘Aβ oligomers’, as the toxic species that causes neurodegeneration in Alzheimer’s disease.

Aβ oligomers are small soluble assemblies of Aβ made up of only a few Aβ peptides. Despite mounting evidence linking Aβ oligomers to neurodegeneration, atomic resolution structures of Aβ oligomers remain elusive. This critical gap in knowledge results from isolations and preparations of Aβ oligomers that contain heterogeneous, metastable mixtures of different oligomeric assemblies, making determination of their atomic resolution structures seemingly impossible.
Our lab, among others, has simplified these issues by studying fragments of the full length Aβ peptide that are important in its assembly into aggregates. This approach has facilitated atomic resolution structural characterization of numerous oligomers of Aβ fragments by X-ray crystallography. These studies have afforded tremendous insight into the structural diversity of Aβ oligomers, and have allowed us to generate hypotheses about how Aβ oligomers form in solution and elicit their neurodegenerative effect.

My talk will focus on the approach and methodology our lab has developed to gain atomic resolution structural insight into these novel assemblies of Aβ. I will talk about the design of a chemical model system that enables these studies; and how our lab has used this system to see these otherwise unseeable Aβ aggregates. In addition, I will focus on how this research has challenged our understanding about amyloid peptide and protein assembly, and is pushing the field of neurodegenerative disease study in new directions.


Examining host-parasite communication with bioorthogonal probes

Lidia Nazarova
Chemical and Materials Physics
Co-authors: Roxanna Ochoa, Krysten Jones, Naomi Morrissette, Jennifer Prescher
lnazarov@uci.edu

Intracellular pathogens present a serious threat to human health as they can be difficult to detect and can artfully evade many host defenses. Among the most prevalent of these intracellular pathogens is Toxoplasma gondii, a parasite that infects one-third of the world’s population. In recent years, T. gondii has been discovered to express and secrete unique glycoproteins within host cells. These biomolecules likely interact with host signaling proteins and other networks to ensure parasite survival, but the extent of these interactions and their downstream physiological consequences remain unknown. To begin to examine these features, we aimed to profile the repertoire of T. gondii’s glycoproteins using a chemical reporter strategy. This strategy involves the metabolic incorporation of unnatural glycan building blocks endowed with unique chemical handles (i.e., “chemical reporters”) into target glycoproteins. Following incorporation, T. gondii’s tagged glycoproteins can be specifically detected in a second step utilizing highly selective (i.e., bioorthogonal) probes. Using this strategy, we found that the metabolic labeling of T. gondii’s glycoproteins was both dose- and time-dependent, and did not compromise parasite viability. We further identified a large, diverse set of glycosylated proteins in the parasite, including some previously unannotated proteins likely involved in modulating host-parasite interactions. Further biochemical evaluation of these glycoproteins will provide a more detailed understanding of host-parasite communications and can help develop new diagnostics and therapeutics for parasitic infections.


Seeing the unseeable in Parkinson’s disease: An atomic resolution structure of a neurotoxic oligomer of alpha-synuclein

Patrick Salveson
Chemistry
psalveso@uci.edu

Neurodegenerative diseases such as Parkinson’s disease are an epidemic among aging populations; nearly one in one hundred adults over the age of sixty will be afflicted with Parkinson’s disease. There are no known treatments to cure or halt the progression of the disease. Understanding the fundamental causes of Parkinson’s disease is imperative to prevent and treat this neurodegenerative disorder.
There is strong evidence that the molecular events leading to Parkinson’s disease involve the aggregation of the protein alpha-synuclein into plaques, termed Lewy bodies, in the brain. It has become strikingly apparent in recent years that the Lewy body aggregates of α-synuclein are not the toxic agent in Parkinson’s disease; rather, a transient species that exists en route to the plaques appears to be the culprit. These species are now known as soluble oligomers of alpha-synuclein. There is a critical knowledge gap in our understanding of the biochemical and biophysical properties that enable these oligomers to cause the neurodegeneration associated with the disease. This lack of understanding is due to the transient and heterogeneous nature of the oligomers; regardless, a detailed structural characterization of these species would be transformative to our understanding of Parkinson’s disease.

My lab has championed an approach that simplifies the heterogeneity and instability issues that result from working with full-length proteins such as alpha-synuclein. Namely, we have found that using small fragments of these proteins allows us to study their assembly at atomic resolution. These structures provide snapshots of the invisible oligomers and allow us to generate hypotheses about their modes of action on neurons.

My research involves the study of a fragment of alpha-synuclein that has been demonstrated to be crucial to the oligomers’ assembly. Through the use of X-ray crystallography, I characterized an oligomer formed by this fragment of alpha-synuclein. I have been able to demonstrate that this fragment is capable of killing human neuronal cells; further, I have observed that this fragment’s toxicity is directly related to its ability to assemble into the oligomer, echoing the properties of full-length α-synuclein. The X-ray crystal structure I solved could serve as a model with which we can begin to understand the detailed atomic mechanisms that result in the development of Parkinson’s disease. This structure is the first and only atomic resolution insight we have into the neurotoxic oligomers and could be transformative to our understanding of Parkinson’s disease.


A New Visual Teaching Aid: DanceChemistry

Gidget Tay
Chemistry
Co-author: Kimberly D. Edwards (Chemistry Lecturer, UCI)
tayg@uci.edu

A visual teaching aid, the DanceChemistry video series, has been developed to teach fundamental chemistry concepts through dance. These educational videos present chemical interactions at the molecular level using dancers to represent molecules. The DanceChemistry videos help students visualize chemistry ideas in a new and memorable way. This project involves students, staff, and instructors in both the chemistry and dance departments and provides a platform for a diverse set of people to work together. Students who participate in these videos play an active role in their own education while providing a visual teaching aid for their peers to use. These videos also give graduate students who are interested in pursuing a career in teaching an opportunity to create an educational tool for their own future use. I surveyed 1200 undergraduate chemistry students who watched my videos in class; the students who watched the videos scored 30% higher on a short quiz than their classmates who did not see the video. More than 75% of the students said they would like to learn chemistry concepts using these videos. The DanceChemistry videos are broadly disseminated for free on YouTube; this broad distribution enhances the infrastructure for education at secondary schools and provides communities underserved in science with free instructional videos that can improve scientific understanding from a creative viewpoint.

 
 

Session 6


Manipulated Eyewitness Reports Cause Memory Distortion

Kevin Cochran
Psychology and Social Behavior
Co-authors: Daniel Bogart (Psychology and Social Behavior), Elizabeth Loftus (Ph. D., Psychology and Social Behavior)
kjcochra@uci.edu

It is well known that memory can be distorted by misleading information. After witnessing an event, if people are exposed to misleading suggestions, they often incorporate those suggestions into their memories for the event. This process is known as the misinformation effect. However, no previous study has tested whether eyewitnesses’ memories can be distorted by falsified versions of their own memory reports.

The present study examined this question. In an online experiment, subjects first witnessed a slideshow depicting a crime, and answered questions about their memories for the crime. Later, they were asked to review their answers to the memory questions, but some of their responses had been altered; this constituted the misinformation. Finally, subjects answered the same memory questions a second time. We were interested in seeing whether the misinformation that subjects were given would cause their memories to change for the second memory test. Results indicated that being exposed to misleading information about subjects’ own memory reports caused their memories to change to be consistent with the misinformation. For critical items but not control items, subjects’ memory reports at the second memory test had shifted, and this shift occurred in line with the misinformation. Thus, the present study demonstrated that people can be misled about their own memory reports, and that this misinformation can have enduring effects on their memories.

This research has diverse implications. In legal contexts, witnesses are often shown statements that are supposed to be summaries of their reports. If these statements contain errors, whether due to misunderstandings between witnesses and police or due to deliberate manipulation, merely reading through one’s own “witness statement” could cause one’s memory to be contaminated. The present findings could also be important in other domains. One of the strongest predictors of future behavior in a given domain is memory for past experiences in that domain. For instance, one factor that affects people’s willingness to go to the doctor is their previous experiences with doctors. If people can be made to remember unpleasant doctor’s visits as more positive and less negative than they truly were, they may be more willing to seek care in the future. Finally, the present findings might be applied to educational or workplace settings. If students or employees can be made to remember stressful tasks as less difficult and more enjoyable, they should be more willing and better able to accomplish similarly difficult tasks in the future.


Promoting a Comprehensive Approach to Ensuring Accessibility to Gender Incongruent Healthcare

Jordan Aiken
Interdisciplinary/Gateway and Law Programs
aikenj@lawnet.uci.edu

Overview:
Many transgender and gender nonconforming (GNC) individuals are denied access to preventive healthcare and services due to the gender marker on file with their insurance provider/payer. For example, transgender men (female to male), who might have a male gender marker on file, are frequently denied coverage for Pap smears, prenatal care, breast exams, and treatment of menstrual disorders even though they physically need this care. Similarly, transgender women are often denied coverage for prostate exams. These are examples of gender incongruent healthcare, which occurs when the treatment required for one’s body does not match traditional notions of the care needed for that gender. While law and policy on this topic have improved recently, several barriers remain that separate transgender and GNC individuals from essential comprehensive healthcare.

Methods:
Twenty transgender and GNC individuals from across the United States and Canada were interviewed for this study. Their individual experiences informed the study’s analysis and illustrate the healthcare barriers that transgender and GNC individuals face. A review of state and federal statutes, case law, the Affordable Care Act, Medicaid regulations, Veterans Administration directives, insurance coding procedures, and American Medical Association policies on this issue informs the present analysis.

Conclusions:
After studying the laws, policies, regulations and directives on this topic, I propose a comprehensive approach to policy changes that would include medical provider and staff cultural competency trainings, insurance coding system reform, coordinated care among providers, bolstered complaint mechanisms and new intake procedures.

This work affects:
There are approximately 700,000 transgender individuals in the US. Transgender and GNC individuals face outright denials of healthcare coverage necessary for their bodies, and some forego healthcare altogether rather than face a hostile or unwelcoming environment. This work helps to reverse these trends by informing policy that would ensure access to comprehensive care.

Why you should care:
The denial of appropriate healthcare affects not only transgender and GNC individuals, but can also negatively affect loved ones, sexual partners, family, friends, and public health at large. The lack of proper healthcare has led to underdiagnosis of breast, cervical, and prostate cancers and to increased disease transmission and progression.

Implications of my study:
This study will inform policy changes that foster implementation of the laws, directives and regulations that will compel healthcare providers, insurers/payers, legal/governmental bodies and transgender and GNC patients to work together to overcome barriers and ensure access to comprehensive healthcare.


The Role of Maltreatment in Adolescents’ Development of Emotional Competence

Helen Milojevich
Psychology and Social Behavior
helen.milojevich@uci.edu

Background: The costs of child maltreatment are staggering, with deleterious effects on children emerging in every developmental domain, including cognitive, psychological, and social functioning. The mechanisms underlying these effects are complex and multifaceted, shaped not only by characteristics within children but also by their experiences during and after exposure to maltreatment. One mechanism that has been relatively understudied, but has the potential to interact with children’s experiences to affect a range of outcomes, is emotional competence: the ability to express, understand, and regulate one’s emotions. Emotional competence is associated with psychological well-being, behavioral functioning, delinquency, and physical health in nonmaltreated samples as well as in samples of young maltreated children. Given that adolescence is a unique time, cognitively and emotionally, it may be a period when maltreatment experiences play a critical role in shaping emotional competence.

Methods: At Time 1, two samples of youth, ages 10-17, one with a substantiated history of maltreatment and one with no reported history of maltreatment, completed a battery of measures, including those that tap emotional competence, behavioral functioning, and cognitive abilities. Caregivers completed questionnaires regarding adolescents’ emotional and behavioral functioning. At Time 2 (delay = 6 months), adolescents and their caregivers completed questionnaires regarding the adolescents’ behavioral functioning and, for the maltreated adolescents, placement stability. Information about the adolescents’ family, maltreatment, and background was collected at both times via case files. Data collection for the maltreated sample has concluded; testing for the nonmaltreated sample is still underway and will be completed shortly.

Implications: The current study has the potential to advance theoretical understanding and to inform treatment and intervention for highly vulnerable adolescent populations. Theoretically, the study will elucidate the mechanisms underlying the development of emotional competence and provide a novel investigation into its trajectory by following maltreated adolescents over multiple time points and determining how their emotional competence predicts their behavioral and psychological functioning across development. Practically, findings from the study will be used to create a standardized assessment battery that residential staff and emergency care providers can utilize when drafting treatment and intervention plans. The current study thus stands to improve both our understanding of the mechanisms underlying emotional competence development and our ability to intervene with high-risk populations that tend to demonstrate extreme deficits in emotional functioning.


Frederick Law Olmsted and a New School of Ethnographic Planning

Hope Pollard
Urban and Regional Planning M.U.R.P.
pollardh@uci.edu

In 1961, over 100 years after Frederick Law Olmsted drew up his plan for Central Park, housing activist Jane Jacobs wrote a scalding narrative of the planning profession. In it, she denounced the profession’s top-down approach of implementing plans based on theory rather than on actual knowledge of how cities function. She called out specific housing projects, such as Morningside Heights in New York City, as centers for vice and degeneration despite the valiant efforts of planners to enforce their theoretical concepts on the area. Although she spoke primarily about the process of urban renewal underway in her time, one that included the mass leveling of communities marked by planners as “slums,” her attacks also pointed to the history of the planning profession. Rewind to 1858, the year Frederick Law Olmsted and his partner Calvert Vaux were chosen as the winners of a competition to design a central park for New York City. Olmsted had spent the first 30 years of his life bouncing between informal schools in the countryside and searching for a calling among clerking, shipping, scientific farming, and a form of cultural journalism. Due to sumac poisoning that left his eyes weak, he escaped the formal study that might have left him lumped together with the planners Jacobs would later denounce. Olmsted did not study cities through research behind closed doors. He learned about the way they work, and about the lives of the people in them, through his own experiences. As a result, he achieved a legacy of success and overwhelming popularity. He did not designate simple patches of grass as open space (as planners would later attempt to do in low-income housing projects); he designed picturesque landscapes to heal the souls of all classes. He knew what people wanted because he worked with them and spoke to them. In this way, he was an early ethnographic researcher. His legacy, coupled with Jane Jacobs’s criticism a century after his first major work, serves as evidence that ethnographic research should be a requirement of all proposed city plans and all professional planning education programs. Olmsted’s work hints that if such a requirement were implemented, we might see much healthier and more vibrant communities and cities.


Now You See Him, Now You Don’t: Choice Blindness and Eyewitness Identification

Rachel Greenspan
Psychology and Social Behavior
Co-authors: Kevin Cochran (UCI), Dr. Elizabeth Loftus (UCI)
greenspr@uci.edu

Most people believe that they make decisions through an uncomplicated, straightforward process. When making a choice, they look at their options and simply pick the one they prefer most. However, research shows that insight into one’s decision making is poor. Recent studies even indicate that memory for recently made choices is so malleable that people can be led to remember choosing an option they actually did not. In one study participants were asked to choose which of two faces they found more attractive. Afterwards, they were shown the face they picked and asked to explain their choice. Unbeknownst to participants, researchers used a card trick such that the face participants viewed was actually the face they had not chosen. Nearly 75% of these participants did not notice the switch and many participants even created reasons why they chose this previously unselected face.

In a laboratory setting, findings such as these give researchers insight into memory and decision-making processes. When applied beyond the laboratory, however, these factors grow into critical issues for the legal system. When an eyewitness picks someone from a lineup, it is common for police to make a written record of this response. Later, the witness may view this record, but what happens if the officer mistakenly mis-records the witness’s response? Extending from previous literature, it is possible, even likely, that the witness would not notice the error and might incorporate the misinformation into their existing memory. This process is investigated in the current study.

In this study, participants watched a staged robbery and identified the culprit from a lineup. Later, they were shown a photograph of the man they chose from the lineup and explained their reasons for choosing this person. However, for some participants, the picture shown was not the person they chose; instead, it was a different photograph from the lineup. Results show that over half of participants failed to notice that the picture they were shown was not the one they originally selected. Moreover, this misinformation affected later memory. When later asked to identify the original culprit from a lineup, participants who had earlier been shown the incorrect photograph were more likely to change their lineup decision, often choosing the other picture they had been shown. Thus, not only did the misinformation impair the witnesses’ original memory, making it more difficult to convict the true perpetrator, it also falsely implicated an innocent suspect.


Organizational Engagement with Regulatory Change: Analyzing the Impact of Media and Status

Harsh Jha
Management
jhah@uci.edu

One of the key issues in public policy research is the role played by non-state organizations in policy formulation. How these organizations engage, or fail to engage, with the regulatory change process is a critical question because the level and pattern of engagement may affect the influence these organizations have on policy formulation. Policy formulation may be the prerogative of the state, but almost all policy formulation in legislative democracies includes a public feedback process with relevant organizations and is also susceptible to behind-the-scenes lobbying by special interest groups. In this paper I explore the issue of organizational engagement with the regulatory change process through a study of the antecedents of the Legal Services Act (LSA 2007), passed by the British Parliament for England and Wales. LSA 2007 was a wide-ranging, market-based reform that fundamentally challenged the three key pillars of the legal profession: it removed the entry barriers preventing non-lawyers from practicing law, allowed non-lawyers to control and own law firms, and removed professional self-regulation by creating a non-professional regulatory body. The fact that such a highly stable field (the last major regulatory change to the legal profession in England occurred in 1848) underwent such drastic change makes this case a compelling and vivid instance for exploring how organizations engage with regulatory change. In this paper I analyze organizational engagement with three critical events preceding LSA 2007 (the release of a discussion paper, a royal commission, and a joint parliamentary commission) and argue that media coverage and organizational status are key drivers of organizational engagement. I use a mix of quantitative and qualitative research methods with archival and interview data. Unlike most prior research, which treated media as an undifferentiated aggregate, I analyze the effect of media coverage across multiple distinct channels: mass, business, and trade media. The results suggest that the level of media coverage of critical events has a differential effect on organizational engagement. I find that mass media affects the engagement of small law firms, NGOs, and policy think tanks; business media affects big law firms and corporate organizations; and trade media affects legal professionals. Further, I find that high-status organizations publicly engage in the later-stage events, whereas low-status organizations engage in the early-stage events. These findings present a novel model for explaining organizational influence on policy formulation.


Cumulative Inequalities in Criminal Justice Institutions: Racial & Geographic Biases

Nick Petersen
Criminology
npeterse@uci.edu

Research has uncovered racial disparities in the implementation of capital punishment. However, the death penalty literature primarily focuses on the final stages of capital prosecution, paying little attention to biases at earlier stages in the process. As such, we know that race shapes death penalty outcomes, but we do not know how or why. To understand the locus of racial bias within the death penalty system I examine the chain of events producing death sentences. Analyzing data on several thousand homicide victims and defendants from Los Angeles County (California), I answer two overarching research questions: (1) to what extent are racial and geographic biases present in Los Angeles County’s death penalty system?; and (2) do racial and geographic disparities accumulate across multiple stages of the criminal justice system? Multilevel regressions indicate that individual and neighborhood-level demographics influence the funneling of cases through Los Angeles County’s death penalty system, shaping police and prosecutorial responses to homicide. Moreover, racial and geographic biases accumulate across stages of the criminal justice system, producing stark disparities at later stages in the process. Results contribute to ongoing debates about capital punishment and organizational theories of inequality.

 
 

Session 7


Cancer risk among naval shipyard workers exposed to asbestos and welding fumes

Citadel Cabasag
Epidemiology
Co-authors: Hoda Anton-Culver, PhD, Professor and Chair, Dept. of Epidemiology, Director, Genetic Epidemiology Research Institute
Agyrios Ziogas, Ph.D., Associate Adjunct Professor, Dept. of Epidemiology
ccabasag@uci.edu

Employment records are used to form historical occupational cohorts to study chronic health outcomes, such as cancer, resulting from occupational exposures. However, employment records are not collected with the intention of being used in epidemiological studies, and therefore a number of variables important for association studies need to be reconstructed from other sources. The objective of this study is to generate an occupational cohort with exposure information and assess etiological factors for various health outcomes, including cancer. The Long Beach Naval Shipyard (LBNSY) employed approximately 44,000 workers from 1978 to 1985. The LBNSY cohort consists of 13,935 records that have been digitized. In addition, a subsample of 1,763 workers participated in a survey, which we are using to validate some variables. In our study, we use public databases to examine occupational exposures at the shipyard over a long period beginning in 1978 and examine morbidity and mortality outcomes with a focus on cancer incidence. We used information from employment rosters and the survey to determine job exposures based on job titles. The survey contained assessments of 23 different occupational exposures at the LBNSY. Each job title was standardized based on the 1983 Standard Occupational Classification. Exposures to various occupational agents were assigned for each shop (job location) based on the information from the survey and on the occupations in each shop. Furthermore, we validated a modified version of a method proposed in 1983, originally applied to a cohort of Florida phosphate workers, to predict the year of birth of individuals in our cohort with unknown date of birth using their Social Security Numbers. After validation, the method was applied to individuals with missing age. We also linked the employment records with the California Cancer Registry and death files to determine cancer incidence and mortality. There are 3,690 deaths and 1,899 observed cancer cases in the cohort. In conclusion, occupational cohorts are an important source of information for evaluating the long-term effects of exposure to multiple occupational agents. Chronic diseases such as cardiovascular disease and cancer take years after exposure to manifest. Historical occupational cohorts are economical and make it possible to examine diseases with long latency periods. However, the employment records used in these cohorts were not collected for research purposes and have several limitations. Our study presents different approaches for adding more informative data to an occupational cohort to assess etiological factors for chronic health outcomes.


Heart’s natural scaffolding influences human stem cell-derived heart muscle cell maturation

Ashley Fong
Biological Sciences
Co-authors: Mónica Romero López (Biomedical Engineering, UCI), Steven George (Biomedical Engineering, Washington University, St. Louis), Christopher Hughes (Molecular Biology and Biochemistry, UCI)
ahfong@uci.edu

Cellular therapies using heart muscle cells (HMC) created from stem cells hold great promise for treating heart disease, the number one cause of death in the US. There are many advantages to using stem cells, including the ability to convert patients’ own cells into stem cells, access to an unlimited number of cells, and the capacity of stem cells to become almost any cell type in the human body, including heart muscle cells. Unfortunately, it is well established that stem cell-derived HMC are immature, with characteristics of fetal heart cells rather than adult heart cells. These characteristics include disorganized structures, spontaneous beating, improper signaling, and different responses to pharmaceutical drugs compared to mature HMC. Consequently, it is considered less than ideal to use immature stem cell-derived HMC for cellular therapies due to safety concerns arising from the cells’ immature state. It is therefore necessary to understand what influences HMC maturation, and to find strategies to mature stem cell-derived heart muscle cells so that they may be effectively and safely used for transplantation. We are exploring the possibility that a tissue’s natural scaffold and blood vessel cells drive HMC maturation and behavior in the same way they drive maturation of organs such as the liver and pancreas. We have generated heart scaffolding from cow heart tissue by removing all cells from the tissue, and we find that the heart scaffolding affects stem cell-derived HMC maturation, based on enhanced expression of mature HMC markers. In addition, we have seen increased levels of maturation when the stem cell-derived HMC are seeded into a three-dimensional heart scaffold compared to two dimensions. We are now studying how the heart scaffold and blood vessels work in concert to affect HMC maturation. Taken together, these studies will help develop mature stem cell-derived HMC that can be effectively used for drug screening and safely used for cellular therapies to treat heart disease. Ultimately, the results of this research will help advance the tools necessary to treat the millions of current and future heart disease patients.


The use of diffuse optical spectroscopy to measure physiological changes in fat tissue with weight loss

Goutham Ganesan
Pharmacological Sciences
Co-authors: Robert V. Warren (Biomedical Engineering), Pietro Galassetti (Pharmacology), Shaista Malik (Cardiology), Bruce J. Tromberg (Beckman Laser Institute)
gganesan@uci.edu

Obesity, given its high prevalence, constitutes one of the gravest threats to public health in the United States today. While it is well known that obesity is a risk factor for the development of diabetes and heart disease, the precise mechanisms by which it leads to these complications are not fully understood. Recent evidence points to fat tissue itself as a potential culprit in this process. Some hypothesize that as fat cells become increasingly large with overall weight gain, they also become less efficient at utilizing oxygen, and that this can lead to hypoxia and inflammation. However, there are currently no tools available to study fat tissue in humans non-invasively. We therefore hypothesized that diffuse optical spectroscopy and imaging (DOSI) could be used to measure changes in the structure and function of subcutaneous fat tissue. DOSI is a portable device that measures the near-infrared scattering and absorption of tissue using a probe placed on the skin. Generally, scattering properties reflect tissue structure, whereas absorption relates to tissue blood supply, oxygenation, and water.

To test our hypothesis, we recruited participants from a medically supervised weight loss program. We used DOSI to measure abdominal fat tissue at several points over the course of weight loss. Additionally, we used ultrasound to characterize the thickness of the fat tissue, and we also measured weight, blood pressure, and abdominal circumference.

Results: A total of 8 subjects have participated in this study, and they have experienced an average weight loss of 11.3% of their starting weight. We found that weight loss is associated with significant changes in the way that light is scattered in fat tissue, indicating a decreased size and increased density of scattering events. Furthermore, there is an elevated concentration of hemoglobin and water with weight loss.

Conclusion: It is likely that the changes detected using DOSI reflect both the reduction in fat cell size and increase in fat metabolism that are known to occur with weight loss. While at this time we have no histological evidence for these processes, they have been observed by others using different techniques. These findings are important because they suggest that DOSI could be used to non-invasively and longitudinally assess the effects of treatments on the risk of complications from obesity. Ultimately, this might be useful in both the assessment and screening of treatments for obesity and diabetes for effects on fat tissue function.


Influence of Air Pollution on Infant Bronchiolitis Hospitalization

Mariam Girguis
Public Health
Co-authors: Roxana Khalili, Scott Bartell, Verónica Vieira
mariamg@uci.edu

Particulate matter less than 2.5 µm in diameter (PM2.5) can alter immune function, making individuals, especially infants, more susceptible to illness. Outdoor PM2.5 is a mixture of particles from vehicle exhaust emissions, industrial activities, and coal and wood burning, and levels are typically highest in the winter. The goal of this work is to determine whether short-term exposure to PM2.5 is associated with infant bronchiolitis, the leading cause of hospitalization in the first year of life, among all infants born in Massachusetts between 2001 and 2009. Average daily PM2.5 concentrations for all of Massachusetts were modeled using satellite remote sensing data. We analyzed 11,805 hospitalization records of infant bronchiolitis using a case-crossover study design in which cases serve as their own controls. With this design, only variables that change over the short term, such as temperature and humidity, need to be considered in the analyses along with exposure. Results indicate that for each additional 10 µg/m³ of PM2.5, infants were 6 to 28% more likely to be hospitalized for bronchiolitis, depending on gestational age and seasonality. Infants born prematurely, between 32 and 37 weeks of gestation, were more likely to be hospitalized for bronchiolitis than infants born after 37 weeks of gestation. The influence of PM2.5 on bronchiolitis risk was stronger during the winter months. Understanding the role of PM2.5 in increased susceptibility to illness is important for adopting preventive measures. This work can provide evidence to support regulations when determining safe levels of PM2.5 for infants.
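Effect estimates of this kind are typically modeled as log-linear in exposure, in which case a per-10 µg/m³ figure can be rescaled to other increments by exponentiation. A small sketch using the 6-28% range reported above (the 5 µg/m³ increment is an arbitrary example, and log-linearity is an assumption, not a stated feature of this study):

```python
# Rescale a log-linear PM2.5 effect estimate to a different increment.
# The 1.06-1.28 odds ratios per 10 ug/m^3 are the range reported above.
import math

def rescale_or(or_per_10, increment):
    """Odds ratio for `increment` ug/m^3, assuming a log-linear response."""
    return math.exp(math.log(or_per_10) * increment / 10.0)

for or10 in (1.06, 1.28):
    print(f"OR {or10} per 10 ug/m^3 -> {rescale_or(or10, 5.0):.3f} per 5 ug/m^3")
```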


Cell-type Specific Tracing of Subcortical Inputs to V1 from the Hypothalamus in Mice

Georgina Lean
Psychology-Cognitive Neuroscience
Co-author: David Lyon (lab director and advisor)
leang@uci.edu

Determining the detailed microcircuitry of connections in the brain is a critical step toward understanding the functional mechanisms in healthy and diseased tissue. By studying the cortical projections in an animal model, we can begin to define the complex pathways that are involved in human cognition. Additionally, we can develop techniques to visualize these pathways and ultimately create a map of the cortical projections, which in turn can facilitate the management and treatment of a myriad of neurological conditions. The goal of our project is to characterize one such pathway (hypothalamus to visual cortex).

The primary visual cortex (V1) is the region of cortical neurons that begins to interpret sensory inputs from the retina. Previous studies have suggested an inhibitory role for projections from the lateral hypothalamus to the visual cortex in the monkey, but no additional work has been done to characterize this connection or demonstrate its presence in other animal models. Our preliminary results show a direct input to V1 from the lateral hypothalamus in the mouse, specifically from the lateral preoptic area. This region has been implicated in thermoregulation and sleep cycles in rats, as well as in osmosensitivity and thirst regulation. Characterization of this projection may shed light on appetite and sleep stimulation and may contribute to our understanding of homeostatic regulation.

Our research will examine the subcortical-cortical links using retrograde labeling from a targeted rabies virus injected into V1. In combination with specially designed helper viruses, modifications to the rabies virus limit initial infection to inhibitory or excitatory neurons and retrograde spread only to presynaptically connected neurons. The rabies expresses a red fluorescent reporter, allowing cell bodies, dendrites, and axons of all connected cells to be identified clearly in their place of origin. We will determine which specific subregions contain these labeled cell bodies to identify additional subcortical projections to V1. This will help to identify candidate behaviors that may be regulated by the pathway. Key to the tracing strategy are the different helper viruses that will be utilized to determine the inhibitory or excitatory nature of the pathway. Since the brain consists primarily of excitatory neurons, the ability to independently target inputs to excitatory or inhibitory V1 neurons will allow us to better determine whether this pathway plays a role in inhibition as predicted by earlier work. Such interactions with inhibitory neurons could serve an important modulatory role on basic visual processes and visually related behaviors.


Simulation of Genetic Screening via Online Dating Applications in Genetically Isolated Populations

John Schomberg
Medicine
jschombe@uci.edu

Genetically isolated populations have a higher prevalence of a variety of genetic diseases. In Israel, and in the Ashkenazi Jewish population specifically, the burden of rare genetic diseases is heavy. Twenty years ago, Israel introduced a program of comprehensive premarital and prenatal screening, resulting in a reduction of genetic diseases such as Tay-Sachs. Prevention of these rare disease variants depends on couples who are both carriers of a disease allele choosing not to have children, or selectively aborting fetuses that carry both mutant alleles for a rare disease. While these methods have been effective in reducing the prevalence of cases, they have not addressed the persistence of these alleles in the population, nor the health impact of selective abortion. Current technology offers genetically isolated populations the opportunity to apply additional screening that would reduce the prevalence of rare disease cases and increase the speed at which a mutant allele is removed from the population, all without necessitating an intervention that endangers maternal health and negatively impacts a woman's reproductive lifespan.

Online dating is a method of courtship that is gaining popularity. In the genetically isolated country of Israel, approximately 20% of the unmarried adult population has pursued courtship through online dating sites. Genetic screening tools are beginning to be offered alongside dating services. The potential impact of these screening applications can be tested through time-forward population simulation.

Methods: A time-forward population simulation model was used for this study. The simulation used the published prevalence of Gaucher disease carriers to create simulated populations, and assessed the impact of such an application on rare disease prevalence under varying assumptions about user-base size, carrier fitness, online dating success rate, rate of admixture, screening error rate, and the proportion of the population using traditional dating and screening.
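As a rough illustration of what such a time-forward simulation can look like (a sketch, not the authors' actual model), the code below tracks the frequency of a recessive allele across generations when some fraction of couples match through a screening-enabled dating app and screened carrier-carrier matches dissolve before reproducing. All parameter values are invented placeholders.

    import numpy as np

    rng = np.random.default_rng(0)

    def next_generation(q, n_couples=100_000, enrolled=0.5, success=0.5):
        """Advance allele frequency q by one generation under app-based
        premarital screening. Genotypes are coded as the number of
        disease alleles carried (0, 1, or 2)."""
        hw = [(1 - q) ** 2, 2 * q * (1 - q), q ** 2]   # Hardy-Weinberg proportions
        couples = rng.choice([0, 1, 2], size=(n_couples, 2), p=hw)
        # A couple counts as "screened" if it formed through the app.
        screened = rng.random(n_couples) < enrolled * success
        carrier_pair = (couples[:, 0] == 1) & (couples[:, 1] == 1)
        parents = couples[~(screened & carrier_pair)]
        # Each parent transmits the disease allele with probability geno/2.
        transmitted = rng.random(parents.shape) < parents / 2
        return transmitted.mean()                       # new allele frequency

    q = 0.02   # invented starting allele frequency
    for gen in range(5):
        q = next_generation(q)
        print(gen, round(q, 5))

Since carrier frequency under Hardy-Weinberg is roughly 2q(1-q), the per-generation changes in such a model are tiny, consistent with the fractions of a percent reported below.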

Results: Without the online dating intervention, carrier frequency decreased by 0.002-0.004% per generation. Assuming 50% of the dating population enrolled and 50% achieved successful courtship, carrier frequency would decrease by 0.004-0.005% per generation.

Conclusion: Current screening methods can be enhanced by genetic screening via online dating. However, the assumptions that 50% of online daters will enroll and that 50% of those daters will successfully complete their courtship are large ones. Nevertheless, this study is the first to highlight the potential impact of a new genetic screening tool.


A Novel Role for Nucleosome Remodeling in Cocaine-Associated Memories

Andre White
Biological Sciences
aowhite@uci.edu

Substance abuse costs the US economy half a trillion dollars annually through crime, lost work productivity, and healthcare. Beyond draining resources from the economy, substance abuse inflicts an exorbitant toll on the lives of millions of Americans. Drug addiction is a particularly persistent disease: only 10% of addicts remain abstinent six months after completing drug rehabilitation. Drug addiction is characterized in part by the strong associations formed between the environment and the rewarding properties of the drugs. These drug associations increase in strength over the course of drug abuse. Eventually, they become so resilient that after months or years of abstinence, drug-associated memories are capable of driving drug cravings and, ultimately, relapse. My research seeks to understand the molecular mechanisms involved in the acquisition of drug-associated memories, with the goal of diminishing the strength of these memories.

Long-term memory formation is known to require gene expression. In the last two decades, the role of gene expression in addiction has led researchers to focus on the epigenetic mechanisms that regulate it. Epigenetics commonly refers to mechanisms that regulate gene expression by altering chromatin structure, independent of changes in DNA sequence. Marcelo Wood's lab, here at UCI, has shown that epigenetic mechanisms play an integral role in regulating the gene expression necessary for the formation and extinction of cocaine-associated memories and cocaine-induced behaviors. However, a major epigenetic mechanism, chromatin remodeling, is capable of generating changes in gene expression but until now has not been studied in the field of drug addiction. In my research, I use two selective genetic manipulations to target and disrupt a key component (BAF53b) of a chromatin remodeling complex (nBAF) and then examine the subsequent effect on cocaine-associated memories and behaviors. Our lab generated genetically modified mice that have reduced BAF53b or a non-functional form of BAF53b. These mice were given cocaine in a distinct environment and their memory for that event was later tested. My research demonstrates that both lines of genetically modified mice have deficits in cocaine-associated memories but not in all cocaine-induced behaviors. These results show for the first time that chromatin remodeling plays a role in the formation of cocaine-associated memories. In addition, they may provide a novel target for therapeutics aimed at combating addictive disorders.

 
 

Session 8


Searching for dark matter with the Fermi gamma-ray telescope

Anna Kwa
Physics Ph.D.
Co-authors: K. Abazajian (UCI), N. Canac (UCI), S. Horiuchi (Virginia Tech), M. Kaplinghat (UCI)
akwa@uci.edu

Approximately 20% of the mass in the universe is composed of familiar, known particles such as protons and electrons. The remaining 80% of the Universe's mass, the so-called 'dark matter', is thought to consist of a new, exotic particle whose existence is inferred through its gravitational effects on visible matter. Dark matter is one of the biggest mysteries that modern physics has yet to solve. Physicists know almost nothing about dark matter because it does not absorb or emit light and is therefore impossible to observe with telescopes… or is it?

Certain classes of well-motivated particle physics models of dark matter predict that annihilations between pairs of dark matter particles can produce high-energy gamma-rays. We analyze data from the Fermi gamma-ray space telescope to search for signs of excess gamma-ray emission in regions where observations of gravitational effects indicate a very high density of dark matter. We focus on observations of the Milky Way center because of its high concentration of dark matter and its proximity. A large part of this analysis involves precise modeling of the background in the region, as the predicted dark matter signal is weak compared to the gamma-ray flux produced by more mundane astrophysical sources (e.g., supernovae and cosmic-ray interactions). Careful background modeling helps us avoid spurious dark matter detections arising from under-subtraction of the background.

We find that there is indeed an excess gamma-ray signal present at the galactic center whose observed properties are remarkably consistent with expectations for a dark matter annihilation signal. Intriguingly, we also detect additional gamma-ray signals that are consistent with what might be produced by secondary interactions following the initial dark matter particles’ annihilation. However, we still have not completely ruled out the possibility that these signals might be produced by less exotic astrophysical processes. Further analysis and modeling of high-energy astrophysical sources will help to determine the true origin of the excess emission. If the galactic center gamma-ray excess is indeed the result of dark matter annihilations, it would shed light on dark matter’s particle properties and have enormous implications for fundamental particle physics.


Ion Transport Through Manganese Oxide Mesorods Reveals Different Charge States

Timothy Plett
Physics
Co-authors: Trevor Gamble (UCI), Eleanor Gillette (University of Maryland), Zuzanna Siwy (UCI)
tplett@uci.edu

Clean, reusable energy is a subject of critical importance to both science and modern culture. With the growing number of portable electronic devices, battery technology has received considerable attention and has seen improvements in lifespan, power, and efficiency. Though these improvements are considerable, the demand for smaller, more powerful batteries continues. Recent discoveries have revealed remarkable possibilities for energy storage in battery materials structured at the nanometer scale. Several materials have been studied, in particular manganese dioxide (MnO2). MnO2 is attractive because it is already used in modern batteries, is abundant, inexpensive, and non-toxic, and has a staggering theoretical energy-storage capacity. Many studies have examined MnO2 nanostructures, testing energy storage for a variety of architectures. Despite this, the nature of ion transport in, through, and around MnO2 at the nanoscale is not well understood. MnO2 is naturally porous. We have therefore designed an experiment to help us understand how well ions move through MnO2, in the hope that it will allow us to further optimize structural design for controlled charging and discharging of batteries.

A useful tool for understanding ion transport at the nanoscale is the synthetic nanopore, a channel with a sub-micron diameter that has been etched or drilled into a thin membrane. Properties of the ion current passing through a nanopore, that is, the flow of ions through the channel, can reveal characteristics of the pore's structure and surface charge.

In this study, we utilized synthetic nanopores to perform experiments on MnO2 to determine how easily it allows ions to move through the material, i.e., its conductivity, in different charge states. The measured ion current carried information on the pores in MnO2 and on its surface charge. Membranes containing many nanopores, as well as membranes containing only one nanopore, were coated with gold, and MnO2 'nanowires' were then deposited. The gold layer remained in direct contact with the MnO2 wires, which permitted charging and discharging of the wires with lithium ions, as in conventional rechargeable batteries. Measurements of ion current through the wires after deposition, after charging, and after discharging revealed that each charge state had a different conductivity. Varying the concentration of the KCl electrolyte in single-nanowire studies changed the response of the MnO2, indicating that the pores in the MnO2 nanowire had changed as a result of the lithium charging and discharging.


Clean Solar-Driven Hydrogen Production Using Yellow Color Car Paint

Vineet Nair
Materials Science and Engineering
Co-authors: Craig L. Perkins (National Renewable Energy Laboratory), Matt Law (Department of Chemistry, UCI)
vnair@uci.edu

My research involves studying a bright yellow compound called bismuth vanadium oxide (BiVO4). The yellow pigment once used to paint cars was lead-based (lead oxide, to be specific), but due to the toxicity of lead it was replaced by BiVO4, which is earth-abundant and non-toxic. My work involves using this material to split water molecules into hydrogen and oxygen simply by absorbing sunlight.

As part of my PhD, I have been optimizing the electronic and catalytic properties of BiVO4. I can fabricate devices as large as 3 x 3″ by simply spin casting a viscous solution of bismuth and vanadium salts and heating the as-cast film to 475 °C for 15 minutes. The entire process, from making the ink to the final device, is completed in little more than an hour. These devices have demonstrated record performance (over 5%) for solar-driven water splitting, owing to their high catalytic activity and superior electronic properties compared with those made by any other lab in the world. This is because I am able to control the way the bismuth and vanadium atoms in these films bond to each other so that they mimic the catalyst in plants that is responsible for photosynthesis.

This work will pave the way for a new method to rationally develop cheap, high-efficiency systems that generate hydrogen directly from sunlight.


It’s the little things: ultra-faint satellites around isolated dwarf galaxies

Coral Wheeler
Physics & Astronomy
Co-authors: James Bullock, UCI. Jose Onorbe, MPIA. Mike Boylan-Kolchin, UMD.
crwheele@uci.edu

In the currently favored cosmological paradigm, galaxies are embedded within massive collapsed pockets, or "halos," of a mysterious substance known as dark matter. According to this theory, dark matter is required for regular matter to overcome the early expansion of the Universe and to collapse into the galaxies and stars that are required for life as we know it.

These dark halos that permeate the known Universe are themselves predicted to be filled with smaller dark matter clumps in a hierarchical manner. Observationally verifying the existence of these small clumps is one of the most important goals in modern cosmology.

We expect that the smallest dark halos should be largely free of stars, due to the ambient ionizing background radiation — emitted when the first stars formed — preventing them from accreting gas. But below what mass do we expect all dark halos to be completely dark? What are the masses of the smallest galaxies and can we observe them now or in the near future?

I use ultra-high-resolution hydrodynamic simulations, the highest resolution ever run with realistic models of star formation and of the energy released by star formation, to make predictions aimed at testing the validity of the prevailing paradigm. I predict that orbiting around isolated low-mass galaxies, themselves thousands of times less massive than the Milky Way, we will find ultra-faint satellite galaxies that are only a few thousand times the mass of the sun. These galaxies are lower in mass than the lowest-mass galaxies predicted by many authors, and were able to form only because they managed to form their stars before the ionizing background heated their gas away.

The most massive of these satellites should be visible with current telescopes, and more powerful instruments coming online in the next few years will be able to observe even the faintest of these objects. Verifying the predictions I have made would provide critical support to the prevailing cosmological paradigm, while failing to find these tiny galaxies would call into question much of what we think we know about the origin of the Universe. Either way, my research will have a deep impact on the astrophysical community and on society as a whole, because who hasn't looked up on a dark night, seen the stars, the Milky Way, and the endless emptiness of space, and wanted to know where it all came from?


Harnessing Small Scale Physics with Metamaterials

Robert Joachim
Chemical and Materials Physics / Physics
Co-authors: Peter Taborek (Professor, UCI)
RJoachim@uci.edu

As the technology supporting modern life becomes increasingly complex, we have begun to encounter serious obstacles to improving it. These issues stem not only from a lack of knowledge but also from the physical limitations inherent in the underlying materials. Things can only be made so small, so thin, etc., before small-scale physics effects, phenomena we never experience in our daily lives, become significant. In some cases these effects present a serious stumbling block. For instance, microchip architecture is now so compact that quantum physics must be taken into account. While these physical phenomena can be problematic, we can also harness them both to improve existing technologies and to create entirely new ones. One very promising avenue is simply to engineer new materials with these effects in mind. Such revolutionary materials have been termed "metamaterials".

Our research focuses on a phenomenon known as evanescent heat transfer. This is a unique way in which heat moves between objects which are separated by very small distances, such as those separating microchip transistors. Using theoretical studies as a framework we have manufactured a “metamaterial” consisting of alternating microscopic layers of glass and a common ceramic. This material is designed such that it should experience a drastic enhancement of evanescent heat transfer. This means that over small distances it should emit heat at a greater rate than any existing material. Using our unique experimental apparatus we are probing the flow of heat from this novel material. Better understanding its unique characteristics promises numerous potential applications from improving nanotechnology to devising new ways of transforming heat into electrical energy.

 
 

Session 9


Implementing approximate computing algorithms on stochastic circuits

Seyyed Ahmad Razavi Majomard
Computer Science
Co-author: Nazanin Ghasemian (Shahid Beheshti University)
srazavim@uci.edu

With the advent of new applications of digital systems, such as the Internet of Things, it has become necessary to reduce the power consumption of digital systems while maintaining their reliability. As a result, Approximate Computing (AC) and Stochastic Computing (SC) have recently attracted considerable attention from research centers. Several papers published in recent years demonstrate the benefits of AC and SC, such as reduced power consumption. Note that AC and SC are two different concepts: the first operates at the algorithm level and the second at the circuit level. I will first explain AC, and then describe my proposed approach.

Many digital systems rely on precise computation, which costs time and power. However, such precision is not required in some applications. For example, one of the most famous image compression algorithms is JPEG, which reduces the size of an image at the cost of some degradation in image quality. In such applications, precise computation is not required because the algorithm has its own errors, and by allowing imprecise computation the overall error rate remains tolerable. Hence, using AC, the size and power consumption of digital systems can be reduced.
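As a concrete (hypothetical) illustration of the AC idea, the sketch below implements one textbook approximation, truncating low-order bits before an addition, which shrinks adder hardware at the cost of a small, bounded error; it is an illustration of the technique, not code from this project.

    def approx_add(a, b, k=4):
        """Add two non-negative integers after discarding the k low-order
        bits, a standard approximate-computing trade-off: a smaller adder
        circuit in exchange for an error bounded by 2**(k+1)."""
        mask = ~((1 << k) - 1)
        return (a & mask) + (b & mask)

    a, b = 51234, 98765
    exact, approx = a + b, approx_add(a, b)
    print(exact, approx, abs(exact - approx) / exact)  # relative error ~0.01%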

Until now, research on AC has assumed that it runs on reliable circuits. However, because new chip-manufacturing technologies are not reliable, we want to develop AC algorithms that run on unreliable circuits. For this purpose, we will use SC, a new method of computation on unreliable circuits. (Although SC circuits use the same manufacturing technology as traditional circuits, they require much less area and are much more reliable, albeit with an inherent error in computation.)
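To make the SC side concrete, the following sketch shows the canonical stochastic-computing example (again, a general illustration rather than this project's circuits): a value p in [0, 1] is encoded as a random bitstream whose fraction of 1s equals p, and multiplication then needs only a single AND gate per bit, which is why SC circuits are so small and tolerant of bit errors.

    import random

    rng = random.Random(0)

    def to_stream(p, n):
        """Encode p in [0, 1] as a length-n stochastic bitstream."""
        return [1 if rng.random() < p else 0 for _ in range(n)]

    def from_stream(bits):
        """Decode a bitstream back to a value: the fraction of 1s."""
        return sum(bits) / len(bits)

    n = 10_000
    sa, sb = to_stream(0.6, n), to_stream(0.5, n)
    # A single AND gate multiplies two independent bitstreams:
    product = [x & y for x, y in zip(sa, sb)]
    print(from_stream(product))  # ~0.30; the inherent error shrinks as n grows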

Therefore, by running AC algorithms on SC circuits, which have their own inherent errors, we can reduce power and area while improving reliability. In other words, we will combine the advantages of AC and SC to overcome their respective weaknesses and make these promising methods more practical.


Hackathons: The Intersection of Informal Learning and Civic Engagement

Van Custodio
Informatics
vcustodi@uci.edu

Efforts to expose diverse students to Computer Science (CS) education face limited resources and socioeconomic barriers (Deahl, 2014). In 2013, 57.7% of college students majoring in computing were white and only 14% were women (Zweben & Bizot, 2014), with similar rates of participation in AP CS exams (Ericson, 2013). Special classes and programs have been created to address this disparity, and yet it persists. At the same time, hackathons, competitions focused on developing software in a compressed timeframe, tend to produce products that are not pursued further (Zeid, 2013). If not to develop production software, what purpose do these events serve? In this work, I will explore how hackathons can be used as Informal Learning Environments (ILEs) to augment existing approaches and close the diversity gap in CS education. Participation in these kinds of contests has been shown to increase performance and motivation in the classroom (Zeid, 2013). Civic hackathons, in particular, have the potential to attract diverse participants (DiSalvo et al., 2014). However, these effects have thus far been viewed as relatively incidental to other goals, such as increasing corporate brand recognition or improving access to civic data. In this work, I will explicitly analyze hackathons as ILEs using social and situated learning theories (Bandura & McClelland, 1977; Lave & Wenger, 1991). In particular, this research project asks the following questions:

— How do hackathons contribute to advancing diversity in CS learning? What implications do these contributions have for the design of ILEs focused on computing education?
— What organizations, processes, structures, and artifacts are used by organizers and participants during hackathons? How do these compare to traditional ILEs?
— In what ways do hackathon organizers and participants currently integrate STEM learning during hackathons?
— Can interest in CS and learning of computing skills be improved for under-represented minorities with the addition or alteration of these practices?

This work makes two major contributions. First, by applying known theories of learning to the challenge of organizing hackathons as ILE, I will demonstrate if and how such competitions can increase interest in, access to, and confidence in computer science for under-represented minorities. Second, by studying learning in the context of hackathons, I will refine theories of social, situated, and informal learning. The outcomes of this work can be used to promote hackathons and other ILEs for CS education as well as to provide an evidence-based model for organizing such events.


Source Estimation for EEG Signals: A Statistical Approach

Yuxiao Wang
Statistics
yuxiaow1@uci.edu

Electroencephalography (EEG) is widely used to study the dynamics of the human brain because of its relatively high temporal resolution (on the order of milliseconds). EEGs are indirect measurements of neuronal sources, and estimating the underlying sources is challenging because the inverse problem is ill-posed. EEGs are typically modeled as a linear mixing of the underlying sources. Here, we consider source modeling and estimation for multi-channel EEG data recorded over multiple trials. We propose parametric models to characterize the latent source signals and develop methods for estimating the processes that drive the sources, instead of merely recovering the source signals. Moreover, we develop metrics for connectivity between channels through latent sources by studying the properties of the estimated mixing matrix. Our estimation procedure pools information from all trials using a two-stage approach: first, we apply the second-order blind identification (SOBI) method to estimate the mixing matrix; second, we estimate the parameters of the latent sources using maximum likelihood. Our methods also impose regularization to ensure sparsity. The proposed methods have been evaluated on both simulated data and EEG data obtained from a motor learning study.
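To illustrate the forward model and the flavor of the first estimation stage: the observations are modeled as x(t) = A s(t), with A the mixing matrix and s(t) the latent sources. The sketch below implements AMUSE, the one-lag special case of the SOBI family (full SOBI jointly diagonalizes covariances at many lags), on an invented toy mixture of AR(2) sources; it is a sketch of the idea, not the authors' procedure.

    import numpy as np

    def amuse(X, tau=1):
        """One-lag second-order blind identification (AMUSE): whiten the
        channels, then diagonalize a symmetrized lagged covariance to
        recover sources that are uncorrelated at lag tau."""
        X = X - X.mean(axis=1, keepdims=True)
        C0 = X @ X.T / X.shape[1]
        d, E = np.linalg.eigh(C0)
        W = E @ np.diag(d ** -0.5) @ E.T          # whitening matrix
        Z = W @ X
        Ct = Z[:, :-tau] @ Z[:, tau:].T / (Z.shape[1] - tau)
        M = (Ct + Ct.T) / 2
        _, U = np.linalg.eigh(M)
        return np.linalg.pinv(U.T @ W), U.T @ Z   # (mixing estimate, sources)

    # Toy data: two latent AR(2) sources mixed into three "channels".
    rng = np.random.default_rng(0)
    T = 5000
    S = np.zeros((2, T))
    for t in range(2, T):
        S[0, t] = 1.5 * S[0, t-1] - 0.75 * S[0, t-2] + rng.normal()
        S[1, t] = 0.2 * S[1, t-1] + 0.5 * S[1, t-2] + rng.normal()
    A = rng.normal(size=(3, 2))
    X = A @ S + 0.1 * rng.normal(size=(3, T))
    A_hat, S_hat = amuse(X)   # a second stage would fit AR models to S_hat by ML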


Modeling Human Behavior from Location Based Activity Data

Moshe Lichman
Computer Engineering Ph.D.
lichman@gmail.com

Abstract:
The increasing availability of location-based data sets opens up new ways to analyze data and extract valuable insights on population mobility patterns. Data such as location-based social media and taxi locations (collected by GPS on each taxi) are great new sources for such studies. However, in many cases the amount of data is very limited, and often we have very little information, if any at all, on individuals within the population. In our work, we focus on a type of data where each spatio-temporal data point, a point for which we have information on both space and time, can be associated with a particular user. We show that by using such data we can model an individual's mobility patterns in highly dense urban areas. Moreover, our work introduces a novel way of overcoming the problem of "too little data": by learning the structure of the entire population, we borrow methods widely used in linguistics that allow us to project a common pattern onto individuals with "too little data". We show that our results exceed those of current methods, not only for "too little data" individuals but in general as well. This allows us to build better predictive models of population behavior than previously possible.
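The abstract does not spell out the estimator, but the linguistics-inspired idea it describes is analogous to smoothing in language models. A minimal sketch of one such scheme, Jelinek-Mercer-style interpolation with invented numbers, looks like this:

    import numpy as np

    def smoothed_profile(user_counts, population_counts, lam=0.8):
        """Blend a sparse individual's location distribution with the
        population-level distribution, much as language models back off
        from a sparse context to a global model. In practice lam would
        grow with the amount of data observed for the user."""
        total = user_counts.sum()
        user = user_counts / total if total > 0 else np.zeros_like(population_counts)
        pop = population_counts / population_counts.sum()
        return lam * user + (1 - lam) * pop

    # Five locations; this user has only three observed visits.
    user_counts = np.array([2.0, 1.0, 0.0, 0.0, 0.0])
    population_counts = np.array([500.0, 300.0, 100.0, 60.0, 40.0])
    print(smoothed_profile(user_counts, population_counts))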

Research Significance:
Policy planning officials, whether part of a public or private organization, have always needed information in order to make important policy decisions. In urban planning, such information includes items such as population locations and resource usage. In the past, such information was gathered mostly with polls. Nowadays, more and more decisions are made based on information gathered by automatic sensors. Information derived from traffic sensors and cell phone usage can be used to identify population mobility patterns. By learning such patterns, decision makers can plan policies that answer the population's needs, whether by creating a better public transportation system or a better resource distribution system. Our work is a complementary, yet necessary, part of that field. By creating better, more accurate models that can predict and understand population mobility patterns, we improve our ability to extract valuable insights from a vast amount of data, insights that are crucial to every policy planning process. With these tools, we will vastly increase our understanding of how best to build the cities of the future.


What can we learn from the travel pattern of car-less households?

Suman Mitra
Transportation Sciences
skmitra@uci.edu

Planning for more sustainable travel in California: Mobility is an important prerequisite for equal participation in society and the satisfaction of basic human needs. The capacity to undertake most social activities depends on mobility, yet mobility is unevenly distributed across social and geographical boundaries. Compared to mobile households, mobility-impaired households are at a disadvantage for employment, education, and other essential opportunities. Many households in the United States are not able to access the benefits of transportation services due to various limitations: some members cannot drive because of a disability or medical condition, some cannot afford a car, and some have no access to transit services. People with physical disabilities are most at risk, but elderly households are increasingly at risk as well. Approximately 10.5 million US households, or 9%, do not own cars (2008-12 American Community Survey). Unfortunately, our knowledge of car-less households is lacking, as is research on their predicaments.

The focus of my study, therefore, is to contribute to the growing interest in social justice issues in urban transportation planning by examining the transportation needs of car-less households in California, based on the 2012 California Household Travel Survey (CHTS). These households, which often seem forgotten in transportation policy discussions, can be divided into two groups: involuntary and voluntary car-less households. Using discrete choice models, I analyze the characteristics of voluntary and involuntary car-less households and their travel behavior as recorded in the CHTS travel diaries. In addition, I explore the degree of choice available to car-less households and the wider impacts their transportation choices have on their lives.
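As a hypothetical illustration of the discrete-choice setup (variable names and data are invented; the actual analysis uses CHTS variables), a binary logit for whether a car-less household is voluntarily car-less might be fit like this:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 1000
    income = rng.normal(50, 15, n)         # household income, $1000s (invented)
    near_transit = rng.integers(0, 2, n)   # lives near transit, 0/1 (invented)
    # Simulate "voluntarily car-less" outcomes from a known logit model.
    logit_p = -3.0 + 0.03 * income + 1.2 * near_transit
    voluntary = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

    X = sm.add_constant(np.column_stack([income, near_transit]))
    fit = sm.Logit(voluntary, X).fit(disp=0)
    print(fit.params)   # recovers the coefficients up to sampling noise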

The study shows that, given the well-documented link between income and car ownership, most households with no cars are in that situation because they cannot afford a car. Households that are forced into being car-less face restrictions on their mobility, which in turn negatively affect their participation in life activities and their well-being. However, households that voluntarily choose to live without cars do not face the same access restrictions and have more opportunities for jobs, social connections, access to care, and entertainment.

The findings of this study have the potential to contribute to a better understanding of individuals' unmet transportation needs due to being car-less. Consequently, the study will allow policymakers to better understand the hardships of car-less households, and help formulate appropriate policies that will provide better mobility to this disadvantaged group and promote social equity in California.


An Exploratory Data Analysis of EEGs Time Series: A Functional Boxplots Approach

Duy Ngo
Statistics
Co-authors: Dr. Hernando Ombao (UCI). Dr. Marc G. Genton (King Abdullah University of Science and Technology), Dr. Ying Sun (King Abdullah University of Science and Technology)
dngo5@uci.edu

We conduct exploratory data analysis on electroencephalogram (EEG) data to study the brain's electrical activity during the resting state. Standard approaches to analyzing EEG are classified as either time-domain (ARIMA modeling) or frequency-domain (via periodograms). Our goal here is to develop a systematic procedure for analyzing periodograms collected across many trials (each consisting of a 1-second trace) over the entire resting-state period. In particular, we use functional boxplots to extract information from the many trials [1]. First, we form consistent estimators of the spectrum by smoothing the periodograms, with the bandwidth selected via generalized cross-validation of the Gamma deviance. We then obtain descriptive statistics from the smoothed periodograms using functional boxplots, which provide the median and outlying curves. The performance of the functional boxplot is compared with that of classical point-wise boxplots in a simulation study and on the EEG data. Moreover, we explore the spatial variation of the spectral power in the alpha and beta frequency bands by applying the surface boxplot method to periodograms computed from the many resting-state EEG traces. This work is in collaboration with the Space-Time Group at UC Irvine.
The functional boxplot is a new nonparametric method for analyzing functional data, such as EEG data. Researchers who work with functional data will find the functional boxplot an informative exploratory tool for visualizing high-dimensional functional data. The descriptive statistics provided by this method are rank-based, so one can develop robust statistical models to investigate the features of EEG data.
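For readers unfamiliar with the method, the sketch below shows the core ranking step on invented data: periodograms are computed for many one-second traces, ranked with the modified band depth of Sun and Genton [1], and the deepest curve and central half are read off. The smoothing and surface-boxplot steps are omitted, and the white-noise "EEG" is purely illustrative.

    import numpy as np
    from scipy.signal import periodogram

    def modified_band_depth(curves):
        """Modified band depth (j = 2): for each curve, the average over
        all pairs of curves of the fraction of frequencies at which the
        curve lies inside the band the pair spans."""
        n = curves.shape[0]
        depth = np.zeros(n)
        for i in range(n):
            for j in range(i + 1, n):
                lo = np.minimum(curves[i], curves[j])
                hi = np.maximum(curves[i], curves[j])
                depth += ((curves >= lo) & (curves <= hi)).mean(axis=1)
        return depth / (n * (n - 1) / 2)

    rng = np.random.default_rng(0)
    fs, n_trials = 256, 50
    trials = rng.normal(size=(n_trials, fs))          # white-noise stand-in for EEG
    freqs, pxx = periodogram(trials, fs=fs, axis=1)   # one periodogram per trial

    depth = modified_band_depth(pxx)
    median_curve = pxx[np.argmax(depth)]              # functional median
    central_half = pxx[np.argsort(depth)[-(n_trials // 2):]]
    envelope = central_half.min(axis=0), central_half.max(axis=0)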

References:
[1] Sun, Y., and Genton, M.G. (2011), "Functional Boxplots," Journal of Computational and Graphical Statistics, 20, 316-334.


Interacting with humanlike interfaces: why we love Siri but hate Clippy

Bart Knijnenburg
Informatics
Co-author: Martijn Willemsen (Eindhoven University of Technology)
bart.k@uci.edu

Agent-based interaction, in which the user interacts with a virtual entity using natural language, has come a long way: what started with an annoying paperclip in MS Office has evolved into a powerful means of hands-free interaction with our phone. But what makes an interface agent usable? This question is harder than you might think… Since agents have no buttons or sliders, standard usability methods do not apply!

In my research I investigate the usability of human-like agent-based interfaces. In an experiment with a travel advisory system, I manipulated the "human-likeness" of the agent interface: one agent was made to look like a computer program that talks "computerese", while the others were shown as a human-like character that interacted with the user using casual, human-like language. The experiment demonstrates that users of the more human-like agents form an anthropomorphic use image of the system: they act human-like towards the system and try to exploit typical human-like capabilities they believe the system possesses.

This all works fine if the system actually possesses these human-like capabilities. But if the system lacks such capabilities, the usability of the system is severely reduced! Interestingly, this "overestimation effect" occurs only for the human-looking agents; users naturally adjust their expectations when using the computer-like agent.

Furthermore, in the analysis of the study results, I demonstrate that it is very difficult to fix the usability problems that arise with the human-like agent. This is because the "use image" (the mental representation) that users form of the agent-based system is inherently integrated (as opposed to the compositional use image they form of conventional interfaces). This integrated use image means that the feedforward cues provided by the system do not instill user responses in a one-to-one manner (in which case the user would exploit only the capabilities that the agent itself demonstrates); instead, these cues are integrated into a single use image. Consequently, users try to exploit capabilities that were not signaled by the system to begin with, further exacerbating the overestimation effect.

Due to technological advancements, we will soon interact with virtual agents, smart appliances, personal drones, self-driving cars, and autonomous robots using natural language. My work is essential for designers of such systems to understand how people interact with them. In my presentation at the AGS symposium I will demonstrate what good and bad things lie ahead of us in terms of agent-based interaction.
 
 

Session 10


The Militarization of Childhood: Education and the Children’s Corps in Chinese Communist Border Regions, 1937-1945

Kyle David
History
kedavid@uci.edu

Conspicuously absent from the history of the Chinese Communist Party's (CCP) miraculous rise during the 1930s and 1940s are the voices of its children. As with many non-elite historical subjects who left behind few if any records, children seldom enter the historical record. However, by reading the records written for and about children it is possible to provide an "imaginative reconstruction" of what life was like for children during an exceptionally tumultuous and violent period of modern Chinese history. This conference paper examines two institutions around which the lives of many border-region children revolved: school and the Children's Corps (ertongtuan). It argues that childhood in the CCP border regions was exceptionally militarized and inundated with images and stories of highly graphic violence. Specifically, this paper looks at how institutions such as primary schools used graphic and violent imagery in curricula and textbooks. In addition to teaching children basic reading, writing, and mathematics skills, schools within the communist border regions also constructed an anti-Japanese, pro-CCP narrative that deployed violent stories of struggle and various accounts of war atrocities in order to galvanize children in the war effort. This paper also looks at the role children played in the Children's Corps, a militia-like organization that recruited children between the ages of seven and fifteen. The Children's Corps worked within the purview of adult military organizations such as the New Fourth and Eighth Route Armies to aid the war effort by fulfilling a variety of roles. Among many other duties, children served as sentry guards, message runners, counter-intelligence and counter-espionage agents, and supply runners. Such precarious positions at times led to torture, serious injury, disfigurement, and even death. Drawing on a wide collection of newspaper articles, government directives, and oral histories given by former Children's Corps members, this paper challenges recent scholarship arguing that the war and violence of this period were "beyond juvenile comprehension," while also providing a window into the lives of children growing up in CCP-governed border regions before, during, and immediately following the War of Resistance against Japan. Lastly, this paper seeks to situate the legacies of CCP wartime education within the broader genre of the present-day People's Republic of China's National Humiliation Education (guochi jiaoyu) and Patriotic Education (aiguozhuyi jiaoyu) curricula.


Becoming-Fairy: Targeting Everyday Life in the UC Global Food Initiative

Crystal Hickerson
Comparative Literature
chickers@uci.edu

A popular thread of critical theory in the humanities right now is ecomaterialism, a discourse concerned with putting developments in the natural sciences into conversation with philosophy. Ecomaterialism asks how we have understood the material environment and how those understandings have influenced our critical theory, usually with the tacit aim of contributing to the debate on how humans can live more "sustainably." My critical project "Becoming-Fairy" expands ecomaterialist discourse by using popular conceptions of fairies in Britain and the United States during the nineteenth and twentieth centuries to consider how material practices of everyday life can be altered more effectively. In fact, "Becoming-Fairy" reads a research project funded by the UC Global Food Initiative that is underway in Verano Place graduate housing right now, designed to encourage residents to begin growing some of their own food. The project, entitled "Grow Your Own Food Campaign," operates not through overt pedagogical measures such as educational workshops but through small changes in the visual and social environment designed to encourage a more "fairy-like" existence, leading to more of a cradle-to-cradle system of food production and disposal that enriches our natural and social ecosystems.

My methodology of planting cues in everyday life to work subconsciously on student residents goes against the grain of the usual political posture adopted by researchers in the humanities, whose convention is to contextualize a problem in such a way that an audience will consciously understand that problem differently and freely decide to take ethical action. Here, I suggest we make gardening less queer to students and their families via an opposing (but not singular) approach to changing everyday life: instead of raising consciousness of the ethical imperative to accept or become educated in queer lifestyles (from the homosexual, to the androgyne historically represented by the figure of the fairy, to the environmental activist-gardener), we should develop strategies for smuggling queer behaviors into the norms of everyday life. I am suggesting, perhaps provocatively, that we avail ourselves of the same subtle tactics that capitalism has used to discipline our consumerist behaviors. Using theories drawn from the logic of fairy-ness in Victorian folklore, turn-of-the-century American drama, and present-day children's media; the philosophy of Deleuze and Guattari; and the human-based research on the UCI campus, I suggest radical alternatives for encouraging humans in becoming-fairy as they integrate seemingly queer but arguably healthier ecological practices into their already over-taxed daily routines.


The Dehumanized and the Nonhuman: Empathetic Play in Lucas Pope’s Papers, Please

Matt Knutson
Visual Studies
mknutso1@uci.edu

My current research examines the power of videogames to elicit empathy. In popular discourse, the videogame is a medium better known for graphic violence than for perspective-taking; however, games are exceptionally capable of making us consider a new point of view when we walk (digitally) in a character's shoes. The methods by which games "make us" perform behaviors require unpacking. To this end, I apply Bruno Latour's Actor-Network Theory to videogames, counting them among the "missing masses" of nonhuman agents that push back against humans and influence our behavior. By eliciting specific behaviors, games can and do provoke emotional states in us, and one emotion of critical significance is empathy.

To explore empathetic play critically, I center my analysis on Papers, Please, a 2013 game in which players assess immigration documents at a border post. Performing the role of a border officer, the player must scrutinize immigrants in order to sort them as parcels rather than as people with rights such as privacy. The uncomfortably intimate moment of exchange between officer and immigrant reveals the humanity behind legal documents and the inhumanity of the state's methods of assessment. Papers, Please often challenges the notion that play is "fun," instead presenting an experience of frustration and heartbreak; in other words, it is a game with a strongly negative affect. I find this negativity productive in its relevance to contemporary discourse on immigration.

The player of Papers, Please can't help but bring a new set of experiences to the table when moving beyond the fictive game to political reality. Empathetic play exposes the arbitrariness of legal documentation, the sometimes frustrating impenetrability of national borders, and the institutional dehumanization often inherent in immigration law. Papers, Please communicates these experiences at its structural, mechanical core; as a nonhuman actor, it provokes behavioral and emotional responses from its players. By investigating the methods by which games push back on their players, we can discover and embrace the empathetic consequences of critical play.


Enhancing Student Ownership: The Undergraduate Dance Major and the Creative Process of Choreography

Cara Scrementi
Dance
cscremen@uci.edu

I believe in creating connections between coursework and universal skills, so that the experiences gained serve the student holistically. In my thesis research, I explore how to consciously and thoughtfully incorporate ways of serving the undergraduate dance major holistically within a classroom environment, and how these concepts can be developed within a choreographic framework. Through the integration of experiential and active learning theories and techniques into the creative process of choreography, I aim to discover a variety of ways in which this process can lead to an enhanced sense of student ownership. The successful methods from a study exploring this idea can help create habits that foster lifelong learning in students. They will actively improve not only their dance composition skills but also their critical thinking, communication, and intra- and interpersonal skills. They will learn to take greater initiative and advantage of leadership opportunities, to work collaboratively with peers and instructors, and to increase their self-confidence and awareness of their own learning processes through activities, reflection, and lectures. Students learning the fundamental aspects of the creative process while also being aware of how to use them in future coursework, jobs, or other aspects of life is the undercurrent that drives my research topic.

The study will be conducted during the winter quarter. Participating students will meet together with me two times per week to investigate a variety of concepts primarily through movement, but also through reading, writing, reflection, guest lecturers, and discussion. Student progress will be measured through a series of three individual interviews (pre-, mid- and post-study), written and verbal activity assessments, my own observations and evaluations of their progress based on where they began, and student self-observations and evaluations. This research intends to serve the participating students as they continue their education and move into their careers, myself as a growing educator, other educators as a reference to consider in their own teaching methods, and therefore their students. It’s about educating the whole person, the underpinning of what I believe a college education to be.


The Asian American Literary Review Synchronous Teaching Initiative

Gerald Maa
English
gmaa@uci.edu

Per our mission statement, the Asian American Literary Review is "a space for all those who consider the designation 'Asian American' a fruitful starting point for artistic vision and community." When my co-editor-in-chief, Lawrence-Minh Bùi Davis, and I published our most experimental issue, we knew we wanted to extend the innovative impulse beyond the page. Our mixed race issue commissioned collaborative work from writers, artists, and scholars from around the world, encouraging, even requiring, that the collaboration beget work of mixed genre and/or media capable of contributing to exciting, current conversations about mixed race politics and identity. We partook as well, conceiving of the issue as a box containing the work collated into two books, a poster, and a deck of cards.

To extend the experimental, community-building impulse out of the box, so to speak, we started our synchronous teaching initiative, which I directed. We established a virtual node through which classes across the nation, and even a handful abroad, could interact and learn with each other as they studied our mixed race issue in their separate classrooms. We built digital labs, such as "Indigeneity," "Migration," and "Mixed Race Feminisms." We found specialists to imagine curricular material for the issue. We hosted discussion boards that brought together students across all time zones. We encouraged professors to share, co-teach, and think in connection with other classrooms. Over our two-year endeavor, the scores of classes that participated in our synchronous teaching program produced imaginative, incisive, and collective work: a virtual database of literary mixed-race figures, Tumblrs that log and reflect upon how mixed cultures appear in students' daily lives, field trips undertaken for engaged encounters with memorials, and personal archives of stories. From its inception, the Asian American Literary Review has endeavored to be a conduit between academia and the world outside the collegiate halls. Through our success with the synchronous teaching program, we have learned that one way to enlarge the channels between the university classroom and the social, political, and/or mundane world is to build conduits between classrooms as well. That is why we are excited to continue our synchronous teaching initiative with a special issue on war, to be published in fall 2015, in line with the fiftieth anniversary of the Vietnam War.


UC Irvine Service Workers Community-Based Theatre Project

Amanda Novoa
Drama
novoaa@uci.edu

The Service Workers Project is a community-based theatre project engaging UC Irvine service workers. The project is being produced by the campus organization, Brown Bag Theater Company (BBTC).

BBTC’s Mission: “We are an ensemble of students, artists, and scholars who aim to produce critically engaging work that reflects, impacts and empowers the Latino community. We create opportunities and leadership roles for Latino/Hispanic artists. We are committed to sharing and celebrating the richness of Latino culture. We believe that the art of theatre is a cultural force with the capacity to transform the lives of individuals within our community and society at large.”

I am a graduate student leader for BBTC, and I established and serve in the role of Engagement Director for the Service Workers Project. I have developed a close relationship with the community of service workers at UCI and have been hosting events through BBTC that serve as a forum for communication between workers and students. These events involve story circles, with activities that I have adapted to resurface forgotten memories, cultivate conversations, and bring out voices. In preparation for this opportunity, I took an intensive course with Cornerstone Theater Company, a reputable organization that has specialized in community-based work for almost thirty years. As part of my research, I was involved with South Coast Repertory (SCR), a theatre known for producing award-winning new plays. I worked on SCR's Dialogues/Dialogos, a two-year community-based project that shared the stories of the Latino community of Santa Ana, California.

With the Service Workers Project, the narratives that arise from each story circle I facilitate will culminate in an original play inspired by the community. BBTC will present a production that embodies our shared experiences. This original production will invite service workers to be involved as part of the cast and design team. We will lead theatrical workshops in acting and design to share the world of theatre with the workers and empower them to be artistically invested in our project.

Along with the Service Workers Project, BBTC is presenting additional work that is focused on the Latino community and that invites this underserved and underrepresented audience to the theatre.

A student-worker relationship has been a part of UCI’s history for many years. Students and workers have supported each other through many adversities. This project will strengthen this relationship and create a long-term partnership through the art of theatre.