Session 9 Abstracts

Implementing approximate computing algorithms over stochastic circuits

Seyyed Ahmad Razavi Majomard
Computer Science
Co-author: Nazanin Ghasemian (Shahid Beheshti University)
srazavim@uci.edu

With the advent of new applications of digital systems such as the Internet of Things, it has become necessary to reduce the power consumption of digital systems while maintaining their reliability. As a result, Approximate Computing (AC) and Stochastic Computing (SC) have recently attracted considerable attention from research centers. Several papers published in recent years demonstrate the benefits of AC and SC, such as reduced power consumption. Note that AC and SC are two different concepts: the first operates at the algorithm level and the other at the circuit level. I will first explain AC, and then outline my proposed approach.

Many digital systems rely on precise computation, which costs both time and power. However, such precision is not required in some applications. For example, JPEG, one of the most widely used image compression algorithms, reduces the size of an image at the cost of some degradation in quality. In such applications, precise computation is unnecessary because the algorithm has its own inherent errors, and by allowing imprecise computation the overall error rate remains tolerable. Hence, using AC, the size and power consumption of digital systems can be reduced.

Up to now, research on AC has assumed that it runs on reliable circuits. However, because new chip manufacturing technologies are not reliable, we want to develop AC algorithms that run on unreliable circuits. For this purpose, we will use SC, a method of computing with unreliable circuits. (Although SC circuits use the same manufacturing technology as traditional circuits, they require much less area and are much more reliable, at the cost of inherent error in computation.)

Therefore, by running AC algorithms over SC circuits, which have their own inherent errors, we can reduce power and area while improving reliability. In other words, we will combine the advantages of AC and SC to overcome their individual weaknesses and make both of these promising methods more practical.
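To make the circuit-level idea concrete, here is a minimal sketch of unipolar stochastic computing, in which a value p in [0, 1] is encoded as a random bitstream whose fraction of 1s is p; multiplication then costs a single AND gate per bit. The encoding, stream length, and values below are illustrative only, not from this work.

```python
import random

def to_bitstream(p, n, rng):
    """Unipolar encoding: a value p in [0, 1] becomes a length-n
    bitstream whose expected fraction of 1s is p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def from_bitstream(bits):
    """Decoding: the estimated value is the observed fraction of 1s."""
    return sum(bits) / len(bits)

def sc_multiply(a_bits, b_bits):
    """In unipolar SC, multiplication is one AND gate per bit, since
    P(a_i AND b_i = 1) = p_a * p_b for independent streams."""
    return [x & y for x, y in zip(a_bits, b_bits)]

rng = random.Random(0)
n = 10_000                      # longer streams trade latency for accuracy
a = to_bitstream(0.6, n, rng)
b = to_bitstream(0.5, n, rng)
product = from_bitstream(sc_multiply(a, b))   # close to 0.6 * 0.5 = 0.30
```

The result is only approximately 0.30, which illustrates the inherent computational error of SC that the abstract refers to: accuracy improves with stream length, and a single flipped bit perturbs the result only slightly, which is the source of SC's reliability.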


Hackathons: The Intersection of Informal Learning and Civic Engagement

Van Custodio
Informatics
vcustodi@uci.edu

Efforts to expose diverse students to Computer Science (CS) education face limited resources and socioeconomic barriers (Deahl, 2014). In 2013, 57.7% of college students majoring in computing were white and only 14% were women (Zweben & Bizot, 2014), with similar rates of participation in AP CS exams (Ericson, 2013). Special classes and programs have been created to address this disparity, and yet it persists. At the same time, hackathons, competitions focused on developing software in a compressed timeframe, tend to produce products that are not further pursued (Zeid, 2013). If not to develop production software, what purpose do these events serve? In this work, I will explore how hackathons can be used as Informal Learning Environments (ILEs) to augment existing approaches and close the diversity gap in CS education. Participation in these kinds of contests has been shown to increase performance and motivation in the classroom (Zeid, 2013). Civic hackathons, in particular, have the potential to attract diverse participants (DiSalvo et al., 2014). However, these effects have thus far been viewed as relatively incidental to other goals, such as increasing corporate brand recognition or improving access to civic data. In this work, I will explicitly analyze hackathons as ILEs using social and situated learning theories (Bandura & McClelland, 1977; Lave & Wenger, 1991). In particular, this research project asks the following questions:

— How do hackathons contribute to advancing diversity in CS learning? What implications do these contributions have for the design of ILEs focused on computing education?
— What organizations, processes, structures, and artifacts are used by organizers and participants during hackathons? How do these compare to traditional ILEs?
— In what ways do hackathon organizers and participants currently integrate STEM learning during hackathons?
— Can interest in CS and learning of computing skills be improved for under-represented minorities with the addition or alteration of these practices?

This work makes two major contributions. First, by applying known theories of learning to the challenge of organizing hackathons as ILEs, I will demonstrate whether and how such competitions can increase interest in, access to, and confidence in computer science for under-represented minorities. Second, by studying learning in the context of hackathons, I will refine theories of social, situated, and informal learning. The outcomes of this work can be used to promote hackathons and other ILEs for CS education as well as to provide an evidence-based model for organizing such events.


Source Estimation for EEG Signals: A Statistical Approach

Yuxiao Wang
Statistics
yuxiaow1@uci.edu

Electroencephalography (EEG) has been widely used to study the dynamics of the human brain due to its relatively high temporal resolution (on the order of milliseconds). EEGs are indirect measurements of neuronal sources, and estimating the underlying sources is challenging due to the ill-posed inverse problem. EEGs are typically modeled as a linear mixing of the underlying sources. Here, we consider source modeling and estimation for multi-channel EEG data recorded over multiple trials. We propose parametric models to characterize the latent source signals and develop methods for estimating the processes that drive the sources, rather than merely recovering the source signals. Moreover, we develop metrics for connectivity between channels through latent sources by studying the properties of the estimated mixing matrix. Our estimation procedure pools information from all trials using a two-stage approach: first, we apply the second-order blind identification (SOBI) method to estimate the mixing matrix; second, we estimate the parameters of the latent sources using maximum likelihood. Our methods also impose regularization to ensure sparsity. The proposed methods have been evaluated on both simulated data and EEG data obtained from a motor learning study.
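The linear mixing model referred to above can be written as X = AS + noise, where X holds the channel recordings, S the latent sources, and A the mixing matrix. Below is a minimal simulation sketch under assumed dimensions, with independent AR(1) latent sources as a simple stand-in for the parametric source models the abstract proposes; all coefficients and sizes are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_channels, n_times = 3, 8, 1000    # illustrative dimensions

# Latent sources: independent AR(1) processes, a simple stand-in for
# the parametric source models discussed in the abstract.
phi = np.array([0.9, 0.5, -0.7])               # illustrative AR coefficients
S = np.zeros((n_sources, n_times))
for t in range(1, n_times):
    S[:, t] = phi * S[:, t - 1] + rng.standard_normal(n_sources)

# Linear instantaneous mixing: each EEG channel is a weighted sum of
# the latent sources plus sensor noise.
A = rng.standard_normal((n_channels, n_sources))   # mixing matrix
X = A @ S + 0.1 * rng.standard_normal((n_channels, n_times))

# With the sources in hand, each AR(1) parameter is recovered from the
# lag-1 regression; in the actual two-stage procedure, S would first be
# estimated from X (e.g. via SOBI) before this step.
phi_hat = np.sum(S[:, 1:] * S[:, :-1], axis=1) / np.sum(S[:, :-1] ** 2, axis=1)
```

The point of the sketch is the separation of concerns the abstract describes: the mixing matrix A carries the channel-to-source connectivity information, while the parameters phi characterize the processes driving each source.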


Modeling Human Behavior from Location Based Activity Data

Moshe Lichman
Computer Engineering Ph.D.
lichman@gmail.com

Abstract:
The increasing availability of location-based data sets opens up new ways to analyze data and extract valuable insights on population mobility patterns. Data such as location-based social media and taxi GPS traces are valuable new sources for such studies. However, in many cases the amount of data is very limited, and we often have very little information, if any, on individuals within the population. In our work, we focus on a type of data where each spatio-temporal data point (a point where we have information on both space and time) can be associated with a particular user. We show that by using such data, we can model an individual's mobility patterns in highly dense urban areas. Moreover, our work introduces a novel way of overcoming the problem of having too little data: by learning the structure of the entire population, we borrow methods widely used in linguistics to project a common pattern onto individuals with too little data. We show that our results exceed those of current methods, not only for data-poor individuals but in general. This allows us to build better predictive models of population behavior than previously possible.
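The linguistics-inspired projection of population structure onto data-poor individuals is reminiscent of interpolation smoothing in language modeling (Jelinek-Mercer style). The sketch below illustrates that general idea under assumed data; the mixture weight, locations, and counts are hypothetical placeholders, not the paper's actual model.

```python
import numpy as np

def smoothed_profile(user_counts, population_probs, alpha=0.7):
    """Interpolation smoothing: blend a sparse individual's empirical
    location distribution with the population-level distribution.
    alpha is the weight on the individual (illustrative value)."""
    total = user_counts.sum()
    if total == 0:
        return population_probs            # no data: back off entirely
    individual = user_counts / total
    return alpha * individual + (1 - alpha) * population_probs

# Toy example: 5 locations, a user observed at only one of them twice.
population = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
user = np.array([0, 2, 0, 0, 0])           # a "too little data" individual
profile = smoothed_profile(user, population)
```

The smoothed profile still favors the user's observed location but assigns nonzero probability to places the user has never been seen, borrowing that mass from the population pattern, which is the essence of the projection described above.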

Research Significance:
Policy planning officials, whether part of a public or private organization, have always needed information in order to make important policy decisions. In urban planning, such information includes items such as population locations and resource usage. In the past, such information was gathered mostly through polls. Nowadays, more and more decisions are made based on information gathered by automatic sensors. Information derived from traffic sensors and cell phone usage can be used to identify population mobility patterns. By learning such patterns, decision makers can plan policies to meet the population's needs, whether by creating a better public transportation system or a better resource distribution system. Our work is a complementary yet necessary part of that field. By creating better, more accurate models that can predict and understand population mobility patterns, we improve our ability to extract valuable insights from vast amounts of data, insights that are crucial to every policy planning process. With these tools, we will vastly increase our understanding of how best to build the cities of the future.


What can we learn from the travel patterns of car-less households?

Suman Mitra
Transportation Sciences
skmitra@uci.edu

Planning for more sustainable travel in California: mobility is an important prerequisite for equal participation in society and the satisfaction of basic human needs. The capacity to undertake most social activities depends on mobility, yet mobility is unevenly distributed across social and geographical boundaries. Compared to mobile households, mobility-impaired households are at a disadvantage for employment, education, and other essential opportunities. Many households in the United States are unable to access the benefits of transportation services due to various limitations: some members cannot drive because of a disability or medical condition, some cannot afford a car, and some have no access to transit services. People with physical disabilities are most at risk, but elderly households are increasingly at risk as well. Approximately 10.5 million US households, or 9%, do not own cars (2008-2012 American Community Survey). Unfortunately, our knowledge of car-less households is lacking, as is research on their predicaments.

Therefore, the focus of my study is to contribute to the growing interest in social justice issues in urban transportation planning by examining the transportation needs of car-less households in California, based on the 2012 California Household Travel Survey (CHTS). These households, often forgotten in transportation policy discussions, can be organized into two groups: involuntarily and voluntarily car-less. Using discrete choice models, I analyze the characteristics of voluntarily and involuntarily car-less households and their travel behavior using the CHTS travel diaries. In addition, I explore the degree of choice available to car-less households and the wider impacts their transportation choices have on their lives.
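As an illustration of the kind of discrete choice model mentioned above, here is a minimal binary logit fitted by gradient ascent on synthetic data; the covariate, coefficients, and data are hypothetical placeholders for illustration, not CHTS variables or results.

```python
import numpy as np

def fit_logit(X, y, lr=0.5, n_iter=5000):
    """Fit a binary logit P(y=1|x) = 1/(1+exp(-x.w)) by gradient
    ascent on the average log-likelihood (plain NumPy)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

# Synthetic illustration: probability of being car-less as a function
# of standardized income; the coefficients and data are made up.
rng = np.random.default_rng(0)
n = 500
income = rng.standard_normal(n)
X = np.column_stack([np.ones(n), income])           # (intercept, income)
true_w = np.array([-1.0, -2.0])    # lower income -> more likely car-less
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)
w_hat = fit_logit(X, y)            # recovers the sign of the income effect
```

In an analysis like the one described above, the estimated coefficients quantify how household characteristics shift the odds of being car-less, which is what separates the involuntarily car-less (constrained by income or ability) from the voluntarily car-less.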

The study shows that, given the well-documented link between income and car ownership, most households with no cars are in that situation because they cannot afford one. Households that are forced into being car-less face restrictions on their mobility, which in turn negatively affect their life participation and well-being. However, households that voluntarily choose to live without cars do not face the same access restrictions and have more opportunities for jobs, social connections, access to care, and entertainment.

The findings of this study have the potential to contribute to a better understanding of individuals' unmet transportation needs due to being car-less. Consequently, the study will allow policymakers to better understand the hardships of car-less households, and help formulate appropriate policies that will provide better mobility to this disadvantaged group and promote social equity in California.


An Exploratory Data Analysis of EEG Time Series: A Functional Boxplot Approach

Duy Ngo
Statistics
Co-authors: Dr. Hernando Ombao (UCI), Dr. Marc G. Genton (King Abdullah University of Science and Technology), Dr. Ying Sun (King Abdullah University of Science and Technology)
dngo5@uci.edu

We conduct exploratory data analysis on electroencephalogram (EEG) data to study the brain's electrical activity during the resting state. The standard approaches to analyzing EEG are classified either into the time domain (ARIMA modeling) or the frequency domain (via periodograms). Our goal here is to develop a systematic procedure for analyzing periodograms collected across many trials (each consisting of one-second traces) during the entire resting-state period. In particular, we use functional boxplots to extract information from the many trials [1]. First, we formed consistent estimators of the spectrum by smoothing the periodograms with a bandwidth selected via generalized cross-validation of the Gamma deviance. We then obtained descriptive statistics from the smoothed periodograms using functional boxplots, which provide the median and outlying curves. The performance of the functional boxplot is compared with classical point-wise boxplots in a simulation study and on the EEG data. Moreover, we explored the spatial variation of spectral power in the alpha and beta frequency bands by applying the surface boxplot method to periodograms computed from the many resting-state EEG traces. This work is in collaboration with the Space-Time Group at UC Irvine.
The functional boxplot is a new nonparametric method for analyzing functional data, such as EEG data. Researchers who work with functional data will find the functional boxplot an informative exploratory tool for visualizing high-dimensional functional data. The descriptive statistics provided by this method are rank-based, so one can develop robust statistical models to investigate the features of EEG data.
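As a simplified illustration of the pipeline described above (periodograms across trials, smoothing, then a functional summary), the sketch below uses a pointwise median and central envelope rather than the band-depth ordering of the true functional boxplot [1], and a fixed smoothing window rather than a GCV-selected bandwidth; the simulated AR(2) traces and all sizes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_times = 50, 256        # illustrative: 50 short traces

# Simulate trials from an AR(2) process with a spectral peak,
# a crude stand-in for band-limited resting-state EEG activity.
trials = np.zeros((n_trials, n_times))
for r in range(n_trials):
    for t in range(2, n_times):
        trials[r, t] = (1.5 * trials[r, t - 1] - 0.8 * trials[r, t - 2]
                        + rng.standard_normal())

# Periodogram of each trial: power at the Fourier frequencies.
pgrams = np.abs(np.fft.rfft(trials, axis=1)) ** 2 / n_times

# Smooth each periodogram with a fixed moving-average window
# (the study selects the bandwidth by GCV of the Gamma deviance).
kernel = np.ones(5) / 5
smoothed = np.apply_along_axis(
    lambda p: np.convolve(p, kernel, mode="same"), 1, pgrams)

# Pointwise functional summary across trials: median curve plus a
# central 50% envelope (the band-depth version also flags outliers).
median_curve = np.median(smoothed, axis=0)
lower, upper = np.percentile(smoothed, [25, 75], axis=0)
```

The median curve summarizes the typical spectrum over trials, and the envelope shows trial-to-trial variability; the band-depth-based functional boxplot refines this by ranking whole curves, which is what makes outlying trials detectable.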

References:
[1] Sun, Y., and Genton, M.G. (2011), "Functional Boxplots," Journal of Computational and Graphical Statistics, 20, 316-334.


Interacting with humanlike interfaces: why we love Siri but hate Clippy

Bart Knijnenburg
Informatics
Co-author: Martijn Willemsen (Eindhoven University of Technology)
bart.k@uci.edu

Agent-based interaction, in which the user interacts with a virtual entity using natural language, has come a long way: what started with an annoying paperclip in MS Office has evolved into a powerful means of hands-free interaction with our phones. But what makes an interface agent usable? This question is harder than you might think: since agents have no buttons or sliders, standard usability methods do not apply!

In my research I investigate the usability of human-like agent-based interfaces. In an experiment with a travel advisory system, I manipulated the "human-likeness" of the agent interface: one agent was made to look like a computer program that talks "computerese", while the others were shown as human-like characters that interacted with the user in casual, human-like language. In the experiment I demonstrate that users of the more human-like agents form an anthropomorphic use image of the system: they act human-like towards the system and try to exploit typical human capabilities they believe the system possesses.

This all works fine if the system actually possesses these human-like capabilities. But if the system lacks them, usability suffers severely. Interestingly, this "overestimation effect" only occurs for the human-looking agents; users naturally adjust their expectations when using the computer-like agent.

Furthermore, in the analysis of the study results, I demonstrate that it is very difficult to fix the usability problems that arise with the human-like agent. This is because the "use image" (the mental representation) that users form of the agent-based system is inherently integrated (as opposed to the compositional use image they form of conventional interfaces). This integrated use image means that the feedforward cues provided by the system do not instill user responses in a one-to-one manner (in which case the user would only exploit capabilities that the agent itself demonstrates); instead, these cues are integrated into a single use image. Consequently, users try to exploit capabilities that were not signaled by the system to begin with, further exacerbating the overestimation effect.

Due to technological advancements we will soon interact with virtual agents, smart appliances, personal drones, self-driving cars, and autonomous robots using natural language. My work is essential for designers of such systems to understand how people interact with them. In my presentation at the AGS symposium I will demonstrate what good and bad things are ahead of us in terms of agent-based interaction.