"Hot Topics in Systems Research at Intel Labs"
It's an exciting time to be doing systems research as emerging technologies like new non-volatile memories, silicon photonics and energy-efficient SoC designs are changing the way that we put together, optimize and scale systems. In this talk, I'll give an overview of some of the key technology drivers and how researchers at Intel Labs are exploring their use in big-data analytics, machine learning, software-defined networks and new storage-system designs.
Bio: Rich Uhlig is an Intel Fellow and director of Systems Architecture in Intel Labs, where he leads research efforts in virtualization, big-data analytics, storage systems, software-defined networking and energy-efficient computing. He is currently focusing on new architectures for scale-out server systems in support of Cloud workloads and datacenters. Uhlig is also the managing director for the Intel Science and Technology Center (ISTC) for Cloud Computing.
In past work, Uhlig started virtualization efforts within Intel in 1998 and led the definition of multiple generations of virtualization architecture for Intel processors and platforms, known collectively as "Intel Virtualization Technology" (Intel VT). Intel VT is today used in a variety of settings and applications to improve the utilization, management, availability, and security of systems based on Intel Architecture.
Prior to joining Intel in 1996, Uhlig held post-doctoral fellowships
at the European national research labs of Germany, Greece, and France,
where he worked on advancing simulation technology and on architectural
support for modern operating-system design. Uhlig has published over 20
technical papers, holds 24 patents, and has received two Intel
Achievement Awards for his work on system simulation and Intel VT. He
earned his Ph.D. in Computer Science and Engineering from the
University of Michigan in 1995.
"Follow Your Heart or Jumping Onto Hot Wagons: Finding the Right Strategy for Setting Research Agenda"
In this talk, the speaker will describe how she transitioned from being an industrial researcher into an academic researcher, the challenges she faced during this transition, and how she overcame them. She will also discuss some key differences between industrial and academic research, using examples from her past research projects. Last but not least, she will describe a few strategies for identifying important research opportunities to pursue. This includes how she gradually evolved her research from merely designing networking protocols to designing secure mobile healthcare systems. Some descriptions of her current networking and mobile healthcare projects will be included.
Bio: Dr. Mooi Choo Chuah is an associate professor in the Computer Science & Engineering Department at Lehigh University. Before she joined Lehigh in 2004, she spent 12 years at Bell Laboratories in Holmdel, New Jersey. While at Bell Laboratories, she contributed to the design of Wireless LAN and Third Generation Cellular Network systems. Her work at Bell Laboratories (some with her colleagues) resulted in her being awarded 62 US and 15 international patents. At Lehigh, she initially worked on designing networking protocols for disruption-tolerant networks and mitigation schemes for distributed denial-of-service attacks. Her current research includes designing next-generation content-centric networks, secure mobile healthcare systems, and network security. Her research has been supported by DARPA, PITA, and NSF. She has published at numerous IEEE and ACM networking conferences and journals. She was the IEEE Infocom 2010 Technical Co-Chair and an associate editor of IEEE Transactions on Mobile Computing. She is a senior member of IEEE and ACM and an associate editor of IEEE Transactions on Parallel and Distributed Systems.
"An all-in-one talk: My research experiences with academia and industry"
In this talk, I will first cover highlights of my dissertation research on the problem of software misconfiguration. I will describe the core idea of my thesis, which is to automate misconfiguration diagnosis by using causality analysis to determine the specific inputs to an application that cause the application to produce undesired output. I will explain how we were able to infer such relations by analyzing the execution of the application and the interactions between the application and the operating system. Then I will switch gears and talk about my transition to industry after graduation. I will describe the differences that I see between industry and academia, and will also talk about the new research topics that I've recently been working on at Google.
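To make the idea of causally linking configuration inputs to undesired output concrete, here is a toy sketch. It is not the speaker's actual technique (which infers causality by analyzing a single execution and its OS interactions, without re-running the application); this simplified version approximates causality by perturbation, re-running the application with each suspect option removed and checking whether the misbehavior disappears. The function names and the example app are illustrative assumptions only.

```python
def causal_config_inputs(app, config, is_bad_output):
    """Return the config keys whose values causally affect the bad output.

    Simplified illustration only: drop each option in turn (falling back to
    the app's defaults) and re-run; an option is deemed causal if removing
    it makes the misbehavior disappear. Real misconfiguration-diagnosis
    tools infer this from one execution via dynamic causality analysis.
    """
    assert is_bad_output(app(config)), "app must misbehave under config"
    culprits = []
    for key in config:
        trial = dict(config)
        trial.pop(key)  # remove the suspect option; app uses its default
        if not is_bad_output(app(trial)):
            culprits.append(key)
    return culprits


# Hypothetical example: an app that errors out on a negative cache size.
def app(cfg):
    return "error" if cfg.get("cache_size", 16) < 0 else "ok"

bad_config = {"cache_size": -1, "verbose": True}
print(causal_config_inputs(app, bad_config, lambda out: out == "error"))
```

Running this flags only `cache_size` as the causal input, since removing `verbose` leaves the error in place. The perturbation approach needs the app to be cheaply re-runnable, which is exactly the limitation that single-execution causality analysis avoids.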
Bio: Mona received her PhD in May 2012 under supervision of Prof. Jason Flinn at University of Michigan. Her research interests broadly include software systems with an emphasis on operating systems. The focus of her research is on software reliability. More specifically, she has built tools that automate software misconfiguration troubleshooting. More information about her research can be found at: http://web.eecs.umich.edu/~monattar.
She joined Google's cloud infrastructure group in August 2012, where she works on Google Compute Engine. Google Compute Engine provides virtual machines on top of Google infrastructure to users outside Google. On the side, she also enjoys working on large-scale analysis of genomic data.
"Plug into the Supercloud"
Cloud computing is often compared to the power utility model as part of a trend towards the commoditization of computing resources. However, today's cloud providers do not simply supply raw computing resources as a commodity, but also act as distributors, dictating cloud services that are not compatible across providers. In this talk, I will discuss a new cloud service distribution layer, called a Supercloud, that is completely decoupled from the cloud provider. A Supercloud gives its users the illusion of their own homogenized private cloud (albeit, layered on top of one or more third-party providers). Under the hood, the Supercloud can include different hypervisors, hardware architectures, storage subsystems, and connectivity fabrics. Leveraging a nested paravirtualization layer called the Xen-Blanket, the Supercloud maintains the control necessary to implement hypervisor-level services and management. Using the Xen-Blanket to transform various cloud provider services into a unified offering, we have deployed a Supercloud across Amazon's Elastic Compute Cloud (EC2), IBM, and Cornell University, and performed live VM migration between the different sites. Furthermore, Superclouds create opportunities to exploit resource management techniques that providers do not expose, like resource oversubscription, and ultimately can reduce costs for users.
Bio: Hakim Weatherspoon is an assistant professor in the Department of Computer Science at Cornell University. His research interests cover various aspects of fault-tolerance, reliability, security, and performance of large Internet-scale systems such as cloud computing and distributed systems. Professor Weatherspoon received his Ph.D. from the University of California at Berkeley and his B.S. from the University of Washington. He is an Alfred P. Sloan Fellow and a recipient of an NSF CAREER award, the DARPA Computer Science Study Panel (CSSP), an IBM Faculty Award, the NetApp Faculty Fellowship, an Intel Early Career Faculty Honor, and the Future Internet Architecture award from the National Science Foundation (NSF).