
Using R for Spatial Analytics

Date and Time: 
Wednesday, April 6th, 2016
Location: 
Center Green
Speaker: 
Guido Cervone, Carolynne Hultquist and Elena Sava

R is a powerful, open-source, community-supported programming environment originally developed to meet the needs of the statistics community. Over the years it has grown into a general-purpose language and has been applied to problems across many disciplines. Powerful R packages are now available for efficient analytical work with spatial data.

Speaker Description: 

TBD

Better than Free: Data Explorations with public data and software tools.

Date and Time: 
Wednesday, April 6th, 2016
Location: 
Center Green
Speaker: 
Grace Peng and Mary Haley

Students will be guided through hands-on work with data at both extremes: from highly homogenized, globally gridded datasets published by national centers to highly idiosyncratic (and often unpublished) point-source data.

Students will learn where to go for more information about data and data tools. Most importantly, they will examine which kinds of problems can be answered by data, which type(s) of data can help answer their questions, how to find that data, and the mechanics of how to use that data.
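
As a minimal sketch of the two extremes described above (the library choices, file names, and variable names are illustrative assumptions, not tutorial materials), a homogenized gridded product can typically be opened with self-describing tools, while an idiosyncratic point-source file usually needs hand-tuned parsing:

```python
# Minimal sketch; file names, variable names, and missing-value codes are hypothetical.
import xarray as xr
import pandas as pd

# Homogenized, globally gridded data: self-describing netCDF with standard metadata.
grid = xr.open_dataset("reanalysis_t2m.nc")
global_mean = grid["t2m"].mean(dim=("lat", "lon"))  # simple (area-naive) mean per time step

# Idiosyncratic point-source data: a CSV whose quirks must be handled explicitly.
points = pd.read_csv(
    "station_obs.csv",
    parse_dates=["timestamp"],   # dates stored as text
    na_values=["-999", "M"],     # undocumented missing-value codes are common
)
print(global_mean.sizes, points.describe())
```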


Speaker Description: 

Grace Peng works in the Data Support Section.

Mary Haley works in the Data Visualization & Analysis Tools group

An innovative framework for real-time visualization and computational steering of high-resolution regional climate simulations

Date and Time: 
Tuesday, April 5th, 2016
Location: 
Center Green
Speaker: 
Raffaele Montuoro

An innovative software framework is introduced as a tool to interactively visualize large three-dimensional datasets generated in real time by high-resolution regional weather and climate models. The integrated system consists of a state-of-the-art, high-resolution coupled regional climate model, developed at Texas A&M University, and a combined, high-performance 3D visualization package and graphical user interface (GUI), developed by IBM Research.
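
The abstract does not describe the framework's interfaces; as a generic, heavily simplified illustration of the computational-steering pattern it refers to (a running simulation that periodically publishes fields for visualization and polls for user updates), the main loop might look like the following sketch, where all names are hypothetical:

```python
# Toy computational-steering loop; not the Texas A&M / IBM Research framework.
import queue
import numpy as np

steering_commands = queue.Queue()   # in a real system, filled by the GUI

def publish(step, field):
    """Stand-in for shipping a 3D field to the visualization client."""
    print(f"step {step}: field mean = {field.mean():.3f}")

params = {"forcing": 1.0}
field = np.zeros((16, 16, 16))      # tiny stand-in for a high-resolution 3D state

for step in range(100):
    field += params["forcing"] * np.random.rand(*field.shape)  # fake model physics
    if step % 10 == 0:
        publish(step, field)        # real-time visualization hook
    try:
        name, value = steering_commands.get_nowait()
        params[name] = value        # apply user steering without restarting the run
    except queue.Empty:
        pass
```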

Speaker Description: 

Dr. Montuoro is an Instructional Assistant Professor in the Department of Atmospheric Sciences at Texas A&M University. He joined Texas A&M University in 2004, after working as an IT consultant for Eutelsat SA in Paris, France. Dr. Montuoro holds a PhD in Theoretical Chemistry from the Scuola Normale Superiore di Pisa, Italy, and has developed innovative numerical models used for accurate calculations of photoionization phenomena. In 2010, some of his work in code optimization was featured in the national press. Dr. Montuoro's current work focuses on the development of high-resolution coupled regional climate models and tools.

Data analysis techniques for Detection and Attribution in climate studies

Date and Time: 
Tuesday, April 5th, 2016
Location: 
Center Green
Speaker: 
Philippe Naveau
In climatology, the question of Detection and Attribution (D&A) is fairly well defined. Following the IPCC definitions, "detection" is the process of demonstrating that climate has changed in some defined statistical sense, without providing a reason for that change, and "attribution" is the process of establishing the most likely causes for the detected change with some defined level of confidence.
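
As a toy illustration of detection in a defined statistical sense (an assumed example, not the techniques presented in this talk), one can test whether a temperature series exhibits a trend that is unlikely to arise from noise alone:

```python
# Toy detection example: test for a significant linear trend in a synthetic series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1900, 2021)
temps = 0.01 * (years - 1900) + rng.normal(0.0, 0.15, years.size)  # trend plus noise

result = stats.linregress(years, temps)
print(f"trend = {result.slope * 10:.3f} K/decade, p-value = {result.pvalue:.2e}")
# A small p-value supports detection of a change; it says nothing about attribution.
```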
 
Speaker Description: 

Philippe is a Research Scientist at the “Laboratoire des Sciences du Climat et de l’Environnement” in France. He specializes in statistical climatology and hydrology, extreme value theory, time series analysis, and spatial statistics.

Polyglot, Event Driven Computational Science Using the Actor Model

Date and Time: 
Tuesday, April 5th, 2016
Location: 
Center Green
Speaker: 
Joe Stubbs

Data-intensive computational techniques have become indispensable in virtually every domain of science. The sheer quantity of data being generated by various instruments and devices presents a significant challenge for even the most advanced computing centers. Traditional offline “batch” approaches to data analysis often cannot keep pace with real-time streaming data. At the same time, an explosion of new software tools has given computational scientists an unprecedented number of quality choices for analyzing their data.
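
As a minimal sketch of the actor model named in the title (a generic illustration, not the Abaco API mentioned below), each actor owns private state and a mailbox, and processes incoming messages one at a time:

```python
# Toy actor: private state, a mailbox, and one-at-a-time message handling.
import threading
import queue

class RunningMeanActor:
    def __init__(self):
        self._mailbox = queue.Queue()
        self._count, self._total = 0, 0.0        # state touched only by the actor thread
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def send(self, message):                     # asynchronous, non-blocking
        self._mailbox.put(message)

    def stop(self):
        self._mailbox.put(None)                  # poison pill
        self._thread.join()

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg is None:
                return
            self._count += 1
            self._total += msg
            print(f"running mean = {self._total / self._count:.2f}")

actor = RunningMeanActor()
for reading in (2.0, 4.0, 9.0):                  # e.g., values from a streaming instrument
    actor.send(reading)
actor.stop()
```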

Speaker Description: 

After completing a PhD in Mathematics at the University of Michigan, Joe moved to the University of Texas, where he has been building distributed systems, web services, and analytic tools for a variety of scientific applications. He is currently a research scientist at the Texas Advanced Computing Center, where he works on the Agave “science as a service” project, a hosted platform for hybrid cloud, HPC, and high-throughput scientific computing. Joe is co-creator of several open-source Python projects in use at TACC, including Abaco, a system that implements the actor model of concurrent programming using containers and HTTP.

Expanding users’ analysis capabilities with the CMIP Analysis Platform at NCAR

Date and Time: 
Tuesday, April 5th, 2016
Location: 
Center Green
Speaker: 
Dave Hart

Most academic researchers do not have the resources to download, store, and analyze large portions (often tens or hundreds of terabytes) of the 2 PB of data published worldwide from the Coupled Model Intercomparison Project Phase 5 (CMIP5). This limitation will be exacerbated in Phase 6, with data volumes expected to be 10 or 20 times larger. For CMIP6, NCAR alone is projecting the creation of 5 PB of data or more.

Speaker Description: 

David Hart is manager of CISL's User Services Section, where he handles allocations for CISL's high-performance computing systems and oversees the CISL Help Desk and Consulting Services Group. Prior to arriving at NCAR in 2010, David worked for 15 years at the San Diego Supercomputer Center (SDSC) at the University of California, San Diego, in a variety of roles and leadership positions, including allocations, user support, and communications. During his time at SDSC, he also held a number of leadership positions in the TeraGrid program and continues to be involved with the XSEDE program. His professional and research interests include metrics for measuring the performance and impact of cyberinfrastructure systems and activities.

Jupyter Ascending: a practical hand guide to galactic scale, reproducible data science

Date and Time: 
Tuesday, April 5th, 2016
Location: 
Center Green
Speaker: 
John Fonner

Scientific reproducibility must be as much about accessibility and clearly communicating ideas as it is about making calculations consistent. As computation plays an ever-increasing role in research, packaging computations in a way that supports simple, transparent recreation of results is critical for many scientific domains. A number of software tools and container technologies, such as Jupyter notebooks and Docker containers, provide key elements toward this end, but they also have limitations in both capability and ease of use.
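
One concrete way to support transparent recreation of results (an assumed example, since the abstract names no specific workflow) is to re-execute a notebook from top to bottom non-interactively, for instance with nbformat and nbclient:

```python
# Re-run a notebook end to end so its outputs are regenerated from scratch.
# The file name is hypothetical; nbclient/nbformat are one common tool choice,
# not necessarily the workflow discussed in this talk.
import nbformat
from nbclient import NotebookClient

nb = nbformat.read("analysis.ipynb", as_version=4)
NotebookClient(nb, timeout=600, kernel_name="python3").execute()
nbformat.write(nb, "analysis_rerun.ipynb")   # executed copy with fresh outputs
```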

Speaker Description: 

John Fonner is a research associate in Life Sciences Computing at the Texas Advanced Computing Center (TACC). He earned a Ph.D. in Biomedical Engineering at the University of Texas at Austin, where he used a blend of experimental and computational techniques to study binding interactions between peptides and conducting polymers for implant applications in the nervous system. Since joining TACC in 2011, John has served on a number of projects that help life sciences researchers leverage advanced computing resources, both through training and through the development of better tools and cyberinfrastructure.

Data Thinking before Data Crunching

Date and Time: 
Tuesday, April 5th, 2016
Location: 
Center Green
Speaker: 
Grace Peng

Data scientist is the hot job title du jour. Many books and sites teach the mechanics of how to make data visualizations, but skip or gloss over the foundations of data science. This talk will help you think through fundamental considerations before you plunge into your data analysis.
Can this problem be answered with data? What type of data can help me answer this question? Where can I find the best data for the job? Does the data match the data documentation? How do I cite the data for reproducibility?

Speaker Description: 

Grace Peng works in the Data Support Section.

One approach to ensuring that data analysis projects and research reports are reproducible

Date and Time: 
Tuesday, April 5th, 2016
Location: 
Center Green
Speaker: 
Janine Aquino

Authors: Mike Daniels, William Cooper, Janine Aquino, Teresa Campos, William Brown (All from NCAR/EOL)

Speaker Description: 

Janine manages research data from the two NCAR/EOL research aircraft: HIAPER, a modified Gulfstream V jet, and a four-engine turboprop C-130. Data are made available online as part of comprehensive project websites that support cutting-edge atmospheric research.

Brown Dog: An Elastic Data Cyberinfrastructure for Autocuration and Digital Preservation

Date and Time: 
Tuesday, April 5th, 2016
Location: 
Center Green
Speaker: 
Jay Alameda

Smruti Padhy, Jay Alameda, Rui Liu, Edgar Black, Liana Diesendruck, Mike Dietze, Greg Jansen, Praveen Kumar, Rob Kooper, Jong Lee, Richard Marciano, Luigi Marini, Dave Mattson, Barbara Minsker, Chris Navarro, Marcus Slavenas, William Sullivan, Jason Votava, Inna Zharnitsky, Kenton McHenry
National Center for Supercomputing Applications
University of Illinois at Urbana-Champaign

Speaker Description: 

Jay Alameda is the lead for Advanced Application Support at the National Center for Supercomputing Applications. In this role, he works with the Extreme Science and Engineering Discovery Environment (XSEDE), a collaboration of NSF-funded high-performance computing (HPC) resource providers working to provide a common set of services, including the provisioning of advanced user support, to the science and engineering community. In particular, Jay leads the Extended Support for Training, Education, and Outreach Service of XSEDE, which provides the technical expertise to support Training, Education, and Outreach activities organized by XSEDE. He was also the lead of the recently completed NSF-funded SI2 project, “A Productive and Accessible Development Workbench for HPC Applications Using the Eclipse Parallel Tools Platform”, which improved the Eclipse Parallel Tools Platform (PTP) to serve as a platform for development of HPC applications.
