Events

Computational Science Distinguished Seminar Series

The School of Advanced Computing is pleased to announce the Computational Science Distinguished Seminar Series. Drawing from scientific computing, applied mathematics, and scientific machine learning, these seminars will showcase the work of leading computational scientists, with applications ranging from systems biology and materials physics to aerospace, sustainability, and energy science. Join us as the series launches on Apr 25, 2024.

Spring 2024 Series

Apr 25, 2024 - Vikram Gavini, University of Michigan

Towards large scale quantum accuracy materials simulations

2:00 pm (refreshments at 2 pm, seminar begins at 2:30 pm)

RTH 526


May 2, 2024 - Yannis Kevrekidis, Johns Hopkins University

No Equations, No Variables, No Space and No Time: Data and the Modeling of Complex Systems 

9 am (refreshments at 9 am, seminar begins at 9:30 am)

MCB 101


May 7, 2024 - Vahid Tarokh, Duke University

Representation Learning, Prediction, and Sampling of Extreme Events 

9:00 am (refreshments at 9 am, seminar begins at 9:30 am)

RTH 526

Fall 2024 Series

Time / Details TBD - Youssef Marzouk, Massachusetts Institute of Technology

Time / Details TBD - Jessica Zhang, Carnegie Mellon University

Apr 25, 2024 - Vikram Gavini, University of Michigan

Towards large scale quantum accuracy materials simulations

2:00 pm (refreshments at 2 pm, seminar begins at 2:30 pm)

RTH 526

Abstract: Electronic structure calculations, especially those using density functional theory (DFT), have been very useful in understanding and predicting a wide range of materials properties. Despite the wide adoption of DFT, and the tremendous progress in theory and numerical methods over the decades, the following challenges remain. Firstly, many widely used implementations of DFT suffer from domain-size and geometry restrictions, limiting the complexity of materials systems that can be treated using DFT calculations. Secondly, there are many materials systems (such as strongly-correlated systems) where the widely used model exchange-correlation functionals in DFT, which account for the many-body quantum mechanical interactions between electrons, are not sufficiently accurate. This talk will discuss recent advances towards addressing the aforementioned challenges, which provide a path to large-scale quantum accuracy materials simulations. In particular, the development of computational methods and numerical algorithms for conducting fast and accurate large-scale DFT calculations using adaptive finite-element discretization will be presented, which form the basis for the recently released DFT-FE open-source code. The computational efficiency, scalability, and performance of DFT-FE, which can compute the electronic structure of systems containing many thousands of atoms in wall-times of a few minutes, will be presented. Some recent studies on the energetics of quasicrystals (ScZn7.33) and dislocations in Mg using DFT-FE will be presented, highlighting the complex systems that can be tackled with DFT-FE. In addressing the second challenge, our recent progress in bridging highly accurate quantum many-body methods with DFT will be discussed, which is achieved by computing and using exact exchange-correlation potentials to improve the exchange-correlation functional description in DFT.

Bio: Vikram Gavini is Professor of Mechanical Engineering and Materials Science & Engineering at the University of Michigan. He received his Ph.D. from the California Institute of Technology in 2007. His interests are in developing methods for large-scale and quantum-accurate electronic structure calculations, numerical analysis of PDEs, and scientific computing. DFT-FE, a massively parallel open-source code for large-scale real-space DFT calculations, has been developed in his group. He is the recipient of the NSF CAREER Award in 2011, the AFOSR Young Investigator Award in 2013, the Humboldt Research Fellowship for Experienced Researchers (2012-14), and the USACM Gallagher Award in 2015, among others. He led the team that received the 2023 ACM Gordon Bell Prize in high performance computing.

May 2, 2024 - Yannis Kevrekidis, Johns Hopkins University

Bloomberg Distinguished Professor
Applied Mathematics and Statistics, Chemical and Biomolecular Engineering & the Medical School
Johns Hopkins University

also

Pomeroy and Betty Perry Smith Professor in Engineering, Emeritus; Professor of Chemical and Biological Engineering, and of Applied and Computational Mathematics, Emeritus
Princeton University

No Equations, No Variables, No Space and No Time: Data and the Modeling of Complex Systems 

9 am (refreshments at 9 am, seminar begins at 9:30 am)

MCB 101

Abstract: I will give an overview of a research path in data-driven modeling of complex systems over the last 30 or so years – from the early days of shallow neural networks and autoencoders for nonlinear dynamical system identification, to the more recent derivation of data-driven “emergent” spaces in which to better learn generative PDE laws. In all illustrations presented, I will try to point out connections between the “traditional” numerical analysis we know and love and the more modern data-driven tools and techniques we now have – and some mathematical questions they will hopefully make it possible for us to answer.


Bio: Yannis Kevrekidis studied Chemical Engineering at the National Technical University in Athens. He then followed the steps of many alumni of that department to the University of Minnesota, where he studied with Rutherford Aris and Lanny Schmidt (as well as Don Aronson and Dick McGehee in Math). He was a Director's Fellow at the Center for Nonlinear Studies in Los Alamos in 1985-86 (when Soviets still existed and research funds were plentiful). He then had the good fortune of joining the faculty at Princeton, where he taught Chemical Engineering and also Applied and Computational Mathematics for 31 years; seven years ago he became Emeritus and started fresh at Johns Hopkins (where he somehow is also Professor of Urology). His work has always had to do with nonlinear dynamics (from instabilities and bifurcation algorithms to spatiotemporal patterns to data science in the 90s, nonlinear identification, multiscale modeling, and back to data science/ML), and he has had the additional good fortune to work with several truly talented experimentalists, like G. Ertl's group in Berlin. Currently, on leave from Hopkins, he works with the Defense Sciences Office at DARPA. When young and promising he was a Packard Fellow, a Presidential Young Investigator, and the Ulam Scholar at Los Alamos National Laboratory. He holds the Colburn, CAST Wilhelm, and Walker awards of the AIChE and the Crawford and Reid prizes of SIAM, and he is a member of the NAE, the American Academy of Arts and Sciences, and the Academy of Athens.

May 7, 2024 - Vahid Tarokh, Duke University

The Rhodes Family Professor of Electrical and Computer Engineering

Bass Connections Endowed Professor

Professor of Mathematics

Duke University

Representation Learning, Prediction, and Sampling of Extreme Events

9:00 am (refreshments at 9 am, seminar begins at 9:30 am)

RTH 526

Abstract: Understanding the joint distribution of simultaneous extremes in the multi-dimensional setting may be important in various disciplines such as medicine, environmental science, engineering, and finance. For example, how are extreme weather patterns related across geographical areas, or how do extremes of different financial instruments relate? However, extreme events are by definition rare, and the traditional tools of statistical analysis often fail to apply in this regime. In this talk, we discuss some of our recent contributions [1,2,3,4,5] to the development of efficient computational solutions, including various novel neural network architectures, for the modeling, sampling, and inference of high-dimensional extreme value distributions.

[1] Yang, H., Hasan, A., Ng, Y., and Tarokh, V., “Neural McKean-Vlasov Processes: Distributional Dependence in Diffusion Processes,” 27th International Conference on Artificial Intelligence and Statistics (AISTATS), May 2024.

[2] Hasan, A., Ng, Y., Blanchet, J., and Tarokh, V., “Representation Learning for Extremes,” Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS) Workshop on Heavy Tails in Machine Learning, Dec. 2023.

[3] Ng, Y., Hasan, A., and Tarokh, V., “Inference and Sampling for Archimax Copulas,” Conference on Neural Information Processing Systems (NeurIPS), Dec. 2022.

[4] Hasan, A., Elkhalil, K., Ng, Y., Pereira, J. M., Farsiu, S., Blanchet, J., and Tarokh, V., “Modeling Extremes with d-max-decreasing Neural Networks,” Conference on Uncertainty in Artificial Intelligence (UAI), Aug. 2022.

[5] Ng, Y., Hasan, A., Elkhalil, K., and Tarokh, V., “Generative Archimedean Copulas,” 37th Conference on Uncertainty in Artificial Intelligence (UAI), July 2021.

Bio: Vahid Tarokh worked at AT&T Labs-Research until August 2000. In September 2000, he joined the Massachusetts Institute of Technology (MIT) as an Associate Professor of Electrical Engineering and Computer Science. In June 2002, he joined Harvard University as a Gordon McKay Professor of Electrical Engineering and Hammond Vinton Hayes Senior Research Fellow. He was named Perkins Professor of Applied Mathematics in 2005. In January 2018, he joined Duke University as the Rhodes Family Professor of Electrical and Computer Engineering, Bass Connections Endowed Professor, and Professor of Computer Science and Mathematics. From January to May 2018, he was a Gordon Moore Distinguished Scholar at the California Institute of Technology (Caltech). From January 2019 to December 2022, he served as a Microsoft Data Science Investigator at Duke University.

Published on April 11th, 2024

Last updated on April 15th, 2024