Antonia M. Tulino

Talk title: Coding for Caching in 5G Networks.

Abstract: The abstract will be provided soon.

Bio:

Antonia M. Tulino has held research positions at Princeton University; at the Center for Wireless Communications, Oulu, Finland; and at the Università degli Studi del Sannio, Benevento, Italy. In 2002 she joined the faculty of the Università degli Studi di Napoli “Federico II”, and in 2009 she joined Bell Labs.

Dr. Tulino has contributed extensively to the information-theoretic understanding of the potential and ultimate limitations of MIMO systems, with applications in multiantenna systems, spread spectrum, and multiuser detection. An example of her outstanding original contributions with wide impact is mercury/waterfilling (the contribution that won the 2009 Stephen O. Rice Prize), which facilitates the optimization of resources under practical constraints on signaling.

One of her prominent research areas has also been the development and application of random matrix theory, free probability, and related subjects to wireless communications, where she has produced several fundamental results. Using random matrix tools, Dr. Tulino has solved several open problems in the capacity of fading channels as well as in the fundamental limits of compressed sensing technology. In 2004, she co-authored with Sergio Verdú the monograph “Random Matrix Theory and Wireless Communications,” which gives a comprehensive account of the asymptotic theory of the spectral distribution of random matrices and its applications to information theory and signal processing.

Currently, in the context of the Bell Labs Network Energy program, her activities focus on the understanding of the fundamental efficiency limits of the future networked cloud and its practical implications for efficient cloud service delivery.

While for point-to-point communications the fundamental limits are by now well known and approachable with current technology, this is far from the case in the network environment, and even more so in a networked cloud environment.

Understanding and pushing the network to operate close to these fundamental limits is becoming more and more urgent, especially as the network evolves towards a massive compute platform that has to accommodate (and process) everybody’s content. While today capacity growth is happening via resource parallelization, this growth will be unsustainable unless we understand the fundamental efficiency limits and use them as engineering design drivers.