Ali Ramezani-Kebrya

Phone +47 22852433
Room 4459
Visiting address Gaustadalléen 23B Ole-Johan Dahls hus 0373 Oslo
Postal address Postboks 1080 Blindern 0316 Oslo

Details on my personal homepage.

I am an Associate Professor (with tenure) in the Department of Informatics at the University of Oslo (UiO), a Principal Investigator at the Norwegian Center for Knowledge-driven Machine Learning (Integreat) and the SFI Visual Intelligence, and a member of the European Laboratory for Learning and Intelligent Systems (ELLIS) Society. I serve as an Area Chair of NeurIPS.

Before joining UiO, I was a Senior Scientific Collaborator at EPFL, working with Prof. Volkan Cevher in the Laboratory for Information and Inference Systems (LIONS). Before joining LIONS, I was an NSERC Postdoctoral Fellow at the Vector Institute in Toronto, working with Prof. Daniel M. Roy. I received my Ph.D. from the University of Toronto, where I was very fortunate to be advised by Prof. Ben Liang and Prof. Min Dong.

My current research focuses on understanding how the input data distribution is encoded within the layers of popular neural networks, and on developing theoretical concepts and practical tools to minimize the statistical risk under resource constraints and in realistic settings, i.e., under statistical and system characteristics that depart from an ideal learning setting. I am interested in a broad range of applications, including but not limited to neuroscience, autonomous navigation and driving, designing next-generation networks, and medical data.

Selected Publications

We have shown that it is crucial to strike an appropriate balance between the optimization error associated with the empirical risk and the generalization error when accelerating SGD with momentum, and we established generalization error bounds and explicit convergence rates for SGD with momentum under a broad range of hyperparameters, including a general step-size rule. For smooth Lipschitz loss functions, we analyze SGD with early momentum (SGDEM) under a broad range of step-sizes and show that it can train machine learning models for multiple epochs with a generalization guarantee.

Ali Ramezani-Kebrya, Kimon Antonakopoulos, Volkan Cevher, Ashish Khisti, and Ben Liang, On the Generalization of Stochastic Gradient Descent with Momentum, Journal of Machine Learning Research, vol. 25, pp. 1-56, Jan. 2024.
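A minimal sketch of the early-momentum idea, assuming a generic stochastic gradient oracle; the momentum parameter, step size, and switch point below are illustrative and not the exact schedule analyzed in the paper:

    import numpy as np

    def sgdem(stochastic_grad, w0, lr=0.01, beta=0.9,
              momentum_epochs=5, total_epochs=20, steps_per_epoch=100):
        """SGD with early momentum (SGDEM): apply heavy-ball momentum only
        during an initial phase, then fall back to plain SGD."""
        w, v = w0.copy(), np.zeros_like(w0)
        for epoch in range(total_epochs):
            use_momentum = epoch < momentum_epochs   # momentum only in the early phase
            for _ in range(steps_per_epoch):
                g = stochastic_grad(w)               # stochastic gradient at the current iterate
                v = beta * v + g if use_momentum else g
                w = w - lr * v
        return w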


Even for a single client, the distribution shift between training and test data, i.e., intra-client distribution shift, has been a major challenge for decades. For instance, in a local hospital with scarce disease data, the training and test distributions can differ. We focus on the overall generalization performance across multiple clients, modify classical ERM to obtain an unbiased estimate of the overall true risk minimizer under intra-client and inter-client covariate shifts, develop an efficient density ratio estimation method under the stringent privacy requirements of federated learning, and show that importance-weighted ERM achieves a smaller generalization error than classical ERM.

Ali Ramezani-Kebrya*, Fanghui Liu*, Thomas Pethick*, Grigorios Chrysos, and Volkan Cevher, Federated Learning under Covariate Shifts with Generalization Guarantees, Transactions on Machine Learning Research, June 2023.
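A minimal sketch of the importance-weighting step, assuming per-sample losses and density-ratio estimates p_test(x)/p_train(x) are already available; how to estimate these ratios under federated privacy constraints is part of the paper's contribution and is omitted here:

    import numpy as np

    def importance_weighted_risk(losses, density_ratios):
        """Importance-weighted empirical risk: reweight each sample's loss by an
        estimate of p_test(x) / p_train(x) so that, under covariate shift, the
        average is an unbiased estimate of the risk on the test distribution."""
        return np.mean(density_ratios * losses)

    # toy usage with hypothetical per-sample losses and estimated ratios
    losses = np.array([0.3, 1.2, 0.7])
    ratios = np.array([0.8, 1.5, 1.0])
    print(importance_weighted_risk(losses, ratios))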


Beyond supervised learning, we accelerate large-scale monotone variational inequality problems, with applications such as training GANs in distributed settings. We propose the quantized generalized extra-gradient (Q-GenX) family of algorithms, which achieves the optimal rate of convergence and yields noticeable speedups when training GANs on multiple GPUs without performance degradation.

Ali Ramezani-Kebrya*, Kimon Antonakopoulos*, Igor Krawczuk*, Justin Deschenaux*, and Volkan Cevher, Distributed Extra-gradient with Optimal Complexity and Communication Guarantees, ICLR 2023.
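For intuition, a single un-quantized, single-worker extra-gradient step for a monotone operator F is sketched below; the quantization and distributed communication that Q-GenX adds are omitted:

    def extragradient_step(F, x, step):
        """One extra-gradient update: take an exploratory step, then update
        using the operator evaluated at the extrapolated point."""
        x_half = x - step * F(x)        # exploration / extrapolation step
        return x - step * F(x_half)     # update with the operator at the midpoint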


ML models are vulnerable to various attacks at training and test time, including data/model poisoning and adversarial examples. We introduce MixTailor, a scheme based on randomizing the aggregation strategy, which makes it impossible for the attacker to be fully informed and increases the computational complexity of designing tailored attacks for an informed adversary.

Ali Ramezani-Kebrya*, Iman Tabrizian*, Fartash Faghri, and Petar Popovski, MixTailor: Mixed Gradient Aggregation for Robust Learning Against Tailored Attacks, Transactions on Machine Learning Research, Oct. 2022.
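A minimal sketch of the randomization idea, with an illustrative pool of aggregation rules (mean, coordinate-wise median, trimmed mean); the actual pool and sampling scheme in the paper may differ:

    import numpy as np

    def mixtailor_aggregate(worker_grads, rng=np.random.default_rng()):
        """Randomized gradient aggregation in the spirit of MixTailor: sample an
        aggregation rule at each step so an adversary cannot tailor its attack
        to a single, fixed rule."""
        grads = np.stack(worker_grads)                  # shape: (num_workers, dim)
        rule = rng.choice(["mean", "median", "trimmed_mean"])
        if rule == "mean":
            return grads.mean(axis=0)
        if rule == "median":
            return np.median(grads, axis=0)
        k = max(1, grads.shape[0] // 5)                 # trim the k most extreme values per coordinate
        return np.sort(grads, axis=0)[k:-k].mean(axis=0)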


Overparameterization refers to the important phenomenon where the width of a neural network is chosen such that learning algorithms can provably attain zero loss in nonconvex training. In Subquadratic Overparameterization for Shallow Neural Networks, we establish the best known bounds on the number of parameters sufficient for gradient descent to converge to a global minimum at a linear rate with probability approaching one.

Chaehwan Song*, Ali Ramezani-Kebrya*, Thomas Pethick, Armin Eftekhari, and Volkan Cevher, Subquadratic Overparameterization for Shallow Neural Networks, NeurIPS 2021.


In training deep models over multiple GPUs, the communication time required to share huge stochastic gradients is the main performance bottleneck. We closed the gap between theory and practice of unbiased gradient compression: NUQSGD is currently the method offering the highest communication compression while still converging under regular (uncompressed) hyperparameter values.

Ali Ramezani-Kebrya, Fartash Faghri, Ilya Markov, Vitalii Aksenov, Dan Alistarh, and Daniel M. Roy, NUQSGD: Provably Communication-Efficient Data-Parallel SGD via Nonuniform Quantization, Journal of Machine Learning Research, vol. 22, pp. 1-43, Apr. 2021.
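A simplified sketch of unbiased stochastic quantization with nonuniformly (exponentially) spaced levels, the idea behind NUQSGD; the exact level placement, encoding, and bit packing from the paper are omitted:

    import numpy as np

    def nonuniform_quantize(g, num_levels=4, rng=np.random.default_rng()):
        """Unbiased stochastic quantization with exponentially spaced levels.
        Each coordinate is normalized by the gradient norm and randomly rounded
        to one of its two neighbouring levels so the quantizer stays unbiased."""
        norm = np.linalg.norm(g)
        if norm == 0.0:
            return g
        levels = np.array([0.0] + [2.0 ** (-i) for i in range(num_levels - 1, -1, -1)])
        r = np.abs(g) / norm                                 # normalized magnitudes in [0, 1]
        idx = np.clip(np.searchsorted(levels, r, side="right") - 1, 0, len(levels) - 2)
        lo, hi = levels[idx], levels[idx + 1]
        p = (r - lo) / (hi - lo)                             # probability of rounding up
        q = np.where(rng.random(g.shape) < p, hi, lo)
        return np.sign(g) * norm * q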


Communication-efficient variants of SGD are often heuristic and fixed over the course of training. In Adaptive Gradient Quantization for Data-Parallel SGD, we empirically observe that the statistics of the gradients of deep models change during training, and we introduce two adaptive quantization schemes. We improve validation accuracy by almost 2% on CIFAR-10 and 1% on ImageNet in challenging low-cost communication setups.

Fartash Faghri*, Iman Tabrizian*, Ilya Markov, Dan Alistarh, Daniel M. Roy, and Ali Ramezani-Kebrya, Adaptive Gradient Quantization for Data-Parallel SGD, NeurIPS 2020.
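A rough sketch of the adaptation idea: periodically re-fit the quantization levels to recently observed gradient statistics. The quantile heuristic below is only a stand-in for the variance-minimizing level updates used in the paper:

    import numpy as np

    def adapt_levels(levels, recent_grads, lr=0.1):
        """Illustrative adaptation step: nudge the sorted quantization levels toward
        quantiles of recently observed normalized gradient magnitudes, so the levels
        track how gradient statistics drift during training."""
        mags = np.concatenate([np.abs(g) / np.linalg.norm(g) for g in recent_grads])
        targets = np.quantile(mags, np.linspace(0.0, 1.0, len(levels)))
        return np.sort((1.0 - lr) * np.asarray(levels) + lr * targets)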

Students

  • Supervision at the University of Oslo
    • Zhiyuan Wu, Ph.D. in progress, University of Oslo.
    • Amir Arfan, Ph.D. in progress, University of Oslo.
    • Chau Thi Thuy Tran, Ph.D. in progress, University of Oslo.
  • Co-supervision at the University of Toronto and EPFL
    • Anh Duc Nguyen, undergraduate intern, EPFL.
    • Thomas Michaelsen Pethick, Ph.D. in progress, EPFL.
    • Igor Krawczuk, Ph.D. in progress, EPFL.
    • Fabian Latorre, Ph.D. in progress, EPFL.
    • Xiangcheng Cao, master in progress, EPFL.
    • Seydou Fadel Mamar, master in progress, EPFL.
    • Mohammadamin Sharifi, summer intern, EPFL.
    • Wanyun Xie, MSc KTH, first job after graduation: Ph.D. at EPFL.
    • Fartash Faghri, Ph.D. UoT, first job after graduation: research scientist at Apple.
    • Iman Tabrizian, MASc. UoT, first job after graduation: full-time engineer at NVIDIA.

Major Collaborators

Tags: Machine Learning, Deep Learning, Neural Networks, Artificial Intelligence, Distributed Systems, Security

Projects