Research

My group develops tools that enable intelligent systems to […] on information.

Currently, the main drive of my research is developing the theory, algorithms, and applications of constrained learning, a tool that enables the data-driven design of systems that satisfy requirements such as robustness [4, 5, 6, 7], fairness [4, 7], safety [18, 19, 20], smoothness [21], and invariance [22]. On the one hand, I investigate fundamental questions such as whether constrained learning is feasible and how hard it is compared to unconstrained learning.

On the other hand, I am interested in the impact constrained learning can have on traditional learning tasks, such as image classification [4, 5, 6, 7, 22], semi-supervised learning [21], and data-driven control [18, 19, 20].
Most importantly, I think of constrained learning as a new mindset for the design of data-driven solutions, shifting away from the current objective-centric paradigm toward a constraint-driven one.

You can read more about this as well as other current and past projects below. If anything piques your interest, reach out to me by email or check out the prospective members page.

Current projects

Past projects

 


Current projects

Constrained learning theory


Constrained learning uses the language of constrained optimization to tackle learning tasks that involve statistical requirements. Its strength lies in combining the data-driven, model-free nature of learning with the expressiveness of constrained programming. Doing so, however, also combines the statistical challenges of the former with the computational complexity of the latter. Hence, it is natural to wonder whether constrained learning is feasible at all and whether solving multiple learning problems simultaneously compounds their complexity. These are the types of questions that concern constrained learning theory. We now know that constrained learning can solve problems that unconstrained learning cannot while often being essentially as hard. In fact, it is usually possible to learn under constraints by solving only a sequence of unconstrained problems. These results have already enabled many applications and opened up new theoretical questions on the limits of this new learning task.
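As a toy illustration of this primal-dual idea, the sketch below fits a logistic classifier while requiring the loss on one subgroup to stay below a threshold, alternating gradient steps on the Lagrangian with dual ascent on the constraint slack. The data, threshold, and step sizes are all hypothetical; this is a minimal sketch of the mechanism, not the algorithms analyzed in the references.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: group 1 is "harder" (noisier labels) than group 0
n = 400
X = rng.normal(size=(n, 2))
group = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * group * rng.normal(size=n) > 0).astype(float)

def loss(w, Xb, yb):
    """Mean logistic loss."""
    z = Xb @ w
    return np.mean(np.log1p(np.exp(-z * (2 * yb - 1))))

def grad(w, Xb, yb):
    """Gradient of the mean logistic loss."""
    s = 2 * yb - 1
    z = Xb @ w
    return -(Xb * (s / (1 + np.exp(s * z)))[:, None]).mean(axis=0)

c = 0.55                 # requirement: loss on group 1 must not exceed c
lam, w = 0.0, np.zeros(2)
for _ in range(2000):
    # Primal step: gradient on the Lagrangian  loss + lam * (loss_1 - c)
    g = grad(w, X, y) + lam * grad(w, X[group == 1], y[group == 1])
    w -= 0.1 * g
    # Dual step: ascent on the constraint slack (projected to lam >= 0)
    lam = max(0.0, lam + 0.01 * (loss(w, X[group == 1], y[group == 1]) - c))
```

Each dual variable ends up pricing how much its requirement costs the objective, which is what turns the constrained problem into a sequence of unconstrained ones.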

  • I. Hounie, A. Ribeiro, and L. F. O. Chamon. Resilient constrained learning. In Conference on Neural Information Processing Systems (NeurIPS). 2023.
    [ arXiv ]
  • L. F. O. Chamon, S. Paternain, M. Calvo-Fullana, and A. Ribeiro. Constrained learning with non-convex losses. IEEE Trans. on Inf. Theory, 69[3]:1739–1760, 2023.
    [ arXiv ]
  • L. F. O. Chamon and A. Ribeiro. Probably approximately correct constrained learning. In Conference on Neural Information Processing Systems (NeurIPS). 2020.
    [ arXiv ] [ Poster ]
  • L. F. O. Chamon, S. Paternain, M. Calvo-Fullana, and A. Ribeiro. The empirical duality gap of constrained statistical learning. In IEEE International Conference in Acoustic, Speech, and Signal Processing (ICASSP). 2020. (Best student paper award)
    [ arXiv ] [ YouTube ] [ Slides ]
  • M. Eisen, C. Zhang, L. F. O. Chamon, D. D. Lee, and A. Ribeiro. Learning optimal resource allocations in wireless systems. IEEE Trans. on Signal Process., 67[10]:2775–2790, 2019. (Top 50 most accessed articles in IEEE TSP: May, July, Sept, Oct 2019)
    [ arXiv ]

 

 

Semi-infinite constrained learning


Statistical requirements lie on a spectrum between "in expectation" and "almost surely." On the former end, learning is performed using empirical averages over the available data. This is the case, for example, in wireless resource allocation, safe reinforcement learning, and certain definitions of fairness. Semi-infinite constrained learning is primarily concerned with problems on the other end of the spectrum, e.g., those involving min-max properties such as robustness and invariance. For these almost sure requirements to hold, however, an infinite number of constraints must be satisfied for each data point. Combining duality with hybrid stochastic optimization–MCMC sampling algorithms yields a new approach to tackling these seemingly intractable problems. These developments have led to new theoretical questions and applications, such as smooth learning and probabilistic robustness, a property that lies strictly in the interior of the expectation–almost-sure spectrum.
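A minimal sketch of this idea: the semi-infinite robustness constraint is approximated by drawing a few random perturbations per sample and keeping the worst one (a crude stand-in for the MCMC-based samplers discussed above), which is then handled by dual ascent. All data, the hinge loss, and hyperparameters are illustrative assumptions, not the method from the papers.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, eps = 200, 5, 0.3                 # samples, features, inf-norm radius
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true)                 # separable synthetic labels

def hinge(w, Xb, yb):
    """Per-sample hinge losses."""
    return np.maximum(0.0, 1.0 - yb * (Xb @ w))

lam, w, c = 0.0, np.zeros(d), 0.4       # require robust hinge loss <= c
for _ in range(500):
    # Approximate the worst perturbation per sample: draw 8 candidates
    # in the eps-ball and keep the one with the largest loss.
    delta = rng.uniform(-eps, eps, size=(8, n, d))
    losses = np.stack([hinge(w, X + dlt, y) for dlt in delta])   # (8, n)
    worst = delta[losses.argmax(axis=0), np.arange(n)]           # (n, d)
    Xw = X + worst

    def g(Xb):
        """Hinge subgradient (averaged over the batch)."""
        active = (hinge(w, Xb, y) > 0).astype(float)
        return -(Xb * (active * y)[:, None]).mean(axis=0)

    # Primal step on the Lagrangian, then dual ascent on the slack
    w -= 0.05 * (g(X) + lam * g(Xw))
    lam = max(0.0, lam + 0.05 * (hinge(w, Xw, y).mean() - c))
```

In the actual semi-infinite setting the sampled perturbations play the role of dual variables over the constraint set, which is why smarter (MCMC) sampling matters.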

  • J. Cervino, L. F. O. Chamon, B. D. Haeffele, R. Vidal, and A. Ribeiro. Learning globally smooth functions on manifolds. In International Conference on Machine Learning (ICML). 2023.
    [ arXiv ]
  • I. Hounie, L. F. O. Chamon, and A. Ribeiro. Automatic data augmentation via invariance-constrained learning. In International Conference on Machine Learning (ICML). 2023.
    [ arXiv ]
  • A. Robey, L. F. O. Chamon, G. J. Pappas, and H. Hassani. Probabilistically robust learning: Balancing average- and worst-case performance. In International Conference on Machine Learning (ICML). 2022. (spotlight)
    [ arXiv ]
  • A. Robey*, L. F. O. Chamon*, G. J. Pappas, H. Hassani, and A. Ribeiro. Adversarial robustness with semi-infinite constrained learning. In Conference on Neural Information Processing Systems (NeurIPS). 2021. (* equal contribution).
    [ arXiv ]
  • L. F. O. Chamon and A. Ribeiro. Probably approximately correct constrained learning. In Conference on Neural Information Processing Systems (NeurIPS). 2020.
    [ arXiv ] [ Poster ]

 

 

Constrained reinforcement learning


Constrained reinforcement learning (CRL) tackles constrained learning tasks arising in sequential decision making. These interactive, dynamic settings lead to more subtle behaviors than their supervised learning counterparts. While we can show that CRL is essentially as hard as vanilla RL, they turn out to be considerably different problems. In fact, there are very simple CRL tasks that cannot be solved using unconstrained RL (regardless of the choice of reward). This means that typical primal-dual algorithms will also fail. Using a systematic state augmentation, however, we can obtain a procedure that yields optimal, feasible trajectories without the need for randomization. These developments have led to new challenges regarding multitask/multiobjective RL, the computational complexity of CRL training, and the development of RL methods capable of handling non-stationary scenarios.
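The effect of state augmentation can be seen in a toy constrained decision problem. For any fixed multiplier, the greedy policy on the Lagrangian reward is deterministic and either always or never pays the cost; but running the dual dynamics online, so that the action depends on the current multiplier, alternates actions and meets the cost budget on average. The one-state setup below is a hypothetical illustration, not an example from the papers.

```python
import numpy as np

# One-state toy problem: action 1 yields reward 1 but incurs cost 1;
# action 0 yields nothing. Requirement: long-run average cost <= 0.5.
budget, eta, lam = 0.5, 0.05, 0.0
rewards, costs = [], []
for t in range(4000):
    # Greedy action on the Lagrangian reward r - lam * c
    a = 1 if 1.0 - lam * 1.0 > 0 else 0
    r, c = (1.0, 1.0) if a == 1 else (0.0, 0.0)
    rewards.append(r)
    costs.append(c)
    # Online dual update: lam is effectively part of the (augmented) state
    lam = max(0.0, lam + eta * (c - budget))

print(np.mean(costs), np.mean(rewards))
```

The multiplier oscillates around its critical value, making the deterministic policy alternate so that the trajectory itself is feasible, which is the behavior the state-augmentation results formalize.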

  • M. Calvo-Fullana, S. Paternain, L. F. O. Chamon, and A. Ribeiro. State augmented constrained reinforcement learning: Overcoming the limitations of learning with rewards. IEEE Trans. on Autom. Control., 69[7]:4275–4290, 2024.
    [ arXiv ]
  • S. Paternain, M. Calvo-Fullana, L. F. O. Chamon, and A. Ribeiro. Safe policies for reinforcement learning via primal-dual methods. IEEE Trans. on Autom. Control., 68[3]:1321–1336, 2023.
    [ arXiv ]
  • S. Paternain, L. F. O. Chamon, M. Calvo-Fullana, and A. Ribeiro. Constrained reinforcement learning has zero duality gap. In Conference on Neural Information Processing Systems (NeurIPS), 7555–7565. 2019.
    [ arXiv ] [ Poster ]

 

 

Graph(on) neural networks


Massive amounts of data in our increasingly interconnected world only make sense in the context of the networks from which they arise, be they social networks, power grids, IoT devices, or Industry 4.0. Graph signal processing (GSP) and graph neural networks (GNNs) grew out of the need to extract information from those network (graph) signals. These techniques, however, are difficult to scale, hindering their use for large networks that can only be partially observed or in non-stationary, dynamic settings. Yet, it seems reasonable that if two graphs are "similar," then their graph Fourier transforms, graph filters, and GNNs should also be similar. Formalizing this intuition is one of the motivations for developing the theory of graphon signal processing. In fact, it has been used to show that GNNs are transferable between graphs, i.e., that they can be trained on subgraphs and then deployed on the full network. These results raise fundamental questions on the limits of this transferability as well as on what the right graph similarity metric is to characterize it.
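The transferability intuition can be checked numerically: sample two graphs of very different sizes from the same graphon, apply a graph filter with identical coefficients on both, and compare the outputs at matched positions. The graphon, signal, and filter coefficients below are illustrative assumptions for the sketch, not quantities from the papers.

```python
import numpy as np

def graphon(u, v):
    # A smooth illustrative graphon on [0, 1]^2 (an assumption)
    return 0.6 * np.exp(-2.0 * np.abs(u[:, None] - v[None, :]))

def filter_output(n, h):
    """Induce an n-node graph from the graphon and apply the filter h."""
    u = (np.arange(n) + 0.5) / n          # regular grid sample of [0, 1]
    S = graphon(u, u) / n                 # graphon-induced shift operator
    x = np.cos(2 * np.pi * u)             # graph signal sampled from cos
    y, Skx = np.zeros(n), x.copy()
    for hk in h:                          # y = sum_k h_k S^k x
        y += hk * Skx
        Skx = S @ Skx
    return u, y

h = [0.5, 1.0, -0.5]                      # same coefficients on both graphs
u1, y1 = filter_output(60, h)
u2, y2 = filter_output(600, h)
# Compare outputs at matched positions (interpolating the larger graph)
err = np.max(np.abs(y1 - np.interp(u1, u2, y2)))
```

The discrepancy `err` stays small even though one graph is ten times larger, which is the phenomenon the transferability theorems bound.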

  • L. Ruiz, L. F. O. Chamon, and A. Ribeiro. Transferability properties of graph neural networks. IEEE Trans. on Signal Process., 71:3474–3489, 2023.
    [ arXiv ] [ YouTube ]
  • L. Ruiz, L. F. O. Chamon, and A. Ribeiro. Graphon signal processing. IEEE Trans. on Signal Process., 69:4961–4976, 2021.
    [ arXiv ]
  • L. Ruiz, L. F. O. Chamon, and A. Ribeiro. Graphon neural networks and the transferability of graph neural networks. In Conference on Neural Information Processing Systems (NeurIPS). 2020.
    [ arXiv ] [ YouTube ]

 

 

Non-convex functional optimization


Until 60 years ago, the tractability boundary in optimization separated linear from nonlinear programs. Advances in convex analysis and barrier methods have since made it commonplace to hear "convex" used as a synonym for "tractable" and "non-convex" as a synonym for "intractable." Reality is naturally more subtle. In fact, there are both computationally challenging convex programs (e.g., large-scale semidefinite programming) and tractable non-convex ones (e.g., low-rank approximation). I am particularly interested in a case of the latter that arises from the observation that problems known to be intractable in finite dimensions often become tractable in infinite dimensions. For instance, sparse regression is NP-hard in finite dimensions, whereas its functional form is tractable. This observation removes the need for convex relaxations when tackling off-the-grid compressive sensing problems and makes it possible to fit complex nonlinear models (from multi-resolution kernels to GPs), giving rise to new statistical questions. It is also instrumental in the development of constrained learning.

  • L. F. O. Chamon, Y. C. Eldar, and A. Ribeiro. Functional nonlinear sparse models. IEEE Trans. on Signal Process., 68[1]:2449–2463, 2020.
    [ arXiv ]
  • M. Peifer, L. F. O. Chamon, S. Paternain, and A. Ribeiro. Sparse multiresolution representations with adaptive kernels. IEEE Trans. on Signal Process., 68[1]:2031–2044, 2020.
    [ arXiv ]
  • L. F. O. Chamon, S. Paternain, and A. Ribeiro. Learning Gaussian processes with Bayesian posterior optimization. In Asilomar Conference on Signals, Systems and Computers, 482–486. 2019.
    [ PDF ] [ Slides ]

 


Past projects

Combinatorial optimization and approximate submodularity


When the scale of the underlying system or technological constraints such as computation, power, and communication limit our capabilities, we are forced to choose a subset of the available resources to use. These might be sensors for state estimation, actuators for control, pixels for face recognition, or movie ratings to kick-start a recommender system. These selection problems are notoriously hard (often NP-hard, in fact), so the best we can expect is to find an approximate solution. Greedy search is widely used in this context due to its simplicity, its iterative nature, and the fact that it is near-optimal when the objective has a "diminishing returns" property known as submodularity. Yet, despite the empirical evidence for its effectiveness in the above problems, none of these objectives are submodular. In fact, quadratic costs generally only display "diminishing returns" under stringent conditions. What this research has shown is that while the MSE and the LQR objective are not submodular, they are not far from it, enjoying similar near-optimality properties. While many notions of approximate submodularity had been investigated before, these were the first computable, a priori guarantees for these problems, removing the need for submodular surrogates (e.g., logdet) or convex relaxations.
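To make the setting concrete, here is a minimal sketch of greedy sensor selection for the MSE objective: each candidate sensor is a linear measurement of the state, and greedy search repeatedly adds the sensor that most reduces the estimation error. The measurement model and dimensions are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, k = 4, 12, 5                  # state dim., candidate sensors, budget
H = rng.normal(size=(m, d))         # row i: candidate linear sensor i
P0 = np.eye(d)                      # prior covariance of the state

def mse(S):
    """Estimation MSE (trace of the posterior covariance) after
    observing the sensors in S under unit measurement noise."""
    info = np.linalg.inv(P0) + sum(np.outer(H[i], H[i]) for i in S)
    return np.trace(np.linalg.inv(info))

# Greedy: repeatedly add the sensor yielding the largest MSE reduction
S = []
for _ in range(k):
    best = min((i for i in range(m) if i not in S),
               key=lambda i: mse(S + [i]))
    S.append(best)
```

The approximate-supermodularity results referenced below bound how far such a greedy set can be from the best size-k subset without having to enumerate all of them.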

  • L. F. O. Chamon, A. Amice, and A. Ribeiro. Approximately supermodular scheduling subject to matroid constraints. IEEE Trans. on Autom. Control., 67[3]:1384–1396, 2022.
    [ arXiv ]
  • L. F. O. Chamon, G. J. Pappas, and A. Ribeiro. Approximate supermodularity of Kalman filter sensor selection. IEEE Trans. on Autom. Control., 66[1]:49–63, 2021.
    [ arXiv ]
  • L. F. O. Chamon, A. Amice, and A. Ribeiro. Matroid-constrained approximately supermodular optimization for near-optimal actuator scheduling. In IEEE Control and Decision Conference, 3391–3398. 2019.
    [ PDF ] [ Slides ]
  • L. F. O. Chamon and A. Ribeiro. Greedy sampling of graph signals. IEEE Trans. on Signal Process., 66[1]:34–47, 2018.
    [ arXiv ]
  • L. F. O. Chamon and A. Ribeiro. Approximate supermodularity bounds for experimental design. In Conference on Neural Information Processing Systems (NeurIPS), 5403–5412. 2017.
    [ arXiv ] [ Poster ]
  • L. F. O. Chamon and A. Ribeiro. Near-optimality of greedy set selection in the sampling of graph signals. In IEEE Global Conference on Signal and Information Processing (GlobalSip), 1265–1269. 2016.
    [ PDF ] [ Slides ]

 

 

Combinations of adaptive filters


Like most stochastic optimization algorithms, adaptive filters suffer from trade-offs that can hinder their use in practice. Larger step sizes, for example, lead to faster convergence and better tracking, but also worse steady-state errors. Combinations of adaptive filters were proposed to address such compromises by mixing the output of a fast filter with that of an accurate one and adjusting the mixture depending on which filter is performing best. How these outputs are combined has a large influence on the resulting performance, so this research program set out to determine the best way to combine adaptive filters. To do so, it developed an algebra for describing combinations of adaptive filters using message passing graphs and used it to design and analyze a myriad of combination topologies, tackling a diversity of new applications. In one instance, a combination of simple adaptive filters was used to reduce the complexity of adaptive algorithms by outperforming complex, Newton-type adaptive methods at roughly 30 times lower computational complexity.
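The basic mechanism can be sketched with the classic convex combination of two LMS filters identifying a synthetic system: a fast and an accurate filter run in parallel, and a sigmoid-parametrized mixing weight adapts toward whichever is performing best. Step sizes and the system below are illustrative assumptions, not values from the papers.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 5000, 8
w_o = rng.normal(size=M)                      # unknown system to identify
x = rng.normal(size=N + M)                    # white input
d = np.convolve(x, w_o, mode="valid")[:N] + 0.05 * rng.normal(size=N)

mu1, mu2, mu_a = 0.05, 0.005, 10.0            # fast, accurate, mixing steps
w1, w2, a = np.zeros(M), np.zeros(M), 0.0
err = []
for n in range(N):
    u = x[n:n + M][::-1]                      # current regressor
    y1, y2 = w1 @ u, w2 @ u
    e1, e2 = d[n] - y1, d[n] - y2
    w1 += mu1 * e1 * u                        # fast LMS update
    w2 += mu2 * e2 * u                        # accurate LMS update
    eta = 1.0 / (1.0 + np.exp(-a))            # mixing parameter in (0, 1)
    y = eta * y1 + (1 - eta) * y2
    e = d[n] - y
    a += mu_a * e * (y1 - y2) * eta * (1 - eta)   # adapt the mixture
    a = np.clip(a, -4.0, 4.0)                 # keep the sigmoid responsive
    err.append(e * e)
```

Early on the mixture favors the fast filter; once the accurate one catches up, the weight drifts toward it, combining fast convergence with low steady-state error.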

  • C. G. Lopes, V. H. Nascimento, and L. F. O. Chamon. Distributed universal adaptive networks. IEEE Trans. on Signal Process., 71:1817–1832, 2023.
    [ arXiv ]
  • L. F. O. Chamon and C. G. Lopes. Combination of LMS adaptive filters with coefficients feedback. 2016.
    [ arXiv ]
  • L. F. O. Chamon and C. G. Lopes. There's plenty of room at the bottom: Incremental combinations of sign-error LMS filters. In IEEE International Conference in Acoustic, Speech, and Signal Processing (ICASSP), 7248–7252. 2014.
    [ PDF ] [ Poster ]
  • L. F. O. Chamon and C. G. Lopes. Transient performance of an incremental combination of LMS filters. In European Signal Processing Conference (EUSIPCO), 7298–7302. 2013.
    [ PDF ] [ Poster ]
  • L. F. O. Chamon, H. F. Ferro, and C. G. Lopes. A data reusage algorithm based on incremental combination of LMS filters. In Asilomar Conference on Signals, Systems and Computers, 406–410. 2012.
    [ PDF ] [ Poster ]
  • L. F. O. Chamon, W. B. Lopes, and C. G. Lopes. Combination of adaptive filters with coefficients feedback. In IEEE International Conference in Acoustic, Speech, and Signal Processing (ICASSP), 3785–3788. 2012.
    [ PDF ] [ Poster ]

 

 

Aircraft cabin simulator


Air passenger traffic has grown enormously in the last few decades. The resulting increase in competition has made airlines and aircraft manufacturers aware of the need to find new ways to attract customers, inevitably turning to the comfort factor. Although studies on automotive comfort are abundant, those on aircraft environments are scarce, partly due to the difficulties in running experiments (costs, risks...). To address these issues, a full-sized aircraft cabin simulator capable of controlling variables such as sound, vibration, temperature, air flow, pressure, and lighting was built at the University of São Paulo in collaboration with EMBRAER.

I co-designed and built the vibro-acoustic reproduction system, composed of more than 20 loudspeakers and 30 shakers. Using new MIMO equalization methods [23], we were able to precisely simulate the noise patterns of dozens of aircraft, including take-off and landing. From the control room, it is possible to monitor the environment inside the simulator using microphones and accelerometers installed on each seat. After half a decade of work, this project culminated in more than 60 simulated flights involving over 1000 people. I was responsible for the statistical analysis of the results to understand the interplay between these variables and passenger comfort. This simulator is still being used in collaborations between the University of São Paulo and the aeronautics industry.
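The SVD-based decoupling behind the MIMO equalization can be sketched at a single frequency: factor the actuator-to-sensor transfer matrix and invert it through its singular vectors, so each control point can be driven independently. The matrix below is a random illustrative stand-in, not the simulator's measured responses; in practice, small singular values would be regularized before inversion.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical 3x3 complex transfer matrix from actuators to control
# points at one frequency (illustrative only)
H = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

U, s, Vh = np.linalg.svd(H)
E = Vh.conj().T @ np.diag(1.0 / s) @ U.conj().T   # SVD-based equalizer

target = np.array([1.0, 0.0, 0.0])   # desired response at control points
drive = E @ target                    # decoupled actuator drive signals
print(np.allclose(H @ drive, target))  # True: channels are decoupled
```

Working through the singular vectors rather than a direct matrix inverse makes it easy to monitor conditioning channel by channel.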

  • R. F. Bittencourt, L. F. O. Chamon, S. Futatsugui, J. I. Yanagihara, and S. N. Y. Gerges. Preliminary results on the modeling of aircraft vibroacoustic comfort. In INTERNOISE. 2012.
    [ PDF ] [ Slides ]
  • L. F. O. Chamon, G. S. Quiqueto, S. R. Bistafa, and V. H. Nascimento. An SVD-based MIMO equalizer applied to the auralization of aircraft noise in a cabin simulator. In 18th International Congress on Sound and Vibration (ICSV). 2011.
    [ PDF ] [ Slides ]
  • L. F. O. Chamon, G. S. Quiqueto, and S. R. Bistafa. The application of the Singular Value Decomposition for the decoupling of the vibratory reproduction system of an aircraft cabin simulator. In II SAE Brazil International Noise and Vibration Congress. 2010.
    [ PDF ]

 
