Fast SR-UNet
Architecture and GAN-based training procedure for obtaining a fast neural network that enables better bitrate performance with respect to the H.265 codec at the same quality, or better quality at the same bitrate.
d-Simplex classifiers achieve stationary, compatible features, enabling seamless model updates in retrieval systems.
A novel dataset of dynamic sequences of 3D face models, where a combination of synthetic and real identities exhibit an unprecedented variety of 4D facial expressions, with variations that include the classical neutral-apex transition, but generalize to e...
Using stationary representations, CoReS trains models to obtain compatible representations, eliminating costly re-indexing in retrieval systems during upgrades.
Mitigating forgetting in continual representation learning using contrastive supervised distillation.
Using fixed classifiers derived from regular polytopes to enhance neural network efficiency and accuracy by generating stationary, maximally-separated feature representations.
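A fixed simplex classifier can be sketched in a few lines: the class prototypes are the vertices of a regular simplex (standard basis vectors recentred on their centroid), frozen during training so only the feature extractor learns. This is a minimal illustration of the idea, not the asset's actual code.

```python
import numpy as np

def d_simplex_weights(num_classes: int) -> np.ndarray:
    """Rows are vertices of a regular simplex embedded in R^{num_classes}.

    All pairwise cosine similarities equal -1/(num_classes - 1), so the
    class prototypes are stationary and maximally separated.
    """
    identity = np.eye(num_classes)
    centroid = np.full((num_classes, num_classes), 1.0 / num_classes)
    W = identity - centroid                         # recentre basis vectors
    W /= np.linalg.norm(W, axis=1, keepdims=True)   # unit-norm prototypes
    return W

# The weight matrix is frozen; only the feature extractor is trained.
W = d_simplex_weights(5)
features = np.random.randn(4, 5)   # batch of 4 feature vectors
logits = features @ W.T            # class scores against fixed prototypes
```

Because the classifier never moves, features learned at different times stay aligned to the same prototypes, which is what makes the representations compatible across model updates.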
Elastic Feature Consolidation (EFC) exploits a tractable second-order approximation of feature drift based on an Empirical Feature Matrix (EFM).
We propose a more realistic, physics-based color data augmentation, which we call Planckian Jitter.
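The core idea can be illustrated by re-illuminating an image with a black-body source: sample a color temperature, evaluate Planck's law at rough per-channel wavelengths, and scale the RGB channels by the resulting gains. This is a coarse sketch; the actual method derives illuminants from the full CIE pipeline rather than three fixed wavelengths.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

def planck(wavelength_m: float, temp_k: float) -> float:
    """Black-body spectral radiance at one wavelength (Planck's law)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / np.expm1(H * C / (wavelength_m * KB * temp_k))

def planckian_jitter(img: np.ndarray, temp_k: float) -> np.ndarray:
    """Re-illuminate an RGB image in [0, 1] with a black body at temp_k.

    The 610/550/465 nm channel wavelengths are a crude RGB stand-in
    (an assumption of this sketch, not the paper's calibration).
    """
    wavelengths = np.array([610e-9, 550e-9, 465e-9])
    gains = np.array([planck(w, temp_k) for w in wavelengths])
    gains /= gains[1]                    # normalize to the green channel
    return np.clip(img * gains, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))
warm = planckian_jitter(img, 3000.0)   # low temperature: reddish cast
cool = planckian_jitter(img, 9000.0)   # high temperature: bluish cast
```

Varying the temperature during training exposes the model to realistic illumination shifts instead of arbitrary hue rotations.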
We propose a two-stage learning baseline with a learnable weight scaling layer that reduces the bias caused by long-tailed distributions in LT-CIL and, given the limited exemplars, also improves the performance of conventional CIL.
Neural network based on an end-to-end trainable working memory that acts as an external storage where information about each agent can be continuously written, updated, and recalled.
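The write/update/recall cycle can be sketched with a per-agent slot memory: writes blend a new observation into an agent's slot, and reads recall content by attending over all slots. This is an illustrative toy, not the asset's architecture; the slot layout and gating rate here are assumptions.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

class AgentMemory:
    """Minimal external working memory with one slot per agent."""

    def __init__(self, num_agents: int, dim: int):
        self.slots = np.zeros((num_agents, dim))

    def write(self, agent: int, obs: np.ndarray, rate: float = 0.5):
        # Gated update: keep part of the old trace, blend in the new one.
        self.slots[agent] = (1 - rate) * self.slots[agent] + rate * obs

    def read(self, query: np.ndarray) -> np.ndarray:
        # Content-based recall: dot-product attention over agent slots.
        attn = softmax(self.slots @ query)
        return attn @ self.slots
```

In the trainable version, the write gate and the read query would be produced by learned networks and optimized end-to-end with the rest of the model.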