1 code implementation • 6 Jun 2024 • Yihang Chen, Qianyi Wu, Mehrtash Harandi, Jianfei Cai
In this paper, we introduce the Context-based NeRF Compression (CNC) framework, which leverages highly efficient context models to provide a storage-friendly NeRF representation.
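To make the context-model idea concrete: an entropy coder spends roughly -log2 p(symbol | context) bits per symbol, so a network that predicts sharper conditional distributions over the quantized features directly lowers storage. Below is a minimal sketch of such a context model; the module, its dimensions, and the neighbor-based context are illustrative assumptions, not CNC's actual architecture.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextModel(nn.Module):
    """Illustrative context model (not CNC's actual architecture):
    predicts a categorical distribution over the current quantized
    feature, conditioned on already-decoded neighboring features."""

    def __init__(self, ctx_dim: int, num_levels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ctx_dim, 64), nn.ReLU(),
            nn.Linear(64, num_levels),
        )

    def bits(self, context: torch.Tensor, symbol: torch.Tensor) -> torch.Tensor:
        # Estimated code length of `symbol`: -log2 p(symbol | context).
        log_probs = F.log_softmax(self.net(context), dim=-1)
        nll = -log_probs.gather(-1, symbol.unsqueeze(-1)).squeeze(-1)
        return nll / math.log(2.0)
```

Minimizing the average of `bits` during training is the standard rate term in learned compression: the better the context predicts each feature, the fewer bits the entropy coder needs.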
no code implementations • 5 Jun 2024 • Yihang Chen, Fanghui Liu, Taiji Suzuki, Volkan Cevher
We first derive the asymptotic expansion of high dimensional kernels under covariate shifts.
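For intuition, such expansions typically linearize the kernel around small inner products. The form below is written for an inner-product kernel, which is an illustrative assumption; the paper's exact kernel class and shift model may differ.

```latex
% Illustrative El Karoui-style expansion for an inner-product kernel
% k(x, x') = h(\langle x, x' \rangle / d) in high dimension d, where
% \langle x, x' \rangle / d is small off the diagonal:
k(x, x') = h(0) + h'(0)\,\frac{\langle x, x' \rangle}{d}
         + \frac{h''(0)}{2}\,\frac{\langle x, x' \rangle^{2}}{d^{2}}
         + O\!\left(d^{-3}\right).
```

Under covariate shift the law of the inner product ⟨x, x′⟩/d changes with the test distribution, which is what makes the effective (linearized) kernel distribution-dependent.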
no code implementations • 30 May 2024 • Jiaben Chen, Xin Yan, Yihang Chen, Siyuan Cen, Qinwei Ma, Haoyu Zhen, Kaizhi Qian, Lie Lu, Chuang Gan
In this work, we introduce a challenging task for simultaneously generating 3D holistic body motions and singing vocals directly from textual lyrics inputs, advancing beyond existing works that typically address these two modalities in isolation.
2 code implementations • 21 Mar 2024 • Yihang Chen, Qianyi Wu, Jianfei Cai, Mehrtash Harandi, Weiyao Lin
3D Gaussian Splatting (3DGS) has emerged as a promising framework for novel view synthesis, boasting rapid rendering speed with high fidelity.
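For reference, 3DGS renders a pixel by depth-sorting the Gaussians that cover it and alpha-compositing their contributions; this is the standard formulation from the original 3DGS paper, included here for context rather than taken from this abstract:

```latex
% Standard 3DGS alpha compositing over depth-ordered Gaussians i = 1..N:
C = \sum_{i=1}^{N} c_i \,\alpha_i \prod_{j=1}^{i-1} \left( 1 - \alpha_j \right),
```

where \(c_i\) is the \(i\)-th Gaussian's view-dependent color and \(\alpha_i\) its opacity modulated by the projected 2D Gaussian footprint.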
no code implementations • 14 Mar 2024 • Yihang Chen, Fanghui Liu, Yiping Lu, Grigorios G. Chrysos, Volkan Cevher
To derive the generalization bounds under this setting, our analysis necessitates a shift from the conventional time-invariant Gram matrix employed in the lazy training regime to a time-variant, distribution-dependent version.
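Concretely, the Gram matrix in question is the neural-tangent-kernel matrix evaluated along the training trajectory; the notation below is the standard one and is an assumption about how the paper writes it:

```latex
% Time-variant Gram matrix at training time t, for a network f(x; \theta_t):
K_t(x_i, x_j) = \left\langle \nabla_\theta f(x_i; \theta_t),\;
                             \nabla_\theta f(x_j; \theta_t) \right\rangle.
% Lazy-training analyses freeze this at K_0; here K_t evolves with
% \theta_t and inherits a dependence on the data distribution.
```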
1 code implementation • 21 Nov 2023 • Cheng Wan, Hongyuan Yu, Zhiqi Li, Yihang Chen, Yajun Zou, Yuqing Liu, Xuanwu Yin, Kunlong Zuo
To address this issue, we propose the Swift Parameter-free Attention Network (SPAN), a highly efficient SISR model that balances parameter count, inference speed, and image quality.
Ranked #37 on Image Super-Resolution on Set14 - 4x upscaling
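To illustrate what "parameter-free attention" can mean in the SPAN entry above: the attention map is computed directly from the features themselves, with no learnable query/key/value projections. The sketch below is a plausible reading of that idea, not necessarily SPAN's exact block; it uses a zero-centered sigmoid so activations that are large in magnitude, of either sign, receive the strongest weighting.

```python
import torch
import torch.nn as nn

class ParameterFreeAttention(nn.Module):
    """Illustrative parameter-free attention (a plausible reading, not
    necessarily SPAN's exact block): the attention map is a fixed
    function of the features, so it adds no parameters and almost no
    inference cost."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Zero-centered sigmoid is odd-symmetric: large |x| of either
        # sign yields a strong (signed) attention weight.
        attn = torch.sigmoid(x) - 0.5
        return x * attn
```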
1 code implementation • 30 Sep 2023 • Yihang Chen, Lukas Mauch
To address these issues, we propose Order-Preserving GFlowNets (OP-GFNs), which sample with probabilities in proportion to a learned reward function that is consistent with a provided (partial) order on the candidates, thus eliminating the need for an explicit formulation of the reward function.
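The key ingredient is that the reward never has to be specified numerically; it only has to respect the given (partial) order. One simple way to train such a reward is a pairwise ranking loss, sketched below; the function and its inputs are hypothetical, and the paper's actual objective is coupled to the GFlowNet sampler rather than standalone.

```python
import torch
import torch.nn.functional as F

def order_preserving_loss(log_reward: torch.Tensor,
                          pairs: torch.Tensor) -> torch.Tensor:
    """Hypothetical pairwise objective: each row (i, j) of `pairs` says
    candidate i is preferred to candidate j under the partial order, so
    the learned log-reward of i is pushed above that of j."""
    better = log_reward[pairs[:, 0]]
    worse = log_reward[pairs[:, 1]]
    # softplus(worse - better) vanishes once better exceeds worse.
    return F.softplus(worse - better).mean()
```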
no code implementations • 29 Sep 2021 • Yihang Chen, Grigorios Chrysos, Volkan Cevher
Domain generalization deals with the difference in distribution between the training and testing datasets, i.e., the domain shift problem, by extracting domain-invariant features.
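One common way to encourage domain-invariant features, not necessarily the one used here, is to penalize discrepancies in feature statistics across training domains. A minimal first-moment sketch (CORAL/MMD-flavored; the helper and its inputs are hypothetical):

```python
from typing import Sequence

import torch

def mean_discrepancy_penalty(features_by_domain: Sequence[torch.Tensor]) -> torch.Tensor:
    """Illustrative invariance penalty: each entry holds the feature
    matrix (batch x dim) extracted from one training domain; the
    penalty measures how far each domain's mean feature vector lies
    from the cross-domain average."""
    means = torch.stack([f.mean(dim=0) for f in features_by_domain])
    center = means.mean(dim=0, keepdim=True)
    return ((means - center) ** 2).sum(dim=1).mean()
```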
1 code implementation • NeurIPS 2020 • Jingtong Su, Yihang Chen, Tianle Cai, Tianhao Wu, Ruiqi Gao, Li-Wei Wang, Jason D. Lee
In this paper, we conduct sanity checks for the above beliefs on several recent unstructured pruning methods and surprisingly find that: (1) a set of methods that aim to find good subnetworks of the randomly initialized network (which we call "initial tickets") hardly exploit any information from the training data; (2) for the pruned networks obtained by these methods, randomly changing which weights are preserved in each layer, while keeping the total number of preserved weights per layer unchanged, does not affect the final performance.
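Finding (2) is easy to state in code: within each layer, randomly re-select which positions are preserved while keeping that layer's preserved-weight count fixed. A minimal sketch follows; the helper name and the binary-mask convention are assumptions.

```python
import torch

def shuffle_layer_mask(mask: torch.Tensor) -> torch.Tensor:
    """Randomly permute a binary pruning mask within one layer: the
    number of preserved weights stays the same, but *which* weights
    are preserved becomes random (the sanity check described above)."""
    flat = mask.flatten()
    shuffled = flat[torch.randperm(flat.numel())]
    return shuffled.reshape(mask.shape)
```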