no code implementations • 16 Feb 2024 • Zihao Lin, Mohammad Beigi, Hongxuan Li, Yufan Zhou, Yuxiang Zhang, Qifan Wang, Wenpeng Yin, Lifu Huang
Our in-depth study advocates more careful use of ME in real-world scenarios.
1 code implementation • 7 Feb 2024 • Jian Chen, Ruiyi Zhang, Yufan Zhou, Rajiv Jain, Zhiqiang Xu, Ryan Rossi, Changyou Chen
Controllable layout generation refers to the process of creating a plausible visual arrangement of elements within a graphic design (e.g., document and web designs) with constraints representing design intentions.
1 code implementation • 5 Dec 2023 • Yufan Zhou, Ruiyi Zhang, Jiuxiang Gu, Tong Sun
Some existing methods do not require fine-tuning, but their performance is unsatisfactory.
1 code implementation • 29 Jun 2023 • Yanzhe Zhang, Ruiyi Zhang, Jiuxiang Gu, Yufan Zhou, Nedim Lipka, Diyi Yang, Tong Sun
Instruction tuning unlocks the superior capability of Large Language Models (LLMs) to interact with humans.
1 code implementation • 23 May 2023 • Yufan Zhou, Ruiyi Zhang, Tong Sun, Jinhui Xu
However, generating images of a novel concept provided by a user input image remains a challenging task.
1 code implementation • 9 May 2023 • Jianyi Zhang, Saeed Vahidian, Martin Kuo, Chunyuan Li, Ruiyi Zhang, Tong Yu, Yufan Zhou, Guoyin Wang, Yiran Chen
This repository offers a foundational framework for exploring federated fine-tuning of LLMs using heterogeneous instructions across diverse categories.
no code implementations • 25 Dec 2022 • Yufan Zhou, Haiwei Dong, Abdulmotaleb El Saddik
In this paper, we study the task of 3D human pose estimation from depth images.
1 code implementation • CVPR 2023 • Yufan Zhou, Bingchen Liu, Yizhe Zhu, Xiao Yang, Changyou Chen, Jinhui Xu
Unlike the baseline diffusion model used in DALL-E 2, our method seamlessly encodes prior knowledge of the pre-trained CLIP model in its diffusion process by designing a new initialization distribution and a new transition step of the diffusion.
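The idea of encoding a pre-trained prior into the diffusion process by changing the initialization distribution can be sketched as follows. This is a minimal toy illustration, not the paper's actual model: the denoiser, the embedding, and all function names are hypothetical stand-ins.

```python
import numpy as np

def prior_informed_init(clip_embedding, sigma=0.5):
    """Start the reverse chain near a prior embedding instead of N(0, I)."""
    return clip_embedding + sigma * np.random.randn(*clip_embedding.shape)

def reverse_diffusion(denoise_fn, x_t, num_steps=50, noise_scale=0.1):
    """Toy reverse process: repeatedly denoise, re-injecting shrinking noise."""
    x = x_t
    for t in range(num_steps, 0, -1):
        x = denoise_fn(x, t)
        if t > 1:  # no noise on the final step
            x = x + noise_scale * np.sqrt(t / num_steps) * np.random.randn(*x.shape)
    return x

# Hypothetical denoiser that pulls samples toward a target embedding.
target = np.ones(8)
denoiser = lambda x, t: x + 0.2 * (target - x)

np.random.seed(0)
x_init = prior_informed_init(target)      # prior-informed start, not pure noise
sample = reverse_diffusion(denoiser, x_init)
```

The only change from a standard sampler is the initialization: the chain begins in a region the prior already considers plausible, which is the intuition the entry describes.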
Ranked #3 on Text-to-Image Generation on Multi-Modal-CelebA-HQ
no code implementations • 25 Oct 2022 • Yufan Zhou, Chunyuan Li, Changyou Chen, Jianfeng Gao, Jinhui Xu
The low requirements of the proposed method yield high flexibility and usability: it can benefit a wide range of settings, including few-shot, semi-supervised, and fully-supervised learning, and it can be applied to different models, including generative adversarial networks (GANs) and diffusion models.
no code implementations • CVPR 2022 • Yufan Zhou, Ruiyi Zhang, Changyou Chen, Chunyuan Li, Chris Tensmeyer, Tong Yu, Jiuxiang Gu, Jinhui Xu, Tong Sun
One of the major challenges in training text-to-image generation models is the need for a large number of high-quality text-image pairs.
no code implementations • 7 Dec 2021 • Yufan Zhou, Chunyuan Li, Changyou Chen, Jinhui Xu
With rapidly growing model complexity and data volume, training deep generative models (DGMs) for better performance has become an increasingly important challenge.
2 code implementations • 27 Nov 2021 • Yufan Zhou, Ruiyi Zhang, Changyou Chen, Chunyuan Li, Chris Tensmeyer, Tong Yu, Jiuxiang Gu, Jinhui Xu, Tong Sun
One of the major challenges in training text-to-image generation models is the need for a large number of high-quality image-text pairs.
Ranked #2 on Text-to-Image Generation on Multi-Modal-CelebA-HQ
no code implementations • 10 May 2021 • Yufan Zhou, Changyou Chen, Jinhui Xu
Learning high-dimensional distributions is an important yet challenging problem in machine learning with applications in various domains.
no code implementations • 7 Feb 2021 • Yufan Zhou, Zhenyi Wang, Jiayi Xian, Changyou Chen, Jinhui Xu
We achieve this goal by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS; and 2) solving the adaptation analytically based on the NTK theory.
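Solving the adaptation analytically, as this entry describes, can be illustrated with kernel ridge regression: in an RKHS the "inner loop" has a closed-form solution instead of gradient steps. This is a hedged sketch under toy assumptions — the RBF kernel stands in for an NTK, and the data and names are made up.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel standing in for an NTK: K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def analytic_adaptation(X_support, y_support, X_query, lam=1e-3):
    """Closed-form 'inner loop': kernel ridge regression on the support set
    replaces the gradient-based adaptation of MAML-style methods."""
    K = rbf_kernel(X_support, X_support)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_support)), y_support)
    return rbf_kernel(X_query, X_support) @ alpha

# Toy few-shot task: fit sin(x) from 5 support points, predict at x = 0.
X_s = np.linspace(-2, 2, 5)[:, None]
y_s = np.sin(X_s).ravel()
X_q = np.array([[0.0]])
pred = analytic_adaptation(X_s, y_s, X_q)
```

Because the adaptation is a linear solve rather than an optimization loop, it is both fast and differentiable end-to-end, which is the appeal of the analytic route.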
no code implementations • ICLR 2021 • Yufan Zhou, Zhenyi Wang, Jiayi Xian, Changyou Chen, Jinhui Xu
Within this paradigm, we introduce two meta learning algorithms in RKHS, which no longer need an explicit inner-loop adaptation as in the MAML framework.
no code implementations • ICLR 2021 • Kevin J Liang, Weituo Hao, Dinghan Shen, Yufan Zhou, Weizhu Chen, Changyou Chen, Lawrence Carin
Large-scale language models have recently demonstrated impressive empirical performance.
no code implementations • NeurIPS 2020 • Yufan Zhou, Changyou Chen, Jinhui Xu
Manifold learning is a fundamental problem in machine learning with numerous applications.
no code implementations • 16 May 2020 • Yufan Zhou, Jiayi Xian, Changyou Chen, Jinhui Xu
We then propose feature aggregation as the composition of the original neighbor-based kernel and a learnable kernel to encode feature similarities in a feature space.
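The composition of a fixed neighbor-based kernel with a learnable feature kernel can be sketched roughly as below. This is only an illustrative guess at the construction, not the paper's method: the elementwise product of two kernels (itself a valid kernel) combines graph structure with learned feature similarity, and all names here are hypothetical.

```python
import numpy as np

def neighbor_kernel(adj):
    """Fixed structural kernel from an adjacency matrix (self-loops added,
    symmetrically normalized)."""
    A = adj + np.eye(len(adj))
    deg = A.sum(1)
    return A / np.sqrt(np.outer(deg, deg))

def feature_kernel(X, W):
    """Learnable kernel on node features: inner product in a learned projection."""
    Z = X @ W
    return Z @ Z.T

def aggregate(adj, X, W):
    """Feature aggregation with the composed kernel: the product kernel
    weights each neighbor by both structure and feature similarity."""
    K = neighbor_kernel(adj) * feature_kernel(X, W)  # elementwise product of kernels
    return K @ X

# Toy 3-node path graph with 4-dimensional features.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
X = np.random.RandomState(0).randn(3, 4)
W = np.eye(4)  # hypothetical learned projection (identity for illustration)
out = aggregate(adj, X, W)
```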
1 code implementation • AAAI 2019 • Zhenyi Wang, Ping Yu, Yang Zhao, Ruiyi Zhang, Yufan Zhou, Junsong Yuan, Changyou Chen
In this paper, we focus on skeleton-based action generation and propose to model smooth and diverse transitions on a latent space of action sequences with much lower dimensionality.
Ranked #4 on Human action generation on NTU RGB+D 2D
no code implementations • 2 Dec 2019 • Yufan Zhou, Changyou Chen, Jinhui Xu
Learning with kernels is an important concept in machine learning.