no code implementations • ICML 2020 • Wei Chen, Yihan Du, Longbo Huang, Haoyu Zhao
For the Borda winner, we establish a reduction of the problem to the original CPE-MAB setting and design PAC and exact algorithms that achieve sample complexity similar to that in the CPE-MAB setting (nearly optimal for a subclass of problems) together with polynomial per-round running time.
no code implementations • 28 May 2024 • Haoyu Zhao, Xingyue Zhao, Lingting Zhu, Weixi Zheng, Yongchao Xu
Robot-assisted minimally invasive surgery benefits from improved dynamic scene reconstruction, which can enhance surgical outcomes.
no code implementations • 27 May 2024 • Haoyu Zhao, Wenhang Ge, Ying-Cong Chen
LLM-Optic first employs an LLM as a Text Grounder to interpret complex text queries and accurately identify objects the user intends to locate.
no code implementations • 18 Mar 2024 • Haoyu Zhao, Yuliang Gu, Zhou Zhao, Bo Du, Yongchao Xu, Rui Yu
Second, to better capture high-frequency components and detailed information, Frequency-Aware Multi-scale Loss (FAM) is proposed by effectively utilizing multi-scale feature space.
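The abstract does not spell out the loss, but the general idea can be sketched: compare FFT magnitude spectra of prediction and target across a pyramid of downsampled scales, so high-frequency detail contributes to the objective. The function names and the simple strided-downsampling scheme below are illustrative assumptions, not the paper's FAM definition:

```python
import numpy as np

def frequency_loss(pred, target):
    """L1 distance between 2D FFT magnitude spectra at one scale."""
    fp = np.abs(np.fft.fft2(pred))
    ft = np.abs(np.fft.fft2(target))
    return np.mean(np.abs(fp - ft))

def fam_loss(pred, target, num_scales=3):
    """Frequency loss averaged over a pyramid of 2x-downsampled scales."""
    total = 0.0
    for s in range(num_scales):
        step = 2 ** s
        total += frequency_loss(pred[::step, ::step], target[::step, ::step])
    return total / num_scales
```

The loss is zero only when the two spectra match at every scale, so blurry predictions that miss high-frequency content are penalized even if their pixel-wise error is small.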
no code implementations • 18 Mar 2024 • Haoyu Zhao, Wenhui Dong, Rui Yu, Zhou Zhao, Bo Du, Yongchao Xu
The task of single-source domain generalization (SDG) in medical image segmentation is crucial due to frequent domain shifts in clinical image datasets.
1 code implementation • 28 Feb 2024 • Kaifeng Lyu, Haoyu Zhao, Xinran Gu, Dingli Yu, Anirudh Goyal, Sanjeev Arora
Public LLMs such as Llama 2-Chat have driven huge activity in LLM research.
1 code implementation • 29 Nov 2023 • Haoyu Zhao, Tianyi Lu, Jiaxi Gu, Xing Zhang, Zuxuan Wu, Hang Xu, Yu-Gang Jiang
Identity-consistent video generation seeks to synthesize videos that are guided by both textual prompts and reference images of entities.
Ranked #1 on Video Generation on MSR-VTT
no code implementations • 8 Oct 2023 • Rishab Balasubramanian, Jiawei Li, Prasad Tadepalli, Huazheng Wang, Qingyun Wu, Haoyu Zhao
Contrary to prior understanding of multi-armed bandits, our work reveals a surprising fact that the attackability of a specific CMAB instance also depends on whether the bandit instance is known or unknown to the adversary.
no code implementations • 7 Sep 2023 • Jiaxi Gu, Shicong Wang, Haoyu Zhao, Tianyi Lu, Xing Zhang, Zuxuan Wu, Songcen Xu, Wei Zhang, Yu-Gang Jiang, Hang Xu
Conditioned on an initial video clip with a small number of frames, additional frames are iteratively generated by reusing the original latent features and following the previous diffusion process.
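A minimal sketch of this autoregressive extension idea, with a toy averaging denoiser standing in for the diffusion model (the function names, window size, and conditioning scheme are all illustrative assumptions, not the paper's method):

```python
import numpy as np

def extend_video(initial_latents, denoise_step, num_new, window=4, steps=10):
    """Autoregressively append frames: each new latent starts from noise and
    is refined by a denoiser conditioned on the trailing window of existing
    latents (a sketch of latent reuse, not the paper's exact scheme)."""
    rng = np.random.default_rng(0)
    latents = list(initial_latents)
    for _ in range(num_new):
        z = rng.standard_normal(latents[-1].shape)  # fresh noise for the new frame
        context = np.stack(latents[-window:])       # reuse trailing latents as context
        for _ in range(steps):
            z = denoise_step(z, context)
        latents.append(z)
    return latents

def toy_denoise(z, context):
    """Stand-in denoiser: pull the noisy latent toward the context mean."""
    return 0.5 * z + 0.5 * context.mean(axis=0)
```

Because each new frame only conditions on a bounded window of previous latents, the loop can in principle extend a clip indefinitely at constant per-frame cost.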
1 code implementation • ICCV 2023 • Wenhang Ge, Tao Hu, Haoyu Zhao, Shu Liu, Ying-Cong Chen
We show that, together with a reflection direction-dependent radiance, our model achieves high-quality surface reconstruction on reflective surfaces and outperforms state-of-the-art methods by a large margin.
no code implementations • 14 Mar 2023 • Haoyu Zhao, Abhishek Panigrahi, Rong Ge, Sanjeev Arora
We also show that the Inside-Outside algorithm is optimal for masked language modeling loss on the PCFG-generated data.
1 code implementation • 13 Feb 2023 • Abhishek Panigrahi, Nikunj Saunshi, Haoyu Zhao, Sanjeev Arora
Given a downstream task and a model fine-tuned on that task, a simple optimization identifies a very small subset of parameters ($\sim 0.01$% of model parameters) responsible for more than $95$% of the model's performance, in the sense that grafting the fine-tuned values for just this tiny subset onto the pre-trained model performs almost as well as the fully fine-tuned model.
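The grafting step itself is easy to picture with plain arrays. The paper selects the subset via a small optimization; the sketch below instead uses a simple largest-change heuristic, so the `graft` function and its selection rule are assumptions for illustration only:

```python
import numpy as np

def graft(pretrained, finetuned, frac=0.0001):
    """Copy the fraction of parameters with the largest fine-tuning change
    from the fine-tuned model onto the pretrained weights (illustrative
    magnitude rule; the paper finds the subset by optimization)."""
    delta = np.abs(finetuned - pretrained)
    k = max(1, int(frac * delta.size))
    idx = np.argsort(delta.ravel())[-k:]      # indices of the k largest changes
    grafted = pretrained.copy().ravel()
    grafted[idx] = finetuned.ravel()[idx]     # graft only the tiny subset
    return grafted.reshape(pretrained.shape), idx
```

If fine-tuning truly concentrated its effect on a tiny subset, the grafted model coincides with the fine-tuned one almost everywhere while touching only `frac` of the weights.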
1 code implementation • 26 Oct 2022 • Lingxiao Huang, Zhize Li, Jialin Sun, Haoyu Zhao
Vertical federated learning (VFL), where data features are stored in multiple parties distributively, is an important area in machine learning.
1 code implementation • 20 Jun 2022 • Zhize Li, Haoyu Zhao, Boyue Li, Yuejie Chi
We then propose a unified framework SoteriaFL for private federated learning, which accommodates a general family of local gradient estimators including popular stochastic variance-reduced gradient methods and the state-of-the-art shifted compression scheme.
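The shifted compression idea can be illustrated in a few lines: rather than compressing the gradient directly, client and server maintain a shared shift and only the compressed difference is transmitted, so the compression error shrinks as gradients stabilize. This is a sketch under assumed notation, not SoteriaFL itself:

```python
import numpy as np

def topk(v, k):
    """Keep the k largest-magnitude entries, zero the rest (a standard biased compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

class ShiftedCompressor:
    """Shifted compression sketch: transmit C(g - h) and keep the shift h
    in sync on both sides, so only the residual needs to be compressed."""
    def __init__(self, dim, k, alpha=0.5):
        self.h = np.zeros(dim)
        self.k, self.alpha = k, alpha

    def communicate(self, g):
        msg = topk(g - self.h, self.k)       # what actually gets transmitted
        recovered = self.h + msg             # server-side reconstruction
        self.h = self.h + self.alpha * msg   # both sides update the shift identically
        return recovered
```

With a fixed gradient, repeated rounds drive the shift toward the gradient, so the reconstruction error decays even though each message is heavily compressed.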
1 code implementation • 31 Jan 2022 • Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtárik, Yuejie Chi
Communication efficiency has been widely recognized as the bottleneck for large-scale decentralized machine learning applications in multi-agent or federated environments.
no code implementations • 24 Dec 2021 • Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik
In the convex setting, COFIG converges within $O(\frac{(1+\omega)\sqrt{N}}{S\epsilon})$ communication rounds, which, to the best of our knowledge, is also the first convergence result for compression schemes that do not communicate with all the clients in each round.
no code implementations • 10 Aug 2021 • Haoyu Zhao, Zhize Li, Peter Richtárik
We propose a new federated learning algorithm, FedPAGE, which further reduces the communication complexity by using the recent optimal PAGE method (Li et al., 2021) in place of plain SGD in FedAvg.
no code implementations • 10 Feb 2020 • Wei Chen, Li-Wei Wang, Haoyu Zhao, Kai Zheng
In a special case where the reward function is linear and we have an exact oracle, we design a parameter-free algorithm that achieves nearly optimal regret both in the switching case and in the dynamic case without knowing the parameters in advance.
no code implementations • 14 Nov 2019 • Haoyu Zhao, Wei Chen
The problem is more challenging than the standard online learning scenario since the private value distribution is non-stationary, meaning that the distribution of bidders' private values may change over time, and we need to use the non-stationary regret to measure the performance of our algorithm.
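One standard way to formalize such a notion (the symbols below are illustrative assumptions, not the paper's exact notation) is a dynamic regret that compares, round by round, against the optimum under that round's distribution:

```latex
% F_t: bidders' value distribution in round t (may change with t)
% \mathrm{Rev}(p;\,F_t): expected revenue of posting price p under F_t
% r_t: the algorithm's realized revenue in round t
R_T \;=\; \sum_{t=1}^{T}\Big(\max_{p}\,\mathrm{Rev}(p;\,F_t)\;-\;\mathbb{E}[r_t]\Big)
```

Unlike static regret, the benchmark here moves with the distribution, so a single fixed price is no longer the right comparator.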
no code implementations • 26 Sep 2019 • Rong Ge, Runzhe Wang, Haoyu Zhao
It has been observed (Zhang et al., 2016) that deep neural networks can memorize: they achieve 100% accuracy on training data.
no code implementations • 20 Jun 2019 • Haoyu Zhao, Wei Chen
In this paper, we study the stochastic version of the one-sided full information bandit problem, where we have $K$ arms $[K] = \{1, 2, \ldots, K\}$, and playing arm $i$ would gain reward from an unknown distribution for arm $i$ while obtaining reward feedback for all arms $j \ge i$.
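This feedback structure is easy to simulate: playing arm $i$ reveals samples of every arm $j \ge i$, so lower-indexed plays are more informative. The sketch below pairs the simulator with a generic UCB index; the strategy is illustrative, not the paper's algorithm:

```python
import numpy as np

def one_sided_bandit(means, horizon, seed=0):
    """Toy simulator for one-sided full-information feedback: playing arm i
    earns arm i's reward but reveals Bernoulli samples of every arm j >= i.
    Plays a generic UCB index (illustrative strategy, not the paper's).
    Returns the per-arm observation counts."""
    rng = np.random.default_rng(seed)
    K = len(means)
    sums = np.zeros(K)
    counts = np.zeros(K)
    for t in range(horizon):
        ucb = sums / np.maximum(counts, 1) \
            + np.sqrt(2 * np.log(t + 1) / np.maximum(counts, 1))
        i = int(np.argmax(ucb))
        samples = rng.binomial(1, means)  # one Bernoulli draw per arm
        for j in range(i, K):             # one-sided feedback: only arms j >= i
            sums[j] += samples[j]
            counts[j] += 1
    return counts
```

A structural consequence of the feedback model is visible in the output: observation counts are non-decreasing in the arm index, and the last arm is observed in every round.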