no code implementations • 29 Sep 2021 • Zhe Wang, Jie Lin, Xue Geng, Mohamed M. Sabry Aly, Vijay Chandrasekhar
We formulate the quantization of deep neural networks as a rate-distortion optimization problem, and present an ultra-fast algorithm to search for the bit allocation across channels.
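The paper's fast search algorithm is not spelled out in this snippet, but the rate-distortion view can be sketched with a simple (hypothetical) greedy allocator: hand out one bit at a time to whichever channel's quantization error drops the most, until the rate budget is spent. The `quantize` and `greedy_bit_allocation` helpers below are illustrative names, not the authors' implementation.

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization of a weight vector to `bits` bits."""
    if bits == 0:
        return np.zeros_like(w)
    scale = np.abs(w).max() / (2 ** (bits - 1))
    return np.round(w / scale) * scale

def greedy_bit_allocation(channels, total_bits, max_bits=8):
    """Greedily give one bit at a time to the channel whose quantization
    MSE (the 'distortion') decreases the most, until the rate budget
    `total_bits` is exhausted."""
    bits = [0] * len(channels)

    def mse(i, b):
        return float(np.mean((channels[i] - quantize(channels[i], b)) ** 2))

    for _ in range(total_bits):
        # distortion reduction each channel would gain from one extra bit
        gains = [mse(i, b) - mse(i, b + 1) if b < max_bits else -1.0
                 for i, b in enumerate(bits)]
        best = int(np.argmax(gains))
        if gains[best] <= 0:
            break
        bits[best] += 1
    return bits
```

As expected under a rate-distortion objective, high-variance channels receive more bits than near-constant ones.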
1 code implementation • 13 Jan 2021 • Govind Narasimman, Kangkang Lu, Arun Raja, Chuan Sheng Foo, Mohamed Sabry Aly, Jie Lin, Vijay Chandrasekhar
Despite the vast literature on Human Activity Recognition (HAR) with wearable inertial sensor data, it is perhaps surprising that there are few studies investigating semi-supervised learning for HAR, particularly in challenging scenarios with class imbalance.
no code implementations • 9 Jul 2020 • Manas Gupta, Siddharth Aravindan, Aleksandra Kalisz, Vijay Chandrasekhar, Lin Jie
PuRL achieves more than 80% sparsity on the ResNet-50 model while retaining a Top-1 accuracy of 75.37% on the ImageNet dataset.
no code implementations • 25 Jun 2020 • Yasin Yazici, Chuan-Sheng Foo, Stefan Winkler, Kim-Hui Yap, Vijay Chandrasekhar
We examine two key questions in GAN training, namely overfitting and mode drop, from an empirical perspective.
no code implementations • 16 Apr 2020 • Saisubramaniam Gopalakrishnan, Pranshu Ranjan Singh, Yasin Yazici, Chuan-Sheng Foo, Vijay Chandrasekhar, ArulMurugan Ambikapathi
Utilization of classification latent space information for downstream reconstruction and generation is an intriguing and relatively unexplored area.
no code implementations • ICLR 2020 • Raden Mu'az Mun'im, Jie Lin, Vijay Chandrasekhar, Koichi Shinoda
(4) Fast: the number of training epochs required by MaskConvNet is observed to be close to that of training a baseline without pruning.
no code implementations • 25 Sep 2019 • Zhe Wang, Jie Lin, Mohamed M. Sabry Aly, Sean I Young, Vijay Chandrasekhar, Bernd Girod
In this paper, we address the important problem of optimizing the bit allocation of weights and activations for deep CNN compression.
1 code implementation • 17 Sep 2019 • Quang-Hieu Pham, Pierre Sevestre, Ramanpreet Singh Pahwa, Huijing Zhan, Chun Ho Pang, Yuda Chen, Armin Mustafa, Vijay Chandrasekhar, Jie Lin
With the increasing global popularity of self-driving cars, there is an immediate need for challenging real-world datasets for benchmarking and training various computer vision tasks such as 3D object detection.
no code implementations • ICLR 2019 • Panayotis Mertikopoulos, Bruno Lecouat, Houssam Zenati, Chuan-Sheng Foo, Vijay Chandrasekhar, Georgios Piliouras
Owing to their connection with generative adversarial networks (GANs), saddle-point problems have recently attracted considerable interest in machine learning and beyond.
1 code implementation • 9 Feb 2019 • Yasin Yazici, Bruno Lecouat, Chuan-Sheng Foo, Stefan Winkler, Kim-Hui Yap, Georgios Piliouras, Vijay Chandrasekhar
We propose a GAN design which models multiple distributions effectively and discovers their commonalities and particularities.
no code implementations • 4 Jan 2019 • Xue Geng, Jie Fu, Bin Zhao, Jie Lin, Mohamed M. Sabry Aly, Christopher Pal, Vijay Chandrasekhar
This paper addresses a challenging problem: how to reduce energy consumption without incurring a performance drop when deploying deep neural networks (DNNs) at the inference stage.
1 code implementation • 19 Dec 2018 • Bruno Lecouat, Ken Chang, Chuan-Sheng Foo, Balagopal Unnikrishnan, James M. Brown, Houssam Zenati, Andrew Beers, Vijay Chandrasekhar, Jayashree Kalpathy-Cramer, Pavitra Krishnaswamy
Supervised deep learning algorithms have enabled significant performance gains in medical image classification tasks.
no code implementations • 12 Nov 2018 • Anran Wang, Anh Tuan Luu, Chuan-Sheng Foo, Hongyuan Zhu, Yi Tay, Vijay Chandrasekhar
In this paper, we present the Holistic Multi-modal Memory Network (HMMN) framework which fully considers the interactions between different input sources (multi-modal context, question) in each hop.
no code implementations • 22 Aug 2018 • Sibo Song, Ngai-Man Cheung, Vijay Chandrasekhar, Bappaditya Mandal
Specifically, using frame-level features, DATP regresses the importance of different temporal segments and generates weights for them.
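The snippet describes attention-style temporal pooling: predict a score per segment, normalize the scores, and pool the segment features by weighted sum. A minimal sketch, assuming a linear regressor for the scores (the function names here are illustrative, not from the paper):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(segments, w, b=0.0):
    """Score each temporal segment with a linear regressor, turn the
    scores into normalized attention weights, and pool the segment
    features by weighted sum into a single video-level feature."""
    scores = segments @ w + b      # (T,) predicted importance per segment
    weights = softmax(scores)      # (T,) non-negative, sums to 1
    pooled = weights @ segments    # (D,) attention-weighted feature
    return weights, pooled
```

The softmax ensures the learned importances form a proper weighting over segments regardless of the raw score scale.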
1 code implementation • ICLR 2019 • Bruno Lecouat, Chuan-Sheng Foo, Houssam Zenati, Vijay Chandrasekhar
Generative Adversarial Networks are powerful generative models that are able to model the manifold of natural images.
1 code implementation • ICLR 2019 • Yasin Yazici, Chuan-Sheng Foo, Stefan Winkler, Kim-Hui Yap, Georgios Piliouras, Vijay Chandrasekhar
We examine two different techniques for parameter averaging in GAN training.
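One of the averaging techniques commonly examined in this setting is an exponential moving average (EMA) of the generator's parameters: evaluation and sampling use the averaged copy while the raw copy keeps training. A minimal sketch (whether this matches the paper's exact variants is an assumption):

```python
def ema_update(avg_params, cur_params, beta=0.999):
    """One step of exponential moving average over a list of parameters.
    The averaged copy smooths out the oscillations typical of GAN
    training; the current parameters continue to be updated as usual."""
    return [beta * a + (1.0 - beta) * c
            for a, c in zip(avg_params, cur_params)]
```

After k steps toward a fixed target value v, the average equals v * (1 - beta**k), converging geometrically.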
no code implementations • 6 Mar 2018 • Savitha Ramasamy, Kanagasabai Rajaraman, Pavitra Krishnaswamy, Vijay Chandrasekhar
The online generative training begins with zero neurons in the hidden layer, then adds and updates neurons to adapt to the statistics of streaming data in a single-pass, unsupervised manner, yielding a feature representation well suited to the data.
no code implementations • 6 Nov 2017 • Fang Yuan, Zhe Wang, Jie Lin, Luis Fernando D'Haro, Kim Jung Jae, Zeng Zeng, Vijay Chandrasekhar
In particular, we unify traditional "knowledgeless" machine learning models and knowledge graphs in a novel end-to-end framework.
no code implementations • 18 Jul 2017 • Gaurav Manek, Jie Lin, Vijay Chandrasekhar, Ling-Yu Duan, Sateesh Giduthuri, Xiao-Li Li, Tomaso Poggio
In this work, we focus on the problem of image instance retrieval with deep descriptors extracted from pruned Convolutional Neural Networks (CNN).
1 code implementation • 17 Jun 2017 • Zhe Wang, Kingsley Kuan, Mathieu Ravaut, Gaurav Manek, Sibo Song, Yuan Fang, Seokhwan Kim, Nancy Chen, Luis Fernando D'Haro, Luu Anh Tuan, Hongyuan Zhu, Zeng Zeng, Ngai Man Cheung, Georgios Piliouras, Jie Lin, Vijay Chandrasekhar
Beyond that, we extend the original competition by including text information in the classification, making this a truly multi-modal approach with vision, audio and text.
no code implementations • 26 May 2017 • Kingsley Kuan, Mathieu Ravaut, Gaurav Manek, Huiling Chen, Jie Lin, Babar Nazir, Cen Chen, Tse Chiang Howe, Zeng Zeng, Vijay Chandrasekhar
We present a deep learning framework for computer-aided lung cancer diagnosis.
no code implementations • 26 Apr 2017 • Ling-Yu Duan, Vijay Chandrasekhar, Shiqi Wang, Yihang Lou, Jie Lin, Yan Bai, Tiejun Huang, Alex ChiChung Kot, Wen Gao
This paper provides an overview of the ongoing Compact Descriptors for Video Analysis (CDVA) standard from the ISO/IEC Moving Picture Experts Group (MPEG).
no code implementations • 18 Jan 2017 • Vijay Chandrasekhar, Jie Lin, Qianli Liao, Olivier Morère, Antoine Veillard, Ling-Yu Duan, Tomaso Poggio
One major drawback of CNN-based global descriptors is that uncompressed deep neural network models require hundreds of megabytes of storage, making them inconvenient to deploy in mobile applications or in custom hardware.
no code implementations • 15 Mar 2016 • Olivier Morère, Jie Lin, Antoine Veillard, Vijay Chandrasekhar, Tomaso Poggio
The first one is Nested Invariance Pooling (NIP), a method inspired by i-theory, a mathematical theory for computing group-invariant transformations with feed-forward neural networks.
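The core idea behind this kind of invariance pooling can be sketched in a few lines: compute a feature over the orbit of a transformation group (e.g. rotations) and pool the results with a statistic, so the descriptor no longer changes when the input is transformed. This is a toy illustration of the principle, not NIP's nested construction:

```python
import numpy as np

def invariant_pool(x, group, feat_fn):
    """Pool a feature over the orbit of a transformation group.
    The mean over the orbit is one admissible pooling statistic;
    the result is identical for any transformed version of x."""
    feats = np.stack([feat_fn(g(x)) for g in group])
    return feats.mean(axis=0)
```

Because the group is closed, applying a group element to the input only permutes the orbit, leaving the pooled descriptor unchanged.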
no code implementations • 25 Jan 2016 • Sibo Song, Ngai-Man Cheung, Vijay Chandrasekhar, Bappaditya Mandal, Jie Lin
With the increasing availability of wearable devices, research on egocentric activity recognition has received much attention recently.
no code implementations • 9 Jan 2016 • Olivier Morère, Antoine Veillard, Jie Lin, Julie Petta, Vijay Chandrasekhar, Tomaso Poggio
Based on a thorough empirical evaluation using several publicly available datasets, we show that our method is able to significantly and consistently improve retrieval results every time a new type of invariance is incorporated.
no code implementations • 10 Nov 2015 • Jie Lin, Olivier Morère, Julie Petta, Vijay Chandrasekhar, Antoine Veillard
Then triplet networks, a rank-learning scheme based on weight-sharing nets, are used to fine-tune the binary embedding functions to retain as much as possible of the useful metric properties of the original space.
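A triplet network passes an anchor, a positive, and a negative through the same (weight-shared) embedding and penalizes the anchor for being closer to the negative than to the positive. A minimal sketch with a single tanh layer standing in for the embedding net (so outputs can later be binarized by sign):

```python
import numpy as np

def embed(x, W):
    """Shared embedding branch; tanh keeps outputs near +/-1 so they
    can later be binarized with sign()."""
    return np.tanh(x @ W)

def triplet_loss(W, anchor, positive, negative, margin=1.0):
    """Hinge loss that pulls the anchor toward the positive and pushes
    it away from the negative by at least `margin`; all three branches
    share the same weights W (the weight-sharing in a triplet net)."""
    a, p, n = embed(anchor, W), embed(positive, W), embed(negative, W)
    d_ap = np.sum((a - p) ** 2)   # anchor-positive distance
    d_an = np.sum((a - n) ** 2)   # anchor-negative distance
    return max(0.0, d_ap - d_an + margin)
```

When the negative is already far enough away the loss is zero; when it coincides with the anchor the full margin is paid.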
no code implementations • 11 Aug 2015 • Vijay Chandrasekhar, Jie Lin, Olivier Morère, Hanlin Goh, Antoine Veillard
The second part of the study focuses on the impact of geometrical transformations such as rotations and scale changes.
no code implementations • 30 Jan 2015 • Olivier Morère, Hanlin Goh, Antoine Veillard, Vijay Chandrasekhar, Jie Lin
A comprehensive user study is conducted comparing our proposed method to a variety of schemes, including the summarization currently in use by one of the most popular video sharing websites.
no code implementations • 20 Jan 2015 • Jie Lin, Olivier Morere, Vijay Chandrasekhar, Antoine Veillard, Hanlin Goh
This work focuses on representing very high-dimensional global image descriptors using very compact 64-1024 bit binary hashes for instance retrieval.
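The simplest baseline for such compact codes is sign-binarized projection: map the high-dimensional descriptor down to n dimensions and keep one bit per dimension, comparing codes by Hamming distance. This sketch uses a random projection purely for illustration; the paper's hashing scheme is more sophisticated.

```python
import numpy as np

def binary_hash(desc, P):
    """Compress a high-dimensional descriptor into n bits: project with
    P (one column per output bit) and keep only the sign."""
    return (desc @ P > 0).astype(np.uint8)

def hamming(h1, h2):
    """Distance between two binary hashes: number of differing bits."""
    return int(np.count_nonzero(h1 != h2))
```

A 4096-dimensional float descriptor becomes a 64-bit code, and retrieval reduces to cheap Hamming-distance comparisons.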