Gautham Krishna Gudur
email: gauthamkrishna [at] utexas [dot] edu
        gauthamkrishna [dot] gudur [at] gmail [dot] com

CV / Google Scholar
LinkedIn / Twitter
Github / HackerRank

I am a 2nd year Ph.D. student at The University of Texas at Austin in the Department of Electrical and Computer Engineering (ECE), advised by the wonderful Prof. Edison Thomaz. I am also a part of the WNCG group and affiliated with IFML. My research focuses on resource-efficient, data-centric, and human-centric AI. Previously, I worked as a Data Scientist at Ericsson R&D in the Global AI Accelerator (GAIA) team centered around machine intelligence and telecom. I have also worked at SmartCardia - an AI-assisted wearable healthcare spin-off from EPFL.

In a past life, I was a Research Assistant at Solarillion Foundation working on on-device ML. I earned a Bachelor's in Information Technology from SSN College of Engineering, Chennai, where I also did some basic research on ML, IoT and HCI. In my time away from research, I enjoy traveling, exploring Indian classical and world music, and playing badminton and DotA.

[ Research Interests | News | Publications | Patents | Services | Honors/Awards | Talks | Summer Schools | MOOCs ]

  Research Interests
  • Accelerating training and inference using data-centric approaches by prioritizing important samples, in continual (curriculum/few-shot) learning and data-efficient human-in-the-loop settings
  • Ubiquitous computing, cross-modal learning, and on-device human-centric AI (wearable/audio sensing, human activity recognition, mobile health, etc.) with a focus on real-world deployability
  • Leveraging data/sample- and parameter-efficient techniques for foundation models and LLMs (Generative AI)
  • Federated learning under statistical heterogeneities with new labels and models
  • Learning from limited supervision, particularly under data/label scarcity, sparsity, and calibrated uncertainty
Broadly, resource-efficient and resource-aware learning,
where resource := data, sample, label, model, parameter, compute, etc.

Here are a few keywords that might best describe my research interests (present/past, hopefully in the future!).

  What's New!
    Publications    [ Preprints | Conference/Journal/Workshop | Poster/Extended Abstract ]
* denotes equal contribution and joint lead authorship
  Conference/Journal/Workshop

[NEW] SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors
Vijay Lingam*, Atula Tejaswi*, Aditya Vavre*, Aneesh Shetty*, Gautham Krishna Gudur*, Joydeep Ghosh, Alex Dimakis, Eunsol Choi, Aleksandar Bojchevski, Sujay Sanghavi
Neural Information Processing Systems (NeurIPS), 2024
Also presented at the Workshop on Advancing Neural Network Training (WANT): Computational Efficiency, Scalability, and Resource Optimization [Oral Presentation] and the Workshop on Efficient Systems for Foundation Models (ES-FoMo),
ICML 2024

pdf / abstract / poster / code / tweet / bibtex

Popular parameter-efficient fine-tuning (PEFT) methods, such as LoRA and its variants, freeze pre-trained model weights W and inject learnable matrices ∆W. These ∆W matrices are structured for efficient parameterization, often using techniques like low-rank approximations or scaling vectors. However, these methods typically show a performance gap compared to full fine-tuning. Although recent PEFT methods have narrowed this gap, they do so at the cost of additional learnable parameters. We propose SVFT, a simple approach that fundamentally differs from existing methods: the structure imposed on ∆W depends on the specific weight matrix W. Specifically, SVFT updates W as a sparse combination of outer products of its singular vectors, training only the coefficients (scales) of these sparse combinations. This approach allows fine-grained control over expressivity through the number of coefficients. Extensive experiments on language and vision benchmarks show that SVFT recovers up to 96% of full fine-tuning performance while training only 0.006 to 0.25% of parameters, outperforming existing methods that only recover up to 85% performance using 0.03 to 0.8% of the trainable parameter budget.
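
For intuition, here is a minimal sketch of the diagonal variant of this kind of update in a PyTorch-like setting; the class name and details below are illustrative only, not the released implementation (see the code link above).

  # Minimal SVFT-style sketch (assumptions: PyTorch, diagonal-only coefficients).
  import torch
  import torch.nn as nn

  class SVFTLinearSketch(nn.Module):
      """Wraps a frozen weight W and learns only scales on its singular vectors."""
      def __init__(self, weight: torch.Tensor):
          super().__init__()
          U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
          self.register_buffer("W", weight)   # frozen pre-trained weight
          self.register_buffer("U", U)        # frozen left singular vectors
          self.register_buffer("Vh", Vh)      # frozen right singular vectors
          # Learnable coefficients m: Delta W = U diag(m) Vh, i.e. a combination
          # of outer products u_i v_i^T; only m is trained.
          self.m = nn.Parameter(torch.zeros(S.shape[0]))

      def forward(self, x: torch.Tensor) -> torch.Tensor:
          delta_w = self.U @ torch.diag(self.m) @ self.Vh
          return x @ (self.W + delta_w).T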

  @inproceedings{lingam_svft24,
  title={SVFT: Parameter-Efficient Fine-Tuning with
  Singular Vectors},
  author={Lingam, Vijay and Tejaswi, Atula
  and Vavre, Aditya and Shetty, Aneesh
  and Gudur, Gautham Krishna and Ghosh, Joydeep
  and Dimakis, Alex and Choi, Eunsol and
  Bojchevski, Aleksandar and Sanghavi, Sujay},
  booktitle={2nd Workshop on Advancing Neural
  Network Training: Computational Efficiency,
  Scalability, and Resource Optimization
  (WANT@ICML 2024)},
  year={2024},
  url={https://openreview.net/forum?id=DOUskwCqg5}
  }
  

Can Calibration Improve Sample Prioritization?
Ganesh Tata*, Gautham Krishna Gudur*, Gopinath Chennupati, Mohammad Emtiyaz Khan
Human in the Loop Learning (HILL) Workshop
Has It Trained Yet? (HITY) Workshop
NeurIPS 2022

pdf / abstract / poster / code / bibtex

Calibration can reduce overconfident predictions of deep neural networks, but can calibration also accelerate training? In this paper, we show that it can when used to prioritize examples for subset selection. We study the effect of popular calibration techniques in selecting better subsets of samples during training (also called sample prioritization) and observe that calibration can improve the quality of subsets, reduce the number of examples per epoch (by at least 70%), and thereby speed up the overall training process. We further study the effect of using calibrated pre-trained models coupled with calibration during training to guide sample prioritization, which again seems to improve the quality of samples selected.
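
As a loose illustration of the sample-prioritization step (not the paper's code; the entropy criterion and temperature value here are assumptions), one epoch's subset could be chosen roughly as follows:

  # Illustrative sketch only: keep the most uncertain fraction of the pool,
  # scoring with a temperature-scaled (i.e., calibrated) model.
  import torch
  import torch.nn.functional as F

  def prioritized_indices(logits: torch.Tensor, keep_frac: float = 0.3,
                          temperature: float = 1.5) -> torch.Tensor:
      """logits: [n_pool, n_classes] from the current (or pre-trained) model."""
      probs = F.softmax(logits / temperature, dim=-1)             # calibrated probabilities
      entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)   # per-example uncertainty
      k = max(1, int(keep_frac * logits.shape[0]))
      return entropy.topk(k).indices                              # train on these this epoch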

@inproceedings{tata_hity22,
  author = {Tata, Ganesh and Gudur, Gautham Krishna
  and Chennupati, Gopinath and 
  Khan, Mohammad Emtiyaz},
  title = {Can Calibration Improve Sample 
  Prioritization?},
  booktitle = {Has it Trained Yet? 
  NeurIPS 2022 Workshop},
  year = {2022}
}

Data-Efficient Automatic Model Selection in Unsupervised Anomaly Detection
Gautham Krishna Gudur, Raaghul R, Adithya K, Shrihari Vasudevan
IEEE ICMLA 2022 [Oral Presentation]

pdf / abstract / slides / bibtex

Anomaly Detection is a widely used technique in machine learning that identifies context-specific outliers. Most real-world anomaly detection applications are unsupervised, owing to the bottleneck of obtaining labeled data for a given context. In this paper, we solve two important problems pertaining to unsupervised anomaly detection. First, we identify only the most informative subsets of data points and obtain ground truths from the domain expert (oracle); second, we perform efficient model selection using a Bayesian Inference framework and recommend the top-k models to be fine-tuned prior to deployment. To this end, we exploit multiple existing and novel acquisition functions, and successfully demonstrate the effectiveness of the proposed framework using a weighted Ranking Score (η) to accurately rank the top-k models. Our empirical results show a significant reduction in data points acquired (with at least 60% reduction) while not compromising on the efficiency of the top-k models chosen, with both uniform and non-uniform priors over models.
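
A toy sketch of the Bayesian ranking idea (the Bernoulli-agreement likelihood and the variable names below are my own simplification, not the paper's exact formulation):

  # Toy sketch: posterior over candidate anomaly-detection models from a few
  # oracle-labelled points, then recommend the top-k models for fine-tuning.
  import numpy as np

  def top_k_models(agreements: np.ndarray, prior: np.ndarray, k: int = 3):
      """agreements: [n_models, n_labelled] binary agreement with oracle labels."""
      acc = agreements.mean(axis=1).clip(1e-3, 1 - 1e-3)   # per-model agreement rate
      log_lik = (agreements * np.log(acc[:, None])
                 + (1 - agreements) * np.log(1 - acc[:, None])).sum(axis=1)
      log_post = np.log(prior) + log_lik                   # works with non-uniform priors too
      post = np.exp(log_post - log_post.max())
      post /= post.sum()                                   # normalized posterior over models
      return np.argsort(-post)[:k], post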

@inproceedings{gudur_icmla22,
  author = {Gudur, Gautham Krishna and Raaghul, R
  and Adithya, K and Vasudevan, Shrihari},
  title = {Data-Efficient Automatic Model 
  Selection in Unsupervised Anomaly Detection},
  booktitle = {IEEE ICMLA 2022},
  year = {2022}
}

Zero-Shot Federated Learning with New Classes for Audio Classification
Gautham Krishna Gudur, Satheesh Kumar Perepu
INTERSPEECH 2021
Abridged version: Distributed and Private Machine Learning (DPML) & Hardware Aware Efficient Training (HAET) workshops, ICLR 2021
Also presented as a poster at EEML 2021

pdf / abstract / poster / video / slides / bibtex

Federated learning is an effective way of extracting insights from different user devices while preserving the privacy of users. However, new classes with completely unseen data distributions can stream across any device in a federated learning setting, whose data cannot be accessed by the global server or other users. To this end, we propose a unified zero-shot framework to handle these aforementioned challenges during federated learning. We simulate two scenarios here – 1) when the new class labels are not reported by the user, the traditional FL setting is used; 2) when new class labels are reported by the user, we synthesize Anonymized Data Impressions by calculating class similarity matrices corresponding to each device’s new classes followed by unsupervised clustering to distinguish between new classes across different users. Moreover, our proposed framework can also handle statistical heterogeneities in both labels and models across the participating users. We empirically evaluate our framework on-device across different communication rounds (FL iterations) with new classes in both local and global updates, along with heterogeneous labels and models, on two widely used audio classification applications – keyword spotting and urban sound classification, and observe an average deterministic accuracy increase of ∼4.041% and ∼4.258% respectively.
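
In very rough terms, the new-class matching step can be pictured as below; this is an illustrative simplification (the actual Anonymized Data Impressions construction is described in the paper), and the clustering choice is an assumption.

  # Rough illustration: build a class-similarity matrix from each user's
  # new-class classifier weights and cluster the rows so that matching new
  # classes across users can be identified without sharing raw data.
  import numpy as np
  from sklearn.cluster import AgglomerativeClustering
  from sklearn.preprocessing import normalize

  def match_new_classes(new_class_weights, n_new_classes):
      """new_class_weights: list of per-user arrays, each [n_new_i, feat_dim]."""
      rows = normalize(np.vstack(new_class_weights))   # one row per (user, new class)
      similarity = rows @ rows.T                       # class-similarity matrix
      clusters = AgglomerativeClustering(n_clusters=n_new_classes)
      return clusters.fit_predict(similarity)          # cluster id per (user, class)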

@inproceedings{gudur_interspeech21,
  author = {Gudur, Gautham Krishna and 
  Perepu, Satheesh Kumar},
  title = {Zero-Shot Federated Learning with 
  New Classes for Audio Classification},
  booktitle = {Proc. Interspeech 2021},
  pages = {1579--1583},
  year = {2021},
  doi = {10.21437/Interspeech.2021-2264}
}

Bayesian Active Learning for Wearable Stress and Affect Detection
Abhijith Ragav*, Gautham Krishna Gudur*
Machine Learning for Mobile Health Workshop
NeurIPS 2020

pdf / abstract / poster / bibtex

In the recent past, psychological stress has been increasingly observed in humans, and early detection is crucial to prevent health risks. Stress detection using on-device deep learning algorithms has been on the rise owing to advancements in pervasive computing. However, an important challenge that needs to be addressed is handling unlabeled data in real-time via suitable ground truthing techniques (like Active Learning), which should help establish affective states (labels) while also selecting only the most informative data points to query from an oracle. In this paper, we propose a framework with capabilities to represent model uncertainties through approximations in Bayesian Neural Networks using Monte-Carlo (MC) Dropout. This is combined with suitable acquisition functions for active learning. Empirical results on a popular stress and affect detection dataset experimented on a Raspberry Pi 2 indicate that our proposed framework achieves a considerable efficiency boost during inference, with a substantially low number of acquired pool points during active learning across various acquisition functions. Variation Ratios achieves an accuracy of 90.38%, which is comparable to the maximum test accuracy, while training on about 40% less data.
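
A small sketch of the Variation Ratios acquisition with MC Dropout (illustrative only; the number of stochastic passes and the query loop are assumptions):

  # Sketch: score unlabeled points by Variation Ratios using MC Dropout,
  # then query the oracle for the highest-scoring ones.
  import torch

  @torch.no_grad()
  def variation_ratios(model, x: torch.Tensor, n_passes: int = 20) -> torch.Tensor:
      model.train()                          # keep dropout active at inference time
      preds = torch.stack([model(x).argmax(dim=-1) for _ in range(n_passes)])
      # Count how often each sample's modal prediction occurs across passes.
      modal_counts = torch.stack(
          [(preds == c).sum(dim=0) for c in preds.unique()]).max(dim=0).values
      return 1.0 - modal_counts.float() / n_passes      # higher => more uncertain

  def query_indices(model, pool_x: torch.Tensor, n_query: int = 32) -> torch.Tensor:
      return variation_ratios(model, pool_x).topk(n_query).indices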

@article{ragav_mlmh20,
  author = {Ragav, Abhijith and 
  Gudur, Gautham Krishna},
  title = {Bayesian Active Learning for 
  Wearable Stress and Affect Detection},
  journal = {arXiv preprint arXiv:2012.02702},
  year = {2020}
}

Resource-Constrained Federated Learning with Heterogeneous Labels and Models for Human Activity Recognition
Gautham Krishna Gudur, Satheesh Kumar Perepu
2nd International Workshop on Deep Learning for Human Activity Recognition (DL-HAR), IJCAI-PRICAI 2020
Abridged version: Machine Learning for Mobile Health Workshop,
NeurIPS 2020

pdf / abstract / poster / slides / bibtex

One of the most significant applications in pervasive computing for modeling user behavior is Human Activity Recognition (HAR). Such applications necessitate us to characterize insights from multiple resource-constrained user devices using machine learning techniques for effective personalized activity monitoring. On-device Federated Learning proves to be an extremely viable option for distributed and collaborative machine learning in such scenarios, and is an active area of research. However, there are a variety of challenges in addressing statistical (non-IID data) and model heterogeneities across users. In addition, in this paper, we explore a new challenge of interest - to handle heterogeneities in labels (activities) across users during federated learning. To this end, we propose a framework with two different versions for federated label-based aggregation, which leverage overlapping information gain across activities - one using Model Distillation Update, and the other using Weighted α-update. Empirical evaluation on the Heterogeneity Human Activity Recognition (HHAR) dataset (with four activities for effective elucidation of results) indicates an average deterministic accuracy increase of at least ~11.01% with the model distillation update strategy and ~9.16% with the weighted α-update strategy. We demonstrate the on-device capabilities of our proposed framework by using Raspberry Pi 2, a single-board computing platform.

@inproceedings{gudur_dlhar20,
  author = {Gudur, Gautham Krishna and 
  Perepu, Satheesh K},
  title = {Resource-Constrained Federated Learning
  with Heterogeneous Labels and Models for 
  Human Activity Recognition},
  booktitle = {Deep Learning for Human Activity 
  Recognition},
  pages = {55--69},
  year = {2021},
  publisher={Springer Singapore}
}

@article{gudur_mlmh20,
  author = {Gudur, Gautham Krishna and
  Perepu, Satheesh K},
  title = {Federated Learning with Heterogeneous 
  Labels and Models for Mobile Activity 
  Monitoring},
  journal = {arXiv preprint arXiv:2012.02539},
  year = {2020}
}

Resource-Constrained Federated Learning with Heterogeneous Labels and Models
Gautham Krishna Gudur, Bala Shyamala Balaji, Satheesh Kumar Perepu
3rd International Workshop on Artificial Intelligence of Things (AIoT)
ACM KDD 2020

pdf / abstract / slides / bibtex

Various IoT applications demand resource-constrained machine learning mechanisms for different applications such as pervasive healthcare, activity monitoring, speech recognition, real-time computer vision, etc. This necessitates us to leverage information from multiple devices with few communication overheads. Federated Learning proves to be an extremely viable option for distributed and collaborative machine learning. Particularly, on-device federated learning is an active area of research; however, there are a variety of challenges in addressing statistical (non-IID data) and model heterogeneities. In addition, in this paper, we explore a new challenge of interest - to handle label heterogeneities in federated learning. To this end, we propose a framework with simple α-weighted federated aggregation of scores which leverages overlapping information gain across labels, while saving bandwidth costs in the process. Empirical evaluation on the Animals-10 dataset (with 4 labels for effective elucidation of results) indicates an average deterministic accuracy increase of at least ~16.7%. We also demonstrate the on-device capabilities of our proposed framework by experimenting with federated learning and inference across different iterations on a Raspberry Pi 2, a single-board computing platform.
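
A bare-bones sketch of an α-weighted aggregation of per-label scores in this spirit (the score layout and the fixed α below are assumptions, not the exact update in the paper):

  # Sketch: the server blends its own per-label scores with each user's scores,
  # but only on the labels that the user actually holds.
  import numpy as np

  def alpha_weighted_aggregate(global_scores, user_scores, user_labels, alpha=0.5):
      """global_scores: [n_labels, dim]; user_scores[i]: [len(user_labels[i]), dim];
      user_labels[i]: indices of the labels present on user i."""
      agg = global_scores.copy()
      weight = np.ones(global_scores.shape[0])
      for scores, labels in zip(user_scores, user_labels):
          agg[labels] += alpha * scores        # add this user's contribution
          weight[labels] += alpha
      return agg / weight[:, None]             # normalized per-label aggregate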

@article{gudur_aiot20,
  author = {Gudur, Gautham Krishna and Balaji,
  Bala Shyamala and Perepu, Satheesh K},
  title = {Resource-Constrained Federated Learning
  with Heterogeneous Labels and Models},
  journal = {arXiv preprint arXiv:2011.03206},
  year = {2020}
}

A Dynamically Adaptive Movie Occupancy Forecasting System with Feature Optimization
Sundararaman Venkataramani, Ateendra Ramesh, Sharan Sundar S, Aashish Kumar Jain, Gautham Krishna Gudur, Vineeth Vijayaraghavan
Workshop on Learning and Mining with Industrial Data (LMID)
IEEE ICDM 2019

pdf / abstract / bibtex

Demand Forecasting is a primary revenue management strategy in any business model, particularly in the highly volatile entertainment/movie industry wherein inaccurate forecasting may lead to loss in revenue, improper workforce allocation and food wastage or shortage. Predominant challenges in Occupancy Forecasting might involve complexities in modeling external factors – particularly in Indian multiplexes with multilingual movies, high degrees of uncertainty in crowd behavior, seasonality drifts, influence of socio-economic events and weather conditions. In this paper, we investigate the problem of movie occupancy forecasting, a significant step in the decision making process of movie scheduling and resource management, by leveraging the historical transactions performed in a multiplex consisting of eight screens with an average footfall of over 5500 on holidays and over 3500 on non-holidays every day. To effectively capture crowd behavior and predict the occupancy, we engineer and benchmark behavioral features by structuring recent historical transaction data spanning over five years from one of the top Indian movie multiplex chains, and propose various deep learning and conventional machine learning models. We also propose and optimize a novel feature called Sale Velocity to incorporate dynamic crowd behavior in movies. The performance of these models is benchmarked in real time using Mean Absolute Percentage Error (MAPE) and found to be highly promising, substantially outperforming a domain expert’s predictions.

@inproceedings{venkataramani_icdmw19,
  author = {Venkataramani, Sundararaman and 
  Ramesh, Ateendra and S, Sharan Sundar and 
  Jain, Aashish Kumar and Gudur, Gautham Krishna
  and Vijayaraghavan, Vineeth},
  title = {A Dynamically Adaptive Movie Occupancy 
  Forecasting System with Feature Optimization},
  booktitle = {International Conference on Data 
  Mining Workshops (ICDMW)},
  pages = {799--805},
  year = {2019},
  organization = {IEEE}
}

A Vision-based Deep On-Device Intelligent Bus Stop Recognition System
Gautham Krishna Gudur, Ateendra Ramesh, Srinivasan R
8th International Workshop on Pervasive Urban Applications (PURBA)
ACM UbiComp 2019 [Oral Presentation]

pdf / abstract / code / slides / bibtex

Intelligent public transportation systems are the cornerstone of any smart city, given the advancements made in the field of self-driving autonomous vehicles - particularly autonomous buses, where it becomes difficult to systematize a way to identify the arrival at a bus stop on-the-fly for the bus to appropriately halt and notify its passengers. This paper proposes an automatic and intelligent bus stop recognition system built on computer vision techniques, deployed on a low-cost single-board computing platform with minimal human supervision. The on-device recognition engine aims to extract the features of a bus stop and its surrounding environment, which eliminates the need for a conventional Global Positioning System (GPS) look-up, thereby alleviating network latency and accuracy issues. The dataset proposed in this paper consists of images of 11 different bus stops taken at different locations in Chennai, India during day and night. The core engine consists of a convolutional neural network (CNN) of size ~260 kB that is computationally lightweight for training and inference. In order to automatically scale and adapt to the dynamic landscape of bus stops over time, incremental learning (model update) techniques were explored on-device using real-time incoming data points. Real-time incoming streams of images are unlabeled; hence, suitable ground-truthing strategies (like Active Learning) should help establish labels on-the-fly. Lightweight Bayesian Active Learning strategies with Bayesian Neural Networks using dropout (capable of representing model uncertainties) enable selection of the most informative images to query from an oracle. Intelligent rendering of the inference module by iteratively looking for better images on either side of the bus stop environment propels the system towards human-like behavior. The proposed work can be integrated seamlessly into existing vision-based self-driving autonomous vehicles.

@inproceedings{gudur_purba19,
  author = {Gudur, Gautham Krishna and Ramesh, 
  Ateendra and R, Srinivasan},
  title = {A Vision-Based Deep On-Device 
  Intelligent Bus Stop Recognition System},
  booktitle = {Adjunct Proceedings of the 2019 ACM  
  International Joint Conference on Pervasive and
  Ubiquitous Computing and Proceedings of the 2019 
  ACM International Symposium on Wearable Computers},
  pages = {963--968},
  numpages = {6},
  year = {2019}
}

Label Frequency Transformation for Multi-Label Multi-Class Text Classification
Raghavan A K, Venkatesh Umaashankar, Gautham Krishna Gudur
GermEval, KONVENS 2019
(Winner of Shared Task 1, Subtask (a) in post-evaluation phase)

pdf / abstract / poster / code / bibtex

In this paper, we (Team Raghavan) describe the system of our submission for GermEval 2019 Task 1 - Subtask (a) and Subtask (b), which are multi-label multi-class classification tasks. The goal is to classify short texts describing German books into one or multiple classes, 8 generic categories for Subtask (a) and 343 specific categories for Subtask (b). Our system comprises three stages: (a) transform the multi-label multi-class problem into a single-label multi-class problem and build a category model; (b) build a class count model to predict the number of classes a given input belongs to; (c) transform the single-label problem back into a multi-label problem by selecting the top-k predictions from the category model, with the optimal k value predicted from the class count model. Our approach utilizes a Support Vector Classification model on the extracted vectorized tf-idf features by leveraging Byte-Pair Encoding (BPE) tokens, and reaches f1-micro scores of 0.857 in the test evaluation phase and 0.878 in the post-evaluation phase for Subtask (a), and 0.395 in the post-evaluation phase for Subtask (b) of the competition. Our solution code is available at: https://github.com/oneraghavan/germeval-2019.
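
In scikit-learn terms, the three stages look roughly like this (a simplification that omits the BPE tokenization; the actual submission is in the repository linked above):

  # Sketch of the three-stage pipeline: (a) tf-idf + SVM category model,
  # (b) class-count model, (c) top-k selection using the predicted count.
  import numpy as np
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.svm import LinearSVC

  def fit_and_predict(train_texts, train_labels, train_counts, test_texts):
      vec = TfidfVectorizer(sublinear_tf=True)
      X_tr, X_te = vec.fit_transform(train_texts), vec.transform(test_texts)
      category_model = LinearSVC().fit(X_tr, train_labels)   # single-label category model
      count_model = LinearSVC().fit(X_tr, train_counts)      # predicts #classes per text
      scores = category_model.decision_function(X_te)
      ks = count_model.predict(X_te)
      return [category_model.classes_[np.argsort(-scores[i])[:max(1, int(k))]].tolist()
              for i, k in enumerate(ks)]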

@inproceedings{raghavan_konvens19,
  author = {Raghavan, AK and Umaashankar, Venkatesh
  and Gudur, Gautham Krishna},
  title = {Label Frequency Transformation for 
  Multi-Label Multi-Class Text Classification},
  booktitle = {Proceedings of the 15th Conference on 
  Natural Language Processing (KONVENS 2019)},
  pages = {341--346},
  year = {2019},
}

ActiveHARNet: Towards On-Device Deep Bayesian Active Learning for Human Activity Recognition
Gautham Krishna Gudur, Prahalathan Sundaramoorthy, Venkatesh Umaashankar
3rd International Workshop on Embedded and Mobile Deep Learning (EMDL), ACM MobiSys 2019 [Oral Presentation]
Also presented as a poster at EEML 2020

pdf / abstract / code / video / slides / bibtex

Various health-care applications such as assisted living, fall detection, etc., require modeling of user behavior through Human Activity Recognition (HAR). HAR using mobile- and wearable-based deep learning algorithms has been on the rise owing to the advancements in pervasive computing. However, there are two other challenges that need to be addressed: first, the deep learning model should support on-device incremental training (model updates) from real-time incoming data points to learn user behavior over time, while also being resource-friendly; second, a suitable ground truthing technique (like Active Learning) should help establish labels on-the-fly while also selecting only the most informative data points to query from an oracle. Hence, in this paper, we propose ActiveHARNet, a resource-efficient deep ensembled model which supports on-device Incremental Learning and inference, with capabilities to represent model uncertainties through approximations in Bayesian Neural Networks using dropout. This is combined with suitable acquisition functions for active learning. Empirical results on two publicly available wrist-worn HAR and fall detection datasets indicate that ActiveHARNet achieves a considerable efficiency boost during inference across different users, with a substantially low number of acquired pool points (at least 60% reduction) during incremental learning on both datasets across various acquisition functions, thus demonstrating deployment and Incremental Learning feasibility.

@inproceedings{gudur_emdl19,
  author = {Gudur, Gautham Krishna and 
  Sundaramoorthy, Prahalathan and 
  Umaashankar, Venkatesh},
  title = {ActiveHARNet: Towards On-Device 
  Deep Bayesian Active Learning for Human 
  Activity Recognition},
  booktitle = {The 3rd International Workshop 
  on Deep Learning for Mobile Systems and 
  Applications},
  pages = {7--12},
  numpages = {6},
  year = {2019}
}

HARNet: Towards On-Device Incremental Learning using Deep Ensembles on Constrained Devices
Prahalathan Sundaramoorthy, Gautham Krishna Gudur, Manav Rajiv Moorthy, R Nidhi Bhandari, Vineeth Vijayaraghavan
2nd International Workshop on Embedded and Mobile Deep Learning (EMDL), ACM MobiSys 2018 [Oral Presentation]

pdf / abstract / code / slides / bibtex

Recent advancements in the domain of pervasive computing have seen the incorporation of sensor-based Deep Learning algorithms in Human Activity Recognition (HAR). Contemporary Deep Learning models are engineered to alleviate the difficulties posed by conventional Machine Learning algorithms, which require extensive domain knowledge to obtain heuristic hand-crafted features. Upon training and deployment of these Deep Learning models on ubiquitous mobile/embedded devices, it must be ensured that the model adheres to their computation and memory limitations, in addition to addressing the various mobile- and user-based heterogeneities prevalent in actuality. To handle this, we propose HARNet - a resource-efficient and computationally viable network to enable on-line Incremental Learning and User Adaptability as a mitigation technique for anomalous user behavior in HAR. The Heterogeneity Activity Recognition dataset was used to evaluate HARNet and other proposed variants by utilizing acceleration data acquired from diverse mobile platforms across three different modes from a practical application perspective. We use decimation as a down-sampling technique for generalizing sampling frequencies across mobile devices, and the Discrete Wavelet Transform for preserving information across frequency and time. Systematic evaluation of HARNet on User Adaptability yields an increase in accuracy of ~35% by leveraging the model's capability to extract discriminative features across activities in heterogeneous environments.
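
The decimation + DWT preprocessing described above, as a rough sketch (the wavelet family, decomposition level, and downsampling factor below are assumptions):

  # Sketch: down-sample each accelerometer axis to a common rate, then take a
  # Discrete Wavelet Transform to retain time- and frequency-domain information.
  import numpy as np
  import pywt
  from scipy.signal import decimate

  def preprocess_window(acc_window: np.ndarray, q: int = 2,
                        wavelet: str = "db4", level: int = 3) -> np.ndarray:
      """acc_window: [n_samples, 3] tri-axial acceleration for one window."""
      feats = []
      for axis in range(acc_window.shape[1]):
          x = decimate(acc_window[:, axis], q)            # generalize sampling frequency
          coeffs = pywt.wavedec(x, wavelet, level=level)  # DWT coefficients per axis
          feats.append(np.concatenate(coeffs))
      return np.concatenate(feats)                        # feature vector for the model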

@inproceedings{sundaramoorthy_emdl18,
  author = {Sundaramoorthy, Prahalathan and 
  Gudur, Gautham Krishna and Moorthy, 
  Manav Rajiv and Bhandari, R Nidhi and 
  Vijayaraghavan, Vineeth},
  title = {HARNet: Towards On-Device 
  Incremental Learning Using Deep Ensembles 
  on Constrained Devices},
  booktitle = {Proceedings of the 2nd International 
  Workshop on Embedded and Mobile Deep Learning},
  pages = {31--36},
  numpages = {6},
  year = {2018}
}

A Generic Multi-modal Dynamic Gesture Recognition System using Machine Learning
Gautham Krishna G, Karthik Subramanian Nathan, Yogesh Kumar B, Ankith A Prabhu, Ajay Kannan, Vineeth Vijayaraghavan
IEEE FICC 2018

pdf / abstract / code / slides / bibtex

Human-computer interaction facilitates intelligent communication between humans and computers, in which gesture recognition plays a prominent role. This paper proposes a machine learning system to identify dynamic gestures using triaxial acceleration data acquired from two public datasets. These datasets, uWave and Sony, were acquired using accelerometers embedded in Wii remotes and smartwatches, respectively. A dynamic gesture signed by the user is characterized by a generic set of features extracted across time and frequency domains. The system was analyzed from an end-user perspective and was modelled to operate in three modes. The modes of operation determine the subsets of data to be used for training and testing the system. From an initial set of seven classifiers, three were chosen to evaluate each dataset across all modes, making the system mode-neutral and dataset-independent. The proposed system is able to classify gestures performed at varying speeds with minimal preprocessing, making it computationally efficient. Moreover, this system was found to run on a low-cost embedded platform – Raspberry Pi Zero (USD 5), making it economically viable.

@inproceedings{krishna_ficc18,
  author = {Krishna, G Gautham and Nathan,
  Karthik Subramanian and Kumar, B Yogesh 
  and Prabhu, Ankith A and Kannan, Ajay and
  Vijayaraghavan, Vineeth},
  title = {A Generic Multi-modal Dynamic Gesture
  Recognition System Using Machine Learning},
  booktitle = {Future of Information and 
  Communication Conference},
  pages = {603--615},
  year = {2018},
  organization = {Springer}
}

Electroencephalography Based Analysis of Emotions Among Indian Film Viewers
Gautham Krishna G, G Krishna, N Bhalaji
ICAICR 2017, Springer

pdf / abstract / slides / bibtex

The film industry has been a major factor in the rapid growth of the Indian entertainment industry. While watching a film, the viewers undergo an experience that evolves over time, thereby grabbing their attention. This triggers a sequence of perceptual, cognitive and emotional processes. Neurocinematics is an emerging field of research that measures the cognitive responses of a film viewer. Neurocinematic studies, to date, have been performed using functional magnetic resonance imaging (fMRI); however, recent studies have suggested the use of advancements in electroencephalography (EEG) in neurocinematics to address the issues involved with fMRI. In this article, the emotions corresponding to two different genres of Indian films are captured from the real-time brainwaves of viewers using EEG and analyzed using the R language.

@inproceedings{krishna_icaicr17,
  author={G, Gautham Krishna and Krishna, G and 
  Bhalaji, N},
  title = {Electroencephalography Based Analysis 
  of Emotions Among Indian Film Viewers},
  booktitle = {International Conference on Advanced
  Informatics for Computing Research},
  pages = {145--155},
  year = {2017},
  organization = {Springer}
}

Analysis of Routing Protocol for Low-power & Lossy Networks in IoT Real Time Applications
Gautham Krishna G, G Krishna, N Bhalaji
ICRTCSE 2016, Procedia Computer Science

pdf / abstract / bibtex

Wide-scale sensing by Wireless Sensor Networks (WSNs) has impacted several areas of modern life, offering the ability to measure, observe and understand various physical factors of our environment. The rapid increase of WSN devices in an actuating-communicating network has led to the evolution of the Internet of Things (IoT), where information is shared seamlessly across platforms by blending sensors and actuators with our environment. These low-cost WSN devices provide automation in medical and environmental monitoring. Evaluating the performance of these sensors under RPL enhances their use in real-world applications. Understanding RPL performance across different nodes motivates our study of utilizing WSNs in day-to-day applications. We analyze which sensor nodes (motes) are effective for given environmental scenarios and propose a collective view of the corresponding metrics for enhanced throughput in each field of use.

@article{krishna_icrtcse16,
  author = {Krishna, G Gautham and Krishna, G 
  and Bhalaji, N},
  title = {Analysis of Routing Protocol for Low-power
  and Lossy Networks in IoT Real Time Applications},
  journal = {Procedia Computer Science},
  volume = {87},
  pages = {270--274},
  year = {2016},
  publisher={Elsevier}
}
  Poster/Extended Abstract

Heterogeneous Zero-Shot Federated Learning with New Classes for On-Device Audio Classification
Gautham Krishna Gudur, Satheesh Kumar Perepu
MobiUK 2021 (Third UK Mobile, Wearable and Ubiquitous Systems Research Symposium)


Bayesian Active Learning for Wearable and Mobile Health
Gautham Krishna Gudur, Abhijith Ragav, Prahalathan Sundaramoorthy, Venkatesh Umaashankar
BDL 2020 (NeurIPS Europe meetup on Bayesian Deep Learning)


Handling Real-time Unlabeled Data in Activity Recognition using Deep Bayesian Active Learning and Data Programming
Gautham Krishna Gudur, Prahalathan Sundaramoorthy, Venkatesh Umaashankar
MobiUK 2019 (Second UK Mobile, Wearable and Ubiquitous Systems Research Symposium), University of Oxford


Neurocinematics: The Intelligent Review System
N Bhalaji, G Krishna, G Gautham Krishna
CBC 2015 (3rd International Conference on Cognition, Brain and Computation), IIT Gandhinagar

  Patents

  Services
  • Program Committee Member/Reviewer
    • NeurIPS 2024
      - Efficient Natural Language and Speech Processing workshop (ENLSP)
      - Fine-Tuning in Modern Machine Learning: Principles and Scalability (FITML)
    • ICML 2024 - Efficient Systems for Foundation Models Workshop (ES-FoMo)
    • ICLR 2021 - Distributed and Private Machine Learning Workshop (DPML)
    • NeurIPS - Machine Learning for Health Workshop (ML4H)
      - ML4H 2020, ML4H 2019
    • KONVENS 2019, GermEval
  • Technical Reviewer of the book titled "Hands-On Meta Learning With Python"
  • Event Organizer of "Data Nuggets" - a Data Science event, Invente 2016
  Honors and Awards
  • Graduate Ph.D. Fellowship from Cockrell School of Engineering at UT Austin
  • Top 1 percentile in HackerRank – Algorithms Domain – Problem Solving (Advanced)
  • Full financial registration grant to attend ICLR 2021, NeurIPS 2020 and OxML 2020
  • Our project AIB (Automated Intelligent knowledge Base) won Ericsson's Top Performance Competition 2020 in the Operational Excellence category
  • Undergraduate financial research grant of INR 25,000 from SSN College of Engineering
  • Winner of GermEval Shared Task 1 Challenge - Subtask (a) @ KONVENS 2019 in post-evaluation phase
  • Top 10 percentile in 42nd National Mathematics Talent Competitions, India
  • Certification of Merit for Grade A1 in all subjects in AISSE (CBSE 10th boards)
  • Completed all 10 levels of UCMAS Mental Arithmetic (Abacus)
  • Division Level Badminton Player (U-19)
  • 29th Rank overall in Grade 3 Keyboard
  Talks
  • Machine Learning and Ubiquitous Computing (June 2022)
    SSN College of Engineering
  • Heterogeneous Zero-Shot Federated Learning with New Classes for On-Device Audio Classification (July 2021)
    MobiUK 2021
  • Telecom-Specific Language Translation using GCP (May 2021)
    Ericsson/Google Cloud Day
  • Resource-Constrained Machine Learning for Ubiquitous Computing Applications (Sept 2020)
    Flipped by GAIUS / slides
  Summer Schools
  MOOCs/Certifications

Template modified from this, this, this and this.