Resolving the Mixing Time of the Langevin Algorithm to its Stationary Distribution for Log-Concave Sampling
Analyzing the mixing time of the Langevin Algorithm to its stationary distribution in the setting of log-concave sampling.
PDP: Parameter-free Differentiable Pruning is All You Need
An efficient and effective train-time pruning scheme, Parameter-free Differentiable Pruning (PDP), offers state-of-the-art results in model size, accuracy, and training cost.
Population Expansion for Training Language Models with Private Federated Learning
Expanding the population for training language models with federated learning and differential privacy to improve model utility and reduce training latency.
Conformalization of Sparse Generalized Linear Models
The blog post discusses the conformalization of sparse generalized linear models and the challenges of computing confidence sets in regression problems.
DUET: 2D Structured and Equivariant Representations
The blog post discusses DUET, a proposed method for 2D structured and equivariant representations that maintain transformation-related information in multi-view self-supervised learning.
Differentially Private Heavy Hitter Detection using Federated Analytics
Practical heuristics that improve the performance of prefix-tree-based algorithms for differentially private heavy hitter detection, using aggregate and local differential privacy, an adaptive hyperparameter tuning algorithm, and an exploration of different data-selection schemes.
UPSCALE: Unconstrained Channel Pruning
The blog post explores UPSCALE, a technique that addresses the latency overhead introduced by unconstrained channel pruning in neural networks.
The Role of Entropy and Reconstruction for Multi-View Self-Supervised Learning
Examining the role of entropy and reconstruction in multi-view self-supervised learning (MVSSL) and their relation to mutual information (MI).
Optimize AWS Inferentia utilization with FastAPI and PyTorch models on Amazon EC2 Inf1 & Inf2 instances
Optimizing AWS Inferentia utilization with FastAPI and PyTorch models on Amazon EC2 Inf1 & Inf2 instances to improve performance and reduce cost.
How Patsnap used GPT-2 inference on Amazon SageMaker with low latency and cost
Patsnap explains how they ran GPT-2 inference on Amazon SageMaker to achieve low latency and low cost.
GenAI most impactful tech of the decade | Gartner AI Hype Cycle
The blog post discusses Generative AI as the most impactful technology of the decade and its position on the Gartner AI Hype Cycle.
Special report: Which generation is most serious about their streak?
Determining which generation is most committed to maintaining a streak of 365+ days and ranking the generations accordingly.