Cloudflare: Cloudflare incident on September 17, 2024
An analysis of the September 17, 2024 Cloudflare incident, caused by an internal software error that affected Business plan websites, and the steps taken to prevent a similar issue in the future.
Cloudflare: Removing uncertainty through "what-if" capacity planning
Modeling 'what-if' scenarios for capacity planning at Cloudflare: a look behind the curtain
Berkeley AI: Linguistic Bias in ChatGPT: Language Models Reinforce Dialect Discrimination
Exploring linguistic bias in ChatGPT's responses to different varieties of English and its implications for dialect discrimination
Duolingo: 5 product lessons we learned from building Friend Streak
5 key insights for product managers from developing Friend Streak
Pinterest: Feature Caching for Recommender Systems with CacheLib
Optimizing recommender systems with feature caching using CacheLib, Meta's open-source caching engine.
Cloudflare: How Cloudflare is helping domain owners with the upcoming Entrust CA distrust by Chrome and Mozilla
A technical blog post discussing how Cloudflare is assisting domain owners affected by the upcoming distrust of Entrust CA certificates by Chrome and Mozilla
Duolingo: Série entre amis (Friend Streak): the new way to stay motivated together
Discover how the new 'Série entre amis' feature on Duolingo enhances motivation through friend accountability.
EleutherAI: The Practitioner's Guide to the Maximal Update Parameterization
A practical guide to implementing Maximal Update Parameterization (μP) for neural network training, covering its benefits: hyperparameters that transfer stably across model sizes, reduced tuning costs, improved loss at large scale, and more stable, predictable training.
Databricks: Fine-tuning Llama 3.1 with Long Sequences
Optimizing long-sequence fine-tuning of Llama 3.1 for high-quality Retrieval Augmented Generation (RAG) and tool-use systems
AWS ML: Fine-tune Meta Llama 3.1 models using torchtune on Amazon SageMaker
A guide to fine-tuning Meta Llama 3.1 models with PyTorch's torchtune library on Amazon SageMaker.
OpenAI: Genmab launches "AI Everywhere"
How biotech company Genmab is deploying artificial intelligence across its organization with its "AI Everywhere" launch.