Upcoming API Security Updates — Action Required
Upcoming API security updates requiring action from developers.
Looking for a developer experience engineer
This role has been filled.
Improving language model behavior by training on a curated dataset
Enhancing language model performance through training on a carefully curated dataset.
An update on our racial justice efforts
An update on DeepMind's racial justice efforts and its commitment to combating racism and advancing racial equity.
Why Release a Large Language Model?
Exploring the benefits of building and open-sourcing a large language model for AI safety.
Did I Break You? Reverse Dependency Verification
Exploring reverse dependency verification and why it matters for maintaining a tech stack amid changing company dynamics.
Activation Function Ablation
Examining the impact of removing activation functions from GPT-like autoregressive language models.
Finetuning Models on Downstream Tasks
Finetuning GPT-Neo on downstream tasks and measuring its performance with an evaluation harness.
Evaluating Different Fewshot Description Prompts on GPT-3
Evaluating how different few-shot description prompts affect the performance of GPT-3.
On the Sizes of OpenAI API Models
Using an evaluation harness to estimate the sizes of OpenAI API models from their performance.
OpenAI Scholars 2021: Final projects
Exploring innovative final projects from the OpenAI Scholars 2021 cohort.
Advancing sports analytics through AI research
Creating testing environments from sports analytics to advance AI research beyond the lab.