Chrome 102: Window Controls Overlay, a Host of Finished Origin Trials, PWAs as File Handlers and More
An overview of the new features in the Chrome 102 beta release, including Window Controls Overlay for installed desktop web apps, a host of finished origin trials, PWAs as file handlers, and more.
When a passion for bass and brass helps build better tools
Exploring how Kevin Millikin's passion for bass and brass enriches his work as a software engineer on the DevTools team, as showcased at PyCon US.
Tackling multiple tasks with a single visual language model
Introducing Flamingo, a powerful visual language model (VLM) that achieves state-of-the-art results in few-shot learning for diverse multimodal tasks.
DeepMind’s latest research at ICLR 2022
DeepMind's research teams will present 29 papers, including 10 collaborations, at ICLR 2022, with work spanning AI generalisability, optimised learning, exploration, robust AI, and emergent communication, alongside sponsorship and workshop organisation support.
Stanford AI Lab Papers and Talks at ICLR 2022
A blogpost about the papers and talks from the Stanford AI Lab presented at the International Conference on Learning Representations (ICLR) 2022.
Learning Scala at SoundCloud
A backend developer shares their experience of learning Scala at SoundCloud after previously working with Golang.
Measuring Goodhart’s law
Exploring the challenges of optimising objectives that are difficult or costly to measure, in light of Goodhart's law.
Hierarchical text-conditional image generation with CLIP latents
Exploring hierarchical text-conditional image generation using CLIP latents
An empirical analysis of compute-optimal large language model training
An empirical study of how best to allocate a fixed compute budget between model size and training data when training large language models.