Databricks – What’s New in Azure Databricks at FabCon 2026: Lakebase, Lakeflow, and Genie
FabCon 2026 showcases Azure Databricks' Lakeflow, Lakebase, and Genie innovations—unifying data ingestion, operational data, and AI-assisted analytics on an open lakehouse foundation with deep Microsoft 365 integrations.
Google Cloud – Build a Multi-Agent System for Expert Content with Google ADK, MCP and Cloud Run - Part 1
Part 1 shows how to build a modular Multi-Agent System (Dev Signal) that discovers Reddit questions, grounds its findings in Google Cloud docs via MCP and the Google ADK, and generates blog posts and visuals; the system is deployed to Google Cloud Run with a long-term memory layer for future expansion.
Google Cloud – Streamline read scalability with Cloud SQL autoscaling read pools
Practical guide to scaling read workloads with Cloud SQL read pools and autoscaling, enabling a single read endpoint, automatic node adjustments, and cost-efficient high availability.
Google Cloud – Next-gen caching with Memorystore for Valkey 9.0, now GA
GA of Memorystore for Valkey 9.0 delivers massive throughput and lower latency with SIMD optimizations, pipeline memory prefetching, new HEXPIRE commands, geospatial querying, and cluster-enabled multi-database support for scalable, managed caching.
Google Cloud – Building Distributed AI Agents
Shows how to orchestrate a distributed team of specialized AI agents (researcher, judge, and orchestrator) as scalable microservices deployed on Cloud Run, with seamless frontend integration via the ADK and A2A protocol.
AWS ML – Introducing Nova Forge SDK, a seamless way to customize Nova models for enterprise AI
Nova Forge SDK unifies end-to-end LLM customization for enterprises, enabling developers to prepare data, train models, and deploy Nova variants across Amazon Bedrock and Amazon SageMaker AI with guided workflows and scalable automation.
AWS ML – Kick off Nova customization experiments using Nova Forge SDK
An end-to-end, Nova Forge SDK–driven guide to LLM customization—from data loading and supervised fine-tuning (SFT) to reinforcement fine-tuning (RFT), evaluation on a Stack Overflow classifier, and deployment on SageMaker AI Inference.
Apple ML – Prose2Policy (P2P): A Practical LLM Pipeline for Translating Natural-Language Access Policies into Executable Rego
An end-to-end, LLM-driven pipeline that translates natural-language access policies into executable Rego (OPA policy-as-code). It features policy detection, extraction, schema validation, linting, compilation, and automated testing, and is demonstrated on the ACRE dataset with high compile and test pass rates, enabling Zero Trust and auditable policy enforcement.
AWS ML – Migrate from Amazon Nova 1 to Amazon Nova 2 on Amazon Bedrock
End-to-end guidance for migrating from Amazon Nova 1 to Nova 2 Lite on Amazon Bedrock, detailing feature upgrades, API changes, migration steps, and production-ready evaluation to maximize reasoning, tool use, and throughput.
AWS ML – How Bark.com and AWS collaborated to build a scalable video generation solution
How Bark.com and AWS engineered a scalable, AI-powered video generation pipeline using SageMaker, Bedrock, and multi-GPU inference to automate creative ideation, ensure brand consistency, and accelerate content production.
AWS ML – Build an AI-Powered A/B testing engine using Amazon Bedrock
AI-powered, adaptive A/B testing on AWS using Amazon Bedrock and MCP to deliver real-time personalization with a serverless, scalable architecture.
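The core idea behind adaptive A/B testing, as opposed to fixed-split testing, can be illustrated with a minimal epsilon-greedy bandit sketch in plain Python. This is a generic illustration of the technique, not the article's Bedrock-based implementation; the variant names, smoothing prior, and conversion rates below are invented for the example.

```python
import random

def choose_variant(stats, epsilon=0.1):
    """Pick a variant: explore uniformly at random with probability epsilon,
    otherwise exploit the variant with the best observed conversion rate."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    # +1/+2 smoothing prior keeps unseen variants from being stuck at rate 0.
    return max(stats, key=lambda v: (stats[v]["conversions"] + 1) / (stats[v]["trials"] + 2))

def record_outcome(stats, variant, converted):
    """Update running counts after observing one user's outcome."""
    stats[variant]["trials"] += 1
    stats[variant]["conversions"] += int(converted)

# Simulated traffic: variant "b" truly converts at 25%, "a" at 5%,
# so the allocator should shift most traffic toward "b" over time.
stats = {"a": {"trials": 0, "conversions": 0}, "b": {"trials": 0, "conversions": 0}}
random.seed(0)
for _ in range(1000):
    v = choose_variant(stats)
    record_outcome(stats, v, converted=random.random() < (0.05 if v == "a" else 0.25))
```

Unlike a fixed 50/50 split, the allocator learns as results arrive, so fewer users are routed to the weaker variant during the test itself.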
AWS ML – Evaluating AI agents for production: A practical guide to Strands Evals
A practical guide to production-ready AI agent evaluation with Strands Evals, showing how to define cases and experiments, deploy LLM-based evaluators, and validate online and offline, multi-turn agent behavior.