Release Notes - December 2025
Reduced automation run time for Bitbucket and GitLab
Optimized git clone operations reduce run time and bandwidth usage for gitStream on Bitbucket and GitLab, helping you ship faster with less infrastructure strain.
New user-based triggers for tighter automation control
Ensure gitStream automations run only when the right people make changes. Use triggers.include.user and triggers.exclude.user to include or exclude bot accounts (e.g., Dependabot, Renovate, scanners), or to limit specific automations to approved team members only. This can be valuable for teams that want different workflows for developers vs. automated tools. Read the docs.
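As a sketch, a `.cm` configuration using these triggers might look like the following (the automation name and label are hypothetical; check the docs for exact syntax):

```yaml
# Hypothetical sketch of a gitStream .cm file using user-based triggers.
# Exact syntax may differ; consult the gitStream docs.
manifest:
  version: 1.0

triggers:
  exclude:
    user:
      - 'dependabot[bot]'   # skip Dependabot PRs
      - 'renovate[bot]'     # skip Renovate PRs

automations:
  label_human_prs:          # hypothetical automation name
    if:
      - true
    run:
      - action: add-label@v1
        args:
          label: 'human-authored'
```

With the exclude list in place, the automation only fires on PRs opened by human contributors.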
Plugins now support 3rd party dependencies
Build more flexible gitStream plugins by loading pre-installed libraries (e.g. axios, moment, and lodash) or installing runtime dependencies for async workflows. Read the docs.
AI PR Review count now includes clean reviews
If you've been tracking the number of reviewed PRs in your AI Code Review metrics dashboard, you can expect a higher volume moving forward. PRs without issues are now included, improving visibility into the automations your team runs and the high-quality code it creates.
New “LGTM” approval unlocks conditional workflows
is_LGTM is a new output variable from the code-review action, indicating whether a pull request has received an explicit “looks good to me” approval. Review results are now available directly in your automation workflows, enabling you to build conditional logic that triggers only when a PR has been explicitly approved. Read the docs.
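Assuming the code-review action's output is exposed to later automations, a conditional workflow could be sketched like this (the variable access syntax and label are illustrative, not confirmed; see the docs):

```yaml
# Hypothetical sketch: act only on PRs with an explicit LGTM approval.
# The exact way to reference the is_LGTM output may differ; see the docs.
automations:
  on_lgtm:                            # hypothetical automation name
    if:
      - {{ is_LGTM }}                 # output variable from the code-review action
    run:
      - action: add-label@v1
        args:
          label: 'lgtm-approved'      # hypothetical label
```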
OPA v4.1.3 upgrade
The OPA component now runs on v4.1.3, providing a more stable webhook receiver.
Release Notes - August 2025
Smarter, faster workflows with action-level execution control
gitStream now automatically skips unnecessary actions, reducing noise and accelerating PR workflows across GitHub, GitLab, and Bitbucket. The following actions only run when code is added or changed (commit pushed, non-draft PR created, or PR marked “ready for review”):
- add-code-comment
- code-review
- describe-changes
- explain-code-experts
Actions are skipped for any other trigger. Explicit triggers will continue to run, overriding the automatic skip logic. All other actions behave as before. See how it works.
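For example, an automation that must run on other events can declare its triggers explicitly. The sketch below assumes a per-automation `on` trigger; the event name and automation name are illustrative:

```yaml
# Hypothetical sketch: explicitly trigger code-review on label events,
# overriding the automatic skip logic. Event names are illustrative;
# see the gitStream docs for the supported trigger list.
automations:
  review_on_label:
    on:
      - label_added
    if:
      - true
    run:
      - action: code-review@v1
```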
Streamlined user and team management
These new APIs give more control and scalability over services, teams, and users:
- Services APIs: Define and update services programmatically, including repos and paths, with full CRUD operations. Read the docs.
- Teams API v2: Manage team structures at scale using bulk operations with CRUD operations for org-wide updates. Read the docs.
- Users API: Manage users with CRUD operations, including assigning users to teams and mapping identity across systems such as Git and project management tools. Read the docs.
You can also manage all users and teams directly from the LinearB app through a single interface. The new and improved page consolidates user details, permissions, team memberships, and connections to multiple platforms across Git and project management tools. For beta access to the management UI, contact your LinearB Customer Success Manager. Read the docs.
Developer surveys (beta)
Uncover additional team insight around developer sentiment and satisfaction with surveys. User admins can build surveys with out-of-the-box questions and distribute surveys via URL to start collecting anonymous feedback from team members. Analyze results in a heat map view with drill-down capabilities. For beta access, contact your LinearB Customer Success Manager. Read the docs.
Project Delivery Tracker for Azure DevOps
You can now use Project Delivery Tracker with Azure DevOps projects to measure delivery trends and drive continuous improvements. Read the docs.
Release Notes - November 2025
Improved accuracy for AI Code Reviews
Upgrade to Claude Sonnet 4.5
The latest Claude AI model, Sonnet 4.5, now powers the LinearB AI Code Review. With this upgrade, our internal testbench score improved by another 5% in accuracy, boosting review quality to provide you with more precise detection of issues and better context.
Language-specific review logic
AI Code Review now includes language-specific logic to improve accuracy and deliver more relevant, actionable feedback.
New rules were added for the following languages:
- SQL
- BigQuery (SQL variant)
- TypeScript
- C#
- SCSS
- HCL
- YAML
Clearer visuals for code suggestions in PR reviews
Reviews now visually flag issues that include suggested code corrections with a ⚒️ icon, helping you quickly distinguish between general comments and actionable fixes. This enhancement makes reviews easier to scan, helping you act on feedback faster and stay focused on what matters most in each PR.
When a suggested correction is available, you’ll see:
- A ⚒️ icon next to the issue title in the review summary
- A note in the comment body confirming a code suggestion is included
Measure AI adoption and impact across GitHub Copilot and Cursor
Now generally available to all users: track the real impact of GitHub Copilot and Cursor on your engineering productivity with out-of-the-box metrics dashboards in LinearB. Get a unified view of AI usage and performance across your organization:
- Measure adoption rates and daily active users for both tools
- Monitor AI-generated code suggestions and acceptance trends to uncover trust and productivity patterns
- Automatically detect and label PRs that involve AI tools to correlate activity with delivery metrics like cycle time, review time, and refactor rate
By centralizing your AI assistant metrics, you gain full context without toggling between platforms or sharing multiple API keys, making it easier to connect AI adoption directly to delivery outcomes. Learn more or read the docs on configuring GitHub Copilot and Cursor in LinearB.
Track Amazon Q adoption with the AI Insights Dashboard
You can now track AI usage metrics for Amazon Q, expanding support to 49 AI tools, including GitHub Copilot, Claude, Codex, Cursor, Gemini, and many more. Bring all your AI coding assistance usage metrics together in one place to monitor adoption, usage patterns, and productivity impact across your AI developer ecosystem.
Release Notes - October 2025
Clearer visibility into identified bugs and issues
The metrics dashboard for LinearB AI Review now tracks the total number of identified bugs and issues instead of the number of PRs with bugs and issues.
- Metrics labeled “PRs with Identified Bugs” and “PRs with Identified [Security, Performance, Readability, Maintainability, Scope] Issues” are replaced with “Identified Bugs” and “Identified [Security, Performance, Readability, Maintainability, Scope] Issues.”
- Each trend line now supports drill-downs, displaying PRs, repositories, and issues associated with that data point.
The previous metrics tracking PRs with identified issues remain available as predefined options when creating a custom dashboard. Learn more about findings in the Metrics Dashboard for LinearB AI Reviews.
Cleaner AI PR reviews for quicker fixes
AI Code Review now delivers cleaner, more focused feedback by:
- Grouping comments by commit rather than scattering them across multiple comments
- Automatically prioritizing the most critical issues when multiple findings are detected
- Filtering out repeated comments on the same issue
Learn more about AI Code Reviews using LinearB’s AI.
Delete unused Git integrations
You can now delete unused Git integrations or remove inactive repositories directly in Company Settings > Git. This update enhances data accuracy, reduces noise from inactive repositories, and improves security by limiting access to select codebases. For more info, check out the docs on managing Git integrations.
OPA v4.0.7 upgrade
The OPA component now runs on v4.0.7, simplifying installation and strengthening default security settings. The new version reduces manual configuration and enforces a safer default policy, enabling faster and more secure deployments.
Kubernetes Secrets support
We now support Kubernetes Secrets to securely store sensor credentials and configuration tokens. This update improves DevOps security by aligning with native Kubernetes Secrets management best practices.
Release Notes - September 2025
Updates to our packages
We introduced a new Essentials plan to make LinearB more accessible for engineering teams navigating the rapid change in engineering operations brought on by AI. Seats start at $29 per contributor per month and include 1,000 monthly credits (1,500 for Enterprise plan subscribers) for AI-powered PR automations.
If you’re already a LinearB customer, don’t worry! Nothing changes for you at this time.
AI Insights dashboard (beta)
The AI Insights dashboard lets you grasp how AI shapes your delivery process with out-of-the-box tracking across tools. You can:
- Surface issues flagged by AI code reviews before they create bottlenecks
- Compare usage across AI tools to spot patterns in adoption and effectiveness
- Track adoption across more than 24 AI tools used across repos
- Measure trust by tracking how often developers accept AI-generated code
With these insights, evaluate how AI is used and guide engineering teams towards best practices to drive AI productivity. This feature is currently available in the Essentials package and will soon be available to existing customers. Explore the docs.
Out-of-the-box tracking for GitHub Copilot and Cursor (beta)
Integrations for GitHub Copilot and Cursor are now supported to track AI metrics for each tool:
- Monitor AI adoption and usage trends across your organization
- Measure acceptance rates to understand trust in AI-generated code
Contact your Customer Success Manager for beta access. Explore the docs to learn more about configuring Copilot and Cursor.
Remote MCP server (beta)
Get more flexibility in interacting with your engineering data to boost engineering productivity. Use natural language to:
- Generate custom reports that highlight delivery performance and team health
- Share clear insights with executives and stakeholders
- Tap into AI-powered recommendations to identify patterns and improve productivity
To enable LinearB’s remote MCP server with Claude, check out our docs. To help you get started, explore LinearB MCP use cases.
Release Notes - July 2025
More accurate AI reviews with better context
We've enhanced on-the-fly Retrieval-Augmented Generation (RAG) to detect issues when code calls functions defined outside the current PR context. This helps identify:
- Incorrect argument types in function calls
- Mismatched number of arguments
- Missing await for async functions
- Issues in JavaScript, Python, and other dynamic languages
PR reviews now include project ticket details to instantly identify mismatches between requirements and implementation. Here's what's new:
- Automatically detect ticket keys in PRs
- Pull relevant details from your project management tool
- Flag scope gaps early in the review process
Trigger AI actions with commands - no config file needed
You can now run AI actions on demand by adding straightforward commands to your PR comments without needing a config file in the repo. Use commands such as:
- /gs review to trigger an AI code review
- /gs desc to generate a PR description
- /gs help to see supported actions
This is currently available for GitHub, with GitLab and Bitbucket support coming soon. Read the docs.
More secure with built-in AI security guardrails
AI features now validate every prompt before processing. Guardrails, powered by Prompt-Guard-66M, block jailbreak attempts and protect against malicious prompt injections, keeping your systems more secure without slowing you down.
Improved AI for generating PR descriptions
AI-generated PR descriptions are now powered by Sonnet 4 to improve context and clarity. We've also migrated the PR description functionality to the pr_agent service to improve performance and maintainability.
Support for PRs with large files
The maximum file size for PRs has increased from 1MB to 5MB, and compression functionality is now supported to process PRs with larger files. When a PR exceeds the processing limit, you’ll also see clearer, actionable guidance on best practices to help keep reviews moving smoothly.
Metrics for AI code reviews
A new AI Code Review Metrics Dashboard gives you complete visibility into how AI performs across your teams, pulling in data from PRs monitored by gitStream. Track key metrics such as:
- Number of reviewed PRs and total reviews to measure adoption
- AI-generated suggestions and lines accepted to assess quality and trust
- Issues detected across security, performance, maintainability, and readability
Filter by repo, contributor, or time range and share the insights that matter most. Read the docs on metrics and findings.
Real-time system status
You can now monitor gitStream system statuses across GitHub, GitLab, and Bitbucket in real time. Check out linearbstatus.com.
Sunset: Resource Allocation emails
To reduce noise in your inbox, we've stopped sending email summaries on Resource Allocation. You'll still find everything you need in your LinearB dashboard, which has real-time data and filtering capabilities.
Archived release notes
View our previous release notes.
Release Notes - June 2025
New Features
Teams
We’ve introduced a new Teams tab that brings together Iterations, Activity, Goals, and Coaching in one place.
- Pulse has been renamed to Iterations with a refreshed layout
- Track team-based iteration trends across multiple months
- Easily switch between current and completed views
- Quickly find your AI-generated iteration summaries
See Teams
Release Notes - February 2025
New Features
- Auto Contributor Merge—Automatically merges contributors across Git and PM tools when emails/provider IDs match, ensuring cleaner data and more accurate reporting. For further information, see Auto-Merge for Contributors in LinearB.
- GitStream for Bitbucket Cloud—GitStream now supports automation on Bitbucket Cloud, enabling teams to integrate GitStream’s automation capabilities seamlessly into their Bitbucket workflows. For more information, see GitStream Automation for Bitbucket Cloud.
Enhancements
- Custom Project Management Type Support—LinearB now supports custom project management (PM) types, allowing teams to filter and analyze their own Jira or Azure DevOps (ADO) entity types beyond standard system-defined types like Epics or User Stories. What’s improved:
  - Customers can now filter and analyze custom entity types (e.g., "EPICO" or other user-defined types)
  - These custom types are available in Resource Allocation, Investment Strategy, and Forecasting filters
  - Previously, filtering was limited to system-defined types only
  - This enhancement ensures greater flexibility, allowing teams to work with their own classifications rather than relying solely on predefined system types
- Improved Auto Repo Discovery for GitHub—Users can now define a regex pattern to automatically select organizations for repo discovery, eliminating manual selection and seamlessly adding new matching orgs. For further information, see Auto-Monitoring GitHub and GitLab Repositories.
- Developer Coaching Now Includes PM Data—The Wellness Workload widget now displays PM activities grouped by issue type, offering deeper insights into developer focus areas. For further information, see Wellness and Workload Tracking.
Beta Features
Contact LinearB Support for access.
- Team Git Scope—Enables teams to define and filter dashboards based on specific repositories and services they work on. For further information, see Filtering Team Insights with Git Scope.
Release Notes - January 2025
New Features
- Audit Log GA—The Audit Log feature in LinearB is now generally available (GA), providing teams with a detailed history of key actions and changes within the platform. This allows engineering leaders to track modifications, monitor user activity, and enhance security compliance with full visibility into project and workflow updates. For more information, see Using Audit Log.
- GitStream Soft Limits for Free Accounts—We are introducing GitStream Soft Limits to optimize our offering for free accounts. This update sets a monthly limit of 250 PRs per organization for free-tier users. For more information, see GitStream Soft Limits for Free Accounts.
- Failing Repo Indication—This new LinearB feature helps teams identify underperforming repositories based on key engineering metrics. By analyzing factors such as high PR rejection rates, excessive rework, long cycle times, or lack of activity, this feature provides clear alerts and insights to help engineering leaders proactively address bottlenecks, optimize workflows, and prevent project slowdowns.
Enhancements
- PM Custom Type support—PM Custom Type Support enhances LinearB’s flexibility by allowing teams to filter and analyze custom project management (PM) entity types from Jira and Azure DevOps (ADO). Instead of being limited to standard system-defined types like Epics or User Stories, teams can now use their own custom-defined issue types (e.g., “EPICO” or other unique entities) in key LinearB modules.
- New Design System
Beta Features
Contact LinearB Support for more details.
- GitStream for Bitbucket—LinearB's paid plans support connecting to on-premises Git servers, including Bitbucket Server (On-Prem). For more information, see Select Bitbucket Server as Your Git Provider.