[{"data":1,"prerenderedAt":847},["ShallowReactive",2],{"/en-us/blog/inside-look-how-gitlabs-test-platform-team-validates-ai-features":3,"navigation-en-us":47,"banner-en-us":468,"footer-en-us":478,"blog-post-authors-en-us-Mark Lapierre|Vincy Wilson":719,"blog-related-posts-en-us-inside-look-how-gitlabs-test-platform-team-validates-ai-features":746,"blog-promotions-en-us":785,"next-steps-en-us":837},{"id":4,"title":5,"authorSlugs":6,"authors":9,"body":12,"category":13,"categorySlug":13,"config":14,"content":18,"date":29,"description":19,"extension":30,"externalUrl":31,"featured":17,"heroImage":21,"isFeatured":17,"meta":32,"navigation":17,"path":33,"publishedDate":29,"rawbody":34,"seo":35,"slug":16,"stem":40,"tagSlugs":41,"tags":45,"template":15,"updatedDate":31,"__hash__":46},"blogPosts/en-us/blog/inside-look-how-gitlabs-test-platform-team-validates-ai-features.md","Inside look: How GitLab's Test Platform team validates AI features",[7,8],"mark-lapierre","vincy-wilson",[10,11],"Mark Lapierre","Vincy Wilson","AI is increasingly becoming a centerpiece of software development - many companies are integrating it throughout their DevSecOps workflows to improve productivity and increase efficiency. Because of this now-critical role, AI features should be tested and analyzed on an ongoing basis. In this article, we take you behind the scenes to learn how [GitLab's Test Platform team](https://handbook.gitlab.com/handbook/engineering/infrastructure/test-platform/) does this for [GitLab Duo](https://about.gitlab.com/gitlab-duo-agent-platform/) features by conducting performance validation, functional readiness, and continuous analysis across GitLab versions. With this three-pronged approach, GitLab aims to ensure that GitLab Duo features are performing optimally for our customers.\n\n> Discover the future of AI-driven software development with our GitLab 17 virtual launch event. 
[Watch today!](https://about.gitlab.com/eighteen/)\n\n## AI and testing\n\nAI's non-deterministic nature, where the same input can produce different outputs, makes ensuring a great user experience a challenge. So, when we integrated AI deeply into the GitLab DevSecOps Platform, we had to adapt our best practices to address this challenge.\nThe [Test Platform team's mission](https://handbook.gitlab.com/handbook/engineering/infrastructure/test-platform/) is to help enable the successful development and deployment of high-quality software applications with continuous analysis and efficiency to help ensure customer satisfaction. The key to achieving this is delivering tools that help increase standardization, repeatability, and test consistency.\nApplying this to GitLab Duo, our suite of AI tools to power DevSecOps workflows, means being able to continuously analyze its performance and identify opportunities for improvement. Our goal is to gain clear, actionable insights that will help us enhance GitLab Duo's capabilities and, as a result, better meet our customers' needs.\n\n## The need for continuous analysis of AI\n\nTo continuously assess GitLab Duo, we needed a mechanism for analyzing feature performance across releases. Therefore, we created an AI continuous analysis tool to automate the necessary data collection and analysis.\n![diagram of how the AI continuous analysis tool works](https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099041/Blog/Content%20Images/Blog/Content%20Images/image1_aHR0cHM6_1750099041503.png)\n\n\u003Ccenter>\u003Cem>How the AI continuous analysis tool works\u003C/em>\u003C/center>\n\n### Building the AI continuous analysis tool\n\nTo gain detailed, user-centric insights, we needed to gather data in the appropriate context – in this case, the integrated development environment (IDE), as it is where most of our users access GitLab Duo. 
We narrowed this down further by opting for the Visual Studio Code IDE, a popular choice within our community. Once the environment was chosen, we automated entering code prompts and recording the provided suggestions. The interactions with the IDE are handled by the [WebdriverIO VSCode service](https://github.com/webdriverio-community/wdio-vscode-service), and CI operations are handled through [GitLab CI/CD](https://docs.gitlab.com/ci/). This automation significantly scaled up data collection and eliminated repetitive tasks for GitLab team members. To start, we have focused on measuring the performance of GitLab Duo Code Suggestions, but plan to expand to other GitLab AI features in the future.\n\n### Analyzing the data\n\nAt the core of our AI continuous analysis tool is a mechanism for collecting and analyzing code suggestions. This involves automatically entering code prompts, recording the suggestions provided, and logging timestamps of relevant events. We measure the time from when the tool provides an input until a suggestion is displayed in the UI. In addition, we record the logs created by the IDE, which report the time it took for each suggestion response to be received. With this data, we can compare the latency of suggestions in terms of how long it takes the backend AI service to send a response to the IDE, and how long it takes for the IDE to display the suggestion for the user. We then can compare latency and other metrics of GitLab Duo features across multiple releases. 
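To make those comparisons concrete, here is a minimal sketch in plain JavaScript of how recorded per-suggestion latencies can be reduced to the mean and percentile figures compared across releases (the function names are illustrative, not the tool's actual API):

```javascript
// Illustrative sketch: summarize recorded suggestion latencies (in ms)
// and compare two releases. Not the continuous analysis tool's real code.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank percentile, clamped to valid indices.
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

function summarize(samples) {
  const mean = samples.reduce((sum, v) => sum + v, 0) / samples.length;
  return { mean, p50: percentile(samples, 50), p90: percentile(samples, 90) };
}

// Relative change in mean latency between a baseline and a candidate release.
function compareReleases(baseline, candidate) {
  const a = summarize(baseline);
  const b = summarize(candidate);
  return { baseline: a, candidate: b, meanDeltaPct: ((b.mean - a.mean) / a.mean) * 100 };
}
```

Comparing a value like `meanDeltaPct` against a threshold is then enough to flag a release-over-release regression for closer review.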
The GitLab platform has the ability to analyze [code quality](https://docs.gitlab.com/ci/testing/code_quality/) and [application security](https://docs.gitlab.com/user/application_security/), so we leverage these capabilities to enable the AI continuous analysis tool to analyze the quality and security of the suggestions provided by GitLab Duo.\n\n### Improving AI-driven suggestions\n\nOnce the collected data is analyzed, the tool automatically generates a single report summarizing the results. The report includes key statistics (e.g., mean latency and/or latency at various percentiles), descriptions of notable differences or patterns, links to raw data, and CI/CD pipeline logs and artifacts. The tool also records a video of each prompt and suggestion, which allows us to review specific cases where differences are highlighted. This creates an opportunity for the UX researchers and development teams to take action on the insights gained, helping to improve the overall user experience and system performance.\n\nThe tool is at an early stage of development, but it has already helped us improve the experience for GitLab Duo Code Suggestions users. Moving forward, we plan to expand our tool’s capabilities, incorporate more metrics, and both consume input from and provide input to our [Centralized Evaluation Framework](https://docs.gitlab.com/development/ai_features/ai_evaluation_guidelines/), which validates AI models, to further enhance our continuous analysis.\n\n## Performance validation\n\nAs AI has become integral to GitLab's offerings, optimizing the performance of AI-driven features is essential. Our performance tests aim to evaluate and monitor the performance of the GitLab components that interact with AI service backends. While we can monitor the performance of these external services as part of our production environment's observability, we cannot control them. Thus, including third-party services in our performance testing would be expensive and yield limited benefits. 
Although third-party AI providers contribute to overall latency, the latency attributable to GitLab components is still important to check. We aim to detect changes that might lead to performance degradation by monitoring GitLab components.\n\n### Building the AI performance validation test environment\n\nIn our AI test environments, the [AI Gateway](https://docs.gitlab.com/architecture/blueprints/ai_gateway/#summary), a standalone service that gives GitLab users access to AI features, is configured to return mocked responses, enabling us to test the performance of AI-powered features without interacting with third-party AI service providers. We conduct AI performance tests on [reference architecture environments of various sizes](https://docs.gitlab.com/administration/reference_architectures/). Additionally, we evaluate new tests in their own isolated environment before they're added to the larger environments.\n\n### Testing multi-regional latency\n\nMulti-regional latency tests need to be run from various geolocations to validate that requests are being served from a suitable location close to the source of the request. We do this today using the [GitLab Environment Toolkit](https://gitlab.com/gitlab-org/gitlab-environment-toolkit). The toolkit provisions an environment in the identified region to test (note: both the AI Gateway and the provisioned environment are in the same region), then uses the [GitLab Performance Tool](https://gitlab.com/gitlab-org/quality/performance) to run tests to measure time to first byte (TTFB). TTFB is our way of measuring the time to the first part of the response being rendered, which contributes to the perceived latency that a customer experiences. 
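As a simplified sketch of this measurement (illustrative plain JavaScript, not the GitLab Performance Tool's actual k6 code), a streamed response reduces to a TTFB figure, plus a guard so that a fast but empty reply is not mistaken for a good result:

```javascript
// Illustrative sketch only: reduce timestamps recorded while streaming a
// response to a TTFB figure, and flag empty bodies so a fast-but-empty
// reply cannot pass as a good result.
function evaluateStream({ requestStartMs, firstByteMs, body }) {
  const ttfbMs = firstByteMs - requestStartMs;
  const nonEmpty = typeof body === "string" && body.trim().length > 0;
  return { ttfbMs, pass: nonEmpty, reason: nonEmpty ? "ok" : "empty response body" };
}
```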
To keep this measurement meaningful, our tests also check that the [response itself isn't empty](https://gitlab.com/gitlab-org/quality/performance/-/blob/cee8bef023e590e6ca75828e49f5c7c596581e06/k6/tests/experimental/api_v4_code_suggestions_generation_streaming.js#L70).\n\nWe are expanding our tests further to continue measuring perceived latency from a customer’s perspective. We have captured a set of baseline response times that indicate how a specific set of regions performed when the test environment was in a known good state. These baselines allow us to compare subsequent environment updates and other regions to this known state to evaluate the impact of changes. These baseline measurements can be updated after major updates to ensure they stay relevant in the future.\n\nNote: As of this article's publication date, we have AI Gateway deployments across the U.S., Europe, and Asia. To learn more, visit our [handbook page](https://handbook.gitlab.com/handbook/engineering/development/data-science/ai-powered/ai-framework/#-aigw-region-deployments).\n\n## Functionality\n\nTo enable customers to confidently and reliably leverage AI, we must continuously work to ensure our AI features function as expected.\n\n### Unit and integration tests\n\nFeatures that leverage AI models still require rigorous automated tests, which help engineers develop new features and changes confidently. However, since AI features can involve integrating with third-party AI providers, we must be careful to stub any external API calls to help ensure our tests are fast and reliable.\n\nFor a comprehensive look at testing at GitLab, see our [testing standards and style guidelines](https://docs.gitlab.com/development/testing_guide/).\n\n### End-to-end tests\n\nEnd-to-end testing is a strategy for checking whether the application works as expected across the entire software stack and architecture. 
We've implemented it in two ways for GitLab Duo testing: using real AI-generated responses and using mock AI-generated responses.\n\n![validating features - image 2](https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099041/Blog/Content%20Images/Blog/Content%20Images/image2_aHR0cHM6_1750099041504.png)\n\n\u003Ccenter>\u003Cem>End-to-end test workflow\u003C/em>\u003C/center>\n\n#### Using real AI-generated responses\n\nAlthough costly, end-to-end tests are important to help ensure the entire user experience functions as expected. Since AI models are non-deterministic, end-to-end test assertions for validating real AI-generated responses should be loose enough to help ensure the feature functions without relying on a response that may change. This might mean an assertion that checks for some response with no errors or for a response we are certain to receive.\n\nAI-driven functionality is not only accessible from within the GitLab application, so we must also consider user workflows for other applications that leverage these features. For example, to cover the use case of a developer requesting code suggestions in [IntelliJ IDEA](https://www.jetbrains.com/idea/) using the GitLab Duo plugin, we need to drive the IntelliJ application to simulate a user workflow. Similarly, to ensure that the GitLab Duo Chat experience is consistent in VS Code, we must drive the VS Code application and exercise the GitLab Workflow extension. Working to ensure these workflows are covered helps us maintain a consistently great developer experience across all GitLab products.\n\n#### Using mock AI-generated responses\n\nIn addition to end-to-end tests using real AI-generated responses, we run some end-to-end tests against test environments configured to return mock responses. 
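As an illustration of what a "loose" assertion can look like in practice (a hypothetical helper in plain JavaScript, not our actual test code):

```javascript
// Hypothetical loose assertion for a non-deterministic AI response:
// assert that *a* usable suggestion arrived, without pinning exact content,
// since the same prompt can yield different completions.
function assertUsableSuggestion(response) {
  if (response.error) {
    throw new Error(`code suggestion request failed: ${response.error}`);
  }
  if (typeof response.suggestion !== "string" || response.suggestion.trim() === "") {
    throw new Error("expected a non-empty code suggestion");
  }
  return true;
}
```

Because it doesn't pin exact content, the same check passes whether the response came from a real model or from a mocked backend.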
Running against mock responses allows us to more frequently verify changes to GitLab code and components that don’t depend on responses generated by an AI model.\n\n> For a closer look at end-to-end testing, read our [end-to-end testing guide](https://docs.gitlab.com/development/testing_guide/end_to_end/).\n\n### Exploratory testing and dogfooding\n\nAI features are built by humans for humans. At GitLab, exploratory testing and dogfooding greatly benefit us. GitLab team members are passionate about what features get shipped, and insights from internal usage are invaluable in shaping the direction of AI features.\n\n[Exploratory testing](https://about.gitlab.com/topics/devops/devops-test-automation/#test-automation-stages) allows the team to creatively exercise features to help ensure edge case bugs are identified and resolved. Dogfooding encourages team members to use AI features in their daily workflows, which helps us identify real issues from real users. For a comprehensive look at how we dogfood AI features, see [Developing GitLab Duo: How we are dogfooding our AI features](https://about.gitlab.com/blog/developing-gitlab-duo-how-we-are-dogfooding-our-ai-features/).\n\n## Get started with GitLab Duo\n\nWe hope this article gives you insight into how we are validating AI features at GitLab. We have integrated our team's process into our overall development as we iterate on GitLab Duo features. 
We encourage you to try GitLab Duo in your organization and reap the benefits of AI-powered workflows.\n\n> Start a [free trial of GitLab Duo](https://about.gitlab.com/gitlab-duo-agent-platform/#free-trial) today!\n\n_Members of the GitLab Test Platform team contributed to this article._","ai-ml",{"template":15,"slug":16,"featured":17},"BlogPost","inside-look-how-gitlabs-test-platform-team-validates-ai-features",true,{"title":5,"description":19,"authors":20,"heroImage":21,"tags":22,"category":13,"date":29,"body":12},"Learn how we continuously analyze AI feature performance, including testing latency worldwide, and get to know our new AI continuous analysis tool.",[10,11],"https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099033/Blog/Hero%20Images/Blog/Hero%20Images/blog-image-template-1800x945%20%2811%29_78Dav6FR9EGjhebHWuBVan_1750099033422.png",[23,24,25,26,27,28],"AI/ML","features","DevSecOps platform","inside GitLab","testing","performance","2024-06-03","md",null,{},"/en-us/blog/inside-look-how-gitlabs-test-platform-team-validates-ai-features","---\nseo:\n  title: \"Inside look: How GitLab's Test Platform team validates AI features\"\n  description: >-\n    Learn how we continuously analyze AI feature performance, including testing\n    latency worldwide, and get to know our new AI continuous analysis tool.\n  ogTitle: \"Inside look: How GitLab's Test Platform team validates AI features\"\n  ogDescription: >-\n    Learn how we continuously analyze AI feature performance, including testing\n    latency worldwide, and get to know our new AI continuous analysis tool.\n  noIndex: false\n  ogImage: >-\n    https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099033/Blog/Hero%20Images/Blog/Hero%20Images/blog-image-template-1800x945%20%2811%29_78Dav6FR9EGjhebHWuBVan_1750099033422.png\n  ogUrl: >-\n    https://about.gitlab.com/blog/inside-look-how-gitlabs-test-platform-team-validates-ai-features\n  ogSiteName: https://about.gitlab.com\n  ogType: 
article\n  canonicalUrls: >-\n    https://about.gitlab.com/blog/inside-look-how-gitlabs-test-platform-team-validates-ai-features\ntitle: \"Inside look: How GitLab's Test Platform team validates AI features\"\ndescription: Learn how we continuously analyze AI feature performance, including testing latency worldwide, and get to know our new AI continuous analysis tool.\nauthors:\n  - Mark Lapierre\n  - Vincy Wilson\nheroImage: https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099033/Blog/Hero%20Images/Blog/Hero%20Images/blog-image-template-1800x945%20%2811%29_78Dav6FR9EGjhebHWuBVan_1750099033422.png\ntags:\n  - AI/ML\n  - features\n  - DevSecOps platform\n  - inside GitLab\n  - testing\n  - performance\ncategory: ai-ml\ndate: '2024-06-03'\nslug: inside-look-how-gitlabs-test-platform-team-validates-ai-features\nfeatured: true\ntemplate: BlogPost\n---\n\nAI is increasingly becoming a centerpiece of software development - many companies are integrating it throughout their DevSecOps workflows to improve productivity and increase efficiency. Because of this now-critical role, AI features should be tested and analyzed on an ongoing basis. In this article, we take you behind the scenes to learn how [GitLab's Test Platform team](https://handbook.gitlab.com/handbook/engineering/infrastructure/test-platform/) does this for [GitLab Duo](https://about.gitlab.com/gitlab-duo-agent-platform/) features by conducting performance validation, functional readiness, and continuous analysis across GitLab versions. With this three-pronged approach, GitLab aims to ensure that GitLab Duo features are performing optimally for our customers.\n\n> Discover the future of AI-driven software development with our GitLab 17 virtual launch event. [Watch today!](https://about.gitlab.com/eighteen/)\n\n## AI and testing\n\nAI's non-deterministic nature, where the same input can produce different outputs, makes ensuring a great user experience a challenge. 
So, when we integrated AI deeply into the GitLab DevSecOps Platform, we had to adapt our best practices to address this challenge.\nThe [Test Platform team's mission](https://handbook.gitlab.com/handbook/engineering/infrastructure/test-platform/) is to help enable the successful development and deployment of high-quality software applications with continuous analysis and efficiency to help ensure customer satisfaction. The key to achieving this is delivering tools that help increase standardization, repeatability, and test consistency.\nApplying this to GitLab Duo, our suite of AI tools to power DevSecOps workflows, means being able to continuously analyze its performance and identify opportunities for improvement. Our goal is to gain clear, actionable insights that will help us enhance GitLab Duo's capabilities and, as a result, better meet our customers' needs.\n\n## The need for continuous analysis of AI\n\nTo continuously assess GitLab Duo, we needed a mechanism for analyzing feature performance across releases. Therefore, we created an AI continuous analysis tool to automate the necessary data collection and analysis.\n![diagram of how the AI continuous analysis tool works](https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099041/Blog/Content%20Images/Blog/Content%20Images/image1_aHR0cHM6_1750099041503.png)\n\n\u003Ccenter>\u003Cem>How the AI continuous analysis tool works\u003C/em>\u003C/center>\n\n### Building the AI continuous analysis tool\n\nTo gain detailed, user-centric insights, we needed to gather data in the appropriate context – in this case, the integrated development environment (IDE), as it is where most of our users access GitLab Duo. We narrowed this down further by opting for the Visual Studio Code IDE, a popular choice within our community. Once the environment was chosen, we automated entering code prompts and recording the provided suggestions. 
The interactions with the IDE are handled by the [WebdriverIO VSCode service](https://github.com/webdriverio-community/wdio-vscode-service), and CI operations are handled through [GitLab CI/CD](https://docs.gitlab.com/ci/). This automation significantly scaled up data collection and eliminated repetitive tasks for GitLab team members. To start, we have focused on measuring the performance of GitLab Duo Code Suggestions, but plan to expand to other GitLab AI features in the future.\n\n### Analyzing the data\n\nAt the core of our AI continuous analysis tool is a mechanism for collecting and analyzing code suggestions. This involves automatically entering code prompts, recording the suggestions provided, and logging timestamps of relevant events. We measure the time from when the tool provides an input until a suggestion is displayed in the UI. In addition, we record the logs created by the IDE, which report the time it took for each suggestion response to be received. With this data, we can compare the latency of suggestions in terms of how long it takes the backend AI service to send a response to the IDE, and how long it takes for the IDE to display the suggestion for the user. We then can compare latency and other metrics of GitLab Duo features across multiple releases. The GitLab platform has the ability to analyze [code quality](https://docs.gitlab.com/ci/testing/code_quality/) and [application security](https://docs.gitlab.com/user/application_security/), so we leverage these capabilities to enable the AI continuous analysis tool to analyze the quality and security of the suggestions provided by GitLab Duo.\n\n### Improving AI-driven suggestions\n\nOnce the collected data is analyzed, the tool automatically generates a single report summarizing the results. The report includes key statistics (e.g., mean latency and/or latency at various percentiles), descriptions of notable differences or patterns, links to raw data, and CI/CD pipeline logs and artifacts. 
The tool also records a video of each prompt and suggestion, which allows us to review specific cases where differences are highlighted. This creates an opportunity for the UX researchers and development teams to take action on the insights gained, helping to improve the overall user experience and system performance.\n\nThe tool is at an early stage of development, but it has already helped us improve the experience for GitLab Duo Code Suggestions users. Moving forward, we plan to expand our tool’s capabilities, incorporate more metrics, and both consume input from and provide input to our [Centralized Evaluation Framework](https://docs.gitlab.com/development/ai_features/ai_evaluation_guidelines/), which validates AI models, to further enhance our continuous analysis.\n\n## Performance validation\n\nAs AI has become integral to GitLab's offerings, optimizing the performance of AI-driven features is essential. Our performance tests aim to evaluate and monitor the performance of the GitLab components that interact with AI service backends. While we can monitor the performance of these external services as part of our production environment's observability, we cannot control them. Thus, including third-party services in our performance testing would be expensive and yield limited benefits. Although third-party AI providers contribute to overall latency, the latency attributable to GitLab components is still important to check. We aim to detect changes that might lead to performance degradation by monitoring GitLab components.\n\n### Building the AI performance validation test environment\n\nIn our AI test environments, the [AI Gateway](https://docs.gitlab.com/architecture/blueprints/ai_gateway/#summary), a standalone service that gives GitLab users access to AI features, is configured to return mocked responses, enabling us to test the performance of AI-powered features without interacting with third-party AI service providers. 
We conduct AI performance tests on [reference architecture environments of various sizes](https://docs.gitlab.com/administration/reference_architectures/). Additionally, we evaluate new tests in their own isolated environment before they're added to the larger environments.\n\n### Testing multi-regional latency\n\nMulti-regional latency tests need to be run from various geolocations to validate that requests are being served from a suitable location close to the source of the request. We do this today using the [GitLab Environment Toolkit](https://gitlab.com/gitlab-org/gitlab-environment-toolkit). The toolkit provisions an environment in the identified region to test (note: both the AI Gateway and the provisioned environment are in the same region), then uses the [GitLab Performance Tool](https://gitlab.com/gitlab-org/quality/performance) to run tests to measure time to first byte (TTFB). TTFB is our way of measuring the time to the first part of the response being rendered, which contributes to the perceived latency that a customer experiences. To keep this measurement meaningful, our tests also check that the [response itself isn't empty](https://gitlab.com/gitlab-org/quality/performance/-/blob/cee8bef023e590e6ca75828e49f5c7c596581e06/k6/tests/experimental/api_v4_code_suggestions_generation_streaming.js#L70).\n\nWe are expanding our tests further to continue measuring perceived latency from a customer’s perspective. We have captured a set of baseline response times that indicate how a specific set of regions performed when the test environment was in a known good state. These baselines allow us to compare subsequent environment updates and other regions to this known state to evaluate the impact of changes. These baseline measurements can be updated after major updates to ensure they stay relevant in the future.\n\nNote: As of this article's publication date, we have AI Gateway deployments across the U.S., Europe, and Asia. 
To learn more, visit our [handbook page](https://handbook.gitlab.com/handbook/engineering/development/data-science/ai-powered/ai-framework/#-aigw-region-deployments).\n\n## Functionality\n\nTo enable customers to confidently and reliably leverage AI, we must continuously work to ensure our AI features function as expected.\n\n### Unit and integration tests\n\nFeatures that leverage AI models still require rigorous automated tests, which help engineers develop new features and changes confidently. However, since AI features can involve integrating with third-party AI providers, we must be careful to stub any external API calls to help ensure our tests are fast and reliable.\n\nFor a comprehensive look at testing at GitLab, see our [testing standards and style guidelines](https://docs.gitlab.com/development/testing_guide/).\n\n### End-to-end tests\n\nEnd-to-end testing is a strategy for checking whether the application works as expected across the entire software stack and architecture. We've implemented it in two ways for GitLab Duo testing: using real AI-generated responses and using mock AI-generated responses.\n\n![validating features - image 2](https://res.cloudinary.com/about-gitlab-com/image/upload/v1750099041/Blog/Content%20Images/Blog/Content%20Images/image2_aHR0cHM6_1750099041504.png)\n\n\u003Ccenter>\u003Cem>End-to-end test workflow\u003C/em>\u003C/center>\n\n#### Using real AI-generated responses\n\nAlthough costly, end-to-end tests are important to help ensure the entire user experience functions as expected. Since AI models are non-deterministic, end-to-end test assertions for validating real AI-generated responses should be loose enough to help ensure the feature functions without relying on a response that may change. 
This might mean an assertion that checks for some response with no errors or for a response we are certain to receive.\n\nAI-driven functionality is not only accessible from within the GitLab application, so we must also consider user workflows for other applications that leverage these features. For example, to cover the use case of a developer requesting code suggestions in [IntelliJ IDEA](https://www.jetbrains.com/idea/) using the GitLab Duo plugin, we need to drive the IntelliJ application to simulate a user workflow. Similarly, to ensure that the GitLab Duo Chat experience is consistent in VS Code, we must drive the VS Code application and exercise the GitLab Workflow extension. Working to ensure these workflows are covered helps us maintain a consistently great developer experience across all GitLab products.\n\n#### Using mock AI-generated responses\n\nIn addition to end-to-end tests using real AI-generated responses, we run some end-to-end tests against test environments configured to return mock responses. Running against mock responses allows us to more frequently verify changes to GitLab code and components that don’t depend on responses generated by an AI model.\n\n> For a closer look at end-to-end testing, read our [end-to-end testing guide](https://docs.gitlab.com/development/testing_guide/end_to_end/).\n\n### Exploratory testing and dogfooding\n\nAI features are built by humans for humans. At GitLab, exploratory testing and dogfooding greatly benefit us. GitLab team members are passionate about what features get shipped, and insights from internal usage are invaluable in shaping the direction of AI features.\n\n[Exploratory testing](https://about.gitlab.com/topics/devops/devops-test-automation/#test-automation-stages) allows the team to creatively exercise features to help ensure edge case bugs are identified and resolved. Dogfooding encourages team members to use AI features in their daily workflows, which helps us identify real issues from real users. 
For a comprehensive look at how we dogfood AI features, look at [Developing GitLab Duo: How we are dogfooding our AI features](https://about.gitlab.com/blog/developing-gitlab-duo-how-we-are-dogfooding-our-ai-features/).\n\n## Get started with GitLab Duo\nHopefully this article gives you insight into how we are validating AI features at GitLab. We have integrated our team's process into our overall development as we iterate on GitLab Duo features. We encourage you to try GitLab Duo in your organization and reap the benefits of AI-powered workflows.\n\n> Start a [free trial of GitLab Duo](https://about.gitlab.com/gitlab-duo-agent-platform/#free-trial) today!\n\n_Members of the GitLab Test Platform team contributed to this article._\n",{"title":5,"description":19,"ogTitle":5,"ogDescription":19,"noIndex":36,"ogImage":21,"ogUrl":37,"ogSiteName":38,"ogType":39,"canonicalUrls":37},false,"https://about.gitlab.com/blog/inside-look-how-gitlabs-test-platform-team-validates-ai-features","https://about.gitlab.com","article","en-us/blog/inside-look-how-gitlabs-test-platform-team-validates-ai-features",[42,24,43,44,27,28],"aiml","devsecops-platform","inside-gitlab",[23,24,25,26,27,28],"DSUh_oJiugjwn-bt9mBK1Uk3MUPOLtL9gtLYgUpelTQ",{"logo":48,"freeTrial":53,"sales":58,"login":63,"items":68,"search":388,"minimal":419,"duo":438,"switchNav":447,"pricingDeployment":458},{"config":49},{"href":50,"dataGaName":51,"dataGaLocation":52},"/","gitlab logo","header",{"text":54,"config":55},"Get free trial",{"href":56,"dataGaName":57,"dataGaLocation":52},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com&glm_content=default-saas-trial/","free trial",{"text":59,"config":60},"Talk to sales",{"href":61,"dataGaName":62,"dataGaLocation":52},"/sales/","sales",{"text":64,"config":65},"Sign in",{"href":66,"dataGaName":67,"dataGaLocation":52},"https://gitlab.com/users/sign_in/","sign 
in",[69,98,198,203,307,368],{"text":70,"config":71,"menu":73},"Platform",{"dataNavLevelOne":72},"platform",{"type":74,"columns":75},"cards",[76,82,90],{"title":70,"description":77,"link":78},"The intelligent orchestration platform for DevSecOps",{"text":79,"config":80},"Explore our Platform",{"href":81,"dataGaName":72,"dataGaLocation":52},"/platform/",{"title":83,"description":84,"link":85},"GitLab Duo Agent Platform","Agentic AI for the entire software lifecycle",{"text":86,"config":87},"Meet GitLab Duo",{"href":88,"dataGaName":89,"dataGaLocation":52},"/gitlab-duo-agent-platform/","gitlab duo agent platform",{"title":91,"description":92,"link":93},"Why GitLab","See the top reasons enterprises choose GitLab",{"text":94,"config":95},"Learn more",{"href":96,"dataGaName":97,"dataGaLocation":52},"/why-gitlab/","why gitlab",{"text":99,"left":17,"config":100,"menu":102},"Product",{"dataNavLevelOne":101},"solutions",{"type":103,"link":104,"columns":108,"feature":177},"lists",{"text":105,"config":106},"View all Solutions",{"href":107,"dataGaName":101,"dataGaLocation":52},"/solutions/",[109,133,156],{"title":110,"description":111,"link":112,"items":117},"Automation","CI/CD and automation to accelerate deployment",{"config":113},{"icon":114,"href":115,"dataGaName":116,"dataGaLocation":52},"AutomatedCodeAlt","/solutions/delivery-automation/","automated software delivery",[118,122,125,129],{"text":119,"config":120},"CI/CD",{"href":121,"dataGaLocation":52,"dataGaName":119},"/solutions/continuous-integration/",{"text":83,"config":123},{"href":88,"dataGaLocation":52,"dataGaName":124},"gitlab duo agent platform - product menu",{"text":126,"config":127},"Source Code Management",{"href":128,"dataGaLocation":52,"dataGaName":126},"/solutions/source-code-management/",{"text":130,"config":131},"Automated Software Delivery",{"href":115,"dataGaLocation":52,"dataGaName":132},"Automated software delivery",{"title":134,"description":135,"link":136,"items":141},"Security","Deliver code faster 
without compromising security",{"config":137},{"href":138,"dataGaName":139,"dataGaLocation":52,"icon":140},"/solutions/application-security-testing/","security and compliance","ShieldCheckLight",[142,146,151],{"text":143,"config":144},"Application Security Testing",{"href":138,"dataGaName":145,"dataGaLocation":52},"Application security testing",{"text":147,"config":148},"Software Supply Chain Security",{"href":149,"dataGaLocation":52,"dataGaName":150},"/solutions/supply-chain/","Software supply chain security",{"text":152,"config":153},"Software Compliance",{"href":154,"dataGaName":155,"dataGaLocation":52},"/solutions/software-compliance/","software compliance",{"title":157,"link":158,"items":163},"Measurement",{"config":159},{"icon":160,"href":161,"dataGaName":162,"dataGaLocation":52},"DigitalTransformation","/solutions/visibility-measurement/","visibility and measurement",[164,168,172],{"text":165,"config":166},"Visibility & Measurement",{"href":161,"dataGaLocation":52,"dataGaName":167},"Visibility and Measurement",{"text":169,"config":170},"Value Stream Management",{"href":171,"dataGaLocation":52,"dataGaName":169},"/solutions/value-stream-management/",{"text":173,"config":174},"Analytics & Insights",{"href":175,"dataGaLocation":52,"dataGaName":176},"/solutions/analytics-and-insights/","Analytics and insights",{"title":178,"type":103,"items":179},"GitLab for",[180,186,192],{"text":181,"config":182},"Enterprise",{"icon":183,"href":184,"dataGaLocation":52,"dataGaName":185},"Building","/enterprise/","enterprise",{"text":187,"config":188},"Small Business",{"icon":189,"href":190,"dataGaLocation":52,"dataGaName":191},"Work","/small-business/","small business",{"text":193,"config":194},"Public Sector",{"icon":195,"href":196,"dataGaLocation":52,"dataGaName":197},"Organization","/solutions/public-sector/","public 
sector",{"text":199,"config":200},"Pricing",{"href":201,"dataGaName":202,"dataGaLocation":52,"dataNavLevelOne":202},"/pricing/","pricing",{"text":204,"config":205,"menu":207},"Resources",{"dataNavLevelOne":206},"resources",{"type":103,"link":208,"columns":212,"feature":296},{"text":209,"config":210},"View all resources",{"href":211,"dataGaName":206,"dataGaLocation":52},"/resources/",[213,246,268],{"title":214,"items":215},"Getting started",[216,221,226,231,236,241],{"text":217,"config":218},"Install",{"href":219,"dataGaName":220,"dataGaLocation":52},"/install/","install",{"text":222,"config":223},"Quick start guides",{"href":224,"dataGaName":225,"dataGaLocation":52},"/get-started/","quick setup checklists",{"text":227,"config":228},"Learn",{"href":229,"dataGaLocation":52,"dataGaName":230},"https://university.gitlab.com/","learn",{"text":232,"config":233},"Product documentation",{"href":234,"dataGaName":235,"dataGaLocation":52},"https://docs.gitlab.com/","product documentation",{"text":237,"config":238},"Best practice videos",{"href":239,"dataGaName":240,"dataGaLocation":52},"/getting-started-videos/","best practice videos",{"text":242,"config":243},"Integrations",{"href":244,"dataGaName":245,"dataGaLocation":52},"/integrations/","integrations",{"title":247,"items":248},"Discover",[249,254,259,263],{"text":250,"config":251},"Customer success stories",{"href":252,"dataGaName":253,"dataGaLocation":52},"/customers/","customer success stories",{"text":255,"config":256},"Blog",{"href":257,"dataGaName":258,"dataGaLocation":52},"/blog/","blog",{"text":260,"config":261},"The Source",{"href":262,"dataGaName":258,"dataGaLocation":52},"/the-source/",{"text":264,"config":265},"Remote",{"href":266,"dataGaName":267,"dataGaLocation":52},"https://handbook.gitlab.com/handbook/company/culture/all-remote/","remote",{"title":269,"items":270},"Connect",[271,276,281,286,291],{"text":272,"config":273},"GitLab 
Services",{"href":274,"dataGaName":275,"dataGaLocation":52},"/services/","services",{"text":277,"config":278},"Community",{"href":279,"dataGaName":280,"dataGaLocation":52},"/community/","community",{"text":282,"config":283},"Forum",{"href":284,"dataGaName":285,"dataGaLocation":52},"https://forum.gitlab.com/","forum",{"text":287,"config":288},"Events",{"href":289,"dataGaName":290,"dataGaLocation":52},"/events/","events",{"text":292,"config":293},"Partners",{"href":294,"dataGaName":295,"dataGaLocation":52},"/partners/","partners",{"config":297,"title":300,"text":301,"link":302},{"background":298,"textColor":299},"url('https://res.cloudinary.com/about-gitlab-com/image/upload/v1777322348/qpq8yrgn8knii57omj0c.png')","#000","What’s new in GitLab","Stay updated with our latest features and improvements.",{"text":303,"config":304},"Read the latest",{"href":305,"dataGaName":306,"dataGaLocation":52},"/releases/whats-new/","whats new",{"text":308,"config":309,"menu":311},"Company",{"dataNavLevelOne":310},"company",{"type":103,"columns":312},[313],{"items":314},[315,320,326,328,333,338,343,348,353,358,363],{"text":316,"config":317},"About",{"href":318,"dataGaName":319,"dataGaLocation":52},"/company/","about",{"text":321,"config":322,"footerGa":325},"Jobs",{"href":323,"dataGaName":324,"dataGaLocation":52},"/jobs/","jobs",{"dataGaName":324},{"text":287,"config":327},{"href":289,"dataGaName":290,"dataGaLocation":52},{"text":329,"config":330},"Leadership",{"href":331,"dataGaName":332,"dataGaLocation":52},"/company/team/e-group/","leadership",{"text":334,"config":335},"Team",{"href":336,"dataGaName":337,"dataGaLocation":52},"/company/team/","team",{"text":339,"config":340},"Handbook",{"href":341,"dataGaName":342,"dataGaLocation":52},"https://handbook.gitlab.com/","handbook",{"text":344,"config":345},"Investor relations",{"href":346,"dataGaName":347,"dataGaLocation":52},"https://ir.gitlab.com/","investor relations",{"text":349,"config":350},"Trust 
Center",{"href":351,"dataGaName":352,"dataGaLocation":52},"/security/","trust center",{"text":354,"config":355},"AI Transparency Center",{"href":356,"dataGaName":357,"dataGaLocation":52},"/ai-transparency-center/","ai transparency center",{"text":359,"config":360},"Newsletter",{"href":361,"dataGaName":362,"dataGaLocation":52},"/company/contact/#contact-forms","newsletter",{"text":364,"config":365},"Press",{"href":366,"dataGaName":367,"dataGaLocation":52},"/press/","press",{"text":369,"config":370,"menu":371},"Contact us",{"dataNavLevelOne":310},{"type":103,"columns":372},[373],{"items":374},[375,378,383],{"text":59,"config":376},{"href":61,"dataGaName":377,"dataGaLocation":52},"talk to sales",{"text":379,"config":380},"Support portal",{"href":381,"dataGaName":382,"dataGaLocation":52},"https://support.gitlab.com","support portal",{"text":384,"config":385},"Customer portal",{"href":386,"dataGaName":387,"dataGaLocation":52},"https://customers.gitlab.com/customers/sign_in/","customer portal",{"close":389,"login":390,"suggestions":397},"Close",{"text":391,"link":392},"To search repositories and projects, login to",{"text":393,"config":394},"gitlab.com",{"href":66,"dataGaName":395,"dataGaLocation":396},"search login","search",{"text":398,"default":399},"Suggestions",[400,402,406,408,412,416],{"text":83,"config":401},{"href":88,"dataGaName":83,"dataGaLocation":396},{"text":403,"config":404},"Code Suggestions (AI)",{"href":405,"dataGaName":403,"dataGaLocation":396},"/solutions/code-suggestions/",{"text":119,"config":407},{"href":121,"dataGaName":119,"dataGaLocation":396},{"text":409,"config":410},"GitLab on AWS",{"href":411,"dataGaName":409,"dataGaLocation":396},"/partners/technology-partners/aws/",{"text":413,"config":414},"GitLab on Google Cloud",{"href":415,"dataGaName":413,"dataGaLocation":396},"/partners/technology-partners/google-cloud-platform/",{"text":417,"config":418},"Why 
GitLab?",{"href":96,"dataGaName":417,"dataGaLocation":396},{"freeTrial":420,"mobileIcon":425,"desktopIcon":430,"secondaryButton":433},{"text":421,"config":422},"Start free trial",{"href":423,"dataGaName":57,"dataGaLocation":424},"https://gitlab.com/-/trials/new/","nav",{"altText":426,"config":427},"Gitlab Icon",{"src":428,"dataGaName":429,"dataGaLocation":424},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758203874/jypbw1jx72aexsoohd7x.svg","gitlab icon",{"altText":426,"config":431},{"src":432,"dataGaName":429,"dataGaLocation":424},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1758203875/gs4c8p8opsgvflgkswz9.svg",{"text":434,"config":435},"Get Started",{"href":436,"dataGaName":437,"dataGaLocation":424},"https://gitlab.com/-/trial_registrations/new?glm_source=about.gitlab.com/get-started/","get started",{"freeTrial":439,"mobileIcon":443,"desktopIcon":445},{"text":440,"config":441},"Learn more about GitLab Duo",{"href":88,"dataGaName":442,"dataGaLocation":424},"gitlab duo",{"altText":426,"config":444},{"src":428,"dataGaName":429,"dataGaLocation":424},{"altText":426,"config":446},{"src":432,"dataGaName":429,"dataGaLocation":424},{"button":448,"mobileIcon":453,"desktopIcon":455},{"text":449,"config":450},"/switch",{"href":451,"dataGaName":452,"dataGaLocation":424},"#contact","switch",{"altText":426,"config":454},{"src":428,"dataGaName":429,"dataGaLocation":424},{"altText":426,"config":456},{"src":457,"dataGaName":429,"dataGaLocation":424},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1773335277/ohhpiuoxoldryzrnhfrh.png",{"freeTrial":459,"mobileIcon":464,"desktopIcon":466},{"text":460,"config":461},"Back to pricing",{"href":201,"dataGaName":462,"dataGaLocation":424,"icon":463},"back to pricing","GoBack",{"altText":426,"config":465},{"src":428,"dataGaName":429,"dataGaLocation":424},{"altText":426,"config":467},{"src":432,"dataGaName":429,"dataGaLocation":424},{"title":469,"button":470,"config":475},"See how agentic AI transforms 
software delivery",{"text":471,"config":472},"Sign up for GitLab Transcend on June 10",{"href":473,"dataGaName":474,"dataGaLocation":52},"/releases/whats-new/#sign-up","transcend event",{"layout":476,"icon":477,"disabled":36},"release","AiStar",{"data":479},{"text":480,"source":481,"edit":487,"contribute":492,"config":497,"items":502,"minimal":708},"Git is a trademark of Software Freedom Conservancy and our use of 'GitLab' is under license",{"text":482,"config":483},"View page source",{"href":484,"dataGaName":485,"dataGaLocation":486},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/","page source","footer",{"text":488,"config":489},"Edit this page",{"href":490,"dataGaName":491,"dataGaLocation":486},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/content/","web ide",{"text":493,"config":494},"Please contribute",{"href":495,"dataGaName":496,"dataGaLocation":486},"https://gitlab.com/gitlab-com/marketing/digital-experience/about-gitlab-com/-/blob/main/CONTRIBUTING.md/","please contribute",{"twitter":498,"facebook":499,"youtube":500,"linkedin":501},"https://twitter.com/gitlab","https://www.facebook.com/gitlab","https://www.youtube.com/channel/UCnMGQ8QHMAnVIsI3xJrihhg","https://www.linkedin.com/company/gitlab-com",[503,550,603,647,674],{"title":199,"links":504,"subMenu":519},[505,509,514],{"text":506,"config":507},"View plans",{"href":201,"dataGaName":508,"dataGaLocation":486},"view plans",{"text":510,"config":511},"Why Premium?",{"href":512,"dataGaName":513,"dataGaLocation":486},"/pricing/premium/","why premium",{"text":515,"config":516},"Why Ultimate?",{"href":517,"dataGaName":518,"dataGaLocation":486},"/pricing/ultimate/","why ultimate",[520],{"title":521,"links":522},"Contact Us",[523,526,528,530,535,540,545],{"text":524,"config":525},"Contact 
sales",{"href":61,"dataGaName":62,"dataGaLocation":486},{"text":379,"config":527},{"href":381,"dataGaName":382,"dataGaLocation":486},{"text":384,"config":529},{"href":386,"dataGaName":387,"dataGaLocation":486},{"text":531,"config":532},"Status",{"href":533,"dataGaName":534,"dataGaLocation":486},"https://status.gitlab.com/","status",{"text":536,"config":537},"Terms of use",{"href":538,"dataGaName":539,"dataGaLocation":486},"/terms/","terms of use",{"text":541,"config":542},"Privacy statement",{"href":543,"dataGaName":544,"dataGaLocation":486},"/privacy/","privacy statement",{"text":546,"config":547},"Cookie preferences",{"dataGaName":548,"dataGaLocation":486,"id":549,"isOneTrustButton":17},"cookie preferences","ot-sdk-btn",{"title":99,"links":551,"subMenu":559},[552,555],{"text":25,"config":553},{"href":81,"dataGaName":554,"dataGaLocation":486},"devsecops platform",{"text":556,"config":557},"AI-Assisted Development",{"href":88,"dataGaName":558,"dataGaLocation":486},"ai-assisted development",[560],{"title":561,"links":562},"Topics",[563,568,573,578,583,588,593,598],{"text":564,"config":565},"CICD",{"href":566,"dataGaName":567,"dataGaLocation":486},"/topics/ci-cd/","cicd",{"text":569,"config":570},"GitOps",{"href":571,"dataGaName":572,"dataGaLocation":486},"/topics/gitops/","gitops",{"text":574,"config":575},"DevOps",{"href":576,"dataGaName":577,"dataGaLocation":486},"/topics/devops/","devops",{"text":579,"config":580},"Version Control",{"href":581,"dataGaName":582,"dataGaLocation":486},"/topics/version-control/","version control",{"text":584,"config":585},"DevSecOps",{"href":586,"dataGaName":587,"dataGaLocation":486},"/topics/devsecops/","devsecops",{"text":589,"config":590},"Cloud Native",{"href":591,"dataGaName":592,"dataGaLocation":486},"/topics/cloud-native/","cloud native",{"text":594,"config":595},"AI for Coding",{"href":596,"dataGaName":597,"dataGaLocation":486},"/topics/devops/ai-for-coding/","ai for coding",{"text":599,"config":600},"Agentic 
AI",{"href":601,"dataGaName":602,"dataGaLocation":486},"/topics/agentic-ai/","agentic ai",{"title":604,"links":605},"Solutions",[606,608,610,615,619,622,626,629,631,634,637,642],{"text":143,"config":607},{"href":138,"dataGaName":143,"dataGaLocation":486},{"text":132,"config":609},{"href":115,"dataGaName":116,"dataGaLocation":486},{"text":611,"config":612},"Agile development",{"href":613,"dataGaName":614,"dataGaLocation":486},"/solutions/agile-delivery/","agile delivery",{"text":616,"config":617},"SCM",{"href":128,"dataGaName":618,"dataGaLocation":486},"source code management",{"text":564,"config":620},{"href":121,"dataGaName":621,"dataGaLocation":486},"continuous integration & delivery",{"text":623,"config":624},"Value stream management",{"href":171,"dataGaName":625,"dataGaLocation":486},"value stream management",{"text":569,"config":627},{"href":628,"dataGaName":572,"dataGaLocation":486},"/solutions/gitops/",{"text":181,"config":630},{"href":184,"dataGaName":185,"dataGaLocation":486},{"text":632,"config":633},"Small business",{"href":190,"dataGaName":191,"dataGaLocation":486},{"text":635,"config":636},"Public sector",{"href":196,"dataGaName":197,"dataGaLocation":486},{"text":638,"config":639},"Education",{"href":640,"dataGaName":641,"dataGaLocation":486},"/solutions/education/","education",{"text":643,"config":644},"Financial services",{"href":645,"dataGaName":646,"dataGaLocation":486},"/solutions/finance/","financial 
services",{"title":204,"links":648},[649,651,653,655,658,660,662,664,666,668,670,672],{"text":217,"config":650},{"href":219,"dataGaName":220,"dataGaLocation":486},{"text":222,"config":652},{"href":224,"dataGaName":225,"dataGaLocation":486},{"text":227,"config":654},{"href":229,"dataGaName":230,"dataGaLocation":486},{"text":232,"config":656},{"href":234,"dataGaName":657,"dataGaLocation":486},"docs",{"text":255,"config":659},{"href":257,"dataGaName":258,"dataGaLocation":486},{"text":250,"config":661},{"href":252,"dataGaName":253,"dataGaLocation":486},{"text":264,"config":663},{"href":266,"dataGaName":267,"dataGaLocation":486},{"text":272,"config":665},{"href":274,"dataGaName":275,"dataGaLocation":486},{"text":277,"config":667},{"href":279,"dataGaName":280,"dataGaLocation":486},{"text":282,"config":669},{"href":284,"dataGaName":285,"dataGaLocation":486},{"text":287,"config":671},{"href":289,"dataGaName":290,"dataGaLocation":486},{"text":292,"config":673},{"href":294,"dataGaName":295,"dataGaLocation":486},{"title":308,"links":675},[676,678,680,682,684,686,688,692,697,699,701,703],{"text":316,"config":677},{"href":318,"dataGaName":310,"dataGaLocation":486},{"text":321,"config":679},{"href":323,"dataGaName":324,"dataGaLocation":486},{"text":329,"config":681},{"href":331,"dataGaName":332,"dataGaLocation":486},{"text":334,"config":683},{"href":336,"dataGaName":337,"dataGaLocation":486},{"text":339,"config":685},{"href":341,"dataGaName":342,"dataGaLocation":486},{"text":344,"config":687},{"href":346,"dataGaName":347,"dataGaLocation":486},{"text":689,"config":690},"Sustainability",{"href":691,"dataGaName":689,"dataGaLocation":486},"/sustainability/",{"text":693,"config":694},"Diversity, inclusion and belonging (DIB)",{"href":695,"dataGaName":696,"dataGaLocation":486},"/diversity-inclusion-belonging/","Diversity, inclusion and 
belonging",{"text":349,"config":698},{"href":351,"dataGaName":352,"dataGaLocation":486},{"text":359,"config":700},{"href":361,"dataGaName":362,"dataGaLocation":486},{"text":364,"config":702},{"href":366,"dataGaName":367,"dataGaLocation":486},{"text":704,"config":705},"Modern Slavery Transparency Statement",{"href":706,"dataGaName":707,"dataGaLocation":486},"https://handbook.gitlab.com/handbook/legal/modern-slavery-act-transparency-statement/","modern slavery transparency statement",{"items":709},[710,713,716],{"text":711,"config":712},"Terms",{"href":538,"dataGaName":539,"dataGaLocation":486},{"text":714,"config":715},"Cookies",{"dataGaName":548,"dataGaLocation":486,"id":549,"isOneTrustButton":17},{"text":717,"config":718},"Privacy",{"href":543,"dataGaName":544,"dataGaLocation":486},[720,734],{"id":721,"title":10,"body":31,"config":722,"content":724,"description":31,"extension":728,"meta":729,"navigation":17,"path":730,"seo":731,"stem":732,"__hash__":733},"blogAuthors/en-us/blog/authors/mark-lapierre.yml",{"template":723},"BlogAuthor",{"name":10,"config":725},{"headshot":726,"ctfId":727},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1749669066/Blog/Author%20Headshots/mark_lapierre.png","2Fnsk5H33npbli2fy9kMqu","yml",{},"/en-us/blog/authors/mark-lapierre",{},"en-us/blog/authors/mark-lapierre","0HzPTjvC6yQJRoZypDDz_Ow6FkOfcD8aea81kGSFg7o",{"id":735,"title":11,"body":31,"config":736,"content":737,"description":31,"extension":728,"meta":741,"navigation":17,"path":742,"seo":743,"stem":744,"__hash__":745},"blogAuthors/en-us/blog/authors/vincy-wilson.yml",{"template":723},{"name":11,"config":738},{"headshot":739,"ctfId":740},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1749669069/Blog/Author%20Headshots/vincy.jpg","1iyKndVlbE3dQnxOJoSY0q",{},"/en-us/blog/authors/vincy-wilson",{},"en-us/blog/authors/vincy-wilson","LvCuCwCbFKkkdcuAkl-tE7wn13eg9fSlEk_3q8yMPIg",[747,760,772],{"content":748,"config":758},{"title":749,"description":750,"authors":751,"d
ate":753,"heroImage":754,"body":755,"category":13,"tags":756},"Atlassian will train on your data: Opt out with GitLab","Learn why Atlassian's latest move is a threat to data governance and how GitLab's approach helps ensure your customers' data stays private and protected.",[752],"Jessica Hurwitz","2026-05-04","https://res.cloudinary.com/about-gitlab-com/image/upload/v1773866173/vte9qh8rriznvyclhkes.png","Starting August 17, 2026, Atlassian will begin collecting customer metadata and in-app content from Jira, Confluence, and other cloud products to train its AI offerings, including Rovo and Rovo Dev. This announcement comes after [GitHub recently changed its Copilot data usage policy](https://about.gitlab.com/blog/github-copilots-new-policy-for-ai-training-is-a-governance-wake-up-call/). **Taken together, these changes suggest opt-out-by-default is becoming the industry norm. GitLab takes the opposite position: no data collection, no AI training on customer data, no matter what tier you're on.**\n\n[Atlassian's change](https://www.atlassian.com/trust/ai/data-contribution) is enabled by default for all cloud customers and affects roughly 300,000 organizations. For customers on the Free, Standard, and Premium tiers, metadata collection is mandatory and cannot be turned off. Only Enterprise-tier customers have the option to opt out. This policy change deserves a close read if your engineering, IT, and program management teams run on Atlassian because they are most exposed by this change — and least likely to have been consulted before it happened.\n\nAlthough the underlying governance questions are the same for both Atlassian and GitHub's changes, the data at risk is different. Where GitHub's change concerned source code and developer interactions, Atlassian's reaches into project plans, internal documentation, workflow configurations, and operational metadata across Jira, Confluence, and the broader Atlassian stack. 
**For organizations that rely on these tools as their system of record for how work gets planned and delivered, the implications run deep.**\n\n## What changed and what it means for your data\n\nAtlassian will collect two categories of information: \n\n- **Metadata:** de-identified operational signals like story points, sprint dates, and SLA values, including data from its Teamwork Graph and connected third-party apps  \n- **In-app content:** user-generated material such as Confluence page content, Jira issue titles, descriptions, and comments\n\nAtlassian says it will apply de-identification and aggregation before training. Collected data may be retained for up to seven years, with in-app data removed within 30 days of opt-out and models retrained within 90 days.\n\nThere are some exclusions: Customers using customer-managed encryption keys, Atlassian Government Cloud, Isolated Cloud, or those with HIPAA requirements are carved out from collection. But for the vast majority of Atlassian's cloud customer base, data collection will start unless you pay for the Enterprise tier and actively flip the switch.\n\nThis reverses Atlassian's prior stated position that customer data would not be used to train or improve AI services. Organizations that adopted Jira and Confluence to manage their most sensitive planning workflows, sprint boards, security tickets, incident postmortems, and internal documentation will soon be contributing that content to Atlassian's AI training pipeline, without ever being asked.\n\n## The governance gap in \"opt-out by default\"\n\nOpt-out-by-default data collection for AI training is an emerging pattern across the software industry. It raises the same set of questions every time: How does this interact with existing data processing agreements? 
Does the vendor's definition of \"metadata\" match what your legal and security teams would consider non-sensitive data?\n\n**For many organizations, the answer to these questions is \"we don't know.\"** \n\nWhen a vendor changes its data practices through a terms-of-service update, the burden falls on the customer to notice, evaluate the implications, and act within the window the vendor provides. \n\nThe mandatory nature of metadata collection on Free, Standard, and Premium tiers makes this more acute. The only exit is upgrading to Enterprise, which requires a minimum of 801 users and custom pricing that would represent a significant cost jump for teams that aren't there yet. Data protection, in other words, is now a purchasing decision.\n\nThe tiered structure also introduces a subtler problem. Metadata like story points, sprint velocity, SLA metrics, and task classifications may seem innocuous in isolation, but in aggregate they reveal project structure, team performance patterns, and delivery cadence. For organizations in competitive industries, that operational intelligence has real value, and \"de-identified\" does not necessarily mean \"non-sensitive\" once patterns are reconstructable at scale.\n\n## Why this matters more for Atlassian-stack organizations\n\nIn Atlassian-based organizations, Jira has been the center how teams plan, track, and deliver work. It’s the source of truth for sprint planning, bug tracking, release management, portfolio coordination, and cross-functional project execution. \n\nIn regulated industries like financial services, public sector and manufacturing, Jira and Confluence together hold sensitive operational data that may be subject to compliance requirements. 
The risk compounds for organizations that have expanded beyond Jira into the broader Atlassian ecosystem.\n\nWhen you run Jira, Confluence, Bitbucket, and Bamboo together, the surface area of data now feeding into AI training spans your project plans, internal documentation, source code metadata, and CI/CD configurations — each of which security and compliance teams would want to review before sharing with a vendor's training pipeline.\n\nAtlassian’s Teamwork Graph connectors add another dimension for customers who have integrated third-party tools, such as Slack, Figma, Google Drive, Salesforce, and ServiceNow, into their environment. Teamwork Graph connectors index relationship and activity signals from these connected apps, which means the metadata Atlassian collects will not be limited to what lives inside Atlassian products. For security and compliance teams accustomed to evaluating data flows on a per-vendor basis, this cross-platform reach complicates the assessment considerably.\n\nOrganizations that are already navigating [Atlassian's push from Data Center](https://about.gitlab.com/blog/atlassian-ending-data-center-as-gitlab-maintains-deployment-choice/) and Server editions to the cloud face a compounding challenge. Adding default AI data collection to that migration path raises the stakes further: **The question is no longer just \"do we move to Atlassian Cloud?\" but \"do we move to Atlassian Cloud knowing our data will feed AI training unless we're on the most expensive tier?\"**\n\n## What regulated industries should be evaluating now\n\nThe compliance implications vary by sector, but the obligation to reassess is consistent.\n\nIn financial services, frameworks like [SR 11-7](https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm) and [DORA](https://eur-lex.europa.eu/eli/reg/2022/2554/oj/eng) require documented, auditable oversight of third-party technology providers, including how those providers handle data. 
In the public sector, [NIST 800-53](https://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final) and [FISMA](https://www.cisa.gov/topics/cyber-threats-and-advisories/federal-information-security-modernization-act) make controlling where sensitive data flows a foundational requirement. In healthcare, [HIPAA](https://www.hhs.gov/hipaa/index.html) governs how patient-adjacent data is handled by third parties. \n\nAcross the board, a material change in a vendor's data practices, such as Atlassian moving from \"we don't train on your data\" to \"we do, by default,\" triggers a documentation and risk reassessment obligation. \n\nInstitutions operating under the [EU AI Act](https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng) face an additional dimension: opt-out framing aligns with U.S. norms, while European regulators generally expect opt-in consent for data processing of this nature.\n\nIf your model risk or vendor management team documented Atlassian's data handling controls before this announcement, the question isn't whether this change triggers a reassessment obligation. It does. The question is whether your team can take action before August 17.\n\n## What to look for in your platform vendors\n\nCTOs and CISOs across regulated industries need to adopt AI in a way they can explain to regulators, boards, and customers. Because of this, GitLab operates within the following set of principles:\n\n**Unconditional data commitments, not tier-dependent protections.** Regulated organizations need to know, with specificity, what happens to their data. A commitment that varies by plan tier, or that requires action before a deadline, introduces exactly the kind of uncontrolled variable that keeps CISOs up at night.\n\n**Transparency and auditability.** Model risk management frameworks require organizations to understand the AI systems they deploy, including the training data and third parties involved. 
Vendors who cannot answer these questions clearly create documentation risk.\n\n**Separation between customer data and vendor AI training.** When a platform vendor trains models on customer usage data, workflows and operational patterns become inputs to a system that also serves competitors. For organizations where project structure or delivery cadence represents competitive advantage, that exposure matters.\n\n## How GitLab's approach differs\n\nGitLab doesn't train on customer data — at any tier, full stop. AI vendors powering GitLab Duo features are contractually prohibited from using customer inputs or outputs for their own purposes, [a commitment GitLab CEO Bill Staples](https://www.linkedin.com/posts/williamstaples_gitlab-1810-agentic-ai-now-open-to-even-activity-7443280763715985408-aHxf) has consistently reiterated.\n\n[GitLab's AI Transparency Center](https://about.gitlab.com/ai-transparency-center/) documents exactly which models power which features, how data is handled, and what vendor commitments are in place. [GitLab's AI Continuity Plan](https://handbook.gitlab.com/handbook/product/ai/continuity-plan/) documents how vendor changes are managed, including any material changes to how AI vendors treat customer data. For institutions managing third-party AI risk under DORA or similar frameworks, vendor continuity and concentration are active governance concerns, and having a documented plan for both is part of what responsible AI tooling looks like.\n\nFor organizations that require AI processing to stay within their own infrastructure, [GitLab Duo Agent Platform](https://about.gitlab.com/gitlab-duo/) is available with GitLab Self-Managed deployments, including support for integration with self-hosted AI models. This means prompts and code never leave the customer's environment. GitLab also provides IP indemnification for Duo-generated output, with no filters required and no activation steps needed. 
Where your data lives remains your choice, no matter your deployment model or subscription tier.\n\n> Whether your organization stays on Atlassian or begins evaluating alternatives, the conversation about who controls your data and how it gets used should be happening now. **The August 17 deadline is approaching, but you still have time to [try GitLab Ultimate with Duo Agent Platform for free today](https://gitlab.com/-/trials/new).**",[23,757],"product",{"featured":17,"template":15,"slug":759},"atlassian-will-train-on-your-data-opt-out-with-gitlab",{"content":761,"config":770},{"title":762,"description":763,"authors":764,"heroImage":766,"date":767,"body":768,"category":13,"tags":769},"GitLab and Anthropic: Governed AI for enterprise development","GitLab deepens its Anthropic Claude integration, bringing governed AI, access to new models, and cloud flexibility to enterprise software development.",[765],"Stuart Moncada","https://res.cloudinary.com/about-gitlab-com/image/upload/v1776457632/llddiylsgwuze0u1rjks.png","2026-04-28","For enterprise and public sector leaders, the tension is familiar: Software teams need to move faster with AI, while security, compliance, and regulatory expectations only get more stringent. GitLab deepens its Anthropic Claude integration so organizations get access to newly released Claude models inside GitLab’s intelligent orchestration platform where governance, compliance, and auditability already run.\n\nClaude powers capabilities across GitLab Duo Agent Platform as the default model out of the box, across a variety of use cases from code generation and review to agentic chat and vulnerability resolution. 
If you've used GitLab Duo, you've already experienced how Duo agents automate workflows across the entire software development lifecycle (SDLC).\n\nThis accelerates the integration of Claude’s capabilities into GitLab, broadens how enterprises can deploy them, and reinforces what makes GitLab fundamentally different as a platform for software development and engineering: governance, compliance, and auditability built into every AI interaction.\n\n> \"GitLab Duo has accelerated how our teams plan, build, and ship software. The combination of Anthropic's Claude and GitLab's platform means we're getting more capable AI without changing how we work or how it is governed.\"\n>\n> – Mans Booijink, Operations Manager, Cube\n\n## The real differentiator: Governed AI\n\nWith GitLab, governance controls and auditing are built into the SDLC. When Claude suggests a code change through the GitLab Duo Agent Platform, that suggestion flows through the same merge request process, the same approval rules, the same security scanning, and the same audit trail as every other change. AI doesn't get a shortcut around your controls. It operates within them.\n\nAs GitLab moves deeper into agentic software development, where AI autonomously handles well-defined tasks, the governance layer becomes more important. An AI agent that can open a merge request, help resolve a vulnerability, or refactor a service needs to be auditable, attributable, and subject to the same policy enforcement as a human developer. That requirement is an architectural decision GitLab made from the start, and one that grows more consequential as AI agents take on broader responsibilities.\n\n## Enterprise deployment flexibility\n\nThis also expands how organizations access the latest Claude models through GitLab. 
Claude is available within GitLab through Google Cloud's Vertex AI and Amazon Bedrock, which means enterprises can route AI workloads through the hyperscaler commitments and cloud governance frameworks they already have in place. No separate vendor contract. No new data residency questions. Your existing Google Cloud or AWS relationship is the on-ramp. \n\nGitLab is now also available in the [Claude Marketplace](https://claude.com/platform/marketplace), allowing customers to purchase GitLab Credits and apply them toward existing Anthropic spending commitments – consolidating AI spend and simplifying how teams discover and procure GitLab alongside their Anthropic investments.\n\n## Advancing an agentic future\n\nGitLab's vision for agentic software development, where AI handles defined tasks autonomously across planning, coding, testing, securing, and deploying, requires models with strong reasoning, reliability, and safety characteristics. It also requires a platform where those autonomous actions are fully governed.\n\nThose criteria guide how GitLab selects and integrates AI model partners. And GitLab's governance framework helps ensure that as AI agents assume more advanced development work, enterprises maintain full visibility and control over what those agents do, when they do it, and how changes are tracked.\n\n## What this means for GitLab customers\n\nIf you're already using GitLab Duo Agent Platform, you'll get access to Claude models and deeper AI assistance across your software development lifecycle, all within the governance framework you already rely on.\n\nIf you're evaluating AI-powered software development platforms, you shouldn't have to choose between advanced AI capabilities and enterprise control. This strategic integration is built to deliver both.\n\n> Want to learn more about GitLab Duo Agent Platform? 
[Get a demo or start a free trial today](https://about.gitlab.com/gitlab-duo-agent-platform/).",[23,757,295],{"featured":17,"template":15,"slug":771},"gitlab-and-anthropic-governed-ai-for-enterprise-development",{"content":773,"config":783},{"title":774,"description":775,"authors":776,"heroImage":778,"date":779,"body":780,"category":13,"tags":781},"Give your AI agent direct, structured GitLab access with glab CLI","The GitLab CLI (glab) provides AI agents structured, reliable access to projects via the MCP, eliminating friction. This tutorial shows how you can speed up code review and issue triage.",[777],"Kai Armstrong","https://res.cloudinary.com/about-gitlab-com/image/upload/v1776347152/unw3mzatkd5xyfbzcnni.png","2026-04-27","\nWhen teams use GitLab Duo, Claude, Cursor, and other AI assistants, more of the development workflow runs through an AI agent acting on your behalf — reading issues, reviewing merge requests, running pipelines, and helping you ship faster. Most developers are already using the GitLab CLI (`glab`) from the terminal to interact with GitLab. Combining the two is a natural next step.\n\n\nThe problem is that without the right tools, AI agents are essentially guessing when it comes to your GitLab projects. They might hallucinate the details of an issue they've never seen, summarize a merge request based on stale training data rather than its actual state, or require you to manually copy context from a browser tab and paste it into a chat window just to get started. Every one of those workarounds is friction: it slows you down, introduces the possibility of error, and puts a hard ceiling on what your agent can actually do on your behalf. 
`glab` changes that by giving agents a direct, reliable interface to your projects.\n\n\nWith `glab`, your agent fetches what it needs directly from GitLab, acts on it, and reports back — so you spend less time relaying information and more time on the work that matters.\n\n\nIn this tutorial, you'll learn how to use `glab` to give AI agents structured, reliable access to your GitLab projects. You'll also discover how that unlocks a faster, more capable development workflow.\n\n\n## How to connect your AI agent to GitLab through MCP\n\n\nThe most direct way to supercharge your AI workflow is to give your AI agent native access to `glab` through Model Context Protocol ([MCP](https://about.gitlab.com/topics/ai/model-context-protocol/)).\n\n\n MCP is an open standard that lets AI tools discover and use external capabilities at runtime. Once connected, your AI assistant can read issues, comment on merge requests, check pipeline status, and write back to GitLab, all without copying anything from the UI or writing a single API call yourself.\n\n\n To get started, run:\n\n\n ```shell\n # Start the glab MCP server\n glab mcp serve\n ```\n\n\n Once your MCP client is configured, your AI can answer questions like *\"What's the status of my open MRs?\"* or *\"Are there any failing pipelines on main?\"* by querying GitLab directly, not scraping the web UI, not relying on stale training data. See the [full setup docs](https://docs.gitlab.com/cli/) for configuration steps for Claude Code, Cursor, and other editors.\n\n\n One detail worth knowing: `glab` automatically adds `--output json` when invoked through MCP, for any command that supports it. Your agent gets clean, structured data without you needing to think about output formats. And because `glab` uses the official MCP SDK, it stays compatible as the\n protocol evolves.\n\n\n We've also been deliberate about *which* commands are exposed through MCP. 
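\n\nOnce the server is running, wiring it into a client is usually a short config entry. The snippet below is an illustrative sketch, not official glab documentation; the file location and the `mcpServers` key follow a convention many MCP clients use, so check your client's setup guide for the exact format:\n\n```json\n{\n  \"mcpServers\": {\n    \"gitlab\": {\n      \"command\": \"glab\",\n      \"args\": [\"mcp\", \"serve\"]\n    }\n  }\n}\n```\n\nAfter restarting the client, the glab tools should appear in its tool list automatically.\n\n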
Commands that require interactive terminal input are intentionally\n excluded, so your agent never gets stuck waiting for input that will never come. What's exposed is what actually works reliably in an agent context.\n\n\n ## Let your AI participate in code review\n\n\n Most developers have a backlog of MRs waiting for review. It's one of the most time-consuming parts of the job and one of the best places to put\n AI to work. With `glab`, your agent doesn't just observe your review queue, it can work through it with you.\n\n\n ### See exactly what still needs addressing\n\n\n Start with this:\n\n\n ```shell\n glab mr view 2677 --comments --unresolved --output json\n ```\n\n\n This command returns the full MR: metadata, description, and every\n unresolved discussion, as a single structured JSON payload. Hand that to\n your AI and it has everything it needs: which threads are open, what the\n reviewer asked for, and in what context. No tab-switching, no copy-pasting\n individual comments.\n\n\n \n ```json\n {\n   \"id\": 2677,\n   \"title\": \"feat: add OAuth2 support\",\n   \"state\": \"opened\",\n   \"author\": { \"username\": \"jdwick\" },\n   \"labels\": [\"backend\", \"needs-review\"],\n   \"blocking_discussions_resolved\": false,\n   \"discussions\": [\n     {\n       \"id\": \"3107030349\",\n       \"resolved\": false,\n       \"notes\": [\n         {\n           \"author\": { \"username\": \"dmurphy\" },\n           \"body\": \"This error handling will swallow panics — consider wrapping with recover()\",\n           \"created_at\": \"2026-03-14T09:23:11.000Z\"\n         }\n       ]\n     },\n     {\n       \"id\": \"3107030412\",\n       \"resolved\": false,\n       \"notes\": [\n         {\n           \"author\": { \"username\": \"sreeves\" },\n           \"body\": \"Token refresh logic needs a test for the expired token case\",\n           \"created_at\": \"2026-03-14T10:05:44.000Z\"\n         }\n       ]\n     }\n   ]\n }\n ```\n\n\n Instead of reading 
through every thread yourself, you ask your agent  *\"what do I still need to fix in MR 2677?\"* and get back a prioritized summary with suggested changes. This all happens from a single command.\n\n\n ### Close the loop programmatically\n\n\n Once your AI has helped you address the feedback, it can resolve\n discussions:\n\n\n ```shell\n # List all discussions — structured, ready for the agent to process\n glab mr note list 456 --output json\n\n # Resolve a discussion once the feedback is addressed\n glab mr note resolve 456 3107030349\n\n # Reopen if something needs another look\n glab mr note reopen 456 3107030349\n ```\n\n\n\n ```json\n [\n   {\n     \"id\": 3107030349,\n     \"body\": \"This error handling will swallow panics — consider wrapping with recover()\",\n     \"author\": { \"username\": \"dmurphy\" },\n     \"resolved\": false,\n     \"resolvable\": true\n   },\n   {\n     \"id\": 3107030412,\n     \"body\": \"Token refresh logic needs a test for the expired token case\",\n     \"author\": { \"username\": \"sreeves\" },\n     \"resolved\": false,\n     \"resolvable\": true\n   }\n ]\n ```\n\n\n\n Note IDs are visible directly in the GitLab UI and API, no extra lookup needed. Your agent can work through the full list, verify each fix, and\n resolve as it goes.\n\n\n ## Talk to your AI about your code more effectively\n\n\n Even if you're not running an MCP server, there's a simpler shift that makes a huge difference: using `glab` to feed your AI better information.\n\n\n Think about the last time you asked an AI assistant to help triage issues or debug a failing pipeline. You probably copied some text from the GitLab UI and pasted it into the chat. 
Here's what your agent is actually\n working with when you do that:\n\n\n ```text\n open issues: 12 • milestone: 17.10 • label: bug, needs-triage ...\n ```\n\n\n Compare that to what it gets with `glab`:\n\n\n \n ```json\n [\n   {\n     \"iid\": 902,\n     \"title\": \"Pipeline fails on merge to main\",\n     \"labels\": [\"bug\", \"needs-triage\"],\n     \"milestone\": { \"title\": \"17.10\" },\n     \"assignees\": []\n   },\n   ...\n ]\n ```\n\n\n Structured, typed, complete; no ambiguity, no parsing guesswork. That's the difference between an agent that can act and one that has to ask\n follow-up questions.\n\n\n If you're using the MCP server, you get this automatically: `glab` adds `--output json` for any command that supports it. If you're working directly\n from the terminal, just add the flag yourself:\n\n\n ```shell\n # Pull open issues for triage\n glab issue list --label \"needs-triage\" --output json\n\n # Check pipeline status\n glab ci status --output json\n\n # Get full MR details\n glab mr view 456 --output json\n ```\n\n\n We've significantly expanded JSON output support in recent releases. It now covers CI status, milestones, labels, releases, schedules, cluster agents, work items, MR approvers, repo contributors, and more. If `glab` can\n retrieve it, your AI can consume it cleanly.\n\n\n ### A real workflow\n\n\n ```shell\n $ glab issue list --label \"needs-triage\" --milestone \"17.10\" --output json\n ```\n\n\n ```text\n Agent: I found 2 unassigned bugs in the 17.10 milestone that need triage:\n 1. #902 — Pipeline fails on merge to main (opened 5 days ago)\n 2. #903 — Auth token not refreshing on expiry (opened 4 days ago)\n Both are unassigned. Want me to draft triage notes and suggest assignees based on recent commit history?\n ```\n\n\n ## Your agent is never limited to built-in commands\n\n\n `glab`'s first-class commands cover the most common workflows, but your agent is never limited to them. 
Through `glab api`, it has authenticated access to the full GitLab REST and GraphQL API surface, using the same session, with no extra credentials or configuration required.\n\n\n This is a meaningful differentiator. Most CLI tools stop at what their commands expose. With `glab`, if GitLab's API supports it, your agent can do it. It's always working from a trusted, authenticated context.\n\n\n A practical example: fetching just the list of changed files in an MR before deciding which diffs to pull in full:\n\n\n ```shell\n # Get changed file paths — lightweight, no diff content yet\n glab api \"/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/diffs?per_page=100\" \\\n | jq '.[].new_path'\n\n# Then fetch only the specific file your agent needs\nglab api \"/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/diffs?per_page=100\" \\\n| jq '.[] | select(.new_path == \"path/to/file.go\")'\n ```\n\n\n ```text\n \"internal/auth/token.go\"\n \"internal/auth/token_test.go\"\n \"internal/oauth/refresh.go\"\n ```\n\n\n For anything the REST API doesn't cover (epics, certain work item queries, complex cross-project data),  `glab api graphql` gives you the full\n GraphQL interface:\n\n\n ```shell\n   glab api graphql -f query='\n {\n   project(fullPath: \"gitlab-org/gitlab\") {\n     mergeRequest(iid: \"12345\") {\n       title\n       reviewers { nodes { username } }\n     }\n   }\n }'\n ```\n\n ```json\n{\n   \"data\": {\n     \"project\": {\n       \"mergeRequest\": {\n         \"title\": \"feat: add OAuth2 support\",\n         \"reviewers\": {\n           \"nodes\": [\n             { \"username\": \"dmurphy\" },\n             { \"username\": \"sreeves\" }\n           ]\n         }\n       }\n     }\n   }\n }\n\n ```\n\n\n Your agent has a single, authenticated entry point to everything GitLab exposes without the token juggling, separate API clients, or configuration\n overhead.\n\n\n ## What's coming and your feedback\n\n\n Two improvements we're 
actively working on will make `glab` even more useful for agent workflows:\n\n\n **Agent-aware help text.** Today, `--help` output is written for humans at a terminal. We're updating it to surface the non-interactive alternative\n for every interactive command, flag which commands support `--output json`, and generally make help a useful resource for agents discovering\n capabilities at runtime — not just humans.\n\n\n **Better machine-readable errors.** When something goes wrong today, agents get the same human-readable error messages as terminal users. We're\n changing that so errors in JSON mode return structured output, giving your agent the information it needs to handle failures gracefully, retry intelligently, or surface the right context back to you.\n\n\n Both of these are in active development. If you're already using `glab` with an AI tool, you're exactly the audience we want feedback from.\n\n\n * **What friction are you hitting?** Commands that don't behave well in agent contexts, error messages that aren't actionable, gaps in JSON output\n coverage. We want to know.\n\n * **What workflows have you unlocked?** Real usage patterns help us prioritize what to build next.\n\n\n Join the discussion in [our feedback issue](https://gitlab.com/gitlab-org/cli/-/issues/8177) — that's where we're shaping the roadmap for agent-friendliness, and where your input will have the most direct impact. If you've found a specific gap, [open an issue](https://gitlab.com/gitlab-org/cli/-/issues/new). If you've got a fix in mind, contributions are welcome. Visit [CONTRIBUTING.md](https://gitlab.com/gitlab-org/cli/-/blob/main/CONTRIBUTING.md) to get started.\n\n\n The GitLab CLI has always been about giving developers more control over their workflow. As AI becomes a bigger part of how we all work, that means making `glab` the best possible interface between your AI tools and your GitLab projects. 
We're just getting started and we'd love to build the next part with you.\n",[23,757,782],"tutorial",{"featured":17,"template":15,"slug":784},"give-your-ai-agent-direct-structured-gitlab-access-with-glab-cli",{"promotions":786},[787,800,811,823],{"id":788,"categories":789,"header":790,"text":791,"button":792,"image":797},"ai-modernization",[13],"Is AI achieving its promise at scale?","Quiz will take 5 minutes or less",{"text":793,"config":794},"Get your AI maturity score",{"href":795,"dataGaName":796,"dataGaLocation":258},"/assessments/ai-modernization-assessment/","modernization assessment",{"config":798},{"src":799},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/qix0m7kwnd8x2fh1zq49.png",{"id":801,"categories":802,"header":803,"text":791,"button":804,"image":808},"devops-modernization",[757,587],"Are you just managing tools or shipping innovation?",{"text":805,"config":806},"Get your DevOps maturity score",{"href":807,"dataGaName":796,"dataGaLocation":258},"/assessments/devops-modernization-assessment/",{"config":809},{"src":810},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138785/eg818fmakweyuznttgid.png",{"id":812,"categories":813,"header":815,"text":791,"button":816,"image":820},"security-modernization",[814],"security","Are you trading speed for security?",{"text":817,"config":818},"Get your security maturity score",{"href":819,"dataGaName":796,"dataGaLocation":258},"/assessments/security-modernization-assessment/",{"config":821},{"src":822},"https://res.cloudinary.com/about-gitlab-com/image/upload/v1772138786/p4pbqd9nnjejg5ds6mdk.png",{"id":824,"paths":825,"header":828,"text":829,"button":830,"image":835},"github-azure-migration",[826,827],"migration-from-azure-devops-to-gitlab","integrating-azure-devops-scm-and-gitlab","Is your team ready for GitHub's Azure move?","GitHub is already rebuilding around Azure. 
Find out what it means for you.",{"text":831,"config":832},"See how GitLab compares to GitHub",{"href":833,"dataGaName":834,"dataGaLocation":258},"/compare/gitlab-vs-github/github-azure-migration/","github azure migration",{"config":836},{"src":810},{"header":838,"blurb":839,"button":840,"secondaryButton":845},"Start building faster today","See what your team can do with the intelligent orchestration platform for DevSecOps.\n",{"text":841,"config":842},"Get your free trial",{"href":843,"dataGaName":57,"dataGaLocation":844},"https://gitlab.com/-/trial_registrations/new?glm_content=default-saas-trial&glm_source=about.gitlab.com/","feature",{"text":524,"config":846},{"href":61,"dataGaName":62,"dataGaLocation":844},1777934939729]