ActiveContext
Purpose
ActiveContext enables AI-powered features that understand your codebase by indexing code files into searchable embeddings. The system chunks code files into logical segments, generates vector embeddings for each chunk, and stores them in a vector database (Elasticsearch, OpenSearch, or PostgreSQL with pgvector). This enables Retrieval-Augmented Generation (RAG) features like Codebase as Chat Context in GitLab Duo, where the AI can semantically search your code to provide contextually relevant responses to your questions.
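The chunk-embed-store flow described above can be sketched in plain Ruby. This is an illustrative stand-in only: the method names, the toy embedding function, and the in-memory store are assumptions for demonstration, not the actual GitLab implementation.

```ruby
# 1. Chunk a code file into logical segments (here: fixed-size line windows).
def chunk_code(source, lines_per_chunk: 2)
  source.lines.each_slice(lines_per_chunk).map(&:join)
end

# 2. Generate a vector embedding per chunk.
# Toy stand-in: length-based features instead of a real model call.
def embed(chunk)
  [chunk.length.to_f, chunk.count("\n").to_f]
end

# 3. Store chunk + embedding in a vector "database" (stub: in-memory array).
VECTOR_STORE = []

def index_file(path, source)
  chunk_code(source).each_with_index do |chunk, i|
    VECTOR_STORE << { path: path, chunk_id: i, text: chunk, embedding: embed(chunk) }
  end
end

index_file("app/models/user.rb", "class User\n  def name\n    @name\n  end\nend\n")
```

In the real pipeline, the embedding step is a request to a model (see AI Gateway Embeddings Generation Requests below) and the store is one of the supported vector databases.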
Quick start
Concepts
ActiveContext Abstraction Layer
ActiveContext is an abstraction layer over different vector stores used for embeddings indexing, search, and other operations. See the Design Document and How It Works for an overview.
There are two ActiveContext workers:
- Ai::ActiveContext::MigrationWorker - runs every 5 minutes and picks up new migrations
- Ai::ActiveContext::BulkProcessWorker - runs every minute and processes documents queued for embeddings generation
Code Embeddings Pipeline
The Code Embeddings Pipeline is an embeddings indexing pipeline for project code files that makes use of the ActiveContext abstraction layer. See the Design Document for an architecture overview.
The workers in the Code Embeddings Pipeline are scheduled through the Ai::ActiveContext::Code::SchedulingWorker. The SchedulingWorker runs every minute and checks which workers defined in the Ai::ActiveContext::Code::SchedulingService are due to run. For further details, refer to the Index State Management section in the Design Document.
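The "due to run" check can be sketched as follows. This is a hypothetical illustration of the scheduling pattern: the worker names are taken from the dashboard sections below, but the cadences and the shape of the last-run bookkeeping are assumptions, not the actual SchedulingService logic.

```ruby
# Illustrative per-worker cadences in seconds (assumed values).
WORKER_CADENCES = {
  "SaasInitialIndexingEventWorker" => 300,
  "RepositoryIndexWorker"          => 60
}.freeze

# Given a map of worker name -> last run time, return the workers that
# are due: never run before, or whose cadence has elapsed.
def due_workers(last_run_at, now: Time.now)
  WORKER_CADENCES.select do |worker, cadence|
    last = last_run_at[worker]
    last.nil? || (now - last) >= cadence
  end.keys
end

now = Time.now
last_runs = {
  "SaasInitialIndexingEventWorker" => now - 100, # ran recently, not due
  "RepositoryIndexWorker"          => now - 120  # overdue
}
due_workers(last_runs, now: now)
# => ["RepositoryIndexWorker"]
```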
AI Gateway Embeddings Generation Requests
ActiveContext pipelines send embeddings generation requests to the AI Gateway.
The Code Embeddings Pipeline uses Vertex AI's text-embedding-005 model and sends embeddings generation requests through the AI Gateway Vertex AI proxy endpoint: /v1/proxy/vertex-ai/.
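A request to the proxy endpoint might be built as in the sketch below. The endpoint path and model name come from the text above; the host, headers, and JSON body shape are assumptions for illustration, not the exact payload GitLab sends.

```ruby
require "net/http"
require "json"
require "uri"

AI_GATEWAY_HOST = "https://cloud.gitlab.com" # hypothetical host
MODEL = "text-embedding-005"

# Build (but do not send) an embeddings request for a batch of code chunks.
def build_embeddings_request(texts)
  uri = URI("#{AI_GATEWAY_HOST}/v1/proxy/vertex-ai/")
  req = Net::HTTP::Post.new(uri)
  req["Content-Type"] = "application/json"
  req.body = JSON.generate(
    model: MODEL,
    instances: texts.map { |t| { content: t } } # Vertex-style instance list (assumed)
  )
  req
end

req = build_embeddings_request(["def sum(a, b) = a + b"])
```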
Related resources
How-to guides
Start ActiveContext indexing
To start running the ActiveContext indexing pipelines, create an Ai::ActiveContext::Connection record, then activate it. This ensures that the pipeline workers (for example, those defined in Ai::ActiveContext::Code::SchedulingService) proceed to run.
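The connection lifecycle this relies on can be sketched with a plain-Ruby stand-in. The real Ai::ActiveContext::Connection is an ActiveRecord model; this sketch only mirrors the behavior the docs describe (create a record, activate it, a single active connection), and the attribute names beyond name and options are assumptions.

```ruby
class Connection
  class << self
    def all
      @all ||= []
    end

    def create!(name:, options: {})
      conn = new(name, options)
      all << conn
      conn
    end

    def active
      all.find(&:active?)
    end

    def find_by(name:)
      all.find { |c| c.name == name }
    end
  end

  attr_reader :name, :options

  def initialize(name, options)
    @name = name
    @options = options
    @active = false
  end

  def active?
    @active
  end

  def activate!
    # Activating one connection deactivates any other active connection,
    # matching the warning in the "Toggle ActiveContext indexing" section.
    self.class.all.each { |c| c.deactivate! unless c.equal?(self) }
    @active = true
  end

  def deactivate!
    @active = false
  end
end

conn = Connection.create!(name: "elastic", options: { use_advanced_search_config: true })
conn.activate!
```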
Pause the Ai::ActiveContext::MigrationWorker
To pause the MigrationWorker, use the Sidekiq ChatOps command to drop jobs:
/chatops run feature set drop_sidekiq_jobs_Ai::ActiveContext::MigrationWorker true --ignore-feature-flag-consistency-check
To resume the worker, delete the drop-jobs feature flag:
/chatops run feature delete drop_sidekiq_jobs_Ai::ActiveContext::MigrationWorker --ignore-feature-flag-consistency-check
Pause Indexing
Pausing indexing stops workers whose operations access the vector store, avoiding data loss during maintenance tasks in the vector store (such as upgrades).
When ActiveContext is using Advanced Search settings
Pause indexing is controlled by ::Gitlab::CurrentSettings.elasticsearch_pause_indexing when the ActiveContext connection is for Elasticsearch and it uses the Advanced Search settings (see reference), that is:
conn = Ai::ActiveContext::Connection.active
conn.name
# => "elastic"
conn.options
# => { use_advanced_search_config: true }
In this scenario, ActiveContext’s pause indexing will be affected by any maintenance or upgrades done for Advanced Search.
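A hedged sketch of the guard pattern this implies: workers that touch the vector store check the pause setting before doing any work. The setting name comes from the text above; CurrentSettings here is a plain-Ruby stand-in, not GitLab's class, and the worker method is hypothetical.

```ruby
# Stand-in for ::Gitlab::CurrentSettings with only the field we need.
CurrentSettings = Struct.new(:elasticsearch_pause_indexing)

def process_queued_documents(settings)
  # Skip all vector store reads/writes while indexing is paused.
  return :paused if settings.elasticsearch_pause_indexing

  # ... vector store operations would happen here ...
  :processed
end

process_queued_documents(CurrentSettings.new(true))  # => :paused
process_queued_documents(CurrentSettings.new(false)) # => :processed
```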
When ActiveContext connection is using its own settings
TBA
Toggle ActiveContext indexing
You can toggle ActiveContext indexing by deactivating or activating the relevant Ai::ActiveContext::Connection.
To deactivate the active connection
This stops all ActiveContext-related workers.
⚠️ WARNING: This is a destructive action that will require a full reindex and should be used as a last resort. Pause indexing is the preferred method to use during incidents or maintenance.
Ai::ActiveContext::Connection.active.deactivate!
To reactivate a connection
⚠️ WARNING: If there is already an existing active connection, this deactivates that other connection.
conn = Ai::ActiveContext::Connection.find_by(name: 'elastic')
conn.activate!
Monitoring
ActiveContext pipelines
Dashboard
These are the same logs from the dashboard visualizations.
Code: Index State Management
- SaasInitialIndexingEventWorker - marks namespaces as pending for initial indexing.
- ProcessPendingEnabledNamespaceEventWorker - processes pending namespaces, preparing the projects in each namespace for initial indexing.
- MarkRepositoryAsReadyEventWorker - once initial indexing is completed, marks a project as ready. This means the project is ready for subsequent incremental indexing after pushes or merges to the default branch.
- RepositoryIndexWorker - executes indexing per repository, covering both INITIAL and INCREMENTAL indexing.
  - Initial Indexing Service - triggered by the ProcessPendingEnabledNamespaceEventWorker for eligible projects
  - Incremental Indexing Service - triggered after commits are pushed or merged to the default branch, once a project has been initially indexed and marked as ready
  - Code Indexer - the class that calls the gitlab-elasticsearch-indexer
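The state transitions these workers drive (pending, then initial indexing, then ready, with incremental indexing only for ready projects) can be sketched as a small state machine. The class and method names below are illustrative simplifications, not the actual model.

```ruby
class ProjectIndexState
  attr_reader :state

  def initialize
    @state = :pending # SaasInitialIndexingEventWorker marks namespaces pending
  end

  def start_initial_indexing!
    raise "must be pending" unless @state == :pending
    @state = :initial_indexing # ProcessPendingEnabledNamespaceEventWorker
  end

  def mark_ready!
    raise "initial indexing not finished" unless @state == :initial_indexing
    @state = :ready # MarkRepositoryAsReadyEventWorker
  end

  def incremental_index!
    # Only ready projects are re-indexed after pushes to the default branch.
    raise "not ready" unless @state == :ready
    @state
  end
end

project = ProjectIndexState.new
project.start_initial_indexing!
project.mark_ready!
project.incremental_index! # => :ready
```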
Code: Embeddings Generation
- BulkProcessWorker - allows you to trace the embeddings generation process.
  - Jobs - tracks the start and finish of the bulk processing jobs
  - Embeddings Generation requests - tracks the actual call to the embeddings generation model (Gitlab::Llm::VertexAi::Client)
  - Error log - errors encountered during the bulk embeddings generation process:
    - ContentNotFoundError - logged as a warning and the document is skipped
    - Other errors - logged as a warning and the documents are re-queued for processing
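The two error-handling behaviors listed above (skip on ContentNotFoundError, re-queue on anything else) can be sketched as follows. The method name, document shape, and queue are illustrative assumptions, not the actual BulkProcessWorker implementation.

```ruby
ContentNotFoundError = Class.new(StandardError)

def process_batch(documents, queue)
  documents.each do |doc|
    doc[:embed].call
  rescue ContentNotFoundError => e
    warn "skipping #{doc[:id]}: #{e.message}" # logged as a warning, document skipped
  rescue StandardError => e
    warn "re-queueing #{doc[:id]}: #{e.message}" # logged as a warning
    queue << doc # re-queued for a later processing attempt
  end
end

queue = []
docs = [
  { id: 1, embed: -> { :ok } },
  { id: 2, embed: -> { raise ContentNotFoundError, "blob missing" } },
  { id: 3, embed: -> { raise "transient API failure" } }
]
process_batch(docs, queue)
queue.map { |d| d[:id] } # => [3]
```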
AI Gateway
These are AI Gateway logs and dashboards relevant to the ActiveContext Code Embeddings Pipeline.