Container registry for a secondary site
DETAILS:
Tier: Premium, Ultimate
Offering: Self-managed
You can set up a container registry on your secondary Geo site that mirrors the one on the primary Geo site.
NOTE: The container registry replication is used only for disaster recovery purposes. We do not recommend pulling the container registry data from the secondary. For a feature proposal to implement it in the future, see Geo: Accelerate container images by serving read request from secondary site for details. You or your GitLab representative are encouraged to upvote this feature to register your interest.
Supported container registries
Geo supports the following types of container registries:
Supported image formats
The following container image formats are supported by Geo:
In addition, Geo also supports BuildKit cache images.
Supported storage
Docker
For more information on supported registry storage drivers, see Docker registry storage drivers.
Read the Load balancing considerations when deploying the Registry, and learn how to set up the storage driver for the GitLab integrated container registry.
Registries that support OCI artifacts
The following registries support OCI artifacts:
- CNCF Distribution - local/offline verification
- Azure Container Registry (ACR)
- Amazon Elastic Container Registry (ECR)
- Google Artifact Registry (GAR)
- GitHub Packages container registry (GHCR)
- Bundle Bar
For more information, see the OCI Distribution Specification.
Configure container registry replication
Container registry replication is storage agnostic, so you can use it with cloud or local storage. Whenever a new image is pushed to the primary site, each secondary site pulls it to its own container repository.
To configure container registry replication:
- Configure the primary site.
- Configure the secondary site.
- Verify container registry replication.
Configure primary site
Make sure that you have the container registry set up and working on the primary site before following the next steps.
To be able to replicate new container images, the container registry must send notification events to the primary site for every push. A token shared between the container registry and the web nodes on the primary site secures this communication.
- SSH into your GitLab primary server and log in as root (for GitLab HA, you only need a Registry node):

  ```shell
  sudo -i
  ```

- Edit `/etc/gitlab/gitlab.rb`:

  ```ruby
  registry['notifications'] = [
    {
      'name' => 'geo_event',
      'url' => 'https://<example.com>/api/v4/container_registry_event/events',
      'timeout' => '500ms',
      'threshold' => 5,
      'backoff' => '1s',
      'headers' => {
        'Authorization' => ['<replace_with_a_secret_token>']
      }
    }
  ]
  ```

  NOTE: Replace `<example.com>` with the `external_url` defined in your primary site's `/etc/gitlab/gitlab.rb` file, and replace `<replace_with_a_secret_token>` with a case-sensitive alphanumeric string that starts with a letter. You can generate one with `< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c 32 | sed "s/^[0-9]*//"; echo`.

  NOTE: If you use an external registry (not the one integrated with GitLab), you only need to specify the notification secret (`registry['notification_secret']`) in the `/etc/gitlab/gitlab.rb` file.

- For GitLab HA only. Edit `/etc/gitlab/gitlab.rb` on every web node:

  ```ruby
  registry['notification_secret'] = '<replace_with_a_secret_token_generated_above>'
  ```

- Reconfigure each node you just updated:

  ```shell
  gitlab-ctl reconfigure
  ```
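After reconfiguring, you can optionally confirm from the primary site's Rails console that the Rails application has a notification secret to compare incoming registry events against. This is only a minimal sanity check, and the `Gitlab.config.registry.notification_secret` accessor is an assumption based on the `registry` section of `gitlab.yml` that Omnibus renders from the settings above:

```ruby
# Rails console on the primary site (for example, via `sudo gitlab-rails console`).
# Reports whether a notification secret is configured, without printing the secret.
# NOTE: the setting name is an assumption based on the gitlab.yml registry section.
secret = Gitlab.config.registry.notification_secret
puts(secret.to_s.empty? ? 'No notification secret configured' : 'Notification secret is configured')
```

If no secret is reported, double-check the settings above and reconfigure again.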
Configure secondary site
Make sure you have the container registry set up and working on the secondary site before following the next steps.
The following steps should be done on each secondary site where you expect container images to be replicated.
To allow the secondary site to communicate securely with the primary site's container registry, all sites must share a single key pair. The secondary site uses this key to generate short-lived JWTs that grant pull-only access to the primary site's container registry.
For each application and Sidekiq node on the secondary site:
- SSH into the node and log in as the `root` user:

  ```shell
  sudo -i
  ```

- Copy `/var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key` from the primary to the node.

- Edit `/etc/gitlab/gitlab.rb` and add:

  ```ruby
  gitlab_rails['geo_registry_replication_enabled'] = true

  # Primary registry's hostname and port. It is used by the secondary
  # node to communicate directly with the primary registry.
  gitlab_rails['geo_registry_replication_primary_api_url'] = 'https://primary.example.com:5050/'
  ```

- Reconfigure the node for the change to take effect:

  ```shell
  gitlab-ctl reconfigure
  ```
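Because the JWT-based authentication only works when every node signs with the same key, it can be worth confirming that the copied `gitlab-registry.key` file is identical on every node. The following Rails console snippet is a minimal sketch that assumes the default Omnibus key path shown in the steps above; run it on the primary site and on each secondary node and compare the output:

```ruby
# Print a digest of the registry signing key so it can be compared across nodes.
# The path below is the default Omnibus location referenced in the steps above.
require 'digest'

key_path = '/var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key'
puts Digest::SHA256.file(key_path).hexdigest
```

If the digests differ between nodes, copy the key from the primary again and reconfigure.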
Verify replication
To verify container registry replication is working, on the secondary site:
- On the left sidebar, at the bottom, select Admin area.
- Select Geo > Sites. The initial replication, or "backfill", is probably still in progress.
You can monitor the synchronization process on each Geo site from the primary site's Geo Sites dashboard in your browser.
Troubleshooting
Confirm that container registry replication is enabled
You can check this in the Rails console:

```ruby
Geo::ContainerRepositoryRegistry.replication_enabled?
```
Missing container registry notification event
When container registry replication is working, the following chain of events occurs:

- When an image is pushed to the primary site's container registry, it should trigger a container registry notification.
- The primary site's container registry calls the primary site's API at `https://<example.com>/api/v4/container_registry_event/events`.
- The primary site inserts a record into the `geo_events` table with `replicable_name: 'container_repository', model_record_id: <ID of the container repository>`.
- The record gets replicated by PostgreSQL to the secondary site's database.
- The Geo Log Cursor service processes the new event and enqueues a `Geo::EventWorker` Sidekiq job.
To verify this is working correctly, push an image to the registry on the primary site, then run the following command in the Rails console to verify that the notification was received and processed into an event:

```ruby
Geo::Event.where(replicable_name: 'container_repository')
```
You can further verify this by checking `geo.log` for entries from `Geo::ContainerRepositorySyncService`.
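To see when the most recent event was generated, you can extend the query above slightly. This is a plain ActiveRecord query against the `Geo::Event` model shown earlier; no additional Geo tooling is assumed:

```ruby
# Most recent container repository event, if any, and when it was created.
event = Geo::Event.where(replicable_name: 'container_repository').order(:created_at).last

if event
  puts "Last event ##{event.id} was created at #{event.created_at}"
else
  puts "No container repository events found"
end
```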
Registry events logs response status 401 Unauthorized unaccepted
`401 Unauthorized` errors indicate that the primary site's container registry notifications are not being accepted by the Rails application, which prevents the registry from notifying GitLab that something was pushed.
To fix this, make sure that the authorization header sent with the registry notification matches the secret configured on the primary site, as described in Configure primary site.
`token from untrusted issuer: "<token>"` Registry error
To replicate a container image, Sidekiq uses a JWT to authenticate itself to the container registry. Geo replication requires that the container registry configuration has already been done correctly.
Make sure that both sites share a single signing key pair, as instructed in Configure secondary site, and that both container registries, as well as the primary and secondary sites, are all configured to use the same token issuer.
On multi-node deployments, make sure that the issuer configured on the Sidekiq node matches the value configured on the registries.
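To compare issuers, you can print the issuer that the Rails application uses when signing registry JWTs, on both the primary and the secondary site. Treat the `Gitlab.config.registry.issuer` accessor as an assumption based on the `registry` section of `gitlab.yml`; the value it returns should match the token issuer configured in each container registry:

```ruby
# Rails console on the primary and on the secondary site.
# NOTE: the setting name is an assumption based on the gitlab.yml registry section.
puts Gitlab.config.registry.issuer
```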
Manually trigger a container registry sync event
To help with troubleshooting, you can manually trigger the container registry replication process:
- On the left sidebar, at the bottom, select Admin area.
- Select Geo > Sites.
- In Replication Details for a Secondary Site, select Container Repositories.
- Select Resync for one row, or Resync all.
You can also manually trigger a resync by running the following commands on the secondary's Rails console:
```ruby
registry = Geo::ContainerRepositoryRegistry.first # Choose a Geo registry entry
registry.replicator.sync # Resync the container repository

pp registry.reload # Look at replication state fields
#<Geo::ContainerRepositoryRegistry:0x00007f54c2a36060
 id: 1,
 container_repository_id: 1,
 state: "2",
 retry_count: 0,
 last_sync_failure: nil,
 retry_at: nil,
 last_synced_at: Thu, 28 Sep 2023 19:38:05.823680000 UTC +00:00,
 created_at: Mon, 11 Sep 2023 15:38:06.262490000 UTC +00:00>
```
The `state` field represents the sync state:

- `"0"`: pending sync (usually means it was never synced)
- `"1"`: started sync (a sync job is currently running)
- `"2"`: successfully synced
- `"3"`: failed to sync
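For a quick overview of how many container repositories are in each of these states, you can group the registry records by state on the secondary site's Rails console. This is a plain ActiveRecord query against the model used above, not a dedicated Geo command:

```ruby
# Count container repository registry entries per sync state,
# for example: {"2"=>150, "3"=>2}
pp Geo::ContainerRepositoryRegistry.group(:state).count
```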