Key takeaways
- Save engineering hours
- Reduce platform migration effort
- Cheap continuous improvement
The motivation: why we needed something better
When we first adopted Bitbucket Pipelines, we did what most teams do: we wrote pipeline configurations directly in each repository. It was simple, and it worked. But as our organization grew, cracks started to appear. Here’s what pushed us toward building something better.
The maintenance burden
Every change to our build process required updating configurations across all repositories. Finding and fixing bugs was even worse—each project had its own slight variations, so we’d fix an issue in one place only to discover it existed in a different form somewhere else. What should have been a five-minute fix often turned into a full day of repository hopping.
Standardization across projects
Beyond basic builds, we wanted consistent behavior across all projects for things like:
- Automated documentation generation — ensuring every project produces up-to-date documentation in the same format.
- Code quality checks — linting, formatting verification, and static analysis.
- Security scanning — dependency vulnerability checks before deployment.
- Versioning and tagging — consistent semantic versioning across all releases.
- Artifact publishing — packages, images, and modules all follow the same conventions.
With per-project pipelines, each repository implemented these differently (or not at all). Some projects had thorough documentation steps; others had none. Some ran security scans; others skipped them temporarily and never added them back. Standardization was impossible to enforce.
Project growth
What starts as five repositories quickly becomes fifteen, then thirty, then more. Each new project copies a pipeline from an existing one, makes a few tweaks, and immediately begins drifting. Within months, you have dozens of slightly different implementations of what should be the same process.
Vendor lock-in
We’ve been burned before. Years ago, we used Atlassian Bamboo for CI/CD. When Atlassian shifted its focus to Bitbucket Pipelines, we migrated—a painful process that affected every project.
Each time you write vendor-specific pipeline syntax directly in your repositories, you’re accumulating migration debt. The more projects you have, the more expensive that debt becomes when it’s time to pay it off. We wanted an architecture that would make the next migration painless.
First attempt: centralized shell scripts
Our first step toward a solution was moving build logic into versioned shell scripts inside our Docker image. This gave us centralization—we could update and fix bugs in one place rather than across every repository.
Here’s a snippet from one of those scripts—just the NuGet publishing portion:
```bash
#!/usr/bin/env bash

# Global variables
_NUGETS_TO_PUBLISH=()
_PUBLISH_NUGETS="DISABLED"
_CURRENT_ARG=""

# Parameter parsing
while ((${#}))
do
    __opt="${1}"
    shift
    case "${__opt}" in
        --nugets)
            _PUBLISH_NUGETS="ENABLED"
            _CURRENT_ARG="NUGETS"
            ;;
        *)
            if [ "${_CURRENT_ARG}" == "NUGETS" ]
            then
                _NUGETS_TO_PUBLISH+=("${__opt}")
            fi
            ;;
    esac
done

# Publish packages
if [ "${_PUBLISH_NUGETS}" == "ENABLED" ]
then
    _TEMP=${#_NUGETS_TO_PUBLISH[@]}
    if [ ${_TEMP} -eq 0 ]
    then
        printf "Could not parse nuget parameters: %s\\n" "${_NUGETS_TO_PUBLISH[*]}"
        exit 1
    fi
    for i in "${_NUGETS_TO_PUBLISH[@]}"
    do
        dotnet pack "src/${i}" -o nugets -c Release --no-build
    done
    dotnet nuget push "nugets/*.nupkg" -k "${SONATYPE_API_KEY}" \
        -s "https://${SONATYPE_HOST}/repository/nuget-releases/"
fi
```
And this was just one of many sections. The full script handled Docker builds, NPM publishing, test execution, documentation generation, license headers, and more—all in a single 300+ line bash script.
But shell scripts came with their own problems. The syntax is notoriously confusing and prone to subtle bugs. Error messages are cryptic and unhelpful. Injecting environment variables into commands requires careful escaping, and getting that escaping wrong leads to silent failures or security issues. Proper error handling is essentially nonexistent without verbose boilerplate.
The final nail in the coffin was the feedback loop. Our Docker image, loaded with build tools, takes over 20 minutes to build. Every change to a shell script meant rebuilding that image. A simple typo? That’s 20 minutes wasted. Forgot to handle an edge case? Another 20 minutes. This overhead made iteration painfully slow and mistakes expensive.
We needed a better approach.
Our solution: a parameterized Python build program
The breakthrough came when we moved our build logic out of the Docker image entirely. Instead of shell scripts baked into the image, we created a Python program that lives in its own repository. The key insight: we pull and install this program as a step within the pipeline itself, then call it with parameters in the next step.
This simple change eliminated the 20-minute feedback loop completely. The Docker image only needs to be rebuilt when we update build tools, which happens rarely. The Python program? We can update it, push, and see results in seconds on the next pipeline run.
Python solved nearly all the pain points we had with shell scripts. Clear error messages. Proper exception handling. Easy string manipulation without escaping nightmares. Compare the shell script above to the Python equivalent:
```python
def publish_projects_as_nuget_packages(build: DotnetBuild) -> None:
    if not build.projects_to_publish_as_nuget_package:
        return
    for project in build.projects_to_publish_as_nuget_package:
        run_command(
            build,
            ["dotnet", "pack", project.path, "--no-restore",
             "--output", "nugets", "--configuration", "Release"],
        )
    for file_to_publish in glob.glob("nugets/*.nupkg"):
        run_command(
            build,
            ["dotnet", "nuget", "push", file_to_publish,
             "-k", build.sonatype_api_key,
             "-s", f"https://{build.sonatype_host}/repository/nuget-releases/"],
            retry_attempts=10,
            retry_delay=5.0,
        )
```
Note the retry_attempts parameter: adding retry logic in bash would have been another 20 lines of boilerplate. In Python, it's a function parameter.
What this means in practice
Let’s compare where we started and where we are now.
The starting point (per-project pipeline files)
Each project contained its own pipeline configuration—often 50-100 lines that handled building, testing, and deploying. A typical pipeline looked something like this:
```yaml
image: our-build-image:latest

pipelines:
  default:
    - step:
        name: Build and Test
        script:
          - dotnet restore
          - dotnet build --configuration Release
          - dotnet test IntegrationTest --logger trx
          - dotnet test UnitTest --logger trx --collect:"XPlat Code Coverage"
          - reportgenerator -reports:**/coverage.cobertura.xml -targetdir:coverage
    - step:
        name: Publish NuGet Packages
        script:
          - dotnet pack Shared.Protocols -c Release -o ./packages
          - dotnet pack Shared.Client -c Release -o ./packages
          - dotnet nuget push ./packages/*.nupkg --source $NUGET_SOURCE --api-key $NUGET_API_KEY
    - step:
        name: Build and Push Docker Images
        services:
          - docker
        script:
          - docker build -t $REGISTRY/api:$BITBUCKET_COMMIT -f Api/Dockerfile .
          - docker build -t $REGISTRY/worker:$BITBUCKET_COMMIT -f Worker/Dockerfile .
          - echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin $REGISTRY
          - docker push $REGISTRY/api:$BITBUCKET_COMMIT
          - docker push $REGISTRY/worker:$BITBUCKET_COMMIT
```
And this is a simplified example—real pipelines often have deployment steps, environment-specific logic, and various workarounds accumulated over time. Now multiply this by dozens of repositories.
Where we are now (centralized Python program)
The pipeline file in each project has shrunk to just the essentials: specify the Docker image, install the build program, and call it with parameters that describe what to build. Here’s a complete pipeline:
```yaml
image: our-build-image:latest

pipelines:
  default:
    - step:
        name: Build, Test, and Publish
        script:
          - uv tool install our-internal-pipeline-builder
          - |
            dotnet-builder \
              --project-to-test 'IntegrationTest' \
              --project-to-test 'UnitTest' \
              --project-to-cover 'Api' \
              --project-to-publish-as-nuget-package 'Shared.Protocols' \
              --project-to-publish-as-nuget-package 'Shared.Client' \
              --project-to-publish-as-docker-image 'Api' \
              --project-to-publish-as-docker-image 'Worker'
```
That’s the entire file. All the complexity—test execution, code coverage, package publishing, Docker builds—lives inside the builder program. The pipeline just declares what to build, not how. Adding a new project takes minutes: copy the template, adjust the parameters, and done.
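To show how a CLI like this can collect repeatable flags into lists, here is a sketch using Python's argparse. The flag names mirror the pipeline example above, but the builder's internals are not public, so the parser structure itself is an assumption.

```python
# Hypothetical sketch of the builder's argument parsing; only the flag
# names are taken from the pipeline example, the rest is illustrative.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="dotnet-builder")
    # action="append" lets a flag repeat, collecting every value into a list
    parser.add_argument("--project-to-test", action="append", default=[],
                        dest="projects_to_test")
    parser.add_argument("--project-to-cover", action="append", default=[],
                        dest="projects_to_cover")
    parser.add_argument("--project-to-publish-as-nuget-package",
                        action="append", default=[],
                        dest="projects_to_publish_as_nuget_package")
    parser.add_argument("--project-to-publish-as-docker-image",
                        action="append", default=[],
                        dest="projects_to_publish_as_docker_image")
    return parser


args = build_parser().parse_args([
    "--project-to-test", "IntegrationTest",
    "--project-to-test", "UnitTest",
    "--project-to-publish-as-nuget-package", "Shared.Protocols",
])
print(args.projects_to_test)  # ['IntegrationTest', 'UnitTest']
```

A flag that is never passed simply leaves its list empty, which is why the same builder can serve projects that publish packages, images, both, or neither.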
The trade-offs
Initial investment versus long-term payoff
Building this system wasn’t free. We went through two iterations—first the shell scripts (which taught us what not to do), then the Python program. The full journey took around three to four months of experimentation and refinement.
But consider the alternative: we now have dozens of projects, and the build system handles everything from testing to publishing NuGet packages and Docker images. Without centralization, every new feature or bug fix would need to be implemented separately in each project. The math becomes obvious quickly.
Where complexity lives
| Per-project pipelines | Centralized build program |
| --- | --- |
| Every developer writes and debugs pipeline logic | Developers focus on code; pipeline team handles build logic |
| Tribal knowledge about “how project X builds” | Single source of truth with documentation |
| New hires spend days learning each project’s quirks | New hires learn one system that works everywhere |
Runtime overhead
Installing the Python program adds roughly 5-10 seconds to each build. For builds that run 10-20 minutes, this is negligible. The trade-off is worth it: those seconds buy us instant updates to build logic across all projects, without touching any repository.
The hidden benefit: fearless changes
Before, adding a new feature to our pipeline—like the retry logic we mentioned earlier—meant updating dozens of repositories and hoping we didn’t miss any or introduce subtle bugs along the way. Now, we add it once, test it once, and every project benefits on the next build. This confidence has made us far more willing to improve our build process over time.
The next challenge: platform migration
Remember the vendor lock-in problem we mentioned earlier? Our journey from Bamboo to Bitbucket taught us that migrations are painful. But we’re about to put our architecture to the test—and this time, we’re prepared.
The Bitbucket pricing shift
In December 2025, Atlassian announced changes to the pricing for Bitbucket Pipelines’ self-hosted runners. Previously, running builds on your own infrastructure was free—you provided the servers, Bitbucket provided the orchestration at no additional cost.
Under the new model, self-hosted runners will cost $15 per concurrent build slot per month. While Atlassian has indicated they’re reconsidering this approach in response to community feedback, the announcement was a wake-up call.
Why this hits us hard
Our CI/CD strategy is built around offloading as much work as possible to automation, freeing developers to focus on complex tasks that drive the software forward. This means we run a lot of pipelines:
- Automated dependency updates: We use Renovate to automatically detect and upgrade dependencies. When a new version is detected, a slim pipeline runs on the Renovate branch to verify that builds and tests still pass. If everything succeeds, the update is auto-merged—no human intervention required. This alone generates a steady stream of pipeline runs.
- Periodic performance testing: Performance tests are essential but time-consuming. Running them on every push would double or triple our pipeline time, so we run dedicated performance test pipelines periodically or on demand. These can run for hours, committing benchmark results back to the project for tracking over time.
- Feature branch pipelines: Not every branch needs the full build treatment. Our feature pipelines are slimmer—versioning works differently, and some publishing steps are disabled. This keeps feedback fast for work-in-progress code.
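For illustration, an automerge policy like the one described above can be expressed in a renovate.json along these lines. This is a sketch of Renovate's standard configuration options, not our exact setup:

```json
{
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true
    }
  ]
}
```

With a rule like this, Renovate opens the update branch, the slim verification pipeline runs, and a passing build is merged without human intervention.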
All of this runs in parallel, around the clock. Self-hosted runners let us handle this workload without worrying about per-minute costs or concurrency limits. A per-slot pricing model changes that equation entirely—and validated every concern that pushed us toward centralization in the first place.
Why we're exploring Tekton pipelines
We’re exploring Tekton Pipelines as our migration path. Tekton is an open-source, Kubernetes-native CI/CD framework that reached its 1.0 milestone in mid-2025—a significant marker of production readiness after six years of development.
What makes Tekton appealing
- Vendor independence: Tekton is a Cloud Native Computing Foundation project backed by Google, Red Hat, IBM, and others. No single company controls its roadmap or pricing. The project will exist regardless of any one vendor’s business decisions.
- Kubernetes-native architecture: If your organization already runs Kubernetes, Tekton feels natural. Pipelines are defined as Kubernetes custom resources—the same YAML patterns your operations team already understands. Tasks run as Pods, which means all your existing Kubernetes knowledge applies: resource limits, node selectors, secrets management, and monitoring.
- True self-hosting: You run Tekton on your own infrastructure. The only costs are the compute resources you already control. No per-seat licensing, no per-minute billing, no surprise pricing changes.
What migration looks like for us
This is where our centralized build system proves its value. A team with per-project pipelines would need to rewrite every single pipeline file in Tekton’s format. For us, the migration scope is dramatically smaller:
- Our Docker image remains unchanged—it contains build tools, not pipeline logic.
- Our Python build program remains unchanged—it doesn’t know or care what invokes it.
- We write one Tekton pipeline template that installs and calls our build program.
- Each project gets a thin Tekton configuration that passes the same parameters.
The pipeline syntax changes from Bitbucket’s format to Tekton’s format, but our actual build logic stays exactly the same. We estimate a migration effort of days rather than months—a fraction of what it would take to rewrite dozens of pipelines.
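As a rough sketch of what that template might look like, a single Tekton Task could install and invoke the builder with the same parameters the Bitbucket pipeline passes today. The Task name and the builder-args parameter below are illustrative assumptions, not our actual configuration:

```yaml
# Hypothetical Tekton Task; names and parameters are illustrative.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: run-pipeline-builder
spec:
  params:
    - name: builder-args
      type: string
      description: Parameters forwarded verbatim to dotnet-builder
  steps:
    - name: build
      image: our-build-image:latest
      script: |
        uv tool install our-internal-pipeline-builder
        dotnet-builder $(params.builder-args)
```

Each project would then supply only its parameter string, exactly as it does in its Bitbucket pipeline file today.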
Key takeaways for decision makers
Trade-offs at a glance
| Aspect | Per-project pipelines | Centralized build program |
| --- | --- | --- |
| Initial setup time | Minimal per project | Months of up-front investment |
| Adding a new project | Copy, tweak, and drift | Minutes: copy the template, adjust parameters |
| Bug fixes and cross-cutting changes | Repeated across every repository | Implemented once, applied everywhere |
| Onboarding new developers | Days learning each project’s quirks | One system to learn |
| Platform migration effort | Rewrite every pipeline file | One template plus thin per-project configs |
| Runtime overhead | None | Roughly 5-10 seconds per build |
KPIs from our experience
These numbers are approximations based on our journey, but they give a sense of the impact:
| Metric | Before (per-project) | After (centralized) |
| --- | --- | --- |
| Time to fix a build bug | Up to a full day of repository hopping | Minutes, fixed in one place |
| Time to add a new pipeline feature | Dozens of repository updates | One change, one test |
| Pipeline configuration lines per project | 50-100 | Around 15 |
| Developer time on pipeline maintenance | Significant and recurring | Close to zero |
| Confidence in making changes | Low | High |
| Estimated migration effort (30 projects) | Months | Days |
Principles worth remembering
If you’re evaluating your CI/CD strategy, here’s what our journey taught us:
- Invest in abstraction early. The cost of centralizing only grows as repositories multiply; five projects are easy to migrate, thirty are not.
- Separate tools from logic. Keep build tools in the Docker image and build logic in a versioned program, so each can evolve independently and a one-line fix never costs a 20-minute image rebuild.
- Treat vendor lock-in as a certainty, not a risk. Platforms change pricing and direction; architect so the next migration is cheap.
- Consider Tekton if you're already on Kubernetes. Pipelines defined as custom resources reuse the operational knowledge your team already has.
Conclusion
The shift from Bitbucket Pipelines to Tekton isn’t just about avoiding a pricing change—it’s about taking control of a critical piece of our development infrastructure. Our custom build approach made that kind of flexibility possible from day one. When the next platform shift comes (and it will), we’ll be ready again.