CI/CD Pipeline Explained: From Code to Production in Simple Steps
CI/CD pipelines explained for beginners: what CI and CD mean, how pipelines work, a GitHub Actions example, and common mistakes to avoid.

I pushed to main on a Friday afternoon. No tests, no staging, no nothing. Just git push origin main and a deploy script that ran on every merge. I closed my laptop, drove home, and started making dinner.

By the time the pasta was boiling, my phone was buzzing. Slack. PagerDuty. A text from my manager that just said “site is down.” I spent the next four hours rolling back a three-line CSS change that somehow broke the login form in production. On a Friday night. With cold pasta.

That was the night I decided to learn what a CI/CD pipeline actually is, instead of pretending the deploy script was one.

CI vs CD vs CD (Yes, There Are Two CDs)

The acronym soup trips everyone up. Let me untangle it.

Continuous Integration (CI) is the practice of merging code into a shared branch frequently — multiple times a day, ideally — and running automated checks every time. Tests, linting, security scans, whatever your team deems important. The point is catching problems early, when the change is small and the context is still fresh in your head.

Before CI, teams would develop in isolation for weeks, then merge everything at once. That merge day was always a disaster. CI makes the pain small and continuous instead of large and catastrophic. Small paper cuts over compound fractures.

Continuous Delivery (CD #1) means your code is always in a deployable state. Every commit that passes the pipeline could go to production. But a human still pushes the button. You’ve got a release candidate sitting there, tested and packaged, waiting for someone to say “go.”

Continuous Deployment (CD #2) removes the human. Every commit that passes all checks goes straight to production automatically. No button. No waiting. Scary? A little. But if your tests are solid, it’s actually less risky than batching up two weeks of changes and deploying them all at once while crossing your fingers.

Most teams start with CI, add Continuous Delivery, and only move to Continuous Deployment once they trust their test suite. That trust takes time. And it should.
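In GitHub Actions terms (the tool used throughout this post), the difference between the two CDs often comes down to a manual approval gate. Here's a sketch, assuming a `production` environment has been configured in the repo settings with required reviewers — that one setting is what separates Delivery from Deployment:

```yaml
# Continuous Delivery sketch: the deploy job pauses until a human approves.
# Assumes a "production" environment exists in repo settings with
# required reviewers enabled. Remove the reviewers (or the environment)
# and the same job becomes Continuous Deployment.
deploy:
  runs-on: ubuntu-latest
  environment: production   # GitHub pauses here and waits for approval
  steps:
    - run: ./scripts/deploy.sh
```

Same pipeline, same steps. The only difference is whether a human stands between "tests passed" and "it's live."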

A Pipeline in Plain English

Strip away the tooling and a pipeline is just a series of automated steps your code goes through between “I wrote it” and “users see it.” Here’s the flow:

1. Commit. You push code to a repository. Git picks it up. This is the trigger — the starting gun.

2. Build. The pipeline compiles your code, installs dependencies, bundles assets. Whatever “getting the code ready” means for your stack. For a Node app, that’s npm install and maybe a webpack build. For Go, it’s go build. If this step fails, nothing else runs. No point testing code that doesn’t compile.

3. Test. Unit tests first — fast, isolated, covering individual functions. Then integration tests — checking that components talk to each other correctly. Maybe end-to-end tests if you’ve got them. The pipeline stops at the first failure. This is the safety net. This is what would have caught my Friday disaster.

4. Quality checks. Linting, code formatting, security scanning, maybe a check for test coverage thresholds. Some teams gate on all of these. Others pick the ones that matter most to them. The point is consistency — these checks run every single time, not just when someone remembers.

5. Deploy to staging. If everything passes, the code lands in an environment that mirrors production. This is where you catch the stuff automated tests miss — visual bugs, performance issues, weird edge cases that only show up with real-ish data.

6. Deploy to production. The final step. Either a human approves it (Continuous Delivery) or it happens automatically (Continuous Deployment). Either way, the code is now live.

That’s it. Six steps. Every CI/CD system in the world is some variation of this sequence, with extra steps bolted on depending on the team’s needs.

A Real Example with GitHub Actions

Theory is great. Let me show you what this actually looks like. Here’s a GitHub Actions workflow for a Node.js app — the kind of thing you’d put in .github/workflows/ci.yml:

```yaml
name: CI/CD Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build-and-test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Install dependencies
        run: npm ci

      - name: Run linter
        run: npm run lint

      - name: Run tests
        run: npm test

      - name: Build
        run: npm run build

  deploy:
    needs: build-and-test
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Deploy to production
        run: ./scripts/deploy.sh
        env:
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
```

Let me walk through this.

The on block defines triggers. This pipeline runs on every push to main and every pull request targeting main. Pull requests get the build and test steps — you find out if your code is broken before it merges. Pushes to main get the full pipeline, including deploy.

build-and-test is a job. It runs on a fresh Ubuntu machine. Steps execute in order: clone the repo, set up Node, install dependencies with npm ci (deterministic installs, not npm install), lint, test, build. Any step fails, the whole job fails.

deploy is a separate job that only runs after build-and-test succeeds (needs: build-and-test). The if condition restricts it to pushes on main — pull requests don’t trigger deploys. The deploy script uses a secret token stored in GitHub’s secrets, never hardcoded.

Forty-odd lines of YAML. That's enough to catch most of the catastrophic problems. My Friday pasta disaster would have been stopped at the test step, and I'd have gotten a red check on my pull request instead of a PagerDuty alert during dinner.

Common Tools (Pick One, Learn It Well)

The CI/CD tool market is crowded. Here are the ones that matter:

GitHub Actions. Built into GitHub. Free for public repos, generous free tier for private ones. If your code is already on GitHub, start here. The ecosystem of pre-built actions is huge, and the YAML syntax is straightforward once you get past the initial learning curve.

GitLab CI/CD. Built into GitLab. Similar idea, different YAML format. If your team uses GitLab, don’t fight it. GitLab’s CI is genuinely good — some would argue better than GitHub Actions for complex pipelines.
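To show what "similar idea, different YAML format" means, here's a minimal `.gitlab-ci.yml` sketch for the same Node app — not a full pipeline, just the shape:

```yaml
# Minimal GitLab CI sketch for a Node.js app.
# Stages run in order; jobs within a stage run in parallel.
stages:
  - test
  - deploy

test:
  stage: test
  image: node:20
  script:
    - npm ci
    - npm run lint
    - npm test

deploy:
  stage: deploy
  image: node:20
  script:
    - ./scripts/deploy.sh
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'   # deploy only from main
```

Same trigger-build-test-deploy loop, different vocabulary: jobs belong to stages instead of declaring `needs`, and `rules` replaces the `if` condition.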

Jenkins. The old guard. Open source, self-hosted, infinitely configurable. Also infinitely frustrating to maintain. Jenkins can do anything, but it takes more work to set up and keep running. I’ve seen Jenkins servers that were treated like pets — named, pampered, feared. If your company already runs Jenkins, you’ll learn Jenkins. If you’re starting fresh, pick something hosted.

CircleCI. Cloud-hosted, clean interface, good Docker support. Popular with startups. Does the job without much fuss.

Honestly, the tool matters less than you think. The concepts transfer. I’ve used all four, and the mental model is the same everywhere: trigger, build, test, deploy. The YAML is different. The dashboards are different. The core loop is identical.

If you’re just learning, use GitHub Actions. It’s free, it’s where most open-source projects live, and you’ll have something running in under an hour.

Pipeline Anti-Patterns (Learn From My Mistakes)

I’ve built bad pipelines. More than I’d like to admit. Here’s what I learned the hard way.

No tests in the pipeline. A pipeline that only builds and deploys is a glorified FTP script. The whole point is catching problems before production. If your pipeline doesn’t run tests, it’s not a CI/CD pipeline. It’s automated recklessness.

Tests that take 45 minutes. I worked on a project where the test suite took nearly an hour. People started pushing without waiting for results. They’d merge PRs with a “it’s probably fine” attitude, and then it wasn’t fine. If your pipeline is too slow, people will route around it. Optimize the slow tests. Parallelize. Run the fast ones first so you fail early.
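One way to fail early in GitHub Actions is to split the fast checks into their own job so lint and unit tests report in a couple of minutes while the slow suite runs in parallel. A sketch, assuming the npm scripts from the earlier workflow plus a hypothetical `test:e2e` script for the slow suite:

```yaml
jobs:
  fast-checks:          # lint + unit tests: the quick red/green signal
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint
      - run: npm test

  slow-tests:           # runs in parallel with fast-checks, not after them
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run test:e2e   # hypothetical slow end-to-end suite
```

Developers get the fast verdict quickly; the expensive tests still gate the merge, they just don't block the feedback.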

Deploying on Friday afternoon. I already told you this story. Don’t deploy on Friday. Or do, but only if you have a solid rollback strategy and nothing planned for the evening. And maybe not even then.

Secrets in the repo. I’ve seen API keys committed to .github/workflows/ci.yml in plain text. Once. That repo was public. The key was for a paid service. You can guess how that ended. Use your CI tool’s secret management. Every single one has it.
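With GitHub, proper secret storage is one CLI command away. A sketch, assuming the GitHub CLI (`gh`) is installed and authenticated against your repo:

```shell
# Store a secret in the repository (prompts for the value interactively,
# so it never lands in your shell history or the repo).
gh secret set DEPLOY_TOKEN

# Or pipe the value in from an environment variable:
printf '%s' "$DEPLOY_TOKEN" | gh secret set DEPLOY_TOKEN
```

The workflow then reads it as `${{ secrets.DEPLOY_TOKEN }}`, exactly like the deploy job shown earlier. The value never appears in the YAML, the logs, or the diff.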

Ignoring flaky tests. A test that passes 90% of the time is not a test. It’s a random number generator that occasionally tells you your code is broken. Fix flaky tests immediately or delete them. Don’t just re-run the pipeline until it goes green — that teaches your team to distrust the whole system.

No staging environment. Deploying straight from “tests pass” to production is bold. Sometimes too bold. A staging environment catches the things tests miss. Layout bugs. Performance regressions. That weird thing that only happens when the database has more than ten thousand rows.

Your First Pipeline: Step by Step

If you’ve never set up a pipeline before, here’s how to get started. Twenty minutes, tops.

  1. Create a GitHub repository with a simple project. A Node.js app with a few tests, a Python script with pytest, whatever your stack is. The app doesn’t matter. The pipeline does.

  2. Create the workflow file. Make a .github/workflows/ directory in your repo. Add a file called ci.yml. Paste the example from above, adjusted for your language.

  3. Push it. Commit and push. Go to the “Actions” tab in your GitHub repo. Watch it run. There’s something satisfying about watching those green checkmarks appear one by one. And something useful about watching a red X appear when you deliberately break a test.

  4. Break something on purpose. Write a failing test. Push it. Watch the pipeline catch it. This is the moment it clicks — the pipeline is your safety net, not a bureaucratic obstacle.

  5. Add a deploy step. Even if it just echoes “Deploying…” for now. The structure matters. Later, you’ll replace that echo with a real deploy command.

  6. Set up branch protection. In your repo settings, require the CI check to pass before merging pull requests. This is where the pipeline goes from “nice to have” to “actually enforced.”
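The placeholder deploy job from step 5 can be this small — a sketch to bolt onto the workflow from earlier, with the echo standing in for a real deploy command:

```yaml
  deploy:
    needs: build-and-test
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - name: Deploy (placeholder)
        run: echo "Deploying..."   # swap in ./scripts/deploy.sh later
```

The `needs` and `if` lines are the part worth keeping: they encode "only deploy what passed, and only from main" before you've written a single line of real deploy logic.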

That’s a working CI/CD pipeline. It’s basic, but it’s real. From here, you add complexity as you need it — caching dependencies to speed things up, matrix builds to test across multiple versions, deployment to cloud environments. If you’re headed toward cloud infrastructure, our guide to learning cloud computing from scratch covers the foundation you’ll need for production deployments.
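Two of those upgrades are a few lines each in GitHub Actions: dependency caching via `actions/setup-node`'s built-in `cache` option, and a matrix build across Node versions. A sketch:

```yaml
  build-and-test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18, 20, 22]   # the job runs once per version
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: npm               # caches ~/.npm between runs
      - run: npm ci
      - run: npm test
```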

But the skeleton is always the same: trigger, build, test, deploy.

Where CI/CD Meets the Bigger Picture

CI/CD doesn’t exist in a vacuum. It’s one piece of a broader ecosystem. Infrastructure as Code tools like Terraform define where your code runs. Container orchestration with Kubernetes manages how it scales. Monitoring tools like Prometheus tell you if it’s actually working after deployment.

If you’re thinking about certifications to formalize this knowledge, our cloud certification decision tree can help you figure out which path makes sense for where you are right now.

But CI/CD is the piece that ties the workflow together. Without it, everything else is manual. With it, you ship faster and sleep better. Literally. I sleep better now that I have pipelines. Friday nights included.


FAQ

Do I need to know Docker to use CI/CD?

No. Docker is common in pipelines — most CI runners use containers under the hood — but you don’t need to understand Docker to set up a basic pipeline. GitHub Actions, for example, handles the container stuff for you. You just write your steps and it runs them. That said, once you start building more complex pipelines with custom environments, Docker knowledge becomes genuinely useful.
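When you do want a specific container environment, GitHub Actions makes it one line on the job — a sketch:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    container: node:20   # steps run inside this image instead of on the VM
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test
```

Useful when your app needs a pinned runtime or system libraries the default runner image doesn't have.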

What’s the difference between CI/CD and DevOps?

DevOps is the broader culture and set of practices around collaboration between development and operations. CI/CD is a specific technical practice within that. You can do CI/CD without calling yourself a DevOps team, and you can claim to be a DevOps team without actually having CI/CD — though I wouldn’t recommend that last one.

How long should a pipeline take to run?

Under ten minutes is the target. Under five is ideal. Once you pass fifteen minutes, developers start context-switching while waiting, and that kills momentum. If your pipeline is slow, look at parallelizing tests, caching dependencies, and only running the full suite on main while PRs get a faster subset.
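That last trick — the full suite on main only — is a one-line `if` on a step in GitHub Actions. A sketch, assuming a hypothetical `test:full` script for the slow suite:

```yaml
      - name: Fast tests (every push and PR)
        run: npm test

      - name: Full suite (main only)
        if: github.ref == 'refs/heads/main'
        run: npm run test:full   # hypothetical slow/full suite
```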

Can I use CI/CD for personal projects?

Absolutely. I use it for everything, even tiny side projects. It takes five minutes to set up and saves you from the “it works on my machine” problem forever. Plus, having CI/CD in your personal repos looks good on a resume. It signals that you care about code quality even when nobody’s watching.

