Engineering velocity · Friction Score

What is engineering friction — and why most teams can't measure it

March 10, 2026·6 min read

Ask ten engineering managers what slows their team down and you'll get ten different answers. "Our CI is flaky." "PRs sit for days." "We find out about prod issues from Slack, not our tools." These answers sound different but they describe the same root cause: engineering friction.

Defining friction

Engineering friction is any recurring force that interrupts the flow from code written to code shipped. It's not a bug or a one-off incident — it's a pattern. A CI pipeline that breaks twice a week. A PR queue that averages 72 hours to review. A deployment that fails silently 10% of the time.

The defining characteristic of friction is that it compounds. Each small interruption doesn't just cost the time to fix it — it costs context-switching overhead, team trust in the tools, and the cognitive load of tracking things manually.

Why most teams can't measure it

The problem with friction isn't that teams don't know it exists. It's that it's invisible in aggregate. GitHub has no "friction dashboard." Linear shows you open issues but not the pattern of how they got there. Datadog monitors your production systems, not your development workflow.

The result: teams know something is wrong, but can't point to what or how much. When you can't measure it, you can't prioritise fixing it, justify tooling investment, or track whether things are improving.

The five main friction patterns

  • CI failures — especially on the main branch, where a broken pipeline blocks the whole team.
  • Flaky tests — tests that alternate between passing and failing for the same code, causing teams to start ignoring CI results entirely.
  • Stale PRs — open pull requests with no review activity for 48+ hours. A symptom of reviewer overload or unclear ownership.
  • Deployment failures — the same deploy step failing repeatedly without a structured issue to track investigation.
  • TODO debt — TODO/FIXME comments committed to production without any tracking in the issue tracker.

Making friction measurable: the Friction Score

Deviera's Friction Score is a 0–100 composite metric computed from signal activity in your workspace. It weights signals by severity: critical events (main-branch CI failures, deployment outages) carry 4× weight; high-severity events (non-main CI failures, stale PRs, bug-labeled issues) carry 2×; medium events carry 1×.

A score under 25 means your team is shipping cleanly. 25–50 is normal for an active team. Above 50, friction is accumulating and worth addressing systematically. Above 75, engineering is under heavy load.
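In code, the scoring rule looks roughly like this. The 4×/2×/1× weights and the interpretation bands come straight from the description above; the normalisation constant (the weighted total that maps to 100) is a made-up assumption, since only the weights and bands are specified:

```python
# Severity weights as described above: critical 4x, high 2x, medium 1x.
WEIGHTS = {"critical": 4, "high": 2, "medium": 1}

def friction_score(counts: dict[str, int], budget: float = 200.0) -> float:
    """Weighted sum of severity-bucketed signal counts, scaled to 0-100.

    `budget` is an assumed normalisation constant: the weighted total that
    would max out the score. The real scaling is not specified here.
    """
    weighted = sum(WEIGHTS[sev] * n for sev, n in counts.items())
    return min(100.0, 100.0 * weighted / budget)

def band(score: float) -> str:
    """Interpretation bands quoted from the article."""
    if score < 25:
        return "shipping cleanly"
    if score <= 50:
        return "normal for an active team"
    if score <= 75:
        return "friction accumulating"
    return "heavy load"
```

For example, a week with 5 critical, 10 high, and 20 medium signals gives a weighted total of 60, which lands in the "normal for an active team" band under this assumed scaling.
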

What to do with the number

The Friction Score is most useful as a trend, not an absolute. A score of 40 isn't alarming on its own — but a score that was 15 three weeks ago and is now 40 is a clear signal that something changed. Track it weekly alongside your deployment frequency and you'll start to see how friction correlates with shipping velocity.
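A trend check like the one described, flagging a jump rather than a high absolute value, can be sketched in a few lines; the 20-point threshold here is an arbitrary example, not a Deviera default:

```python
def trend_alert(history: list[float], jump: float = 20.0) -> bool:
    """Flag when the score has risen by `jump` points or more above its
    low point in the window (e.g. weekly scores over the last month).

    The 20-point default threshold is an illustrative assumption.
    """
    return len(history) >= 2 and history[-1] - min(history) >= jump
```

A team whose weekly scores went 15 → 22 → 40 trips the alert; a team hovering around 40 all month does not, even though both end the window at the same number.
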
