DORA metrics aren’t enough on their own. Here's how dev teams can make the leap to elite performance by focusing on pull request size and dev workflow while improving their cycle time.
What metric did they use to determine what “top 10%” means? Because that’s the part of this that seems most ridiculous to me given how situation-dependent most engineering decisions are. To illustrate with an extreme example: is “daily+ deployment frequency” a sign of an amazing engineering org if the thing being deployed is updates to your heart monitor firmware?
The “DORA guys” came to our org a while back and sang the usual song of “all successful teams do this, so you should too”. One of my questions, which went unanswered, was whether they had analysed any negative cases to check that their suggestions actually work and actually contribute to reducing cycle times and so on.
And most of the time my cycle time depends more on the number of meetings I have to attend during the day than on anything even remotely related to coding.
I understand what DORA tries to do, but what they achieve is just another cargo cult.
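To make the meetings point concrete: here is a minimal sketch (all timestamps and event names are invented for illustration, not from any real DORA tooling) of how a PR's cycle time breaks down when review and deployment are queued behind calendars. Cycle time is typically measured from first commit to deploy, and in a timeline like this almost all of it is waiting, not coding.

```python
from datetime import datetime, timedelta

# Hypothetical timeline for one pull request (illustrative values only).
events = {
    "first_commit":   datetime(2024, 5, 6, 9, 0),
    "pr_opened":      datetime(2024, 5, 6, 11, 0),
    "review_started": datetime(2024, 5, 7, 15, 0),   # reviewer booked in meetings all day
    "approved":       datetime(2024, 5, 7, 15, 30),
    "deployed":       datetime(2024, 5, 8, 10, 0),   # waited for the next release slot
}

# Cycle time as commonly measured: first commit -> deploy.
cycle_time = events["deployed"] - events["first_commit"]

# Active coding time here is just commit -> PR opened; the rest is queueing.
coding_time = events["pr_opened"] - events["first_commit"]
wait_time = cycle_time - coding_time

print(f"cycle time:  {cycle_time}")
print(f"coding time: {coding_time}")
print(f"wait time:   {wait_time} ({wait_time / cycle_time:.0%} of cycle)")
```

With these made-up numbers, roughly 96% of the cycle is wait time, which is why optimising the coding part barely moves the metric.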
Same problem with “top 10%”.