Friday, November 1, 2024

Introduction to DORA Metrics: The Four Keys to Measuring DevOps Performance

DORA (DevOps Research and Assessment) metrics are four essential measurements used to evaluate a software development team's performance. Developed by Google's DORA team over six years of research, these metrics are based on insights gathered from more than 31,000 professionals worldwide.

The four key metrics are:

#1. Deployment Frequency (DF)
#2. Lead Time for Changes (LT)
#3. Mean Time to Recovery (MTTR)
#4. Change Failure Rate (CFR)

Why are DORA metrics important?

  • They help you predict software delivery performance
  • They focus on outcomes rather than outputs
  • They are language and technology agnostic
  • They help you identify areas for improvement

Let's look into each metric in detail.

#1. Deployment Frequency (DF)

    Deployment Frequency measures how often code changes are deployed to production, reflecting the agility of the development team in delivering updates, new features, and bug fixes. This metric directly indicates the team's ability to bring value to end users quickly and is essential for assessing the efficiency of the deployment process. The cadence can be daily, weekly, or monthly, and the metric gives insight into how frequently the team can deliver incremental improvements to users.

Why is DF important?
    A higher Deployment Frequency shows that the team can deliver new features, bug fixes, and enhancements quickly and consistently. Frequent deployments suggest a team can respond rapidly to customer feedback, market changes, critical bug fixes, and security vulnerabilities.

How to Calculate? 
    To determine this metric value, count the total number of deployments made to production in a given time period; this can be per day, per week, or per month.
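
As a rough illustration (a minimal sketch, not an official DORA tool; the timestamps below are invented), the following C# snippet counts production deployments over a period and expresses the result per week:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical production deployment timestamps exported from a CI/CD tool.
    var deployments = new List<DateTime>
    {
        new(2024, 10, 1), new(2024, 10, 3), new(2024, 10, 8),
        new(2024, 10, 15), new(2024, 10, 22), new(2024, 10, 29),
    };

    // Length of the observation window, in days.
    var days = (deployments.Max() - deployments.Min()).TotalDays + 1;

    // Deployment Frequency expressed per week.
    Console.WriteLine($"Deployments per week: {deployments.Count / (days / 7.0):F1}");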

Performance Levels

Elite: Teams in this category perform at the highest level of efficiency, deploying multiple times per day. This level indicates a highly automated and well-maintained deployment pipeline with minimal or no friction.

High: Teams deploying between once per week and once per day. High-frequency deployments demonstrate the team's capability to roll out updates regularly without needing daily releases.

Medium: Teams deploying between once per month and once per week. This level suggests some stability in the deployment process but may indicate manual steps or longer testing cycles that slow down deployment.

Low: Teams deploying less frequently than once per month. This indicates a more traditional approach, with longer release cycles. There are opportunities to increase automation, speed up testing, and improve processes.

#2. Lead Time for Changes (LT)

    Lead Time for Changes measures the time it takes for code changes to progress from the first commit to production deployment. This metric reflects the team's ability to respond to business needs effectively and deliver updates quickly.

Why is it important?
    A shorter Lead Time for Changes indicates that the team can address business requirements, bug fixes, customer feedback, and market changes more swiftly.

How to Measure?
    To calculate this metric, determine the median time from code commit to production deployment. Using the median minimizes the influence of outliers, providing a clearer view of typical lead times.
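
As a minimal sketch (assuming you can pair each change's first commit time with its production deployment time; the data below is invented), the median can be computed like this in C#:

    using System;
    using System.Linq;

    // (first commit time, production deployment time) pairs for recent changes.
    var changes = new[]
    {
        (Commit: new DateTime(2024, 10, 1, 9, 0, 0),  Deploy: new DateTime(2024, 10, 1, 15, 0, 0)),
        (Commit: new DateTime(2024, 10, 2, 10, 0, 0), Deploy: new DateTime(2024, 10, 4, 11, 0, 0)),
        (Commit: new DateTime(2024, 10, 7, 8, 0, 0),  Deploy: new DateTime(2024, 10, 7, 9, 30, 0)),
    };

    // Sort the lead times and take the median to damp the effect of outliers.
    var hours = changes.Select(c => (c.Deploy - c.Commit).TotalHours).OrderBy(h => h).ToArray();
    var median = hours.Length % 2 == 1
        ? hours[hours.Length / 2]
        : (hours[hours.Length / 2 - 1] + hours[hours.Length / 2]) / 2.0;

    Console.WriteLine($"Median lead time: {median:F1} hours");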

Performance Levels

Elite: Less than one hour. This level indicates a highly optimized process with efficient pipelines and minimal bottlenecks.

High: Between one day and one week. Teams at this level balance speed with quality, demonstrating a well-managed release process.

Medium: Between one week and one month. This level reflects a stable process with potential for improvement, such as reducing manual steps or dependencies that may be causing delays.

Low: More than one month. Higher lead times at this level highlight areas for improvement, including opportunities for enhanced automation, dependency reduction, or more streamlined testing.


#3. Mean Time to Recovery (MTTR)

    Mean Time to Recovery (MTTR) measures the average time required to restore service after an incident is detected or reported. This metric indicates the team's effectiveness in managing and resolving issues promptly.

Why is it important?
    A shorter MTTR reflects a team’s readiness and capability to manage unexpected failures, minimizing downtime and its impact on users and the business. Lower recovery times demonstrate a robust incident management process and enhance reliability.

How to Calculate?
    To calculate MTTR, find the average time taken from the identification of a problem to the resolution or fix. This average provides a realistic view of the typical recovery time, helping teams identify and reduce bottlenecks in their incident response process.
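
A minimal sketch of the calculation in C#, assuming you record detection and resolution times per incident (the data below is invented):

    using System;
    using System.Linq;

    // (detected, resolved) timestamps for each production incident.
    var incidents = new[]
    {
        (Detected: new DateTime(2024, 10, 3, 10, 0, 0),  Resolved: new DateTime(2024, 10, 3, 11, 30, 0)),
        (Detected: new DateTime(2024, 10, 12, 22, 0, 0), Resolved: new DateTime(2024, 10, 13, 1, 0, 0)),
    };

    // MTTR = average of (resolution time - detection time).
    var mttrHours = incidents.Average(i => (i.Resolved - i.Detected).TotalHours);
    Console.WriteLine($"MTTR: {mttrHours:F1} hours");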

Performance Levels

Elite: Less than one hour. This level represents a highly effective and streamlined incident response process, ensuring rapid recovery.

High: Less than one day. Teams at this level demonstrate solid recovery processes, restoring service within a reasonable timeframe.

Medium: Less than one week. This level indicates a moderate response time, but there may be opportunities to improve processes and reduce dependencies causing delays.

Low: More than one week. Extended recovery times suggest a need for process improvements, such as refining detection, response procedures, or streamlining incident management workflows.


#4. Change Failure Rate (CFR)

    Change Failure Rate (CFR) measures the percentage of code changes that result in service degradation or require remediation. This metric indicates the stability and reliability of the deployment process.

Why is it important?
    A lower Change Failure Rate reflects a stable and mature delivery process with minimal disruptions caused by new deployments. CFR helps identify areas for improvement, guiding teams to enhance testing, deployment practices, and quality assurance.

How to Calculate?
    To calculate CFR, divide the number of failed changes by the total number of changes and multiply by 100. This percentage shows the likelihood of failures occurring with each deployment, highlighting process quality.
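
A minimal sketch of the formula in C# (the counts below are hypothetical):

    using System;

    int totalChanges = 40;    // total production deployments in the period
    int failedChanges = 5;    // deployments that degraded service or needed remediation

    double cfr = (double)failedChanges / totalChanges * 100;
    Console.WriteLine($"Change Failure Rate: {cfr:F1}%");   // 12.5% falls in the Elite band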

Performance Levels

Elite: 0–15%. This level demonstrates an exceptionally stable process, with most changes deployed smoothly and minimal need for rollbacks or fixes.

High: 16–30%. Teams at this level maintain a strong process, though occasional issues may arise that need attention.

Medium: 31–45%. This level suggests that while deployments are generally stable, there is room for process and quality improvements to reduce failures.

Low: 46% and above. High failure rates indicate the need for significant improvements in testing, quality control, or deployment practices to enhance stability.

Summary:

By understanding and optimizing Deployment Frequency (DF), teams can better align their practices with DevOps principles, promoting agility, rapid feedback, and faster delivery of value.

Lead Time for Changes (LT) is crucial for aligning development speed with business agility, helping teams deliver high-impact updates faster.

Mean Time to Recovery (MTTR) is a key metric for measuring reliability and responsiveness, helping teams focus on improving resilience and reducing downtime.

Change Failure Rate (CFR) is a key indicator of delivery reliability and supports continuous improvement efforts by reducing disruptions caused by deployments.

Saturday, July 13, 2024

Apache HOP - Hello World !

 


In this blog post, we will see how we can build a very simple application: Hello World!



Apache HOP - Up and running

 

In this blog post, let's see how we can download and run Apache HOP (2.9) in a Windows 11 environment.


1. Navigate to the Apache HOP website: https://hop.apache.org/

2. Click the Download menu option to go to the Download page: https://hop.apache.org/download/

3. Click "apache-hop-client-2.9.0.zip" to download the Apache HOP client application.





4. Extract the ZIP file

5. Click on the hop-gui.bat file



6. You can see the following screen while starting Apache HOP.


Voila, here is the welcome screen of the Apache HOP GUI editor.


Happy Hopping !!!




Apache HOP - Core keywords


Here are the core keywords used in Apache HOP.

1. Pipelines: A pipeline is a set of transforms that process data. It can read data from a source, process it, and write it to a target.

2. Hops: A hop connects two transforms or actions, defining how data or execution flows between them.

3. Workflows: A workflow has a starting point and an end point, and mainly consists of a set of pipelines and actions to execute.

4. Connectors: Connectors are bridges that connect external systems, such as databases and file systems, with Apache HOP.

5. Plugins: Plugins are prebuilt components that extend Apache HOP's capabilities.


Happy Hopping !



Friday, July 12, 2024

Apache HOP - Quick Introduction


What is Apache HOP ?

Simply put, Apache HOP is a data engineering and orchestration platform. HOP stands for Hop Orchestration Platform.

Apache HOP allows users to visually create data pipelines and workflows.

Why do we need Apache HOP?

Apache HOP helps users automate data extraction from different data sources, perform data cleaning and transformations, and load the results into other data stores.


Apache HOP vs Apache Airflow

| Feature | Apache Hop | Apache Airflow |
|---|---|---|
| Focus | Data Integration & Orchestration | Workflow Orchestration & Scheduling |
| Strengths | User-friendly visual interface; pre-built transformations; integrates with various data sources; real-time data processing | Flexible scheduling & dependency management; supports diverse platforms (local, cloud); integrates with various data processing tools; strong community & plugin ecosystem |
| Weaknesses | Limited complex workflow scheduling | Steeper learning curve (code-centric); requires more technical expertise |
| Platform | Windows, macOS and Linux | macOS and Linux |
| Language | Built on Java | Built on Python |



Apache HOP vs Apache NiFi

| Feature | Apache Hop | Apache NiFi |
|---|---|---|
| Focus | Data Integration & Orchestration | Data Ingestion & Stream Processing |
| Strengths | User-friendly visual interface for building data pipelines; pre-built transformations for data manipulation; integrates with various data sources; handles large data volumes (with powerful engines) | Highly scalable for real-time data processing; wide range of processors for data manipulation; focuses on data flow & provenance; distributed and fault-tolerant architecture |
| Weaknesses | Less emphasis on streaming data compared to NiFi; limited built-in scheduling capabilities (requires Airflow) | Steeper learning curve for complex configurations; requires more technical expertise for managing data flow |
| Platform | Windows, macOS and Linux | Windows, macOS and Linux |
| Language | Built on Java | Built on Java |



Apache HOP vs Microsoft SSIS

| Feature | Apache Hop | Microsoft SSIS |
|---|---|---|
| Type | Open-source data integration and orchestration platform | Proprietary data integration tool included with Microsoft SQL Server |
| Cost | Free and open-source | Paid (bundled with SQL Server licenses) |
| Deployment | On-premises or cloud (with cloud providers offering Hop environments) | On-premises only (requires a Windows Server) |
| User Interface | Visual interface with drag-and-drop functionality | Visual interface with a steeper learning curve |
| Data Sources / Destinations | Integrates with a wide variety of data sources and destinations | Primarily designed for integration with Microsoft products and databases |
| Real-time Processing | Supports real-time data processing with proper configuration | Primarily focused on batch data processing (ETL) |
| Scalability | Scales horizontally by adding more nodes | Scales vertically by adding more resources to a single server |
| Community & Support | Large and active open-source community with extensive online resources | Vendor support available through Microsoft licensing agreements |


Apache HOP vs Azure Data Factory (ADF)

| Feature | Apache Hop | Azure Data Factory (ADF) |
|---|---|---|
| Type | Open-source data integration and orchestration platform | Cloud-based, managed service from Microsoft Azure |
| Cost | Free and open-source | Paid service with various pricing tiers based on usage |
| Deployment | On-premises or cloud (with cloud providers offering Hop environments) | Cloud-based only (runs on Microsoft Azure) |
| User Interface | Visual interface with drag-and-drop functionality | Web-based visual interface with some code editing options |
| Data Sources / Destinations | Integrates with a wide variety of data sources and destinations | Primarily designed for Azure services and other Microsoft products, but also supports various cloud and on-premises data sources |
| Real-time Processing | Supports real-time data processing with proper configuration | Supports real-time and batch data processing |
| Scalability | Scales horizontally by adding more nodes | Managed service that scales automatically based on your needs |
| Community & Support | Large and active open-source community with extensive online resources | Vendor support available through Microsoft Azure support channels |





Tuesday, January 24, 2023

Endpoint Execution filters in Minimal API (.NET 7)

Endpoint Execution filters are a new Minimal API feature in .NET 7 that allows developers to perform validations before the actual API request is executed. Developers can validate the input parameters (from the body, query string, or URL template) and validate user authentication information as well.

These validations will make the system more stable and secure as the input parameters are validated before executing the actual code. 

There are three types of Endpoint Execution filters available.

  • Before Endpoint Execution Filter
  • After Endpoint Execution Filter
  • Short Circuit Execution Filter

To implement Endpoint Execution filters, you need to implement the IEndpointFilter interface and its InvokeAsync(EndpointFilterInvocationContext context, EndpointFilterDelegate next) method with your custom logic. You can access HttpContext information through the EndpointFilterInvocationContext parameter.
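
As a sketch (the class name and endpoint are illustrative, not from the original sample project), a filter and its registration look like this:

    // A minimal filter skeleton implementing IEndpointFilter.
    public class LoggingFilter : IEndpointFilter
    {
        public async ValueTask<object?> InvokeAsync(
            EndpointFilterInvocationContext context, EndpointFilterDelegate next)
        {
            // Runs before the endpoint; context exposes HttpContext and the arguments.
            Console.WriteLine($"Handling {context.HttpContext.Request.Path}");

            var result = await next(context);   // invoke the endpoint (or the next filter)

            // Runs after the endpoint.
            return result;
        }
    }

    // Attaching the filter to an endpoint:
    app.MapGet("/hello", () => "Hello!").AddEndpointFilter<LoggingFilter>();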

Before Endpoint Execution Filter:
As the name suggests, the input parameters or the user authentication information are validated first, before the actual endpoint code is executed.
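
For example, a before-style filter can validate a route parameter and only call the endpoint when the input is valid (a sketch; the route and message are illustrative):

    app.MapGet("/hello/{name}", (string name) => $"Hello {name}!")
       .AddEndpointFilter(async (context, next) =>
       {
           // Validate the input before the endpoint runs.
           var name = context.GetArgument<string>(0);
           if (string.IsNullOrWhiteSpace(name))
               return Results.BadRequest("name is required");

           return await next(context);
       });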


After Endpoint Execution Filter:
In this type of filter, the actual endpoint code is executed first, and the result of that execution is then used for further processing or transformation.
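
A sketch of an after-style filter that post-processes the endpoint's result (the endpoint is illustrative):

    app.MapGet("/greet", () => "hello from minimal api")
       .AddEndpointFilter(async (context, next) =>
       {
           // Run the endpoint first, then transform its result.
           var result = await next(context);
           return result is string s ? s.ToUpperInvariant() : result;
       });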




Short-Circuit Execution Filter:
In this type of filter, the actual endpoint code is not executed; instead, different logic is performed and the response is sent directly to the user.
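
A sketch of a short-circuit filter; note that next is never called, so the endpoint body never runs (the endpoint is illustrative):

    app.MapGet("/legacy", () => "you should never see this")
       .AddEndpointFilter((context, next) =>
           // Return a response directly without invoking the endpoint.
           ValueTask.FromResult<object?>(Results.StatusCode(StatusCodes.Status410Gone)));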


The sample project can be downloaded from GitHub.

Happy Coding !!





Sunday, January 22, 2023

Getting started with Minimal API - Create your project with dotnet CLI

The blog post below helps you create a new Minimal API project using Visual Studio 2022 in a step-by-step manner.

https://www.codingfreaks.net/2023/01/getting-started-with-minimal-api-first.html


You can use a very simple dotnet CLI command to create a plain Minimal API project.

Syntax:

                dotnet new web


C:\Users\Murali\Documents\temp>dotnet new web -n HelloWorld


The above command creates a new Minimal API project with the name HelloWorld.

When you open the project in Visual Studio, it appears like the below.
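
For reference, the Program.cs generated by the web template is only a few lines (shown from memory; the exact content can vary slightly between SDK versions):

    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    app.MapGet("/", () => "Hello World!");

    app.Run();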



Happy Coding !!

Getting started with Minimal API - The first project


This blog post explains how to create your first Minimal API project using Visual Studio 2022 with .NET 7.

Step 1: Launch VS 2022 or higher 

Step 2: Choose ASP.NET Core Web API



Step 3: Click Next and enter the Project name, Location, and Solution name



 

Step 4: Choose the Framework version as .NET 7 (the minimum requirement is .NET 6).

Authentication Type: None

Configure for HTTPS: Yes. Check the checkbox.

Enable Docker: No. Don't check the checkbox.

Below are the important settings to enable Minimal API:

Use Controllers (uncheck to use Minimal API): No. Don't check the checkbox.

Enable OpenAPI support: Yes. Check the checkbox.

Do not use top-level statements: Yes. Check the checkbox.

 



Step 5: Once the application is successfully created, it appears like the below.
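
Because "Use Controllers" was left unchecked, the template generates a single Program.cs with the sample endpoint mapped directly, and because "Do not use top-level statements" was checked, the code is wrapped in Program.Main. A trimmed sketch of what it contains (from memory; your generated file may differ slightly):

    public class Program
    {
        public static void Main(string[] args)
        {
            var builder = WebApplication.CreateBuilder(args);

            // Swagger / OpenAPI services (from the "Enable OpenAPI support" option).
            builder.Services.AddEndpointsApiExplorer();
            builder.Services.AddSwaggerGen();

            var app = builder.Build();

            if (app.Environment.IsDevelopment())
            {
                app.UseSwagger();
                app.UseSwaggerUI();
            }

            app.UseHttpsRedirection();

            // Sample Minimal API endpoint generated by the template.
            app.MapGet("/weatherforecast", () =>
                Enumerable.Range(1, 5).Select(index => new
                {
                    Date = DateTime.Now.AddDays(index),
                    TemperatureC = Random.Shared.Next(-20, 55)
                }));

            app.Run();
        }
    }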


 

 

Step 6: Press F5 to run the application.

The Swagger UI page launches, and you can view the WeatherForecast API as below.



Step 7: Click on the /weatherforecast API name, click Try it out, then Execute, and you can see the results as below.

 


Happy Coding !!




New features of Minimal APIs in .NET 7

Here are the key new features released in .NET 7 for Minimal APIs:

  • Endpoint filters (IEndpointFilter) to run validation and other logic before and after an endpoint executes
  • Route groups (MapGroup) to organize endpoints under a common URL prefix and share configuration
  • Typed results (TypedResults) for strongly typed responses and richer OpenAPI metadata
  • Support for file uploads using IFormFile and IFormFileCollection

Tuesday, January 3, 2023

Create a new row in an empty collection in Power Apps

 

Here is the syntax for creating a new row in an existing, empty collection:

Patch(<CollectionName>, Defaults(<CollectionName>), {})

Here is an example of creating a new row in the existing, empty Users collection:

Patch(Users, Defaults(Users), {})

Now the Users collection will have one row with default values, and this updated Users collection can be used to bind a Gallery or Data table control in Power Apps.


Happy Coding !