Monday, December 16, 2019

How to use the Azure DevOps REST API for GET, PATCH, and POST in ADO using C#?

In this blog, I am going to discuss how to call GET, PATCH, and POST on the ADO REST APIs using C#.

Azure DevOps provides REST APIs as endpoints to fetch data (GET), partially update (PATCH), or create (POST) items in ADO. These APIs can be called from different programming languages. Below is how we can use them in C#.

GET:

Below is a generic function which can be used as a "GET" function in C#:

public string getAdoapi(string pat, string link)
{
    string Url = link;
    string responseBody = "";
    using (HttpClient client = new HttpClient())
    {
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
        // Basic auth: empty user name, PAT as the password
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", Convert.ToBase64String(
            Encoding.ASCII.GetBytes(string.Format("{0}:{1}", "", pat))));
        using (HttpResponseMessage response = client.GetAsync(Url).Result)
        {
            response.EnsureSuccessStatusCode();
            responseBody = response.Content.ReadAsStringAsync().Result;
        }
    }
    return responseBody;
}
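For example, the JSON string returned by getAdoapi can be parsed with Newtonsoft.Json (a sketch only - it assumes the Newtonsoft.Json package is referenced, with using Newtonsoft.Json.Linq;, and uses the projects API as an illustrative URL):

```csharp
// Illustrative: list the project names from the GET response
string json = getAdoapi(pat, "https://dev.azure.com/myorg/_apis/projects?api-version=5.1");
JObject parsed = JObject.Parse(json);
foreach (var project in parsed["value"])
{
    Console.WriteLine((string)project["name"]);
}
```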


POST:

Similarly, for POST, apart from the API URL and PAT, the function requires postJson, i.e. the JSON that contains the data to be posted over to ADO.

public string postAdoapi(string pat, string link, string postJson)
{
    string Url = link;
    string responseBody = "";
    using (HttpClient client = new HttpClient())
    {
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
        // Basic auth: empty user name, PAT as the password
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", Convert.ToBase64String(
            Encoding.ASCII.GetBytes(string.Format("{0}:{1}", "", pat))));
        var method = new HttpMethod("POST");
        var request = new HttpRequestMessage(method, Url)
        {
            Content = new StringContent(postJson, Encoding.UTF8, "application/json")
        };
        using (HttpResponseMessage response = client.SendAsync(request).Result)
        {
            response.EnsureSuccessStatusCode();
            responseBody = response.Content.ReadAsStringAsync().Result;
        }
    }
    return responseBody;
}

Here is how you can build the JSON in C# (using Newtonsoft's JsonConvert):

public static string postmyJSON(string xyz)
{
    return JsonConvert.SerializeObject(new
    {
        activitytitle = "Notification from AzureDevOps",
        activitySubtitle = "BuildQueued",
        impactedarea = "http://dev.azure.com/myorg",
        description = xyz
    });
}

PATCH:

As with the POST API, a patchJson needs to be passed to the function patchAdoapi. You can use the same kind of JSON builder shown for POST (it purely depends on what kind of JSON the target API expects) and call the patch function similarly to postAdoapi, e.g. patchAdoapi(pat, link, patchmyJSON(xyz)).

public static void patchAdoapi(string pat, string link, string patchJson)
{
    string responseBody = "";
    string Url = link;
    using (HttpClient client = new HttpClient())
    {
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json-patch+json"));
        // Basic auth: empty user name, PAT as the password
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", Convert.ToBase64String(
            Encoding.ASCII.GetBytes(string.Format("{0}:{1}", "", pat))));
        var method = new HttpMethod("PATCH");
        var request = new HttpRequestMessage(method, Url)
        {
            // JSON Patch bodies are arrays, so wrap the operation in [ ]
            Content = new StringContent("[" + patchJson + "]", Encoding.UTF8, "application/json-patch+json")
        };
        using (HttpResponseMessage response = client.SendAsync(request).Result)
        {
            response.EnsureSuccessStatusCode();
            responseBody = response.Content.ReadAsStringAsync().Result;
        }
    }
}

PATCH JSON : 

public static string patchmyJSON(string xyz)
{
    // PATCH APIs that use application/json-patch+json expect JSON Patch
    // operations (op/path/value); patchAdoapi above adds the surrounding [ ].
    // The field path here is illustrative.
    return JsonConvert.SerializeObject(new
    {
        op = "add",
        path = "/fields/System.Title",
        value = xyz
    });
}

Now you just need to call these generic functions as required:

Here, link refers to the API URL in all the functions below. For example:

objGetApi is an object of the class I created.

string listofProjects = objGetApi.getAdoapi(pat, string.Format(
    @"https://dev.azure.com/{0}/_apis/projects?$top=500&api-version=5.1",
    accountName));

For POST:

objPostApi.postAdoapi(pat, string.Format(@"https://dev.azure.com/{0}/{1}/_apis/wit/workitems/${2}?api-version=5.1", accountName, projectName, type), postJson);

For PATCH:

objPatchApi.patchAdoapi(pat, string.Format(@"https://vsaex.dev.azure.com/{0}/_apis/groupentitlements/{1}?api-version=5.1-preview.1", accountName, groupId), jsonProject);

Hope this helps!!





Wednesday, November 27, 2019

How to fetch all projects from Azure DevOps and list them in Excel?

In this blog I am going to explain how you can export the list of all projects of an Azure DevOps organisation to Excel (CSV) for reporting purposes:

The document referred to is: https://docs.microsoft.com/en-us/rest/api/azure/devops/core/projects/list?view=azure-devops-rest-5.1

Use the below PowerShell code and execute it:

#Enter the organisation name
$vstsaccount="Enter Organisation Name"
#Enter the API
$api="https://dev.azure.com/$($vstsaccount)/_apis/projects?`$top=200&api-version=5.1"
#Enter the user id
$user="arun.varriar@orgname.com"
#Enter a PAT which has org level access
$token="Enter org access PAT"

$base64AuthInfo=[Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $user,$token)))
#Invoke the API
$result = Invoke-RestMethod -Uri $api -Method Get -ContentType "application/json" -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} 

#Write the result to excel
$result | select -Expand value | ForEach {
        $_.name = $_.name -join ' '
        $_
    } | Export-Csv C:\MyFolder\list.csv -NoTypeInformation

Here, the filter criterion we have used in the API is $top=200, so it will list only the top 200 projects in ADO (done with the assumption that we have fewer than 200 projects, so all the projects will be listed).

So, if you want to list all the projects without using $top, you need to use the continuation token, which is the heart of paginated API calls. To get the list of all projects using the continuation token, use the script below:

$vstsaccount = ""
$projects = $null
$user = ""
$token = "pat"
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $user, $token)))
$continuationToken = $null
$j = $true
while ($j -eq $true)
{
    $projectname = "https://dev.azure.com/$($vstsaccount)/_apis/projects?continuationToken=$continuationToken&api-version=5.1"
    $result = Invoke-WebRequest -Uri $projectname -Method Get -ContentType "application/json" -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)}
    # The next page's token comes back in a response header
    $continuationToken = $result.Headers.'x-ms-continuationtoken'
    $projectsets = $result.Content | ConvertFrom-Json
    $projects += $projectsets.value.name
    $projectsets | select -Expand value | ForEach {
        $_.name = $_.name -join ' '
        $_
    } | Export-Csv C:\MyFolder\list.csv -NoTypeInformation -Append
    if ($continuationToken -eq $null)
    {
        $j = $false
    }
}


You will have a CSV (openable in Excel) with the list of all projects in the path specified. ENJOY!!!

How can Selenium Grid be established for Protractor solutions?

In this post we are going to look into how we can use Selenium Grid for parallel execution of Protractor tests.

Using Selenium Grid we can achieve parallel test execution across different browsers, OSes, and machines, which in turn saves a lot of time.

The steps to follow are:

1. Download the latest Selenium Standalone Server: https://selenium.dev/downloads/




2. Copy this file to a safe folder and open the command prompt. Change the working folder to the path where the Selenium standalone server exists.

3. Now execute the below command:

java -jar selenium-server-standalone-3.141.59.jar -role hub

Now we can see that the Selenium hub is up and running, so the nodes should connect to this IP address or machine.

Now if you want to check how many machines are connected to this hub, simply browse to the grid console using the hub's IP address: http://192.168.29.223:4444/grid/console .

4. The next step is to establish the connection from node to hub. For this, download the Selenium standalone server and the browser driver to the node machine.





5. Now open the command prompt and change the working folder to the path where both the driver and the Selenium server exist.

Here I am using the same machine as both hub and node, so I will open another instance of the command prompt and change the working folder to the same path C:\SeleniumAuto:

java -Dwebdriver.chrome.driver="driver path" -jar selenium-server-standalone-3.141.59.jar -role node -hub "ip adress of machine"

java -Dwebdriver.chrome.driver="C:\SeleniumAuto\chromedriver_78.exe" -jar selenium-server-standalone-3.141.59.jar -role node -hub http://192.168.29.223:4444/wd/hub

Once the command is run, we can see that the node is registered:


Now, in the hub machine's command prompt, you will be able to see that a node has been registered.

Now if we refresh the browser we can see that a node has been registered:


Steps 4 and 5 can be repeated for any number of nodes.

Here I have used random ports to register the node. We can specify the ports for the nodes and the hub if there are any security constraints. For example:

1.Node:

java -Dwebdriver.chrome.driver="C:\SeleniumAuto\chromedriver_78.exe" -jar selenium-server-standalone-3.141.59.jar -role node -hub http://192.168.29.223:4444/wd/hub -port 5555

Similarly, add a port for the hub too.

Once all the nodes are registered, your grid is ready, up and running. Now you are good to run your tests.





Tuesday, November 19, 2019

How to upscale and downscale a VMSS using PowerShell?

Today, I am going to discuss how we can increase or decrease the count of machines in a VMSS in Azure. This script can be extremely useful when scaling the VMSS from an Azure DevOps pipeline.
I have used this script to change the number of machines depending on the number of test cases to be executed (for automated test runs).

Below is the script:

$resourceGroup = "Enter the resource group name"
$vmName = "Enter VMSS Name"

$vmss = Get-AzVmss -ResourceGroupName $resourceGroup -VMScaleSetName $vmName
#Sku capacity is number of machines that will run once this script is executed
$vmss.sku.capacity = 5

Update-AzVmss -ResourceGroupName $resourceGroup -Name $vmName -VirtualMachineScaleSet $vmss

While using the above script for automation, you may need to log in to Azure from PowerShell itself using "Connect-AzAccount":

$azureAccountName ="Client Id"
$azurePassword = ConvertTo-SecureString "client_password" -AsPlainText -Force

$psCred = New-Object System.Management.Automation.PSCredential($azureAccountName, $azurePassword)

$tenantId="Enter Tenant id"

Connect-AzAccount -ServicePrincipal -Credential $psCred -Tenant $tenantId

This can be very useful when you need to scale up or down from the Azure DevOps pipeline. The script can also be used effectively in Azure DevOps PowerShell tasks.
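For instance, if you want to size the scale set from the number of test cases, the capacity line can be computed instead of hardcoded. A hedged sketch (the numbers and variable names are illustrative):

```powershell
# Illustrative: one VM per 50 test cases, capped at 10 machines
$testCaseCount = 230
$testsPerMachine = 50
$vmss.sku.capacity = [math]::Min(10, [math]::Ceiling($testCaseCount / $testsPerMachine))
Update-AzVmss -ResourceGroupName $resourceGroup -Name $vmName -VirtualMachineScaleSet $vmss
```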

Thursday, November 14, 2019

How to download the contents of Azure Blob storage using PowerShell?

In this blog, I am going to discuss how we can download the files inside our blob storage using PowerShell. For a basic understanding of blob storage, you can refer to this link: https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction

As a prerequisite, you need the Azure storage tooling installed on your machine (the script below uses the AzureStorage PowerShell cmdlets): https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-windows?view=azure-cli-latest

Now, in PowerShell, run the script below:

#Note the Container name from storage explorer
$container_name = 'packer-test'


#Enter path where you need to download the files from blob
$destination_path = 'C:\Path'

#Copy the connection string of the storage account where blob exists:
#To get the connection string go to Storage account and check for Access Keys

# For more details on connection string, refer: https://docs.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string


$connection_string = 'xxxxxxxxxxxxxxxxxxxxxxxxw==;EndpointSuffix=core.windows.net'

$storage_account = New-AzureStorageContext -ConnectionString $connection_string

# Get the blobs list
$blobs = Get-AzureStorageBlob -Container $container_name -Context $storage_account

# Download just the blob that is required. Here I have mentioned "myfiletodownload.txt"

Get-AzureStorageBlobContent `
    -Container $container_name -Blob "myfiletodownload.txt" -Destination $destination_path `
    -Context $storage_account

Now if you check your destination path, you will find myfiletodownload.txt (the blob content) downloaded.
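If you need every blob in the container rather than a single file, the $blobs list fetched above can be piped straight into the download cmdlet (a sketch reusing the same context and destination variables from the script above):

```powershell
# Download every blob in the container to the destination folder
$blobs | Get-AzureStorageBlobContent -Destination $destination_path -Context $storage_account
```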

Sunday, November 10, 2019

How to clone an Azure DevOps Git repo using the Git command line (GitBash)

In this blog, I am going to tell you how to clone Git repos in Azure DevOps to your local machine using the GitBash command line. As a prerequisite, you need to install GitBash from https://git-scm.com/downloads

Once you install it, open GitBash:



Now change the working folder/path to the one you want to clone the repo into:

Once you change the path, open the Azure DevOps repo. Here my repo name is "DemoClone".

On the left side click on "clone" and copy the clone link:

Now paste the clone link into GitBash using the command: git clone <copied URL>. Once you enter it, you may be asked for either a PAT (https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops&tabs=preview-page)
or a login id and password.
Once completed you will get the message as shown below:

Clone is completed... Now you can work with the cloned files!!!

Now open the folder and we will be able to see that a new folder with the repo name has been created there:

Navigate inside to see the files inside the repo:

Now, in GitBash, change the path to inside the folder created while cloning - "DemoClone":

To see the history, enter the command: git log

Now you are done!!

Wednesday, November 6, 2019

How to use Configuration values in Azure Functions?

Usage of config files is important when we have generic variables, parameters, and environment values to store and call in source code. In Azure Functions too, we can maintain configuration parameters when authoring functions in the cloud. In this blog I am going to show how we can use configuration parameters in an Azure Function in the cloud.

Steps:

1. Create an Azure Function and go to Platform features.
2. Select Configuration from Platform features.
3. Select "Application settings" and click on "New application setting".
4. Enter the configuration parameter name and value. Here the parameter name is "testvalue" and its value is "sample".
5. Now use them in the code:

using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

The magic line:

private static string superSecret = System.Environment.GetEnvironmentVariable("testvalue");

You are good to go now!!!

Similarly, you can access Key Vault values through the configuration settings and use them in source code.
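Putting it together, a minimal HTTP-triggered function that reads the setting might look like this (a sketch only - a portal-style run.csx is assumed, and the function simply echoes the value back):

```csharp
// run.csx - a minimal sketch of an HTTP-triggered function reading the setting
using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

public static IActionResult Run(HttpRequest req, ILogger log)
{
    // Reads the "testvalue" application setting configured above
    string superSecret = System.Environment.GetEnvironmentVariable("testvalue");
    log.LogInformation("testvalue resolved from app settings");
    return new OkObjectResult($"testvalue = {superSecret}");
}
```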

Sunday, November 3, 2019

Creating a test run and updating results using AzureDevOps REST API


Azure DevOps has the capability of associating Test Cases with automated test methods, when using supported test frameworks. The list of supported frameworks can be found here.

However, if you are writing tests using Selenium with Java or any framework that is not supported - but still want to track your results against a Test Suite in ADO - you need a solution that can update the Test Cases / Test Points in ADO based on the actual test results.

One approach is to use the ADO REST APIs to update the results back to ADO once the automated tests have executed independently. This document explains the steps to be performed in order to create a test run, add test points, and update the results using ADO's REST APIs. A generic solution will ideally have a configuration file or some mechanism that maps a test method to a test point in ADO.

Background:

A test point is a unique combination of test case, test suite, configuration, and tester. For example, if a test case exists in two suites or for multiple configurations in the same suite, then they will have different test points. More about test points can be found in this Microsoft page.

The below image represents a typical Test Plan / Test Suite / Test Case / Test Point combination, along with the Test run and outcome:

Below are the steps to be performed:

  1. Identify the test points that need to be executed.
  2. Create a test run with the test points.
  3. Update the outcome.
Sample scenario:

Below we have a Test plan – “DemoAPI” with a suite “APITest” having 2 test cases. The 2 test cases present inside the suites have 2 different test points.


The test points for the test cases need to be determined using API calls first:
Step 1:
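The test points can be fetched with the Test Points - List REST call. A PowerShell sketch (it reuses a $base64AuthInfo auth header built as in the PowerShell posts above; the organization, project, plan and suite ids are placeholders):

```powershell
# Sketch: list the test points in a plan/suite (placeholders throughout)
$org = "myorg"; $project = "myproject"; $planId = 1; $suiteId = 2
$pointsApi = "https://dev.azure.com/$org/$project/_apis/test/Plans/$planId/Suites/$suiteId/points?api-version=5.1"
$points = Invoke-RestMethod -Uri $pointsApi -Method Get -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)}
# Each entry in $points.value carries an id - these are the point ids used in Step 2
$points.value | Select-Object id, @{n='testCase';e={$_.testCase.name}}
```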

Example:

Response:

Once we get the test points, we need to create a test run by adding those points to start execution.

Step 2:

Example:

Body:
{
  "name": "API Demo Run",
  "plan": {
    "id": "1"
  },
  "pointIds": [
    1,
    2
  ]
}
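The same call can be made from PowerShell. A sketch (again assuming the $org, $project and $base64AuthInfo variables from Step 1; this targets the Runs - Create endpoint):

```powershell
# Sketch: create a test run with the point ids discovered in Step 1
$runsApi = "https://dev.azure.com/$org/$project/_apis/test/runs?api-version=5.1"
$body = @{ name = "API Demo Run"; plan = @{ id = "1" }; pointIds = @(1, 2) } | ConvertTo-Json
$run = Invoke-RestMethod -Uri $runsApi -Method Post -ContentType "application/json" `
    -Body $body -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)}
$run.id   # the id of the newly created test run
```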
Response:

We will get a response with the Test Run Id (that just got created) and its details. Now, if we go to “Runs” in ADO we should be able to see that a new test run has been created with the Run Id in the response, 12 in this case and Run name “API Demo Run”.


If we select that test run and see the test results, then we can see the same 2 test points in unspecified states.
Now, the test points for which execution has started will have corresponding result ids generated within the test run, starting at 100000 and incrementing by 1. The order of the generated ids depends on the order of the test points provided in the body of the POST. In this example the order of points was 1 and 2, so the result ids will be 100000 and 100001 respectively.

Step 3:

Now we need to post an outcome and state against the result ids (100000, 100001) generated in the test run using the API.


Example:


Body:
[
  {
    "id": 100000,
    "state": "Completed",
    "outcome": "Failed"
  },
  {
    "id": 100001,
    "state": "Completed",
    "outcome": "Passed"
  }
]
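In PowerShell this is a PATCH against the run's results. A sketch (assuming the $org, $project, $base64AuthInfo and $run variables from the earlier steps; this targets the Test Results - Update endpoint):

```powershell
# Sketch: mark the two results inside the run as completed
$resultsApi = "https://dev.azure.com/$org/$project/_apis/test/Runs/$($run.id)/results?api-version=5.1"
$results = ConvertTo-Json @(
    @{ id = 100000; state = "Completed"; outcome = "Failed" },
    @{ id = 100001; state = "Completed"; outcome = "Passed" }
)
Invoke-RestMethod -Uri $resultsApi -Method Patch -ContentType "application/json" `
    -Body $results -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)}
```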
Response:

Refresh the test run page in ADO, and we can see that the state of the test run has changed from "In Progress" to "Needs Investigation":
In ADO, "Needs Investigation" appears because there are failed test cases inside the test run.
To see the results, navigate inside the test run:

Now if we navigate to the Test plans page, we can see that test execution result has been updated:

Common Pitfall
Creating too many test runs. Do not follow a strategy that creates a test run per test point - this would result in too many test runs being created. Instead, create a single test run and add all the points you plan to execute to it.

