Wednesday, 28 September 2016

Updating web.config for Azure Releases via Team Services Releases

Problem

I want to be able to modify various parts of the web.config file of an Azure App Service when I release to certain live environments.

Azure App Service already provides features such as deployment slots, which let you configure app settings and connection strings specific to each slot.

If you want to modify other settings, Team Services also has a specific type of task that can tokenize and transform a web.config.

However, I did not feel this was an elegant or succinct solution. Instead I wanted a task that would ensure that my stage/live and any other environments have the correct settings.

To do this I used a PowerShell script, run via a PowerShell task in Team Services.

Solution

I use a PowerShell script which runs as part of a live release in Team Services. The script switches slots (effectively putting the code live) and then ensures the stage and live web.config files have the correct settings.

After that I run an integration test that ensures my web.config transformations are correct.
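
As a rough illustration, such a check can be scripted in PowerShell too (a minimal sketch, not my actual test project; the expected value mirrors the 'val2' placeholder used in the script below):

# Read the transformed web.config left in the build temp directory
# (re-download it via FTP first for a true end-to-end check)
$config = [xml](Get-Content "$env:Temp\web.config")
$address = $config.configuration.'system.serviceModel'.client.endpoint.address
if ($address -notlike '*val2*') {
    throw "web.config transform failed: endpoint is '$address'"
}
Write-Host "web.config transform verified: $address"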

The PowerShell is broken up into four tasks:
  1. Perform a swap of slots
  2. Download the web.config for an environment via FTP to a build client temp directory
  3. Perform the web.config transform
  4. Re-upload the modified file via FTP
Below is the script:

#
# SwapSlots.ps1
#

param (
   [string] $AzureWebsiteName,
   [string] $From,
   [string] $To
)

# Swap the deployment slots (puts the staged code live)
Switch-AzureWebsiteSlot -Name $AzureWebsiteName -Slot1 $From -Slot2 $To -Force -Verbose


function DownloadFile ($sourceuri, $targetpath, $username, $password) {
    # Create an FtpWebRequest object to handle the connection to the FTP server
    $ftprequest = [System.Net.FtpWebRequest]::Create($sourceuri)
    # Set the request's network credentials for an authenticated connection
    $ftprequest.Credentials = New-Object System.Net.NetworkCredential($username, $password)
    $ftprequest.Method = [System.Net.WebRequestMethods+Ftp]::DownloadFile
    $ftprequest.UseBinary = $true
    $ftprequest.KeepAlive = $false
    # Send the FTP request to the server and get a download stream from the response
    $ftpresponse = $ftprequest.GetResponse()
    $responsestream = $ftpresponse.GetResponseStream()
    # Create the target file on the local system and copy the stream into it
    try {
        $targetfile = New-Object IO.FileStream($targetpath, [IO.FileMode]::Create)
        [byte[]]$readbuffer = New-Object byte[] 1024
        # Loop through the download stream and write the data to the target file
        do {
            $readlength = $responsestream.Read($readbuffer, 0, 1024)
            $targetfile.Write($readbuffer, 0, $readlength)
        } while ($readlength -ne 0)
        $targetfile.Close()
    }
    catch {
        $_ | fl * -Force
    }
    finally {
        $responsestream.Close()
        $ftpresponse.Close()
    }
}

function UploadFile ($sourceuri, $targetpath, $username, $password) {
    # Create an FtpWebRequest object to handle the connection to the FTP server
    $ftprequest = [System.Net.FtpWebRequest]::Create($sourceuri)
    # Set the request's network credentials for an authenticated connection
    $ftprequest.Credentials = New-Object System.Net.NetworkCredential($username, $password)
    $ftprequest.Method = [System.Net.WebRequestMethods+Ftp]::UploadFile
    $ftprequest.UseBinary = $true
    $ftprequest.KeepAlive = $false
    # Read the local file as raw bytes for upload
    $filecontent = Get-Content -Encoding Byte $targetpath
    $ftprequest.ContentLength = $filecontent.Length
    # Write the bytes to the request stream
    $requeststream = $ftprequest.GetRequestStream()
    $requeststream.Write($filecontent, 0, $filecontent.Length)
    # Cleanup
    $requeststream.Close()
    $requeststream.Dispose()
}

function ModifyWebConfigEndPoint ($file, $oldValue, $newValue) {
    # Load the web.config as XML and rewrite the WCF client endpoint address
    $doc = (Get-Content $file) -as [Xml]
    $root = $doc.get_DocumentElement()
    $newSet = $root.'system.serviceModel'.client.endpoint.address.Replace($oldValue, $newValue)
    $root.'system.serviceModel'.client.endpoint.address = $newSet
    $doc.Save($file)
}

# Global Settings
$Password = "<FTPPassword>"
$RemoteFile = "ftp://ftpURL/web.config"

# Download and modify config for production (master)
$LocalFile = $env:Temp + "\web.config"
Write-Host "Temp directory is $LocalFile"
$Username = "ftpusername"
#Remove-Item $LocalFile
DownloadFile $RemoteFile $LocalFile $Username $Password
ModifyWebConfigEndPoint $LocalFile 'val1' 'val2'
UploadFile $RemoteFile $LocalFile $Username $Password
Write-Host "Modified web config for production (master)"

# Download and modify config for production (staging)
# Note: point $RemoteFile at the staging slot's own FTP endpoint for this block
$LocalFile = $env:Temp + "\web.config"
Write-Host "Temp directory is $LocalFile"
$Username = "ftpusername"
#Remove-Item $LocalFile
DownloadFile $RemoteFile $LocalFile $Username $Password
ModifyWebConfigEndPoint $LocalFile 'val1' 'val2'
UploadFile $RemoteFile $LocalFile $Username $Password
Write-Host "Modified web config for production (staging)"

It's worth noting that the ModifyWebConfigEndPoint function above is specific to my case (WCF service endpoints), but as the web.config is just XML you can create specific or generic functions to update any part of it.
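
For example, a more generic transform could take an XPath expression. A minimal sketch (the function name, XPath and values here are illustrative, not part of my release):

function ModifyWebConfigValue ($file, $xpath, $newValue) {
    # Load the config, find the node (element or attribute) via XPath and rewrite it
    $doc = New-Object System.Xml.XmlDocument
    $doc.Load($file)
    $node = $doc.SelectSingleNode($xpath)
    if ($node -eq $null) { throw "No node matched '$xpath' in $file" }
    $node.InnerText = $newValue  # InnerText also sets the value of attribute nodes
    $doc.Save($file)
}

# e.g. switch a hypothetical app setting:
# ModifyWebConfigValue $LocalFile "//appSettings/add[@key='Environment']/@value" 'Live'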

Wednesday, 15 October 2014

Managing Microsoft Dynamics CRM Solutions in Source Control and Visual Studio

Background

We now manage our development world using agile, and as such we link work items to specific user stories and features. In our development team we do this via TFS. Whenever we check source code in, we find the relevant work item, which is parented by a user story or feature.

This is fine if you are writing code in a native Visual Studio code-base, but not if you want to link changes made to your Dynamics CRM to specific work items.

Intended Solution

My idea was to export a specific set of changes to CRM via an unmanaged solution, save it unpacked as XML to a location where my source control could pick it up, and then apply it. Once in source control I could then associate the CRM customisation changes with work items.

From source control I could then re-pack the solution into a deployable zip so that I can apply it to my staging environment.

So in summary the steps are:
  1. Get a list of solutions for a particular CRM instance from Visual Studio
  2. Export the solution
  3. Unpack the solution to a location visible to source control (see the sketch after this list)
  4. Check in changes for the solution
  5. The continuous integration server picks up any changes to the solution and re-packs them into a deployable zip
  6. Deploy to the staging environment
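
For reference, the unpack in step 3 is what the SDK's SolutionPackager does; run by hand from PowerShell it looks something like the following (paths and the solution name are illustrative; the extension below automates this):

& "C:\CRM_SDK\SDK\Bin\SolutionPackager.exe" /action:Extract `
    /zipfile:"C:\Temp\MySolution.zip" /folder:"C:\Source\CRM\MySolution"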

Create a Visual Studio Extension that lists solutions for an instance of CRM

Using the Visual Studio SDK and the CRM SDK, I have created a Visual Studio Extension.

As you can see, it populates a list of solutions in CRM based upon the organization name:
CRM Solutions Manager
The user selects a solution from the list and then clicks Download. This uses the CRM SDK and web service to pull down the ZIP as a stream of bytes and then unpacks it to a given location.
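
Under the hood the download amounts to an ExportSolutionRequest against the CRM organization service. A rough PowerShell equivalent using the SDK assemblies (the URL, credentials, paths and solution name are all illustrative):

Add-Type -Path "C:\CRM_SDK\SDK\Bin\Microsoft.Xrm.Sdk.dll"
Add-Type -Path "C:\CRM_SDK\SDK\Bin\Microsoft.Crm.Sdk.Proxy.dll"

# Connect to the organization service (on-premise, Windows authentication)
$creds = New-Object System.ServiceModel.Description.ClientCredentials
$creds.Windows.ClientCredential = (Get-Credential).GetNetworkCredential()
$orgUri = New-Object System.Uri("https://crmserver/YourOrg/XRMServices/2011/Organization.svc")
$proxy = New-Object Microsoft.Xrm.Sdk.Client.OrganizationServiceProxy($orgUri, $null, $creds, $null)

# Export the unmanaged solution as a byte array and save the zip locally
$request = New-Object Microsoft.Crm.Sdk.Messages.ExportSolutionRequest
$request.SolutionName = "MySolution"
$request.Managed = $false
$response = $proxy.Execute($request)
[IO.File]::WriteAllBytes("C:\Temp\MySolution.zip", $response.ExportSolutionFile)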

The location and CRM web service call details are stored in a series of options built into the Visual Studio Extension, as per below:

CRM Solutions Manager Options
Once the download is complete, the solution is unpacked to the given source path where your source control software (ours is TFS) can pick it up; at that point the unpacked XML can be added to source control, as seen below.



As if by magic you now have CRM solutions unpacked, managed within Visual Studio and held in source control. The next step is to link this to your CI or build process.

Re-pack the solution from check in and Build Deploy-able solution zip

Once the files are checked into source control, we have a CI (Continuous Integration) or build server in place to pick up those changes and download them.

Our CI server has CruiseControl.NET installed, along with the Microsoft Dynamics CRM SDK.

As part of your build process you then run a command that packs the solution ready for deployment to your staging or UAT environment.

The command is as follows:

C:\CRM_SDK\SDK\Bin\SolutionPackager.exe /action:Pack /zipfile:<YourLocation>.zip /folder:<YourLocationFolder>

If you like the Visual Studio Extension related to this post, please comment and I will look at providing it.

Saturday, 23 February 2013

Javascript Builds with Require.js

Background

With the advent of complex client-side applications comes the need to build and deploy these applications from your code base. Require.js's build functionality (the r.js optimizer) offers this. I will talk through a typical example of its usage.

Installation

Please make sure you have node.js installed before you begin. The next thing you need to do is install require.js by running the following command:

npm install -g requirejs

Usage

Create a bash script to run the build

My preference is to create a build directory inside your app directory, containing a bash script (.sh) file and an associated app.build.js file that holds the app build options.
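
The layout looks something like this (the script name build.sh is illustrative):

app/
  build/
    build.sh
    app.build.js
  scripts/
    main.js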


The .sh file contains a command that runs the build using r.js, with a reference to your build file, as follows:

r.js -o app/build/app.build.js

Create an app build file

The app build file contains the settings the require.js optimizer needs, plus some other defaults for the CSS build and directory locations, as follows:


({
    appDir: "../",
    baseUrl: "scripts",
    dir: "../../dist",
    mainConfigFile: "../scripts/main.js",
    name: "main",
    optimizeCss: "standard"
})

  • The appDir directive states where the main app directory is; in our case it is one directory up from the file.
  • baseUrl is the path to our scripts, relative to appDir.
  • dir is the output location for the completed build.
  • mainConfigFile is the location of our require.js settings.
  • name is the main entry point of the application.
  • optimizeCss is an extra option defining how to optimize any CSS.

Cleaning Up

As part of the build you can add deletions to your bash script to make sure your distribution (dist) directory is easy to deploy.

This can be done by adding rm commands to the script, as follows:


cd dist
rm -rf build build.txt .bowerrc component.json scripts/views scripts/vendor/backbone-amd scripts/vendor/jquery scripts/vendor/roundabout scripts/vendor/threedubmedia scripts/vendor/underscore-amd/docs scripts/vendor/underscore-amd
rm -rf scripts/vendor/requirejs/dist scripts/vendor/requirejs/docs scripts/vendor/requirejs/tests
rm -rf scripts/vendor/requirejs/.gitignore scripts/vendor/requirejs/component.json scripts/vendor/requirejs/index.html scripts/vendor/requirejs/LICENSE
rm -rf scripts/vendor/requirejs/package.json scripts/vendor/requirejs/README.md scripts/vendor/requirejs/tasks.txt scripts/vendor/requirejs/testBaseUrl.js scripts/vendor/requirejs/updatesubs.sh

Summary

Using require.js is a good way of keeping your JavaScript releases optimized and clean. Have a go and enjoy :-)

Wednesday, 13 February 2013

The Power of Bower

What is it?

Bower is a package manager for web applications. I would liken it to NuGet for JavaScript. It allows you to install lots of different web artefacts and keeps the dependencies in your project clean.

This is especially useful with the rise of modular JavaScript applications built with frameworks such as Knockout.js, Backbone.js or Angular.js. Many of these frameworks depend on utility libraries such as underscore.js, and using Bower to fetch the packages required for your app simplifies your development and build processes.

Example

Installing Bower

Step 1
Install node.js, which ships with npm.

Step 2
Install Bower using npm by typing the following in a command prompt that can access npm:
npm install bower -g
Step 3
Create a bower config file, .bowerrc, in the root of the website application source.

Step 4
Now edit .bowerrc to tell bower where to create the directories for all your dependencies:


{
    "directory" : "app/scripts/vendor"
}

I am putting my files in app/scripts/vendor.

Step 5
The next step is to tell bower which packages are required for the application. This is done in a component.json file. Please see below the format for the file:


{
    "name": "Charles Bikhazi Website",
    "version": "1.0.0",
    "dependencies": {
        "jquery": null,
        "backbone-amd": null,
        "underscore-amd": null,
        "requirejs": null,
        "roundabout": "https://github.com/fredhq/roundabout.git",
        "threedubmedia": "https://github.com/threedubmedia/jquery.threedubmedia.git"
    }
}

As you can see, the key part of this file is the dependencies section. You can specify a short-name here, which bower will look up on GitHub for the source; alternatively, if that is not available, you can specify the GitHub URL directly. The value of each entry lets you specify the version of the package if required, e.g. "jquery": "1.9.1"; specifying null gets the latest version.

Step 6
Now comes the magic :-). If everything is set up correctly, all you need to do is browse to the root of the directory (where your bower config files live) and run:


bower install

This will have the effect of creating a directory for each library and giving you all the relevant source files you need. The beauty of this is that all you need to do is modify your .json config file and re-run the install, and hey presto, you have the latest versions of the libraries you need :-)

Happy Bowering!



Monday, 11 February 2013

The Death of Flash?

Background

Digital work these days normally requires a level of highly interactive, video or animation elements. This means that the word Flash gets bandied about during discussions. I think it's important we distinguish between Flash itself and creating digital offerings that include interaction, animation or video.

History

Over the last 8-10 years or so, Flash technology has enabled us to create web-based animation and interactivity that most other technologies cannot offer. With Flash we can create highly stylised and interactive web pages, banners, games and websites. However, there are a number of problems with Flash which suggest it is not always the right technology to use.

Over the past 2-3 years HTML and its associated technologies (JavaScript and CSS3) have evolved (HTML5) to a point where they can offer most, if not all, of the features that Flash offers.

Current Pros and Cons

Below are some of the pros and cons of each technology:

Flash

Pros

  • Re-scalable vector graphics providing resolution & cross-browser independence
  • Excellent multimedia support & high degree of interactivity

Cons

  • Proprietary technology & high cost of development
  • Breaks web fundamentals, prone to design abuse and security vulnerabilities
  • Limited developer community, resulting in expensive resources

HTML5

Pros

  • Short learning curve for web developers
  • Open standard defined by the W3C & backed by industry heavyweights including Apple, Microsoft, Google and several others
  • Promotes the “write once, run everywhere” paradigm for web development
  • Great advocate for hardware acceleration, providing unmatched power to developers
  • Zero development cost

Cons

  • Still a draft, subject to change
  • Evolving standard, browsers lack full support

Lessons learned and the future

So where does this leave us? I think we need to identify the core issues for most companies, which are cost and the ability to find resource for projects, and align these with the right technology. Both of these issues point to using HTML5 for new projects: the cost of using HTML will keep falling relative to Flash, and the number of HTML5 developers now outstrips Flash developers.

However, this is not to say that we should never use Flash again, as we may need to maintain current Flash websites, and it is possible a client will explicitly ask for or need Flash.

Tuesday, 2 October 2012

Country List for ASP.NET MVC

A common problem in code is creating country lists for forms in web applications. To avoid keeping your own list internally or refreshing the countries all the time, simply use the .NET Framework culture info to generate the list for you :-) as below:


Step 1 - Add a method to your service layer

public IEnumerable<SelectListItem> GetCountries()
{
    List<SelectListItem> countryNames = new List<SelectListItem>();

    // Get the country names from the specific cultures installed in Windows
    foreach (CultureInfo cul in CultureInfo.GetCultures(CultureTypes.SpecificCultures))
    {
        RegionInfo country = new RegionInfo(new CultureInfo(cul.Name, false).LCID);
        countryNames.Add(new SelectListItem() { Text = country.DisplayName, Value = country.DisplayName });
    }

    // De-duplicate by display name and sort alphabetically
    IEnumerable<SelectListItem> nameAdded = countryNames
        .GroupBy(x => x.Text)
        .Select(x => x.FirstOrDefault())
        .OrderBy(x => x.Text);
    return nameAdded;
}


Step 2 - Add a call in your controller to populate a view bag

ViewBag.CountryList = GetCountries();
 

Step 3 - Bind your html drop down list and model in your view

<td>
@Html.DropDownListFor(model => model.Country, (IEnumerable<SelectListItem>)ViewBag.CountryList)
</td>