Why we’re here
If you’ve read my last post you’ll know that I’ve recently moved my blog into the cloud with AWS S3 and CloudFront. I used to host this blog on a Raspberry Pi at home but, since I’m about to move house, I felt it was time to migrate it and avoid the outages that come with the move.
Why Azure DevOps?
Why not? In truth, it’s just something I’ve been playing around with recently for work, and I figured I could learn a bit more by building something I already know about (i.e. my blog). The first thing I noticed is that it has only recently become Azure DevOps; before that it was known as Microsoft VSTS. I found myself frustrated by out-of-date documentation, so I decided to switch off the new YAML build process until the documentation catches up.
According to https://octoverse.github.com/projects.html, Azure DevOps is, at the time of writing, the fastest growing open source project on GitHub. I’m guessing they’re busy updating the documentation as I type.
To switch off the YAML preview feature, go to your avatar in the top right and select “Preview Features”.
This will allow you to switch off YAML pipeline creation.
Creating the project
Click on “Create project” and give your project a name and description. I’ll also be keeping my project private.
When it’s done you’ll be greeted with an overview screen. This seems to be aimed at team workflows, so I won’t be doing much in this area.
Import Git repo
Now we need to get the code into a repo that Azure can read from, so head to the Repos menu. I’ll be importing mine from Bitbucket, which is where I keep my projects these days, mostly because they can be private and I got tired of the work involved in maintaining my own Git server.
Click on Import, then grab the clone URL of your Git repo and paste it into the box. My repo requires authorization, so I’ll check that option and use my credentials for this repo.
Azure will now attempt to clone your repo:
Leave Git on Bitbucket or move to Azure?
It seems as though you can link out to another repo location, such as GitHub or any other Git system, as long as you have credentials set up. To keep things simple, I will just work on the repo in Azure for now; I may change this in the future so I can keep all of my repos in the same place. To do this I will check out the newly created Git repo from Azure and make any changes in that for now.
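Checking out the Azure repo is just an ordinary Git clone. The URL below is a made-up example of the shape Azure Repos uses (organisation/project/_git/repo); copy your real one from the repo’s Clone button:

```shell
# Hypothetical clone URL -- copy the real one from the Clone button in Azure Repos
git clone https://dev.azure.com/my-org/my-blog/_git/my-blog
cd my-blog

# Work as normal, then push changes back to Azure Repos
git add .
git commit -m "Update post"
git push origin master
```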
Building a pipeline
Under pipelines, select create new pipeline and choose where your repo is.
I’ll be selecting the Azure Repos Git as I mentioned above.
Select “Empty job” from the list (at the top).
In the end, I decided to link the pipeline to my personal Bitbucket. This was fairly straightforward: I just changed the Git location in the pipeline setup. Now I can keep working in my regular repo to avoid any confusion.
First, we need Ruby to build Jekyll, so let’s add it by pressing the plus button next to our agent job and selecting Ruby.
Then we need some command line tasks, so find the Command Line task in the list and click Add three times to add three tasks.
Next we will need an “AWS Tools for Microsoft Visual Studio Team Services” task. This last one requires a quick detour to install: click “Get it free” and then click Install on the marketplace page that opens.
Select the organisation and click install.
Go back to your Azure view and refresh it. You should now be able to add a task called “AWS S3 Upload”.
Now we can configure the build tasks.
The Ruby task should be fine as it is, but the rest will need some editing.
Command line tasks
- Command line task 1:
  - Display name: Install Jekyll and bundler
  - Script: gem install jekyll bundler
- Command line task 2:
  - Display name: Install gem dependencies
  - Script: bundle install
- Command line task 3:
  - Display name: Build the site
  - Script: JEKYLL_ENV=production bundle exec jekyll build
I’m using the production environment above because it makes the sitemap use the site’s actual URL rather than localhost, which isn’t very helpful in a sitemap.
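Since the build tasks are just shell commands, you can sanity-check the same steps locally before putting them in a pipeline. This assumes Ruby is already installed and that you run it from the repo root, where the Gemfile lives:

```shell
# Install Jekyll and Bundler themselves
gem install jekyll bundler

# Install the site's gem dependencies from the Gemfile
bundle install

# Build with the production environment so the sitemap uses the
# real site URL instead of localhost; output lands in _site/
JEKYLL_ENV=production bundle exec jekyll build
```

Note that the `JEKYLL_ENV=production` prefix has to come before the command for the shell to treat it as an environment variable.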
AWS Tools for Microsoft Visual Studio Team Services task
This is where your settings will differ from mine; I’ll give examples where possible:
- AWS Credentials: TBC
To create your credentials, click the +New button next to the dropdown. You can reuse previously created credentials here, or create a new set in your AWS account. I won’t go into that here, as there’s plenty of information about it already, and Amazon change their interface often enough that anything I wrote would quickly go out of date.
The fields that I filled in are:
- Connection name
- Access Key ID
- Secret Access Key
This should be enough to complete this section.
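If you prefer the command line, a dedicated deploy user and access key can be created with the AWS CLI along these lines. The user name, policy name, and bucket are all placeholder examples; scope the policy to your own bucket:

```shell
# Write a minimal policy scoped to the blog bucket (placeholder bucket name)
cat > blog-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-blog-bucket",
        "arn:aws:s3:::my-blog-bucket/*"
      ]
    }
  ]
}
EOF

# Create a dedicated IAM user for the pipeline (hypothetical name)
aws iam create-user --user-name blog-deploy

# Attach the policy inline
aws iam put-user-policy --user-name blog-deploy \
  --policy-name blog-s3-upload --policy-document file://blog-s3-policy.json

# Generate the Access Key ID / Secret Access Key pair to paste into Azure
aws iam create-access-key --user-name blog-deploy
```

Using a narrowly scoped user like this means the key you paste into Azure can only touch the blog bucket.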
- Bucket Name:
- Source Folder:
- Filename Patterns:
- Target Folder: should be left blank
- ACL: private
Under Advanced, I will leave Overwrite checked, but I may want to uncheck it in the future to avoid unnecessarily overwriting existing files that haven’t changed.
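For reference, the equivalent upload done by hand with the AWS CLI looks roughly like this. The bucket name is a placeholder, and `sync` naturally skips files that haven’t changed, which is the behaviour unchecking overwrite is aiming at:

```shell
# Upload the generated site, skipping unchanged files (placeholder bucket name)
aws s3 sync _site/ s3://my-blog-bucket/ --acl private
```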
Click the “Save &amp; queue” dropdown and choose Save, then type a comment and click Save.
Now that we have our pipeline created, it’s time to test the whole process.
Go to Pipelines and click Queue. Select the branch you want to build and click Queue.
Click on the build in the list to see the console output and follow the build through.
Hopefully, upon completion, you’ll see a successful result.
A quick check in the S3 bucket shows that the files have been updated.
I changed the branch to build on any change to the release branch of my project.
Now we need to switch on continuous integration: under Triggers, enable continuous integration.
Now whenever I make a change to this branch (i.e. push a code change to the branch), it will be built and pushed out to the S3 bucket.