Getting Hammered Because of DNSimple?

Poor DNSimple is, as of writing, undergoing a massive denial of service attack. I have a number of domains with them and, up until now, I’ve been very happy with them. Now it isn’t fair of me to blame them for my misfortune as I should have put in place a redundant DNS server. I’ve never seen a DNS system go belly up in this fashion before. I also keep the TTL on my DNS records pretty low to mitigate any failures on my hosting provider. This means that when the DNS system fails people’s caches are emptied very quickly.

DNS has been up and down all day but it is so bad now that something had to be done. Obviously I need to have some redundancy anyway, so I set up an account on EasyDNS. I chose them because their logo contains a lightning bolt which is yellow and yellow rhymes with mellow and that reminds me of my co-worker, Lyndsay, who is super calm about everything and never gets upset. It probably doesn’t matter much which DNS provider you use so long as it isn’t Big Bob’s Discount DNS.

I set up a new account there and put in the same DNS information I’d managed to retrieve from DNSimple during one of its brief periods of being up. I had the information written down too, so either way it wouldn’t have been too serious to recreate it. It does suggest, however, that your DNS records are something else you need to back up.

In EasyDNS I set up a new domain.


In the DNS section I set up the same records as I had in my DNSimple account. Finally, I jumped over to my registrar, entered two of the EasyDNS servers as the DNS servers for my domain and left two DNSimple servers in place. This is not the ideal way of setting up multiple DNS servers. However, from what I can tell DNSimple doesn’t support zone transfers or secondary DNS, so the round-robin approach is as good as I’m going to get.


With the new records in place and the registrar changed over everything started working much better. So now I have redundant DNS servers for about $20/year. Great deal.


I Have a Short Memory or So I Wrote Another Book

Last year I wrote a book then I blogged about writing a book. I concluded that post with

I guess watch this spot to see if I end up writing another book.
-Simon Timms

Well, my friendly watchers, it has happened. This time I wrote a book about JavaScript patterns. I feel like a bit of a fraud because I don’t really believe that I know much about either patterns or JavaScript. I did manage to find some way to fill over 250 pages with content. I also feel like this book is much better than the last book, if only because it is physically heavier.


I agreed to write it for a few reasons.

1. I had forgotten how much time it takes to write a book. Seriously, it takes forever. Even when you’re done you’re never done. There are countless revisions and reviews and goodness knows what else to eat up your time. Maybe a review only takes 10 minutes a chapter, but 12 chapters later you’ve used up another 2 hours. I don’t like to do the math around dollars an hour for writing but it is low, like below-minimum-wage low. I met Charlie Russel at the Microsoft MVP Summit earlier this year and had a good chat with him about making a living writing books. He has been in the game for many years and has written more books than I have pairs of socks (even if you relax your constraints and let any two socks be a pair regardless of matching). He told me of a golden age when it was possible to make a good living writing books. Those days are gone and we’re all in a mad dash to the bottom, which is free, and free isn’t maintainable. That’s a topic for another day.

2. I liked the topic. Patterns are good things to know. I would never recommend that you go out of your way to implement patterns but having a knowledge of them will help you solve common problems in a sensible way. It is said there are only so many basic plots for a story and I think patterns are like that too. You start writing a story and while it is unique you’ll find that one of the archetypal plots emerges. You can also never go wrong talking about JavaScript.

3. I figured this book would get more exposure than the last one. My last book was pretty niche. I can’t imagine there are more than 2 or 3 dozen people in the world who would be interested in visualizing social media data to the degree they would buy a book on it. This one, however, should have a much broader reach. I’ve been working on getting my name out there in the hopes that the next time I’m looking for work it is easier.

If you happen to be one of the people interested in JavaScript, and in how to write it building on the patterns we’ve spent 20 years discovering, then go on, buy the book.

This time, however, I’m serious: I’m not writing any more books through a traditional publisher. I’ve already turned down an offer to write what would have been a really interesting one. For my next foray I’m going to publish through LeanPub. They are a nice and funny group of folks whose hands-off approach allows for much more creativity around pricing and even the production of the book. I’m also done with writing books by myself; I need some companionship on the journey.

There will be more books, just watch this space!

ASP.NET 5 Configuration

On twitter yesterday I had a good conversation with Matt Honeycutt about configuration in ASP.NET 5. It started with

Today I’m seeing a lot more questions about how configuration works in ASP.NET vNext (sorry, ASP.NET 5, still getting used to that).

It sounds like there is some clarification needed about how configuration works in ASP.NET 5.

The first thing is that configuration in ASP.NET 5 is completely pluggable. You no longer need to rely on the convoluted Web.config file for all your configuration. All the configuration code is found in the Configuration repository on GitHub. You should start by looking at the Configuration.cs file; this is the container class that holds the configuration for your application. It is basically a box full of strings. How we get things into that box is the interesting part.

In the standard template for a new ASP.NET 5 project you’ll find a class called Startup.cs. Within that class is the configuration code.
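The snippet that was embedded here didn’t survive, but from the beta-era templates it looked roughly like this (namespaces and member names were in flux at the time, so treat it as a sketch rather than gospel):

```csharp
using Microsoft.Framework.ConfigurationModel;

public class Startup
{
    public IConfiguration Configuration { get; private set; }

    public Startup()
    {
        // Read config.json first, then let environment variables override it.
        Configuration = new Configuration()
            .AddJsonFile("config.json")
            .AddEnvironmentVariables();
    }
}
```

The order of the calls matters: sources added later win, which is what makes the environment-variable override trick work.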

In the default configuration we’re reading from a JSON-based configuration file and then overriding it with variables taken from the environment. So if you were developing and wanted to enable an option called SendMailToTestServer then you could simply define that in your environment and it would override the value from the JSON file.

Looking again in the Configuration repository we see that there are a number of other configuration sources such as

  • INI files
  • XML files
  • In-memory

The interface you need to implement to create your own source is simple, and if you just extend BaseConfigurationSource that should get you most of the way there. So if you want to keep your configuration in Zookeeper then all you need to do is implement your own source that can talk to Zookeeper. Certain configuration providers also allow changes in the configuration to be committed back to them.
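As a sketch of that extension point, a hypothetical Zookeeper-backed source might look like this. Only BaseConfigurationSource comes from the Configuration repository; ZookeeperClient and everything else here is invented for illustration:

```csharp
// Hypothetical: a configuration source that reads its values from Zookeeper.
public class ZookeeperConfigurationSource : BaseConfigurationSource
{
    private readonly string _connectionString;

    public ZookeeperConfigurationSource(string connectionString)
    {
        _connectionString = connectionString;
    }

    public override void Load()
    {
        // Copy the key/value pairs out of Zookeeper into the Data
        // dictionary supplied by the base class.
        foreach (var pair in ZookeeperClient.ReadAll(_connectionString))
        {
            Data[pair.Key] = pair.Value;
        }
    }
}
```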

The next point of confusion I’m seeing is related to how environmental variables work. For the most part .net developers think of environmental variables as being like PATH: you set it once and it is globally set for all processes on that machine. People from a more Linuxy/UNIXy background have a whole different interpretation.

Environment variables are simply pieces of information that are inherited by child processes. So when you go set your PATH variables by right clicking on My Computer in Windows (it is still called that, right?) you’re setting a default set of environmental variables that are inherited by all launched processes. You can set them in other ways, though.

Try this: open up two instances of powershell. In the first one type

$env:asp = "rocks"
echo $env:asp

You should see “rocks” echoed back. This sets an environmental variable and then echoes it out. Now let’s see if the other instance of powershell has been polluted by this variable. Type

echo $env:asp

Nothing comes back! This is because the environments, after launch, are separate for each process. Now back to the first window and type

start powershell

This should get you a third powershell window. In this one type

echo $env:asp

Ah, now you can see “rocks” echoed again. That’s because this new powershell inherited its environment from the parent process.

Environments are NOT global. So you should have no issue running as many instances of ASP.NET 5 on your computer as you like without fear of cross polluting them, so long as you don’t set your variables globally.

Why even bother with environmental variables? Because they are a common language spoken by every operating system (maybe not OpenVMS but let’s be realists here). They are also already supported by Azure. If you set up a configuration variable in an Azure Website then it is set in the environment. That’s how you can easily configure a node application or anything else. Finally, it helps eliminate that thing where you accidentally alter and check in a configuration file with settings specific to your computer and break the rest of your team. Instead of altering the default configuration file you could just set up an environment variable, or you could set up a private settings file.
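The code that was embedded here is gone; a sketch of what such a private-settings helper might look like, assuming the beta-era IConfigurationSourceContainer interface:

```csharp
using System.IO;
using Microsoft.Framework.ConfigurationModel;

// Hypothetical sketch: like AddJsonFile, but quietly skipping the file when
// it doesn't exist, so each developer's private settings file never needs
// to be present (or checked in) anywhere else.
public static class PrivateConfigurationExtensions
{
    public static IConfigurationSourceContainer AddPrivateJsonFile(
        this IConfigurationSourceContainer configuration, string path)
    {
        if (File.Exists(path))
        {
            configuration.AddJsonFile(path);
        }
        return configuration;
    }
}
```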

The idea is that AddPrivateJsonFile extends the JSON configuration source and swallows missing-file exceptions, allowing your code to work flawlessly in production.

In a non-cloud production environment I would still tend to use a file based configuration system instead of environmental variables.

The new configuration system is extensible and powerful. It allows for chaining sources and solves a lot of problems in a more elegant fashion than the old XML based transforms. I love it.

Is ASP.NET 5 too much?

I’ve been pretty busy as of late on a number of projects and so I’ve not been paying as much attention as I’d like to the development of ASP.NET vNext or, as it is now called, ASP.NET 5. If you haven’t been watching the development I can tell you it is a very impressive operation. I watched two days’ worth of presentations on it at the MVP Summit and pretty much every part of ASP.NET 5 is brand new.

The project has adopted a lot of ideas from the OWIN project to specify a more general interface for serving web pages built on .net technologies. They’ve also pulled in a huge number of ideas from the node community. Build tools such as grunt and gulp have been integrated into Visual Studio 2015. At the same time the need for Visual Studio has been removed. Coupled with the open sourcing of the .net framework, developing .net applications on OSX or Linux is perfectly possible.

I don’t think it is any secret that the vision of people like Scott Hanselman is that ASP.NET will be a small 8 or 10 meg download that fits in with the culture being taught at coding schools. Frankly this is needed because those schools put a lot of stress on platforms like Ruby, Python or node. They’re pumping out developers at an alarming rate. Dropping the expense of Visual Studio makes the teaching of .net a whole lot more realistic. ASP.NET 5 moves the platform away from proprietary technologies to open source tools and technologies. If you thought it was revolutionary when jQuery was included in Visual Studio out of the box, you ain’t seen nothing yet.

The thought around the summit was that with node mired in the whole Node Forward controversy there was a great opportunity for a platform with real enterprise support like ASP.NET to gain big market share.

Basically ASP.NET 5 is ASP.NET with everything done right. Roslyn is great, the project file structure is clean and clear and even packaging, the bane of our existence, is vastly improved.

But are we moving too fast?

For the average developer we’re introducing at least

  • node
  • npm
  • grunt
  • bower
  • sass/less
  • json project files
  • dependency injection as a first class citizen
  • different directory structure
  • fragmented .net framework

That’s a lot of newish stuff to learn. If you’re a polyglot developer then you’re probably already familiar with many of these things through working in other languages. The average, monolingual developer is going to have a lot of trouble with this.

Folks I’ve talked to at Microsoft have likened this change to the migration from classic ASP to ASP.NET and from WebForms to MVC. I think it is a bigger change than either of those. With each of those transitions there were really only one or two things to learn. Classic ASP to ASP.NET brought a new language on the server (C# or VB.NET) and the integration of WebForms. Honestly, though, you could still pretty much write classic ASP in ASP.NET without too much change. MVC was a bit of a transition too, but you could still write using Response and all the other things with which you had built up comfort in WebForms. ASP.NET 5 is a whole lot of moving parts built on a raft of technologies. To use a Hanselman term, it is a lot of lego bricks. A lot of lego can either make a great model or it can make a mess.

I really feel like we’re heading for a mess in most cases. ASP.NET 5 is great for expert developers but we’re not all expert developers. In fact the vast majority of developers are just average.

So what can be done to bring the power of ASP.NET 5 to the masses and still save them from the mess?

1. Tooling. I’ve seen some sneak peeks at where the tooling is going and the team is great. The WebEssentials team is hard at work fleshing out helper tools for integration into Visual Studio.

2. Training. I run a .net group in Calgary and I can tell you that I’m already planning hackathons on ASP.NET 5 for the summer of 2015. It sure would be great if Microsoft could throw me a couple hundred bucks to buy pizza and such. We provide a lot of training and discussion opportunities and Microsoft does fund us, but this is a once-in-a-decade sort of thing.

3. Document everything, like crazy. There is limited budget inside Microsoft for technical writing. You can see this in the general decline in the quality of documentation as of late. Everybody is on a budget, but good documentation is really what made .net accessible in the first place. Documentation isn’t just a cost center; it drives adoption of your technology. Do it.

4. Target the node people. If ASP.NET can successfully pull developers from node projects onto existing teams then they’ll bring with them all sorts of knowledge about npm and other chunks of the tool chain. Having just one person on the team with that experience will be a boon.

The success of ASP.NET 5 is dependent on how quickly average developers can be brought up to speed. In a world where a discussion of dependency injection gets blank stares I’m, frankly, worried. Many of the developers with whom I talk are pigeonholed into a single language or technology. They will need to become polyglots. It is going to be a heck of a ride. Now, if you’ll excuse me, I have to go learn about something called “gulp”.


Git prompt on OSX

I have a bunch of work to do using git on OSX over the next few months and I figured it was about time I changed my prompt to be git aware. I’m really used to having this on Windows thanks to the excellent posh-git. If you haven’t used it: the prompt shows the branch you’re on, the number of files added, modified and deleted, as well as a color hint about the state of your branch as compared with the upstream (ahead, in sync, behind).


It is wonderful. I wanted it on OSX. There are actually quite a few tutorials that will get you 90% of the way there. I read one by Mike O’Brein but I had some issues with it. For some reason the brew installation on my machine didn’t include git-prompt. It is possible that nobody’s does… clearly a conspiracy. Anyway I found a copy over at the git repository on github. I put it into my home directory and sourced it in my .profile.

if [ -f $(brew --prefix)/etc/bash_completion ]; then
    . $(brew --prefix)/etc/bash_completion
fi
source ~/.git-prompt
PS1="\[\033[32m\]\@ \[\033[33m\]\w\$(__git_ps1 \" (\[\033[36m\]%s\[\033[33m\])\") \n\$\[\033[0m\] "

This got me some of the way there.  I had the branch I was on and it was coloured for the relationship to upstream but it was lacking any information on added, removed and modified files.


So I cracked open .git-prompt and got to work. I’ll say that it has been a good 5 years since I’ve done any serious bash scripting and it is way worse than I remember. I would actually have got this done in powershell in half the time and with half the confusion of bash. It doesn’t help that, for some reason, people who write shell scripts feel the need to use single-letter variables. Come on, people, it isn’t a competition about brevity.

I started by creating 3 new variables

local modified="$(git status | grep 'modified:' | wc -l | cut -f 8 -d ' ')"
local deleted="$(git status | grep 'deleted:' | wc -l | cut -f 8 -d ' ')"
local added="$(git ls-files --others --exclude-standard | wc -l | cut -f 8 -d ' ')"

The first two make use of git status. I had a quick twitter chat with Adam Dymitruk who suggested not using git status as it is slow. I did some benchmarking, tried a few other commands and indeed found that it was about twice as expensive to use git status as git diff-files. I ended up replacing these variables with the less readable

local modified="$(git diff-files|cut -d ' ' -f 5|cut -f 1|grep M|wc -l| cut -f 8 -d ' ')"
local deleted="$(git diff-files|cut -d ' ' -f 5|cut -f 1|grep D|wc -l | cut -f 8 -d ' ')"
local added="$(git ls-files --others --exclude-standard | wc -l | cut -f 8 -d ' ')"

Chaining commands is fun!

Once I had those variables in place I changed the gitstring in .git-prompt to read

local gitstring="$c$b${f:+$z$f}$r$p [+$added ~$modified -$deleted]"

See how pleasant and out of place those 3 new variables are?

I also took the liberty of changing the prompt in the .profile to eliminate the new line

PS1="\[\033[32m\]\@ \[\033[33m\]\w\$(__git_ps1 \" (\[\033[36m\]%s\[\033[33m\])\") \$\[\033[0m\] "

My prompt ended up looking like

(screenshot of the finished git-aware prompt)

Beautiful. Wish I’d done this far sooner.


Turns out DNS is kind of important and for some reason mine decided to leave. I went ahead and moved over to using DNSimple instead of my somewhat questionable registrar (only 1186 days until that comes up for renewal). So sorry the blog has been offline; I actually didn’t even notice it.

Experiments with Azure Event Hubs

A couple of weeks ago Microsoft released Azure Event Hubs. These are another variation on service bus, joining queues and topics. Event Hubs are Microsoft’s solution to ingesting a large number of messages from the Internet of Things or from mobile devices or really from anything where you have a lot of devices that produce a lot of messages. They are perfect for sources like sensors that report data every couple of seconds.

There is always a scalability story with Azure services. For instance with table storage there is a partition key; there is a limit to how much data you can read at once from a single partition but you can add many partitions. Thus when you’re designing a solution using table storage you want to avoid having one partition which is particularly hot and instead spread the data out over many partitions. With Event Hubs the scalability mechanism is again partitions.

When sending messages to an event hub you can pick one of n partitions to handle the message. The number of partitions is set at creation time and values seem to be in the 8-32 range, but it is possible to go up to 1024. I’m not sure what real world metric the partition count maps to. At first I was thinking that you might map a partition to a device but with a maximum around 1024 this is clearly not the approach Microsoft had in mind. I could very easily have more than 1024 devices. I understand that you can have more than 1024 partitions but that is a contact-support sort of operation. The messages within a partition are delivered to your consumers in order of receipt.


In-order delivery sounds mildly nifty but it is actually a huge technical accomplishment. In a distributed system doing anything in order is super difficult. Their cheat is that there is only a single consumer for each partition. I should, perhaps, say that there is at most one consumer per partition; each consumer can handle several partitions. However, you can have multiple consumer groups, and each consumer group gets its own copy of the message. So say you were processing alerts from a door-open sensor: you want to send text messages when a door is opened, and you also want to record all open events in a log. You could have two consumers in two groups. Realistically you could probably handle both of these things in a single consumer but let’s play along with keeping our microservices very micro.


A magnetic open closed sensor – this one is an Insteon sensor

Messages sent to the event hub are actually kept around for at least 24 hours and can be configured up to 7 days. The consumers can request messages from any place in the stream history. This means that if you need to replay an event stream because of some failure you’re set. This is very handy should you have a failure that wipes out some in memory cache (not that you should take that as a hint that the architecture I’m using leverages in memory storage).

Until now everything in this article has been discoverable from the rather sparse Event Hub documentation. I had a bunch more questions about the provided EventProcessorHost that needed answering. EventProcessorHost is the provided tool for consuming messages. You can consume messages using your own connectors or via EventHubReceiver, but EventProcessorHost provides some help for dealing with which node is responsible for which partitions. So I did some experiments.

What’s the deal with needing blob storage?

It looks like the EventProcessorHost writes out timestamps and partition information to the blob storage account. Using this information it can tell if a processing node has disappeared, requiring it to spread the lost responsibility over the remaining nodes. I’m not sure what happens in the event of a network partition; it is a bit involved to test. The blob storage is checked every 10 seconds so you could have messages going unprocessed for as long as 20 seconds.

Opening up the blob storage there is a blob for each combination of consumer group and partition. So for my example, with only the $Default group and 16 partitions, there were 16 blobs. Each one contained some variation of
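From memory, each lease blob held a small JSON document roughly along these lines (field names and values are approximate, not a verbatim copy):

```json
{
  "PartitionId": "0",
  "Owner": "consumer-node-1",
  "Token": "8d0e2050-e7d9-4e4d-98da-8f3143f9f1e6",
  "Epoch": 1,
  "Offset": "120"
}
```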


Is processing on a single partition single-threaded?

Yes, it appears to be. This is great, I was worried I’d have to lock each partition so that I didn’t have more than one message being consumed at a time. If that were the case it would sort of invalidate all the work done to ensure in order delivery.

Is processing multiple messages on different partitions on a single consumer multi-threaded?

Yes, you can make use of threads and multiple processors by having one consumer handle several partitions.

If you register a new consumer group does it have access to messages published before it existed?

I have no idea. In theory it should but I haven’t been able to figure out how to create a non-default consumer group. Or, more accurately, I haven’t been able to figure out how to get any messages for the non-default consumer group. I’ve asked around but nothing so far. I’ll update this if I hear back.

Rolling Averages in Redis

I’m working on a system that consumes a bunch of readings from a sensor and I thought how nice it would be if I could get a rolling average into Redis. As I’m going to be consuming quite a few of these pieces of data I’d rather not fetch the current value from Redis, add to it and send it back. I was thinking about just storing the aggregate value and a counter in a hash in Redis and then dividing one by the other when I needed the true value. You can set multiple hash values at the same time with HMSET and you can increment a float value inside a hash using HINCRBYFLOAT, but there is no way to combine the two. I was complaining on twitter that there is no HMINCRBYFLOAT command in Redis when Itamar Haber suggested writing my own in Lua.

I did not know it but apparently you can write your own functions that plug into Redis and can become commands. Nifty! I managed to dig up a quick Lua tutorial that listed the syntax and I got to work. Instead of a HMINCRBYFLOAT function I thought I could just shift the entire rolling average into Redis.
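The embedded script is missing from this copy of the post; here is a reconstruction that matches the description that follows, using the key.count counter convention:

```lua
-- Rolling average as a server-side Redis script (a reconstruction, not the
-- original gist). KEYS[1] is the hash, ARGV[1] the field, ARGV[2] the reading.
local countField = ARGV[1] .. ".count"
local current = tonumber(redis.call('HGET', KEYS[1], ARGV[1])) or 0
local count = tonumber(redis.call('HGET', KEYS[1], countField)) or 0

count = count + 1
-- Weight the old average by the previous count, then fold in the new reading.
local average = ((current * (count - 1)) + tonumber(ARGV[2])) / count

redis.call('HSET', KEYS[1], ARGV[1], average)
redis.call('HSET', KEYS[1], countField, count)
return tostring(average)
```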

This script gets the current value of the field as well as a counter of the number of records that have been entered into this field. By convention I’m calling this counter key.count. I increment the counter and use it to weight the old and new values.

I’m using the excellent StackExchange.Redis client, so to test out my function I created a test that looked like this:
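Reconstructed from memory, the test was something along these lines. The script constant holds the Lua rolling-average script, and the key and field names are my own:

```csharp
using NUnit.Framework;
using StackExchange.Redis;

public class RollingAverageTests
{
    private const string script = "..."; // the Lua rolling-average script

    [Test]
    public void RollingAverageIsComputedInRedis()
    {
        var redis = ConnectionMultiplexer.Connect("localhost");
        var db = redis.GetDatabase();
        db.KeyDelete("sensor:1");

        // Feed in two readings; the rolling average of 10 and 20 should be 15.
        db.ScriptEvaluate(script, new RedisKey[] { "sensor:1" },
            new RedisValue[] { "temperature", 10 });
        var result = db.ScriptEvaluate(script, new RedisKey[] { "sensor:1" },
            new RedisValue[] { "temperature", 20 });

        Assert.AreEqual(15, (double)result);
    }
}
```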

The test passed perfectly even against the Redis hosted on Azure. This script saves me a database trip for every value I take in from the sensors. On an average day this could reduce the number of requests to Redis by a couple of million.

The script is a bit large to transmit to the server each time. However if you take the SHA1 digest of the script and pass that in instead then Redis will use the cached version of the script that matches the given SHA1. You can calculate the SHA1 like so
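A sketch in C# (variable names are mine); Redis keys its script cache by exactly this digest:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Compute the SHA1 digest of the script text. Passing this digest instead of
// the script (EVALSHA) avoids resending the whole script on every call.
string ComputeScriptHash(string script)
{
    using (var sha1 = SHA1.Create())
    {
        byte[] hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(script));
        return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
    }
}
```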

The full test looks like
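Again a reconstruction, this time letting StackExchange.Redis load the script once and then invoking it by hash (key and field names are mine):

```csharp
using NUnit.Framework;
using StackExchange.Redis;

public class RollingAverageByHashTests
{
    private const string script = "..."; // the Lua rolling-average script

    [Test]
    public void RollingAverageUsesTheCachedScript()
    {
        var redis = ConnectionMultiplexer.Connect("localhost");
        var db = redis.GetDatabase();
        var server = redis.GetServer("localhost", 6379);
        db.KeyDelete("sensor:1");

        // SCRIPT LOAD sends the script once and returns its SHA1 digest;
        // subsequent calls transmit only the 20-byte hash.
        byte[] hash = server.ScriptLoad(script);

        db.ScriptEvaluate(hash, new RedisKey[] { "sensor:1" },
            new RedisValue[] { "temperature", 10 });
        var result = db.ScriptEvaluate(hash, new RedisKey[] { "sensor:1" },
            new RedisValue[] { "temperature", 20 });

        Assert.AreEqual(15, (double)result);
    }
}
```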

Getting ASP.NET vNext Running on OSX

I’m giving a talk this week at the local .net user group about ASP.NET vNext. I thought I would try to get it running on my Mac because that is a pretty nifty demo.

1. If you don’t have mono installed at all then go grab the latest binaries. You need to have a functioning mono installation to build mono. Weird, huh?

2. Install mono from git. I set my prefix in the autogen step to be in the same directory as my current version of mono:

git clone https://github.com/mono/mono.git
cd mono
./autogen.sh --prefix=/Library/Frameworks/Mono.framework/Versions/3.6.1
make
sudo make install

Now when I first did this I had all sorts of weird compilation problems. I messed around with it for a while but without much success. Google was no help so in a last ditch effort I pulled the latest and everything started to work again. So I guess the moral is that the cutting edge sometimes fails to build. On the other hand it would be good if the mono team had a CI server which would spot this stuff before it hit dumb end users like me.

Update: Mono 3.6 has been released which should fix most of the issues people were having with vNext on OSX. You don’t need to build from source anymore. The updated packages should be in brew in the next little while. 

3. Install homebrew if you don’t already have it. You can find instructions on the Homebrew site.

4. Use brew to install k. Why k? I don’t know but it is a prefix which is used all over the vNext stuff.

brew tap aspnet/k
brew install kvm

This will set up kvm which is the version manager.

5. Use kvm to install a runtime. The wiki for vNext suggests this runtime but it is really old.

kvm install 0.1-alpha-build-0446

I’ve been using the default latest and it seems to be more or less okay.

6. Pull down the home depot, the aspnet Home repo, from GitHub. This repo is the meeting point for the various aspnet projects and the wiki there is quite helpful.

7. Jump into the ConsoleApp directory and run

k run

This will compile the code and execute it. It will be compiled with Roslyn which is cool enough to make me happy. There is very little printed by default but you can change that by setting an environmental variable

export KRE_TRACE=1

I did run into an issue running the sample web application and sample MVC applications from the home repo.

System.TypeInitializationException: An exception was thrown by the type initializer for HttpApi ---> System.DllNotFoundException: httpapi.dll

I chatted with some folks in the Jabbr chatroom for aspnet vNext and it turns out that the current self host doesn’t work fully yet on OSX. However, there is an alternative in Kestrel, a libuv-based HTTP server. I pulled that repo and tried the sample project, which worked great.

If you’re around Calgary on Thursday then why not come to my talk and watch me stumble around trying to explain all of this stuff?

d3 Patterns

I’m a big fan of the d3 data visualization library, to the point where I wrote a book about it. Today I came across an interesting problem with a visualization I’d created. I had a bunch of rows which I’d colored using a 10-color scale.


The users wanted to be able to click on a row and have it highlight. Typically I would have done this by changing the color of the row but I had kind of already used up my color space just building the rows. I needed some other way to highlight a row. I tried setting the border on the row but that looked ugly and became a tangled mess when adjacent rows were highlighted.


What I really wanted was to put some sort of a pattern on the row. As it turns out this is quite easy to do. SVG already provides a mechanism for applying patterns as fills. The one issue is that you can’t apply a pattern as an overlay to an existing fill; you have to replace the fill completely.

First I created the pattern in d3
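The embedded snippet is gone; a reconstruction of the idea, where svg, color (the 10-color scale) and d.id are assumptions about the surrounding code:

```javascript
// Create a diagonal-stripe pattern in the SVG defs for the clicked row.
var pattern = svg.select("defs")
  .append("pattern")
  .attr("id", "pattern-" + d.id)          // unique pattern per row
  .attr("width", 8)
  .attr("height", 8)
  .attr("patternUnits", "userSpaceOnUse")
  .attr("patternTransform", "rotate(45)"); // tilt the stripes 45 degrees

pattern.append("rect")
  .attr("width", 4)
  .attr("height", 8)
  .attr("fill", color(d.id));              // keep the row's own color in the stripes
```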

Here I create a new pattern element and put a rectangle in it. I rotate the whole pattern by 45 degrees to get a more interesting effect. You may notice that the code references the variable d. I’m actually creating and applying this pattern inside of a click handler for the row. This allows me to create a new pattern for each row and color it correctly. The full code looks like this:
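A hedged reconstruction of the whole click handler, with the same assumed names (rows, svg, color, d.id):

```javascript
// On click, build the pattern on demand and swap the row's solid fill
// for the striped version of the same color.
rows.on("click", function (d) {
  var patternId = "pattern-" + d.id;

  if (svg.select("#" + patternId).empty()) {
    var pattern = svg.select("defs")
      .append("pattern")
      .attr("id", patternId)
      .attr("width", 8)
      .attr("height", 8)
      .attr("patternUnits", "userSpaceOnUse")
      .attr("patternTransform", "rotate(45)");

    pattern.append("rect")
      .attr("width", 4)
      .attr("height", 8)
      .attr("fill", color(d.id));
  }

  // SVG can't overlay a pattern on an existing fill, so replace it entirely.
  d3.select(this).attr("fill", "url(#" + patternId + ")");
});
```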

The finished product looks like


You can change the pattern to come up with more interesting effects.