I am an Ironman

I’ve been training for over a year to run through the finish line at Ironman Wisconsin. I spent countless hours swimming, biking and running to achieve the goal I set in front of me. In 2016 alone I spent 222 hours and 36 minutes training. I traveled over 3,500 miles using my body alone. I climbed over 135k feet of elevation. I can’t imagine how many pints of Ben and Jerry’s I ate or how many additional hours I slept.

It was all worth it to cross the finish line.

Pre-race

Leading up to the race I was pretty excited and couldn’t sleep the night before. I went to bed around 9 PM and woke up around 2:30 AM. The transition area opened at 5 AM for body painting and last-minute gear checks. I rode my bike up to the Monona Terrace and checked my gear, got painted and then spent about an hour waiting for the sun to come up over the lake. I made my way down to the swim start, met some of my ROI teammates and cooled my nerves by chatting about what we had in store. Many of my teammates are diabetic, so they had to get their meters and pumps set up at the table right off the start and finish. It made me realize that I had no business being worried about my swim.

Swim

I positioned myself in the middle-rear of the mass of people that would be starting the swim. I wanted to avoid the innermost group, as it would be really busy, and the outermost group, as there were a lot of people still getting in the water. I figured there would be a lull where I was positioned. I was wrong. I was constantly trampled by people trying to move towards the center. I had a hard time not getting really pissed off and making a scene, but I just stayed the course and swam in the right direction. After the first and second turns there was a lot of open water and I could swim freely without too much traffic. I managed to finish the swim in around 1:30. It was exactly the time I was hoping for. I swam way straighter than I could have imagined.

Bike

The bike is the event I enjoy the most. I told myself if I could get out of the swim, I could finish an Ironman, so I was very excited for the bike. I transitioned relatively quickly and hopped on my bike. I felt amazing. It was exactly what I wanted to do for the last couple weeks: ride a bike in the Ironman. The crowd support in Madison was amazing. I made it to Mt. Horeb and saw my dad, who gave me words of support, and I was off. The first time I climbed Midtown Rd, I was blown away by the number of people. It was single file because of the number of spectators. I got more smacks on the rump than I’m willing to admit and I made it up the hill without really realizing I was climbing. Around mile 75, I started to feel a bit tired but was coming up to Mount Horeb again and I knew my family would be there. Seeing them was a huge boost. Their support got me through the entire day.

I continued on my journey, passing a couple friends on the way. Every town and group of spectators kept me going. Around mile 95, I started feeling a bit drained and was very ready to be done with my bike. Once I hopped on Whalen Road, I knew it was the home stretch and pushed out the last couple miles. I finished my bike in around 7 hours and was exactly on track with my day overall.

Run

The run was hard. The second I got off my bike, I took a breather and sat in transition for 5 minutes just recuperating before getting up and going. Right as I ran out the gate I had friends and family waiting to cheer me on, and it got me going enough to start at a pretty good pace. For the first 9 miles I felt really strong. I was running about a 10 minute/mile pace and figured I could finish around 8 PM. I “had” to take several pictures with my mom. It was a huge boost physically and mentally. (Thanks, Mom.)

Once I started my second lap of the two-lap course, I started to falter. My legs weren’t cramped, they were just completely spent. I could barely shuffle once I made it to mile 20. I saw many people I knew and even ran alongside some of my teammates, but still couldn’t push myself past the pain. The last 6 miles were the most trying miles I’ve spent in my running shoes. I had to alternate between running and walking but knew I could make the finish. It was just a matter of time before getting back to the Capitol Square to see the finish line. I ended up “running” a 5:26 marathon. It was extremely painful but I never really felt like quitting. Running through the chute was amazing.

I ended up finishing just before 10 PM. I spent nearly 15 hours swimming, biking and running my way to the finish line.

I now have a hat, shirt, medal and jacket that tell me I’m an Ironman. I might even get a tattoo if I feel like it… Above all, I have the achievement that no one can take from me and the realization that I can truly do whatever I put my mind to. I had amazing support from my friends, family and total strangers. It was an experience I’ll remember forever.

 

Code Golf: Executing Solutions with Azure Functions

After I first showed off Code Golf, many people’s reaction was that it really needed a way to execute your solutions against a problem. I did a little bit of experimentation with Azure Automation but eventually settled on Azure Functions as the perfect engine for processing solutions.

Here’s how it works.

Each problem has an input field that allows the author to pass prerequisites into the problem so people have access to predefined variables or possibly functions.

An example of taking advantage of input is the solution below. Although it doesn’t solve the problem, it does show that the variable is being defined and accessed automatically.
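For instance, a problem whose Input defines a $strings variable might be answered like the snippet below (the variable name and values are hypothetical, purely for illustration):

    # Input defined by the problem author, prepended automatically
    $strings = 'alpha','beta','gamma'

    # Solution submitted by a user; $strings is already available
    $strings.Count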

Azure Functions are implemented with the Azure Web Jobs SDK. It’s just a glorified Azure Web App that can host and execute these tiny snippets of code. In order to upload a solution to Azure Functions, you can take advantage of the Kudu API and use the virtual file system to upload new files onto the Azure Web App. Functions require two files and a folder to be present before showing up in Azure.
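That layout ends up looking something like this on the Function App’s file system (the GUID folder name is just an example):

    site/wwwroot/
        2f6f1d1c-7f33-4c0a-9b1e-0a1b2c3d4e5f/
            function.json
            run.ps1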

You need to create a folder with the name of the Function you’d like to define. I’m just using GUIDs for Code Golf since I delete the function right after it’s executed. You don’t need to directly create a folder with the VFS API. You can instead upload a file into a folder that doesn’t exist and the entire structure will be created. You’ll need to authenticate with Basic authentication in order to perform this operation. Here’s how I’m currently doing it in Code Golf.
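Roughly, it’s a single VFS call like this (a sketch rather than the exact Code Golf source; the app name and deployment credentials are placeholders):

    # Kudu deployment (Basic auth) credentials -- placeholders
    $user = 'myFunctionsApp-deployment-user'
    $pass = 'myFunctionsApp-deployment-password'
    $auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${user}:${pass}"))

    # Uploading a file into a folder that doesn't exist yet creates the whole structure
    $functionName = [Guid]::NewGuid().ToString()
    $uri = "https://myFunctionsApp.scm.azurewebsites.net/api/vfs/site/wwwroot/$functionName/function.json"
    Invoke-RestMethod -Uri $uri -Method Put -ContentType 'application/json' `
        -Headers @{ Authorization = "Basic $auth" } -InFile 'function.json'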

In the above example I’m creating a function.json file that contains the binding definitions for the function. You can see examples of this directly in the Azure portal or in the Functions documentation. You need to create the function.json first or it won’t work in Azure.

My function.json consists of an HTTP trigger binding for input and an HTTP binding for output. Authentication isn’t being enforced on the endpoints since they are just being deleted right after execution.
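My actual function.json isn’t reproduced here, but a minimal definition matching that description looks something like this:

    {
      "bindings": [
        { "type": "httpTrigger", "direction": "in", "name": "req", "authLevel": "anonymous" },
        { "type": "http", "direction": "out", "name": "res" }
      ],
      "disabled": false
    }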

After creating the function.json, you need to create the actual executable file. If you’re using PowerShell, this would be run.ps1. If you’re using C#, this would be run.csx. Here’s an example of what a run.ps1 generated by Code Golf looks like. Nothing is processed from the HTTP trigger. We just start running the solution developed by the solution author. You can see the Input was prepended to the author’s solution.
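Something along these lines (a sketch, not verbatim Code Golf output; the Input and solution bodies are placeholders, and I’m assuming the HTTP output binding is named res):

    # Input defined by the problem author, prepended by Code Golf
    $strings = 'alpha','beta','gamma'

    # Solution submitted by the user
    $output = $strings.Count

    # Write the result to the HTTP output binding
    $output | Out-File -Encoding ascii $res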

Starting a function is as easy as making an HTTP GET to the Function URL.
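With PowerShell that’s a one-liner (the app name is a placeholder; the response body is whatever run.ps1 wrote to the output binding):

    Invoke-RestMethod -Uri "https://myFunctionsApp.azurewebsites.net/api/$functionName"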

Output from the solution is stored in an $output variable and written out to the output binding. This value is then sent down as the content body of the request to start this Function. Output is then written back to the solution output pane on the Code Golf website. Magic.

Accessing the Kudu API for an Azure Functions App

Azure Functions is a new feature of Azure that lets you easily create “serverless” snippets of code that integrate different systems. Functions can be triggered in numerous ways and all you need to do is write the integration code between the inputs and outputs. They allow for binding to HTTP Endpoints, triggering on DocumentDB changes or generic Web Hooks among other things. Each Azure Functions app is running an instance of Kudu for managing the source control and file system for the functions on the system.

Kudu has an API that is accessible via the URL https://<functions app name>.scm.azurewebsites.net. To access the API, you need to use Basic Auth and provide the Deployment Credentials for the Azure Web App. Since Functions has its own blade, you need to download the publish settings from the Azure App Services blade instead. You can get them by clicking the ellipses next to the Functions service and clicking Get publish profile.

Once you download the profile, open it with a text editor and you can grab the userName and userPWD values. As a test you can use your favorite REST client and do things like create folders using the VFS API.
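For example, from PowerShell (userName and userPWD come from the publish profile; the app and folder names are placeholders):

    $user = 'userName-from-publish-profile'
    $pass = 'userPWD-from-publish-profile'
    $auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${user}:${pass}"))

    # A trailing slash on the VFS path tells Kudu to create a directory
    Invoke-RestMethod -Method Put -Headers @{ Authorization = "Basic $auth" } `
        -Uri 'https://myfunctionsapp.scm.azurewebsites.net/api/vfs/site/wwwroot/TestFolder/'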

The changes should be immediately reflected in your Kudu web portal and on the file system of your web app.

More information about the Kudu API can be found here.

Code Golf

Code Golf is the concept of completing a particular programming problem in the fewest characters possible. Over the last few days, I’ve spent a bunch of time playing around with some new technologies and come up with my own implementation of the Code Golf concept.

Code-Golf.com is up and running.

The general concept is that you can define a problem for a particular language. The different fields in the problem designer use Microsoft’s Monaco editor so you have syntax highlighting and IntelliSense for various languages. In this case, you can use Markdown which will later be turned into HTML using Showdown JS.

Once a question is defined, users can then propose solutions in the selected language. Scores are tallied based on the count of characters. Right now there isn’t any validation as to whether the solution correctly solves the problem. It would be great to actually run the solutions in a sandbox to determine if the result is valid. There are services like Sphere Engine that provide this type of support.

Technology Stack

ASP.NET MVC on .NET Core – I opted to play around with .NET Core a bit. The new project system takes advantage of a project.json file to manage settings and dependencies. You can redirect the build to work on different flavors of .NET. I had to use the full .NET 4.5.2 framework in order to take advantage of the DocumentDB C# library because it doesn’t support CoreCLR yet.

Bower, NPM and NuGet – I used Bower to bring in a couple client-side libraries including Bootstrap, jQuery and Showdown JS. This was really straightforward and similar to adding NuGet references. I also took advantage of NPM to bring in the Monaco editor and Gulp. NuGet was used to bring in the GitHub OAuth provider and some other dependencies. Mixing all these different package management systems was a little weird and proved a little tricky since they all use their own configuration files, etc.

Azure DocumentDB – I’m using Azure DocumentDB to store all the problems, solutions, users and languages. The C# wrapper exposes a LINQ query syntax for documents which is really handy. It has some quirks but is pretty easy to work around.

Front End – The front end takes advantage of Bootstrap and jQuery. I’ll likely mess around with Angular or ReactJS as I move forward but figured I wanted to get a quick prototype out before diving into a new framework. I used Microsoft’s Monaco editor for problems and solutions. Monaco is the same editor that VS Code uses. The Monaco project just enables users to host the editor in a regular web browser. Once I got NPM to install the distro correctly, it was really easy to work with.

Creating new Monaco editors is really straightforward once it’s set up. The editor itself has a lot of options.

Here’s an example of editing PowerShell in Monaco.

 

Authentication – I created a new GitHub application and used the GitHub OAuth provider to enable authentication.

Next Steps

Get the project open sourced on GitHub 

I need to make sure I’m not leaking any API keys, etc. Once I clean it up, I intend to put it on GitHub.

Execution Engine

A basic execution and validation engine would be a great addition. Using something like Sphere Engine is expensive and overkill. I might start with C# and PowerShell language support since I know those best.

A Year with the Garmin 920xt

I’ve been training ALL year for Ironman Wisconsin. My device of choice for tracking my workouts has been the Garmin 920xt. It’s a great little device and has some awesome features. I picked it up from eBay for 300 dollars in like-new condition. It came in the original box and included the heart rate monitor.

Look and Feel

The watch looks pretty cool. It’s way better than my previous GPS watch. I had the Forerunner 301 and it was HUGE.

 

The new watch is much better.


It still isn’t small. Every once in a while I will wear it around when not training, and I’ve had several people comment on the size. I even had someone mention that it looked like I had a TV on my wrist. Overall, it doesn’t bother me. When I was running, biking or swimming, I had no problem. It fit comfortably and never got in the way.

The buttons are easy to use and their functions are obvious. Once you get the feel for it you can click around really easily. The interface is super simple and allows for customizable views during training and customizable watch faces when you aren’t.

Use While Training

Training is really why I got this watch and I was super impressed. I’ll break it down by activity.

Biking

Riding with the watch is great. It monitors my speed and cadence really well. I never had problems with it dropping GPS. It was comfortable enough to wear for 6+ hour rides. I kind of wish I would have bought a bike mount because checking your speed at 45 MPH is kinda tricky when you’re wearing a watch. I had set up a notification to display my pace every 5 miles. The watch would beep and vibrate at that point and show the time elapsed during the last 5 miles. I also used this watch while on the trainer and while using Zwift. It connected easily to both the cadence and speed sensors on my bike.

Running

Just as with biking, the watch was great for running. I was used to running with a phone strapped to my arm, so the watch was a nice deviation. It was a lot lighter and easier to look at to gauge both my pace and distance. I set up a notification so that the watch would beep and vibrate every mile to show me my pace. This became almost necessary for me to ensure that I kept pace while running long distances. I’m horrible at this normally, so I could easily check the pace whenever I felt a little out of it. I also used this watch to run on the treadmill and was extremely impressed by the accuracy when doing this. It was almost dead on with what the dreadmill was reading.

Swimming

Swimming was probably the weakest sport for the watch, but I don’t have any huge gripes against it. When I was in the pool it could easily track my laps. Every once in a while it would drop a lap, and by the end of a 2000 meter swim it might register 1900 instead. In open water the watch was usually pretty accurate, but I found that it didn’t do a good job of keeping pace throughout my swim. It only found my location at large intervals.

I also had a couple experiences where the watch just totally wonked out when I was swimming and registered some strange tracking like below. I was never in the middle of the lake nor on land during that swim.

Other Tech

The watch also offers a bunch of other features that are pretty common to other smart watches.

Direct Computer Connection and Charger 

The Garmin comes with a USB charger and computer connector. It’s super proprietary and clips onto the base of the watch face and charges and syncs via 4 pins. I never had problems with the system. It linked to my computer and also charged from a wall outlet just fine.


Garmin Connect

Garmin Connect is the desktop software and web-based application for linking your watch and workouts. The Connect website has all kinds of features. It’s pretty slick. I never really took full advantage of the system but it does some neat stuff around reporting and connecting with others. It’s super modern and easy to use. I setup Connect to sync automatically to my Strava account and that’s where I did most of my workout tracking.

I did install a couple apps via Connect IQ and wasn’t super impressed by the quality or quantity of the apps available. My favorite apps were the watch faces. I used the Healthboy face the most.

 

 

Bluetooth 

I used Bluetooth to sync all my workouts to my phone and then upload to Connect and eventually Strava. It worked fine. It required that I install the Garmin Connect app on my iPhone, but once that was done it worked pretty automatically. I also set up phone notifications on my watch. Honestly, I really didn’t need to be notified of work emails during my workouts, so it was kind of more annoying than anything. The only time I usually had my phone even close to me when working out was when I was biking. Occasionally, it was nice when friends or family were trying to get a hold of me while I was out on a really long ride.

Wifi

I couldn’t get this to work. You set up the wifi connection via the Garmin Connect desktop software and then the watch is supposed to connect to your wifi. Mine never worked. If it weren’t for the Bluetooth connection, this would have been a deal breaker for me.

Battery Life

This thing lasts forever. I could easily do a 7 hour ride and still have 60% power left. If you are training and using GPS, it’s going to drain a lot faster, but if you aren’t, it will last days. It also charges pretty quickly. I usually just charged it when not training, but whenever I needed a rush charge it would only take an hour or two to get to full.

Overall

I would recommend this watch to anyone serious about triathlon. My training wouldn’t have been nearly as successful without it.

PowerShell Decompiled: How do loops work?

In my quest to figure out why handles.ps1 was running very slowly when looping over the 3.4 million handles on my system, I started to realize that the loop structure itself was a lot slower than in a language like C#. I know that PowerShell is compiled to an executable lambda when it meets particular criteria, so I went to figure out what a loop looks like when it’s compiled. The aptly named Compiler class takes care of this. The Compiler class is invoked on ScriptBlocks to generate in-memory functions that are a representation of that block. The Compiler class is an implementation of ICustomAstVisitor2 that is designed to “Visit” each type of expression in a block. Examples of this are ForEachStatementAst and ReturnStatementAst.

While perusing the source code for the Compiler class you’ll come across a lot of Visit* methods. Each of these methods takes a particular part of the PowerShell AST and turns it into a LINQ Expression that can then be used to generate an executable lambda. Let’s look at what makes up a couple of the looping statements.
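If you want to poke at the AST nodes these Visit* methods consume, the parser is exposed directly from PowerShell; for example (purely illustrative):

    $tokens = $null; $errors = $null
    $ast = [System.Management.Automation.Language.Parser]::ParseInput(
        'foreach ($x in 1..10) { $x }', [ref]$tokens, [ref]$errors)

    # List every AST node type in the script block, e.g. ForEachStatementAst, PipelineAst, ...
    $ast.FindAll({ $true }, $true) |
        Group-Object { $_.GetType().Name } |
        Sort-Object Count -Descending |
        Select-Object Count, Name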

While Loops

While loops are the basis for a lot of other loops when you break down the Compiler class. The while loop is actually implemented by using several different labels and gotos to achieve loop functionality. At the top of the GenerateWhileLoop method, you’ll see a comment that defines the pseudo-code for a PowerShell while loop.

There are three distinct labels: LoopTop, BreakTarget and ContinueTarget. Gotos will jump to the various labels based on whether you use the continue or break keyword or move to the next iteration of the loop.
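Paraphrasing that comment and the generated expression below (my sketch, not the verbatim source comment), the shape is roughly:

    # :LoopTop / ContinueTarget
    #     if (<condition>) {
    #         try {
    #             CheckForInterrupts()
    #             <loop body>
    #             goto LoopTop
    #         }
    #         catch (BreakException)    { if the label matches, goto BreakTarget; otherwise rethrow }
    #         catch (ContinueException) { if the label matches, goto ContinueTarget; otherwise rethrow }
    #     }
    # :BreakTarget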

Given the loop: while($true) { }

The resulting output will be:

.Extension {
    .Block() {
        .Label
        .LabelTarget continue:;
        .Extension {
            .Default(System.Void)
        };
        .If (
            .Block() {
                .Default(System.Void);
                True
            }
        ) {
            .Try {
                .Block() {
                    .Call System.Management.Automation.PipelineOps.CheckForInterrupts($context);
                    .Block() {
                        .Default(System.Void)
                    };
                    .Goto continue { }
                }
            } .Catch (System.Management.Automation.BreakException $var1) {
                .If (
                    .Call $var1.MatchLabel("")
                ) {
                    .Break break { }
                } .Else {
                    .Rethrow
                }
            } .Catch (System.Management.Automation.ContinueException $var2) {
                .If (
                    .Call $var2.MatchLabel("")
                ) {
                    .Continue continue { }
                } .Else {
                    .Rethrow
                }
            }
        } .Else {
            .Default(System.Void)
        };
        .Label
        .LabelTarget break:
    }
}

ForEach Loops

ForEach loops are just a modification of the while loop. The comment in the Compiler class shows that the pseudo-code for a foreach loop looks roughly like this.
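Paraphrased (my sketch, not the verbatim source comment), it’s along these lines:

    # $enumerator = GetEnumerator(<collection>)
    # while ($enumerator.MoveNext()) {
    #     $x = $enumerator.Current
    #     <loop body>
    # }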

You can see that the ForEach is just turned into a while statement that uses an IEnumerator, calling MoveNext and Current to move through the collection.

Given the loop: foreach($x in $i) { }

The resulting output will be:

    .Try {
        .Block() {
            $old_foreach0 = .Call System.Management.Automation.VariableOps.GetVariableValue(
                .Constant(local:foreach),
                $context,
                (System.Management.Automation.Language.VariableExpressionAst)null);
            .Default(System.Void);
            $enumerable1 = .Call System.Management.Automation.VariableOps.GetVariableValue(
                .Constant(i),
                $context,
                .Constant($i));
            $foreach2 = .Extension {
                .Invoke (.Constant<System.Runtime.CompilerServices.CallSite`1[System.Func`3[System.Runtime.CompilerServices.CallSite,System.Object,System.Collections.IEnumerator]]>(System.Runtime.CompilerServices.CallSite`1[System.Func`3[System.Runtime.CompilerServices.CallSite,System.Object,System.Collections.IEnumerator]]).Target)(
                    .Constant<System.Runtime.CompilerServices.CallSite`1[System.Func`3[System.Runtime.CompilerServices.CallSite,System.Object,System.Collections.IEnumerator]]>(System.Runtime.CompilerServices.CallSite`1[System.Func`3[System.Runtime.CompilerServices.CallSite,System.Object,System.Collections.IEnumerator]]),
                    $enumerable1)
            };
            .If (
                $foreach2 == null && $enumerable1 != null
            ) {
                $foreach2 = .Call (.NewArray System.Object[] {
                    (System.Object)$enumerable1
                }).GetEnumerator()
            } .Else {
                .Default(System.Void)
            };
            .Call System.Management.Automation.VariableOps.SetVariableValue(
                .Constant(local:foreach),
                (System.Object)$foreach2,
                $context,
                (System.Management.Automation.Language.AttributeAst[])null);
            .If ($foreach2 != null) {
                .Extension {
                    .Block() {
                        .Label
                        .LabelTarget continue:;
                        .Extension {
                            .Default(System.Void)
                        };
                        .If (
                            .Block() {
                                .Extension {
                                    .Block() {
                                        $funcContext._currentSequencePointIndex = 1;
                                        .If ($context._debuggingMode > 0) {
                                            .Call ($context._debugger).OnSequencePointHit($funcContext)
                                        } .Else {
                                            .Default(System.Void)
                                        };
                                    .Rethrow
                                }
                            } .Catch (System.Management.Automation.ContinueException $var2) {
                                .If (
                                    .Call $var2.MatchLabel("")
                                ) {
                                    .Continue continue { }
                                } .Else {
                                    .Rethrow
                                }
                            }
                        } .Else {
                            .Default(System.Void)
                        };
                        .Label
                        .LabelTarget break:
                    }
                }
            } .Else {
                .Default(System.Void)
            }
        }
    } .Finally {
        .Call System.Management.Automation.VariableOps.SetVariableValue(
            .Constant(local:foreach),
            $old_foreach0,
            $context,
            (System.Management.Automation.Language.AttributeAst[])null)
    }
}

For Loops

For loops are again a modification of the while loop, but with the added initializer, condition and iterator expressions.

Given the loop: for($i = 0; $i -lt 10; $i++) { }

The resulting output will be:

.Block() {
    .Default(System.Void);
    .Call System.Management.Automation.VariableOps.SetVariableValue(
        .Constant(i),
        (System.Object)0,
        $context,
        (System.Management.Automation.Language.AttributeAst[])null)
}
.Block() {
    .Default(System.Void)
}
.Block(System.Object $var1) {
    $var1 = .Call System.Management.Automation.VariableOps.GetVariableValue(
        .Constant(i),
        $context,
        .Constant($i));
    .Call System.Management.Automation.VariableOps.SetVariableValue(
        .Constant(i),
        .Extension {
            .Invoke (.Constant<System.Runtime.CompilerServices.CallSite`1[System.Func`3[System.Runtime.CompilerServices.CallSite,System.Object,System.Object]]>(System.Runtime.CompilerServices.CallSite`1[System.Func`3[System.Runtime.CompilerServices.CallSite,System.Object,System.Object]]).Target)(
                .Constant<System.Runtime.CompilerServices.CallSite`1[System.Func`3[System.Runtime.CompilerServices.CallSite,System.Object,System.Object]]>(System.Runtime.CompilerServices.CallSite`1[System.Func`3[System.Runtime.CompilerServices.CallSite,System.Object,System.Object]]),
                $var1)
        },
        $context,
        (System.Management.Automation.Language.AttributeAst[])null);
    .If ($var1 == null) {
        (System.Object)0
    } .Else {
        $var1
    }
}
.Call System.Management.Automation.VariableOps.GetVariableValue(
    .Constant(i),
    $context,
    .Constant($i))
10
.Extension {
    .Invoke (.Constant<System.Runtime.CompilerServices.CallSite`1[System.Func`4[System.Runtime.CompilerServices.CallSite,System.Object,System.Int32,System.Object]]>(System.Runtime.CompilerServices.CallSite`1[System.Func`4[System.Runtime.CompilerServices.CallSite,System.Object,System.Int32,System.Object]]).Target)(
        .Constant<System.Runtime.CompilerServices.CallSite`1[System.Func`4[System.Runtime.CompilerServices.CallSite,System.Object,System.Int32,System.Object]]>(System.Runtime.CompilerServices.CallSite`1[System.Func`4[System.Runtime.CompilerServices.CallSite,System.Object,System.Int32,System.Object]]),
        .Call System.Management.Automation.VariableOps.GetVariableValue(
            .Constant(i),
            $context,
            .Constant($i)),
        10)
}
.Block() {
    .Block() {
        .Default(System.Void);
        .Call System.Management.Automation.VariableOps.SetVariableValue(
            .Constant(i),
            (System.Object)0,
            $context,
            (System.Management.Automation.Language.AttributeAst[])null)
    };
    .Extension {
        .Block() {
            .Label
            .LabelTarget looptop:;
            .Extension {
                .Default(System.Void)
            };
            .If (
                .Extension {
                    .Invoke (.Constant<System.Runtime.CompilerServices.CallSite`1[System.Func`3[System.Runtime.CompilerServices.CallSite,System.Object,System.Boolean]]>(System.Runtime.CompilerServices.CallSite`1[System.Func`3[System.Runtime.CompilerServices.CallSite,System.Object,System.Boolean]]).Target)(
                        .Constant<System.Runtime.CompilerServices.CallSite`1[System.Func`3[System.Runtime.CompilerServices.CallSite,System.Object,System.Boolean]]>(System.Runtime.CompilerServices.CallSite`1[System.Func`3[System.Runtime.CompilerServices.CallSite,System.Object,System.Boolean]]),
                        .Block() {
                            .Extension {
                                .Block() {
                                    $funcContext._currentSequencePointIndex = 2;
                                    .If ($context._debuggingMode > 0) {
                                        .Call ($context._debugger).OnSequencePointHit($funcContext)
                                    } .Else {
                                        .Default(System.Void)
                                    };
                                    .Default(System.Void)
                                }
                            };
                            .Extension {
                                .Invoke (.Constant<System.Runtime.CompilerServices.CallSite`1[System.Func`4[System.Runtime.CompilerServices.CallSite,System.Object,System.Int32,System.Object]]>(System.Runtime.CompilerServices.CallSite`1[System.Func`4[System.Runtime.CompilerServices.CallSite,System.Object,System.Int32,System.Object]]).Target)(
                                    .Constant<System.Runtime.CompilerServices.CallSite`1[System.Func`4[System.Runtime.CompilerServices.CallSite,System.Object,System.Int32,System.Object]]>(System.Runtime.CompilerServices.CallSite`1[System.Func`4[System.Runtime.CompilerServices.CallSite,System.Object,System.Int32,System.Object]]),
                                    .Call System.Management.Automation.VariableOps.GetVariableValue(
                                        .Constant(i),
                                        $context,
                                        .Constant($i)),
                                    10)
                            }
                        })
                }
            ) {
                .Block() {
                    .Try {
                        .Block() {
                            .Call System.Management.Automation.PipelineOps.CheckForInterrupts($context);
                            .Block() {
                                .Default(System.Void)
                            }
                        }
                    } .Catch (System.Management.Automation.BreakException $var1) {
                        .If (
                            .Call $var1.MatchLabel("")
                        ) {
                            .Break break { }
                        } .Else {
                            .Rethrow
                        }
                    } .Catch (System.Management.Automation.ContinueException $var2) {
                        .If (
                            .Call $var2.MatchLabel("")
                        ) {
                            .Continue continue { }
                        } .Else {
                            .Rethrow
                        }
                    };
                    .Label
                    .LabelTarget continue:;
                    .Extension {
                        .Block() {
                            $funcContext._currentSequencePointIndex = 1;
                            .If ($context._debuggingMode > 0) {
                                .Call ($context._debugger).OnSequencePointHit($funcContext)
                            } .Else {
                                .Default(System.Void)
                            };
                            .Default(System.Void)
                        }
                    };
                    .Block(System.Object $var3) {
                        $var3 = .Call System.Management.Automation.VariableOps.GetVariableValue(
                            .Constant(i),
                            $context,
                            .Constant($i));
                        .Call System.Management.Automation.VariableOps.SetVariableValue(
                            .Constant(i),
                            .Extension {
                                .Invoke (.Constant<System.Runtime.CompilerServices.CallSite`1[System.Func`3[System.Runtime.CompilerServices.CallSite,System.Object,System.Object]]>(System.Runtime.CompilerServices.CallSite`1[System.Func`3[System.Runtime.CompilerServices.CallSite,System.Object,System.Object]]).Target)(
                                    .Constant<System.Runtime.CompilerServices.CallSite`1[System.Func`3[System.Runtime.CompilerServices.CallSite,System.Object,System.Object]]>(System.Runtime.CompilerServices.CallSite`1[System.Func`3[System.Runtime.CompilerServices.CallSite,System.Object,System.Object]]),
                                    $var3)
                            },
                            $context,
                            (System.Management.Automation.Language.AttributeAst[])null);
                        .If ($var3 == null) {
                            (System.Object)0
                        } .Else {
                            $var3
                        }
                    };
                    .Goto looptop { }
                }
            } .Else {
                .Default(System.Void)
            };
            .Label
            .LabelTarget break:
        }
    }
}

So there is a lot more going on when PowerShell is looping than when languages like C# are looping. Might be why my functions are a little slow. 😉

On to My Next Adventures! Ironman, Contracting and Europe

And that’s a wrap! Today is my last day at Concurrency as a full time employee. They are an awesome company to work for and I’m really happy to have had the opportunity to learn some stuff about consulting and work with some great people. That said, I have some big plans in store for the next couple of months.

First of all, I need to finish a race I’ve been training for all year: Ironman Wisconsin. I’m finally reaching the point where 2-hour+ a day training sessions (more on the weekend!) are getting old so I’m stoked to run through that finish line. All that’s left is to swim 2.4 miles, bike 112 and run 26.2. It’s been exciting to see what I’ve already been able to accomplish so I’m pumped to put all my training to use. If you’re bored on Sunday, September 11th, live track me or come out to Madison for an awesome time on the course. There is amazing crowd support at this race. The costumes and beer probably make it more fun for the spectathletes.

In addition to training, I’ve also been raising money for kids dealing with Type 1 diabetes through the organization Riding on Insulin. So if you have a couple extra bucks and want to help sponsor some action sport camps for some rad kids, I’m sure they would appreciate every cent.

Once I finish my Ironman in sub-10 hours (just kidding..), I’ll be on to my next adventure. I’ve picked up some part time contract work for a company called StealthBits. The security space has been an interest of mine for quite some time so I’m happy to start breaking into that by making some sweet security related software. I’m going to be doing a lot more research and focused software development. Working part time should also enable me to blog, work on some open-source projects and travel around without the 40-hour+ grind.

Finally, I’m headed to Europe! I’ll be stationed right outside Geneva, Switzerland (at least at some point). I’ll be working part time and traveling around Europe with my girlfriend and bike. I expect to be in Europe for about three months. We have some great places on the agenda. So if you happen to be in Switzerland, Belgium, the Netherlands, Germany, Croatia, Slovenia or Italy, we should grab a beer and talk some shop. I won’t be making it to the MVP Summit this year so make sure to visit Paddy’s for me.

 

Enabling Output of PowerShell Compilation

A long time ago I wrote a blog post about how to grab the internal compiled script blocks that are produced when PowerShell turns an AST into an executable, compiled lambda. Now that PowerShell is open source, we can do some really neat things with the source to actually output that information.

I just submitted a pull request to the repository to enable the ability to use the DebugViewWriter class in CORECLR. This means that we can take Expression objects and output them as a meaningful string of pseudocode. In my local copy of the repository, I also updated the Compiler class to output to the Trace stream during compilation of AST objects. After an AST is turned into an Expression, we can use the extension method, ToDebugString, to output the pseudocode.

Now I can go ahead and define the ENABLE_BINDER_DEBUG_LOGGING flag in the System.Management.Automation project.json.

Once this is done, what you’ll see is that any time an AST is compiled, there will be output to the Debug Console in VS Code showing the expression that resulted from the command you executed. Below you’ll see some info about how the prompt function is called.

Getting Started with openHAB

openHAB is an open-source home automation project that aims to be vendor- and technology-agnostic. It integrates with all kinds of devices and platforms. It also allows you to create rules and scripts that trigger when certain conditions are met. It offers a simple, built-in UI and native apps for Android and iOS. In this post we’ll look at what it takes to get up and running with openHAB and start communicating with an Insteon device.

The first step is to download the openHAB runtime core and addons. The runtime core has all the guts to get the system up and running. The system is Java based so you’ll need Java installed. This also means it should work on any platform that supports Java. The addon package allows you to selectively choose what openHAB loads so you only have the components that you really need in your system.


Another handy little tool to download is the openHAB Designer. It has some syntax highlighting and validation for the openHAB file types and reads the openHAB directory structure to present it in an IDE-like fashion.


After you’ve downloaded the packages, unzip the runtime core package to a good location. Once this is extracted, you’ll want to bring in the addons that you require for your configuration. The addons come out of the addon package and individual ones can be plopped into the addons directory of the runtime core folder structure. I only needed Insteon but grabbed a couple other ones. To integrate with various systems, you’ll want the “binding” addon for that system. The binding has the knowledge of how to communicate with that system. I also grabbed a couple persistence addons. These allow you to configure data persistence of the devices in your home for reporting. They aren’t required for the system to function.


Once the addons are installed, you’ll want to create a new configuration file for openHAB. In the configurations directory, you’ll need to create a copy of openhab_default.cfg and rename it openhab.cfg.
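From the runtime’s root folder that’s just (paths assume the default directory layout):

    Copy-Item .\configurations\openhab_default.cfg .\configurations\openhab.cfg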


You can then change all the settings you need for the various bindings, persistence or the system in general. Each of the bindings has examples in the cfg file, so you can just uncomment the line and move ahead. Here’s the configuration for my Insteon PLM. It’s connected to a USB (COM4) port on my machine.
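In openhab.cfg that’s a single line along these lines (a sketch; COM4 is simply where my PLM shows up):

    # Serial port for the Insteon PLM binding
    insteonplm:port_0=COM4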


Next, you’ll want to set up devices and groups. This is accomplished by creating a .items file in the items folder. It can have any name as long as the extension is .items. Groups are just logical groupings and can be nested. Items are actual devices located throughout the house. They can be part of multiple groups. For example, my basement lights are part of the basement group and the lights group. A full description of the items file syntax can be found on the openHAB wiki. Here’s an example of my .items file open in the openHAB Designer.
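Mine isn’t reproduced here, but a minimal .items file along those lines looks something like this (a sketch; the group names and the Insteon address/product key are placeholders):

    Group All
    Group Basement  (All)
    Group Lights    (All)

    Switch Basement_Lights "Basement Lights" (Basement, Lights) { insteonplm="11.22.33:F00.00.02#switch" }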


Once items are configured, you’ll want to set up a sitemap. A sitemap defines how the UI of your openHAB setup will look. There are a bunch of controls like text, switches, selections and images. For a full list, check out the openHAB wiki. It seems pretty simple, but I’m sure with a complex configuration it can get pretty gnarly. Here’s a simple sitemap that has a single page with a switch to turn my basement lights on and off.
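Something like this, saved as e.g. home.sitemap (the file name should match the sitemap name):

    sitemap home label="My Home"
    {
        Frame label="Basement" {
            Switch item=Basement_Lights label="Basement Lights"
        }
    }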


After all this is configured, you can start up the openHAB server by running setup.bat in the root folder. Once running, you’ll be able to access the UI through a web browser at http://localhost:8080/openhab.app.


They also offer Android and iOS apps. I installed one on my phone and had to configure the external/internal URL. If you didn’t set up credentials (which is the default), you can leave those blank. Once connected it looks very similar to the web app but native to the phone.


So far, it looks like a really neat solution. It turns my basement light on and off just fine, even from my phone. There’s a bit of a learning curve getting started, but it seems really powerful. I have a few more Insteon devices lying around that I’ll get hooked up, so I’ll see how that progresses. Apparently it can integrate with Nest as well, so I will have to get that up and running too.

 



Reading Temperature from a Google Nest Thermostat with PowerShell

I recently acquired a Google Nest thermostat (on sale at Target!). Even before buying one I was already thinking about the automation possibilities. Google provides a REST interface for the Nest device. It’s accessible via OAuth and very easy to consume. The core tenet of their API is that there are very few endpoints and a single data model for multiple devices. In this post we’ll look at how to get up and running with the Google Nest REST API.


First, you need to create a developer account. Once this is done you need to create a new application under your developer account. It asks you a few questions about the types of devices and permissions you’ll need and from there you are ready to start interacting with your Nest device.


On the right-hand side of the application page, you’ll see a couple text boxes with some key information in them. The first, the Authorization URL, is the URL that you send the end user to. It prompts the user to authenticate and give the application permission to access their device.


After clicking accept, the user is presented with a PIN code page. This PIN needs to be provided back to the calling application, where it is inserted into the Access Token URL (shown on the application page) in place of the AUTHORIZATION_TOKEN portion of the URL.


The application, or PowerShell in this example, can then invoke the access token URL with the authorization token and it will receive an access token. This access token can then be provided to the various endpoints available in the Nest REST API.

Below, you can see that the first command receives the access token. Next, it calls the devices endpoint with the auth key set to the value of the access_token. The returned value is the devices data model that includes all the information about the devices accessible to the calling application and tied to the specified user’s account. As you can see, I returned the current state of my HVAC system and the ambient temperature surrounding the thermostat.
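The screenshot isn’t reproduced here, but the calls look roughly like this (a sketch, not the exact commands from the screenshot; the client ID, secret and PIN are placeholders):

    # Exchange the PIN for an access token
    $token = Invoke-RestMethod -Method Post -Uri ("https://api.home.nest.com/oauth2/access_token" +
        "?client_id=$clientId&code=$pin&client_secret=$clientSecret&grant_type=authorization_code")

    # Call the devices endpoint with auth set to the access token
    $devices = Invoke-RestMethod -Uri "https://developer-api.nest.com/devices?auth=$($token.access_token)"

    # Each thermostat reports its HVAC state and ambient temperature
    $devices.thermostats.PSObject.Properties.Value |
        Select-Object name, hvac_state, ambient_temperature_f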


There’s tons of nifty information for each device. There are also a bunch of cool devices to interact with.


There is also the ability to work with virtual devices so that an actual device isn’t needed. It’s pretty sweet to see this type of home automation so simply accessible. I hope to stumble upon some new devices in the near future and come up with some cool integrations.


