What were they thinking

As of late I have been spending a lot of time working on a “legacy” system. The term legacy means different things to different groups; in the circle I’m working with these days it means a system that is no longer being actively developed. As has been said by many, the software isn’t “dead”, it’s “done”.

The catalyst for the work I’m doing is a set of security requirements that need to be applied. While applying the updated security policies I’ve also been tasked with migrating the system from our data center to our cloud data center. As I move through the system I’m confronted with implementations that, looking at them now, seem crazy; however, it’s in these situations that it is important to apply two mindsets. The first is giving others the benefit of the doubt. In a case like this, those that were working on the system used the best options the technology stack could offer at the time. For example, the extensive use of WCF seems silly now; however, at the time it was a recommended approach to hooking up independent systems. When reviewing the code in this light you can really see how those that worked on it were making thoughtful and considerate choices.

The second mindset that is important to apply, and in my mind the more important one to adhere to, is to assume good intent. This matters because even if someone did NOT follow the prevailing approach of the time, or use the best-in-class technology, they had a good reason, or good intent, for doing what they did. Consider those that came before as doing the best work they could. Perhaps they had constraints that prevented them from making what would seem like a better choice. Perhaps there was a time constraint, as is so often the case. Perhaps, as we so often assume, they didn’t know any better, but even then, they were trying their best, which is worthy of acknowledgement.

Now how does this help? Well, often it allows you to see the system in the light of what it does well. It provides a perspective that can outline what the system has to offer and, as I’m finding, it will allow you to find an approach to extend or adjust the system to comply with the new constraints you are tackling.

So the next time you work on a legacy system and you are cursing the fool that came before, stop and think... give them the benefit of the doubt and assume they had good intentions. You will land in a better spot.

Being a good customer means being able to complain in a way that is helpful

BAD SURVEY: Tell us what you think... OK, GREAT, EXCELLENT, AMAZING!!!!

I'm often told that I deal with difficult situations in a way that results in getting what I want. To me this is quite an interesting observation, because what people most often mean when they say that is: when I receive a poor customer experience I'm able to turn things around so that I obtain a desirable outcome, thus turning a bad experience into a good one.

This manifests itself most often when I'm out and about with my wife. We will be at a retail store, or out to dinner, and something will go wrong. Often my wife will turn to me and suggest that I handle it since "I know how to deal with these things". So what is the secret to my taking a bad situation and turning it around... being a good customer.

As it turns out, businesses want to make money. Shocking, I know. The way they make money is by advertising to attract customers to frequent their brick and mortar locations to use their services or purchase their goods. Knowing this puts the customer in a wonderful position. How so? Well, said business has already spent a measurable amount of money to get you in the door, and ideally they would like you to come back without having to spend a lot more money on advertising. If you have had a bad experience and they are made aware of it, then the responsible business will have trained their employees to try and recover the situation by offsetting the bad with good. This can come in the form of a coupon, discount, or even the wonderful freebie.

But there is a human element to these situations that can result in you missing out on ANY benefit. Case in point: let's say you are out for dinner, you have a bad experience, and you feel you are due some offsetting consideration. How can you go about achieving this goal and turning the situation into a benefit?

My tried and true method is to make someone aware and stick to the facts. Often this translates to asking to speak to the person in charge AND being specific. If you are at a retail store and find that you are charged the wrong price, don't dispute the matter with the cashier. They won't have access to change a price in the point of sale system, and they are most often NOT involved in causing the issue, so don't waste their time and yours trying to make them fix the price at the register. Ask to have your ticket transferred to the customer service desk. Remember, you can always get a refund on the mis-priced item, so holding up the line only annoys people and will NOT get you what you want, which should be the price you expected to pay.

When you find a person "in charge", stick to the facts. Can you show them in print the price you expected to pay? If so, you are almost guaranteed to be able to buy the item at the price in print. If it's just the price you "thought" you saw, you have very little ground to stand on. But let's say you HAVE the e-mail or print ad with the price listed and it's less than what was on your receipt. If the difference is less than 1-2% of the overall purchase then there will probably be little discussion about it being refunded; however, more than that and it's possible the company could be losing money on the deal, and with margins the way they are, that could be a deal breaker.

What to do in those situations? First, align your objectives. You want the price listed and the company doesn't want to lose money; sounds like a great time to bargain. If you are committed to buying the item then any discount at all is a win, so... start big and back down. Ask for the full difference in price; if they don't budge, ask for 75% of it, then 25%... nothing hurts in asking. This is also when it helps to be mindful of your surroundings. If you think "making a stink" and drawing attention will get you what you want, along the lines of "give me what I want and I'll stop making trouble", you are slipping into the mindset of being a troublemaker. Let me ask you, do you want to help a troublemaker? Probably not. If you are polite and DO NOT draw undue attention, then a person in charge is more inclined to think that they can "make you happy" without others needing to catch wind of the price change and cause a stampede.

At this point, once you have a "person in charge" and they realize you are looking for some type of compensation/consideration, it helps to be clear about your problem. What I mean by this is: explain your desire to continue to be a customer, and that this misinformation about price will make it hard for you to "trust" their advertising; however, you do really like their store and are interested in coming back; otherwise you're inclined to just use an online retailer which guarantees the price. This last line is the kicker. Retail in particular is very aware that it is at a disadvantage against online competition, so reminding them of this, while sometimes a risk, often results in the push to get the desired consideration.

All of this advice is also moot if you are not a considerate customer. Don't be rude, and don't belittle people who are employees of the company. Be respectful and attempt to avoid making "problems" for others.

I hope my approach helps you complain in a helpful way.

Benefits of Working with .NET ConcurrentQueue<T>

Working with a queue in .NET is an excellent experience and provides a great way to decouple execution paths. Once you start working with in-memory queues you're just a hop, skip, and a jump away from using "infrastructure" queues such as Service Bus, RabbitMQ, or even MSMQ on Windows desktop.

While it is quite possible to work with the standard .NET Queue<T> object, if you find that you need a lock to ensure consistent results when putting things in the queue (i.e. Enqueue), taking things off (i.e. Dequeue or Peek), or even just obtaining a consistent, reliable count, then you should probably take a look at ConcurrentQueue<T>.

There is a whole section of the .NET Framework, the System.Collections.Concurrent namespace, which deals with concurrency concerns by providing dictionaries, stacks, queues, and more that are considered thread safe.

When I started working with ConcurrentQueue<T> I found it helpful to put my own wrapper around the concurrent queue object, and I've demonstrated that in the following sample, which I hope others find useful.
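In case the gist below is unavailable, here is a minimal sketch of the kind of wrapper I mean (the class and member names are illustrative, not the exact code from the gist):

using System.Collections.Concurrent;

// A thin wrapper that hides the "Try" pattern of ConcurrentQueue<T>
// behind a smaller surface area tailored to the application's needs.
public class WorkQueue<T>
{
    private readonly ConcurrentQueue<T> _queue = new ConcurrentQueue<T>();

    // Enqueue is already thread safe; no lock required.
    public void Add(T item) => _queue.Enqueue(item);

    // TryDequeue returns false when the queue is empty instead of throwing.
    public bool TryTake(out T item) => _queue.TryDequeue(out item);

    // Count is safe to read without a lock, but it is only a snapshot;
    // by the time you act on it the queue may have changed.
    public int Count => _queue.Count;
}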

Link to example: https://gist.github.com/briannipper/c861e708874428d4dc6dda5817411c70

Things I need to remember... IIS Express ports for HTTPS and configuring WCF REST endpoints for HTTPS

Don't Forget
As I have been writing consistently for a bit now, I've already found myself referring back to things I've posted, which has prompted me to try and post items that have taken a bit of time to work out. It seems that the act of "writing it down" causes me to remember information better. Perhaps it's all the thought that goes into attempting to write something useful.

This post includes two details which took me more than a few hours to sort out, so I hope to help others who have a similar need. Funny enough, that could be me in the future after I've forgotten this information.

First up is a little detail that I know I've forgotten and looked up more than once, and that, depending on the Google/Bing results, I find sooner or later.

Setting up IIS Express to host a site via HTTPS

When configuring your site to run in IIS Express using Visual Studio, it's fairly trivial to set the attribute to use SSL (i.e. HTTP over SSL, which is HTTPS) as it's just an "enable" in the drop box for Use SSL. Additionally, when you do this it will ensure that a local cert is set up as well, which should be locally trusted so that the browser doesn't throw any warnings. But a detail that could trip you up is that you'll need to use a PORT within a specific pre-allocated range, as IIS Express grabs that range just for this purpose.

Port range 44300 to 44399
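For reference, the HTTPS binding ends up in IIS Express's applicationhost.config (with recent Visual Studio versions, typically under the solution's .vs\config folder). A rough sketch of what the entry looks like, with the site name and ports as placeholder values:

<site name="MySite" id="1">
  <bindings>
    <binding protocol="http" bindingInformation="*:8080:localhost" />
    <!-- HTTPS must use a port from the range IIS Express pre-registers -->
    <binding protocol="https" bindingInformation="*:44300:localhost" />
  </bindings>
</site>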

There is a great troubleshooting guide at Pluralsight that includes this little gem on port ranges along with other tips.

Setup bindings for WCF REST service end-points to use HTTPS by default

I'll be the first to acknowledge that this is really legacy information, as WCF REST services have been essentially deprecated by .NET Web API technologies, and for that I'm VERY pleased; however, working with legacy technologies has NOT been deprecated, so I'll post this here for others' benefit.

This nugget of information took me the better part of a morning to dig out of the Microsoft article A Developer's Introduction to Windows Communication Foundation 4.

The detail I needed was found under the sub-heading aptly named Default Binding Configurations, where it outlines that if you would like a particular binding to be used by default, you simply need to create the binding without a name; by this convention all service end-points will, by default, utilize this binding. The same is true of a behavior configuration: omit the name attribute and the binding or behavior is adopted by all services defined in the project.

From the same document comes the following sample, which I've replicated and modified to demonstrate how to enable HTTPS for all your service end-points.


<configuration>
  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <binding>
          <security mode="Transport" />
        </binding>
      </basicHttpBinding>
    </bindings>
  </system.serviceModel>
</configuration>

.NET MSTest and DeploymentItem Attribute

I was recently reminded that order of operations can byte you when trying to troubleshoot intermittent unit test failures.

First, I’ll be the first to admit that if you want to avoid problems with unit testing, it’s best to avoid any dependency outside of your actual code base; things like databases, APIs, and even the file system are best avoided in the execution of your unit tests. That last one, the file system, is really hard to avoid in some applications.

If you happen to be using MSTest, a helpful attribute for your tests is DeploymentItem. This particular attribute allows you to reference a file within your unit test project, assuming it’s marked as “Copy Always”. With this attribute in place you can then combine it with TestContext.DeploymentDirectory to find the sample file and do what you need to do.
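A minimal sketch of the pattern (the file and test names here are mine, for illustration):

using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class FileDependentTests
{
    // MSTest populates this property before each test runs.
    public TestContext TestContext { get; set; }

    [TestMethod]
    [DeploymentItem("TestData\\sample.txt")] // marked "Copy Always" in the test project
    public void ParsesSampleFile()
    {
        // The deployed copy lives under the test run's deployment directory.
        var path = Path.Combine(TestContext.DeploymentDirectory, "sample.txt");

        Assert.IsTrue(File.Exists(path), "Deployment item was not found.");
    }
}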

In my particular case I was having an issue with the unit tests failing on the build machine but not locally. After stumbling around for a bit I finally fell back to the old debugging standard of printing the entire directory of files to the console. It was at this point that I realized I was able to create files in the expected location, so why could I not find the DeploymentItem? The answer: side effects.

In one test I “moved” the resource item, and it had just been a situation where, locally, the tests “happened” to execute in an order such that the move operation was happening at the end of the test cycle, thus masking the problem. When I switched the test in question to “copy” the file (which was still a valid test, mind you) my problems went away.

As an alternative approach to solving this problem, you could make the file an embedded resource and then, in the arrange part of the test, write the file out to disk and then perform the test, ensuring that the file always exists at the start.
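Sketching that alternative, assuming the file is added to the test project as an Embedded Resource (the resource and class names below are illustrative):

using System.IO;
using System.Reflection;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class EmbeddedResourceTests
{
    [TestMethod]
    public void ParsesSampleFileFromEmbeddedResource()
    {
        // Arrange: write the embedded resource to a temp file so the test
        // never depends on what other tests did to shared files on disk.
        var assembly = Assembly.GetExecutingAssembly();
        var tempPath = Path.GetTempFileName();

        // Resource names follow the pattern DefaultNamespace.Folder.FileName.
        using (var stream = assembly.GetManifestResourceStream("MyTests.TestData.sample.txt"))
        using (var file = File.Create(tempPath))
        {
            stream.CopyTo(file);
        }

        // Act / Assert against tempPath...
        Assert.IsTrue(File.Exists(tempPath));
    }
}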

I hope this information helps others avoid wasting time with this type of silly order-of-operations problem.

Keep Calm and Get the Facts

KEEP CALM and GET THE FACTS

It amazes me how often I find myself apologizing to others for rashly commenting on a problem before I have all the facts. You would think that after so many years as a professional in the IT space, dealing with countless critical situation/severity one problems (a.k.a. Sev1, crit-sit), I would be able to react in a more appropriate and beneficial manner. The sad truth is that, as I write this post, I JUST "did it wrong" on a Sev1 from earlier in the day and found myself sheepishly apologizing for my brash response.

Perhaps by writing this down and publishing it for others to read, I can obtain the positive results of public shame and correct my behavior. At the least, there is a chance someone else can read this and benefit from my mistakes.

The fundamental problem I seem to have is that I hear a few pieces of information, or even just the subject line of an e-mail or a short text message, and immediately jump to a conclusion. Having done a bit of a personal post-mortem, I find that this tends to be my go-to response when there is a heightened sense of urgency. Perhaps when there is a Sev1 and my fight-or-flight response kicks in, the adrenaline causes my brain to overreact; at least that is the excuse I'm going to tell myself. What can I do about this? First, remember to breathe. While it is perhaps a cliche, it has been proven over and over again that taking a few deep breaths and making sure you have oxygen flowing helps you to think clearly. I also, personally, think that the time it takes to breathe can help dissipate some of that "nervous" energy that is initially kicked off by the excitement.

Even if you forget to take the deep breaths, the one detail that must not be forgotten is this... get the facts. This is summed up quite nicely in a wise proverb which reads "When anyone replies to a matter before he hears the facts, it is foolish and humiliating" (Proverbs 18:13). Let me tell you, I feel like a fool when I realize that I've spoken about a problem incorrectly, and it all could have been avoided simply by obtaining all the facts, which often are readily available.

Related to this, when you find that facts you have distributed were inaccurate, it is vital to distribute the correction as quickly as possible. Even if you don't yet have corrected information, but you know what was previously stated was inaccurate, it's helpful for others to know that so that decisions are not made on faulty details.

I hope that my faults and lessons learned will be of benefit to others going forward.

Compose a message through someone else’s eyes

Twilight Zone: The Eye of the Beholder
I've always enjoyed science fiction that intends to make the viewer reflect on their world view, you know, really make you think about yourself. A wonderful example of that is the classic Twilight Zone episode The Eye of the Beholder.

Although I don't think it's needed at this point, I will mention there are spoilers ahead for this particular episode.

The story outlines the tragic plight of a woman who has been horribly disfigured, and all attempts to make her look "normal" have failed, until this final last-ditch attempt. Throughout, you see neither the woman's face nor the faces of the other characters. That is, until the end, when they unveil the woman by removing the bandages and you see what, at the time of filming, would have been considered an attractive movie star; as the camera pans around you also realize that everyone else appears "disfigured" according to what the vast majority of humans look like. Now your mind is blown and you spend the rest of the day really thinking about what you consider "normal". Mission accomplished.

Clearly this is a story that attempts to drive home the point that, as the old adage goes, "beauty is in the eye of the beholder". Well, the same principle holds true for our communication. The meaning of our words, written, spoken, or otherwise, lives in the minds of the recipients.

While I agree that what constitutes a normal appearance is very subjective and is driven by cultural biases and the trends of the time, I would propose that communication falls into this same arena, but in a sort of microcosm of time. Most of us have probably experienced a situation where you've sent a text, e-mail, or social media post and the reactions were widely varied. After getting this feedback and re-reading the message through the lens of those other readers, you can clearly see why they interpreted your words as they did.

This does not always mean that something horrible was conveyed or that you horribly offended a portion of the population; it could simply be that there was a misalignment. But still, wouldn't it be great to avoid it in the first place? Perhaps these next communication tips will help you as they have helped me.

Know your audience and write to them, not to yourself. When you draft communication, or are speaking to a group, consider the audience and avoid the trap of assuming they know what you know. This doesn't mean you have to spell out every little detail and link to a thousand different references, but be aware that terms often have context, and you should try to explain the context when possible, as this can avoid a great deal of misalignment.

Tone of voice DOES NOT travel across the written word without a HINT. This is easily illustrated in the form of j/k. When you write a text message that is clearly meant to be sarcastic, tacking a little j/k at the end avoids the possibility of the other person taking your words literally. It is wise to do this in all formal settings, such as speaking to a larger group beyond your immediate colleagues or friends, who probably better understand your sense of humor.

When you have the benefit of some time to draft a response, consider re-reading the message after some time has passed from initially writing it. Not only will you catch obvious typos, you may find that portions of the message carry a tone you didn't intend. Maybe you were hungry when you were writing and rushed through something, or you had a bad morning due to something completely unrelated. Take a break and return to re-read with a "fresh" pair of eyes.

I hope this will benefit you as much as it has benefited me. As I've found, effective communication, conveying ideas in a way that aligns two unique personalities, is like having a super power, and I'm sure if harnessed correctly it can be used to conquer the world.

HTTP BIN - Developer Tools

One of my favorite tools is my 10-in-1 screwdriver. The appeal is that in a single tool, just by flipping some pieces around, you have just what you are looking for. For many minor jobs around the house I can do everything needed with this single tool.

When writing software you can find similarly valuable tools, and one such tool is http://httpbin.org

Like with most tools, the simplicity can be deceptive, because how hard could it be to build an API that just echoes back what you sent? But that is what is so great about httpbin: it's done, and it's very robust.

By way of example, when building a front-end you may find that the API isn't complete and thus you want to "stub out" the calls. With httpbin you can send a call such that it echoes back what you want, and thus you are wiring up to an actual HTTP call.

On the server side you might have the same need to wire up to a service that you aren't allowed to call from your local machine, but with httpbin you have the means of mocking it with an actual HTTP call.
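As a quick illustration, here is a minimal sketch of posting to httpbin and getting the payload echoed back (the JSON payload is made up for the example):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class HttpBinDemo
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // httpbin.org/post echoes the request back in its JSON response,
            // so you can wire your code to a real HTTP call before the
            // actual API exists.
            var payload = new StringContent("{\"orderId\": 42}", Encoding.UTF8, "application/json");
            var response = await client.PostAsync("https://httpbin.org/post", payload);

            Console.WriteLine(response.StatusCode);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}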

Using telnet to troubleshoot connectivity.... and watch Star Wars

So the teaser I'll lead off with is: by the end of this post you'll know how to watch Star Wars: A New Hope in ASCII art form (i.e. text-based graphics)... So let's begin.

Recently I found myself working on a legacy application where I had to redeploy the multi-node application onto infrastructure running in Azure. Two of the modules communicate via a WCF binding over net.tcp. While I was operating under the impression that the firewall had been opened up to allow the communication, I was a bit stuck on how to validate that the communication was working. So I wandered up to the networking area to chat with our network architect. As I often find, chatting with those who have an expertise different than your own requires patience and effort in the area of translation. Even though we both work in the IT field we each have our own jargon to deal with, but the price of admission is well worth it.

In just a few minutes of explaining my challenge, I had been provided with an excellent means of confirming that, from a networking perspective, communication was possible. Enter telnet. Telnet, in case you don't know, is a protocol that has come and gone in terms of its heyday. Also, to be clear, telnet is NOT secure and should NOT be left running, but in our case it's helpful as a tool for a short period of time.

As originally stated, I needed to confirm that communication via net.tcp between two nodes in a network was working, so how can one accomplish this "easily"? Well, what we want to do is emulate the communication: use telnet on the same port as the net.tcp listener, and if you get a blank screen you've got connectivity. When a telnet client calls out to the target server over a port that is listening for traffic, say for an HTTP or net.tcp request, the response will come back and the telnet client won't know what to do with it other than show a blank screen. BUT it proves that communication is working on that port. So the primary question is answered.

So, on to the example of how this works, and then on to watching Star Wars.

Enable Telnet Client
BE SURE TO DISABLE TELNET WHEN YOU ARE DONE, WHICH I'LL MENTION AT THE END.

- Open a PowerShell command as Admin
- Enter the command
Enable-WindowsOptionalFeature -Online -FeatureName "TelnetClient"
- Close PowerShell
- Open command prompt
- Enter the command replacing your info as needed
Telnet [IP or DNS] [Port for net.tcp listener]
Example: Telnet google.com 443

If you get a blank screen then an application is listening on the target port and communication is possible via NET.TCP (or HTTPS, etc).

If you get the error "Could not open connection to the host, on port xxx: Connect failed" then you might need to go back to the firewall to see if something else is blocking.

What about Star Wars
As promised, if you have enabled telnet and are done troubleshooting you can check out Star Wars via telnet by opening a telnet connection as follows.

telnet towel.blinkenlights.nl


Disable Telnet Client
- Open a PowerShell command as Admin
- Enter the command
Disable-WindowsOptionalFeature -Online -FeatureName "TelnetClient"

Talk to the Rubber Duck

*See attribution of image at the bottom.
One of the greatest tricks I have had the pleasure of participating in is that of helping someone solve a problem simply by listening. I'm sure there is a more accurate description of this phenomenon, but in the world of programming it's called Rubber Duck Debugging. I was first introduced to this idea by listening to Jeff Atwood podcasts/interviews as well as reading his blog at codinghorror.com, where he wrote about the topic under the title Rubber Duck Problem Solving.

Why the term rubber duck? Really it has nothing to do with the duck itself; the intent is to encourage someone to solve their own problem by working through the very difficult challenge of describing what the problem actually is. It is often surprising to people when they discover that the mere act of attempting to describe a problem helps them find the answer they were looking for.

So my approach to assisting others is to ask the questions that I often ask myself, to help them work through the rubber duck process. For example, if a developer were to ask me to help them with a particular bug, the questions I ask are seemingly obvious, but surprisingly, answering them can often lead to a solution.

So what type of questions should you ask? The most obvious is "what is the problem?" This question alone can often help you find the root of the issue and resolve it. Let's demonstrate this by way of a contrived example.

A developer is having a problem with her application communicating with a third party API service. So that is the "obvious" problem, but now comes the next question, are you ready... "why is that problem occurring?"
So how can the developer answer this question? Debugging is a great place to start, specifically by attempting to reproduce the issue locally. It's worth noting that in our example this doesn't reproduce the issue found on the server. Even so, in some cases, such as when the developer isn't permitted to remotely access the server being impacted, much has still been learned. How so? We now know that the application is NOT the issue, nor is the API service broken. This points to an environmental issue. So even when you can't reproduce the bug locally, you obtain more information. In our contrived example the developer now has facts that she can present to other IT engineers, such as networking, security, or even server admins, to engage in a meaningful conversation to solve the problem.

Another interesting side effect of engaging in this approach is that you will find you have a broader understanding of how systems connect and interact. Additionally, your "Google Fu" improves as well, since you can key in on the specific phrases related to your problem. This is an excellent skill to develop, and when you find yourself in the position of the rubber duck you can ask the right questions to help someone else solve their own problem.

So go be a rubber duck.

*By gaetanlee - https://www.flickr.com/photos/gaetanlee/298160434/, CC BY 2.0, Link

Example of using LazyInitializer.EnsureInitialized

When looking at making systems more efficient it's helpful to think about being lazy. A helpful tool in the .NET tool belt is LazyInitializer, a static class in the System.Threading namespace. In particular, this class contains the method EnsureInitialized.

This method is very simple to use and provides a convenient way to ensure that an initialization is performed only once, based on whether the target reference has already been populated.

For example, if you need to load a file as part of setting values in an application, you can use the EnsureInitialized method.

The following is a derived example of using the class to illustrate the usage pattern.
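In case the embedded gist doesn't render, here is a minimal sketch along the same lines (the file name and class are illustrative, not the exact gist code):

using System.IO;
using System.Threading;

public class AppSettings
{
    private string[] _settingLines;

    // EnsureInitialized runs the factory only while _settingLines is null
    // and is safe to call from multiple threads. Under a race the factory
    // may run more than once, but every caller sees the same published value.
    public string[] SettingLines =>
        LazyInitializer.EnsureInitialized(
            ref _settingLines,
            () => File.ReadAllLines("settings.txt"));
}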

If you are having trouble viewing the code you can use this link to view the gist on GitHub: https://gist.github.com/briannipper/ac2778ccd0d15b4ab217083331419ae7

Automate Work and Life with IFTTT and Office 365 Flow

I'm a huge fan of automation for situations where I find myself doing something mindlessly over and over again, like punching a clock. The way I approach these problems is to find a way to eliminate some portion of the task and iterate until it just happens on its own.

To avoid confusion, when I say punching a clock, what I mean is knowing a rough approximation of how long I worked on a given day, so that when I need to put in my "billable vs. non-billable" time at the end of the week I have a rough idea of how much time I spent working each day. The way I "manually" accomplished this was putting the time I arrived at work and the time I left into a spreadsheet.

So, step one, avoid having to open Excel to enter the information. This was easy using a feature of Office 365 called Flow. It's an application that feels very similar to the popular If This Then That site (a.k.a. IFTTT.com). Flow makes it easy to set up operations that interact with other parts of Office 365, such as creating files in OneDrive based off of attachments in an e-mail, creating reminders in Outlook, or even putting info into a spreadsheet.

Quick aside, Microsoft is bad at naming things, like really bad. In the case of Flow it's both the application as well as an instance of an activity that you create. So going forward when I say I created a "Flow" I mean I created an activity within the application Flow... right, bad at naming things... but I digress.

All flows are comprised of at least two parts. An event/trigger and an action. For example, I could create a flow which would be described as follows...

Every time I receive an e-mail (i.e. trigger/event) take all attachments on the e-mail and place them in OneDrive under a folder with the same name as the "from" e-mail address (i.e. action taken).

Pretty handy. 

So in my case the action was pretty easy: put a row of data in a spreadsheet. But what would be my trigger? After some digging I landed on an interesting feature of Flow the application. Along with the web site/app in the Office 365 subscription, there is a mobile application you can install on any mobile device. With that, you can create a "button" which can act as a trigger, so right from your phone you just tap and the trigger is fired. It has the added bonus of providing GPS data, if you choose, which can also be embedded in the flow for use in the action. So now my flow could be described as...

Every time I press a button in the mobile Flow app on my device (i.e. trigger/event), put a row of data in a spreadsheet I specify that includes the time the flow started, to represent when I start/end work, along with the GPS location of where I pressed the button (i.e. action taken).

At first I thought this was great and wonderful, and I was impressed that I had done all of this in the course of a few hours. However, after the novelty wore off I found that I would often forget to press the button, and per my original goal I was still mindlessly doing something, even if it was a much smaller process.

Enter IFTTT. IFTTT is probably also something an enterprise could use; however, I most often think of it as a consumer service. IFTTT has a much easier way of describing created activities, which it refers to as recipes. These recipes have the same "ingredients" as a flow: a trigger followed by an action. I should mention that Flow seems to be easier to customize and supports more extensively chained activities, but I digress. IFTTT also has a mobile app and, unlike Flow, it supports geo-fencing, the idea of a trigger based on entering or exiting a set of geo coordinates.

I had already been using IFTTT for my "personal" automation projects. As such, I was already using IFTTT to trigger an SMS to my lovely wife when I arrived at work or when I was heading out. In hindsight I'm wondering why this didn't hit me right away, but simply forwarding the SMS I sent to my spouse as an e-mail to my work address could be the trigger I needed for the Flow.

So by "coupling" IFTTT and FLOW I was able to get the best of all worlds. I just go about my business and my spread sheet is updated for me. I even left the button flow in place so that when I work remote I can still keeping things down to a single button press and move on.

What type of activities would you like to automate in your day-to-day routine?

Fire in London helped preserve the city of Savannah, GA through the modern age

A view of Savannah as it stood the 29th of March, 1734
By Pierre Fourdrinier and James Oglethorpe
[Public domain], via Wikimedia Commons
Once upon a time, before my wife and I had kids, we visited the city of Savannah, GA. We had the opportunity to take a carriage ride through some of the oldest portions of the planned city. A detail of the tour that has stuck in my mind was how much of the original street layout remained unchanged. The reason this stuck out was that, as I had found in visiting a few other historic locations on the east coast of the US, many locations as old, if not older, had to be altered to accommodate motor vehicles. So why was that not the case for the planned city of Savannah, GA? Because of fires in London. Huh?

The founder of Savannah was James Oglethorpe. Born in England in 1696, he would have grown up familiar with the legacy of the fires that ravaged that part of the world in 1666. The Great Fire of London burned through much of the city due to many factors, one of which was the way homes were built, basically right up against one another. Out of that fire a new approach to city building came into fashion: that of having easements between homes to help avoid the spread of fire.

This fascinated me, as it turned out that attempting to address the concern of fire, by spacing out plots of land and having larger pathways between the homes, resulted in a design that allowed the original layout to survive, more or less, down to our modern age.

As time progressed and the advent of things such as cars came along, Savannah found itself having plenty of room to allow cars to flow in both directions, for sidewalks to be put in, and for sewers and electricity to be constructed and distributed, all without the need to tear down buildings or reroute the flow of movement.

Understanding the chain of events that caused choices to be made can be one of the most rewarding results of the study of history.

So dear reader, what historical tidbits have captured your interest?

Creating a DSC module to install the Oracle Client and disable/enable UAC

Ever get a task and think, how hard can this be? Then you begin down what turns into a seemingly never-ending journey.

Welcome to mixing old and new technologies.

I found myself in the position of needing to install the Oracle Client on a Windows VM running in Azure using Desired State Configuration (a.k.a. DSC) and, well, let's just say it was an interesting journey.

Listed directly after this is the snippet of the DSC, which I'm posting here in hopes that others who have a similar need may stumble across this and save themselves the headache.

I'll provide a breakdown of the major pieces. I have intentionally left the logging pieces in the script so that any who would be so bold as to cut and paste without reading it over at least have a log to dig into.

A few important prerequisites to using this script.

  • You'll need a DSC module that downloads the Oracle Client Package, the one I used was for 11gR2 and includes an answer file for the client.
    • These items will be specific to your situation.
  • Your Local Configuration Manager (a.k.a. LCM) should be set to allow DSC to reboot.
  • This script WILL disable UAC, and if for some odd reason the script does NOT continue running after the reboot, UAC is LEFT in a disabled state...
!! WARNING !! UAC SHOULD NOT BE LEFT DISABLED
Be sure you understand what this script is doing and how to diagnose the state of the VM once the script runs.



Script InstallOracleWithAnswerFile {
    TestScript = {
        $oraclePathTest = Test-Path HKLM:\SOFTWARE\Wow6432Node\ORACLE\KEY_OraClient11g_home1;
        $regVal = Get-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System -Name EnableLUA;
        if (($oraclePathTest) -and ($regVal.EnableLUA -eq 1)) {
            return $True
        }
        else {
            return $False
        }
    }
    SetScript = {
        $continueProcessing = $True;
        $logTimestamp = Get-Date -Format yyMMddHHmm;
        $logPath = "C:\MyFolder\OracleInstallStatus";
        $logFile = "$logPath\install-$logTimestamp.log";
        Add-Content -Path $logFile -Value "-- Begin Overall Oracle Install With Answer --";
        Add-Content -Path $logFile -Value "-- Opening - Get Current UAC Value --";
        $regVal = Get-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System -Name EnableLUA;
        Add-Content -Path $logFile -Value "UAC Reg Key:";
        Add-Content -Path $logFile -Value $regVal;

        if ($regVal.EnableLUA -eq "1") {
            Add-Content -Path $logFile -Value "-- Change UAC to disabled (i.e. 0) --";
            Set-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System -Name EnableLUA -Value 0;
            Add-Content -Path $logFile -Value "-- Set global flag to trigger reboot --";
            $global:DSCMachineStatus = 1;
            $continueProcessing = $False;
        }

        if ($continueProcessing) {
            Add-Content -Path $logFile -Value "-- Begin Check for Oracle Install --";
            $oracleRegPath = "HKLM:\SOFTWARE\Wow6432Node\ORACLE\KEY_OraClient11g_home1";
            Add-Content -Path $logFile -Value "Oracle Reg Path:";
            Add-Content -Path $logFile -Value $oracleRegPath;
            $oraclePathTest = Test-Path $oracleRegPath;
            Add-Content -Path $logFile -Value "Results of Testing Reg Path";
            Add-Content -Path $logFile -Value $oraclePathTest;
            if ($oraclePathTest -eq $False) {
                $params = "-silent -nowelcome -noconsole -waitforcompletion -noconfig -responseFile C:\MyFolder\OraclePackage\Oracle11gClientx86\11g\x86\client\response\client_030514.rsp";
                $oracleClientExe = "C:\MyFolder\OraclePackage\Oracle11gClientx86\11g\x86\client\setup.exe";
                Add-Content -Path $logFile -Value "--- Starting Oracle Install ---";
                Start-Process -FilePath $oracleClientExe -ArgumentList $params -Wait -Passthru;
                Add-Content -Path $logFile -Value "--- Finish Oracle Install ---";
            }
            else {
                Add-Content -Path $logFile -Value "Oracle Reg Key was found so NOT running install."
            }

            Add-Content -Path $logFile -Value "-- Secondary - Get Current UAC Value --";
            $regVal = Get-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System -Name EnableLUA;
            Add-Content -Path $logFile -Value "UAC Reg Key:";
            Add-Content -Path $logFile -Value $regVal;
            if ($regVal.EnableLUA -eq "0") {
                Add-Content -Path $logFile -Value "-- Change UAC to enabled (i.e. 1) --";
                Set-ItemProperty -Path registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System -Name EnableLUA -Value 1;
            }

            Add-Content -Path $logFile -Value "-- Set global flag to trigger reboot --";
            $global:DSCMachineStatus = 1;
        }

        Add-Content -Path $logFile -Value "-- FINISH Overall Oracle Install With Answer --"
    }
    GetScript = { @{ Result = "InstallOracleWithAnswerFile" } }
    DependsOn = "[Archive]OraclePackageExtract"
}

Test Script

In short the test script is confirming two things.
  1. Is the Oracle Client installed based on the expected registry key being present.
  2. Is UAC in the ENABLED state, also checked by looking at the expected registry key value.
If either of these conditions is FALSE, there is a need to run the set script.

Set Script

This is where it gets hairy, er, interesting.
I'll skip commenting on the logging, as it should be clear what the logs are doing. One interesting point about the way the logging is done is that it can indicate how many times this script has run, since each run writes its log to a file with a date-time stamp in the name, which I found handy after I put myself into an infinite loop a few times.

Since I'll essentially have to provide myself a circuit breaker, I set a continue flag at the start that I can use later on.

Due to the way the Oracle installer works, even WITH the answer file you'll still get a UAC prompt before you can begin the install; thus the first part of this set script obtains the value of the UAC setting via a registry call.

Once I have the UAC value I check its state. In the case where it is ON (i.e. 1), I want to disable it, so I change the registry value, set my continue flag to false, and set the global variable that tells DSC to reboot after this script is completed. Namely...
$global:DSCMachineStatus = 1;

At this point I'm checking my flag to ensure that I SKIP attempting to install the Oracle Client. This is because even though we updated the registry, the change does not go into effect until we reboot, and thus there is no need to run the Oracle installer yet.

So we log that we can't install the Oracle Client and continue on.

The next bit also results in NOT applying changes, since the UAC value in the registry is technically STILL 1, because the UAC change isn't ACTUALLY persisted until reboot.

I am being sloppy at the end and RESETTING the DSCMachineStatus. Meh.

So now we reboot.

Now, running through the script a second time AFTER the reboot, we will be left in a state where UAC IS disabled but we DO NOT have the Oracle Client installed, so when we run through the set script the Oracle install WILL occur.

We finish out the second run by enabling UAC and triggering another reboot.

When the system comes up the third time, UAC is ENABLED and the Oracle Client is installed.

Do please let me know if you run into issues with this script in the comments.

Resilient Scripting is scripting that can be rerun... safely


Making a script resilient can mean many different things to different people; IMHO an important one is being able to re-run a script "safely". By safely I mean minimizing side effects and preventing negative consequences.

To illustrate let's say we have an install process and we need to log details about what happens when we run the install.

To keep this simple let's just focus on the logging requirement.

We want a log file that we can look at when an install happens. So a simple approach would be as follows.

Set-Content log.txt "Information about Install"

Nice and simple, we have satisfied the requirement. But let's see if we can make this one liner more resilient.

$logFileTime = Get-Date -Format yyMMddHHmmss
$logFileName = $logFileTime + "_log.txt"
Set-Content $logFileName "Information About Install"

Now even if we run this install multiple times we will have a log file for each instance of the install, even when installs happen on the same day within the same minute. Furthermore, an interesting side effect of making the script more resilient is that we now can see how many times the install has been run, because we have a file for each install attempt. So the benefits of making it more resilient compound.

I hope this simple example helps you, dear reader, see how even a one-liner can benefit from a second pass aimed at making a script more resilient.


Dogfooding is important for business policies as well

Perhaps you are unfamiliar with the term "dogfooding"; in short it is the idea that a company or organization uses its own product or service in addition to providing it to others. A more complete definition can be found on Wikipedia under Eating your own dog food.

The reasons a group might choose to dogfood are varied, but an important one is to ensure that they are familiar with the challenges of using the good or service, or to put it in business jargon... they want to find opportunities to make what they make better.

For groups that are just starting to do this, it can be EYE OPENING to see what their customers/patrons are dealing with to use their stuff. Often the overall impact is great and everybody wins. So why not do this for policies within, let's say, an IT organization? How would one go about dogfooding a policy, you may ask? Thanks for asking... let me tell ya what I think.

For any organization to be successful there needs to be consistency in the way it approaches work. This is especially true for IT groups. The phrase "consistent and repeatable" is often spoken when discussing how to approach various aspects of the IT assembly line. At times the way to achieve this goal is to determine a policy that governs the way IT associates do their work. Examples of this can be found in policies about documenting changes or how to request access to a system. You can see this even outside of IT, such as when a communications department issues a corporate policy on how to set up your e-mail signature.

So back to the point I was making: it can be of GREAT benefit if the ones making the policy are sure to follow their own policy, to ensure they are not creating any undue burden on those the policy is enforced on. This can best be illustrated with a concrete example. Let's look at a source code repository in a Git system with a pull request policy...

Now, most Git systems have the concept of a branch policy that permits the owner to require a pull request to merge changes, along with some number of approvers, and perhaps even all comments to be resolved and a CI build to complete successfully. If the corporate policy is to have this as the branch policy, it stands to reason that even the repos of the policy owners should have this policy. So why wouldn't they?

This is where a slippery-slope mindset can occur and wind up causing groups to NOT dogfood their own policy. Further along our example, the policy maker has a repo which only he uses, so no reason to have a PR model, right? I mean, who is going to approve it if he's the only one working on the repo? This is where the real benefit of dogfooding the policy comes into play. Right off the bat it will require the policy maker to grasp the need to do one of two things.

The first option would be to create an addendum to the policy which relaxes the branch policy in terms of self-review, or having any reviewers at all, while perhaps still requiring a CI build, which would again be ideal and help the intent of the policy stick; i.e., the intent of the policy is to have repos which are of high quality, and the CI build on the PR prevents the master branch from being in an "unbuildable" state.

The second option, and the one I personally prefer, is for the policy owner to invite someone into the repo to do reviews, thus further ensuring that the policy is more fully adopted.

So if you find yourself in a position of setting policy, it is helpful for all involved, including you, dear policy maker, to adhere to the policy yourself.