Here’s a good litmus test to determine how seasoned a developer is: ask them to design a method that is heavy with conditional clauses. Most developers will instinctively resort to arrow code: excessive nesting of conditionals that pushes the code out into an arrow formation. Coding Horror has a good rundown of the problem and some tips to minimize it.

And you know you’re definitely in trouble when the code you’re reading is regularly exceeding the right margin on a typical 1280×1024 display.

The guard clause is the most important concept for mitigating arrow code, and most developers are unaware of how important it is. First, as explained in the article, it minimizes convoluted conditional logic (cyclomatic complexity). Guard clauses act as sentry points: they verify that parameters are satisfied before the next line of code runs, so succeeding code executes with the guarantee that its inputs are valid, no further checks needed. Second, it creates a clean and elegant method anatomy: guard clauses, the purpose of the method, and the return. This makes the method self-documenting and predictable. It also allows you to plan the usage of resources, which brings me to my third and last point: efficiency. Resources are only spent when they are actually needed.
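To make that anatomy concrete, here is a minimal sketch of the same validation written both ways; `Order` and `OrderProcessor` are hypothetical names for illustration:

```csharp
using System.Collections.Generic;

public class Order {
    public List<string> Items { get; set; } = new List<string>();
    public string Customer { get; set; }
}

public static class OrderProcessor {
    // Arrow code: each validation adds another level of nesting,
    // pushing the happy path further to the right.
    public static string ProcessNested(Order order) {
        if (order != null) {
            if (order.Items.Count > 0) {
                if (order.Customer != null) {
                    return "submitted";
                } else {
                    return "no customer";
                }
            } else {
                return "no items";
            }
        } else {
            return "no order";
        }
    }

    // Guard clauses: the checks act as sentries at the top, so the
    // code below them runs with the guarantee that inputs are valid.
    public static string ProcessGuarded(Order order) {
        if (order == null)          return "no order";
        if (order.Items.Count == 0) return "no items";
        if (order.Customer == null) return "no customer";

        return "submitted"; // purpose of the method; no checks needed
    }
}
```

Both methods behave identically; the guarded version simply keeps the happy path flat and unindented.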



Coding Horror:

Don’t feel inadequate if you aren’t lining your nest with the shiniest, newest things possible. Who cares what technology you use, as long as it works, and both you and your users are happy with it?

That’s the beauty of new things: there’s always a new one coming along. Don’t let the pursuit of new, shiny things accidentally become your goal. Avoid becoming a magpie developer. Be selective in your pursuit of the shiny and new, and you may find yourself a better developer for it.

There’s a shittier variation of the magpie developer: the one who keeps moving on to the next shiny object without even learning the fundamentals. They’re just in it to tickle their fancy, not to get things done. Stay the hell away from them.

Outside local news (and the occasional live sports event), I get most of my video content from the Internet, and through the years that share has steadily grown. Most of that content is foreign, however, so when a Netflix clone became available locally, trying it was a no-brainer for me. iflix is a video-on-demand service targeted mainly at Southeast Asia (currently only in Malaysia, the Philippines and Thailand).

I’ve been trying iflix for more than a month now. If you are experimenting with the idea of cord-cutting or just looking for an alternative way to get more video content, read on.

Things they got right

Streaming. I was pleasantly surprised by iflix’s streaming stability. The Philippine broadband situation is a shitshow right now; however, iflix’s adaptive bit-rate streaming—video quality changes depending on the quality of your connection—actually makes a difference. I can watch any movie on any device with almost zero hiccups. I have a 2 Mbps connection at my house, and it holds up pretty well. Streaming over cellular data is decent; videos stream almost flawlessly as long as you’re on 3G or higher.

Price. iflix has an aggressive price of P129 per month, a third of the average cost of a monthly cable subscription. This is dirt cheap: a single movie rental in the Apple App Store or Google Play Movies ranges from P200 to P600. To put it in another perspective: if you watch re-runs of a TV show like Friends or How I Met Your Mother, you could buy a USB hard drive to store the videos. Assuming that obtaining the videos costs you nothing, you’re out at least P3,000 for the hard drive alone. The same amount of money buys roughly a two-year iflix subscription. Again, dirt cheap.

Things they need to improve

Technology. This is where things go south. iflix has mobile and web apps; the mobile apps are, I believe, web apps wrapped in a native shell. Like most non-native mobile apps, they are buggy and kludgy.

  • General navigation. The mobile app’s navigation is plagued by slow performance. For example, tapping a movie does not instantly bring you to the next screen, and pressing the back button shows the previous screen with a progress indicator in the top bar until the target screen loads. Scrolling is not buttery smooth and has no inertia. The search experience also leaves something to be desired: tapping the search icon exposes a slow, sometimes unresponsive, webpage-like experience. These are all symptoms of a web app dressed as a native app.
  • Screen projection. I’ll be honest: this is the main reason I purchased the service. Sitting back on the couch and watching any movie on a big screen at any time of day is very appealing to me. iflix almost got it right. It supports both Chromecast 1 and 2, and I can project to either device with zero issues. However, it does not support Google Cast-enabled devices such as the Google Nexus Player. Unlike Google Cast implementations like Spotify’s, which hide unsupported devices, iflix embarrassingly lists unsupported devices in its app only to flake out when you stream the video. It does not support AirPlay either. No, projecting your desktop to Apple TV doesn’t count.

Content Selection. If you are looking for fresh content, iflix is not for you. Do not expect the latest episode of Arrow or The Big Bang Theory here. However, if you enjoy watching reruns (like I do), that’s where the real value comes in. iflix is geared toward catching up on shows you missed or binge-watching an entire season of a show. The movies are also slim pickings. According to its FAQ, the movie catalog is updated on a weekly or monthly basis; even so, I would imagine the catalog is, at best, a few thousand titles.

Should you get it?

If you have a decent internet connection without aggressive capping, you should get it. Despite my disappointments, it’s hard to say no to at this price.

I have a side project that had been on hold for a while, and I decided to pick it up last week. It’s a small, two-part web app; one part pulls data from Twitter.

To make authorized calls to the Twitter API, an application must first obtain access tokens from Twitter. There are two ways to do this: OAuth access tokens on behalf of a user, or Application-only Authentication. I considered writing my own full-fledged OAuth client library, but that would be overkill for my requirements, so I settled on Application-only Authentication.

Writing a client library for Twitter sounds fun and cool until you actually do it. It’s meticulous and finicky. In fact, scouring the Internet for working code was a fruitless endeavor. Half of my searches yielded half-baked answers; the other half suggested I just use third-party libraries, which I was adamantly against—I didn’t want to miss the opportunity to learn from this project. Desperation led me to Twitter’s documentation: I had to do everything from scratch without any help from Stack Overflow.

I use HttpClient for everything HTTP—API calls, file uploads, etc.—because it provides better granularity when sending requests and receiving responses. Despite having used HttpClient for a while now, using it for OAuth tested my patience. Letting an API specification dictate your code is not fun at all and is mostly trial and error. To make things worse, the errors are often cryptic or vague.

Several hours later, I was able to pull together a working build. Application-only Authentication comprises three parts: preparing your keys, retrieving the bearer token and sending the actual API call.

Preparing Your Keys
Application-only Authentication has one notable limitation: it has no user context, so endpoints that require a user are off-limits. However, contrary to what most developers believe, it suffices for most situations. Here’s how to prepare the consumer key and consumer secret when sending an HTTP request (excerpt from Twitter):

  1. URL encode the consumer key and the consumer secret according to RFC 1738. Note that at the time of writing, this will not actually change the consumer key and secret, but this step should still be performed in case the format of those values changes in the future.
  2. Concatenate the encoded consumer key, a colon character “:”, and the encoded consumer secret into a single string.
  3. Base64 encode the string from the previous step.
    var encodedConsumerKey       = HttpUtility.UrlEncode(_ConsumerKey);
    var encodedConsumerKeySecret = HttpUtility.UrlEncode(_ConsumerKeySecret);
    var encodedPair              = Base64Encode(String.Format("{0}:{1}", encodedConsumerKey, encodedConsumerKeySecret));

Retrieving Bearer Token
This is where I did a lot of trial and error. Preparing and sending the request requires a good understanding of how HTTP requests work, and converting that knowledge to .NET/C# is challenging if you’re inexperienced.

  • The request must be an HTTP POST request.
  • The request must include an Authorization header with the value of Basic <base64 encoded value from step 1>.
  • The request must include a Content-Type header with the value of application/x-www-form-urlencoded;charset=UTF-8.
  • The body of the request must be grant_type=client_credentials.
    var requestToken = new HttpRequestMessage {
        Method      = HttpMethod.Post,
        RequestUri  = new Uri("oauth2/token", UriKind.Relative),
        Content     = new StringContent("grant_type=client_credentials")
    };
    requestToken.Content.Headers.ContentType = new MediaTypeWithQualityHeaderValue("application/x-www-form-urlencoded") { CharSet = "UTF-8" };
    requestToken.Headers.TryAddWithoutValidation("Authorization", String.Format("Basic {0}", encodedPair));

Making the Actual API Call
This is the easy part. Once you have the bearer token, just add an `Authorization` header to the request with the bearer token as its value and do a `Post` call.

    requestData.Headers.TryAddWithoutValidation("Authorization", String.Format("Bearer {0}", bearerToken));
    var results = await HttpClient.SendAsync(requestData);
    return await results.Content.ReadAsStringAsync();

Here’s the full working method:

        public override async Task<string> Post(string path, HttpContent content) {
            var bearerToken = await GetToken();
            if (String.IsNullOrEmpty(bearerToken))
                throw new Exception("Bearer token cannot be empty");

            var requestData = new HttpRequestMessage {
                Method      = HttpMethod.Post,
                Content     = content,
                RequestUri  = new Uri(path, UriKind.Relative)
            };
            requestData.Headers.TryAddWithoutValidation("Authorization", String.Format("Bearer {0}", bearerToken));

            var results = await HttpClient.SendAsync(requestData);
            return await results.Content.ReadAsStringAsync();
        }

        private async Task<string> GetToken() {
            if (String.IsNullOrEmpty(_ConsumerKey))
                throw new Exception("No Consumer Key found.");
            if (String.IsNullOrEmpty(_ConsumerKeySecret))
                throw new Exception("No Consumer Secret Key found.");

            var encodedConsumerKey       = HttpUtility.UrlEncode(_ConsumerKey);
            var encodedConsumerKeySecret = HttpUtility.UrlEncode(_ConsumerKeySecret);
            var encodedPair              = Base64Encode(String.Format("{0}:{1}", encodedConsumerKey, encodedConsumerKeySecret));

            var requestToken = new HttpRequestMessage {
                Method      = HttpMethod.Post,
                RequestUri  = new Uri("oauth2/token", UriKind.Relative),
                Content     = new StringContent("grant_type=client_credentials")
            };
            requestToken.Content.Headers.ContentType = new MediaTypeWithQualityHeaderValue("application/x-www-form-urlencoded") { CharSet = "UTF-8" };
            requestToken.Headers.TryAddWithoutValidation("Authorization", String.Format("Basic {0}", encodedPair));

            var bearerResult = await HttpClient.SendAsync(requestToken);
            return JObject.Parse(await bearerResult.Content.ReadAsStringAsync())["access_token"].ToString();
        }

        private static string Base64Encode(string plainText) {
            var plainTextBytes = System.Text.Encoding.UTF8.GetBytes(plainText);
            return System.Convert.ToBase64String(plainTextBytes);
        }

I was contemplating our strategy for localizing Voyadores—our ERP product—and I was torn between a file-based approach and a database approach. Localization is typically file-based, but my spider sense was telling me that, in our case, it could spell trouble down the road. It turns out my hunch wasn’t far-fetched; this is a hot topic.

Rick Strahl (an MS MVP) has a great toolkit for managing localization via the database: it offers the ability to update and modify resources on demand through a controlled environment and does much of the heavy lifting for you.


Resx Resources are also static – they are after all compiled into an assembly. If you want to make changes to resources you will need to recompile to see those changes. ASP.NET 2.0 introduces Global and Local Resources which can be stored on the server and can be updated dynamically – the ASP.NET compiler can actually compile them at runtime. However, if you use a precompiled Web deployment model the resources still end up being static and cannot be changed at runtime. So once you’re done with compilation the resources are fixed.

This was the deal breaker. I can put up with Resx being XML—XML documents are a pain to deal with; they’re clumsy and become unwieldy when the file gets too big. However, our app increasingly demands flexibility. The ability to add and modify application messages and notifications without re-deploying the app saves us dozens of developer hours. It also offloads tasks from our developers to our Business Analysts (BAs).

Lastly, I have a beef with applications unnecessarily pulling data from disparate sources. It creates ambiguity in the framework and confuses developers. We strongly encourage developers to be mindful when picking a persistence strategy for their resources (data, configurations, localizations, application variables, etc.). More on this in a later post.


I have developed mostly web apps in my programming career. There have been a few sporadic opportunities to develop desktop apps, but mostly small, uninteresting projects. So when a client asked us to develop an offline desktop client, I was taken aback. This is not our comfort zone. Yes, it’s still .NET and still C#, but XAML and MVVM?

Needless to say, we went for it. After months of whirlwind requirements and intensive coding, we are now doing User Acceptance Testing (UAT). We are still not out of the woods, but I’m confident that it’s all downhill from here. The last few months weren’t easy, though. They were intense and exhausting, and we had to learn new things in crunch time. These are the things that stuck.

Lock down your OS requirements

To mitigate support and deployment headaches, we initially set Windows 7 with SP1 as the minimum operating system requirement for the application. At the very least, this would ensure a fresher .NET Framework. However, the OS version is just one of the problems. We overlooked the processor architecture: most of our client’s users are on 32-bit Windows. Since developers typically use 64-bit OSs, we had to recompile the application for 32-bit, and we also had to track down the 32-bit version of SQL Server Express. While these are fairly trivial to solve, we could’ve saved ourselves a lot of time had we anticipated these requirements.

Asynchronous programming is your new best friend

It’s easy to take for granted how modern web technology stacks support asynchrony right out of the gate. While WPF has rich support for asynchronous programming, baking it into your application is not trivial. Your team needs a solid understanding of how to use asynchronous programming to build a responsive, desirable user experience (UX); previous experience is definitely beneficial. Picking the right technology strategy is also critical. For example, using a Model-View-ViewModel (MVVM) framework was one of the best decisions we made in support of a responsive application: it lets you expose asynchronous commands that UI controls can bind to without blocking the UI thread.
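As a rough illustration of the idea, here is a minimal sketch of an asynchronous command; `AsyncRelayCommand` is an illustrative name, not a type from any particular MVVM framework (most frameworks ship a more complete equivalent):

```csharp
using System;
using System.Threading.Tasks;
using System.Windows.Input;

// A bare-bones ICommand that runs an async delegate and disables the
// bound control while the work is in flight.
public class AsyncRelayCommand : ICommand {
    private readonly Func<Task> _execute;
    private bool _isRunning;

    public AsyncRelayCommand(Func<Task> execute) {
        _execute = execute;
    }

    public event EventHandler CanExecuteChanged;

    // The bound button greys out while the task is still running.
    public bool CanExecute(object parameter) => !_isRunning;

    public async void Execute(object parameter) {
        _isRunning = true;
        CanExecuteChanged?.Invoke(this, EventArgs.Empty);
        try {
            await _execute(); // the UI thread stays free to pump messages
        } finally {
            _isRunning = false;
            CanExecuteChanged?.Invoke(this, EventArgs.Empty);
        }
    }
}
```

A view model would expose this as a property (e.g. a save command wrapping an `await`ed service call) and XAML would bind a button’s `Command` to it; the window never freezes while the call runs.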

Installers can be tricky

Creating an installer for a WPF application is painless and easy. Visual Studio does everything for you: it finds all the libraries your application requires and bundles them with the installer, and any remaining prerequisite is automatically downloaded when the user installs. It should be a perfect strategy. Except when it’s not. This became one of the most painful tasks during testing. Because of our client’s strict, borderline-to-poor intranet settings (see below), downloading prerequisite components during installation became a hair-pulling experience: our client’s internet connection is throttled and therefore glacially slow. It took us almost a day to download a 22MB cumulative patch for SQL Server Express.

This could be solved in several ways. Obviously, an improved policy is the best way, but that typically involves approval from layers and layers of management. If you don’t want to deal with that, download and bundle the prerequisite components ahead of time.

Strategize when picking local persistence

Local persistence has two obvious benefits: offline mode and a performance boost through local caching. There are myriad options, and depending on your requirements, picking the right strategy can be daunting. You can opt for a full-fledged relational database server like SQL Server Express (which we did) or MySQL; however, that typically entails a big checklist of prerequisites on the user’s machine. You can opt for a file-based database like Microsoft Access or even Microsoft Excel, but this could mean licensing costs for your client. Lastly, you can pick a more passive, embedded data store like SQL Server Compact or SQLite—something I wish we had considered more. The setup overhead for these embedded data stores is minimal compared to a full-fledged RDBMS like SQL Server Express, and embedded databases are usually free to download and distribute.
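As a rough idea of how low that overhead can be, here is a minimal sketch assuming the System.Data.SQLite package; the file, table and key names are made up for illustration. The database is just a file next to the executable, with no server install or prerequisites:

```csharp
using System.Data.SQLite;

public static class LocalCache {
    // Creates (or opens) a local cache file and stores one key/value pair.
    public static void SaveLastSync(string value) {
        using (var connection = new SQLiteConnection("Data Source=cache.db")) {
            connection.Open();
            using (var command = connection.CreateCommand()) {
                // The whole schema lives in the app; no DBA or installer step.
                command.CommandText =
                    "CREATE TABLE IF NOT EXISTS Cache (Key TEXT PRIMARY KEY, Value TEXT)";
                command.ExecuteNonQuery();

                command.CommandText =
                    "INSERT OR REPLACE INTO Cache (Key, Value) VALUES ('lastSync', @value)";
                command.Parameters.AddWithValue("@value", value);
                command.ExecuteNonQuery();
            }
        }
    }
}
```

Distributing this amounts to shipping one assembly with the app, which is exactly the kind of setup story that would have spared us the SQL Server Express prerequisite checklist.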

Corporate settings

This is what caught us off guard. I’ve seen how hostile corporate intranet settings can be to applications, but I didn’t realize how difficult it was until now. The best way to solve this is to make sure the business process owner (the owner of the application) understands the requirements of their application. You can work your way around the settings, but it won’t get you far. They need to agree with you on the resources your application will use in their environment, and they need to get on board with the kind of permissions your application needs. Resolving these kinds of issues gives your application stable room to run without resorting to crazy workarounds.

Finishing a difficult project is an exhilarating experience for me and I could honestly say that this is one of the most challenging projects I’ve ever done. Imagine how amazing this makes me feel.

James Hague:

I eventually saw the BASIC listing for his program. It was hundreds and hundreds of lines of statements to change colors and draw points and lines. There were no loops or variables. To animate the blood he drew a red pixel, waited, then drew another red pixel below it. All the coordinates were hard-coded. How did he keep track of where to draw stuff? He had a piece of graph paper that he updated as he went.

My prior experience hurt me in this case. I was thinking about the program, and how I could write something that was concise and clean. The guy who wrote the skull demo wasn’t worried about any of that. He didn’t care about what the program looked like or how maintainable it was. He just wanted a way to present his vision.

There’s a lesson there that’s easy to forget–or ignore. It’s extremely difficult to be simultaneously concerned with the end-user experience of whatever it is that you’re building and the architecture of the program that delivers that experience. Maybe impossible.

Good read and guilty as charged.

I can no longer write anything these days without laying down frameworks, importing libraries and following conventions. That includes weekend projects. To be fair, if you write software for a living, this kind of mindset is mostly valuable in prototyping. If you’re writing production code, I would argue that not only is this counterproductive but dangerous as well.

I went back and forth on whether or not to get the new MacBook. I had a few reservations, but I was intrigued by the machine. Last week, I was ready to pick up a new Mac and had decided I was going to get the early 2015 13″ MacBook Air (MBA). I went home with this machine instead.

It turned out that the 13″ MBA was out of stock, and the last unit they had was this MacBook with bumped-up specs: 512GB SSD, 8GB of RAM and a 1.2GHz Core M processor. Ordering this configuration online would have meant waiting at least 12 days. I was powerless to resist.

I’ve been using it for a week and so far everything is great.

MacBook 2015

The new MacBook’s Retina display is gorgeous. This is my first non-iOS Retina device—I used a 13″ MacBook Air for more than three years—and the display is easily the most obvious benefit. Everything is crisp and the viewing angles are great. It has the same 1440-by-900 resolution and 16:10 ratio as the 13″ MBA, so moving around the desktop is familiar.

I picked the Space Gray and I love it. The metallic Apple logo blends really well with the color. I was a little disappointed when I found out they took out the iconic white logo, but the new logo’s aesthetics are a worthy replacement. Using the device every day and seeing it at different angles makes you appreciate it more. It is beautiful. I think it will become a classic color for Apple laptops.

The 12″ size is also growing on me. A few weeks ago, I had to give up my 13″ MBA for one of my developers, so I borrowed my wife’s 11″ MBA. It wasn’t for me: the keyboard was too cramped and the screen too small for my taste. However, using it for a few weeks somehow rearranged my fingers’ muscle memory, so when I transitioned to the new 12″ MacBook, it just seemed to work. My hands are less tense when typing, and the habits from my 13″ MBA gelled well. Typing on the keyboard, however, is another story; I’ll get to that later.

The slimness and weight of this device are ridiculous. It’s very comfortable to hold and carry around, yet it has enough heft that you don’t worry about it slipping through your hands.

Performance and Battery
One of the reservations I had prior to purchasing this device was how it would perform with virtual machines (VMs). A Windows VM is a critical part of my day-to-day work because I use Visual Studio. Googling around, there was a lot of criticism regarding its specs, but I was skeptical reading it: most of it was based on benchmarks, not actual day-to-day experience. The most “direct” feedback I got was an article from Gizmodo saying the only time they noticed a slowdown was when “running a Windows virtual machine in the background, while jumping around OS X Yosemite”. It sounded anecdotal to me, so I was relieved to find that everything is still zippy when I run Parallels 10 loaded with Windows 8.1, Visual Studio 2013 and SQL Server 2012.

I have not noticed any significant gain in battery life versus the MBA. I usually have it fully charged before I leave for the office, and late in the afternoon I typically hit 10% or below. I have not tested battery usage thoroughly, so your mileage may vary.

Keyboard and Trackpad
I abandoned the mouse when I moved to the Mac. I think the Mac’s trackpad is one of the best input devices out there, and I’ve relied on it ever since. I use ‘Tap to click’ and ‘Three finger drag’, so I was slightly annoyed at how much changed in setting up the latter. I have also yet to fully realize the use of Force Touch. I want to use it more often, but it does not feel natural enough yet. I am intrigued by the applications of haptic feedback, but it’s still in its infancy.

The keyboard, however, is the one thing I am having a hard time getting used to. I’ve read several criticisms of it, including Marco Arment’s comment that the limited key travel leaves something to be desired and leads to error-prone typing. However, typing relies heavily on muscle memory, so I think I just need to give it time; I typed on the 13″ MBA keyboard for three years, and I don’t think I can shake that off in just a few days. One thing I’ve noticed with this keyboard is that I need to trust my keystrokes more. The more I trust them—instead of consciously watching or worrying about them—the better my typing experience becomes: I make fewer typos and I type faster. It is slightly mentally straining, I will concede.

So far, so good
Unlike Marco, I won’t be returning this. I like it enough to overlook its flaws, and I think it will become a better computer over time once I’ve acclimatized to its idiosyncrasies. I am looking forward to its future versions; it might be the last line of computers I’d ever buy.

On the tail end of our WPF client project, we started getting a ‘Task is cancelled’ exception from a method that posts JSON data to a REST API. I knew this exception was just a pointer to the actual problem, and I was confident it wasn’t a code issue because the last modification to the method had been made four weeks earlier.

It turns out that since we started testing rigorously, we had been sending large amounts of JSON to the API. Our knee-jerk reaction was to set the maximum amount of JSON we can serialize to the largest possible value (of course):

protected virtual string Serialize(object obj, bool maxResult) {
    var javaScriptSerializer = new JavaScriptSerializer();
    if (maxResult)
        javaScriptSerializer.MaxJsonLength = Int32.MaxValue;
    return javaScriptSerializer.Serialize(obj);
}

We mistakenly thought this had solved the problem. A few days later, the same error cropped up again, so we decided to take a long-term approach. We cannot hope that there will be less data; we can, however, chop the data up and send it in batches:

public virtual async Task PostBatch<T>(string path, IEnumerable<T> collection, SyncServiceContainer<T> container, int take) {
    int max      = (collection.Count() / take) + 1;
    var contents = new List<HttpContent>();
    for (int i = 0; i < max; i++) {
        var slicedCollection = collection.Skip(take * i).Take(take).ToList();
        container.Data       = slicedCollection;
        // serialize each slice right away; container.Data is overwritten on the next pass
        contents.Add(new StringContent(Serialize(container, true), Encoding.UTF8, "application/json"));
    }
    foreach (var content in contents) {
        await Post(path, content);
    }
}

A few things can be said about the PostBatch method. First, the collection parameter is the data that needs to be chopped; we used IEnumerable<T> so it can accept any type. Second, the container parameter is just an object that wraps the collection; it has a property called Data which holds the chopped collection. The API that receives the data should accept a parameter shaped like the SyncServiceContainer object.

This resolved our problem completely.



TV executives are redefining their business models as they navigate the television industry’s shift to a multimedia sphere.

“It’s not the future. Digitalization is today. Physical capital barriers have crumbled as ‘open content’ on the Web undermines the traditional scheme of delivering news,” ABS-CBN chief digital officer Donald Lim said on the sidelines of the Philippine Marketing Association event in June.

I disagree. This is not just a medium problem. If it were, they could make their content available online and the problem would go away. I’d argue that some of their content is already online, yet they are still in the same predicament. The problem is the traditional content itself. It is baffling that, up to now, local TV networks are still using the same old, tired formula: pop-star-ridden variety shows and soaps. The creativity has stagnated.

Television is not an endangered medium—I believe it is undergoing the same transformation the smartphone did, but I digress—people are still crazy about Game of Thrones and The Walking Dead. These are TV-first shows, and they are great content. Shows like Last Week Tonight are very successful because they break the barriers of traditional content. They experiment. They create thoughtful, entertaining content.

I think local TV networks are out of ideas. They are desperately feeding the fickle-minded masses, and they know it is not sustainable. If redefining the business model and downsizing the workforce are the only things up their sleeve, then they need to brace themselves. Shit is about to hit the fan.