Outside of local news (and the occasional live sporting event), I get most of my video content from the Internet, and through the years that share has steadily grown. Most of that content is foreign, however, so when a Netflix clone became available here, it was a no-brainer for me. iflix is a video-on-demand service mainly targeting Southeast Asia (currently available only in Malaysia, the Philippines, and Thailand).

I’ve been trying iflix for more than a month now. If you are experimenting with the idea of cord-cutting or just looking for an alternative way to get more video content, read on.

Things they got right

Streaming. I was pleasantly surprised by iflix’s streaming stability. The Philippine broadband situation is a shitshow right now; however, iflix’s adaptive bit-rate streaming—video quality changes depending on the quality of your connection—actually makes a difference. I can watch any movie on any device with almost zero hiccups. I have a 2 Mbps connection at home, and it holds up pretty well. Streaming over cellular data is decent too: videos stream almost flawlessly as long as you’re on 3G or higher.

Price. iflix is aggressively priced at P129 per month, a third of the average monthly cable subscription. This is dirt cheap: a single movie rental on the Apple App Store or Google Play Movies ranges between P200 and P600. To put it in another perspective: if you watch re-runs of a TV show like Friends or How I Met Your Mother, you could buy a USB hard drive to store the videos. Assuming that obtaining the videos costs you nothing, you’re out at least P3,000 for the hard drive alone. The same amount covers a 2-year iflix subscription. Again, dirt cheap.

Things they need to improve

Technology. This is where things go south. iflix has mobile and web apps; the mobile apps are, I believe, web apps wrapped in a native shell. Like most non-native mobile apps, they are buggy and kludgy.

  • General navigation. The mobile app’s navigation is plagued by slow performance. For example, tapping a movie does not instantly bring you to the next screen, and pressing the back button shows the previous screen with a progress indicator in the top bar until the target screen loads. Scrolling is not buttery smooth and lacks inertia. The search experience also leaves something to be desired: tapping the search icon exposes a slow, sometimes unresponsive, webpage-like experience. These are all symptoms of a web app dressed as a native app.
  • Screen projection. I’ll be honest, this is the main reason I purchased the service. Sitting back on the couch and watching any movie on a big screen at any time of the day is very appealing to me. iflix almost got it right. It supports both Chromecast 1 and 2, and I can project to either with zero issues. However, it does not support other Google Cast-enabled devices such as the Google Nexus Player. Unlike Google Cast implementations like Spotify’s, which hide unsupported devices, iflix embarrassingly lists unsupported devices in its app only to flake out when you stream the video. It also does not support AirPlay. No, projecting your desktop to an Apple TV doesn’t count.

Content Selection. If you are looking for fresh content, iflix is not for you. Do not expect the latest episode of Arrow or The Big Bang Theory here. However, if you enjoy watching reruns (like I do), that’s where the real value comes in. iflix is geared toward catching up on shows you missed or binge-watching an entire season. The movies are also slim pickings. According to their FAQ, they update their movie catalog on a weekly or monthly basis; still, I would estimate that their movie catalog is, at best, a few thousand titles.

Should you get it?

If you have a decent internet connection without aggressive capping, you should get it. Despite my disappointments, it’s hard to say no at this price.

I’ve a side project that has been put on hold for a while and I decided to pick it up last week. It’s a small, two-part web app. One part pulls data from Twitter.

In order to make authorized calls to the Twitter API, an application must first obtain access tokens from Twitter. There are two ways to do this: OAuth access tokens on behalf of the user, or Application-only Authentication. I pondered writing my own full-fledged OAuth client library, but that’s overkill for my requirements. I settled on Application-only Authentication.

Writing a client library for Twitter sounds fun and cool until you actually do it. It’s meticulous and finicky. In fact, scouring the Internet for working code was a fruitless endeavor. Half of my searches yielded half-baked answers; the other half suggested I just use a third-party library, which I was adamantly against—I didn’t want to miss the opportunity to learn from this project. Desperation led me to Twitter’s documentation: I had to do everything from scratch without any help from StackOverflow.

I use HttpClient for everything that’s HTTP—API calls, file uploads, etc.—because it provides better granularity when sending requests and receiving responses. Despite having used HttpClient for a while now, using it for OAuth tested my patience. Letting an API specification dictate my code is not fun at all and mostly trial-and-error. To make things worse, the errors are often cryptic or vague.

Several hours later, I was able to pull together a working build. Application-only Authentication is composed of three parts: preparing your keys, retrieving the bearer token, and sending the actual API call.

Preparing Your Keys
Application-only Authentication has one major limitation: there is no user context, so some endpoints are off-limits. However, contrary to what most developers believe, it suffices for most situations. Here’s how to prepare the consumer key and consumer secret when sending an HTTP request (excerpt from Twitter):

  1. URL encode the consumer key and the consumer secret according to RFC 1738. Note that at the time of writing, this will not actually change the consumer key and secret, but this step should still be performed in case the format of those values changes in the future.
  2. Concatenate the encoded consumer key, a colon character “:”, and the encoded consumer secret into a single string.
  3. Base64 encode the string from the previous step.
    var encodedConsumerKey       = HttpUtility.UrlEncode(_ConsumerKey);
    var encodedConsumerKeySecret = HttpUtility.UrlEncode(_ConsumerKeySecret);
    var encodedPair              = Base64Encode(String.Format("{0}:{1}", encodedConsumerKey, encodedConsumerKeySecret));

Retrieving Bearer Token
This is where I did a lot of trial-and-error. Preparing and sending the request requires a good understanding of how HTTP requests work, and converting that knowledge to .NET/C# is challenging if you’re inexperienced.

  • The request must be a HTTP POST request.
  • The request must include an Authorization header with the value of Basic <base64 encoded value from step 1>.
  • The request must include a Content-Type header with the value of application/x-www-form-urlencoded;charset=UTF-8.
  • The body of the request must be grant_type=client_credentials.
    var requestToken = new HttpRequestMessage {
        Method      = HttpMethod.Post,
        RequestUri  = new Uri("oauth2/token", UriKind.Relative),
        Content     = new StringContent("grant_type=client_credentials")
    };
 
    requestToken.Content.Headers.ContentType = new MediaTypeWithQualityHeaderValue("application/x-www-form-urlencoded") { CharSet = "UTF-8" };
    requestToken.Headers.TryAddWithoutValidation("Authorization", String.Format("Basic {0}", encodedPair));

Making the Actual API Call
This is the easy part. Once you have the bearer token, add an `Authorization` header to the request with `Bearer <bearer token>` as its value and do a `Post` call.

    requestData.Headers.TryAddWithoutValidation("Authorization", String.Format("Bearer {0}", bearerToken));
 
    var results = await HttpClient.SendAsync(requestData);
    return await results.Content.ReadAsStringAsync();

Here’s the full working method:

        public override async Task<string> Post(string path, HttpContent content) {

            var bearerToken = await GetToken();

            if (String.IsNullOrEmpty(bearerToken))
                throw new Exception("Bearer token cannot be empty");

            var requestData = new HttpRequestMessage {
                                    Method      = HttpMethod.Post,
                                    Content     = content,
                                    RequestUri  = new Uri(path, UriKind.Relative),
                                };

            // Attach the bearer token to every authorized call.
            requestData.Headers.TryAddWithoutValidation("Authorization", String.Format("Bearer {0}", bearerToken));

            var results = await HttpClient.SendAsync(requestData);
            return await results.Content.ReadAsStringAsync();
        }
 
 
        private async Task<string> GetToken() {

            if (String.IsNullOrEmpty(_ConsumerKey))
                throw new Exception("No Consumer Key found.");

            if (String.IsNullOrEmpty(_ConsumerKeySecret))
                throw new Exception("No Consumer Secret Key found.");

            // Step 1: URL-encode the keys, join them with a colon, then Base64 encode the pair.
            var encodedConsumerKey       = HttpUtility.UrlEncode(_ConsumerKey);
            var encodedConsumerKeySecret = HttpUtility.UrlEncode(_ConsumerKeySecret);
            var encodedPair              = Base64Encode(String.Format("{0}:{1}", encodedConsumerKey, encodedConsumerKeySecret));

            // Step 2: POST the encoded credentials to oauth2/token to exchange them for a bearer token.
            var requestToken = new HttpRequestMessage {
                Method      = HttpMethod.Post,
                RequestUri  = new Uri("oauth2/token", UriKind.Relative),
                Content     = new StringContent("grant_type=client_credentials")
            };

            requestToken.Content.Headers.ContentType = new MediaTypeWithQualityHeaderValue("application/x-www-form-urlencoded") { CharSet = "UTF-8" };
            requestToken.Headers.TryAddWithoutValidation("Authorization", String.Format("Basic {0}", encodedPair));

            var bearerResult = await HttpClient.SendAsync(requestToken);
            return JObject.Parse(await bearerResult.Content.ReadAsStringAsync())["access_token"].ToString();
        }
 
        private static string Base64Encode(string plainText) {
            var plainTextBytes = System.Text.Encoding.UTF8.GetBytes(plainText);
            return System.Convert.ToBase64String(plainTextBytes);
        }

I was contemplating our strategy for localizing Voyadores—our ERP product—and I was torn between a file-based approach and a database approach. Localization is typically file-based, but my spider sense was telling me that, in our case, it could spell trouble down the road. Turns out my hunch wasn’t far-fetched. This is a hot topic.

Rick Strahl (an MS MVP) has a great toolkit for managing localization via the DB – it offers the ability to update and modify on demand through a controlled environment and does much of the heavy lifting for you.

[…]

Resx Resources are also static – they are after all compiled into an assembly. If you want to make changes to resources you will need to recompile to see those changes. ASP.NET 2.0 introduces Global and Local Resources which can be stored on the server and can be updated dynamically – the ASP.NET compiler can actually compile them at runtime. However, if you use a precompiled Web deployment model the resources still end up being static and cannot be changed at runtime. So once you’re done with compilation the resources are fixed.

This was the deal breaker. I can put up with Resx being XML—XML documents are a pain to deal with: clumsy, and unwieldy when the file gets too big. However, our app has increasingly demanded flexibility. The ability to add and modify application messages and notifications without re-deploying the app saves us dozens of developer hours. It also offloads tasks from our developers to our BAs (Business Analysts).
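
To make this concrete, here’s a rough sketch of what a database-backed resource lookup could look like. This is illustrative only: the AppResources table and DbResourceProvider class are made-up names for this post, not Voyadores code.

    using System.Collections.Generic;
    using System.Data.SqlClient;

    // Illustrative only: reads localized strings from a hypothetical
    // AppResources table (Culture, ResourceKey, ResourceValue). BAs can
    // edit rows in the database and the app picks them up on the next
    // read. No recompile, no redeploy.
    public class DbResourceProvider {
        private readonly string _ConnectionString;
        private readonly Dictionary<string, string> _Cache = new Dictionary<string, string>();

        public DbResourceProvider(string connectionString) {
            _ConnectionString = connectionString;
        }

        public string GetString(string culture, string key) {
            var cacheKey = culture + "|" + key;
            string value;

            // Serve from the in-memory cache when we can.
            if (_Cache.TryGetValue(cacheKey, out value))
                return value;

            using (var connection = new SqlConnection(_ConnectionString))
            using (var command = new SqlCommand(
                "SELECT ResourceValue FROM AppResources WHERE Culture = @culture AND ResourceKey = @key",
                connection)) {
                command.Parameters.AddWithValue("@culture", culture);
                command.Parameters.AddWithValue("@key", key);
                connection.Open();

                // Fall back to the key itself when no translation exists.
                value = (command.ExecuteScalar() as string) ?? key;
            }

            _Cache[cacheKey] = value;
            return value;
        }
    }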

Lastly, I have a beef with applications unnecessarily pulling data from disparate sources. It creates ambiguity in the framework and confuses developers. We strongly encourage developers to be mindful when picking a persistence strategy for their resources (data, configurations, localizations, application variables, etc.). More on this in a later post.

 

I have developed mostly web apps in my programming career. There have been a few sporadic opportunities to develop desktop apps, but mostly small, uninteresting projects. So when a client came to us and asked us to develop an offline desktop client, I was taken aback. This is not our comfort zone. Yes, it’s still .NET and still C#, but XAML and MVVM?

Needless to say, we went for it. After months of whirlwind requirements and intensive coding, we are now doing User Acceptance Testing (UAT). We are still not out of the woods, but I’m confident that it’s all downhill from here. The last few months weren’t easy, though. It was intense and exhausting, and we had to learn new things under crunch. These are the things that stuck.

Lock down your OS requirements

To mitigate support and deployment headaches, we initially set Windows 7 with SP1 as the minimum operating system for the application. At the very least, this would ensure a fresher .NET Framework. However, OS version is just one of the problems. We overlooked the processor architecture: most of our client’s users are on 32-bit Windows. Since developers typically use 64-bit OSs, we had to recompile the application for 32-bit, and we also had to hunt down a 32-bit version of SQL Server Express. While these are fairly trivial to solve, we could’ve saved ourselves a lot of time had we anticipated these requirements.

Asynchronous programming is your new best friend

It’s easy to take for granted how modern web technology stacks support asynchrony right out of the gate. While WPF has rich support for asynchronous programming, baking it into your application is not trivial. Your team needs a solid understanding of how to use asynchronous programming to build a responsive, desirable user experience (UX), so previous experience is definitely beneficial. Picking the right technology strategy is also critical. For example, using a Model-View-ViewModel (MVVM) framework was one of the best decisions we made in support of a responsive application: MVVM supports asynchronous commands that can be tied to a UI control, as sketched below.
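
To illustrate the idea, here’s a minimal sketch of an asynchronous command (not our framework’s actual implementation, just the general shape): an ICommand that awaits its work so the UI thread stays free, and that disables the bound control while the work is in flight.

    using System;
    using System.Threading.Tasks;
    using System.Windows.Input;

    // Minimal async command sketch; real MVVM frameworks ship more
    // complete implementations (re-entrancy guards, error hooks, etc.).
    public class AsyncCommand : ICommand {
        private readonly Func<Task> _Execute;
        private bool _IsRunning;

        public AsyncCommand(Func<Task> execute) {
            _Execute = execute;
        }

        public event EventHandler CanExecuteChanged;

        // The bound control is disabled while a previous run is in flight.
        public bool CanExecute(object parameter) {
            return !_IsRunning;
        }

        public async void Execute(object parameter) {
            _IsRunning = true;
            RaiseCanExecuteChanged();
            try {
                await _Execute();   // the UI thread is free during this await
            } finally {
                _IsRunning = false;
                RaiseCanExecuteChanged();
            }
        }

        private void RaiseCanExecuteChanged() {
            var handler = CanExecuteChanged;
            if (handler != null)
                handler(this, EventArgs.Empty);
        }
    }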

Installers can be tricky

Creating an installer for a WPF application is painless and easy. Visual Studio does everything for you: it finds all the required libraries of your application and bundles them with the installer, and when you install on the user’s machine, any prerequisite is automatically downloaded. It should be a perfect strategy. Until it’s not. This became one of the most painful tasks during testing. Because of our client’s strict, borderline-poor intranet settings (see below), downloading prerequisite components during installation became a hair-pulling experience: our client’s internet connection is throttled and therefore glacially slow. It took us almost a day to download a 22MB cumulative patch for SQL Server Express.

This could be solved in several ways. Obviously, an improved policy is the best way, but that typically involves approval from layers and layers of management. If you don’t want to deal with that, download and bundle the components ahead of time.

Strategize when picking local persistence

Local persistence has two obvious benefits: offline mode and a performance boost through local caching. There are myriad options, and depending on your requirements, picking the right strategy can be daunting. You can opt for a full-fledged relational database server like SQL Server Express (which we did) or MySQL; however, that typically entails a big checklist of prerequisites on the user’s machine. You can also opt for a file-based database like Microsoft Access or even Microsoft Excel, but this could mean licensing costs for your client. Lastly, you can pick a more passive data store like SQL Server Compact or SQLite—something I wish we had considered more. The setup overhead for these embedded data stores is minimal compared to a full-fledged RDBMS like SQL Server Express, and embedded databases are usually free to download and distribute. (See the sketch below.)
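
For comparison, here’s a minimal sketch of the embedded route using the System.Data.SQLite package. The database is just a file next to the app, so there’s no server to install on the user’s machine; the table and class names are made up for illustration.

    using System.Data.SQLite;

    // Illustrative local cache backed by a single SQLite file.
    public class LocalCache {
        private const string ConnectionString = "Data Source=localcache.db;Version=3;";

        public void Initialize() {
            using (var connection = new SQLiteConnection(ConnectionString)) {
                connection.Open();
                using (var command = new SQLiteCommand(
                    "CREATE TABLE IF NOT EXISTS CachedOrders (Id INTEGER PRIMARY KEY, Payload TEXT)",
                    connection)) {
                    command.ExecuteNonQuery();
                }
            }
        }

        public void Save(int id, string payload) {
            using (var connection = new SQLiteConnection(ConnectionString)) {
                connection.Open();
                using (var command = new SQLiteCommand(
                    "INSERT OR REPLACE INTO CachedOrders (Id, Payload) VALUES (@id, @payload)",
                    connection)) {
                    command.Parameters.AddWithValue("@id", id);
                    command.Parameters.AddWithValue("@payload", payload);
                    command.ExecuteNonQuery();
                }
            }
        }
    }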

Corporate settings

This is what caught us off guard. I’ve seen how hostile corporate intranet settings can be to applications, but I didn’t realize how difficult it was until now. The best way to solve this issue is to make sure the business process owner (the owner of the application) understands the requirements of their application. You can work your way around the settings, but it won’t get you far. They need to agree with you on the resources your application will use in their environment, and they need to get on board with the kinds of permissions your application needs. Resolving these kinds of issues gives your application stable room to run without resorting to crazy solutions.

Finishing a difficult project is an exhilarating experience for me and I could honestly say that this is one of the most challenging projects I’ve ever done. Imagine how amazing this makes me feel.

James Hague:

I eventually saw the BASIC listing for his program. It was hundreds and hundreds of lines of statements to change colors and draw points and lines. There were no loops or variables. To animate the blood he drew a red pixel, waited, then drew another red pixel below it. All the coordinates were hard-coded. How did he keep track of where to draw stuff? He had a piece of graph paper that he updated as he went.

My prior experience hurt me in this case. I was thinking about the program, and how I could write something that was concise and clean. The guy who wrote the skull demo wasn’t worried about any of that. He didn’t care about what the program looked like or how maintainable it was. He just wanted a way to present his vision.

There’s a lesson there that’s easy to forget–or ignore. It’s extremely difficult to be simultaneously concerned with the end-user experience of whatever it is that you’re building and the architecture of the program that delivers that experience. Maybe impossible.

Good read and guilty as charged.

I can no longer write anything these days without laying down frameworks, importing libraries, and following conventions. That includes weekend projects. To be fair, if you write software for a living, this kind of mindset is mostly valuable in prototyping. If you’re writing production code, I would argue that not only is this counterproductive, it’s dangerous as well.

I went back and forth on whether or not to get the new MacBook. I had a few reservations, but I was intrigued by this machine. Last week, I was ready to pick up a new Mac and had decided I was going to get the early 2015 13″ MacBook Air (MBA). I went home with this machine instead.

It turned out the 13″ MBA was out of stock, and the last unit they had was this MacBook with bumped-up specs: 512GB SSD, 8GB of RAM, and a 1.2GHz Core M processor. Ordering this configuration online would have meant waiting at least 12 days. I was powerless to resist.

I’ve been using it for a week and so far everything is great.

MacBook 2015

Display
The new MacBook’s Retina display is gorgeous. This is my first non-iOS Retina device—I used a 13″ MacBook Air for more than three years—and the display is easily the most obvious benefit. Everything is crisp and the viewing angles are great. It offers the same 1440-by-900, 16:10 workspace as the 13″ MBA, so moving around the desktop is familiar.

Physique
I picked up the Space Gray and I love it. The metallic Apple logo blends really well with the color. I was a little disappointed when I found out they took out the iconic white logo, but the new logo’s aesthetic is a worthy replacement. Using the device every day and seeing it from different angles makes you appreciate it more. It is beautiful. I think it will become a classic color for Apple laptops.

The 12″ size is also growing on me. A few weeks ago, I had to give up my 13″ MBA for one of my developers, so I borrowed my wife’s 11″ MBA. It wasn’t for me: the keyboard is too cramped and the screen is too small for my taste. However, using it for a few weeks somehow rearranged my fingers’ muscle memory, so when I transitioned to the new 12″ MacBook, it just seemed to work. My hands are less tense when typing, and the orientation carried over from my 13″ MBA gelled well. Typing on the keyboard, however, is another story. I’ll get to that later.

The slimness and weight of this device are ridiculous. It’s very comfortable to hold and carry around, yet it has enough heft that you don’t worry about it slipping out of your hand.

Performance and Battery
One of the reservations I had prior to purchasing this device was how it would perform with virtual machines (VM). A Windows VM is a critical part of my day-to-day work because I use Visual Studio. Googling around, there are a lot of criticisms of its specs, but I was skeptical reading them: most are based on benchmarks, not actual day-to-day experience. The most “direct” feedback I got was an article from Gizmodo saying the only time they noticed a slowdown was when “running a Windows virtual machine in the background, while jumping around OS X Yosemite”. It sounded anecdotal to me, so I was relieved to find that everything is still zippy when I run Parallels 10 loaded with Windows 8.1, Visual Studio 2013, and SQL Server 2012.

I have not noticed any significant battery gain over the MBA. I usually have it fully charged before I leave for the office, and late in the afternoon I typically hit 10% or below. I have not tested battery usage thoroughly, so your mileage may vary.

Keyboard and Trackpad
I abandoned the mouse ever since I moved to the Mac. I think the Mac’s trackpad is one of the best input devices out there, so I’ve relied on it ever since. I use ‘Tap to click’ and ‘Three finger drag’, so I was slightly annoyed at how much the setup for the latter has changed. I have also yet to find a real use for Force Touch. I want to use it more often, but it doesn’t feel natural enough yet. I am intrigued by the applications of haptic feedback, but it’s still in its infancy.

The keyboard, however, is the one thing I am having a hard time getting used to. I’ve read several criticisms about it, including Marco Arment’s comment that the limited key travel leaves something to be desired and leads to error-prone typing. However, typing relies heavily on muscle memory, so I think I just need to give it time. I typed on a 13″ MBA keyboard for three years; I don’t think I can shake that off in just a few days. One thing I noticed with this keyboard is that I need to trust my keystrokes more. The more I trust them—instead of consciously watching or worrying about them—the better my typing experience becomes: fewer typos, faster typing. It is slightly mentally straining, I will concede.

So far, so good
Unlike Marco, I won’t be returning this. I like it enough to overlook its flaws, and I think it will become a better computer over time once I’ve acclimatized to its idiosyncrasies. I am looking forward to its future versions; it might be the last computer I’d ever buy.

On the tail end of our WPF client project, we started getting a ‘Task is cancelled’ exception from a method that posts JSON data to a REST API. I knew this exception was just a pointer to the actual problem, and I was confident it wasn’t a code issue because the last modification to the method was made 4 weeks ago.

It turns out that since we started testing rigorously, we had been sending large amounts of JSON to the API. Our knee-jerk reaction was to raise the serializer’s JSON length limit to the maximum value (of course):

    protected virtual string Serialize(object obj, bool maxResult) {
        var javaScriptSerializer           = new JavaScriptSerializer();
        javaScriptSerializer.MaxJsonLength = Int32.MaxValue;
        return javaScriptSerializer.Serialize(obj);
    }

We mistakenly thought this solved the problem. A few days later, we started getting the same error again, so we decided to take a long-term approach: we cannot hope there will be less data, but we can chop the data up and send it in batches:

    public virtual async Task PostBatch<T>(string path, IEnumerable<T> collection, SyncServiceContainer<T> container, int take) {
        // Number of batches, rounded up so a partial final slice is still sent.
        int max      = (int)Math.Ceiling(collection.Count() / (double)take);
        var contents = new List<HttpContent>();

        for (int i = 0; i < max; i++) {
            var slicedCollection = collection.Skip(take * i).Take(take).ToList();
            container.Data       = slicedCollection;
            contents.Add(SerializeContent(container));
        }

        foreach (var content in contents) {
            await Post(path, content);
        }
    }

A few things can be said about the PostBatch method. First, the collection parameter is the data that needs to be chopped; we used IEnumerable<T> so it can accept any type. Second, the container parameter is just an object that wraps the collection: it has a Data property that holds the current slice. The API that receives the data should accept a parameter shaped like the SyncServiceContainer object.
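
For illustration, a hypothetical call could look like the sketch below. SyncServiceContainer, InvoiceRecord, and LoadPendingInvoices are stand-ins for this post, not the actual project code.

    // Stand-in container: PostBatch assigns each slice to Data before serializing.
    public class SyncServiceContainer<T> {
        public IEnumerable<T> Data { get; set; }
    }

    public class InvoiceRecord {
        public int    Id     { get; set; }
        public string Number { get; set; }
    }

    // Send the records 500 at a time; PostBatch issues one POST per slice
    // instead of one giant payload.
    IEnumerable<InvoiceRecord> invoices = LoadPendingInvoices();  // hypothetical data source
    await syncService.PostBatch("api/invoices/sync", invoices, new SyncServiceContainer<InvoiceRecord>(), 500);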

This resolved our problem completely.

 

Rappler:

TV executives are redefining their business models as they navigate the television industry’s shift to a multimedia sphere.

“It’s not the future. Digitalization is today. Physical capital barriers have crumbled as ‘open content’ on the Web undermines the traditional scheme of delivering news,” ABS-CBN chief digital officer Donald Lim said on the sidelines of the Philippine Marketing Association event in June.

I disagree. This is not just a medium problem. If it were, they could make their content available online and the problem would go away. I’d argue that some of their content is already online, yet they are still in the same predicament. The problem is traditional content. It is really baffling that, to this day, local TV networks are still using the same old, tired formula: pop-star-ridden variety shows and soaps. The creativity has stagnated.

Television is not an endangered medium—I believe it is undergoing the same transformation the smartphone did, but I digress—people are still crazy about Game of Thrones and The Walking Dead. These are TV-first shows, and they are great content. Shows like Last Week Tonight are very successful because they break the barriers of traditional content. They experiment. They create thoughtful, entertaining content.

I think local TV networks are out of ideas. They are desperately feeding the fickle-minded masses, and they know it is not sustainable. If redefining business models and downsizing the workforce are the only things up their sleeve, then they need to brace themselves. Shit is about to hit the fan.

We’ve been using Slack for a while now, and we find it really useful for sending receipt notifications: automated, pre-formatted messages from an application. For example, you might want Dropbox to notify you after a file is copied to a certain directory.

In one of our projects, we perform the back-office task of importing files from an external source. It’s a lengthy and repetitive chore, so I decided to create a nifty tool to automate it. Everything worked great, except that we had to constantly remote into the server to check whether the import was finished. I figured this would be a perfect use case for Slack’s open and well-documented API. All I needed was an incoming webhook, which is basically just an HTTP call. There are several third-party .NET libraries that encapsulate Slack’s API calls (they can be found here), but I highly recommend you go plain vanilla and use plain HTTP calls. The steps are fairly trivial: set up your incoming webhook integration here (login needed), copy your webhook URL, then send a POST request with a JSON payload to that URL.

So here’s how I did it. First, following the DRY principle, I have a base class for all of the basic HTTP calls. Below is a snapshot of my base class; you might want to add more methods like Get or Put for the GET and PUT HTTP methods, respectively. Take notice of the `SerializeContent` method: that’s how we will serialise local objects to JSON.

    public abstract class BaseSyncService : BaseService {

        private HttpClient _HttpClient;
        public HttpClient HttpClient {
            get {
                // Lazily create one shared client pointed at the configured endpoint.
                if (_HttpClient == null)
                    _HttpClient = new HttpClient {
                        BaseAddress = new Uri(BuildAPIURL())
                    };

                return _HttpClient;
            }
        }

        private string BuildAPIURL() {
            return ConfigSettings("APIEndPointURL");
        }

        protected virtual HttpContent SerializeContent(object obj) {
            // Serialize any local object to a JSON payload with the proper content type.
            return new StringContent(new JavaScriptSerializer().Serialize(obj), Encoding.UTF8, "application/json");
        }

        public virtual async Task<string> Post(string path, HttpContent content) {
            // Note: don't dispose the shared HttpClient here; wrapping it in a
            // using block would break every call after the first one.
            var result = await HttpClient.PostAsync(path, content);
            return await result.Content.ReadAsStringAsync();
        }

    }

From there I can make a derived class called SlackSyncService, which looks something like this:

    public class SlackSyncService : BaseSyncService {
        public async Task<string> Notify(string message) {
            return await Post("", SerializeContent(new {
                                                       text        = message,
                                                       username    = "Import Bot",
                                                       icon_emoji  = ":thumbsup:"
                                                   }));
        }
    }

Now we’re getting somewhere. Keep in mind that our Post method is awaitable, so it will beautifully wait for Slack’s API response. Additionally, we can use .NET’s anonymous types as containers for our data; they serialise to JSON perfectly. There are many settings to customise your post. You can find everything here.

Now I can just call this service class somewhere in my ViewModel:

        private async Task NotifySlack() {
            if (ShouldNotifySlack) {
                try {
                    await new SlackSyncService().Notify(String.Format("Files successfully imported! Total import time is {0}", ElapsedTime));
                } catch (Exception exception) {
                    // Surface the failure through the view model's event instead of crashing the import.
                    EventHandler<string> onExceptionFileImport = OnExceptionFileImport;
                    if (onExceptionFileImport != null)
                        onExceptionFileImport(this, exception.Message);
                }
            }
        }

(I know some of you are probably wondering why I use the new keyword willy-nilly, and that this is an anti-pattern, but I have my reasons. It’s a good topic for a separate blog post.)

That’s it. Adding Slack integration to a .NET app is really easy and fun. It’s a great weekend project. Happy coding!