A monumental event, part 1

I realize it’s been a while since you heard from me. What can I say, demands from family and clients tend to keep me from writing. Besides, writing is hard. I prefer going for a ride on my beloved Harley to sitting at my desk trying to organize my thoughts.

This blog post series has been in the making for several years, though, and I feel that I ought to write it now, almost like a debt owed. In more than one respect this is a monumental event: I need to tell you about event-driven serverless architecture.

An event-driven architecture uses events to trigger and communicate between decoupled services and is common in modern applications built with microservices. An event is a change in state, or an update, like an item being placed in a shopping cart on an e-commerce website.

https://aws.amazon.com/event-driven-architecture/

This probably sounds more than a bit abstract; I know it took me a while to get my head around it. Event-driven architecture is a deep, fundamental design choice: a choice to move away from big centralized applications and towards smaller, integrated microservices. It is not just a product or a tool that you can start using. It is a different way of thinking about an entire application landscape.

Related reading: Slaying the behemoth, Extension Design Principles

Let’s try to make this a bit less abstract with a usage scenario. Stuff & Co is a company that sells items to customers through several webshops. They have their own webshop, but they also use several marketplaces to sell their items. All of a sudden they need to keep track of many events: not just incoming orders, but also item prices and availability, customers, shipments, and invoices. Even a small event like placing an item in a shopping basket becomes interesting. If 20 customers place an item in their basket while only 10 of those items are in stock, the replenishment or production processes need to gear up.

Stuff & Co uses Business Central for their financial administration and warehousing. Unfortunately, they find they are drowning in a million different integrations that slow down the primary process of their Business Central server. All these integrations are also nearly impossible to maintain: it takes serious effort to integrate a new webshop or to keep items up to date across the application landscape.

Stuff & Co are not alone in this. One of the biggest concerns my clients have with Business Central is optimizing it to post as many documents, usually shipments, as possible without locking. This is nothing new, and it is at the heart of my design philosophy: Business Central is used for the reliable tracking of money and goods. And nothing else.

The problem we have here, of course, is the “nothing else”. How are we going to do the other things we need to have done? How are my items and customers going to stay in sync across many different companies, databases, or even applications? How will I import and export my data? Invoice scanning and recognition? And many more questions. Fortunately, my clients also gave me the answer to this: serverless computing. More than once I have helped clients move non-core Business Central processes to a serverless back-end they had already started to implement. Being independent, I have used both Amazon Web Services and Microsoft Azure. I must confess to a slight preference for Azure, though that may be caused by years of conditioning. Both are able to provide the services I need, but I will use Azure for demonstration purposes.

Using Business Central to handle data imports and exports is like using an 18-wheel truck to pop to the shop for a loaf of bread. Of course it’s possible. It’s also impractical and a waste of resources that could be better used elsewhere.

Why would I pay for a service just to send data to other systems, I hear you think. This, of course, is a very good question that is easy to answer: scalability. The great joy of an event-driven architecture is that it is insanely easy to scale. Remember Stuff & Co and their multiple webshop integrations? What if I want to integrate a new webshop? Or notify customers that an item is being picked? With an event-driven architecture I can easily build small applications that subscribe to events. If something changes I can simply change these small applications.

The big challenge, of course, is to integrate Business Central with the events infrastructure that Azure or AWS (or Google, or any number of vendors) offers. I need to push my Business Central events to my serverless back-end, and I need to subscribe to events that occur elsewhere in my application landscape. With the added challenge of not interrupting the flow of my Business Central core processes.
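
To give you a first taste of what pushing an event looks like, here is a minimal sketch of an AL procedure that posts an event to an Azure Event Grid topic. The topic endpoint, the access key, and the event payload are placeholder assumptions; the next posts in this series will cover the real setup.

procedure PublishItemChangedEvent(ItemNo: Code[20])
var
    Client: HttpClient;
    Content: HttpContent;
    Headers: HttpHeaders;
    Response: HttpResponseMessage;
    EventObject: JsonObject;
    DataObject: JsonObject;
    EventArray: JsonArray;
    Body: Text;
begin
    // Event Grid expects a JSON array of events in its own schema; this payload is illustrative only
    DataObject.Add('itemNo', ItemNo);
    EventObject.Add('id', Format(CreateGuid()));
    EventObject.Add('eventType', 'Item.Changed');
    EventObject.Add('subject', 'items/' + ItemNo);
    EventObject.Add('eventTime', Format(CurrentDateTime, 0, 9));
    EventObject.Add('dataVersion', '1.0');
    EventObject.Add('data', DataObject);
    EventArray.Add(EventObject);
    EventArray.WriteTo(Body);

    Content.WriteFrom(Body);
    Content.GetHeaders(Headers);
    Headers.Remove('Content-Type');
    Headers.Add('Content-Type', 'application/json');

    // 'aeg-sas-key' carries the Event Grid topic access key; the endpoint and key below are placeholders
    Client.DefaultRequestHeaders.Add('aeg-sas-key', '<topic-access-key>');
    if not Client.Post('https://<your-topic>.westeurope-1.eventgrid.azure.net/api/events', Content, Response) then
        Error('Could not reach the Event Grid endpoint.');
    if not Response.IsSuccessStatusCode then
        Error('Event Grid returned status code %1.', Response.HttpStatusCode);
end;

In a real implementation you would not call this directly from a posting routine; you would queue the event and send it in the background, exactly so the core process is not interrupted.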

I realize I have taken up a lot of your time already. The second blog post in this series will describe sending events from Business Central to the Azure Event Grid, the third will describe subscribing to external events, and the fourth and final post will describe how to make Business Central data available to your other microservice applications.

Photo by Pablo Heimplatz on Unsplash

Choose your… Browser?

Sometimes I just want to tell the world how awesome life is right now for Business Central developers. The tools we have available to us are only getting better, mostly because we are now in direct dialogue with the people who create them. Just have a look at the Business Central GitHub, Yammer, and Twitter communities.

One of these tools I just had to share with you: the al.browser setting. It allows you to choose which browser is opened when you publish your extension from Visual Studio Code.
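
If you want to try it, the setting goes in your VS Code user or workspace settings. The exact list of accepted values depends on your version of the AL Language extension, so treat the value below as an assumption to verify:

{
    "al.browser": "Edge"
}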

For me this is a great feature, because I now don’t have to use my main browser, with all its history, for my demos, webinars, and training sessions.

When I mentioned this feature to the good people of ForNAV they were more than helpful in adding this feature to their report designer as well. Here is a sneak preview from the coming release, allowing you to preview your Business Central reports in the browser of your choice, while designing them.

Two years of extension building. Part 3

If this is the future, then that is alright. Those were my words after getting off the new Harley Davidson LiveWire. For those of you who don’t know, the LiveWire is Harley Davidson’s all-new electric motorcycle.

Related reading: On Harley’s, chaos, and Business Central

If you follow this blog at all you will know I have a, probably unhealthy, love of Harley Davidson motorcycles. I love riding them, and I love writing about them. There is something immensely gratifying about sitting on top of an engine so big that it has its own gravity field. These are loud, shaking, living, breathing beasts of motorcycles that are also well built and well engineered. So much so that, despite their size, they are the consummate companions for ceaseless cruising.

When it comes to writing it is just satisfying to scribble in superlatives. There is no way you can overdo writing about Harleys. Therefore, when I was invited to ride the new LiveWire at my local Harley dealer I grabbed my thesaurus and jumped at the chance.

Riding a LiveWire is like strapping a warp engine to your back and pressing its do-not-press button. Twisting the throttle launches you and the bike into an alternate reality where pedestrian things like natural laws don’t exist. In fact, the only thing that exists there is a surge of quiet speed, punctuated only by the mad whooping noises that emanate unbidden from the core of your being. It is quiet, poised, and handles like it is on rails. And yet, despite being all computers and software, it is still a living, breathing beast of a motorcycle. It’s a Harley, and like any Harley it speaks to a part of your soul that most people don’t know they have. If this is the future, then that is alright. More than alright.

I’m not buying one though. Not because it is eye-wateringly expensive, but because it does not work for me. For the simple reason that my left knee can’t handle the LiveWire’s riding position for more than twenty minutes.

This brings us to the real reason for writing this post. There is no point in buying an amazing bike if you can’t ride it, just like there is no point in investing in tech that won’t serve your business. Which brings us back to extensions and VS Code.

I have spent some time in the past six months helping people get started with creating extensions. Most often these people are confused about how to get started, because every time they attended a training or saw a presentation they were drowned in stuff like Docker, source control, CI/CD, automated testing, Azure Functions, and more great tools. That is why I wanted to use this blog post to look at what you actually need to build extensions.

Before you all get on my case about how important source control is and how we need automated testing: I know it is important. But it is more important to have an easy way in and to get started with building extensions. Source control, automated testing, and all sorts of other things are not needed to build an extension. What you need to get started is a Business Central cloud sandbox and VS Code, and that is it. You don’t need anything else. Once you get going, though, you will need some, but maybe not all, of these things. Let me give you a guide to getting started with building extensions and improving your development process. This guide is based on my own experience as a small business owner who builds extensions for paying customers.

  • VS Code and AL development. Just create a new project on your local hard drive, connect to a Business Central sandbox, and start coding. Don’t make it more complicated than this.
  • GitHub. You might want to work on your project with a colleague, or you may want to have a simple change log. GitHub is easy to learn and easy to start using. Stay away from Azure DevOps!
  • Docker. You may need to spin up a new Docker container because you don’t want to wait for your sandboxes all the time. Don’t get started with Business Central on Docker unless you have 1 TB of free disk space.
  • Test Codeunits. At some point you may want to publish your extension to AppSource, and for this you need test Codeunits; a minimal example follows after this list. Once you start building tests you will realize that you should have built your tests before building your extension. Only you could not, because you needed to learn how to make extensions first.
  • Azure and control add-ins. Once you start working on extensions for cloud sandboxes you will run into things you just can’t do with AL, and things you can do better with other tools. Learning C#, .NET Core, JavaScript, and many other things will be next on your list.
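
Here is what such a test codeunit can look like. A minimal sketch: the object number, names, and the tested behaviour are made up, and it assumes the standard test framework and test libraries are installed in your sandbox or container.

codeunit 50130 "Item Description Tests"
{
    Subtype = Test;

    [Test]
    procedure DescriptionIsStored()
    var
        Item: Record Item;
        LibraryInventory: Codeunit "Library - Inventory";
    begin
        // [GIVEN] An item created by the standard test library
        LibraryInventory.CreateItem(Item);

        // [WHEN] The description is changed
        Item.Validate(Description, 'Test description');
        Item.Modify(true);

        // [THEN] The change is stored in the database
        Item.Get(Item."No.");
        if Item.Description <> 'Test description' then
            Error('Description was not saved as expected.');
    end;
}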

For most businesses this will be enough, at least for now. Things like automated builds, automated testing, and all sorts of other tools are simply not needed for small teams. It is not hard to spin up a container and run some tests manually. Nor is it hard to build an extension manually. Remember the LiveWire: you don’t need fancy tech if it does not suit your needs. Keep it as simple as you can!

Please note, once again, that I’m not arguing against automating your development process as far as you can. I’m just saying that you need to make getting started as easy as possible, and that not every developer needs to be a DevOps engineer.

This is, I think, the end of my story about getting started with extensions. It has been a fun journey; I hope I shared enough of it to inspire you to try some new things. For me personally the last twelve months have been a transformation from an employed developer to a small business owner. This change has given me a new perspective on many things, and I’m sure that there are many exciting things still to learn and explore. Some things have not changed though. This crafty creative still likes to create cunning code, compose capital content, and commute on a commanding cruiser. If this is the future, then that is alright.

Photo by Jez Timms on Unsplash

Two years of extension building. Part 2

Let’s just say I like construction. I’m never happier than when I am building something, be it something physical or software. With creation comes learning; actually, I think learning comes from creation. Come to think of it, learning mostly comes from messing stuff up and then fixing it better.

It may seem awkward to confess here that I keep on messing stuff up; after all, many people pay me for my expertise. But I would not be an expert if I had not messed up, and learned, so much. My mistakes are like the dirt under my fingernails and the grease stains on my jeans. I wear them with pride.

Another great thing that comes from mistakes is stories to tell. Fortunately, we can learn from each other’s mistakes, whatever the negative people say; I see people do this, again and again. Not just people either, we’re not that special. When I have to give one of our cats his medicine, the other will run away. Learning from one another just happens.

Back to mistakes. I have such an almighty balls-up to share that I hardly know where to begin. It all began about three years ago, when my coworkers and I started using events.

Let’s be clear that what I am about to tell you is not exclusive to events; it is something that has been with us for many years. I am about to tell you about our old friend, Commit().

Just a quick recap of the problem. When you are writing to the database, Business Central waits to commit the changes until you are done and all the code has been executed. If at any point an error is raised, all changes to the database are rolled back. However, when we are running complex routines like posting, we sometimes need to manually commit changes to the database before doing some cleanup or some other posting. This is fine, because the posting routines are written so that they never raise an error after a commit. To ensure this, a simple design pattern is used:

  • Test near
  • Test far
  • Do it
  • Clean up

By moving all testing to before the actual posting we ensure everything is in order before the first commit.
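
In code, the pattern looks something like the sketch below. The table and helper procedures are hypothetical; what matters is the order of the steps and that nothing after the Commit() is allowed to raise an error.

procedure PostDocument(var DocHeader: Record "My Document Header")
begin
    // Test near: validate the document itself
    DocHeader.TestField(Status, DocHeader.Status::Released);
    DocHeader.TestField("Posting Date");

    // Test far: validate related setup and master data (hypothetical helper)
    CheckRelatedSetup(DocHeader);

    // Do it: write the entries and the posted document (hypothetical helper), then commit
    InsertPostedEntries(DocHeader);
    Commit();

    // Clean up: after the commit nothing may raise an error anymore,
    // so only do work here that cannot fail (hypothetical helper)
    DeleteWorkRecords(DocHeader);
end;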

And then Microsoft added a ton of events to all the posting Codeunits. This is not a problem as such; they are useful and necessary to us. Unfortunately, looking at the events list in VS Code we cannot see where the commits are. Enter my mistake: I raised an error after a commit. The result was a ton of half-posted documents in a production database.

The problem here was that it was not as simple as removing an error message. This was a very complex extension that ensured the simultaneous and correct posting of multiple documents. Besides using many event subscribers itself, it also triggered a lot of custom code in other extensions. Finding this problem, and fixing it, has given me a deeper insight into how to deal with events.

I had four insights that I would like to share.

First, obviously, know what you are subscribing to. This is where events make things harder: because we don’t change the original object, we often don’t know what it is we influence. Fortunately, Microsoft made it easy for us to check the standard code. Even in Business Central 2019 wave 2 and newer it is easy to unpack your symbols file and check the source code.

Second, keep it simple. I found it is hardly ever necessary to subscribe to events that are deeper than the OnBeforeRun or OnAfterRun events in posting Codeunits. If I do need something deeper it is usually because of a design flaw in my code. This makes sense if you look at the design pattern mentioned earlier. You either set or test something before posting or you clean up something after posting.
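
As an illustration, a subscriber at that level can look like the sketch below. OnAfterPostSalesDoc is an event in the standard Sales-Post codeunit, but check the exact parameter list against your version’s symbols, and the notification procedure is a hypothetical helper.

[EventSubscriber(ObjectType::Codeunit, Codeunit::"Sales-Post", 'OnAfterPostSalesDoc', '', false, false)]
local procedure OnAfterPostSalesDoc(var SalesHeader: Record "Sales Header"; SalesShptHdrNo: Code[20])
begin
    // Runs once, after the whole document has been posted; no need to dig deeper into the posting
    if SalesShptHdrNo <> '' then
        SendShipmentNotification(SalesShptHdrNo);
end;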

Third, leave the actual posting alone. The posting routines have been designed and tested in order to build trust. Accountants trust the posting routines of Business Central. If we start messing about with posting routines we lose that trust. Besides, the actual posting is also used by other third party extensions. If you touch the posting itself you will break something else.

Fourth, if you can’t keep it simple, then override. Most posting Codeunits have Codeunits that call them, and in those Codeunits you can override the standard posting Codeunit. This is a last resort though; it will almost certainly mean compatibility issues with other third-party extensions.

// Take over the warehouse shipment posting from the calling (Yes/No) codeunit:
// if nothing has posted the shipment yet, run the custom routine and mark it as handled.
[EventSubscriber(ObjectType::Codeunit, Codeunit::"Whse.-Post Shipment (Yes/No)", 'OnBeforeConfirmWhseShipmentPost', '', false, false)]
local procedure OnBeforeConfirmWhseShipmentPost(var WhseShptLine: Record "Warehouse Shipment Line"; var HideDialog: Boolean; var Invoice: Boolean; var IsPosted: Boolean);
var
    CustomPostingRED: Codeunit "Custom Posting RED";
begin
    if IsPosted then
        exit;
    if CustomPostingRED.PostWarehouseShipment(WhseShptLine, HideDialog, Invoice) then
        IsPosted := true;
end;

In the end I had to opt for the fourth option. I created a new Codeunit that runs a number of posting routines with some smart error catching.
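
The error catching boils down to running each posting codeunit inside an if-statement, so that a failing posting is rolled back and logged instead of stopping everything. A simplified sketch; the logging helper is hypothetical:

procedure PostShipmentSafely(var WhseShptLine: Record "Warehouse Shipment Line"): Boolean
var
    WhsePostShipment: Codeunit "Whse.-Post Shipment";
begin
    ClearLastError();
    // Calling Codeunit.Run in an if-statement catches the error and rolls back this posting only
    if not WhsePostShipment.Run(WhseShptLine) then begin
        LogPostingError(WhseShptLine, GetLastErrorText());
        exit(false);
    end;
    exit(true);
end;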

There we have it. One more mistake, one more fix that made everything better. I hope this helps you in your quest for clean code. Do you have any embarrassing mistakes to share?

Photo by Christopher Burns on Unsplash

Two years of extension building. Part 1

The year is ending. Christmas is upon us, it is cold and dark and wet, and I have the biggest and scariest change of my working life coming next year. All this has put me in a melancholic mood and has me looking back at the last couple of years. I remembered that it is two years since I created my first extension and almost two years since I started blogging about it. I realized it is time to make good on a promise I made to you, dear reader, and write about my experiences.

The first, and maybe not so obvious, thing I learned is that building extensions is all about creating value and reducing cost.

The way we create value for the organizations who employ us is by identifying what it is that makes those organizations unique and competitive. Then we take that unique, competitive thing and use clever automation to make it better.

Take for example a crane company. What makes that company competitive and unique is its ability to lift heavy things at a certain time and in a certain place. So in order to add value to this company we could create a better crane that can lift bigger or a wider variety of loads. We could also create clever planning software that would enable said company to do more jobs in a day. Both measures would result in the company doing more work and thus earning more money.

The way we save money for the organizations who employ us is by reducing operating costs. We make what they do cheaper and easier.

Going back to our crane company: we could use cheaper paper for the office stationery, or we could implement software to manage their parts store more efficiently.

Here is the kicker though. While there is a limit to the amount of money a company can save, there is no limit to the amount of money a company can make.

The goal for your innovative project should be to increase revenue by more than the cost required to achieve that increase. For instance, a new crane that can lift heavier loads more quickly and needs servicing less often. Both innovations mean you can do more work in a day.

Related reading: extension design principles

Admittedly this has nothing to do with extensions. You should see this entire blog as a call to evaluate what you spend your time on and how that adds value to your customers. The reason this is the first thing I write about is because this is the foundation from which I started to design extensions.

When you know what adds value for your customer it is easy to determine your priorities. Develop unique software that adds value. Everything else can be off the shelf stuff.

Photo by EJ Yao on Unsplash