Low-code solution with Azure Logic Apps and Power BI

I’ve been recently working on a small project to support a new business process. Time to market was critical for the customer, to be able to capture an emerging business opportunity. The budget was also tight, to avoid over-investing before the business case was validated.

There was a strong preference from the customer to do the whole data management via Excel to “keep it simple”. Not a surprising preference when you talk to salespeople or, as in this case, the CEO 🙂 There was also a need to enrich data with information from 3rd party systems and provide a number of reports.

High level architecture of this small system looks like this:

The goal was not to avoid coding entirely, but to take a low-code approach to save time.

The biggest saving was avoiding custom UI development completely, while still keeping the solution highly interactive from the users’ perspective. Please find below a description of how it was achieved.

Online sign-up form

For the online sign-up form, https://webflow.com/ was used. This tool allows you to create websites without writing any code. The only piece of JavaScript that had to be written was a small AJAX request passing the form data to a custom API.

“CRM” via OneDrive and Excel

All the accounts were managed via Excel files, one file per partner company. That kind of approach has many benefits out of the box. Let’s mention a few:

  • Intuitive and flexible data management via Excel
  • Access management and sharing capabilities provided by OneDrive
  • Online collaboration and change tracking built-in

Azure Logic Apps – the glue

The core business logic was developed as a custom service implemented in .NET Core and C#. This service also had its own database. Data edited in Excel files needed to be synced with the database in various ways:

  • changes made via Excel files needed to be reflected in the central database
  • when data was modified by the business logic (for example, a status change or data generated as a result of the business flow), the changes needed to be reflected back in Excel to keep a consistent view
  • when a new account was registered in the system, a new Excel file to manage it was automatically created in OneDrive

All of those use cases were implemented via Azure Logic Apps. A Logic App is composed of ready-to-use building blocks. Here’s an execution log of an example Logic App:

In this case, any time an Excel file is modified in OneDrive, a request is made to a custom API to upload the file and process the updates. Before the request, an access token is obtained. The processed file is saved for audit, and in case of an error an email alert is sent.

Under the hood, a Logic App is defined as a JSON file, so its definition can be stored in the code repository and deployed to Azure via ARM.

Power BI to provide data insights

Reporting was the ultimate goal of that project. The business needed to know about the performance of particular sales agents and internal employees for things like commission reporting and follow-up calls.

Compared to developing a custom reporting solution, Power BI makes it super easy to create a UI to browse, filter and export data. Once the connection with the database is established, a data model can be defined to create interesting visualisations with extensive filtering options. All those features are available for $9.99/month/user.

If you know SQL and relational data modelling, but are new to Power BI, I can recommend this tutorial to get up to speed with Power BI:

Summary

Thanks to low-code and no-code tools like Azure Logic Apps, Power BI and Webflow, it was possible to deliver an end-to-end solution that users could interact with, without any custom code to build the UI. If that project had also included UI and related backend development to support it, it would have taken several times longer to provide similar capabilities. We could imagine a simpler UI built with less effort, but it would not come close to the rich capabilities provided by Power BI and Excel out of the box.

Happy low-coding! 🙂

.NET MAUI vs Xamarin.Forms

I’ve been focusing recently on Xamarin and also following the updates on MAUI. MAUI started as a fork of Xamarin.Forms, and this is how it should be seen – as the next version of Xamarin.Forms. There will be no version 6 of Forms (the current version is 5). Instead, Xamarin.Forms apps will have to be upgraded to MAUI – Multi-platform App UI. The alternative to upgrading is staying on Xamarin.Forms 5, which will be supported only for 12 months after the official MAUI release. So if we want to be on a supported tech stack, we need to start getting familiar with MAUI.

MAUI, and the whole Xamarin framework, will be part of .NET 6. The initial plan was to release MAUI in November 2021 together with the new .NET release. Now we know that the production-quality release has been postponed to Q2 2022. Instead of a production-ready release, we will keep getting preview versions of MAUI. It also means that Xamarin.Forms will be supported longer (Q2 2022 + 12 months).

OK, but what changes can we expect with MAUI? Below is a summary of the key differences compared to Xamarin.Forms 5.

1. Single project experience

In Xamarin.Forms we need separate project(s) for each platform plus project(s) for shared code. In MAUI we have the option to work with a single project that targets multiple platforms:

In the single project we can still have platform-specific directories (under the “Platforms” directory) or even platform-specific code in a single file by using preprocessor directives:

This is not something that MAUI introduced; it is achieved thanks to SDK-style projects, which are available in .NET 5. Already in .NET 5 we can multi-target projects and instruct MSBuild which files or directories should be target-specific.

Example of multi-targeting in an SDK-style csproj:
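A minimal sketch of such a csproj (the target framework monikers are illustrative; exact names changed between previews):

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>net6.0-android;net6.0-ios</TargetFrameworks>
  </PropertyGroup>
  <!-- Exclude Android-specific sources when not building the Android target -->
  <ItemGroup Condition="!$(TargetFramework.Contains('-android'))">
    <Compile Remove="Platforms/Android/**/*.cs" />
  </ItemGroup>
</Project>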

Example of conditional compilation based on target:
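A hedged sketch of what this can look like (the ANDROID and IOS compilation symbols are built in since .NET 6; with .NET 5 you would define equivalents per target via DefineConstants):

public static string GetPlatformName()
{
#if ANDROID
    return "Android";
#elif IOS
    return "iOS";
#else
    return "Unknown";
#endif
}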

So this is not MAUI magic; it is just using .NET 5 capabilities.

2. Central assets management

A consequence of the consistent single-project experience is the need to manage assets in a single project too. MAUI accomplishes that for PNG and SVG images by doing compile-time image resizing. We can still have platform-specific resources if needed, for example for other formats.

But again, it is not a revolutionary change. MAUI just integrates ResizetizerNT as part of the framework.

So this MAUI feature is another low-hanging fruit. It was possible to achieve this with Forms too, but now you do not have to add additional libraries.

3. New decoupled architecture

The new architecture seems to be where most of the MAUI team’s effort goes. This change is actually significant. It is a big refactoring of Forms driven by a new architecture initially called Slim Renderers.

Slim Renderers was a kind of temporary name, so let’s not get used to it. The term that we should remember and get familiar with is Handler. The role of Renderers from Xamarin.Forms is taken over by Handlers in MAUI.

What’s the main difference? We can summarise it with one word: decoupling. This is how it looks in Xamarin.Forms:

Renderers, which produce the native view tree, depend on Forms controls. In the above diagram you can see an example for the Entry control on iOS and Android, but the same idea applies to other controls (like Button, Label, Grid etc.) and other platforms (Windows, macOS).

MAUI introduces a new abstraction layer of control interfaces. Handlers depend only on the interfaces, not on the implementation of UI controls:

This approach decouples the rendering of platform-specific code, handled by Handlers, from the implementation of the UI framework. MAUI is split into the Microsoft.Maui.Core namespace and the Microsoft.Maui.Controls namespace:

What is important to notice is that support for XAML and the bindings implementation was also decoupled from handling platform-specific code. It makes the architecture much more open for creating alternative UI frameworks based on Maui.Core but using different paradigms. We can already see the experimental framework Comet using that approach and proposing the MVU pattern instead of MVVM:

There is also a rumour around the MAUI project that the Fabulous framework could follow that path, but interestingly the Fabulous team does not seem to share the enthusiasm 😉 It will be interesting to see how the idea of supporting F# in MAUI evolves.

But it is important to notice that MAUI does not have built-in MVU support. MAUI Controls are designed to support the MVVM pattern that we know from Xamarin.Forms. What is changing is the open architecture enabling alternative approaches, but the alternatives are not built into MAUI.

4. Mappers

OK, so there are no Renderers, there are Handlers. So how do we introduce custom rendering when needed? Since there are no custom renderers, can we still have custom rendering? Yes, but we need to get familiar with one more new term in MAUI: Mappers.

A Mapper is a public static dictionary-like data structure exposed by Handlers:

It maps each of the properties defined on the control’s interface to a platform-specific handler function that renders that property.

If we need custom behaviour we can simply map our own function from our application:
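For illustration, a hedged sketch based on the released MAUI API (mapper names shifted between previews, so details may differ):

// Append a custom mapping for every Entry in the app;
// "MyCustomization" is just an arbitrary key for this mapping.
Microsoft.Maui.Handlers.EntryHandler.Mapper.AppendToMapping("MyCustomization", (handler, view) =>
{
#if ANDROID
    // Platform-specific tweak: clear the default Android background.
    handler.PlatformView.SetBackgroundColor(Android.Graphics.Color.Transparent);
#endif
});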

See this repository created by Javier Suárez with examples and detailed explanations on how to migrate from Forms to MAUI: https://github.com/jsuarezruiz/xamarin-forms-to-net-maui

And do not worry about your existing custom renderers; they will still work thanks to the MAUI compatibility package. It is recommended, though, to migrate them to handlers to get the benefits of improved performance.

5. Performance improvements

The goal of the new architecture is not only to decouple layers, but also to improve performance. Handlers are more lightweight than renderers, as each property is handled by a separate mapper function instead of one big rendering function updating the whole component.

MAUI also avoids assembly scanning at startup to find custom renderers. Custom handlers for your custom controls are registered explicitly at app startup:
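A sketch using the MauiApp builder pattern; MyControl and MyControlHandler are hypothetical types standing in for your custom control and its handler:

// (MAUI templates enable implicit usings covering Microsoft.Maui.Hosting)
public static class MauiProgram
{
    public static MauiApp CreateMauiApp()
    {
        var builder = MauiApp.CreateBuilder();
        builder
            .UseMauiApp<App>()
            .ConfigureMauiHandlers(handlers =>
            {
                // Explicit registration replaces the assembly scanning known from Forms.
                handlers.AddHandler(typeof(MyControl), typeof(MyControlHandler));
            });
        return builder.Build();
    }
}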

One more performance improvement should be reduced view nesting. Forms has the concept of fast renderers; in MAUI all handlers should be “fast” by design.

But the MAUI release was not postponed without reason. The team is still working on performance improvements. Early benchmarks show that MAUI apps currently start even slower than Forms apps; see this issue for details: https://github.com/dotnet/maui/issues/822. In this case the observed difference is not dramatic (about 100 ms), but still, we should not take for granted that MAUI is already faster.

6. BlazorWebView

Do you like to use Blazor for web UIs? Great news: with MAUI you will be able to use Blazor components on all the platforms supported by MAUI (Android, iOS, Windows, macOS). Components will render locally into HTML. The HTML UI will run in a web view, but it will avoid WebAssembly and SignalR, so we can expect relatively good performance.

And what is most important, Blazor components will be able to access native APIs from code-behind! I think this is a great feature that opens scenarios for interesting hybrid architectures (combining native and web stacks in a single app).

See this video for details: Introduction to .NET MAUI Blazor | The Xamarin Show

7. C# Hot Reload

And last but not least, since MAUI will be part of .NET 6, it will also get all the other benefits coming with .NET 6. One of them is hot reload for C# code. In Forms we have hot reload only for XAML, so this is a great productivity improvement, especially for UI development.

Summary

MAUI introduces significant changes, but the framework can still be considered an evolution of Forms. Fingers crossed that it reaches Forms 5 stability and makes the framework even better thanks to the above improvements.

A note about complexity

I came across this very interesting reading about complexity: https://iveybusinessjournal.com/publication/coping-with-complexity/

The most useful advice I found there was the idea of improvisation. So far, in the context of professional work, improvisation had rather negative connotations for me. I saw it as a way of hiding a lack of preparation or knowledge that degraded the expected quality.

But I was wrong. Improvisation turns out to be a great tool that anyone can start using relatively easily to deal with complexity. The inspiration is taken from theatre improvisation and music jam sessions, where the play is based on the “yes, and…” rule.

Basically, the rule says that whoever enters the show has to build their part on what others have already said or played, not trying to negate it or start something irrelevant. Participants are expected to continue and extend the plot that was already created.

I find this rule very useful when working on complex projects. I can recall many situations in projects when there seemed to be so many options and so much uncertainty that it felt impossible to progress in the right direction. Those situations can be unblocked by improvisation, where we are allowed to progress based on limited knowledge. And in a VUCA world we all have a limited understanding of any non-trivial subject. The key is to identify the minimum set of facts required to progress, and to build on top of what was already created using your own expertise. The facts from which to start are identified by the skill of listening to others, not focusing solely on your own part.

The rule of not negating others’ work is the key factor here. You are allowed to suggest turns to the left or right, but it should still be progress on the same journey. We should not start from a completely new point on our own, as that creates even more VUCA.

By using this method we can progress even when we are not sure where to go (like in machine learning). We can use our joint force to explore and move faster. As we move on, we will make mistakes but also create chances for victories. Moving on is the key. Staying in place, paralyzed by a hard decision, is something that may kill a project. And negating or ignoring what was already said and done does not create progress.

In a VUCA world, being certain that we are on the optimal path is impossible. What is possible is exploration. If we are focused and make every small step based on competent knowledge, then we can expect partial results to be achieved on a daily basis, and eventually bigger goals are very likely to be met. Probably in a way that was not initially expected.

What is software architecture about?

Martin Fowler has assembled great material explaining what software architecture is: https://martinfowler.com/architecture/

My key takeaway from this reading is that software architecture is about making sure that adding functionality to software does not become more and more expensive as time goes by.

In other words, the development effort done today should also support future innovation. We often hear the business motto to focus on strengths and build on top of them. One of the strengths an organization has may be its software. So, to follow the motto in software teams, it should be much easier to build new products and services on top of the existing codebase and infrastructure than to start from scratch. The existing codebase should be a competitive advantage, not a burden.

But is it always like that? Organizations in some cases come to the conclusion that it makes more sense to start a product from scratch, or to rewrite existing software to support new functionality. Does it mean that the old system has the wrong software architecture?

Yes and no. The old architecture could have been great and efficient in supporting the business goals of the past, but may not be suitable in the context of a new reality and new business goals. Sometimes a brave decision must be made to switch to a new architecture in order to stay on top. Otherwise new competitors, who do not have to carry the burden of historical decisions, may grow much faster in the new reality and eventually take over the market. Proper timing of such technological shifts may be crucial for organizations to perform well.

A changing environment does not have to mean a change of the business model or market; it may also mean the availability of new technologies or organizational changes.

Nevertheless, those kinds of radical architecture changes should happen as seldom as possible, as there is always a huge cost and complexity behind such shifts. Architecture should aim to predict likely scenarios and to stay open for change. There are always at least a few options supported by the current constraints. Options more open to change and extensibility should always be preferred if the cost is comparable.

Azure Service Bus with MassTransit

The MassTransit project is a well-known abstraction layer for .NET over the most popular message brokers, covering providers like RabbitMQ and Azure Service Bus. It will not only configure the underlying message broker via a friendly API, but will also address issues like error handling and concurrency. It also introduces a clean implementation of the saga pattern and a couple of other patterns useful in distributed systems.

This article is a brief introduction to MassTransit for Azure Service Bus users.

Sending a message

When sending a message we must specify a so-called “send endpoint” name. Under the hood, the send endpoint is an Azure Service Bus queue name. When the Send method is called, the queue is created automatically.

ISendEndpointProvider needs to be injected to call the Send method. See the producers docs for other ways to send a message.

The queue name can be specified by creating a mapping from message type to queue name: EndpointConvention.Map<IMyCommand>(new Uri("queue:my-command"));
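Putting it together, a minimal sketch of sending a message (IMyCommand, CommandSender and the queue name are illustrative):

using System;
using System.Threading.Tasks;
using MassTransit;

public interface IMyCommand
{
    string OrderId { get; }
}

public class CommandSender
{
    private readonly ISendEndpointProvider _sendEndpointProvider;

    public CommandSender(ISendEndpointProvider sendEndpointProvider)
        => _sendEndpointProvider = sendEndpointProvider;

    public async Task SendMyCommand(string orderId)
    {
        // Resolve the send endpoint (under the hood: the Azure Service Bus queue)...
        var endpoint = await _sendEndpointProvider.GetSendEndpoint(new Uri("queue:my-command"));
        // ...and send the message; an anonymous type implements the interface contract.
        await endpoint.Send<IMyCommand>(new { OrderId = orderId });
    }
}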

Publishing an event

When publishing an event we do not have to specify an endpoint name to send the message to. MassTransit by convention creates a topic corresponding to the published message’s full name (including namespace). So under the hood we have a concrete topic, just as we have a concrete queue when sending a message, but in the case of events we do not specify that topic explicitly. I find it a bit inconsistent, but I understand the idea – conceptually, on the abstraction level, publishing an event does not have a target receiver. It is up to subscribers to subscribe to it. The publisher just throws the event into the air.

To publish an event we simply call the Publish method on an injected IPublishEndpoint:

await _publishEndpoint.Publish<IMyEventDone>(new {
  MyProperty = "some value"
});

Event subscribers

It is important to understand how topics and subscriptions work in Azure Service Bus. Each subscription is a queue. When a topic does not have any subscriptions, events published to it are simply lost. This is by-design behaviour in the pub/sub pattern.

Consumers connect to subscriptions to process the messages. If there are no active consumers, messages will be left in the subscription queue until there is a consumer or until a timeout. A subscription is a persistent entity, but consumers are dynamic processes. There may be multiple competing consumers for a given subscription to scale out message processing. Usually different subscriptions are created by different services/sub-systems interested in given events.

cfg.SubscriptionEndpoint<IMyEventDone>("my-subscription", c => {
  c.ConfigureConsumer<Consumer>(context);
});
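For completeness, a sketch of what the Consumer referenced above could look like (the event contract matches the earlier publish example):

using System;
using System.Threading.Tasks;
using MassTransit;

public class Consumer : IConsumer<IMyEventDone>
{
    public Task Consume(ConsumeContext<IMyEventDone> context)
    {
        // MassTransit has already deserialized the message for us.
        Console.WriteLine($"Received: {context.Message.MyProperty}");
        return Task.CompletedTask;
    }
}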

It is worth mentioning that if we use MassTransit and subscribe to a subscription endpoint but do not register any consumers, then messages sent to this endpoint will be automatically moved to a _skipped queue created for the IMyEventDone type.

There is also an alternative way of creating subscriptions, which uses an additional queue to which messages from the subscription are auto-forwarded; see the docs for details.

Anonymous types for messages

The MassTransit author recommends using interfaces for message contracts. MassTransit comes with a useful Roslyn analyzers package which simplifies using anonymous types as interface implementations. After installing the analyzers (Install-Package MassTransit.Analyzers) we can automatically add missing properties with Alt+Enter:

Blazor WebAssembly large app size

There is a gotcha when creating a Blazor application from the template that includes a service worker (PWA support enabled). Notice that the service worker pre-fetches all the static assets (files like JavaScript, CSS, images). See this code on GitHub as an example of the code generated by the default dotnet template for WebAssembly with PWA.

If you are using a bootstrap like https://adminlte.io/ then the web assets folder will include all the plugins that the bootstrap comes with. You may use just 1% of the bootstrap functionality, but the default service worker will pre-fetch all the unused assets.

That can easily make the initial size of the app loaded in the browser around 40 MB (in release mode). Do not assume that all those files are necessary .NET framework libraries loaded as WebAssembly. Looking into the network tab, you’ll notice that most of the resources are js/css files that you have probably never used in your app.

A normal WebAssembly application created with Blazor, even on top of a bootstrap or a components library like https://blazorise.com/ (or both), should be less than 2 MB of network resources (excluding images that you may have).

So, please watch out for the default PWA service worker. It can make the initial app loading time unnecessarily long. If you are not providing any offline functionality, the easiest solution is to remove the service worker from your project by removing it from the index.html file. Another option is to craft the include path patterns to cover only what is really used, as sketched below.
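For reference, the published service worker (service-worker.published.js in the default template) filters the asset manifest with regex patterns, roughly like this (a from-memory sketch of the template, so verify against your generated file; the adminlte exclude pattern is just an example):

// Narrow these patterns down to pre-fetch only the assets you actually use.
const offlineAssetsInclude = [ /\.dll$/, /\.wasm$/, /\.html$/, /\.js$/, /\.json$/, /\.css$/ ];
const offlineAssetsExclude = [ /^service-worker\.js$/, /adminlte\/plugins\//i ];

const assetsRequests = self.assetsManifest.assets
    .filter(asset => offlineAssetsInclude.some(pattern => pattern.test(asset.url)))
    .filter(asset => !offlineAssetsExclude.some(pattern => pattern.test(asset.url)))
    .map(asset => new Request(asset.url, { integrity: asset.hash }));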

How to save money in software projects?

Building software is expensive. Software developers and other necessary professionals have high salaries. Coding is time consuming. We all know that.

So how can we save money? By hiring cheaper developers? By hiring developers who code faster? By working longer hours? Well, not really; those solutions, even if rational, only move costs from one place to another.

The biggest savings come from strictly controlling the scope of the project.

There is too much waste in the industry. Many teams build features that nobody uses. Developers solve abstract problems that do not really exist from a business domain perspective. Business asks for things that never convert to added value. Software should be an investment but is often only a cost.

How to prevent that? It’s hard, but here is my advice:

  • Think at least three times whether the idea converts to value before you start building. Ask for feedback. Gather data. Starting a project based solely on intuition is a risky business. Without solid data, it’s like gambling. Money can be lost in this game, so be aware of how much risk you can accept.
  • OK, so you have proven arguments that the idea is worth building. Can it be achieved more simply? Maybe there is already a tool that you can use? An affordable SaaS platform? Open-source software? Think three times about how to do it efficiently.
  • If the project is not a standard thing that you have done already – start small. Do the research. Build a proof of concept of the unknown parts. Define a solid technical foundation for the project; it will be harder to change later. Check what works and what doesn’t in your case.
  • Do not involve big development resources before you know what you expect from them. Prepare the project. Know your budget. Know your timeline. Check with technical people whether the assumptions are realistic. Make at least a rough estimate. Think about the risks that may impact the estimate. Are the risks acceptable?
  • Verify the results early and often. There should be something usable produced as soon as possible. Start from the core business. Do not build “nice to have” things when the core functionality is not ready. Nice-to-have things may kill the project. Be strict about the scope and constantly challenge it. Is this story or sub-task worth doing? Asking that question is not a signal of laziness; it is a signal of caring about the budget, the timeline and the impact.

Based on my experience, especially from start-ups and innovative projects, those are the rules that are often forgotten. We tend to rush, but as a result, instead of being quicker we may be slower because we lose the right track. We want to be agile but we forget about planning and create too much chaos instead of agility. We say that we care about the quality of the product, so we build a whole plastic toolbox instead of just one precise tool at a time. We let engineers own the estimates but we do not communicate the constraints and do not control the scope.

Those are the major sins that I try to address with the above rules. I hope you will find them useful in your projects too.

Hangfire.io and .NET Expressions

I was troubleshooting an interesting bug recently, thanks to which I’ve learned a bit more about Hangfire.io and expressions in .NET.

The situation was that the Hangfire dashboard looked correct and we had all the jobs registered as expected. But what the scheduler actually executed for each job was the same logic – the logic that was supposed to be executed only for the last job. A nasty bug. We were not yet on production with Hangfire.io, but it was still quite an unexpected behavior to see.

The reason was that we were wrapping each job in a class called JobRunner. This class was adding some generic functionality to update UI progress bars when jobs are running. Our code looked like this:

JobRunner runner = new JobRunner(myJobClass);
RecurringJob.AddOrUpdate(myJobClass.JobId, () => runner.Execute(), myJobClass.CronExpression);

The crucial thing to understand about Hangfire is that what we pass to the AddOrUpdate method is not a function to execute but an Expression describing the function to be executed. See this thread for the difference between Expression<> and Func<>.

The runner instance is not kept in memory or serialized. When Hangfire executes the job, it creates the instance by calling the constructor of the given type. Constructor arguments are resolved from the IoC container. In our case the constructor argument was of type IJob. This interface was providing properties like JobId or CronExpression. So, every time ANY job ran, the first implementation of IJob found in the container was injected into the JobRunner. The same implementation of IJob was injected for every job. And here we are – all jobs magically execute the same logic…

Now it seems quite obvious, but it was necessary to learn a couple of rules along the way to understand that behavior. It seems to be a common misunderstanding, as there is even a comment about people making that mistake in the Hangfire.io source code; see Job.cs.
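A hedged sketch of one way to avoid the trap (not necessarily the fix used in the project): make the runner generic, so the concrete job type is captured in the expression and the right dependency is resolved when the job fires. IJob.Run() is an assumed method here:

using Hangfire;

public class JobRunner<TJob> where TJob : IJob
{
    private readonly TJob _job;

    public JobRunner(TJob job) => _job = job;

    public void Execute()
    {
        // update UI progress bars etc., then run the actual job
        _job.Run(); // Run() is an assumed member of IJob
    }
}

// The expression now captures the concrete runner type instead of a shared instance:
RecurringJob.AddOrUpdate<JobRunner<MyJobClass>>(
    myJobClass.JobId,
    runner => runner.Execute(),
    myJobClass.CronExpression);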

I hope this case study will help someone to avoid similar traps.

C# snippets in Azure APIM policies

It was an interesting finding this week. I was not aware that it is possible to use multi-line C# code snippets inside Azure API Management policies.

For example, if you’d like to calculate an expression (e.g. a backend service query parameter) based on a request header value, you could use a snippet similar to this one:

@{ 
  /* here you can place multi-line code */ 
  var dict = new Dictionary<string, string>() {
    {"a", "1"}, 
    {"b", "2"}
  };
  var val = context.Request.Headers.GetValueOrDefault("test", "a");
  return dict.ContainsKey(val) ? dict[val] : "1";
}

Details about the expression syntax can be found here: https://docs.microsoft.com/en-us/azure/api-management/api-management-policy-expressions

One gotcha was with moving that kind of policy definition to Terraform. It is necessary to replace the characters “<” and “>” with the entities &#60; and &#62; respectively. Otherwise Terraform could not apply the changes, although the definition worked when entered directly in the Azure portal.

It is worth noting that you could achieve the same with control flow policies, but this example is only an illustration; you can have more complex snippets, e.g. for request verification or composing/manipulating the response body.

Lambda architecture λ

I’ve been doing some research recently on architectures for large-scale data analysis systems. An idea that appears quite often when discussing this problem is the lambda architecture.

Data aggregation

The idea is quite simple. The intuitive approach to analytics is to gather data as it comes and then aggregate it for better performance of analytic queries. E.g. when users run reports by date range, pre-aggregate all sales/usage numbers per day and then produce the result for a given date range by summing the aggregates for each day in the range. If you have, let’s say, 10k transactions per day, that approach will create only one record per day. Of course, in reality you’d probably need many aggregates for different dimensions to enable data filtering, but you will still probably have far fewer aggregate rows than raw rows.
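To make the example concrete, a minimal C# sketch of the batch aggregation and the range query (types and names are illustrative):

using System;
using System.Collections.Generic;
using System.Linq;

record Sale(DateTime Timestamp, decimal Amount);

static class BatchLayer
{
    // Batch layer: pre-aggregate raw transactions into one record per day.
    public static Dictionary<DateTime, decimal> AggregatePerDay(IEnumerable<Sale> sales) =>
        sales.GroupBy(s => s.Timestamp.Date)
             .ToDictionary(g => g.Key, g => g.Sum(s => s.Amount));

    // Serving layer query: answer a date-range report from daily aggregates only.
    public static decimal TotalForRange(Dictionary<DateTime, decimal> daily, DateTime from, DateTime to) =>
        daily.Where(kv => kv.Key >= from.Date && kv.Key <= to.Date)
             .Sum(kv => kv.Value);
}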

Aggregation is not the only way to increase query performance. It could be any kind of pre-computing: batch processing, indexing, caching etc. This layer in the lambda architecture is called the “serving layer”, as its goal is to serve analytical queries as a fast source of information.

Speed layer

This approach has a significant downside – aggregated results are available only after a delay. In the example above, the aggregated data required to perform analytical queries will be available on the next day. The lambda architecture mitigates that problem by introducing a so-called speed layer. In our example that would be the layer keeping data for the current day. The amount of data for a single day is relatively small and probably does not require aggregates to be queried, or it can fit into a relatively small number of fast and more expensive machines (e.g. using in-memory computing).

Queries

Analytical queries combine results from two sources: the aggregates and the speed layer. The speed layer can also be used to create the aggregates for the next day. Once data is aggregated, it can be removed from the speed layer to free the resources.
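Continuing the earlier sketch, a query could merge both sources like this (again illustrative, not a reference implementation):

// dailyAggregates: serving layer; todaysSales: speed layer (raw events for today).
static decimal Total(Dictionary<DateTime, decimal> dailyAggregates,
                     IEnumerable<Sale> todaysSales,
                     DateTime from, DateTime to)
{
    var batchPart = BatchLayer.TotalForRange(dailyAggregates, from, to);
    var speedPart = todaysSales
        .Where(s => s.Timestamp.Date >= from.Date && s.Timestamp.Date <= to.Date)
        .Sum(s => s.Amount);
    return batchPart + speedPart;
}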

Master data

Let’s not forget that besides the speed layer and the aggregates, there is also the so-called master data that contains all raw, non-aggregated records. In the lambda architecture this dataset should be append-only.

Technology

This architecture is technology-agnostic. For example, you can build all the layers on top of SQL servers. But typically a distributed file system like HDFS would be used for the master data. The MapReduce pattern would be used for batch processing the master data. Technologies like Apache HBase or ElephantDB would be used to query the serving layer. And Apache Storm would be used for the speed layer. Those choices are quite common in the industry, but the technology stack can vary a lot from project to project or company to company.

Sources

http://lambda-architecture.net/
https://amplitude.com/blog/2015/08/25/scaling-analytics-at-amplitude
https://databricks.com/glossary/lambda-architecture