Low-code solution with Azure Logic Apps and Power BI

I’ve recently been working on a small project to support a new business process. Time to market was critical for the customer, to be able to capture an emerging business opportunity. The budget was also tight, to avoid over-investing before the business case was validated.

There was a strong preference from the customer to do the whole data management via Excel to “keep it simple”. Not a surprising preference when you talk to salespeople or, as in this case, the CEO 🙂 There was also a need to enrich the data with information from third-party systems and provide a number of reports.

The high-level architecture of this small system looks like this:

The goal was not to avoid coding entirely when building the solution, but to take a low-code approach to save time.

The biggest saving was avoiding custom UI development completely while still keeping the solution highly interactive from the users’ perspective. Below is a description of how that was achieved.

Online sign-up form

For the online sign-up form, https://webflow.com/ was used. This tool allows you to create websites without writing any code. The only piece of JavaScript that had to be written made an AJAX request to a custom API to pass the form data.

“CRM” via OneDrive and Excel

All the accounts were managed via Excel files, one file per partner company. That kind of approach has many benefits out of the box. To mention a few:

  • Intuitive and flexible data management via Excel
  • Access management and sharing capabilities provided by OneDrive
  • Online collaboration and change tracking built-in

Azure Logic Apps – the glue

The core business logic was developed as a custom service implemented in .NET Core and C#. This service also had its own database. Data edited in the Excel files needed to be synced with the database in various ways:

  • changes made via Excel files needed to be reflected in the central database
  • when data was modified by the business logic (for example, a status change or data generated as a result of the business flow), the changes needed to be reflected back in Excel to keep a consistent view
  • when a new account was registered in the system, a new Excel file to manage it was automatically created in OneDrive

All of those use cases were implemented via Azure Logic Apps. A Logic App is composed of ready-to-use building blocks. Here’s a single execution log of an example Logic App:

In this case, any time an Excel file is modified in OneDrive, a request is made to the custom API to upload the file and process the updates. Before the request, an access token is obtained. The processed file is saved for audit, and in case of an error an email alert is sent.

Under the hood a Logic App is defined as a JSON file, so its definition can be stored in the code repository and deployed to Azure via ARM.

Power BI to provide data insights

Reporting was the ultimate goal of the project. The business needed to know about the performance of particular sales agents and internal employees for things like commission reporting and follow-up calls.

Compared to developing a custom reporting solution, Power BI makes it super easy to create a UI to browse, filter and export data. Once the connection with the database is established, a data model can be defined to create interesting visualisations with extensive filtering options. All these features are available for $9.99/month/user.

If you know SQL and relational data modelling but are new to Power BI, I can recommend this tutorial to get up to speed with Power BI:


Thanks to low-code and no-code tools like Azure Logic Apps, Power BI and Webflow, it was possible to deliver an end-to-end solution that users could interact with, without any custom code to build a UI. If the project had also included UI development and the related backend to support it, it would have taken several times longer to provide similar capabilities. We could imagine a simple UI built with less effort, but it would not come close to the rich capabilities provided by Power BI and Excel out of the box.

Happy low-coding! 🙂

.NET MAUI vs Xamarin.Forms

I’ve been focusing on Xamarin recently and also following the updates on MAUI. MAUI started as a fork of Xamarin.Forms, and this is how it should be seen: as the next version of Xamarin.Forms. There will be no version 6 of Forms (the current version is 5). Instead, Xamarin.Forms apps will have to be upgraded to MAUI (Multi-platform App UI). The alternative to upgrading is staying on Xamarin.Forms 5, which will be supported for only 12 months after the official MAUI release. So if we want to stay on a supported tech stack, we need to start getting familiar with MAUI.

MAUI, and the whole Xamarin framework, will be part of .NET 6. The initial plan was to release MAUI in November 2021 together with the new .NET release. Now we know that the production-quality release has been postponed to Q2 2022. Until then, we will keep getting preview versions of MAUI. It also means that Xamarin.Forms will be supported longer (Q2 2022 + 12 months).

OK, but what changes can we expect with MAUI? Below is a summary of the key differences compared to Xamarin.Forms 5.

1. Single project experience

In Xamarin.Forms we need separate project(s) for each platform plus project(s) for the shared code. In MAUI we have the option to work with a single project that targets multiple platforms:

In a single project we can still have platform-specific directories (under the “Platforms” directory) or even platform-specific code in a single file by using preprocessor directives:

This is not something that MAUI introduced; it is achieved thanks to SDK-style projects, which are available in .NET 5. Already in .NET 5 we can multi-target projects and instruct MSBuild which files or directories should be target-specific.

Example of multi-targeting in an SDK-style csproj:
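A minimal sketch of what that can look like (the platform TFM names below follow the .NET 6 previews and may differ in your version):

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- One project, compiled once per target framework -->
    <TargetFrameworks>net6.0-android;net6.0-ios</TargetFrameworks>
  </PropertyGroup>
  <ItemGroup>
    <!-- Platform directories are compiled only for the matching target -->
    <Compile Remove="Platforms\**\*.cs" />
    <Compile Include="Platforms\Android\**\*.cs" Condition="$(TargetFramework.Contains('-android'))" />
    <Compile Include="Platforms\iOS\**\*.cs" Condition="$(TargetFramework.Contains('-ios'))" />
  </ItemGroup>
</Project>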

Example of conditional compilation based on target:
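A sketch, assuming the ANDROID/IOS compilation symbols defined by the platform targets:

public static class PlatformInfo
{
    public static string Describe()
    {
#if ANDROID
        return "Running on Android";
#elif IOS
        return "Running on iOS";
#else
        return "Running on an unknown platform";
#endif
    }
}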

So this is not MAUI magic; it is just about using .NET 5 capabilities.

2. Central assets management

A consequence of the consistent single-project experience is the need to manage assets in a single project as well. MAUI accomplishes that for PNG and SVG images by doing compile-time image resizing. We can still have platform-specific resources if needed, for example for other formats.

But again, it is not a revolutionary change. MAUI just ships ResizetizerNT as an integrated part of the framework.

So this MAUI feature is another low-hanging fruit. It was possible to achieve this with Forms too, but now you do not have to add extra libraries.

3. New decoupled architecture

The new architecture seems to be the change where most of the MAUI team’s effort goes, and it is actually significant: a big refactoring of Forms driven by a new architecture initially called Slim Renderers.

Slim Renderers was a kind of temporary name, so let’s not get used to it. The term we should remember and get familiar with is Handler. The role of Renderers from Xamarin.Forms is taken by Handlers in MAUI.

What’s the main difference? We can summarise it in one word: decoupling. This is how it looks in Xamarin.Forms:

Renderers, which produce the native view tree, depend on Forms controls. In the diagram above you can see the example of the Entry control on iOS and Android, but the same idea applies to other controls (like Button, Label, Grid etc.) and other platforms (Windows, macOS).

MAUI introduces a new abstraction layer of control interfaces. Handlers depend only on the interfaces, not on the implementation of the UI controls:

This approach decouples the responsibility for rendering platform-specific code, handled by handlers, from the implementation of the UI framework. MAUI is split into the Microsoft.Maui.Core namespace and the Microsoft.Maui.Controls namespace:

What is important to notice is that support for XAML and the bindings implementation was also decoupled from handling platform-specific code. It makes the architecture much more open for creating alternative UI frameworks based on Maui.Core but using different paradigms. We can already see the experimental framework Comet using that approach and proposing the MVU pattern instead of MVVM:

There is also a rumour around the MAUI project that the Fabulous framework could follow that path, but interestingly the Fabulous team does not seem to share the enthusiasm 😉 It will be interesting to see how the idea of supporting F# in MAUI evolves.

But it is important to note that MAUI does not have built-in MVU support. MAUI Controls are designed to support the MVVM pattern that we know from Xamarin.Forms. What is changing is the open architecture enabling alternative approaches, but the alternatives are not built into MAUI.

4. Mappers

OK, so there are no Renderers; there are Handlers. So how do we introduce custom rendering when needed? Since there are no custom renderers, can we still have custom rendering? Yes, but we need to get familiar with one more new term in MAUI: Mappers.

A Mapper is a public static dictionary-like data type exposed by Handlers:
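A simplified sketch of such a mapper for the Entry control (exact type and member names changed between MAUI previews):

public partial class EntryHandler
{
    // Each entry maps a cross-platform property name to a static
    // method that updates the native view for that property.
    public static PropertyMapper<IEntry, EntryHandler> Mapper =
        new PropertyMapper<IEntry, EntryHandler>(ViewHandler.ViewMapper)
        {
            [nameof(IEntry.Text)] = MapText,
            [nameof(IEntry.TextColor)] = MapTextColor,
        };

    static void MapText(EntryHandler handler, IEntry entry)
    {
        // Platform-specific: copy the virtual view's Text to the native control.
        handler.PlatformView.Text = entry.Text;
    }

    static void MapTextColor(EntryHandler handler, IEntry entry) { /* ... */ }
}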

It maps each of the properties defined on the control’s interface to a platform-specific handler function that renders that property.

If we need custom behaviour, we can just map our own function from our application:
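For example, a sketch that tweaks how the Text property is rendered for every Entry in the app (again, the names varied between previews):

// Somewhere in app startup - replace the default mapping for Text:
EntryHandler.Mapper[nameof(IEntry.Text)] = (handler, entry) =>
{
    // Custom rendering: always show the text upper-cased.
    handler.PlatformView.Text = entry.Text?.ToUpperInvariant();
};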

See this repository created by Javier Suárez with examples and detailed explanations on how to migrate from Forms to MAUI: https://github.com/jsuarezruiz/xamarin-forms-to-net-maui

And do not worry about your existing custom renderers; they will still work thanks to the MAUI compatibility package. It is recommended, though, to migrate them to handlers to get the benefits of improved performance.

5. Performance improvements

The goal of the new architecture is not only to decouple layers, but also to improve performance. Handlers are more lightweight than renderers, as each property is handled by a separate mapping function instead of one big rendering method that updates the whole component.

MAUI also avoids assembly scanning at startup to find custom renderers. Custom handlers for your custom controls are registered explicitly at app startup:
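A sketch of the explicit registration, based on the preview-era MauiProgram bootstrap (MyControl and MyControlHandler are made-up names):

public static class MauiProgram
{
    public static MauiApp CreateMauiApp()
    {
        var builder = MauiApp.CreateBuilder();
        builder
            .UseMauiApp<App>()
            .ConfigureMauiHandlers(handlers =>
            {
                // Explicit mapping: no assembly scanning needed at startup.
                handlers.AddHandler(typeof(MyControl), typeof(MyControlHandler));
            });
        return builder.Build();
    }
}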

One more performance improvement should be reduced view nesting. Forms has the concept of fast renderers; in MAUI all handlers should be “fast” by design.

But the MAUI release was not postponed without reason; the team is still working on performance improvements. First benchmarks show that MAUI apps currently start even slower than Forms apps, see this issue for details: https://github.com/dotnet/maui/issues/822. In this case the observed difference is not dramatic (about 100 ms), but still, we should not take for granted that MAUI is already faster.

6. BlazorWebView

Do you like using Blazor for web UIs? Great news: with MAUI you will be able to use Blazor components on all the platforms supported by MAUI (Android, iOS, Windows, macOS). Components will render locally into HTML. The HTML UI will run in a web view, but it avoids WebAssembly and SignalR, so we can expect relatively good performance.

And what is most important, Blazor components will be able to access native APIs from code-behind! I think this is a great feature that opens up scenarios for interesting hybrid architectures (combining native and web stacks in a single app).

See this video for details: Introduction to .NET MAUI Blazor | The Xamarin Show

7. C# Hot Reload

And last but not least: since MAUI will be part of .NET 6, it will also get all the other benefits coming with .NET 6. One of them is hot reload for C# code. In Forms we have hot reload only for XAML, so this is a great productivity improvement, especially for UI development.


MAUI introduces significant changes, but the framework can still be considered an evolution of Forms. Fingers crossed that it reaches Forms 5 stability and makes the framework even better thanks to the above improvements.

A note about complexity

I came across a very interesting read about complexity: https://iveybusinessjournal.com/publication/coping-with-complexity/

The most useful advice I found there was the idea of improvisation. So far, in the context of professional work, improvisation had rather negative connotations for me. I saw it as a way of hiding a lack of preparation or knowledge, something that degrades the expected quality.

But I was wrong. Improvisation turns out to be a great tool that anyone can start using relatively easily to deal with complexity. The inspiration is taken from theatre improvisation and music jam sessions, where the play is based on the “yes, and…” rule.

Basically, the rule says that whoever enters the show has to build their part on what others have already said or played, not trying to negate it or start something irrelevant. Participants are expected to continue and extend the plot that was already created.

I find this rule very useful when working on complex projects. I can recall many situations when there seemed to be so many options and so much uncertainty that it felt impossible to progress in the right direction. Those situations can be unblocked by improvisation, where we are allowed to make progress based on limited knowledge. And in a VUCA world we all have a limited understanding of any non-trivial subject. The key is to identify the minimum set of facts required to progress, and then build on top of what was already created using your own expertise. The facts from which to start are identified by the skill of listening to others, not focusing solely on your own part.

The rule of not negating others’ work is the key factor here. You are allowed to suggest turns to the left or right, but it should still be progress on the same journey. We should not start from a completely new point on our own, as that creates even more VUCA.

By using this method we can progress even when we are not sure where to go (like in machine learning). We can use our joint force to explore and move faster. As we move on, we will make mistakes but also create chances for victories. Moving on is the key. Staying in place, paralysed by a hard decision, is something that may kill a project. And negating or ignoring what was already said and done does not create progress.

In a VUCA world, being certain that we are on the optimal path is impossible. What is possible is exploration. If we are focused and make every small step based on competent knowledge, then we can expect partial results to be achieved on a daily basis, and eventually the bigger goals are very likely to be met as well. Probably in a way that was not initially expected.

What is software architecture about?

Martin Fowler has assembled great material explaining what software architecture is: https://martinfowler.com/architecture/

My key takeaway from this reading is that software architecture is about making sure that adding functionality to software does not become more and more expensive as time goes by.

In other words, the development effort done today should also support future innovation. In business we often hear the motto to focus on strengths and build on top of them. One of the strengths an organization has may be its software. So, to follow the motto in software teams, it should be much easier to build new products and services on top of the existing codebase and infrastructure than to start from scratch. The existing codebase should be a competitive advantage, not a burden.

But is it always like that? Organizations in some cases come to the conclusion that it makes more sense to start a product from scratch, or to rewrite existing software, to support new functionality. Does it mean that the old system had the wrong software architecture?

Yes and no. The old architecture could have been great and efficient at supporting the business goals of the past, yet unsuitable in the context of a new reality and new business goals. Sometimes a brave decision must be made to switch to a new architecture in order to stay on top. Otherwise new competitors, who do not have to carry the burden of historical decisions, may grow much faster in the new reality and eventually take over the market. Proper timing of such technological shifts may be crucial for organizations to perform well.

A changing environment does not have to mean a change of business model or market; it may also mean the availability of new technologies or organizational changes.

Nevertheless, such radical architecture changes should happen as seldom as possible, as there is always a huge cost and complexity behind such shifts. Architecture should aim to predict likely scenarios and to stay open to change. There are always at least a few options supported by the current constraints; the options more open to change and extensibility should always be preferred if the cost is comparable.

Hangfire.io and .NET Expressions

I was troubleshooting an interesting bug recently, thanks to which I’ve learned a bit more about Hangfire.io and expressions in .NET.

The situation was that the Hangfire dashboard looked correct: we had all the jobs registered as expected. But what the scheduler actually executed for each job was the same logic, which was supposed to be executed only for the last job. A nasty bug. We were not yet on production with Hangfire.io, but it was still quite unexpected behaviour to see.

The reason was that we were wrapping each job in a class called JobRunner. This class added some generic functionality to update UI progress bars while jobs are running. Our code looked like this:

JobRunner runner = new JobRunner(myJobClass);
RecurringJob.AddOrUpdate(myJobClass.JobId, () => runner.Execute(), myJobClass.CronExpression);

The crucial thing to understand about Hangfire is that what we pass to the AddOrUpdate method is not a function to execute but an Expression describing the function to be executed. See this thread for the difference between Expression<> and Func<>.
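A minimal illustration of the difference:

using System;
using System.Linq.Expressions;

Func<int, int> func = x => x + 1;              // a compiled delegate, ready to invoke
Expression<Func<int, int>> expr = x => x + 1;  // a data structure describing the lambda

Console.WriteLine(func(5));           // 6 - executes directly
Console.WriteLine(expr.Body);         // (x + 1) - just a syntax tree, nothing runs
Console.WriteLine(expr.Compile()(5)); // 6 - compiled on demand, then invoked

Hangfire walks such an expression tree to extract the type, the method and the arguments, and persists that description instead of the delegate.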

The runner instance is not kept in memory or serialized. When Hangfire executes the job, it creates the instance by calling the constructor of the given type. Constructor arguments are resolved from the IoC container. In our case the constructor argument was of type IJob, an interface providing properties like JobId or CronExpression. So what happened when EVERY job was running was that the first implementation of IJob found in the container was injected into the JobRunner. The same implementation of IJob was injected for every job. And here we are: all the jobs were magically executing the same logic…
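A sketch of what was effectively happening (IJob and its Run method are simplified here for illustration):

public class JobRunner
{
    private readonly IJob _job;

    // Hangfire only stored "JobRunner.Execute()". The 'runner' we created
    // was never serialized, so at execution time this constructor argument
    // was resolved from the IoC container - and every job received the
    // same (first registered) IJob implementation.
    public JobRunner(IJob job) => _job = job;

    public void Execute()
    {
        // ...update UI progress bars, then run the actual job logic
        _job.Run();
    }
}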

Now it seems quite obvious, but it was necessary to learn a couple of rules along the way to understand that behaviour. It seems to be a common misunderstanding, as there is even a comment about people making that mistake in the hangfire.io source code, see Job.cs.

I hope this case study will help someone avoid similar traps.

Lambda architecture λ

I’ve been doing some research recently on architectures for large-scale data analysis systems. An idea that appears quite often when discussing this problem is the lambda architecture.

Data aggregation

The idea is quite simple. The intuitive approach to analytics is to gather data as it comes and then aggregate it for better performance of analytical queries. E.g. when users run reports by date range, pre-aggregate all sales/usage numbers per day and then produce the result for a given date range by summing the aggregates for each day in the range. If you have, let’s say, 10k transactions per day, that approach will create only 1 record per day. Of course, in reality you would probably need many aggregates for different dimensions to enable data filtering, but you will still probably have far fewer dimensions than aggregated rows.
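A minimal sketch of the idea in C# (the Transaction shape is made up for illustration):

using System;
using System.Collections.Generic;
using System.Linq;

record Transaction(DateTime Timestamp, decimal Amount);

// Batch step: collapse raw transactions into one record per day.
static Dictionary<DateTime, decimal> Aggregate(IEnumerable<Transaction> raw) =>
    raw.GroupBy(t => t.Timestamp.Date)
       .ToDictionary(g => g.Key, g => g.Sum(t => t.Amount));

// Query step: a date-range report is just a sum over the daily aggregates.
static decimal Report(Dictionary<DateTime, decimal> daily, DateTime from, DateTime to) =>
    Enumerable.Range(0, (to.Date - from.Date).Days + 1)
              .Sum(i => daily.GetValueOrDefault(from.Date.AddDays(i)));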

Aggregation is not the only way to increase query performance. It could be any kind of pre-computing: batch processing, indexing, caching etc. This layer in the lambda architecture is called the “serving layer”, as its goal is to serve analytical queries as a fast source of information.

Speed layer

This approach has a significant downside: aggregated results are available only after a delay. In the example above, the aggregated data required to perform analytical queries will be available the next day. The lambda architecture mitigates that problem by introducing a so-called speed layer. In our example that would be the layer keeping data for the current day. The amount of data for a single day is relatively small: it probably does not require aggregates to be queried, or it can fit into a relatively small number of fast and more expensive machines (e.g. using in-memory computing).


Analytical queries combine results from 2 sources: the aggregates and the speed layer. The speed layer can also be used to create the aggregates for the next day. Once data is aggregated, it can be removed from the speed layer to free the resources.
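A sketch of how a query might merge the two layers (the layer interfaces are hypothetical):

interface IServingLayer { decimal SumDailyAggregates(DateTime from, DateTime to); }
interface ISpeedLayer { decimal SumRawEvents(DateTime day); }

class AnalyticsQuery
{
    private readonly IServingLayer _serving;
    private readonly ISpeedLayer _speed;

    public AnalyticsQuery(IServingLayer serving, ISpeedLayer speed) =>
        (_serving, _speed) = (serving, speed);

    // Pre-aggregated history up to yesterday + today's raw events.
    public decimal Total(DateTime from, DateTime today) =>
        _serving.SumDailyAggregates(from, today.AddDays(-1)) + _speed.SumRawEvents(today);
}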

Master data

Let’s not forget that besides the speed layer and the aggregates, there is also the so-called master data that contains all raw, non-aggregated records. In the lambda architecture this dataset should be append-only.


This architecture is technology-agnostic. For example, you could build all the layers on top of SQL servers. But typically a distributed file system like HDFS would be used for the master data, the MapReduce pattern would be used for batch-processing the master data, technologies like Apache HBase or ElephantDB would be used to query the serving layer, and Apache Storm would be used for the speed layer. Those choices are quite common in the industry, but the technology stack can vary a lot from project to project or company to company.




Learning from work experience vs self-studying

I’ve shared in one of my articles that it is estimated that only 10% of learning happens at formal training. The remaining 90% comes from everyday tasks and learning from coworkers. Formal training includes things like self-studying from online resources, which I would like to emphasise in this article. Since it’s only 10%, can it be treated with low priority?

Self-study – 10% of the time

I do not know how those numbers were calculated; I’ve seen them in one of the managers’ trainings. In my opinion those numbers can be quite accurate when we think about the time we are able to spend on learning. Most of our time we spend at work, and this is where we have the biggest opportunity to learn. Work builds real experience and practice. The knowledge becomes not just theoretical but also tested in real life. We are able to come up with our own use cases and examples, and to experience practical challenges.

Studying vs practicing vs teaching

When we think about the levels of knowledge, they may be illustrated as: student → practitioner → teacher. Self-study brings you only to the student level. Good courses include hands-on labs, so you can get some practice too. But training exercises are always simplified and cover only simple “happy paths”; they do not include production-level challenges. It’s like fighting a shadow versus fighting a real opponent in martial arts.

Self-study – the impact

But does it mean that self-study can be ignored, since it contributes only 10% to your learning? Absolutely not. It’s 10% in terms of time, but it can be much more in terms of impact. We do not always have the comfort of learning new things at work. Especially when you are an architect or, in general, a technical lead, your company expects that you are the one who teaches others, who knows the new trends and who is up to date with the latest technologies. You have to do a lot of self-study so that the whole company does not settle for old solutions.

Online resources for self-studying

I recently wrote about the DNA program, which is a great example of self-study material for software architects. Recently I’ve also signed up for the Google Cloud Platform Architecture training on Coursera. After doing the first module and getting this certificate, I can definitely recommend Coursera’s GCP trainings. The best thing is that the training includes a lot of hands-on labs with real GCP resources provisioned via the Qwiklabs platform.

I have to admit that the online training possibilities we have now are amazing. For a small price you can get access to resources that are often of much better quality than (unfortunately) some lectures at traditional universities.


So, go and self-study! Then use it at work and build a better world 🙂

Monorepo in Azure DevOps

When working with projects using a microservices architecture, I opt for the monorepo pattern: a single repository containing the source code of all the services. It facilitates knowledge sharing between teams and encourages a more unified programming style. Microservices give a lot of technological freedom, but this freedom should be used wisely; common standards across the organization are still important.

Working with a single repository makes all the programmers at least glance at the list of commits from other teams when doing a git pull. This channel of knowledge sharing can be very beneficial.

Despite having a single repository, we still need separate pipelines and policies for separate directories. Azure DevOps facilitates this by allowing directory path filters in crucial places:

  • path filter in build trigger settings
  • branch policies
  • required reviewers

This allows setting up clear ownership of different parts of the repository and applying different pipelines to different parts of the repo, while still keeping all the benefits of the monorepo pattern.
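For example, a build trigger scoped to a single service directory might look like this in the pipeline YAML (the paths are illustrative):

# azure-pipelines.yml of the "orders" service inside the monorepo
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - services/orders/*

The build then runs only when a commit touches files under services/orders.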

Kubernetes basics for Docker users

The aim of this article is to build a high-level mind map and an understanding of concepts like Kubernetes, Helm and cloud-native applications. The assumption is that you have already worked with Docker.


So you know Docker. A Docker container is like a virtual machine, but lighter. Why lighter? It does not have an operating system inside; it relies on the host operating system, and Docker adds only the application layer on top. You can run many Docker containers on a single server / virtual machine.

Docker Compose

You may have also heard about Docker Compose. For example, when your application consists of a .NET Core web server, a MySQL database and a Redis cache, you can define 3 separate containers for it. To run all of them in a virtual network, you define a docker-compose.yml file. Then all 3 can be run with a single docker-compose up command.
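A minimal sketch of such a file (image names and ports are illustrative):

# docker-compose.yml
version: "3.8"
services:
  web:
    build: .              # the .NET Core web server
    ports:
      - "8080:80"
    depends_on: [db, cache]
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
  cache:
    image: redis:6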

Scaling the application for production

Now let’s imagine you want to scale your application. You introduce a load balancer and 2 additional web server containers. You also add a RabbitMQ container and an instance of a background processing worker. There are also other requirements for the production environment:

  • containers need to be distributed across many servers
  • containers which do not need many resources can run together on the same server, to use the provisioned servers in a cost-effective way
  • when a container is not responding, it should be restarted
  • when connectivity with a container is lost, it should be replaced with a new instance
  • the number of containers should autoscale
  • the number of servers should autoscale
  • new containers added to this environment should be auto-discovered
  • it should be possible to mount and share storage volumes in a flexible way

Things are getting complicated. To meet all those requirements, we would have to write a lot of code to monitor and manage the infrastructure. This is called container orchestration. Or… we can use Kubernetes (k8s), which has all those features and more built in!

Kubernetes concepts


Pod

An abstraction of a single app. It can have one or more containers; if containers are tightly coupled, they may be placed in the same pod. All containers inside a pod share storage volumes. A pod is the unit of deployment and scalability. Each pod has an IP address assigned, so there is no need to worry about port conflicts.


Node

This is how k8s names the physical servers or virtual machines hosting the containers.


Cluster

The set of nodes available to k8s. An example cluster would be 4 nodes with 20 pods running on them, managed dynamically by k8s.


Namespace

All objects within a cluster can have a namespace. It allows creating many virtual clusters inside a single cluster. This is useful, for example, to model many independent staging environments in a single k8s environment.


Service

Since each pod has its own IP, and pods can be started and shut down at any time, it would not be easy for other pods to keep track of the constantly changing IPs. That’s why we have Services in k8s. A Service groups a set of pods by given labels. Pods may come and go, but as long as the label criteria match, all matching pods are automatically tracked. A Service has a logical name assigned, so other pods can use this name to communicate with the pods behind the service. The Service routes and load-balances the traffic dynamically to the relevant pods.
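A sketch of a Service selecting pods by label (all names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: orders-api
spec:
  selector:
    app: orders        # every pod labelled app=orders is tracked
  ports:
    - port: 80         # port exposed under the service name
      targetPort: 8080 # port the container actually listens on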

Services can also be used to point traffic to an endpoint outside the k8s cluster. In this case, instead of defining the service by providing a pod label selector, it is necessary to define the IP of the service backend.


Ingress

Ingress is used to expose services to the outside world via HTTP(S). It can also terminate SSL.


Deployment

A Deployment specifies a pod and the number of its replicas that should run. The deployment controller is responsible for rolling out updated pods (e.g. with an updated container image). It starts new pods, shuts down old pods and then keeps monitoring them to make sure that the desired number of replicas is running.
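A sketch of a Deployment keeping 3 replicas of a pod running (names and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: myregistry/orders-api:1.0.0
          ports:
            - containerPort: 8080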


StatefulSet

StatefulSets are used to manage containers which hold data. Containers that have data cannot simply be removed and replaced, as we cannot lose their data. Pods in StatefulSets have sticky identities and persistent storage assigned. The persistent storage is not deleted when the pod is deleted.

It is worth mentioning that in many scenarios managing persistence is simpler outside the Kubernetes cluster. Many cloud providers have SQL and NoSQL as-a-service offerings which usually take care of things like backups, availability and replication.

Monitoring containers

Each container can have a liveness and a readiness probe defined. Typically those are HTTP endpoints called by Kubernetes to check if the container is healthy. K8s calls the endpoints periodically, e.g. every 10 seconds, depending on the configuration. When the liveness probe fails, the container is restarted. When the readiness probe fails, traffic is no longer routed to that instance. Health check endpoints must be implemented in every service. A simple liveness endpoint could just return status code 200. A readiness endpoint could additionally check things like the database connection, cache readiness or the amount of currently used resources, to verify that the service is really ready to process new requests. When the readiness endpoint detects a problem that could be solved by restarting the container, it could potentially switch a variable forcing the liveness endpoint to fail, causing a restart.
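A minimal sketch of the two endpoints using ASP.NET Core minimal APIs (OrdersDbContext and the EF Core connectivity check are illustrative; check whatever your service really depends on):

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDbContext<OrdersDbContext>();
var app = builder.Build();

// Liveness: the process is up and can answer HTTP requests.
app.MapGet("/healthz/live", () => Results.Ok());

// Readiness: additionally verify dependencies, e.g. the database connection.
app.MapGet("/healthz/ready", async (OrdersDbContext db) =>
    await db.Database.CanConnectAsync() ? Results.Ok() : Results.StatusCode(503));

app.Run();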


Helm

An application targeting Kubernetes is configured via a set of YAML files for deployments, services etc. Those sets of YAML files can grow pretty complex, and we also need some versioning tool for them. A common approach is to package all the k8s files into a Helm package called a chart. What is actually deployed to the k8s cluster is a chart.

Cloud-native applications

Kubernetes is an operating system for cloud-native applications. Cloud-native applications are usually designed in a microservices architecture and aim to be cloud-agnostic: able to run in many public clouds or in a private/hybrid cloud. There are already a lot of predefined Helm charts available in public repositories if you’d like to pull a scalable k8s setup, e.g. for Cassandra or Prometheus, into your cloud-native setup.


DNA – week one

What is DNA?

DNA comes from the Polish name “Droga Nowoczesnego Architekta”, which means “The road of the modern architect”. The English abbreviation would be “TROMA”. This is one of very few examples where Polish is simpler than English 😉

This is a 19-week course created by 3 experienced architects who are also trainers. They are supported by the popular blogger Maciej Aniserowicz, the publisher of the course. He made the program well known thanks to the broad IT audience in Poland following his devstyle.pl blog and social media channels.

The course is dedicated to senior software developers and architects. The goal is to propagate modern patterns in software development. The presented theory is backed up by real-life cases, which is meant to be the “killer feature” of the course. In addition, there are practical exercises after each week. The idea is that participants get not only theory but also examples from real life and exercises to practice.

Visit droga.dev to learn more about DNA.

First impressions

I was following more or less what the DNA mentors published as “teasers” in the weeks before the full program became available. I was prepared to expect good content, and I was not disappointed. The content is solid, and the way of presenting it is fully professional.

Good content was not a surprise for me after buying access to the course. I believed those guys would do great stuff; I knew what I was buying. When you go to a good restaurant you expect good food, no excitement there. But there was one bonus I was not expecting to be so meaningful: the community.

Joining the DNA community on Slack was pure fresh excitement. This is a closed group for all course members. Joining it was like joining tens of new teams in different companies in one day. People share and discuss how they approach various topics in their projects. This is great: tons of information and different points of view! You can read about a useful tool, an online resource or an interesting approach to a specific problem. I expect that over time the value of the content generated by the community may even exceed the value of the course content itself. Of course it’s Slack, so there is no structure and it will be impossible to find anything after a couple of weeks 😉 But among all the possible distractions you can have turned on, this Slack community is the beneficial one and can really broaden your mind.

Eye opener #1

I like to search for analogies between the construction and software industries. In general there are a lot of analogies, but one of them seemed harmful to me: a software developer is a creative role, whereas a construction worker most often does a repetitive job based on a detailed design. DNA resolved that discrepancy: a software developer should be compared not to a construction worker but to a structural engineer.

Developers write code, which is a kind of design. In software, the role of the worker is taken by the compiler and the build pipeline. Once the code is written, we can start many instances of the running program almost automatically. How beautiful would it be if we had “compilers” for construction projects that could automatically execute design documents to create a real building?

A disclaimer to all construction workers reading this: if you don’t have specs from an architect/designer/engineer, then you are also a creative worker, creating something out of nothing 🙂

Looking forward to more eye-openers as I continue with the course 🙂