
Low code vs high code in data engineering

Explore the pros and cons of low code and high code approaches in modern data engineering. Learn where each fits best within the medallion architecture.

In the world of modern data engineering, few debates stir more discussion than low code versus high code approaches. While the rise of visual ETL tooling has brought powerful abstraction and accessibility, it's important to assess where these tools add value and where they may inadvertently add friction.

In this article, I offer some thoughts to keep in mind when choosing between high code and low code approaches, underpinned by the context of developing a classic medallion architecture data platform.

Bronze layer: Metadata-driven and repeatable

Using a medallion architecture as a guiding principle, we can see areas where low code and high code approaches fit different niches. Bronze layer ingestion is typically a mechanical operation: loading source system extracts into raw tables with minimal transformation. These loads are best driven by metadata: control tables that pass configuration into templated logic. The result is a repeatable, auditable, and consistent ingestion pattern.
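
As a minimal sketch of what a metadata-driven load might look like in PySpark (the control table name, its columns, and the target naming convention are illustrative assumptions, not a prescribed standard):

```python
# Minimal sketch of metadata-driven bronze ingestion (PySpark).
# The control table columns (source_path, file_format, target_table)
# are illustrative assumptions; real platforms often carry more
# configuration, such as load frequency, watermarks, or schema hints.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the control table that drives every bronze load.
control_rows = spark.table("meta.bronze_control").collect()

for row in control_rows:
    # One templated load, parameterised entirely by metadata:
    # read the raw extract and append it to the matching bronze table.
    (
        spark.read.format(row["file_format"])
        .load(row["source_path"])
        .write.mode("append")
        .saveAsTable(f"bronze.{row['target_table']}")
    )
```

Adding a new source then becomes a row in the control table rather than a new pipeline artefact, which is precisely why a visual canvas adds little at this layer.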

In this context, the benefits of a visual low code interface are limited. When each target is a 1:1 mapping of a source entity, there's little value in dragging shapes onto a canvas to represent what can be defined once in code and reused. Simplicity and predictability are key, and an extra layer of visual abstraction offers little here.

Silver and gold layers: Where visualisation helps

By contrast, silver and gold layer pipelines often introduce more complex logic. Here we see transformations that include joins, filters, aggregations, window functions and more. These layers vary significantly between datasets and domains, and the ability to visualise the data flow can make the logic more transparent for a wider range of users.
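
To give a flavour of the logic these layers contain, here is a hedged sketch of a silver-layer transformation; the table and column names are invented for the example:

```python
# Illustrative silver-layer transformation (PySpark); table and column
# names are assumptions made for the sake of the example.
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.table("bronze.orders")
customers = spark.table("bronze.customers")

# Window function: keep only the latest version of each order record.
latest = Window.partitionBy("order_id").orderBy(F.desc("updated_at"))
deduped = (
    orders.withColumn("rn", F.row_number().over(latest))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

# Join, filter, and aggregate into a customer-level summary.
summary = (
    deduped.filter(F.col("status") == "COMPLETE")
    .join(customers, "customer_id")
    .groupBy("customer_id", "region")
    .agg(
        F.sum("order_total").alias("lifetime_value"),
        F.count("*").alias("order_count"),
    )
)

summary.write.mode("overwrite").saveAsTable("silver.customer_order_summary")
```

Even this modest example chains deduplication, filtering, a join, and an aggregation; it is exactly this kind of branching flow that a visual representation can make easier to reason about for non-engineers.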

In these stages, low code tools offer meaningful benefits, particularly where pipeline logic needs to be explained to analysts, business users, or newer team members. Visual clarity supports collaboration, and design patterns can be reused while still allowing enough flexibility to tailor logic as needed.

Not all logic belongs in a visual interface

That said, not every part of a pipeline should be forced into a low code user interface (UI). Some functionality, such as custom logging, row-level auditing, or integration with external systems, requires low-level control over execution. Wrapping these capabilities in a visual interface can result in workarounds that complicate the design rather than simplifying it.
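
For instance, a row-level auditing step might look something like the sketch below. The helper name, the audit table, and the columns captured are assumptions for illustration; real implementations vary widely:

```python
# Hypothetical row-level audit helper (PySpark); the audit table name
# and the captured columns are illustrative assumptions.
import uuid
from datetime import datetime, timezone

from pyspark.sql import DataFrame, functions as F


def write_with_audit(df: DataFrame, target_table: str) -> None:
    """Append a batch to its target table and record an audit row."""
    batch_id = str(uuid.uuid4())

    # Stamp each row with the batch identifier and load timestamp.
    stamped = df.withColumn("batch_id", F.lit(batch_id)).withColumn(
        "loaded_at", F.lit(datetime.now(timezone.utc).isoformat())
    )
    stamped.write.mode("append").saveAsTable(target_table)

    # Record batch-level metadata for auditing and reconciliation
    # (the row count is recomputed here for simplicity).
    spark = df.sparkSession
    audit = spark.createDataFrame(
        [(batch_id, target_table, stamped.count())],
        ["batch_id", "target_table", "row_count"],
    )
    audit.write.mode("append").saveAsTable("meta.load_audit")
```

Logic like this is a few lines of code, but reproducing it inside a visual tool often means contorting the design around the tool's assumptions.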

Unless there's a clear and justifiable reason to implement these elements in a low code tool, it's often more pragmatic to write them directly in code. The goal should be to use low code where it adds value, not to achieve blanket coverage at the cost of complexity or maintainability. Ask the question, “What value am I adding by pushing this logic to a low code tool?” If the answer is “none”, then maybe it’s time to reevaluate. 

Version control and team workflows

High code also has clear advantages when it comes to version control. Merge conflicts in hand-written code are usually clear and straightforward to resolve. The differences are human-readable, and engineers can quickly identify what’s changed.

Low code platforms, on the other hand, generate machine-written logic behind the scenes. Even small UI changes can result in large, noisy diffs that clutter up Git history and make branch merges more painful than they need to be. This can add unnecessary friction to collaborative development workflows.

Team proficiency and the learning curve

Team capability also plays a major role in tool choice. Low code platforms can be invaluable when onboarding less technical users, i.e. those who are more comfortable with visual workflows than raw code. Visual user interfaces can shorten the learning curve and enable participation from a broader range of contributors.

However, if you’re working with a team of highly proficient data engineers, enforcing a low code approach across all pipelines may limit flexibility and slow things down. Proficient engineers often prefer the precision and efficiency of working directly in code, and many low code tools don’t offer the same level of control. It’s important to keep flexibility in mind when making the choice. 

Visualising high code

It’s worth noting that some high code logic can still be surfaced through visual tools when needed. For example, teams may wrap parameterised code templates in a simple UI to allow business users to interact with a subset of logic without touching the underlying implementation.
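
A hedged illustration of that hybrid model: a parameterised template function whose small configuration surface could be exposed through a simple form, while the implementation stays in code. The function name and parameters here are invented for the example:

```python
# Sketch of a parameterised high code template that a simple UI could
# drive; the function name and parameters are illustrative assumptions.
from pyspark.sql import DataFrame, functions as F


def filtered_aggregate(
    df: DataFrame,
    group_by: list[str],
    measure: str,
    min_date: str,
) -> DataFrame:
    """Reusable template: the UI only chooses the parameters."""
    return (
        df.filter(F.col("event_date") >= min_date)
        .groupBy(*group_by)
        .agg(F.sum(measure).alias(f"total_{measure}"))
    )


# A form submitting {"group_by": ["region"], "measure": "revenue",
# "min_date": "2024-01-01"} maps directly onto this call, without the
# business user ever touching the underlying implementation.
```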

While this hybrid model can work well, it does introduce another layer of abstraction. It’s important to consider whether this added complexity is justified by the value it brings, or if it simply becomes another interface the engineering team has to support.

Consistency vs pragmatism

There’s a broader question around team standards: should everyone use the same tool for every job, or is it better to adapt based on the problem at hand? Consistency has benefits: simplified onboarding, shared design patterns and easier support. But rigid standardisation can also reduce agility.

A sensible compromise is to agree on shared conventions about where low code adds value (e.g. silver and gold transformation pipelines) and where high code is a better fit (e.g. bronze ingestion, logging, deployment logic). The goal is to maintain consistency without forcing tools into places they don’t belong.

Deployment and operational considerations

Tooling choice shouldn’t be based solely on developer experience. Any adoption of low code tooling needs to consider how easy it is to package, deploy, and execute the code it generates. How well does it integrate into your deployment pipelines? Can it be tested, promoted, and monitored reliably?

Visual clarity during development is only part of the picture. If operational concerns are not well understood up front, you risk creating pipelines that are easy to build, but hard to maintain or scale in production.

Reversible decisions: What if the tool goes away?

Jeff Bezos describes reversible decisions as “two-way doors”, meaning decisions that can be reversed (as opposed to one-way doors, where you can’t go back!). This is an important point to keep in mind when considering low code tooling. If the code generated is difficult or impossible to understand and refactor without the UI itself being present, then the choice to adopt the tool in the first place was not a reversible decision. 

One of the few things that is certain in modern data engineering is that things change. So make sure you’re aware of your exit strategy for every tool in the ecosystem. If a low code UI is generating code that your team can’t support without the tool being part of the stack, you’re locked into that tool and its vendor.

Use the right tool for the right job

Low code and high code are not opposing philosophies. Rather, they are different approaches with different strengths. Low code shines when you need to visualise complex logic and bring less technical users into the fold. High code offers control, efficiency, and simplicity where precision and repeatability matter.

The best data engineering teams know how to blend both approaches to play to their respective strengths in the modern data architecture. The goal isn’t ideological purity, but delivering robust, maintainable, and understandable pipelines that serve your data needs today and scale into the future. 


Chris Austin

Analytics Architect
