Hi everyone, I’m excited to share some news: DWHPro has a new website! It will focus on data-warehouse migrations to the cloud, helping professionals move from traditional systems like Teradata to modern platforms such as Snowflake, Databricks, and ot...
Sometimes, a simple rounding rule can change everything. Teradata and Snowflake don’t always agree on how to handle the same number, and the result might surprise you. Read the short explanation here.

Roland
1. The Forgotten Performance Trick: A Semicolon That Saves Time

For decades, Teradata developers have quietly used one of the smallest but most powerful performance optimizations in BTEQ: a semicolon at the start of a line. This isn’t just a style cho...
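A minimal sketch of the trick (table and column names are illustrative): when the semicolon that ends one statement is placed at the start of the next line, BTEQ bundles the statements into a single multi-statement request, so they travel to the parser together and run as one transaction instead of one round trip per statement.

```sql
-- Two independent requests: two parser round trips, two transactions.
UPDATE sales SET status = 'OPEN'   WHERE sale_id = 1;
UPDATE sales SET status = 'CLOSED' WHERE sale_id = 2;

-- One multi-statement request: the leading semicolon tells BTEQ to
-- keep collecting statements and submit them as a single unit.
UPDATE sales SET status = 'OPEN'   WHERE sale_id = 1
;UPDATE sales SET status = 'CLOSED' WHERE sale_id = 2;
```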
Performance degradation caused by uneven workload distribution is one of the oldest and most persistent challenges in parallel data warehouse systems. Both Teradata and Snowflake can experience this imbalance, commonly known as skew. Although the ter...
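On the Teradata side, a quick way to see the imbalance described here is to count rows per AMP using the hash functions; a sketch, assuming a table `orders` distributed by its primary index column `customer_id`:

```sql
-- Rows per AMP for the chosen distribution column: an evenly
-- distributed table shows roughly equal counts, a skewed one
-- shows a few AMPs carrying most of the rows.
SELECT HASHAMP(HASHBUCKET(HASHROW(customer_id))) AS amp_no,
       COUNT(*)                                  AS row_cnt
FROM   orders
GROUP  BY 1
ORDER  BY row_cnt DESC;
```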
Dear DWHPro reader,

UPDATE looks identical in SQL, but in Teradata and Snowflake it works in completely different ways. Teradata updates data blocks in place with journals and WAL. Snowflake rewrites partitions and commits through metadata. The impact?...
At first glance, an UPDATE looks universal. In reality, it’s one of the most misleading similarities between Teradata and Snowflake. The SQL is the same, but the storage, logging, recovery, and performance mechanics are completely different. If you’re...
For many years, Teradata was the undisputed leader in large-scale data warehousing. Banks, insurers, and telcos built their most critical systems on it. Today, the market is very different. Cloud-native databases such as Snowflake, BigQuery, and Data...
When migrating analytical workloads from Teradata to Snowflake, one subtle but important performance factor often gets overlooked: how the two systems handle GROUP BY operations on huge tables. The SQL looks the same, but the execution engines behave...
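To make the difference concrete, consider the statement below (table and columns are illustrative): the SQL text is identical on both systems, but Teradata typically aggregates locally on each AMP and then redistributes the partial results by the hash of the grouping key, while Snowflake partially aggregates per worker over the micro-partitions it scans before a final merge.

```sql
-- Same SQL text on both systems; the execution differs underneath.
-- Teradata:  AMP-local partial SUMs, redistribution by the hash of
--            region, then a final global aggregation step.
-- Snowflake: per-worker partial SUMs over scanned micro-partitions,
--            then an exchange and a final merge of the partials.
SELECT region,
       SUM(amount) AS total_amount
FROM   sales
GROUP  BY region;
```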
Are you migrating your data pipelines from Teradata to Snowflake? In my latest blog post, I break down how Snowflake’s COPY INTO paradigm differs from Teradata’s FastLoad — and how one big file could cost you performance and credits. Dive in to learn...
Big files, real risks, and how not to overspend

On Snowflake, COPY INTO scales with the number of files, not the total GB. A single big file equals one unit of work. To go fast and cheap, publish many medium parts (about 100–250 MB compressed), size...
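A hedged sketch of the pattern (stage, table, and file names are assumptions): many similarly sized compressed parts let each file become an independent unit of work that the warehouse can load in parallel.

```sql
-- One huge file: effectively a single unit of work; most of the
-- warehouse sits idle while one thread processes it.
COPY INTO sales
FROM @my_stage/export/sales_all.csv.gz
  FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP);

-- Many 100-250 MB parts: independent units of work the warehouse
-- loads in parallel, finishing faster for the same credits.
COPY INTO sales
FROM @my_stage/export/
  PATTERN = '.*sales_part_[0-9]+[.]csv[.]gz'
  FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP);
```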
Snowflake’s physical join execution is predominantly hash-based. In practice you’ll observe hash-join variants with two distribution strategies. If you come from Teradata, the intent will feel familiar: both systems aim to co-locate equal keys before matching....
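A sketch of the two shapes you will typically see in the query profile (tables are illustrative): a small build side is broadcast to every worker, while two large inputs are both repartitioned by the join key — the same co-location idea Teradata implements with table duplication or row redistribution across AMPs.

```sql
-- Small dimension joined to a large fact: Snowflake typically
-- broadcasts dim_region to all workers (Teradata: duplicate the
-- small table to all AMPs).
SELECT f.sale_id, r.region_name
FROM   fact_sales f
JOIN   dim_region r ON r.region_id = f.region_id;

-- Two large tables: both sides are repartitioned (hashed) on the
-- join key so equal keys land on the same worker (Teradata:
-- redistribute both tables by the hash of customer_id).
SELECT f.sale_id, p.payment_id
FROM   fact_sales f
JOIN   fact_payments p ON p.customer_id = f.customer_id;
```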
Dear DWHPro Community,

In the rapidly evolving landscape of data management, it's crucial to acknowledge the enduring significance of established technologies amidst the rise of new trends. Similar to how mainframes have been declared obsolete numerous...
I was recently approached to support a case of importing the results of a Teradata query into a third-party vendor database. On the export side, Teradata happily wrote close to 60 million rows into a single, wide CSV file. On the other side, the impo...
Introduction to the Teradata AMP Worker Task

The Teradata AMP Worker Task, or AWT, is the heart of the AMP, responsible for executing tasks and ensuring the smooth functioning of the system. AWTs are threads that process incoming tasks in the AMP. Each...
Introduction to Teradata Performance and NOT NULL

Welcome to the latest post in our Teradata performance series, designed to provide valuable insights into SQL queries. This article spotlights ‘NOT IN’. To delve deeper into ‘NOT IN’, it is crucial to...
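The classic pitfall this series builds on, identical in both engines (tables are illustrative): a single NULL in the subquery makes NOT IN return no rows at all, because `x NOT IN (..., NULL)` evaluates to UNKNOWN rather than TRUE. Declaring the column NOT NULL, or switching to NOT EXISTS, avoids it.

```sql
-- If even one orders.customer_id is NULL, this returns zero rows:
SELECT c.customer_id
FROM   customers c
WHERE  c.customer_id NOT IN (SELECT o.customer_id FROM orders o);

-- NOT EXISTS ignores NULLs and keeps the intended semantics:
SELECT c.customer_id
FROM   customers c
WHERE  NOT EXISTS (SELECT 1
                   FROM   orders o
                   WHERE  o.customer_id = c.customer_id);
```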