The SQL Server Training Series: Normalization and Denormalization Techniques In SQL Server


In SQL Server training programs, the most common debate among professionals is whether normalization or denormalization delivers better performance. It is important to know that neither approach can be dismissed outright, because both have pros and cons. Before weighing them, let's define what normalization and denormalization are.

What Are Normalization and Denormalization?

Relational databases revolve around the concept of normalization, a technique used to minimize duplication of data. Normalization also cuts down the risk of errors that arise when multiple unsynchronized copies of the same data exist. By using relationships, duplication can be controlled and eventually eliminated. An example would be a table that holds information about employees and the job titles they occupy.

If two employees hold the same designation, there is no need to enter that job title twice and keep it in two places. Instead, a relationship is created between the employees and the positions they occupy, so the table that lists the job titles only needs to be updated in one place.
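The employee/job-title design described above can be sketched as follows. This is a minimal illustration; the table and column names are hypothetical, not from any particular training database:

```sql
-- Normalized design: each job title is stored once and shared by reference.
CREATE TABLE job_titles (
    job_title_id INT PRIMARY KEY,
    title_name   VARCHAR(100) NOT NULL
);

CREATE TABLE employees (
    employee_id   INT PRIMARY KEY,
    employee_name VARCHAR(100) NOT NULL,
    job_title_id  INT NOT NULL
        REFERENCES job_titles (job_title_id)  -- many employees, one title row
);

-- Renaming a title is now a single-row update, no matter how many
-- employees hold it:
UPDATE job_titles SET title_name = 'Senior Engineer' WHERE job_title_id = 1;
```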

The second concept to be familiar with is denormalization, in which information is deliberately duplicated. This yields better read performance than a normalized database, at the cost of potential data integrity problems and an expanded database size. It is generally used when you need quick retrieval of data.

Learning about the Advantages and Disadvantages of a Normalized Database Design

Normalized databases are used where the read load is lighter than the write load, that is, where the application is write-intensive. Let's have a look at why this is:

  • Normalized data has a smaller footprint because it is divided vertically across many tables, so writes tend to perform better.
  • Since each piece of data lives in one place with no duplication, updates happen quickly.
  • With no duplicates to maintain, inserts are rapid.
  • Normalized tables are small, so they fit easily in the buffer pool.
  • Selects against a single table tend to be fast because the data is fetched from one place.
  • Because there are no duplicate rows, there is rarely a need for DISTINCT in queries.

There are many advantages, as listed above, but the main issue is that reading normalized data usually means joining tables, which can cause read operations to suffer because indexing strategies do not work as well across table joins.

Learning about the Advantages and Disadvantages of a Denormalized Database Design

Denormalized data tends to do well when there is a heavy read load and the application is read-intensive. Let's have a look at some of the reasons why:

  • The selects tend to be fast because the data is present in the same table, so there is no need for joins.
  • Index usage is more efficient because a single table holds all the required data. If the columns are indexed properly, the results can be filtered and sorted with the same index. This is not possible with normalized data, where the data is spread across different tables.
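The single-table, single-index advantage can be sketched with a hypothetical denormalized orders table. The table, column, and index names below are illustrative assumptions:

```sql
-- Hypothetical denormalized table: product details copied into orders.
CREATE TABLE orders_denorm (
    order_id     INT PRIMARY KEY,
    order_date   DATETIME NOT NULL,
    product_id   INT NOT NULL,
    product_name VARCHAR(100) NOT NULL  -- duplicated from the products table
);

-- One index serves both the filter and the sort; the INCLUDE clause
-- (SQL Server syntax) makes it covering, so qualifying queries never
-- have to touch the base table at all.
CREATE INDEX IX_orders_denorm_name_date
    ON orders_denorm (product_name, order_date DESC)
    INCLUDE (order_id);
```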

Denormalized tables are fast to read, but because the data is duplicated, inserts and updates are more complicated and more expensive.

In the real world, most systems face both write loads and read loads, so neither approach can be neglected. The best way forward is to use normalized and denormalized designs together, depending on the situation.

Using a Combined Normalized and Denormalized Approach

One of the best ways to take an integrated approach is to duplicate selected columns from one table into another. Let's have a look at how you can achieve this:

 Suppose you have two tables, one for products and one for orders. In the normalized approach, the orders table holds only the product ID, while all other product information lives in the products table. With a normalized schema, you would perform the query in the following way:

    SELECT order_date, product_name
    FROM orders
    INNER JOIN products ON orders.product_id = products.product_id
    WHERE product_name = 'some product'
    ORDER BY order_date DESC

You can simplify this query by denormalizing the schema: add a product_name column to the orders table so the join is no longer needed. The query then becomes:

    SELECT order_date, product_name
    FROM orders
    WHERE product_name = 'some product'
    ORDER BY order_date DESC

This way the query becomes simple, because a single index on the columns (product_name, order_date) can satisfy both the filter and the sort.
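The denormalization step and its supporting index might look like this. The column names follow the example above; the index name is made up:

```sql
-- Copy the product name into orders (the denormalization step).
ALTER TABLE orders ADD product_name VARCHAR(100);

-- Backfill the new column from the products table (T-SQL UPDATE...FROM).
UPDATE o
SET o.product_name = p.product_name
FROM orders AS o
INNER JOIN products AS p ON o.product_id = p.product_id;

-- One index covers both the WHERE filter and the ORDER BY sort.
CREATE INDEX IX_orders_name_date
    ON orders (product_name, order_date DESC);
```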

Can both techniques be used together? Yes, they can. In fact, in the real world, you must use both.


A denormalized schema tends to work better under extreme read load, but one issue people face is that inserts and updates become complex: because the data is duplicated, it has to be updated in more than one place.

One way to solve this issue is with triggers. If the orders table carries a product_name column, then whenever a product's name needs to change you can go about it in the following way:

  • Set up a trigger on the products table that propagates any change to the product name into the orders table.
  • Then simply execute the update against the products table; the trigger keeps the data in the orders table in sync automatically.
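The steps above can be sketched as a T-SQL AFTER UPDATE trigger. The table and column names follow the earlier example; the trigger name is made up:

```sql
-- Sketch: keep the duplicated product_name in orders in sync with products.
CREATE TRIGGER trg_products_sync_name
ON products
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Only do work if product_name was part of the update statement.
    IF UPDATE(product_name)
    BEGIN
        UPDATE o
        SET o.product_name = i.product_name
        FROM orders AS o
        INNER JOIN inserted AS i  -- 'inserted' holds the new row values
            ON o.product_id = i.product_id;
    END
END;
```

With this in place, a single `UPDATE products SET product_name = ... WHERE product_id = ...` automatically refreshes every matching row in the orders table.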

Normalization and denormalization are essential elements of SQL Server training, being vital to database performance and the elimination of data redundancy. Knowing how and when to apply each is important if the database is to run smoothly.
