
What is CAP Theorem?

The CAP Theorem states that a distributed database system can provide only two of the following three properties: Consistency, Availability, and Partition Tolerance. Every data engineer therefore has to make a trade-off among these three based on the use case and business requirements, and understanding the theorem is essential when choosing the right tool for the task at hand. Let's discuss each of the properties in detail.



1. Availability 

This property states that every request (read or write) receives a response, whether it succeeds or fails. In other words, every working node in the system must return a response in a reasonable amount of time, which is only possible if the system remains operational at all times. Note that availability says nothing about ordering: if two records are added to the database, a read may not tell us which one was added first, and the output could be either of them. Now consider real-time streaming data, where the order and time of events matter for the use case; prioritizing availability alone is not appropriate there.
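Here is a minimal sketch (in Python, not any real database's API) of this behavior: a hypothetical replica that answers every request from its local copy, even when that copy may be stale cluster-wide.

```python
# A minimal, hypothetical sketch of an always-available replica: every
# request gets a response, but the local copy may lag behind other nodes.
class AvailableReplica:
    def __init__(self):
        self.local_copy = {}          # this node's view of the data

    def write(self, key, value):
        self.local_copy[key] = value  # always accepted; replication happens later
        return "OK"                   # every request gets a response...

    def read(self, key):
        # ...even if another replica holds a newer value we haven't seen yet
        return self.local_copy.get(key, "NOT_FOUND")

replica = AvailableReplica()
replica.write("user:1", "Alice")
print(replica.read("user:1"))  # "Alice" -- but possibly stale cluster-wide
```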

2. Consistency

This property states that all the nodes in the system must hold the same copy of the data, so every read request returns the most recent write, no matter which node serves it. For example, if two records are added to the database, each carries a timestamp, and any subsequent read returns the most recent record. The system must be in a consistent state at the start and at the end of a transaction; it may pass through inconsistent states while the transaction is in progress, but any error leads to a rollback of the entire transaction.
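The timestamp idea above can be sketched in a few lines of Python. This is a toy illustration, not how any particular database implements it: each write is stored with a timestamp, and a read returns the value of the most recent write.

```python
import time

# A minimal sketch: writes carry timestamps, reads return the newest write.
class TimestampedStore:
    def __init__(self):
        self.versions = {}  # key -> list of (timestamp, value) pairs

    def write(self, key, value):
        self.versions.setdefault(key, []).append((time.time(), value))

    def read(self, key):
        history = self.versions.get(key)
        if not history:
            return None
        # Return the value with the latest timestamp (the most recent write)
        return max(history, key=lambda tv: tv[0])[1]

store = TimestampedStore()
store.write("balance", 100)
store.write("balance", 150)
print(store.read("balance"))  # 150 -- the most recent write wins
```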

3. Partition Tolerance

This property states that the system continues to operate despite a network failure or a network partition (a situation where two nodes are up and running but cannot communicate with each other). This is possible because the data is replicated across different nodes, keeping the system fault-tolerant during a network failure. For most distributed systems, partition tolerance is a must, so the real trade-off is between Consistency and Availability. Let's understand it better with the case below.

Consider two nodes, A and B, in a master-master setup. During a network partition, i.e. when A and B cannot communicate with each other and cannot sync updates, you have two choices (sketched after this list):

1. Either allow the nodes to get out of sync - give up Consistency
2. Or make the cluster non-operational (unavailable) until the partition is resolved - give up Availability
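Here is a minimal Python sketch of these two choices, using a hypothetical node object (the `peer_reachable` flag stands in for whatever health check a real system would use): an "AP" node keeps accepting writes during a partition, while a "CP" node rejects them.

```python
# A minimal, hypothetical sketch of the two choices during a partition.
class Node:
    def __init__(self, mode):
        self.mode = mode           # "AP": stay available, "CP": stay consistent
        self.data = {}
        self.peer_reachable = True

    def write(self, key, value):
        if not self.peer_reachable and self.mode == "CP":
            # Choice 2: refuse requests we cannot sync -- give up Availability
            raise RuntimeError("partition: write rejected to stay consistent")
        # Choice 1: accept the write locally -- give up Consistency
        self.data[key] = value
        return "OK"

node = Node(mode="AP")
node.peer_reachable = False    # simulate a network partition
print(node.write("k", "v"))    # still answers "OK", but replicas may diverge
```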

Now let's look at the combinations of these properties.

1. CA 

In these systems, the data is consistent across all the nodes, and the system remains operational at all times (high Availability). As long as all the nodes are up, a read or write against any node returns the same data. But if a partition ever develops between the nodes, the data goes out of sync and will not re-sync even after the partition is resolved (you can't achieve Partition Tolerance).

2. CP

In these systems, the data stays consistent across all the nodes, and the system continues to run during a network partition because it has Partition Tolerance. But you give up Availability: during the partition, some nodes stop serving requests rather than risk returning stale data.

3. AP

All the nodes remain online even if there is a partition (communication break) between them, and they re-sync whenever the partition is resolved, but there is no guarantee that all the nodes return the same data in the meantime (you give up Consistency).
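As a rough illustration of the re-sync step, here is a minimal sketch that merges two diverged replicas using the common last-write-wins rule. This is just one possible conflict-resolution policy; real AP systems may use more sophisticated techniques such as vector clocks or CRDTs.

```python
# A minimal sketch of re-syncing after a partition heals, using
# last-write-wins: for each key, keep the entry with the newer timestamp.
def merge(replica_a, replica_b):
    merged = dict(replica_a)
    for key, (ts, value) in replica_b.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged

a = {"cart": (100, ["book"])}          # writes accepted on node A
b = {"cart": (105, ["book", "pen"])}   # concurrent writes on node B
print(merge(a, b))                     # {'cart': (105, ['book', 'pen'])}
```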

NOTE: Consistency in the CAP Theorem and Consistency in the ACID properties are totally different concepts.

In ACID, it means that a transaction must not violate the integrity constraints of the database.
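To make the distinction concrete, here is a minimal, hypothetical sketch of ACID-style consistency: a transfer transaction that rolls back whenever it would violate the integrity constraint that no account balance goes negative.

```python
# A minimal sketch of ACID-style consistency: enforce "balance >= 0"
# and roll back the whole transaction if the constraint is violated.
def transfer(accounts, src, dst, amount):
    snapshot = dict(accounts)        # remember the state for rollback
    accounts[src] -= amount
    accounts[dst] += amount
    if accounts[src] < 0:            # integrity constraint violated
        accounts.clear()
        accounts.update(snapshot)    # roll back the whole transaction
        raise ValueError("constraint violated: transaction rolled back")

accounts = {"alice": 50, "bob": 10}
try:
    transfer(accounts, "alice", "bob", 80)
except ValueError as e:
    print(e)
print(accounts)  # {'alice': 50, 'bob': 10} -- unchanged, as before the transaction
```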




