
What is the CAP Theorem?

The CAP Theorem states that a distributed database system can provide only two out of three properties: Consistency, Availability, and Partition Tolerance. This means that every data engineer needs to make a trade-off between these three based on the use case and business requirements. It is therefore important for any data engineer to understand the CAP Theorem and apply it when choosing the appropriate tools for the task at hand. Let's discuss each of the properties in detail.



1. Availability 

This condition states that every request (read or write) receives a response, whether it succeeds or fails. In other words, every working node in the system must return a response in a reasonable amount of time, which is only possible if the system remains operational all the time. However, availability alone says nothing about how recent the returned data is: if two records are written to different nodes around the same time, we don't know which one was written first, and a read could return either of them. Now consider real-time streaming data, where the order and time of events matter for the use case; prioritizing high Availability alone is not appropriate there.
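
As a rough illustration, here is a minimal Python sketch (toy code, with made-up data and node names) of an availability-first read: any live replica answers, so a response always comes back, but it may be stale.

```python
import random

# Toy model: two replicas that each accepted a write for the same key
# but have not synced with each other yet.
replicas = [
    {"user:1": "alice@old-mail.com"},   # write accepted by node A
    {"user:1": "alice@new-mail.com"},   # write accepted by node B
]

def read(key):
    """Availability-first read: any live replica may answer, so a
    response always comes back, but it may not be the latest write."""
    node = random.choice(replicas)      # whichever node answers first
    return node[key]

# Either value can come back; the system never refuses the request,
# but it also never promises the most recent write.
print(read("user:1"))
```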

2. Consistency

This condition states that all the nodes in the system must hold the same copy of the data, so every read request returns the most recent write, with the same data from every node. For example, if two records are written for the same key, each write carries a timestamp, and whenever a read request arrives, the most recent record is returned. At the start and end of a transaction the system must be in a consistent state; during the transaction the system may pass through inconsistent states, and any error leads to a rollback of the transaction.
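
Here is a minimal last-write-wins sketch of that idea (illustrative only; a real system coordinates this across nodes with a replication protocol, not a single in-memory log): every write is stored with a timestamp, and reads return the newest value.

```python
# Toy illustration of "the most recent record is returned": every
# write is logged with a timestamp, and reads pick the newest entry.
log = []

def write(timestamp, key, value):
    log.append((timestamp, key, value))

def read(key):
    # Keep only entries for this key and return the latest one.
    matches = [(ts, v) for ts, k, v in log if k == key]
    return max(matches)[1] if matches else None

write(1, "user:1", "alice@old-mail.com")
write(2, "user:1", "alice@new-mail.com")
print(read("user:1"))  # alice@new-mail.com - the most recent write
```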

3. Partition Tolerance

This condition states that the system continues to operate despite a network failure or network partition (a situation where two nodes are up and running but cannot communicate with each other). This is possible because the data is replicated across different nodes, which keeps the system fault-tolerant during the network failure. For most distributed systems, partition tolerance is a must, so in practice the trade-off is between Consistency and Availability. Let's understand it better with the case below.

Consider two nodes, A and B, in a master-master setup. During a network partition, i.e. when A and B cannot communicate with each other and cannot sync updates, you have two choices (see the sketch after this list):

1. Allow the nodes to get out of sync - give up Consistency
2. Make the cluster non-operational, i.e. unavailable, until the partition heals - give up Availability
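
The following toy Python sketch (hypothetical class names, no real networking) contrasts the two choices: during a partition, a consistency-first node refuses requests, while an availability-first node keeps answering with possibly stale data.

```python
class Node:
    """Toy replica for illustration; names and API are made up."""
    def __init__(self, value):
        self.value = value
        self.partitioned = False  # True when it can't reach its peer

class CPNode(Node):
    def read(self):
        if self.partitioned:
            # Give up Availability: refuse rather than risk stale data.
            raise RuntimeError("unavailable during partition")
        return self.value

class APNode(Node):
    def read(self):
        # Give up Consistency: always answer, even if out of sync.
        return self.value

cp, ap = CPNode("v1"), APNode("v1")
cp.partitioned = ap.partitioned = True

print(ap.read())        # responds, possibly with stale data
try:
    cp.read()
except RuntimeError as e:
    print(e)            # refuses until the partition heals
```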

Now let's look at the combinations of these properties.

1. CA 

In these systems, the data is consistent across all the nodes, and the system remains operational all the time (high Availability). As long as all the nodes are up, a read or write on any node returns the same data. But if a partition ever develops between the nodes, the data goes out of sync and never re-syncs, even after the network partition is resolved (you can't achieve Partition Tolerance).

2. CP

In these systems, the data is consistent across all the nodes, and the system continues to run during a network partition (Partition Tolerance). But you have to give up Availability: some requests are rejected or time out until the partition heals.

3. AP

All the nodes remain online even if there is a partition (communication break) between them, and they re-sync whenever the partition is resolved, but in the meantime there is no guarantee that all the nodes return the same data (you give up Consistency).
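
As a simplified sketch of that re-sync (real AP stores use mechanisms such as vector clocks or CRDTs, since plain last-write-wins can silently drop concurrent updates), two diverged replicas can be merged by keeping the newest timestamped value per key:

```python
# During the partition each node accepted writes independently,
# recording (timestamp, value) per key. After the partition heals,
# a simple last-write-wins merge brings them back in sync.
node_a = {"user:1": (1700000010, "alice@a.com")}
node_b = {"user:1": (1700000042, "alice@b.com")}

def merge(a, b):
    merged = dict(a)
    for key, (ts, val) in b.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, val)
    return merged

synced = merge(node_a, node_b)
node_a.update(synced)
node_b.update(synced)
print(node_a["user:1"])  # (1700000042, 'alice@b.com') on both nodes
```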

NOTE: Consistency in the CAP Theorem and Consistency in the ACID properties are totally different concepts.

In ACID, consistency refers to the guarantee that a transaction must not violate the integrity constraints defined in the database.
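
To make the contrast concrete, here is a small, self-contained example using Python's built-in sqlite3 module (the table and values are made up): a transaction that would violate a CHECK constraint raises an error and is rolled back, leaving the database consistent in the ACID sense.

```python
import sqlite3

# ACID-style consistency: a transaction that violates an integrity
# constraint is rolled back, leaving the database in a valid state.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, "
            "balance INTEGER CHECK (balance >= 0))")
con.execute("INSERT INTO accounts VALUES (1, 100)")
con.commit()

try:
    with con:  # transaction scope: rolls back on any exception
        con.execute("UPDATE accounts SET balance = balance - 500 "
                    "WHERE id = 1")  # would make the balance negative
except sqlite3.IntegrityError:
    pass  # the CHECK constraint fired and the update was rolled back

print(con.execute("SELECT balance FROM accounts").fetchone())  # (100,)
```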




