Author: admin

  • Binary Heaps

    What is a Binary Heap?

    A heap, or binary heap, is a special case of the balanced binary tree data structure. It is a complete binary tree: all levels of the tree are fully filled except possibly the last level, which is filled from left to right.

    In this structure, every node is ordered with respect to its children, so the root node is either the smallest or the largest element in the tree. The root node is the topmost element of all the nodes in the tree.

    • Max-Heap: key(a) ≥ key(b), where node b is a child of node a
    • Min-Heap: key(a) ≤ key(b), where node b is a child of node a

    Representation of Binary Heap

    Binary heap is represented as an array. The root element will be at Arr[0]. For any given node at position i, the left child will be at position 2*i + 1 and the right child at position 2*i + 2.

    Binary Min-Heap Representation

    For a Min-Heap, the root element is the minimum element in the tree, and every child is greater than or equal to its parent.

    Binary Max-Heap Representation

    For a Max-Heap, the root element is the maximum element in the tree, and every child is less than or equal to its parent.

    Operations on Binary Heap

    There are mainly three operations that can be performed on a binary heap:

    • Insert: Inserting a new element in the heap.
    • Delete: Deleting the root element of the heap.
    • Peek: Getting the root element of the heap.

    Heapify Operation

    Heapify operations are used for maintaining the heap property of the binary heap. There are two types of heapify operations:

    • Min-Heapify: a sift-down (bubble-down) operation that restores the Min-Heap property at a node by repeatedly swapping it with its smaller child.
    • Max-Heapify: likewise a sift-down operation, restoring the Max-Heap property by repeatedly swapping a node with its larger child. (Bubbling up, by contrast, is used during insertion, where a newly added element moves toward the root.)

    Insert Operation on Binary Heap

    Inserting a new element in the heap is done by inserting the element at the end of the heap and then heapifying the heap.

    For a Min-Heap, the heapifying process compares the new element with its parent and swaps them if the new element is smaller. This repeats until the new element is greater than or equal to its parent (or becomes the root). For a Max-Heap, the comparison is reversed.

    Algorithm for Insert Operation

    Following is the algorithm for Insert Operation on Binary Heap:

    1. Insert the new element at the end of the heap.
    2. Initialize i as the index of the new element.
    3. WHILE i is not 0 AND heap[parent(i)] > heap[i]
    4.    SWAP heap[i] and heap[parent(i)]
    5.    Update i to parent(i)
    

    Code for Insert Operation

    Following is the code for Insert Operation on Binary Heap:


    // C Program to perform Insert Operation on Binary Heap
    #include <stdio.h>

    int insert(int heap[], int n, int key) {
       n = n + 1;
       int i = n - 1;
       heap[i] = key;
       while (i != 0 && heap[(i - 1) / 2] > heap[i]) {
          int temp = heap[i];
          heap[i] = heap[(i - 1) / 2];
          heap[(i - 1) / 2] = temp;
          i = (i - 1) / 2;
       }
       return n;
    }

    int main() {
       int heap[100] = {10, 20, 15, 40, 50, 100, 25};
       int n = 7;
       int key = 12;
       n = insert(heap, n, key);
       printf("Heap after Insert Operation: ");
       for (int i = 0; i < n; i++) {
          printf("%d ", heap[i]);
       }
       return 0;
    }

    Output

    Following is the output of the above C program:

    Heap after Insert Operation: 10 12 15 20 50 100 25 40
    

    Delete Operation on Binary Heap

    Deleting the root element of the heap is done by replacing the root element with the last element of the heap and then heapifying the heap.

    The heapifying process is done by comparing the root element with its children and swapping the elements if the root element is greater than its children. The process is repeated until the root element is less than its children.

    Algorithm for Delete Operation

    Following is the algorithm for Delete Operation on Binary Heap:

    START
       Replace the root element with the last element of the heap.
       Initialize i as 0.
       WHILE 2*i + 1 < n
          Set left as 2*i + 1.
          Set right as 2*i + 2.
          Set min as left.
          IF right < n AND heap[right] < heap[left]
             Set min as right.
          IF heap[i] > heap[min]
             SWAP heap[i] and heap[min]
             Update i to min
          ELSE
             BREAK
    END
    

    Code for Delete Operation

    Following is the code for Delete Operation on Binary Heap:


    // C Program to perform Delete Operation on Binary Heap
    #include <stdio.h>

    // (insert is included for completeness; it is not used below)
    int insert(int heap[], int n, int key) {
       n = n + 1;
       int i = n - 1;
       heap[i] = key;
       while (i != 0 && heap[(i - 1) / 2] > heap[i]) {
          int temp = heap[i];
          heap[i] = heap[(i - 1) / 2];
          heap[(i - 1) / 2] = temp;
          i = (i - 1) / 2;
       }
       return n;
    }

    int delete(int heap[], int n) {
       heap[0] = heap[n - 1];   // replace the root with the last element
       n = n - 1;
       int i = 0;
       while (2 * i + 1 < n) {
          int left = 2 * i + 1;
          int right = 2 * i + 2;
          int min = left;
          if (right < n && heap[right] < heap[left]) {
             min = right;
          }
          if (heap[i] > heap[min]) {
             int temp = heap[i];
             heap[i] = heap[min];
             heap[min] = temp;
             i = min;
          } else {
             break;
          }
       }
       return n;
    }

    int main() {
       int heap[100] = {10, 20, 15, 40, 50, 100, 25};
       int n = 7;
       printf("Heap before Delete Operation: ");
       for (int i = 0; i < n; i++) {
          printf("%d ", heap[i]);
       }
       printf("\n");
       n = delete(heap, n);
       printf("Heap after Delete Operation: ");
       for (int i = 0; i < n; i++) {
          printf("%d ", heap[i]);
       }
       return 0;
    }

    Output

    Following is the output of the above C program:

    Heap before Delete Operation: 10 20 15 40 50 100 25
    Heap after Delete Operation: 15 20 25 40 50 100
    

    Applications of Binary Heap

    Binary Heap is used for many applications. Some of the common applications of Binary Heap are as follows:

    • Heap Sort: A binary heap is used to implement heap sort, an efficient O(n log n) algorithm for sorting an array.
    • Priority Queue: Binary heap is used for implementing priority queues. It is used in Dijkstra’s algorithm, Prim’s algorithm, etc.
    • Graph Algorithms: Binary Heap is used in graph algorithms like Kruskal’s algorithm, Prim’s algorithm, etc.

    Conclusion

    Binary heaps are useful data structures for managing elements by priority. They form the backbone of many algorithms and data structures, and are used in applications like heap sort, priority queues, graph algorithms, etc.

  • Heap Data Structure

    Heap is a special case of balanced binary tree data structure where the root-node key is compared with its children and arranged accordingly. If α has child node β then −

    key(α) ≥ key(β)

    As the value of the parent is greater than that of the child, this property generates a Max Heap. Based on this criterion, a heap can be of two types −

    For Input → 35 33 42 10 14 19 27 44 26 31
    

    Min-Heap − Where the value of the root node is less than or equal to either of its children.


    Max-Heap − Where the value of the root node is greater than or equal to either of its children.


    Both trees are constructed using the same input and order of arrival.

    Max Heap Construction Algorithm

    We shall use the same example to demonstrate how a Max Heap is created. The procedure to create a Min Heap is similar, but we go for min values instead of max values.

    We are going to derive an algorithm for max heap by inserting one element at a time. At any point of time, the heap must maintain its property. During insertion, we also assume that we are inserting a node into an already heapified tree.

    Step 1 − Create a new node at the end of heap.
    Step 2 − Assign new value to the node.
    Step 3 − Compare the value of this child node with its parent.
    Step 4 − If value of parent is less than child, then swap them.
    Step 5 − Repeat step 3 & 4 until Heap property holds.
    

    Note − In Min Heap construction algorithm, we expect the value of the parent node to be less than that of the child node.

    Let’s walk through Max Heap construction using the same input sample that we used earlier.


    Example

    Following is the implementation of this operation in C −


    // C code for Max Heap construction Algorithm
    #include <stdio.h>
    #include <stdlib.h>

    // Structure to represent a heap
    typedef struct {
        int* array;    // Array to store heap elements
        int capacity;  // Maximum capacity of the heap
        int size;      // Current size of the heap
    } Heap;

    // Function to create a new heap
    Heap* createHeap(int capacity) {
        Heap* heap = (Heap*)malloc(sizeof(Heap));
        heap->array = (int*)malloc(capacity * sizeof(int));
        heap->capacity = capacity;
        heap->size = 0;
        return heap;
    }

    // Function to swap two elements in the heap
    void swap(int* a, int* b) {
        int temp = *a;
        *a = *b;
        *b = temp;
    }

    // Function to heapify a subtree rooted at index i
    void heapify(Heap* heap, int i) {
        int largest = i;
        int left = 2 * i + 1;
        int right = 2 * i + 2;
        // Check if the left child is larger than the root
        if (left < heap->size && heap->array[left] > heap->array[largest])
            largest = left;
        // Check if the right child is larger than the largest so far
        if (right < heap->size && heap->array[right] > heap->array[largest])
            largest = right;
        // If the largest is not the root, swap the root with the largest
        if (largest != i) {
            swap(&heap->array[i], &heap->array[largest]);
            heapify(heap, largest);
        }
    }

    // Function to insert a new element into the heap
    void insert(Heap* heap, int value) {
        if (heap->size == heap->capacity) {
            printf("Heap is full. Cannot insert more elements.\n");
            return;
        }
        // Insert the new element at the end
        int i = heap->size++;
        heap->array[i] = value;
        // Fix the heap property if it is violated
        while (i != 0 && heap->array[(i - 1) / 2] < heap->array[i]) {
            swap(&heap->array[i], &heap->array[(i - 1) / 2]);
            i = (i - 1) / 2;
        }
    }

    // Function to extract the maximum element from the heap
    int extractMax(Heap* heap) {
        if (heap->size == 0) {
            printf("Heap is empty. Cannot extract maximum element.\n");
            return -1;
        }
        // Store the root element
        int max = heap->array[0];
        // Replace the root with the last element
        heap->array[0] = heap->array[heap->size - 1];
        heap->size--;
        // Heapify the root
        heapify(heap, 0);
        return max;
    }

    // Function to print the elements of the heap
    void printHeap(Heap* heap) {
        printf("Heap elements: ");
        for (int i = 0; i < heap->size; i++) {
            printf("%d ", heap->array[i]);
        }
        printf("\n");
    }

    // Example usage of the heap
    int main() {
        Heap* heap = createHeap(10);
        insert(heap, 35);
        insert(heap, 33);
        insert(heap, 42);
        insert(heap, 10);
        insert(heap, 14);
        insert(heap, 19);
        insert(heap, 27);
        insert(heap, 44);
        insert(heap, 26);
        insert(heap, 31);
        printHeap(heap);
        int max = extractMax(heap);
        printf("Maximum element: %d\n", max);
        return 0;
    }

    Output

    Heap elements: 44 42 35 33 31 19 27 10 26 14 
    Maximum element: 44
    

    Max Heap Deletion Algorithm

    Let us derive an algorithm to delete from max heap. Deletion in Max (or Min) Heap always happens at the root to remove the Maximum (or minimum) value.

    Step 1 − Remove the root node.
    Step 2 − Move the last element of the last level to the root.
    Step 3 − Compare the value of this node with its children.
    Step 4 − If the value of the parent is less than a child, swap it with the larger child.
    Step 5 − Repeat steps 3 & 4 until the heap property holds.
    

    Example

    Following is the implementation of this operation in C −


    // C code for Max Heap Deletion Algorithm
    #include <stdio.h>
    #include <stdlib.h>

    // Structure to represent a heap
    typedef struct {
        int* array;    // Array to store heap elements
        int capacity;  // Maximum capacity of the heap
        int size;      // Current size of the heap
    } Heap;

    // create a new heap
    Heap* createHeap(int capacity) {
        Heap* heap = (Heap*)malloc(sizeof(Heap));
        heap->array = (int*)malloc(capacity * sizeof(int));
        heap->capacity = capacity;
        heap->size = 0;
        return heap;
    }

    // swap two elements in the heap
    void swap(int* a, int* b) {
        int temp = *a;
        *a = *b;
        *b = temp;
    }

    // Heapify a subtree rooted at index i
    void heapify(Heap* heap, int i) {
        int largest = i;
        int left = 2 * i + 1;
        int right = 2 * i + 2;
        // Check if the left child is larger than the root
        if (left < heap->size && heap->array[left] > heap->array[largest])
            largest = left;
        // Check if the right child is larger than the largest so far
        if (right < heap->size && heap->array[right] > heap->array[largest])
            largest = right;
        // If the largest is not the root, swap the root with the largest
        if (largest != i) {
            swap(&heap->array[i], &heap->array[largest]);
            heapify(heap, largest);
        }
    }

    // Function to insert a new element into the heap
    void insert(Heap* heap, int value) {
        if (heap->size == heap->capacity) {
            printf("Heap is full. Cannot insert more elements.\n");
            return;
        }
        // Insert the new element at the end
        int i = heap->size++;
        heap->array[i] = value;
        // Fix the heap property if it is violated
        while (i != 0 && heap->array[(i - 1) / 2] < heap->array[i]) {
            swap(&heap->array[i], &heap->array[(i - 1) / 2]);
            i = (i - 1) / 2;
        }
    }

    // delete the maximum element from the heap
    int deleteMax(Heap* heap) {
        if (heap->size == 0) {
            printf("Heap is empty. Cannot extract maximum element.\n");
            return -1;
        }
        // Store the root element
        int max = heap->array[0];
        // Replace the root with the last element
        heap->array[0] = heap->array[heap->size - 1];
        heap->size--;
        // Heapify the root
        heapify(heap, 0);
        return max;
    }

    // print the elements of the heap
    void printHeap(Heap* heap) {
        for (int i = 0; i < heap->size; i++) {
            printf("%d ", heap->array[i]);
        }
        printf("\n");
    }

    // Deallocate memory occupied by the heap
    void destroyHeap(Heap* heap) {
        free(heap->array);
        free(heap);
    }

    // Example usage of the heap
    int main() {
        Heap* heap = createHeap(10);
        insert(heap, 35);
        insert(heap, 33);
        insert(heap, 42);
        insert(heap, 10);
        insert(heap, 14);
        insert(heap, 19);
        insert(heap, 27);
        insert(heap, 44);
        insert(heap, 26);
        insert(heap, 31);
        printf("Heap elements before deletion: ");
        printHeap(heap);
        // Deleting the maximum element in the heap
        int max = deleteMax(heap);
        printf("Maximum element: %d\n", max);
        printf("Heap elements after deletion: ");
        printHeap(heap);
        destroyHeap(heap);
        return 0;
    }

    Output

    Heap elements before deletion: 44 42 35 33 31 19 27 10 26 14 
    Maximum element: 44
    Heap elements after deletion: 42 33 35 26 31 19 27 10 14
  • Path Compression and Union by Rank: Optimizing Disjoint Sets

    Path Compression and Union by Rank are two optimization techniques used in Disjoint Set Data Structure. These techniques help to improve the efficiency of the Disjoint Set operations like Find, Union, and MakeSet.

    Path Compression

    As the name suggests, Path Compression is a technique that compresses the path from a node to the root of the tree. When we perform a Find operation on a node, we traverse the path from the node to the root of the tree. During this traversal, we update the parent of each node to the root of the tree.

    It helps to reduce the height of the tree and improves the efficiency of the Find operation.

    Algorithm for Path Compression

    Here is the algorithm for Path Compression −

    1. If the node is the root of the tree, return the node
    2. Otherwise, recursively find the root of the tree
    3. Update the parent of the node to the root of the tree
    4. Return the root of the tree
    

    Code for Path Compression

    Let’s see the code for Path Compression in C −


    // C program to implement Path Compression in Disjoint Set
    #include <stdio.h>

    #define N 100

    int parent[N];
    int rank[N];

    void makeSet(int n) {
       for (int i = 0; i < n; i++) {
          parent[i] = i;
          rank[i] = 0;
       }
    }

    int find(int x) {
       if (parent[x] != x) {
          parent[x] = find(parent[x]);   // path compression
       }
       return parent[x];
    }

    void unionSet(int x, int y) {
       int rootX = find(x);
       int rootY = find(y);
       if (rootX != rootY) {
          if (rank[rootX] > rank[rootY]) {
             parent[rootY] = rootX;
          } else if (rank[rootX] < rank[rootY]) {
             parent[rootX] = rootY;
          } else {
             parent[rootY] = rootX;
             rank[rootX]++;
          }
       }
    }

    int main() {
       int n = 5;
       makeSet(n);
       unionSet(0, 1);
       unionSet(1, 2);
       unionSet(3, 4);
       printf("Find(2): %d\n", find(2));
       printf("Find(3): %d\n", find(3));
       return 0;
    }

    Output

    Following is the output of the above C code −

    Find(2): 0
    Find(3): 3
    

    Union by Rank

    Union by Rank is another optimization technique used in Disjoint Set Data Structure. In this technique, we maintain the rank of each node in the tree. The rank of a node is the height of the tree rooted at that node. When we perform a Union operation, we merge the tree with a smaller rank into the tree with a larger rank.

    It helps to keep the height of the tree as small as possible and improves the efficiency of the Union operation.

    Algorithm for Union by Rank

    Here is the algorithm for Union by Rank −

    1. Find the root of the tree for the given nodes.
    2. If the rank of the root of the first tree is greater than the rank of the root of the second tree, make the root of the second tree as the child of the root of the first tree.
    3. Otherwise, make the root of the first tree as the child of the root of the second tree.
    4. If the rank of the root of the first tree is equal to the rank of the root of the second tree, increment the rank of the root of the second tree by 1.
    

    Code for Union by Rank

    Let’s see the code for Union by Rank in C −


    // C program to implement Union by Rank in Disjoint Set
    #include <stdio.h>

    #define N 100

    int parent[N];
    int rank[N];

    void makeSet(int n) {
       for (int i = 0; i < n; i++) {
          parent[i] = i;
          rank[i] = 0;
       }
    }

    int find(int x) {
       if (parent[x] != x) {
          parent[x] = find(parent[x]);
       }
       return parent[x];
    }

    void unionSet(int x, int y) {
       int rootX = find(x);
       int rootY = find(y);
       if (rootX != rootY) {
          if (rank[rootX] > rank[rootY]) {
             parent[rootY] = rootX;
          } else {
             parent[rootX] = rootY;
             if (rank[rootX] == rank[rootY]) {
                rank[rootY]++;
             }
          }
       }
    }

    int main() {
       int n = 5;
       makeSet(n);
       unionSet(0, 1);
       unionSet(1, 2);
       unionSet(3, 4);
       printf("Find(2): %d\n", find(2));
       printf("Find(3): %d\n", find(3));
       return 0;
    }

    Output

    Following is the output of the above C code −

    Find(2): 1
    Find(3): 4
    

    Comparison of Path Compression and Union by Rank

    Let’s compare the efficiency of the Disjoint Set operations with and without Path Compression and Union by Rank −

    Operation    Without Optimization    With Path Compression    With Union by Rank    With Both
    MakeSet      O(1)                    O(1)                     O(1)                  O(1)
    Find         O(n)                    O(log n) amortized       O(log n)              O(α(n)) amortized
    Union        O(n)                    O(log n) amortized       O(log n)              O(α(n)) amortized

    Conclusion

    Path Compression and Union by Rank are two optimization techniques used in Disjoint Set Data Structure. These techniques help to improve the efficiency of the Disjoint Set operations like Find, Union, and MakeSet.

    Using both Path Compression and Union by Rank, we can keep the height of the tree as small as possible and improve the efficiency of the Disjoint Set operations.

  • Disjoint Set Data Structure

    Disjoint set is also known as the union-find data structure. It is a type of data structure that keeps track of a collection of elements partitioned into multiple non-overlapping disjoint sets, meaning one element can belong to only one set.

    It provides operations for adding (merging) two sets, finding the representative of the set to which an element belongs, and finding if two elements are in the same set.

    Imagine you have a group of students. Initially, each student is in his/her own group. Over time, some students form their own groups due to common interests, and these groups can merge with other groups.

    For example,

    Initially :  {A}, {B}, {C}, {D}, {E} (Each student is in his/her own group)
    Then A and B form a group : {A, B}, {C}, {D}, {E}
    Then C and D form a group : {A, B}, {C, D}, {E}
    Then A, B, C, D form a group : {A, B, C, D}, {E}
    

    In the above example, we can see that the students are partitioned into multiple disjoint sets. They also merge with other sets to form a bigger set. Here, we can use disjoint set data structure to keep track of these sets.

    Key Features of Disjoint Set

    Following are the key features of disjoint set data structure:

    • Non-overlapping sets: Each element can belong to only one set.
    • Dynamic merging: We can merge two sets into one using Union operation.
    • Efficient querying: With the find operation, we can quickly find which set an element belongs to.

    Operations on Disjoint Set

    Following are the operations that can be performed on disjoint set:

    • Find(x): Find the representative of the set to which element x belongs.
    • Union(x, y): Merge the sets to which elements x and y belong.
    • MakeSet(x): Create a new set with element x.

    Implementation of Disjoint Set

    Disjoint set can be implemented using the following data structures:

    • Array: We can use an array to store the parent of each element. The parent of an element is the representative of the set to which the element belongs.
    • Rank: We can use rank to optimize the union operation. Rank is the height of the tree rooted at an element. We always attach the smaller tree to the larger tree to keep the height of the tree small.

    Example code for Disjoint Set using Array

    Following is the example code for disjoint set using array:

    In order to implement disjoint set using array, we need to create a structure that contains an array to store the parent of each element and the number of elements in the set. We will also cover the following operations −

    • MakeSet(x)
    • Find(x)
    • Union(x, y)

    These operations will help us to create a new set with element x, find the representative of the set to which element x belongs and merge the sets to which elements x and y belong.

    Algorithm

    We need to follow the following steps to implement disjoint set using array:

    1. Create a new set with element x using MakeSet(x) operation.
    2. Find the representative of the set to which element x belongs using Find(x) operation.
    3. Merge the sets to which elements x and y belong using Union(x, y) operation.
    

    Code

    Following is code for implementing disjoint set using array:


    #include <stdio.h>
    #include <stdlib.h>

    // Disjoint set data structure
    struct DisjointSet {
       int *parent;
       int n;
    };

    // Create a new set with element x
    void MakeSet(struct DisjointSet *ds, int x) {
       ds->parent[x] = x;
    }

    // Find the representative of the set to which element x belongs
    int Find(struct DisjointSet *ds, int x) {
       if (ds->parent[x] == x)
          return x;
       return Find(ds, ds->parent[x]);
    }

    // Merge the sets to which elements x and y belong
    void Union(struct DisjointSet *ds, int x, int y) {
       int xset = Find(ds, x);
       int yset = Find(ds, y);
       ds->parent[xset] = yset;
    }

    int main() {
       struct DisjointSet ds;
       int n = 5;
       ds.n = n;
       ds.parent = (int *)malloc(n * sizeof(int));
       for (int i = 0; i < n; i++)
          MakeSet(&ds, i);
       Union(&ds, 0, 1);
       Union(&ds, 2, 3);
       Union(&ds, 0, 2);
       for (int i = 0; i < n; i++)
          printf("%d ", ds.parent[i]);
       return 0;
    }

    Output

    Following is the output of the above code:

    1 3 3 3 4
    

    Complexity of Disjoint Set Operations

    Following are the time complexities of disjoint set operations:

    • Find(x): O(log n) with union by rank, where n is the number of elements; the plain implementation shown above is O(n) in the worst case.
    • Union(x, y): same as Find, since its cost is dominated by the two Find calls.
    • MakeSet(x): O(1)

    Applications of Disjoint Set

    Following are the applications of disjoint set data structure:

    • Connected Components: Disjoint set can be used to find connected components in a graph.
    • Dynamic connectivity: It is also used to find if two elements are connected in a graph.
    • Image processing: It is used to find connected components in an image.
    • Kruskal’s algorithm: We can use disjoint set to find minimum spanning tree using Kruskal’s algorithm.
    • Network connectivity: You can check if two computers are connected in a network.
  • Collision in Hashing

    Hashing is a technique that uses a hash function to map data to a location in a data structure. The hash function takes the data as input and returns an index in the data structure where the data should be stored. However, there can be cases where two different data elements map to the same index. This is known as a collision.

    For example, suppose we have a hash table with 10 buckets and a hash function that maps data elements to the buckets based on their value. If two data elements have the same hash value, they will be stored in the same bucket, causing a collision.

    Collision Resolution Techniques

    If there is a collision, we need to resolve it for the data structure to work correctly. There are several techniques to handle collisions in hashing:

    • Open Addressing
    • Separate Chaining

    Open Addressing in Hashing

    Open addressing is also known as closed hashing. In open addressing, all the keys are stored directly in the hash table. When two keys are mapped to the same position, the algorithm searches for the next empty slot in the hash table to store the key.

    There are several techniques for open addressing:

    • Linear Probing: In linear probing, if a collision occurs, the algorithm searches for the next empty slot in the hash table by moving one position at a time.
    • Quadratic Probing: In quadratic probing, if a collision occurs, the algorithm searches for the next empty slot in the hash table by moving to the next position using a quadratic function.
    • Double Hashing: In double hashing, if a collision occurs, the algorithm searches for the next empty slot in the hash table by moving to the next position using a second hash function.

    Algorithm of Open Addressing

    The algorithm of open addressing is as follows:

    1. Calculate the hash value of the key.
    2. If the slot is empty, store the key in that slot.
    3. If the slot is not empty, use a probing technique to find the next empty slot.
    4. Repeat steps 2 and 3 until an empty slot is found.
    

    Example of Open Addressing

    The following C code demonstrates the open addressing technique using linear probing.


    // C Program for open addressing with linear probing
    #include <stdio.h>
    #include <stdlib.h>

    #define SIZE 10

    int customHash(int key) {
       return key % SIZE;
    }

    // Find the next empty slot using linear probing (0 marks empty);
    // returns -1 if the table is full
    int probe(int H[], int key) {
       int index = customHash(key);
       int i = 0;
       while (i < SIZE && H[(index + i) % SIZE] != 0)
          i++;
       if (i == SIZE)
          return -1;
       return (index + i) % SIZE;
    }

    void insert(int H[], int key) {
       int index = customHash(key);
       if (H[index] != 0)
          index = probe(H, key);
       if (index != -1)
          H[index] = key;
    }

    // Return the index of key, or -1 if it is not present
    int search(int H[], int key) {
       int index = customHash(key);
       int i = 0;
       while (i < SIZE && H[(index + i) % SIZE] != key)
          i++;
       if (i == SIZE)
          return -1;
       return (index + i) % SIZE;
    }

    int main() {
       int HT[SIZE] = {0};
       insert(HT, 12); insert(HT, 25); insert(HT, 35); insert(HT, 26);
       insert(HT, 45); insert(HT, 55); insert(HT, 65); insert(HT, 75);
       insert(HT, 85); insert(HT, 95);
       int result = search(HT, 26);
       if (result == -1)
          printf("Key not found\n");
       else
          printf("Key found at index: %d\n", result);
       return 0;
    }

    Output

    The output obtained is as follows −

    Key found at index: 7
    

    Separate Chaining in Hashing

    Separate chaining is also known as open hashing. In this technique, each slot in the hash table is a linked list. When a collision occurs, the data element is added to the linked list at that slot. This allows multiple data elements to be stored at the same index in the hash table.

    Separate chaining is a simple and effective technique for handling collisions in hashing. It allows for efficient storage and retrieval of data elements, even when collisions occur.

    Types of Separate Chaining

    There are several types of separate chaining techniques:

    • Simple chaining: In simple chaining, each slot in the hash table is a linked list that stores the data elements that map to that slot.
    • Dynamic hashing: In dynamic hashing, the hash table is dynamically resized to accommodate more data elements as needed.
    • Extendible hashing: In extendible hashing, the hash table is divided into blocks, and each block stores a subset of the data elements.

    Algorithm of Separate Chaining

    The algorithm of separate chaining is as follows:

    1. Calculate the hash value of the key to get its index.
    2. If the linked list at that index is empty, create a new node and store the key in it.
    3. If the linked list is not empty, add the key as a new node in that list.
    

    Example of Separate Chaining

    The following C code demonstrates the separate chaining technique using a linked list.


    // C Program for separate chaining with linked lists
    #include <stdio.h>
    #include <stdlib.h>

    #define SIZE 10

    struct Node {
       int data;
       struct Node* next;
    };

    int customHash(int key) {
       return key % SIZE;
    }

    void insert(struct Node* H[], int key) {
       int index = customHash(key);
       struct Node* newNode = (struct Node*)malloc(sizeof(struct Node));
       newNode->data = key;
       newNode->next = H[index];   // prepend to the chain
       H[index] = newNode;
    }

    int search(struct Node* H[], int key) {
       int index = customHash(key);
       struct Node* temp = H[index];
       while (temp != NULL) {
          if (temp->data == key)
             return index;
          temp = temp->next;
       }
       return -1;
    }

    int main() {
       struct Node* HT[SIZE];
       for (int i = 0; i < SIZE; i++)
          HT[i] = NULL;
       insert(HT, 12); insert(HT, 25); insert(HT, 35); insert(HT, 26);
       insert(HT, 45); insert(HT, 55); insert(HT, 65); insert(HT, 75);
       insert(HT, 85); insert(HT, 95);
       int result = search(HT, 85);
       if (result == -1)
          printf("Key not found\n");
       else
          printf("Key found at index: %d\n", result);
       return 0;
    }

    Output

    The output obtained is as follows −

    Key found at index: 5
    

    Open Addressing Vs Separate Chaining

    Open Addressing:
    • Each slot in the hash table stores a single data element directly in the table.
    • Requires additional probing to find an empty slot when a collision occurs.
    • Can lead to clustering of data elements in the hash table.
    • The table cannot hold more elements than it has slots.
    • Tends to be faster for small hash tables with few collisions.

    Separate Chaining:
    • Each slot in the hash table stores a linked list of data elements.
    • Does not require probing, as colliding elements are added to the slot's linked list.
    • Does not lead to clustering, as colliding elements stay in their own linked lists.
    • Can hold more elements than there are slots, at the cost of pointer overhead per element.
    • Tends to be faster for large hash tables with many collisions.

    Conclusion

    Collision in hashing occurs when two different data elements map to the same index in the data structure. This can be resolved using collision resolution techniques like open addressing and separate chaining. These techniques allow for efficient storage and retrieval of data elements, even when collisions occur.

  • Hashing in Data Structure

    Hashing is a technique that lets us store data and look it up very quickly. Hashing uses a special formula called a hash function to map data to a location in a data structure.

    The hash function takes the data as input and returns an index in the data structure where the data should be stored. This allows us to quickly retrieve the data by using the hash function to calculate the index and then looking up the data at that index.

    Imagine you have a list of names and you want to look up a specific name in the list. You could use a hash function to map the name to an index in a data structure, such as an array or a hash table. This allows you to quickly find the name in the data structure without having to search through the entire list.

    What is Hash Function?

    A hash function is a mathematical function which takes an input and returns a fixed-size string of bytes. The output of the hash function is called a hash value or hash code. Hash functions are designed to be fast and efficient, so that they can quickly calculate the hash value for a given input.

    Hash functions have several important properties:

    • They are deterministic, meaning that the same input will always produce the same output.
    • They are fast, so the hash value for a given input can be computed quickly.
    • Cryptographic hash functions are secure, meaning that it is difficult to reverse-engineer the input from the hash value.
    • They have a fixed output size, so the hash value is always the same length.

    What is Hash Table?

    A hash table is a data structure that makes use of a hash function to map keys to values. It consists of an array of buckets, where each bucket stores a key-value pair.

    The hash function is used to calculate the index of the bucket where the key-value pair should be stored. This allows us to quickly retrieve the value associated with a key by using the hash function to calculate the index and then looking up the value at that index.

    Properties of Hash Table

    Hash tables have several important properties:

    • They provide fast lookups, insertions, and deletions, with an average time complexity of O(1).
    • They can store a large amount of data easily, with a space complexity of O(n).
    • They can handle collisions, where two keys map to the same index, by using techniques like chaining or open addressing.

    Collision in Hashing

    Collision in hashing occurs when the hash function produces the same hash value for two different input values. This can happen due to the limited range of hash values or the nature of the hash function.

    Hashing Algorithms

    There are several hashing algorithms that are commonly used in computer science, such as:

    • MD5 (Message Digest Algorithm 5)
    • SHA-1 (Secure Hash Algorithm 1)
    • SHA-256 (Secure Hash Algorithm 256)
    • SHA-512 (Secure Hash Algorithm 512)
    • CRC32 (Cyclic Redundancy Check 32)

    These algorithms are used for various applications, such as data integrity checks, password hashing, digital signatures, and encryption.

    Applications of Hashing

    Hashing is used in various applications in computer science, such as:

    • Storing and retrieving data quickly.
    • Implementing data structures like hash tables and hash maps.
    • Searching and indexing data efficiently.
    • Ensuring data integrity and security.
    • Password hashing and encryption.

    Conclusion

    We have discussed the concept of hashing, hash functions, hash tables, and collision in hashing. We also looked at some common hashing algorithms and applications of hashing in computer science.

  • Travelling Salesman Problem (Dynamic Approach)

    Travelling salesman problem is one of the most notorious computational problems. We can use the brute-force approach to evaluate every possible tour and select the best one. For n vertices in a graph, there are (n − 1)! possibilities, which makes the complexity extremely high.

    However, instead of using brute-force, the dynamic programming approach will obtain the solution in less time, though there is no polynomial-time algorithm.

    Travelling Salesman Dynamic Programming Algorithm

    Let us consider a graph G = (V,E), where V is a set of cities and E is a set of weighted edges. An edge e(u, v) represents that vertices u and v are connected. Distance between vertex u and v is d(u, v), which should be non-negative.

    Suppose we have started at city 1 and after visiting some cities now we are in city j. Hence, this is a partial tour. We certainly need to know j, since this will determine which cities are most convenient to visit next. We also need to know all the cities visited so far, so that we don’t repeat any of them. Hence, this is an appropriate sub-problem.

    For a subset of cities S ⊆ {1, 2, 3, …, n} that includes 1, and j ∈ S, let C(S, j) be the length of the shortest path visiting each node in S exactly once, starting at 1 and ending at j.

    When |S| > 1, we define C(S, 1) = ∞, since the path cannot both start and end at 1.

    Now, let us express C(S, j) in terms of smaller sub-problems. We need to start at 1 and end at j. We should select the next-to-last city i in such a way that

    C(S, j) = min { C(S − {j}, i) + d(i, j) : i ∈ S, i ≠ j }

    Algorithm: Traveling-Salesman-Problem
    C ({1}, 1) = 0
    for s = 2 to n do
       for all subsets S ⊆ {1, 2, 3, …, n} of size s and containing 1
          C (S, 1) = ∞
          for all j ∈ S and j ≠ 1
             C (S, j) = min {C (S − {j}, i) + d(i, j) for i ∈ S and i ≠ j}
    Return min_j { C ({1, 2, 3, …, n}, j) + d(j, 1) }
    

    Analysis

    There are at most 2^n · n sub-problems and each one takes linear time to solve. Therefore, the total running time is O(2^n · n^2).

    Example

    In the following example, we will illustrate the steps to solve the travelling salesman problem.

    travelling_salesman_problem

    From the above graph, the following table is prepared.

        1    2    3    4
    1   0   10   15   20
    2   5    0    9   10
    3   6   13    0   12
    4   8    8    9    0

    S = Φ

    Cost(2,Φ,1)=d(2,1)=5

    Cost(3,Φ,1)=d(3,1)=6

    Cost(4,Φ,1)=d(4,1)=8

    S = 1

    Cost(i, S, 1) = min { Cost(j, S − {j}, 1) + d[i, j] }

    Cost(2,{3},1)=d[2,3]+Cost(3,Φ,1)=9+6=15

    Cost(2,{4},1)=d[2,4]+Cost(4,Φ,1)=10+8=18

    Cost(3,{2},1)=d[3,2]+Cost(2,Φ,1)=13+5=18

    Cost(3,{4},1)=d[3,4]+Cost(4,Φ,1)=12+8=20

    Cost(4,{3},1)=d[4,3]+Cost(3,Φ,1)=9+6=15

    Cost(4,{2},1)=d[4,2]+Cost(2,Φ,1)=8+5=13

    S = 2

    Cost(2,{3,4},1) = min { d[2,3] + Cost(3,{4},1), d[2,4] + Cost(4,{3},1) }
                    = min { 9 + 20, 10 + 15 } = min { 29, 25 } = 25

    Cost(3,{2,4},1) = min { d[3,2] + Cost(2,{4},1), d[3,4] + Cost(4,{2},1) }
                    = min { 13 + 18, 12 + 13 } = min { 31, 25 } = 25

    Cost(4,{2,3},1) = min { d[4,2] + Cost(2,{3},1), d[4,3] + Cost(3,{2},1) }
                    = min { 8 + 15, 9 + 18 } = min { 23, 27 } = 23

    S = 3

    Cost(1,{2,3,4},1) = min { d[1,2] + Cost(2,{3,4},1), d[1,3] + Cost(3,{2,4},1), d[1,4] + Cost(4,{2,3},1) }
                      = min { 10 + 25, 15 + 25, 20 + 23 } = min { 35, 40, 43 } = 35

    The minimum cost path is 35.

    Start from Cost(1, {2, 3, 4}, 1): the minimum comes from d[1, 2]. So when s = 3, select the path from 1 to 2 (cost is 10), then go backwards. When s = 2, the minimum in Cost(2, {3, 4}, 1) comes from d[2, 4]. Select the path from 2 to 4 (cost is 10), then go backwards.

    When s = 1, only d[4, 3] remains to be selected, since 2 is already on the path (two candidate values of 15 appear, for d[2, 3] and d[4, 3], but the last node of the path is 4). Select the path from 4 to 3 (cost is 9), and finally return to the start with d[3, 1] (cost is 6).

    get_minimum_value

    Implementation

    Following is the implementation of the above approach in C −

    #include <stdio.h>
    #include <limits.h>

    #define MAX 9999

    int n = 4;
    int distan[20][20] = {
       {0, 22, 26, 30},
       {30, 0, 45, 35},
       {25, 45, 0, 60},
       {30, 35, 40, 0}
    };
    int DP[32][8];

    int TSP(int mark, int position) {
       int completed_visit = (1 << n) - 1;
       // all cities visited: close the tour by returning to city 0
       if (mark == completed_visit) {
          return distan[position][0];
       }
       if (DP[mark][position] != -1) {
          return DP[mark][position];
       }
       int answer = MAX;
       // try every unvisited city as the next stop
       for (int city = 0; city < n; city++) {
          if ((mark & (1 << city)) == 0) {
             int newAnswer = distan[position][city] + TSP(mark | (1 << city), city);
             answer = (answer < newAnswer) ? answer : newAnswer;
          }
       }
       return DP[mark][position] = answer;
    }

    int main() {
       for (int i = 0; i < (1 << n); i++) {
          for (int j = 0; j < n; j++) {
             DP[i][j] = -1;
          }
       }
       printf("Minimum Distance Travelled -> %d\n", TSP(1, 0));
       return 0;
    }

    Output

    Minimum Distance Travelled -> 122
  • Longest Common Subsequence Algorithm

    The longest common subsequence problem is to find the longest subsequence that exists in both of the given sequences.

    But before we understand the problem, let us understand what the term subsequence is −

    Let us consider a sequence S = <s1, s2, s3, s4, …, sn>. A sequence Z = <z1, z2, z3, …, zm> over S is called a subsequence of S if and only if it can be derived from S by deleting some elements. In simple words, a subsequence consists of elements of S that keep their original order but need not be consecutive.

    Common Subsequence

    Suppose, X and Y are two sequences over a finite set of elements. We can say that Z is a common subsequence of X and Y, if Z is a subsequence of both X and Y.

    Longest Common Subsequence

    If a set of sequences are given, the longest common subsequence problem is to find a common subsequence of all the sequences that is of maximal length.

    Naive Method

    Let X be a sequence of length m and Y a sequence of length n. Check for every subsequence of X whether it is a subsequence of Y, and return the longest common subsequence found.

    There are 2^m subsequences of X. Testing whether a sequence is a subsequence of Y takes O(n) time. Thus, the naive algorithm would take O(n · 2^m) time.

    Longest Common Subsequence Algorithm

    Let X = <x1, x2, x3, …, xm> and Y = <y1, y2, y3, …, yn> be the sequences. To compute the length of the longest common subsequence, the following algorithm is used.

    Step 1 − Construct an empty table with (m + 1) rows and (n + 1) columns, where m = length of sequence X and n = length of sequence Y. The rows in the table represent the elements in sequence X and the columns represent the elements in sequence Y.

    Step 2 − The zeroth row and column must be filled with zeroes. The remaining values are filled in based on two cases, by maintaining a counter value.

    • Case 1 − If the counter encounters an element common to both X and Y sequences, increment the counter by 1.
    • Case 2 − If the counter does not encounter a common element at T[i, j], fill T[i, j] with the maximum of T[i-1, j] and T[i, j-1].

    Step 3 − Once the table is filled, backtrack from the last value in the table. Backtracking here is done by tracing the path where the counter incremented first.

    Step 4 − The longest common subsequence is obtained by noting the elements along the traced path.

    Pseudocode

    In this procedure, table C[m, n] is computed in row major order and another table B[m,n] is computed to construct optimal solution.

    Algorithm: LCS-Length-Table-Formulation (X, Y)
    m := length(X)
    n := length(Y)
    for i = 1 to m do
       C[i, 0] := 0
    for j = 1 to n do
       C[0, j] := 0
    for i = 1 to m do
       for j = 1 to n do
          if xi = yj
             C[i, j] := C[i - 1, j - 1] + 1
             B[i, j] := D
          else
             if C[i -1, j] ≥ C[i, j -1]
                C[i, j] := C[i - 1, j]
                B[i, j] := U
             else
                C[i, j] := C[i, j - 1]
                B[i, j] := L
    return C and B
    
    Algorithm: Print-LCS (B, X, i, j)
    if i=0 or j=0
       return
    if B[i, j] = D
       Print-LCS(B, X, i-1, j-1)
       Print(xi)
    else if B[i, j] = U
       Print-LCS(B, X, i-1, j)
    else
       Print-LCS(B, X, i, j-1)
    

    This algorithm will print the longest common subsequence of X and Y.

    Analysis

    To populate the table, the outer for loop iterates m times and the inner for loop iterates n times. Hence, the complexity of the algorithm is O(mn), where m and n are the lengths of the two strings.

    Example

    In this example, we have two strings, X = BDCB and Y = BACDB, for which we find the longest common subsequence.

    Following the algorithm, we need to calculate the two tables shown below.

    Here, X = BDCB (length 4) and Y = BACDB (length 5).

    Constructing the LCS Tables

    In the table below, the zeroth row and column are filled with zeroes. The remaining values are filled by incrementing and choosing the maximum values according to the algorithm.

    table1

    Once the values are filled, the path is traced back from the last value in the table at T[4, 5].

    table2

    From the traced path, the longest common subsequence is found by choosing the values where the counter is first incremented.

    In this example, the final count is 3 so the counter is incremented at 3 places, i.e., B, C, B. Therefore, the longest common subsequence of sequences X and Y is BCB.

    Implementation

    Following is the final implementation to find the Longest Common Subsequence using Dynamic Programming Approach −


    #include <stdio.h>
    #include <string.h>

    int max(int a, int b);

    int lcs(char *X, char *Y, int m, int n) {
       int L[m + 1][n + 1];
       int i, j, index;
       // build the LCS-length table
       for (i = 0; i <= m; i++) {
          for (j = 0; j <= n; j++) {
             if (i == 0 || j == 0)
                L[i][j] = 0;
             else if (X[i - 1] == Y[j - 1])
                L[i][j] = L[i - 1][j - 1] + 1;
             else
                L[i][j] = max(L[i - 1][j], L[i][j - 1]);
          }
       }
       // trace back through the table to recover the subsequence
       index = L[m][n];
       char LCS[index + 1];
       LCS[index] = '\0';
       i = m, j = n;
       while (i > 0 && j > 0) {
          if (X[i - 1] == Y[j - 1]) {
             LCS[index - 1] = X[i - 1];
             i--;
             j--;
             index--;
          } else if (L[i - 1][j] > L[i][j - 1])
             i--;
          else
             j--;
       }
       printf("LCS: %s\n", LCS);
       return L[m][n];
    }

    int max(int a, int b) {
       return (a > b) ? a : b;
    }

    int main() {
       char X[] = "ABSDHS";
       char Y[] = "ABDHSP";
       int m = strlen(X);
       int n = strlen(Y);
       printf("Length of LCS is %d\n", lcs(X, Y, m, n));
       return 0;
    }

    Output

    LCS: ABDHS
    Length of LCS is 5
    

    Applications

    The longest common subsequence problem is a classic computer science problem, the basis of data comparison programs such as the diff-utility, and has applications in bioinformatics. It is also widely used by revision control systems, such as SVN and Git, for reconciling multiple changes made to a revision-controlled collection of files.

  • 0-1 Knapsack Problem

    We discussed the fractional knapsack problem using the greedy approach, earlier in this tutorial. It is shown that Greedy approach gives an optimal solution for Fractional Knapsack. However, this chapter will cover 0-1 Knapsack problem using dynamic programming approach and its analysis.

    Unlike in the fractional knapsack, items are always taken fully; fractions of an item are not allowed. Either an item is added to the knapsack or it is not. That is why this method is known as the 0-1 Knapsack problem.

    Hence, in case of 0-1 Knapsack, the value of xi can be either 0 or 1, where other constraints remain the same.

    0-1 Knapsack cannot be solved by the Greedy approach: while Greedy may happen to give an optimal solution in some instances, it does not guarantee one.

    0-1 Knapsack Algorithm

    Problem Statement − A thief is robbing a store and can carry a maximal weight of W into his knapsack. There are n items and weight of ith item is wi and the profit of selecting this item is pi. What items should the thief take?

    Let i be the highest-numbered item in an optimal solution S for capacity W. Then S′ = S − {i} is an optimal solution for capacity W − wi, and the value of solution S is vi plus the value of the sub-problem.

    We can express this fact in the following formula: define c[i, w] to be the solution for items 1, 2, …, i and maximum weight w.

    c[i, w] = 0                                           if i = 0 or w = 0
    c[i, w] = c[i-1, w]                                   if wi > w
    c[i, w] = max(vi + c[i-1, w-wi], c[i-1, w])           if i > 0 and wi ≤ w

    The algorithm takes the following inputs

    • The maximum weight W
    • The number of items n
    • The two sequences v = <v1, v2, …, vn> and w = <w1, w2, …, wn>

    The set of items to take can be deduced from the table, starting at c[n, w] and tracing backwards where the optimal values came from.

    If c[i, w] = c[i-1, w], then item i is not part of the solution, and we continue tracing with c[i-1, w]. Otherwise, item i is part of the solution, and we continue tracing with c[i-1, w-wi].

    Dynamic-0-1-knapsack (v, w, n, W)
    for w = 0 to W do
       c[0, w] = 0
    for i = 1 to n do
       c[i, 0] = 0
       for w = 1 to W do
          if wi ≤ w then
             if vi + c[i-1, w-wi] > c[i-1, w] then
                c[i, w] = vi + c[i-1, w-wi]
             else c[i, w] = c[i-1, w]
          else
             c[i, w] = c[i-1, w]
    

    The following examples will establish our statement.

    Example

    Let us consider that the capacity of the knapsack is W = 8 and the items are as shown in the following table.

    Item     A   B   C   D
    Profit   2   4   7  10
    Weight   1   3   5   7

    Solution

    Using the greedy approach on the 0-1 knapsack, the weight that is stored in the knapsack would be A+B = 4 with the maximum profit 2 + 4 = 6. But that solution would not be the optimal solution.

    Therefore, dynamic programming must be adopted to solve 0-1 knapsack problems.

    Step 1

    Construct a table with the maximum weights of the knapsack as rows and the items with their respective weights and profits as columns.

    Values to be stored in the table are the cumulative profits of the items whose weights do not exceed the maximum weight of the knapsack (the designated value of each row).

    So we add zeroes to the 0th row and 0th column: if no item is chosen, nothing is gained; and if the maximum weight of the knapsack is 0, then no item can be added into the knapsack.

    0-1_knapsack_problems

    The remaining values are filled with the maximum profit achievable with respect to the items and weight per column that can be stored in the knapsack.

    The formula to store the profit values is −

    c[i, w] = max { c[i−1, w], c[i−1, w−w[i]] + P[i] }

    By computing all the values using the formula, the table obtained would be −

    maximum_weight

    To find the items to be added to the knapsack, locate the maximum profit in the table and trace back which items make up that profit; in this example, it is the items with weights {1, 7}.

    maximum_profit_12

    The optimal solution is the set of items with weights {1, 7}, with a maximum profit of 12.

    Analysis

    This algorithm takes θ(n·W) time, as table c has (n+1)·(W+1) entries, where each entry requires θ(1) time to compute.

    Implementation

    Following is the final implementation of 0-1 Knapsack Algorithm using Dynamic Programming Approach.

    CC++JavaPython

    #include <stdio.h>

    int findMax(int n1, int n2) {
       return (n1 > n2) ? n1 : n2;
    }

    int knapsack(int W, int wt[], int val[], int n) {
       int K[n + 1][W + 1];
       // build the DP table bottom-up
       for (int i = 0; i <= n; i++) {
          for (int w = 0; w <= W; w++) {
             if (i == 0 || w == 0)
                K[i][w] = 0;
             else if (wt[i - 1] <= w)
                K[i][w] = findMax(val[i - 1] + K[i - 1][w - wt[i - 1]], K[i - 1][w]);
             else
                K[i][w] = K[i - 1][w];
          }
       }
       return K[n][W];
    }

    int main() {
       int val[] = {70, 20, 50};
       int wt[] = {11, 12, 13};
       int W = 30;
       int len = sizeof val / sizeof val[0];
       printf("Maximum Profit achieved with this knapsack: %d", knapsack(W, wt, val, len));
       return 0;
    }

    Output

    Maximum Profit achieved with this knapsack: 120
  • Floyd Warshall Algorithm

    The Floyd-Warshall algorithm is a graph algorithm that is deployed to find the shortest path between all the vertices present in a weighted graph. This algorithm is different from other shortest path algorithms; to describe it simply, this algorithm uses each vertex in the graph as a pivot to check if it provides the shortest way to travel from one point to another.

    Floyd-Warshall algorithm works on both directed and undirected weighted graphs, provided they do not contain any negative cycles. A negative cycle is a cycle whose edge weights sum to a negative number.

    The algorithm deals with overlapping sub-problems: the shortest paths found using each vertex as a pivot are stored and reused in later steps. Hence, it uses the dynamic programming approach.

    Floyd-Warshall algorithm is one of the methods in All-pairs shortest path algorithms and it is solved using the Adjacency Matrix representation of graphs.

    Floyd-Warshall Algorithm

    Consider a graph, G = {V, E} where V is the set of all vertices present in the graph and E is the set of all the edges in the graph. The graph, G, is represented in the form of an adjacency matrix, A, that contains all the weights of every edge connecting two vertices.

    Algorithm

    Step 1 − Construct an adjacency matrix A with all the costs of edges present in the graph. If there is no path between two vertices, mark the value as ∞.

    Step 2 − Derive another adjacency matrix A1 from A keeping the first row and first column of the original adjacency matrix intact in A1. And for the remaining values, say A1[i,j], if A[i,j]>A[i,k]+A[k,j] then replace A1[i,j] with A[i,k]+A[k,j]. Otherwise, do not change the values. Here, in this step, k = 1 (first vertex acting as pivot).

    Step 3 − Repeat Step 2 for all the vertices in the graph by changing the k value for every pivot vertex until the final matrix is achieved.

    Step 4 − The final adjacency matrix obtained is the final solution with all the shortest paths.

    Pseudocode

    Floyd-Warshall(w, n){ // w: weights, n: number of vertices
       for i = 1 to n do // initialize, D(0) = [wij]
          for j = 1 to n do{
             d[i, j] = w[i, j];
          }
       for k = 1 to n do // compute D(k) from D(k-1)
          for i = 1 to n do
             for j = 1 to n do
                if (d[i, k] + d[k, j] < d[i, j]){
                   d[i, j] = d[i, k] + d[k, j];
                }
       return d[1..n, 1..n];
    }
    

    Example

    Consider the following directed weighted graph G = {V, E}. Find the shortest paths between all the vertices of the graphs using the Floyd-Warshall algorithm.

    directed_weighted_graph

    Solution

    Step 1

    Construct an adjacency matrix A with all the distances as values.

    A =
         0    ∞    3    ∞    2
         5    0    ∞    ∞    ∞
         ∞    1    0    2    ∞
         6    ∞    4    0    5
         ∞    7    ∞    3    0

    Step 2

    Considering the above adjacency matrix as the input, derive the matrix A1 by keeping its first row and first column intact. Take k = 1, and for every other entry replace A1[i, j] with A[i, k] + A[k, j] whenever that sum is smaller.

    A1 =
         0    ∞    3    ∞    2
         5    0    8    ∞    7
         ∞    1    0    2    ∞
         6    ∞    4    0    5
         ∞    7    ∞    3    0

    Step 3

    Considering A1 as the input, derive the matrix A2 by keeping its second row and second column intact. Take k = 2, and for every other entry replace A2[i, j] with A1[i, k] + A1[k, j] whenever that sum is smaller.

    A2 =
         0    ∞    3    ∞    2
         5    0    8    ∞    7
         6    1    0    2    8
         6    ∞    4    0    5
        12    7   15    3    0

    Step 4

    Considering A2 as the input, derive the matrix A3 by keeping its third row and third column intact. Take k = 3, and for every other entry replace A3[i, j] with A2[i, k] + A2[k, j] whenever that sum is smaller.

    A3 =
         0    4    3    5    2
         5    0    8   10    7
         6    1    0    2    8
         6    5    4    0    5
        12    7   15    3    0

    Step 5

    Considering A3 as the input, derive the matrix A4 by keeping its fourth row and fourth column intact. Take k = 4, and for every other entry replace A4[i, j] with A3[i, k] + A3[k, j] whenever that sum is smaller.

    A4 =
         0    4    3    5    2
         5    0    8   10    7
         6    1    0    2    7
         6    5    4    0    5
         9    7    7    3    0

    Step 6

    Considering A4 as the input, derive the matrix A5 by keeping its fifth row and fifth column intact. Take k = 5, and for every other entry replace A5[i, j] with A4[i, k] + A4[k, j] whenever that sum is smaller. No entry improves in this pass, so A5 is the final matrix, containing all the shortest path distances.

    A5 =
         0    4    3    5    2
         5    0    8   10    7
         6    1    0    2    7
         6    5    4    0    5
         9    7    7    3    0

    Analysis

    From the pseudocode above, the Floyd-Warshall algorithm operates using three for loops to find the shortest distance between all pairs of vertices within a graph. Therefore, the time complexity of the Floyd-Warshall algorithm is O(n3), where n is the number of vertices in the graph. The space complexity of the algorithm is O(n2).

    Implementation

    Following is the implementation of Floyd Warshall Algorithm to find the shortest path in a graph using cost adjacency matrix –


    #include <stdio.h>

    void floyds(int b[3][3]) {
       int i, j, k;
       for (k = 0; k < 3; k++) {
          for (i = 0; i < 3; i++) {
             for (j = 0; j < 3; j++) {
                /* here 0 marks "no edge"; both edges must exist before relaxing */
                if ((b[i][k] * b[k][j] != 0) && (i != j)) {
                   if ((b[i][k] + b[k][j] < b[i][j]) || (b[i][j] == 0)) {
                      b[i][j] = b[i][k] + b[k][j];
                   }
                }
             }
          }
       }
       for (i = 0; i < 3; i++) {
          printf("Minimum Cost With Respect to Node: %d\n", i);
          for (j = 0; j < 3; j++) {
             printf("%d\t", b[i][j]);
          }
          printf("\n");
       }
    }

    int main() {
       int b[3][3] = {0};
       b[0][1] = 10;
       b[1][2] = 15;
       b[2][0] = 12;
       floyds(b);
       return 0;
    }

    Output

    Minimum Cost With Respect to Node: 0
    0	10	25	
    Minimum Cost With Respect to Node: 1
    27	0	15	
    Minimum Cost With Respect to Node: 2
    12	22	0