## Dijkstra’s Algorithm Shortest Path

Dijkstra’s algorithm is the most popular algorithm for finding the shortest paths from a given vertex in a weighted graph.
In fact, the algorithm finds the shortest paths from the start vertex to every vertex.
The algorithm combines greedy and dynamic programming approaches: it finds locally optimal paths first and uses them to build the shortest paths to farther nodes.
Let’s take a look at how it works!

### How does Dijkstra’s algorithm work?

Let’s say we need to find the shortest path from node A to node E.

There are five nodes and each edge is weighted with the cost to traverse from one node to another.
For example, it costs 5 to traverse from node A to B.

What would be the most efficient way to find out the shortest path?
In order to find the shortest path from node A to E, we first need the shortest paths to the nodes right before E – A, B, and D.
Can you see where we are going?
In order to find the shortest paths to nodes B and D (I skipped A since it’s the start node), we need the shortest paths to the nodes right before them – A and C.

Let’s take a look at each step.

From node A, we can reach nodes B, C, D, and E.
What is the shortest path from A to B? There is only one path and it costs 5.
What is the shortest path from A to C? There is only one path and it costs 1.
The paths to nodes B and C are trivial, but they are important because they are the stepping stones for finding the shortest paths to nodes farther away.

Now, based on the shortest path to node C, we can find the shortest path from node A to node D.
There are two paths – A to D directly and A to C to D.
As we can see the shortest path from A to D is A -> C -> D with the total cost 3.

Finally, we have found all the shortest paths from A to B, C, and D which are the nodes right before E.
Based on the shortest paths we have found so far, we just need to find the cheapest way to reach E.

There are a total of four possible routes from A to E.

• A -> E : 10
• A -> B -> E : 11
• A -> D -> E : 9
• A -> C -> D -> E : 8

The shortest path from A to E is A -> C -> D -> E with the cost 8.

The graph itself is pretty simple.
However, the steps we took generalize into an algorithm.
It first finds the shortest paths to the adjacent nodes of A, then builds longer paths on top of the previous best paths in each round.

• Start from the start node A
• Just like breadth-first search, find the shortest paths to the adjacent nodes B, C, D, and E.
• Make sure you keep track of those records.
• From nodes B, C, and D, find the shortest path to the node E.

### Code Example

This code example implements Dijkstra’s algorithm on the graph from the example above.

```cpp
#include <iostream>
#include <vector>
#include <limits>
#include <algorithm>

using namespace std;

struct Edge
{
int destNodeId;
int weight;

Edge(int _destNodeId, int _weight) :
destNodeId(_destNodeId), weight(_weight)
{}
};

struct Node
{
// node id which is index of vector in this case
int nodeId;

// set of edges belong to this node
vector<Edge> edges;

// indicates if the node is processed - the shortest path found.
bool processed;

Node(int _nodeId, vector<Edge> _edges) :
nodeId(_nodeId), edges(_edges), processed(false)
{}
};

struct Graph
{
// list of nodes in this graph
vector<Node> nodes;

// indicates the shortest paths up to this node so far.
vector<int> shortestDistSoFar;

// indicates parent node id for each node id, which is represented by index
vector<int> parentNodes;
};

void initializeGraph(Graph &graph)
{
// initialize node A
vector<Edge> edges;
edges.emplace_back(1, 5);
edges.emplace_back(2, 1);
edges.emplace_back(3, 4);
edges.emplace_back(4, 10);

graph.nodes.emplace_back(0, edges);

// initialize node B
edges.clear();
edges.emplace_back(4, 6);

graph.nodes.emplace_back(1, edges);

// initialize node C
edges.clear();
edges.emplace_back(3, 2);

graph.nodes.emplace_back(2, edges);

// initialize node D
edges.clear();
edges.emplace_back(4, 5);

graph.nodes.emplace_back(3, edges);

// initialize node E
edges.clear();
graph.nodes.emplace_back(4, edges);

graph.shortestDistSoFar.resize(5, numeric_limits<int>::max());
graph.parentNodes.resize(5, -1);
}

vector<Node> dijkstra(Graph &graph, int startNodeId)
{
int currNodeId = startNodeId;
// distance from A to A is 0
graph.shortestDistSoFar[currNodeId] = 0;
// the start node is its own parent, which marks the end of the path
graph.parentNodes[currNodeId] = currNodeId;

while (!graph.nodes[currNodeId].processed)
{
graph.nodes[currNodeId].processed = true;

// take a look at adjacent nodes
for_each(
graph.nodes[currNodeId].edges.begin(),
graph.nodes[currNodeId].edges.end(),
[&graph, currNodeId](const Edge &destNodeEdge)
{
// an adjacent node from current node being processed
int destNodeId = destNodeEdge.destNodeId;

// weight of the edge
int weight = destNodeEdge.weight;

// if this is shorter than the record, update it
if (graph.shortestDistSoFar[currNodeId] + weight < graph.shortestDistSoFar[destNodeId])
{
graph.parentNodes[destNodeId] = currNodeId;
graph.shortestDistSoFar[destNodeId] = graph.shortestDistSoFar[currNodeId] + weight;
}
});

// find next node to process
// need to process shortest distance first
int minDistSoFar = numeric_limits<int>::max();
for_each(
graph.nodes.begin(),
graph.nodes.end(),
[&currNodeId, &graph, &minDistSoFar](const Node &node)
{
if (!node.processed && graph.shortestDistSoFar[node.nodeId] < minDistSoFar)
{
minDistSoFar = graph.shortestDistSoFar[node.nodeId];
currNodeId = node.nodeId;
}
});
}

vector<Node> shortestPath;

// reconstruct the shortest path by walking back through parent nodes

currNodeId = 4; // node E
while (graph.parentNodes[currNodeId] != currNodeId)
{
shortestPath.push_back(graph.nodes[currNodeId]);
currNodeId = graph.parentNodes[currNodeId];
}

// make sure A is in the path
shortestPath.push_back(graph.nodes[currNodeId]);
reverse(shortestPath.begin(), shortestPath.end());
return shortestPath;
}

int main()
{
Graph graph;
initializeGraph(graph);

// dijkstra from A of which node id is 0
vector<Node> shortestPath = dijkstra(graph, 0);

for_each(
shortestPath.begin(),
shortestPath.end(),
[](const Node &node)
{
switch (node.nodeId)
{
case 0:
cout << "A -> ";
break;
case 1:
cout << "B -> ";
break;
case 2:
cout << "C -> ";
break;
case 3:
cout << "D -> ";
break;
case 4:
cout << "E ";
break;
}
});
cout << endl;
return 0;
}
```

### Performance

Time complexity: O(V^2), where V is the number of vertices.
Each round scans every node to find the unprocessed node with the shortest distance so far, which gives nested loops over the vertices.
(With a min-heap / priority queue, this can be improved to O((V + E) log V).)

### Conclusion

The algorithm works correctly as long as there are no negative weight edges.
The reason is that a negative edge could improve the shortest distance to a node that was already marked processed, and the algorithm never revisits processed nodes.
In practice, however, most graphs don’t have negative edges.
Thank you for reading the post and please let me know if you have any questions/suggestions.

## Intro to Union-Find Data Structure

Union-Find is a data structure built from trees, except that each node points to its parent instead of its children.

The union-find is very useful for set operations such as the following.
1. find – find the root of the current node
2. same component – check if two trees have the same root node (i.e., belong to the same set)
3. union (merge) – merge two trees (sets) into one

Such operations are very useful for problems like connected components – each node belongs to only one connected component – and vertex coloring.

The union-find is useful for algorithms such as minimum-spanning tree (Kruskal’s algorithm) – a minimum subset of edges forming a tree connecting all vertices.

### How is the union-find structured?

The union-find data structure represents each subset as a reverse or backward tree that a node points to the parent instead of children.
In this example, I used an array to keep track of parent nodes efficiently and for simplicity.
But you could use hashmap to deal with a more complicated node structure.

Let’s take a look at the example below.

There are two sets (two trees).
The first set has 4 nodes – 2, 1, 0, 3 – where node 2 is the root node.
The other set has 1 node – 4.

As you can see from the above I have an array to keep track of the parent of each node.
For example, the parent node of node 0 is 1 and the parent node of node 1 is 2.
You can see that the value of indices 0 and 1 is the value of their parent nodes.

### Algorithm

Let’s take a look at the algorithms for each union-find operation.

#### Find

Again, this operation checks the root node of the provided node.
For the find operation, you just need to recursively check the parent node until it reaches the root.
How do you know when it reaches the root?
In the above structure, you will see the index and the value are equal.
If you are using another data structure, which will likely be the one using a key and value, you just need to check if a key and value are the same.

#### Same Component

This operation checks if two nodes belong to the same tree, i.e., have the same root node.
This one is very easy once the ‘Find’ operation is implemented, as you just need to find the root node of each node and check whether they are the same.

#### Merge

The operation merges two trees into one.
However, merging without much thought may cause an unbalanced tree and hurt overall performance.
How should we merge the tree?
You can always link the tree that has a lower height to the higher one.
For example, if you are to merge two trees in the above picture, there are only two cases.
You can merge root node 2 to 4 which will increase the height by 1.
Or you can merge root node 4 to 2 of which the height will remain the same.
Therefore you should always merge the lower tree to the higher one.

### Code Example

```cpp
#include <vector>

using namespace std;

class UnionFind
{
int totalNodes;

// indicates parent nodes
vector<int> parents;

// indicates number of elements in a subtree
vector<int> numElts;

public:
UnionFind(int _totalNodes) : totalNodes(_totalNodes), parents(totalNodes), numElts(totalNodes)
{
for (int i = 0; i < totalNodes; ++i)
{
parents[i] = i;
numElts[i] = 1;
}
}

int find(int nodeId)
{
if (parents[nodeId] == nodeId)
{
return nodeId;
}
else
{
return find(parents[nodeId]);
}
}

bool sameComponent(int node1, int node2)
{
return find(node1) == find(node2);
}

void merge(int node1, int node2)
{
int root1 = find(node1);
int root2 = find(node2);

// already in the same component. noop
if (root1 == root2)
{
return;
}

// union by size: attach the smaller tree under the larger one
if (numElts[root1] >= numElts[root2])
{
parents[root2] = root1;
numElts[root1] += numElts[root2];
}
else
{
parents[root1] = root2;
numElts[root2] += numElts[root1];
}
}
};

```

### Performance

Find – O(log n)
It just needs to climb the tree until it reaches the root, and merging by size keeps the tree height logarithmic.

Same Component – O(log n)
It only needs to complete two find operations.

Merge – O(log n)
It only needs to complete two find operations.

### Conclusion

The union-find is a very powerful data structure that handles set operations very efficiently.
And the data structure itself is very simple too!
The data structure is used in many algorithms, such as Kruskal’s minimum spanning tree algorithm.

## How to detect a loop (cycle) in a linked list

Traversing a linked list is easy, but what would happen if there is a cycle in the linked list?
Traversal will fall into an infinite loop if it is implemented without any cycle detection.
Thankfully, there is already a known algorithm for this kind of problem.

### How to detect if a cycle exists?

Suppose there is a linked list that has a cycle like below.
Now the question is how to detect the cycle.

Let’s first think about what would happen if there is a cycle.

Iterating a linked list typically happens in a while loop with these steps.

1. Check if the node is valid (not null)
2. Process node
3. Move on to next node and do #1,2

Normally the iteration is done once it reaches the end of the list, where the next node is null.
However, it will fall into an infinite loop if there is a cycle, since no next pointer is ever null.

Then, the key is that we shouldn’t visit a node if it was already visited.
How do we detect visited nodes?
The easy answer is to have a cache (map, array or whatever) that keeps track of visited nodes.

So there will be an additional step for cycle detection

1. Check if the node is valid (not null)
2. Process node
3. Move on to next node
4. Check if this node is already visited. If yes then there is a cycle.

What is the time complexity of this problem? O(n). We only need to iterate once.
What is the space complexity of this problem? O(n). Potentially we need to keep track of all visited nodes.

### Can we do better?

We cannot do better on time complexity because you need to iterate the list at least once.
However, there is definitely a way to improve space complexity.
In fact, we can solve this problem with O(1) space complexity.

How can we do that?
Think about two people running on a circular track, where one runs twice as fast as the other.
What would happen in that case?
The faster runner will catch up to the slower one within one lap of the slower runner, because the faster runner covers the track twice while the slower runner finishes one lap.
We can apply the exact same principle to detect a cycle in a linked list.
If we iterate with two pointers, one moving twice as fast as the other, the faster pointer will catch up to the slower pointer within one lap if there is a cycle.

### How to find out where the cycle starts?

I think some visualization will do a much better job than a verbal explanation.
So let’s take a look.

The cycle may start after a few nodes rather than at the head.
The key property, which you can see in the picture, is that the number of nodes before the cycle start (counting from the head, where the two pointers begin iterating) equals the number of nodes from the meeting point to the cycle start.

The above picture shows that node 15 is where the two pointers meet.
Then, all we need to do is place a third pointer at the head of the linked list and advance both pointer 1 (the slow pointer) and the third pointer one step at a time until they meet – and the node where they meet is the start of the cycle!

### Code Example

```cpp
struct Node
{
    int val;
    Node *next;
    Node(int x) : val(x), next(0) {}
};

// returns the node where the cycle starts, or 0 if there is no cycle
Node *detectCycle(Node *head)
{
    if (!head)
    {
        return 0;
    }

    // iter1 (slow) moves one step at a time, iter2 (fast) moves two
    Node *iter1 = head;
    Node *iter2 = head;

    while (iter2 && iter2->next)
    {
        iter1 = iter1->next;
        iter2 = iter2->next->next;

        // cycle found!
        if (iter1 == iter2)
        {
            break;
        }
    }

    // no cycle - the fast pointer reached the end of the list
    if (!iter2 || !iter2->next)
    {
        return 0;
    }

    // advance a third pointer from the head; it meets the slow
    // pointer exactly at the node where the cycle starts
    Node *cycleNode = head;
    while (iter1 != cycleNode)
    {
        iter1 = iter1->next;
        cycleNode = cycleNode->next;
    }

    return cycleNode;
}
```

## Intro to Topological Sort

Topological sort is an important algorithm in DAG – directed acyclic graphs.

It orders the vertices in a line from left to right, such that the leftmost vertex can be considered the highest level (the root or entrance of the graph) and the rightmost the lowest.

In the end, topological sort tells you the order of vertices you need to go through from one node to another without missing any intermediate vertices.

There could be multiple topological sorts in the same graph.

Let’s take a look at an example for a better understanding

### Course Prerequisite Example

The course prerequisite is a great example of this kind of problem.

Let’s say there are 4 courses A, B, C, D and here are some prerequisites for each course.

1. Course C requires A, B, D
2. Course B requires A
3. Course D requires A, B
4. A is a fundamental course you need to complete before any courses

With that in mind, in order to reach from A to C, we need to follow these steps.

1. We see a path to node C directly from A but we also see B and D
2. Visit B
3. From B we see a path to C and D
4. Visit D
5. Now the order of node visit is A, B, D, C which is the correct order for course completion

However, the above steps are not complete yet for topological sort algorithm since some intuition plays a role there.
Here is the correct algorithm.

### Topological Sort Algorithm

1. Visit a node and mark it visited (but not processed yet)
2. Search any nodes connected to the node from #1 and visit only if it’s not visited yet
3. If you see any visited but not processed node then there is a cycle and you cannot find topological sort. (It will be an infinite loop!)
4. Mark the node processed and push it into stack
5. After all the nodes are processed pop each value from stack and pop order is the correct order for topological sort

### Code Example

```cpp
#include <vector>
#include <unordered_map>
#include <algorithm>
#include <utility>

using namespace std;

enum Status
{
UNDISCOVERED,
DISCOVERED,
PROCESSED
};

struct Node
{
// default status is UNDISCOVERED
Status      status = UNDISCOVERED;
vector<int> destNodeIds;
};

// return false if there is a cycle
bool findOrderHelper(int currNodeId, unordered_map<int, Node> &nodes, vector<int> &order)
{
auto it = nodes.find(currNodeId);
it->second.status = DISCOVERED;

bool canPublish = true;

for_each(
it->second.destNodeIds.begin(),
it->second.destNodeIds.end(),
[&nodes, &canPublish, &order](int destNodeId)
{
auto it = nodes.find(destNodeId);
if (it->second.status == UNDISCOVERED)
{
canPublish &= findOrderHelper(destNodeId, nodes, order);
}
else if (it->second.status == DISCOVERED)
{
canPublish = false;
}
});

it->second.status = PROCESSED;
order.push_back(currNodeId);
return canPublish;
}

vector<int> findOrder(int numCourses, vector<vector<int>>& prerequisites)
{
vector<int> order;

// nodes is a graph constructed from prerequisites vector
unordered_map<int, Node> nodes;
for_each(
prerequisites.begin(),
prerequisites.end(),
[&nodes](const vector<int> &prerequisite)
{
auto it = nodes.insert(make_pair(prerequisite.back(), Node()));
it.first->second.destNodeIds.push_back(prerequisite.front());

nodes.insert(make_pair(prerequisite.front(), Node()));
});

// now iterate nodes (graph) to find topological sort
bool canPublish = true;
for_each(
nodes.begin(),
nodes.end(),
[&nodes, &order, &canPublish](const pair<int, Node> &nodeInfo)
{
if (nodeInfo.second.status == UNDISCOVERED)
{
canPublish &= findOrderHelper(nodeInfo.first, nodes, order);
}
});

if (!canPublish)
{
order.clear();
}
else if (order.size() != numCourses)
{
// add courses that have no prerequisites and therefore never appeared in the graph
for (int i = 0; i < numCourses; ++i)
{
auto it = find(order.begin(), order.end(), i);
if (it == order.end())
{
order.push_back(i);
}
}
}

reverse(order.begin(), order.end());

return order;
}
```

## Intro to Depth First Search (DFS)

Depth first search is one of the popular graph traversal algorithms that work well for certain purposes.
As you can infer from the name DFS is quite different from BFS – breadth first search.
Therefore the applications of DFS are different from BFS.
In this post, I am going to discuss how DFS works with examples.
Please refer to my other post on BFS if you are interested.

### How does DFS work?

I am going to start by explaining DFS by comparing it with BFS.
Please note that BFS searches all the neighbors at the same level or degree before you move to the next level.
You can imagine a water ripple that starts from the center and gradually moves on to the next circle.
BFS always visits all of its neighbor nodes first before any others.

DFS, on the other hand, uses a different algorithm to traverse.
DFS always visits a newly available node if there is one, and therefore it quickly moves away from the starting point.
Basically, DFS always tries to move on to the first available neighbor node before processing the others.
It then winds back to the parent node (start node) after processing the children (neighbor) nodes.

Let’s take a look at a DFS example.

#### Example

In this example, I am going to use the following notation to mark the nodes to avoid duplicate visits.

UNDISCOVERED – node is not visited in the past
DISCOVERED – node is visited but not completely processed
PROCESSED – node is processed (you can do any custom process of the node if necessary)

I am going to use DFS to traverse the below graph starting from node 1.

From node 1, we find node 2 and move on to it.
Node 1 is marked as DISCOVERED but not PROCESSED yet.

We are at node 2 at the moment. Look for neighbors of the node.
Node 2 has two neighbors – nodes 1 and 4.
And only node 4 is marked as UNDISCOVERED at the moment.
Move on to node 4 after marking node 2 DISCOVERED.

From node 4 we see only one unvisited neighbor – node 5.
Mark node 4 DISCOVERED and move on to node 5.

Node 5 doesn’t have any unvisited neighbors.
Process node 5 (mark as PROCESSED) and wind back to the parent node – node 4.

Process node 4 (mark as PROCESSED) and wind back to the parent node – node 2.

Process node 2 (mark as PROCESSED) and wind back to the parent node – node 1.

Now, we are back at the start node – node 1 – but there is still an unvisited neighbor – node 3.
Move on to node 3 and process it, since node 3 doesn’t have any unvisited neighbors.

Wind back to the parent node and now you can mark node 1 PROCESSED.

### Internal Data Structure for DFS

DFS uses a stack, which is LIFO (Last In First Out), and that’s why it always processes the most recently found node first.
When implemented recursively you don’t actually need an explicit stack, because the call stack behaves like one.

### What is DFS used for?

As you can see, DFS searches nodes in depth first.
Thanks to this nature, DFS is good for problems like finding cycles, topological sort, etc.
I will discuss finding cycles and topological sort in a different post.

### Code Example

```cpp
#include <iostream>
#include <queue>
#include <vector>
#include <algorithm>
using namespace std;

enum NodeStatus
{
UNDISCOVERED,
DISCOVERED,
PROCESSED
};

struct Node
{
// default status is UNDISCOVERED
NodeStatus status;

// node id which is index of vector in this case
int nodeId;

// parent node id
int parentNodeId;

// set of edges belong to this node
// destination node id for this edge
vector<int> edges;
};

vector<Node> initializeGraph()
{
vector<Node> nodes(5);
for (int i = 0; i < 5; ++i)
{
nodes[i].status = UNDISCOVERED;
nodes[i].nodeId = i;
nodes[i].parentNodeId = -1;
}

// edges for node 0
nodes[0].edges.push_back(1);
nodes[0].edges.push_back(2);

// edges for node 1
nodes[1].edges.push_back(0);
nodes[1].edges.push_back(3);

// edges for node 2
nodes[2].edges.push_back(0);

// edges for node 3
nodes[3].edges.push_back(1);
nodes[3].edges.push_back(4);

// edges for node 4
nodes[4].edges.push_back(3);

return nodes;
}

void dfs(vector<Node> &graph, int nodeId)
{
cout << "current node id[" << nodeId << "] is discovered"
<< endl;
graph[nodeId].status = DISCOVERED;

for_each(
graph[nodeId].edges.begin(),
graph[nodeId].edges.end(),
[&graph, nodeId](int destNodeId)
{
if (graph[destNodeId].status == UNDISCOVERED)
{
cout << "Found next node[" << destNodeId << "] to visit"
<< endl;
graph[destNodeId].parentNodeId = nodeId;
dfs(graph, destNodeId);
}
else if (graph[destNodeId].status != PROCESSED &&
graph[destNodeId].parentNodeId != nodeId)
{
// node is discovered but not processed yet.
// this likely indicates cycle because this is not
// windback case
}

});

graph[nodeId].status = PROCESSED;
}

int main()
{
vector<Node> graph = initializeGraph();

// 0 is start point
dfs(graph, 0);
return 0;
}
```

## Intro to Breadth First Search (BFS)

When dealing with a graph, it might be necessary to traverse it.
Graph traversal is a systematic method of visiting the nodes of a graph.
Breadth first search is one of the most popular graph traversal algorithms that work very well for certain purposes.
Today, I am going to introduce breadth first search in this post.

### How does breadth first search traverse?

BFS is a graph traversal algorithm to visit vertices from one starting point in a certain way.
How does it visit other nodes? It first visits its nearest neighbors and processes them before reaching any further nodes.
And it marks nodes as it visits so the same node won’t be visited more than once.
How it marks really depends on each implementation but I will use the following notation in this post.

UNDISCOVERED – node is not visited in the past
DISCOVERED – node is visited but not completely processed
PROCESSED – node is processed (you can do any custom process of the node if necessary)

#### BFS Example

Now, let’s take a look at the below example.
Let’s say we have a graph to traverse from node 1 using BFS.

Here are the algorithm steps for BFS

1. Start from node 1 (or any node you would like to start), look at neighbors of node 1 and save the node 2 and 3 (you typically use a queue for this purpose. Enqueue for save and dequeue for pop) for next processing. Mark node 1 processed, node 2 and 3 as discovered so they are not enqueued more than once.
2. Dequeue node 2 and enqueue neighbors of it which are node 1 and 4 but only node 4 will be enqueued since node 1 is already processed. Mark node 2 processed and node 4 discovered.
3. Dequeue node 3 and enqueue neighbors which are node 1 and node 4 but nothing will be enqueued since node 1 is already processed and node 4 is discovered – already enqueued. Mark node 3 processed
4. Dequeue node 4 and enqueue neighbors which are the node 2, 3 and 5 but only node 5 will be enqueued since 2 and 3 are already processed. Mark node 4 processed
5. Dequeue node 5 and mark it processed (nothing to enqueue – node 4 is already processed)

Since it might be hard to fully understand with the above algorithm let’s take a look at each step.

Starting from node 1 take a look at unvisited neighbors.
By the way, an unvisited node means its status is UNDISCOVERED.
Enqueue unvisited neighbor nodes and mark them DISCOVERED.
Now node 1 should be marked as PROCESSED.

Now dequeue node 2 (it could be node 3 depending on the enqueue order but it is node 2 here because I enqueued it first) and check unvisited neighbors of the node.
Enqueue node 4 and mark it DISCOVERED.
Although node 1 is the neighbor of node 2 we don’t enqueue it because it’s already marked PROCESSED.
Node 2 should be marked as PROCESSED.

Dequeue node 3 and look for any unvisited neighbor node.
It looks like node 3 has two neighbors – nodes 1 and 4.
However, we don’t enqueue any node because node 1 is marked PROCESSED and node 4 is marked DISCOVERED.
Mark node 3 as PROCESSED.

Dequeue node 4 and look for any unvisited neighbor node which is node 5 only.
Enqueue node 5 and mark it DISCOVERED.
Node 4 is also processed.

Dequeue node 5 and there isn’t any unvisited or processed nodes at the moment.
Mark node 5 as PROCESSED and breadth first search is complete.

### Internal Data Structure for BFS

BFS uses a queue, which is FIFO (First In First Out). As you can already imagine, it processes all of the neighbor nodes of the current node before moving on.
Therefore it radiates out gradually from the starting node.

### What is BFS used for?

Now you might wonder what the breadth first search algorithm is used for.
There are many BFS applications but I will just list two common ones here.

1. Shortest path of an unweighted graph. Due to its nature BFS guarantees to find the shortest path of the unweighted graph. All you need to do is keep track of the parent node.
2. The number of connected components in a graph. All the nodes may be connected for some graphs but there may be a graph that may have multiple components that are not connected. BFS could be very useful to find the number of connected components in a graph

### Code Example

Here I provided a code example, finding the shortest path from the node.

```cpp
#include <iostream>
#include <queue>
#include <vector>
#include <algorithm>
using namespace std;

enum NodeStatus
{
UNDISCOVERED,
DISCOVERED,
PROCESSED
};

struct Node
{
// default status is UNDISCOVERED
NodeStatus status;

// node id which is index of vector in this case
int nodeId;

// parent node id
int parentNodeId;

// set of edges belong to this node
// destination node id for this edge
vector<int> edges;
};

// this is really for testing purpose.
// I initialized graph as the above example.
vector<Node> initializeGraph()
{
vector<Node> nodes(5);
for (int i = 0; i < 5; ++i)
{
nodes[i].status = UNDISCOVERED;
nodes[i].nodeId = i;
}

// edges for node 0
nodes[0].edges.push_back(1);
nodes[0].edges.push_back(2);

// edges for node 1
nodes[1].edges.push_back(0);
nodes[1].edges.push_back(3);

// edges for node 2
nodes[2].edges.push_back(0);
nodes[2].edges.push_back(3);

// edges for node 3
nodes[3].edges.push_back(1);
nodes[3].edges.push_back(2);
nodes[3].edges.push_back(4);

// edges for node 4
nodes[4].edges.push_back(3);

return nodes;
}

// print each node as traversing
void bfs(vector<Node> &graph)
{
queue<int> q;
// start BFS from node 0
q.push(graph[0].nodeId);

while (!q.empty())
{
int currentNodeId = q.front();
q.pop();

// mark it processed first to prevent enqueuing itself
// for self-loop case
graph[currentNodeId].status = PROCESSED;

for_each(
graph[currentNodeId].edges.begin(),
graph[currentNodeId].edges.end(),
[&graph, &q, currentNodeId](int destNodeId)
{
if (graph[destNodeId].status == UNDISCOVERED)
{
q.push(destNodeId);
graph[destNodeId].status = DISCOVERED;
graph[destNodeId].parentNodeId = currentNodeId;
}

// you can do any edge processing you want here
});

//cout << "processed node[" << currentNodeId << "]" << endl;
}
}

void printShortestPath(int currentNodeId, const vector<Node> &graph)
{
if (currentNodeId == 0)
{
cout << "root node[" << currentNodeId << "]" << endl;
}
else
{
cout << "node[" << currentNodeId << "]" << endl;
printShortestPath(graph[currentNodeId].parentNodeId, graph);
}
}

int main()
{
vector<Node> graph = initializeGraph();

bfs(graph);

// print shortest path from node 0 -> node 4
printShortestPath(graph.size() - 1, graph);
return 0;
}
```

### Conclusion

Breadth first search is a popular algorithm that is very good for certain applications such as finding the shortest path of an unweighted graph.
There is another graph traversal algorithm called depth first search which I will discuss in another post.

## Intro to Graphs

A graph is an abstract representation of a complex system such as networks, human relationships, and any organization.
A graph is very important because it can represent and simplify any complex relationships in a visual format.
It is pretty amazing that messy applied problems can often be described with simple and classical graph properties.

In this post, I am going to discuss some popular graphs and brief examples.

### Graph Examples

A graph G can be represented as a set of vertices and edges, written G = (V, E).
Now, let’s take a look at various graphs and their properties.

#### Undirected vs. Directed Graph

A graph G = (V, E) is undirected if every edge applies equally in both directions.
In other words, if one edge lets you travel between two vertices in both directions, it’s an undirected graph.
If not, it is a directed graph.
In the undirected graph above, you can visit A from B, C, and D, whereas in the directed graph there isn’t any vertex that can visit A.

#### Weighted vs. Unweighted Graph

If each edge in a graph is assigned a numerical value, it is considered a weighted graph.
Each edge may have a different weight, so the cost of visiting a node may differ even when the number of edges traversed is the same.
If the edges carry no numerical values, it is an unweighted graph. (All edges are assumed to be of equal weight.)

As you can see in the graph above, the shortest path from A to C in the unweighted graph is the direct edge A-C.
However, the story is different in the weighted graph.
Since the direct path costs 10, it is cheaper to visit B and then C, which costs 5 (4 + 1).

#### Cyclic vs. Acyclic Graph

An acyclic graph does not have any cycles.
There is a cycle in a graph if you can visit a vertex and come back to the same vertex while traversing.

In the above cyclic graph example you can see that starting from vertex A you can visit C, B, and can come back to A which is a cycle.
But there isn’t any cycle in the acyclic graph.

### Code Example

Now, let’s discuss what kind of data structure we can use in order to represent a graph.
First, we need information about all the vertices and edges associated with each vertex.
In that sense, it would be reasonable to define a user-defined type for each edge.
And each vertex will know which edges are associated with it.

Here is a sample code representing data structure for a graph.
There are many other possibilities; this is just one way to represent it.

```
#include <vector>
using namespace std;

struct Edge
{
    // destination vertex id for this edge
    // it could be any other data structure to represent the destination
    int vertexId;

    // weight of the edge if applicable; 0 means unweighted
    int weight;
};

struct Vertex
{
    // all the edges associated with this vertex
    vector<Edge> edges;
};

struct Graph
{
    // here I use the index of each vertex as its vertex id for simplicity
    // but you can represent it differently with your own scheme
    vector<Vertex> vertices;

    // is this a directed graph?
    bool directed;
};
```

### Conclusion

There are many other types, such as simple vs. non-simple and sparse vs. dense, which are not discussed here.
I just wanted to show very basic concepts of important graphs and their properties.

## Intro to Mergesort Algorithm

Mergesort is a classic example of a divide and conquer sorting algorithm.
Divide and conquer is a technique that solves a problem by dividing it into smaller subproblems that are easier to deal with.
The results of the small problems then help solve the bigger subproblems and eventually the entire problem.
Let’s take a look at how mergesort works.

### Mergesort Algorithm

The following are the steps of the mergesort algorithm.

1. Recursively divide the array into smaller sub-arrays until each array has a single element.
2. Recursively merge the sub-arrays in sorted order until all the sub-arrays are merged. Note that merging two sorted arrays is essentially interleaving the elements from the front of each array based on the sort order

Let’s take a look at an example picture to help better understand.

The above example shows you dividing (partitioning) steps.
It divides the initial array until a single element is left in a sub-array.

Now it just needs to merge each sub-array based on sorting order, either incremental or decremental, like the below picture.

Although I used an array in the above example, you can also sort a linked list with mergesort.
In some sense, it is more efficient to apply mergesort to a linked list because you don’t need an extra buffer to store the temporary result.
The temporary buffer is necessary when sorting an array since you need to store the merged sub-arrays, as you see in the picture.
However, it is not necessary for a linked list since you just decouple a node from the list and merge it later.

### Code Example (Array)

```
#include <vector>
using namespace std;

void merge(vector<int> &array, int low, int middle, int high)
{
    // construct array1, low to middle
    vector<int> array1;
    for (int i = low; i <= middle; ++i)
    {
        array1.push_back(array[i]);
    }

    // construct array2, middle + 1 to high
    vector<int> array2;
    for (int i = middle + 1; i <= high; ++i)
    {
        array2.push_back(array[i]);
    }

    size_t array1Index = 0;
    size_t array2Index = 0;
    int resultIdx = low;

    // interleave the two sorted halves back into the original array
    while (array1Index < array1.size() && array2Index < array2.size())
    {
        // <= keeps the sort stable for equal elements
        if (array1[array1Index] <= array2[array2Index])
        {
            array[resultIdx++] = array1[array1Index++];
        }
        else
        {
            array[resultIdx++] = array2[array2Index++];
        }
    }

    // copy whatever remains in either half
    for (size_t i = array1Index; i < array1.size(); ++i)
    {
        array[resultIdx++] = array1[i];
    }

    for (size_t i = array2Index; i < array2.size(); ++i)
    {
        array[resultIdx++] = array2[i];
    }
}

void mergesort(vector<int> &array, int low, int high)
{
    if (low < high)
    {
        // written this way to avoid overflow of (low + high) / 2
        int middle = low + (high - low) / 2;
        mergesort(array, low, middle);
        mergesort(array, middle + 1, high);
        merge(array, low, middle, high);
    }
}
```

```
struct Node
{
    int val;
    Node *next;
};

// merge a single node l1 into the already sorted list l2
// and return the head of the merged list
Node *mergeNode(Node *l1, Node *l2)
{
    // no need to merge if l1 is null
    if (!l1)
    {
        return l2;
    }

    // base case: nothing to merge into
    if (!l2)
    {
        l1->next = 0;
        return l1;
    }

    // merge two sorted linked lists
    // (a list with one node and a list with multiple nodes)
    // l1 has one node and is smaller than all of l2,
    // so just prepend l1 to l2 and return
    if (l1->val < l2->val)
    {
        l1->next = l2;
        return l1;
    }

    // need to find a place to insert l1
    // this is essentially merging two sorted linked lists
    Node *head = l2;
    while (l2->next)
    {
        // found the correct place to insert l1
        if (l1->val < l2->next->val)
        {
            Node *next = l2->next;
            l2->next = l1;
            l1->next = next;
            return head;
        }

        l2 = l2->next;
    }

    // l1 is the largest value, append it at the end
    l2->next = l1;
    l1->next = 0;
    return head;
}

// recursively sort the tail of the list,
// then merge the head node into the sorted tail
Node *mergesortList(Node *head)
{
    if (!head || !head->next)
    {
        return head;
    }

    Node *rest = mergesortList(head->next);
    head->next = 0;
    return mergeNode(head, rest);
}
```

Although it does not partition as efficiently as the array version, it is still mergesort in the sense that it merges sorted linked lists until done.
Here are the mergesort steps for a linked list.

1. Recurse through the list until the end
2. While winding back up, merge two sorted lists at each step
3. The two lists are a list with one node (the current node) and the sorted list built up during the unwinding

If you would like, you can also mergesort a linked list with the help of an extra array.

1. Use an array to hold each node of the linked list
2. Iterate the array and start merging the linked lists – index 0 and 1, then 2 and 3, or 0, 1, 2, 3, whichever you prefer (this is the same idea as above except it uses an array)

### Performance

The performance of mergesort is O(n log n).
However, please note that you may need an extra buffer to store a temporary sorted result which may add copying cost.
You can avoid the extra buffer if you use linked list instead of array.

### Conclusion

Mergesort is a classical divide and conquer sorting algorithm.
Although it is very easy to understand and implement, it may cost more than other sorting algorithms due to the extra buffer and copying cost.
But its performance is consistently bounded by O(n log n), unlike quicksort.

## Intro to Quicksort Algorithm

Quicksort is one of the most popular sorting algorithms.
Although there are quite a few sorting algorithms such as mergesort and heapsort, quicksort has a unique property that is noteworthy.
In general, quicksort performs better than the others under certain conditions.

Let’s take a look at the basics of quicksort!

### How does quicksort work?

Quicksort sorts the array by partitioning it recursively, making it one of the divide and conquer sorting algorithms.
Quicksort first selects an element (ideally at random), called the pivot, and partitions the remaining elements around it.
All the elements that are less than the pivot are placed on the left side of the pivot and the others on the right side.
In practice, placing the smaller elements on the left is enough because all the leftovers naturally end up on the right side of the pivot.
It means you need to keep track of the index where the next smaller item will be placed during partitioning.

Let’s take a look at an example first, which I believe will help you understand much better than words alone.
Here is the unsorted array.

#### Quicksort partitioning Algorithm

The core of quicksort is partitioning, since the sort is just partitioning applied recursively.
Here are the partitioning algorithm steps.

• Select an element as the pivot (ideally at random)
• Iterate the array from the low to the high boundary, placing each element that is less than the pivot on the left side of the partition
• After the iteration is done, place the pivot element at the partition index
• Now you have two partitions
  • The left partition has elements that are less than the pivot value
  • The right partition has elements that are greater than or equal to the pivot value

#### Partitioning Algorithm Example

25 is selected as the pivot value.
Please note that we have to keep track of the index where the next smaller item will be placed, which is called the partition index in this example.
The iterator index is literally an iterator that will visit every single element.

Now we need to do the following operations for each element, starting with 10.

• Is 10 < 25 true?
  • If true
    • Swap the element with the one at the partition index. In this case, 10 stays in the same place
    • Increment the partition index by 1 since that slot is filled
    • Increment the iterator index by 1 to check the next item
  • If not true
    • Only increment the iterator index by 1 to check the next item

We perform the same operation on each following element.

• Is 3 < 25 true? Yes, so 3 is swapped with the element at the partition index (again staying in place) and both the partition index and the iterator index are incremented
• Is 200 < 25 true? No, so only the iterator index is incremented. Please note that we are only placing elements that are less than the pivot value

The same check is repeated for the rest of the array.

• Is 55 < 25 true? No, so only the iterator index is incremented
• Is 1 < 25 true? Yes, so 1 is swapped to the partition index and both indexes are incremented
• Is 2 < 25 true? Yes, so 2 is swapped to the partition index and both indexes are incremented
• Is 3 < 25 true? Yes, so 3 is swapped to the partition index and both indexes are incremented
• Is 88 < 25 true? No, so only the iterator index is incremented
• Is 71 < 25 true? No, so only the iterator index is incremented

After the iteration, the pivot is swapped with the element at the partition index.
Now we have partitioned the array.
The left partition has all the elements that are smaller than the pivot (25) and the right partition has all the elements that are greater than or equal to it.

#### Quicksort Algorithm

Quicksort becomes very easy once partitioning is done.
You just need to recursively apply partitioning to both left and right partitions.
This is why quicksort is a divide and conquer algorithm: it divides the problem into subsets (partitions) and solves each one.
It will be clear once you see the code example.

### Code Example

```
#include <vector>
#include <utility>
using namespace std;

/**
 * @brief  partition the array from low to high index
 *
 * @param  array  array to partition
 * @param  low    lower boundary index
 * @param  high   upper boundary index
 *
 * @return index of the pivot after partitioning
 */
int partition(vector<int> &array, int low, int high)
{
    int partitionIdx = low;

    // here I just chose the last element but it really should be random
    // or chosen some other way to keep quicksort from hitting its worst case of O(n^2)
    int pivot = high;

    for (int i = low; i < high; ++i)
    {
        // current value < array[pivot], swap it into the left partition
        if (array[i] < array[pivot])
        {
            swap(array[i], array[partitionIdx++]);
        }
    }

    // place the pivot in its correct place. this value will never be moved again
    swap(array[partitionIdx], array[pivot]);

    return partitionIdx;
}

/**
 * @brief  quicksort the array from low to high index
 *
 * @param  array  array to sort
 * @param  low    lower boundary index
 * @param  high   upper boundary index
 */
void quicksort(vector<int> &array, int low, int high)
{
    if (low < high)
    {
        int partitionIdx = partition(array, low, high);
        quicksort(array, low, partitionIdx - 1);
        quicksort(array, partitionIdx + 1, high);
    }
}
```

### Performance

The key to quicksort is partitioning, and the key to partitioning is how well you select the pivot element.
Do you remember I mentioned that quicksort performs better than others under certain conditions?
That really depends on pivot selection.
If a random pivot is selected then the partitions are likely to be balanced, and quicksort will perform better than many other algorithms because it sorts all the elements in place.
(Think about mergesort, which requires an extra buffer and copying.)
The average-case performance is, therefore, O(n log n).

However, in the worst case, where the selected pivot is always the smallest or largest element in its range (for example, always picking the last element of an already sorted array), the performance is O(n^2), which is no better than bubble sort.
It happens because every partition is extremely unbalanced: excluding the pivot, one side holds (n – 1) elements and the other side none.
That’s why the worst-case performance is O(n^2).
For that reason, there are pivot-selection strategies such as choosing a random element or the median of three.

### Conclusion

The key to quicksort is balanced partitioning based on a good pivot.
The partitioning algorithm itself is also useful on its own, for example for finding the median. (Please note that finding the median doesn’t require the entire array to be sorted.)

## Intro to Binary Search Algorithm

Let’s say there is an unsorted array and you need to find certain values.
How would you search for those values?
You can’t avoid iterating the entire array every time you are searching since the array is unsorted.
Wouldn’t this be a very inefficient way of searching?
It may be fine if the size of the array is pretty small. However, what if there are one million elements in the array?
You probably do not want to iterate one million elements every time you search because it’s a waste of time.

Binary search algorithm is a very efficient search algorithm.
It requires the array to be sorted, which has an upfront cost.
However, once the array is sorted it’s extremely fast and easy to find an element in the array.
Let’s take a look at how binary search works in this post.

### How does binary search work?

First of all, please note that an array needs to be sorted in order to apply binary search; you will see why as you read.
Basically, binary search eliminates half of the search candidates with every comparison, so you don’t have to check every element in the array.
I believe an example will explain the concept better than words alone.

Let’s say there is a sorted array with 12 elements.
And you would like to find the index of the element 81 if it exists.

#### Binary Search Algorithm Steps

The following are the steps of the binary search algorithm.

1. Find the middle point and check if this element is equal to the search key 81
2. If yes, we found the element and return the index
3. If the key is smaller than the middle then we only need to search the left side of the middle point since the key will never exist on the right side.
4. If the key is greater than the middle then we only need to search the right side of the middle point since the key will never exist on the left side.
5. Repeat steps 1–4 until the element is found or the search range becomes empty

Although it is pretty simple it might not be clear to understand at first glance.

#### Binary Search Example

Let’s take a look at how we can find 81 from the above array.
First, we need to check the middle point which is located at index 5 with the value 25.

81 is not the same as 25.
Since 81 is greater than 25 we don’t need to search the left side of 25 which is from index 0 to index 4.
We throw away the left side and need to search the right side only.
What is the index range we need to search this time? It is from the index 6 to 11.

The middle element this time is located at index 8 with value 51.
But 51 is still not the same as 81.
Since 81 is still greater than 51 we can throw away the left side of the middle point (index 8).
81 simply can’t be found in the left side of the index 8.
What is the next index range we need to search for? It is from the index 9 to 11.

The middle element this time is located at index 10 with value 81.
We finally found the element at index 10!
How many times did we need to compare to find the element? Only 3 times.
It is indeed a very efficient algorithm to search!

Now you might ask what happens if the array doesn’t have an element we are looking for.
In that case, the search range will eventually become empty (the low index passes the high index) and binary search returns an error code, -1 in this case.
It will be clear once you see the code example below.

### Code Example

```
#include <vector>
using namespace std;

/**
 * @brief  This function searches for an element 'key' in the array using
 *         the binary search algorithm
 *
 * @param  array  array to search
 * @param  low    lower boundary index
 * @param  high   upper boundary index
 * @param  key    key to search for in the array
 *
 * @return index of the found element, -1 if the key doesn't exist
 */
int binarySearch(const vector<int> &array, int low, int high, int key)
{
    // weren't able to find the element up to this point
    if (low > high)
    {
        return -1;
    }

    // written this way to avoid overflow of (low + high) / 2
    int middle = low + (high - low) / 2;
    if (array[middle] == key)
    {
        return middle;
    }
    else if (array[middle] > key)
    {
        // throw away the right half of the search range
        return binarySearch(array, low, middle - 1, key);
    }
    else
    {
        // throw away the left half of the search range
        return binarySearch(array, middle + 1, high, key);
    }
}
```

### Performance

The search performance of binary search is O(log n) since it cuts the search range in half with every comparison.

### Conclusion

Binary search is a great algorithm to find an element.
However, please note that it works best with data structures such as arrays, where you can jump to the middle element in constant time and easily throw away half of the search candidates.
You can still use binary search on a linked list, but it will not be as efficient as on an array.