## Maximum Repeating Substring

This is an easy-level coding interview problem from LeetCode. The problem asks you to find the maximum number of consecutive repetitions of a given word inside a sequence string. Please refer to the problem description for more detail. For the solution, you iterate over the sequence. At each index, you check whether the substring starting there matches the word. If it matches, continue from the end of the match to count further repetitions. If it doesn't match, move on to the next index.

I provide solutions in C++ and Python 3. An iterative solution works as well, but I present recursive solutions here; they follow the same principle. Note that I keep track of the start index of the comparison because it is needed when the substring stops matching.

```
void maxRepeating(const string &seq, int currIdx, int matchIdx, const string &word, int numRepeat, int &maxRepeat)
{
    // base case: not enough characters left to match. record the max repeat.
    if (seq.size() - matchIdx < word.size())
    {
        maxRepeat = max(maxRepeat, numRepeat);
    }
    // word doesn't match: record the max repeat, then restart comparing from
    // currIdx + 1, where currIdx is the start of the current match attempt
    else if (seq.substr(matchIdx, word.size()) != word)
    {
        maxRepeat = max(maxRepeat, numRepeat);
        maxRepeating(seq, currIdx + 1, currIdx + 1, word, 0, maxRepeat);
    }
    // word matches: keep checking. currIdx is carried along to remember the start of the pattern
    else
    {
        maxRepeating(seq, currIdx, matchIdx + word.size(), word, numRepeat + 1, maxRepeat);
    }
}

int maxRepeating(string sequence, string word) {
    int maxRepeat = 0;
    maxRepeating(sequence, 0, 0, word, 0, maxRepeat);
    return maxRepeat;
}
```

```
class Solution:
    result = 0

    def max_repeating(self, sequence, curr_idx, match_idx, word, num_repeat):
        compare = sequence[match_idx : match_idx + len(word)]
        # base case: not enough characters left to match
        if len(compare) < len(word):
            self.result = max(self.result, num_repeat)
            return
        # word doesn't match: record the count and restart from curr_idx + 1
        elif compare != word:
            self.result = max(self.result, num_repeat)
            self.max_repeating(sequence, curr_idx + 1, curr_idx + 1, word, 0)
            return

        # word matches: keep checking from the end of the match
        self.max_repeating(sequence, curr_idx, match_idx + len(word), word, num_repeat + 1)

    def maxRepeating(self, sequence: str, word: str) -> int:
        self.result = 0  # reset so repeated calls don't reuse a stale result
        self.max_repeating(sequence, 0, 0, word, 0)
        return self.result
```
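
For comparison, here is a short iterative sketch in Python (the function name is mine, not from the solutions above): it simply checks whether the word repeated k + 1 times is still a substring of the sequence.

```python
def max_repeating(sequence: str, word: str) -> int:
    # grow k while word repeated (k + 1) times still appears in sequence
    k = 0
    while word * (k + 1) in sequence:
        k += 1
    return k
```

For example, `max_repeating("ababc", "ab")` returns 2.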

## Longer Contiguous Segments of Ones than Zeros

This is an easy-level LeetCode problem. Please refer to the problem description for more detail.

Basically, the idea is to find the longest consecutive run of 1's and the longest run of 0's, and return True if the 1's run is longer.

There are a couple of ways to solve the problem.

The first one is a Pythonic way. Since the input is a binary string containing only 1's and 0's, you can split the string on each of them. After the split, it is easy to find the longest string in the resulting list. But this solution is really only applicable to Python.

```
class Solution:
    def checkZeroOnes(self, s: str) -> bool:
        return max(len(run) for run in s.split('0')) > max(len(run) for run in s.split('1'))
```

Another solution is to explicitly iterate over the string and count the runs of 1's and 0's, which works in any language. You can do it in one pass, as the solution below shows.

```
class Solution:
    def checkZeroOnes(self, s: str) -> bool:
        ones_start_idx = 0
        num_ones = 0
        zeros_start_idx = 0
        num_zeros = 0
        for idx, digit in enumerate(s):
            if digit == '0':
                ones_start_idx = idx + 1
                num_zeros = max(num_zeros, idx - zeros_start_idx + 1)
            else:
                zeros_start_idx = idx + 1
                num_ones = max(num_ones, idx - ones_start_idx + 1)

        return num_ones > num_zeros
```

## Delete Characters to Make Fancy String

This is an easy-level LeetCode problem, which you can solve with a stack.

Essentially, the result cannot contain 3 or more consecutive duplicate letters. There are many possible approaches, but using a stack seems to be the most elegant. Basically, you append each letter only if the stack has fewer than 2 elements or the letter doesn't match both of the top two entries in the stack. This way, you only accumulate letters that never form a run of 3 or more consecutive duplicates.

```
class Solution:
    def makeFancyString(self, s: str) -> str:
        fancy = []
        for letter in s:
            if len(fancy) < 2 or fancy[-1] != letter or fancy[-2] != letter:
                fancy.append(letter)

        return "".join(fancy)
```

## Check Whether Two Strings are Almost Equivalent

This is a LeetCode interview problem that requires a hash table of letter frequencies. Please refer to the problem description for more detail.

Let me first show you the solution code.

```
from collections import defaultdict

class Solution:
    def checkAlmostEquivalent(self, word1: str, word2: str) -> bool:
        def default_val():
            return [0, 0]
        count = defaultdict(default_val)
        for letter in word1:
            count[letter][0] += 1

        for letter in word2:
            count[letter][1] += 1

        for letter, freq in count.items():
            if abs(freq[0] - freq[1]) > 3:
                return False

        return True
```

A hash table is necessary to keep track of the frequency of each letter in the two words. Note that I used a list of size 2 as the value – the first entry holds the frequency in word1 and the second the frequency in word2.

Once you have collected the frequencies, you just need to check whether any frequency difference is greater than 3.
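
As a side note, the same check can be sketched with `collections.Counter`, using `subtract` to get the per-letter difference directly (this variant and its function name are mine, not part of the solution above):

```python
from collections import Counter

def check_almost_equivalent(word1: str, word2: str) -> bool:
    # positive counts come from word1, negative counts from word2
    diff = Counter(word1)
    diff.subtract(Counter(word2))
    return all(abs(d) <= 3 for d in diff.values())
```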

## Intro to dynamic programming

This article is a simple intro to dynamic programming. Dynamic programming is typically considered among the hardest techniques to work with. However, there is a systematic way to approach these problems, and we just need to practice that method.

Dynamic programming is best suited for situations where duplicate computations can be eliminated by caching results. Now let's take a look at what steps are necessary to solve a problem.

### Steps to solve DP

• Define the state
• List out all the state transitions
• Implement a recursive solution
• Memoize (top-down)
• Make it bottom-up

#### Define the state

What are states in a dynamic programming context?

States are a set of parameters that define the system. Basically, we are trying to find the least number of parameters that change the state of the system. These are the inputs you are passing into the recursive function. Most of the time, it is just one or two parameters that define the state of the system.

After you define the state, you need to define the cost function and its return value. This is the quantity we are trying to optimize.

##### Example – knapsack problem

Given a set of items, each with a weight and a value, determine which items to include in a bag so that the total weight is less than or equal to a given limit while the total value is as large as possible.

states:
1. W – available capacity of the bag (the limit)
2. i – index of the item being considered

Cost function: knapsack(W, i), returns the largest total value that is less than or equal to the limit

#### Define the state transitions and optimal choice

At this step, you need to work out the recurrence relation: which parameters you pass to the recursive function, and how you combine the results.

In order to find the recurrence relation, you first need to identify the base cases:
1. The very last state, where there are no more items to process.
2. Some parameter is 0 or has reached a final value beyond which we cannot proceed.

After identifying the base cases, we need to define the transitions. We identify all the valid candidates and try each of them one at a time (backtracking). Then we change the parameters based on the selected candidate and call the recursive function. After trying all the candidates, we combine the results to find the optimal value to return.

##### Example – knapsack problem

Base cases:
1. If W = 0, the knapsack is full. There is no space left for additional items.
2. If i = -1 (we iterate from the last item down to 0), we have processed all the items and nothing is left.
In both cases, the function returns 0.

Transitions:
At each item, we have two decisions to make.
1. Take the i-th item into the knapsack; then W becomes W – weight[i]. This requires weight[i] <= W.
2. Skip the i-th item; W remains the same.

This is the recurrence relation you get.

```
                            knapsack(W, i)
                           /              \
value[i] + knapsack(W - weight[i], i-1)    knapsack(W, i-1)

then, the optimal choice would be:
max(value[i] + knapsack(W - weight[i], i-1), knapsack(W, i-1))

Recurrence relation
Base cases:
knapsack(0, i) = 0
knapsack(W, -1) = 0

if weight[i] <= W:
    knapsack(W, i) = max(value[i] + knapsack(W - weight[i], i - 1), knapsack(W, i - 1))
else:
    knapsack(W, i) = knapsack(W, i - 1)
```

#### Implement a recursive solution

This is the Python version, but it should be easy enough to understand.

```
def knapsack(W, i, weights, values):
    if W == 0 or i == -1:
        return 0
    if weights[i] <= W:
        return max(
            values[i] + knapsack(W - weights[i], i - 1, weights, values),
            knapsack(W, i - 1, weights, values)
        )
    else:
        return knapsack(W, i - 1, weights, values)
```

#### Memoize (top-down)

The naive recursive solution is slow – O(2^n) – but we can improve the performance with memoization. How do we use it?
We cache the results of subproblems to avoid computing the same subproblem twice.

If there is only one parameter, we can use a 1D array, using the value of the parameter as the index to store the result.
If there are two parameters, then we use a 2D array – a matrix.
Please remember to initialize the array with a default value (here, -1 marks entries that are not computed yet).

```
def knapsack(W, i, weights, values, dp):
    if W == 0 or i == -1:
        return 0
    # check the dp table to avoid duplicate computation
    if dp[W][i] != -1:
        return dp[W][i]

    # not computed yet
    result = None
    if weights[i] <= W:
        result = max(
            values[i] + knapsack(W - weights[i], i - 1, weights, values, dp),
            knapsack(W, i - 1, weights, values, dp)
        )
    else:
        result = knapsack(W, i - 1, weights, values, dp)

    dp[W][i] = result
    return result
```

Now you have completed a DP problem with the top-down approach.

#### Bottom up

If we already solved the problem top-down, why would we solve it bottom-up as well?
There are a couple of reasons:
1. You might get a stack overflow error due to the many recursive calls
2. Bottom-up is typically faster than top-down since it avoids recursion overhead

In the bottom-up approach, you have to think in the reverse direction. You start at the leaves/bottom of the recursion tree, solve all the subproblems starting from the leaves, and make your way up. It means we start with the base cases and compute upward to the final state. We use one for loop per parameter and fill in all the values of the table, building solutions to bigger problems from smaller ones.

Now, we need to decide which subproblem to solve first. It actually depends on how the parameters change in the recurrence relation. If a parameter decreases in the subproblem, then we start from 0 and go all the way up to the maximum value of that parameter.

```
Base cases (items are 1-indexed here)
if i = 0, dp[w][0] = 0
if w = 0, dp[0][i] = 0

Bottom-up equation
if weights[i-1] <= w:
    dp[w][i] = max(values[i-1] + dp[w - weights[i-1]][i-1], dp[w][i-1])
else:
    dp[w][i] = dp[w][i-1]
```
```
def knapsack(W, weights, values):
    dp = [[0 for i in range(0, len(weights) + 1)] for j in range(0, W + 1)]
    for i in range(1, len(weights) + 1):
        for w in range(0, W + 1):
            if weights[i-1] <= w:
                dp[w][i] = max(dp[w][i-1], dp[w - weights[i-1]][i-1] + values[i-1])
            else:
                dp[w][i] = dp[w][i-1]
    return dp[W][len(weights)]
```
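
To see the table-filling in action, here is a small self-contained run. The sample weights and values are made up for illustration; the best choice within capacity 7 is the pair of items with weights 3 and 4, for a total value of 9.

```python
def knapsack(W, weights, values):
    # dp[w][i]: best value using the first i items within capacity w
    dp = [[0] * (len(weights) + 1) for _ in range(W + 1)]
    for i in range(1, len(weights) + 1):
        for w in range(W + 1):
            if weights[i - 1] <= w:
                dp[w][i] = max(dp[w][i - 1], dp[w - weights[i - 1]][i - 1] + values[i - 1])
            else:
                dp[w][i] = dp[w][i - 1]
    return dp[W][len(weights)]

print(knapsack(7, [1, 3, 4, 5], [1, 4, 5, 7]))  # 9
```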

At first glance, the bottom-up solution doesn't look intuitive at all. However, after solving the problem top-down and defining a proper recurrence, it makes total sense. Dynamic programming is hard, but you will definitely get better with practice!

## Dijkstra’s Algorithm Shortest Path

Dijkstra’s algorithm is the most popular algorithm to find the shortest paths from a certain vertex in a weighted graph.
In fact, the algorithm will find the shortest paths to every vertex from the start vertex.
The algorithm takes both a greedy and a dynamic programming approach: it finds locally optimal paths first and uses them to build up the shortest paths.
Let’s take a look at how it works!

### How does Dijkstra’s algorithm work?

Let’s say we need to find the shortest path from node A to node E.

There are five nodes and each edge is weighted with the cost to traverse from a node to a node.
For example, it costs 5 to traverse from node A to B.

What would be the most efficient way to find the shortest path?
In order to find the shortest path from node A to E, we need to find the shortest paths to the nodes right before E – A, B, and D.
Can you see where we are going?
In order to find the shortest paths to nodes B and D (I skipped A since it's the start node), we need to find the shortest paths to the nodes right before them – A and C.

Let’s take a look at each step.

From node A, we can reach nodes B, C, D, and E.
What is the shortest path from A to B? There is only one path, and it costs 5.
What is the shortest path from A to C? There is only one path, and it costs 1.
Nodes B and C are very simple cases, but they are important because they are the stepping stones to finding the shortest paths to nodes farther away.

Now, based on the shortest path to node C, we can find the shortest path from node A to node D.
There are two candidate paths – A to D directly, and A to C to D.
As we can see, the shortest path from A to D is A -> C -> D with total cost 3.

Finally, we have found all the shortest paths from A to B, C, and D, which are the nodes right before E.
Based on the shortest paths found so far, we just need to find the cheapest way to reach E.

There are a total of four possible routes from A to E.

• A -> E : 10
• A -> B -> E : 11
• A -> D -> E : 9
• A -> C -> D -> E : 8

The shortest path from A to E is A -> C -> D -> E with the cost 8.

The graph itself is pretty simple.
However, the steps we took to find the shortest path follow a pattern:
the algorithm first finds the shortest paths to the adjacent nodes of A, then builds longer paths on top of the previous best paths in each round.

• Start from the start node A
• Just like breadth-first search, find the shortest paths to the adjacent nodes, which are B, C, D, and E.
• Make sure you keep track of those records.
• From nodes B, C, and D, find the shortest path to node E.

### Code Example

This code example has the implementation of Dijkstra’s algorithm which uses the graph in the above example.

```
#include <iostream>
#include <vector>
#include <limits>
#include <algorithm>

using namespace std;

struct Edge
{
    int destNodeId;
    int weight;

    Edge(int _destNodeId, int _weight) :
        destNodeId(_destNodeId), weight(_weight)
    {}
};

struct Node
{
    // node id, which is the index into the vector in this case
    int nodeId;

    // set of edges belonging to this node
    vector<Edge> edges;

    // indicates if the node is processed - the shortest path found
    bool processed;

    Node(int _nodeId, vector<Edge> _edges) :
        nodeId(_nodeId), edges(_edges), processed(false)
    {}
};

struct Graph
{
    // list of nodes in this graph
    vector<Node> nodes;

    // shortest distance found so far to each node
    vector<int> shortestDistSoFar;

    // parent node id for each node id, which is represented by index
    vector<int> parentNodes;
};

void initializeGraph(Graph &graph)
{
    // initialize node A
    vector<Edge> edges;
    edges.emplace_back(1, 5);
    edges.emplace_back(2, 1);
    edges.emplace_back(3, 4);
    edges.emplace_back(4, 10);

    graph.nodes.emplace_back(0, edges);

    // initialize node B
    edges.clear();
    edges.emplace_back(4, 6);

    graph.nodes.emplace_back(1, edges);

    // initialize node C
    edges.clear();
    edges.emplace_back(3, 2);

    graph.nodes.emplace_back(2, edges);

    // initialize node D
    edges.clear();
    edges.emplace_back(4, 5);

    graph.nodes.emplace_back(3, edges);

    // initialize node E
    edges.clear();
    graph.nodes.emplace_back(4, edges);

    graph.shortestDistSoFar.resize(5, numeric_limits<int>::max());
    graph.parentNodes.resize(5, -1);
}

vector<Node> dijkstra(Graph &graph, int startNodeId)
{
    int currNodeId = startNodeId;
    // distance from A to A is 0
    graph.shortestDistSoFar[currNodeId] = 0;
    // A is its own parent, which marks the start of the path
    graph.parentNodes[currNodeId] = currNodeId;

    while (!graph.nodes[currNodeId].processed)
    {
        graph.nodes[currNodeId].processed = true;

        // take a look at adjacent nodes
        for_each(
            graph.nodes[currNodeId].edges.begin(),
            graph.nodes[currNodeId].edges.end(),
            [&graph, currNodeId](const Edge &destNodeEdge)
            {
                // an adjacent node of the current node being processed
                int destNodeId = destNodeEdge.destNodeId;

                // weight of the edge
                int weight = destNodeEdge.weight;

                // if this is shorter than the record, update it
                if (graph.shortestDistSoFar[currNodeId] + weight < graph.shortestDistSoFar[destNodeId])
                {
                    graph.parentNodes[destNodeId] = currNodeId;
                    graph.shortestDistSoFar[destNodeId] = graph.shortestDistSoFar[currNodeId] + weight;
                }
            });

        // find the next node to process
        // need to process the node with the shortest distance first
        int minDistSoFar = numeric_limits<int>::max();
        for_each(
            graph.nodes.begin(),
            graph.nodes.end(),
            [&currNodeId, &graph, &minDistSoFar](const Node &node)
            {
                if (!node.processed && graph.shortestDistSoFar[node.nodeId] < minDistSoFar)
                {
                    minDistSoFar = graph.shortestDistSoFar[node.nodeId];
                    currNodeId = node.nodeId;
                }
            });
    }

    vector<Node> shortestPath;

    // recover the shortest path by walking back through the parent nodes

    currNodeId = 4;
    while (graph.parentNodes[currNodeId] != currNodeId)
    {
        shortestPath.push_back(graph.nodes[currNodeId]);
        currNodeId = graph.parentNodes[currNodeId];
    }

    // make sure A is in the path
    shortestPath.push_back(graph.nodes[currNodeId]);
    reverse(shortestPath.begin(), shortestPath.end());
    return shortestPath;
}

int main()
{
    Graph graph;
    initializeGraph(graph);

    // dijkstra from A, of which the node id is 0
    vector<Node> shortestPath = dijkstra(graph, 0);

    for_each(
        shortestPath.begin(),
        shortestPath.end(),
        [](const Node &node)
        {
            switch (node.nodeId)
            {
            case 0:
                cout << "A -> ";
                break;
            case 1:
                cout << "B -> ";
                break;
            case 2:
                cout << "C -> ";
                break;
            case 3:
                cout << "D -> ";
                break;
            case 4:
                cout << "E ";
                break;
            }
        });
    cout << endl;
    return 0;
}
```

### Performance

Time complexity: O(n^2).
That's because, for each node, the algorithm scans all the nodes to find the next unprocessed node with the smallest distance.
(You can simply see the nested loop in the code.)
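
As a side note, the linear scan for the next node can be replaced with a min-heap, which brings the complexity down to O((V + E) log V). Here is a sketch of that variant in Python on the same example graph; the use of node names instead of numeric ids is my choice for readability.

```python
import heapq

def dijkstra(graph, start):
    # graph: {node: [(neighbor, weight), ...]}
    dist = {node: float("inf") for node in graph}
    dist[start] = 0
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue  # stale heap entry, a shorter path was already found
        for neighbor, weight in graph[node]:
            if d + weight < dist[neighbor]:
                dist[neighbor] = d + weight
                heapq.heappush(heap, (dist[neighbor], neighbor))
    return dist

# the example graph from this post
graph = {
    "A": [("B", 5), ("C", 1), ("D", 4), ("E", 10)],
    "B": [("E", 6)],
    "C": [("D", 2)],
    "D": [("E", 5)],
    "E": [],
}
print(dijkstra(graph, "A")["E"])  # 8
```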

### Conclusion

The algorithm works correctly as long as there aren't any negative-weight edges.
The reason is that a large negative edge could improve the shortest-path record of a node that was already processed.
But in practice, most graphs won't have negative edges.
Thank you for reading the post and please let me know if you have any questions/suggestions.

## How to detect a loop (cycle) in a linked list

Traversing a linked list is easy, but what happens if there is a cycle in the list?
Traversal will fall into an infinite loop if it is implemented without cycle-detection code.
Thankfully, there is a well-known algorithm for this kind of problem.

### How to detect if a cycle exists?

Suppose there is a linked list that has a cycle like below.
Now the question is how to detect the cycle.

Let’s first think about what would happen if there is a cycle.

Iterating a linked list typically happens in a while loop with these steps.

1. Check if the node is valid (not null)
2. Process node
3. Move on to next node and do #1,2

Normally, iteration ends once it reaches the end of the list, where the next node is null.
However, it will fall into an infinite loop if there is a cycle, since there won't be any null next pointer.

Then, the key insight is that we shouldn't visit a node that was already visited.
How do we detect visited nodes?
The easy answer is to keep a cache (map, array, or whatever) that keeps track of visited nodes.

So there will be an additional step for cycle detection:

1. Check if the node is valid (not null)
2. Process node
3. Move on to next node
4. Check if this node is already visited. If yes then there is a cycle.

What is the time complexity of this problem? O(n). We only need to iterate once.
What is the space complexity of this problem? O(n). Potentially we need to keep track of all visited nodes.

### Can we do better?

We cannot do better on time complexity because you need to iterate the list at least once.
However, there is definitely a way to improve space complexity.
In fact, we can solve this problem with O(1) space complexity.

How can we do that?
Think about two people running on a circular track, and say one person runs twice as fast as the other.
What would happen in that case?
The faster runner will catch the slower runner within one lap of the slower runner, since the faster runner covers the track twice while the slower runner finishes one lap.
We can apply the exact same principle to detect a cycle in a linked list.
If we have two pointers and one iterates twice as fast as the other, the faster pointer will catch up with the slower pointer if and only if there is a cycle.
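
The two-pointer detection can be sketched in a few lines of Python (the `ListNode` class and function name here are minimal stand-ins of my own, for illustration):

```python
class ListNode:
    def __init__(self, val):
        self.val = val
        self.next = None

def has_cycle(head):
    # slow moves one step at a time, fast moves two; they meet iff there is a cycle
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False

# build 1 -> 2 -> 3 -> back to 2
a, b, c = ListNode(1), ListNode(2), ListNode(3)
a.next, b.next, c.next = b, c, b
print(has_cycle(a))  # True
```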

### How to find out where the cycle starts?

I think some visualization will do a much better job than a verbal explanation.
So let’s take a look.

The cycle may start after a few nodes.
But as you can see in the picture, the number of nodes before the cycle start equals the number of nodes from the meeting point forward to the cycle start node.

The above picture shows that node 15 is the location where the two pointers meet.
Then, all we need to do is place another pointer at the start of the linked list and iterate both pointer 1 (the slow pointer) and this third pointer one step at a time until they meet – and the node where they meet is the cycle start node!

### Code Example

```
struct Node
{
    int val;
    Node *next;
    Node(int x) : val(x), next(0) {}
};

// returns the node where the cycle starts, or 0 if there is no cycle
Node *detectCycle(Node *head)
{
    if (!head || !head->next)
    {
        return 0;
    }

    // slow pointer moves one step at a time, fast pointer two steps
    Node *iter1 = head->next;
    Node *iter2 = head->next->next;

    while (iter1 && iter2)
    {
        // cycle found!
        if (iter1 == iter2)
        {
            break;
        }

        iter1 = iter1->next;
        iter2 = iter2->next ? iter2->next->next : 0;
    }

    // no cycle
    if (!iter1 || !iter2)
    {
        return 0;
    }

    // advance a third pointer from the head and the slow pointer from the
    // meeting point; the node where they meet is the cycle start
    Node *cycleNode = head;
    while (iter1 != cycleNode)
    {
        iter1 = iter1->next;
        cycleNode = cycleNode->next;
    }

    return cycleNode;
}
```

## Intro to Topological Sort

Topological sort is an important algorithm for DAGs – directed acyclic graphs.

It orders the vertices in a line from left to right, such that the leftmost vertex can be considered the highest level (the root or entrance of the graph) and the rightmost the lowest.

In the end, a topological sort tells you the order of vertices you need to go through from one node to another without missing any vertices in between.

There can be multiple valid topological sorts of the same graph.

Let's take a look at an example for a better understanding.

### Course Prerequisite Example

The course prerequisite is a great example of this kind of problem.

Let’s say there are 4 courses A, B, C, D and here are some prerequisites for each course.

1. Course C requires A, B, D
2. Course B requires A
3. Course D requires A, B
4. A is a fundamental course you need to complete before any other course

With that in mind, in order to reach from A to C, we need to follow these steps.

1. We see a path to node C directly from A but we also see B and D
2. Visit B
3. From B we see a path to C and D
4. Visit D
5. Now the order of node visit is A, B, D, C which is the correct order for course completion

However, the above steps are not the complete topological sort algorithm, since some intuition played a role there.
Here is the full algorithm.

### Topological Sort Algorithm

1. Visit a node and mark it visited (but not processed yet)
2. Search the nodes connected to the node from #1 and visit each only if it's not visited yet
3. If you see a node that is visited but not yet processed, then there is a cycle and no topological sort exists. (It would be an infinite loop!)
4. Mark the node processed and push it onto a stack
5. After all the nodes are processed, pop each value from the stack; the pop order is a correct topological order
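
Applied to the course example, the steps above can be sketched in Python (graph layout, function name, and status encoding are my own choices; edges point from a prerequisite to its dependent course):

```python
def topological_sort(graph):
    # status: 0 = undiscovered, 1 = discovered, 2 = processed
    status = {node: 0 for node in graph}
    stack = []

    def visit(node):
        status[node] = 1
        for neighbor in graph[node]:
            if status[neighbor] == 0:
                if not visit(neighbor):
                    return False
            elif status[neighbor] == 1:
                return False  # discovered but not processed: cycle
        status[node] = 2
        stack.append(node)
        return True

    for node in graph:
        if status[node] == 0 and not visit(node):
            return None  # cycle, no topological sort exists
    return stack[::-1]  # reversed pop order of the stack

# A before B, C, D; B before C, D; D before C
courses = {"A": ["B", "C", "D"], "B": ["C", "D"], "C": [], "D": ["C"]}
print(topological_sort(courses))  # ['A', 'B', 'D', 'C']
```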

### Code Example

```
#include <vector>
#include <unordered_map>
#include <algorithm>

using namespace std;

enum Status
{
    UNDISCOVERED,
    DISCOVERED,
    PROCESSED
};

struct Node
{
    Status      status = UNDISCOVERED;
    vector<int> destNodeIds;
};

// returns false if there is a cycle
bool findOrderHelper(int currNodeId, unordered_map<int, Node> &nodes, vector<int> &order)
{
    auto it = nodes.find(currNodeId);
    it->second.status = DISCOVERED;

    bool canPublish = true;

    for_each(
        it->second.destNodeIds.begin(),
        it->second.destNodeIds.end(),
        [&nodes, &canPublish, &order](int destNodeId)
        {
            auto destIt = nodes.find(destNodeId);
            if (destIt->second.status == UNDISCOVERED)
            {
                canPublish &= findOrderHelper(destNodeId, nodes, order);
            }
            else if (destIt->second.status == DISCOVERED)
            {
                // discovered but not processed: cycle
                canPublish = false;
            }
        });

    it->second.status = PROCESSED;
    order.push_back(currNodeId);
    return canPublish;
}

vector<int> findOrder(int numCourses, vector<vector<int>> &prerequisites)
{
    vector<int> order;

    // nodes is a graph constructed from the prerequisites vector
    unordered_map<int, Node> nodes;
    for_each(
        prerequisites.begin(),
        prerequisites.end(),
        [&nodes](const vector<int> &prerequisite)
        {
            auto it = nodes.insert(make_pair(prerequisite.back(), Node()));
            it.first->second.destNodeIds.push_back(prerequisite.front());

            nodes.insert(make_pair(prerequisite.front(), Node()));
        });

    // now iterate the nodes (graph) to find a topological sort
    bool canPublish = true;
    for_each(
        nodes.begin(),
        nodes.end(),
        [&nodes, &order, &canPublish](const pair<const int, Node> &nodeInfo)
        {
            if (nodeInfo.second.status == UNDISCOVERED)
            {
                canPublish &= findOrderHelper(nodeInfo.first, nodes, order);
            }
        });

    if (!canPublish)
    {
        order.clear();
    }
    else if ((int)order.size() != numCourses)
    {
        // some courses don't have any prerequisites
        for (int i = 0; i < numCourses; ++i)
        {
            auto it = find(order.begin(), order.end(), i);
            if (it == order.end())
            {
                order.push_back(i);
            }
        }
    }

    reverse(order.begin(), order.end());

    return order;
}
```

## Intro to Depth First Search (DFS)

Depth first search is one of the popular graph traversal algorithms that works well for certain purposes.
As you can infer from the name, DFS is quite different from BFS – breadth first search.
Therefore, the applications of DFS are different from those of BFS.
In this post, I am going to discuss how DFS works with examples.
Please refer to my other post on BFS if you are interested.

### How does DFS work?

I am going to start by explaining DFS in comparison with BFS.
Recall that BFS searches all the neighbors at the same level or degree before moving to the next level.
You can imagine a water ripple that starts from the center and gradually moves out to the next circle.
BFS always visits all of its neighbor nodes first before any others.

DFS, on the other hand, uses a different traversal strategy.
DFS always visits an available new node if there is one, and therefore it quickly moves away from the starting point.
Basically, DFS always tries to move on to the first available neighbor node before processing the others.
It then winds back to the parent node (start node) after its children (neighbor nodes) are processed.

Let's take a look at a DFS example.

#### Example

In this example, I am going to use the following notation to mark the nodes to avoid duplicate visits.

UNDISCOVERED – node is not visited in the past
DISCOVERED – node is visited but not completely processed
PROCESSED – node is processed (you can do any custom process of the node if necessary)

I am going to use DFS to traverse the below graph starting from node 1.

From node 1, we find node 2 and move to it.
Node 1 is marked as DISCOVERED but not PROCESSED yet.

We are at node 2 at the moment. Look for neighbors of the node.
Node 2 has two neighbors – nodes 1 and 4.
Only node 4 is marked as UNDISCOVERED at the moment.
Move on to node 4 after marking node 2 DISCOVERED.

From node 4, we see only one unvisited neighbor – node 5.
Mark node 4 DISCOVERED and move on to node 5.

Node 5 doesn't have any unvisited neighbors.
Process node 5 (mark it PROCESSED) and wind back to the parent node – node 4.

Process node 4 (mark it PROCESSED) and wind back to the parent node – node 2.

Process node 2 (mark it PROCESSED) and wind back to the parent node – node 1.

Now we are at the start node – node 1 – but there is still an unvisited neighbor – node 3.
Move on to node 3 and process it, since node 3 doesn't have any unvisited neighbors.

Wind back to the parent node and now you can mark node 1 PROCESSED.

### Internal Data Structure for DFS

DFS uses a stack, which is LIFO (last in, first out), and that's why it always processes the most recently found node first.
When implemented recursively, you don't need an explicit stack because the recursive calls behave like one.

### Where do you use DFS for?

As you can see, DFS searches nodes depth-first.
Thanks to this nature, DFS is good for problems like cycle detection, topological sort, etc.
I discuss cycle detection and topological sort in a different post.

### Code Example

```
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

enum NodeStatus
{
    UNDISCOVERED,
    DISCOVERED,
    PROCESSED
};

struct Node
{
    // default status is UNDISCOVERED
    NodeStatus status;

    // node id, which is the index into the vector in this case
    int nodeId;

    // parent node id
    int parentNodeId;

    // set of edges belonging to this node;
    // each entry is the destination node id of an edge
    vector<int> edges;
};

vector<Node> initializeGraph()
{
    vector<Node> nodes(5);
    for (int i = 0; i < 5; ++i)
    {
        nodes[i].status = UNDISCOVERED;
        nodes[i].nodeId = i;
        nodes[i].parentNodeId = -1;
    }

    // edges for node 0
    nodes[0].edges.push_back(1);
    nodes[0].edges.push_back(2);

    // edges for node 1
    nodes[1].edges.push_back(0);
    nodes[1].edges.push_back(3);

    // edges for node 2
    nodes[2].edges.push_back(0);

    // edges for node 3
    nodes[3].edges.push_back(1);
    nodes[3].edges.push_back(4);

    // edges for node 4
    nodes[4].edges.push_back(3);

    return nodes;
}

void dfs(vector<Node> &graph, int nodeId)
{
    cout << "current node id[" << nodeId << "] is discovered"
         << endl;
    graph[nodeId].status = DISCOVERED;

    for_each(
        graph[nodeId].edges.begin(),
        graph[nodeId].edges.end(),
        [&graph, nodeId](int destNodeId)
        {
            if (graph[destNodeId].status == UNDISCOVERED)
            {
                cout << "Found next node[" << destNodeId << "] to visit"
                     << endl;
                graph[destNodeId].parentNodeId = nodeId;
                dfs(graph, destNodeId);
            }
            else if (graph[destNodeId].status != PROCESSED &&
                     graph[nodeId].parentNodeId != destNodeId)
            {
                // node is discovered but not processed yet.
                // this likely indicates a cycle because this is not
                // a windback to the parent
            }
        });

    graph[nodeId].status = PROCESSED;
}

int main()
{
    vector<Node> graph = initializeGraph();

    // 0 is the start point
    dfs(graph, 0);
    return 0;
}
```

## Intro to Breadth First Search (BFS)

When dealing with a graph, it might be necessary to traverse it.
Graph traversal is a systematic method to visit every vertex of the graph.
Breadth first search is one of the most popular graph traversal algorithms and works very well for certain purposes.
Today, I am going to introduce breadth first search in this post.

### How does breadth first search traverse?

BFS is a graph traversal algorithm that visits vertices from one starting point in a certain way.
How does it visit other nodes? It first visits its nearest neighbors and processes them before reaching any further nodes.
It marks nodes as it visits them so the same node won't be visited more than once.
How it marks nodes really depends on the implementation, but I will use the following notation in this post.

UNDISCOVERED – node is not visited in the past
DISCOVERED – node is visited but not completely processed
PROCESSED – node is processed (you can do any custom process of the node if necessary)

#### BFS Example

Now, let’s take a look at the below example.
Let’s say we have a graph to traverse from node 1 using BFS.

Here are the algorithm steps for BFS:

1. Start from node 1 (or any node you would like to start from). Look at the neighbors of node 1 and save nodes 2 and 3 for later processing (you typically use a queue for this purpose: enqueue to save, dequeue to pop). Mark node 1 processed and nodes 2 and 3 discovered so they are not enqueued more than once.
2. Dequeue node 2 and enqueue its neighbors, nodes 1 and 4 – but only node 4 is enqueued since node 1 is already processed. Mark node 2 processed and node 4 discovered.
3. Dequeue node 3 and enqueue its neighbors, nodes 1 and 4 – but nothing is enqueued since node 1 is already processed and node 4 is discovered (already enqueued). Mark node 3 processed.
4. Dequeue node 4 and enqueue its neighbors, nodes 2, 3 and 5 – but only node 5 is enqueued since nodes 2 and 3 are already processed. Mark node 4 processed.
5. Dequeue node 5 and mark it processed (there is nothing to enqueue – node 4 is already processed).
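The steps above can be sketched in a compact form. Below is a minimal, self-contained sketch of the same queue-driven loop; note it uses 0-based node ids (node 1 in the steps is node 0 here), and the `bfsOrder` helper name and adjacency-list representation are my own choices for illustration.

```#include <cassert>
#include <queue>
#include <vector>
using namespace std;

// Returns the order in which BFS visits nodes, starting from `start`.
// `adj` is an adjacency list: adj[u] holds the neighbors of node u.
vector<int> bfsOrder(const vector<vector<int>> &adj, int start)
{
    vector<bool> discovered(adj.size(), false);
    vector<int> order;
    queue<int> q;

    q.push(start);
    discovered[start] = true; // mark on enqueue so a node is queued only once

    while (!q.empty())
    {
        int u = q.front();
        q.pop();
        order.push_back(u); // "process" the node here

        for (int v : adj[u])
        {
            if (!discovered[v])
            {
                discovered[v] = true;
                q.push(v);
            }
        }
    }
    return order;
}

int main()
{
    // same shape as the example graph, 0-based: 0-1, 0-2, 1-3, 2-3, 3-4
    vector<vector<int>> adj = {{1, 2}, {0, 3}, {0, 3}, {1, 2, 4}, {3}};
    vector<int> order = bfsOrder(adj, 0);
    assert((order == vector<int>{0, 1, 2, 3, 4}));
    return 0;
}```

Running it confirms the visit order from the walkthrough: the start node first, then both of its neighbors, then the next layer.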

Since the algorithm steps alone might be hard to fully grasp, let’s take a look at each step.

Starting from node 1, take a look at its unvisited neighbors.
By the way, an unvisited node means its status is UNDISCOVERED.
Enqueue unvisited neighbor nodes and mark them DISCOVERED.
Now node 1 should be marked as PROCESSED.

Now dequeue node 2 (it could be node 3 depending on the enqueue order but it is node 2 here because I enqueued it first) and check unvisited neighbors of the node.
Enqueue node 4 and mark it DISCOVERED.
Although node 1 is the neighbor of node 2 we don’t enqueue it because it’s already marked PROCESSED.
Node 2 should be marked as PROCESSED.

Dequeue node 3 and look for any unvisited neighbor node.
It looks like node 3 has two neighbors – nodes 1 and 4.
However, we don’t enqueue any node because node 1 is marked PROCESSED and node 4 is marked DISCOVERED.
Mark node 3 as PROCESSED.

Dequeue node 4 and look for any unvisited neighbor node, which is node 5 only.
Enqueue node 5 and mark it DISCOVERED.
Mark node 4 as PROCESSED as well.

Dequeue node 5; its only neighbor, node 4, is already PROCESSED, so there is nothing to enqueue.
Mark node 5 as PROCESSED and the breadth first search is complete.

### Internal Data Structure for BFS

BFS uses a queue, which is FIFO (First In, First Out). As you can already imagine, it processes all of the neighbor nodes of the current node before moving on.
Therefore the search radiates out from the starting node one layer at a time.
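As a quick illustration of that FIFO behavior, `std::queue` pops elements in exactly the order they were pushed, which is what makes nearer nodes get processed before farther ones:

```#include <cassert>
#include <queue>
using namespace std;

int main()
{
    queue<int> q;
    // enqueue in order 1, 2, 3
    q.push(1);
    q.push(2);
    q.push(3);

    // dequeue yields the same order: first in, first out
    assert(q.front() == 1); q.pop();
    assert(q.front() == 2); q.pop();
    assert(q.front() == 3); q.pop();
    assert(q.empty());
    return 0;
}```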

### What is BFS used for?

Now you might wonder where the breadth first search algorithm is used.
There are many BFS applications but I will just list two common ones here.

1. Shortest path in an unweighted graph. Due to its nature, BFS is guaranteed to find the shortest path in an unweighted graph. All you need to do is keep track of each node’s parent.
2. The number of connected components in a graph. In some graphs all the nodes are connected, but a graph may also have multiple components that are not connected to each other. BFS is very useful for counting the connected components of such a graph.
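The second application can be sketched briefly: run BFS from every node that is still unvisited and count how many times a fresh search has to be started. Each fresh start corresponds to one component. This is a minimal sketch; the `countComponents` name and adjacency-list input are my own choices for illustration.

```#include <cassert>
#include <queue>
#include <vector>
using namespace std;

// Counts connected components of an undirected graph given as an adjacency list.
int countComponents(const vector<vector<int>> &adj)
{
    vector<bool> discovered(adj.size(), false);
    int components = 0;

    for (int start = 0; start < (int)adj.size(); ++start)
    {
        if (discovered[start])
            continue;

        // an undiscovered node starts a new component; BFS marks all of it
        ++components;
        queue<int> q;
        q.push(start);
        discovered[start] = true;

        while (!q.empty())
        {
            int u = q.front();
            q.pop();
            for (int v : adj[u])
            {
                if (!discovered[v])
                {
                    discovered[v] = true;
                    q.push(v);
                }
            }
        }
    }
    return components;
}

int main()
{
    // two components: {0, 1, 2} and {3, 4}
    vector<vector<int>> adj = {{1}, {0, 2}, {1}, {4}, {3}};
    assert(countComponents(adj) == 2);
    return 0;
}```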

### Code Example

Here is a code example that finds the shortest path from the starting node using BFS.

```#include <iostream>
#include <queue>
#include <vector>
#include <algorithm>
using namespace std;

enum NodeStatus
{
UNDISCOVERED,
DISCOVERED,
PROCESSED
};

struct Node
{
// default status is UNDISCOVERED
NodeStatus status;

// node id which is index of vector in this case
int nodeId;

// parent node id
int parentNodeId;

// set of edges belong to this node
// destination node id for this edge
vector<int> edges;
};

// this is really for testing purposes.
// I initialized the graph as in the above example.
vector<Node> initializeGraph()
{
vector<Node> nodes(5);
for (int i = 0; i < 5; ++i)
{
nodes[i].status = UNDISCOVERED;
nodes[i].nodeId = i;
nodes[i].parentNodeId = -1;
}

// edges for node 0
nodes[0].edges.push_back(1);
nodes[0].edges.push_back(2);

// edges for node 1
nodes[1].edges.push_back(0);
nodes[1].edges.push_back(3);

// edges for node 2
nodes[2].edges.push_back(0);
nodes[2].edges.push_back(3);

// edges for node 3
nodes[3].edges.push_back(1);
nodes[3].edges.push_back(2);
nodes[3].edges.push_back(4);

// edges for node 4
nodes[4].edges.push_back(3);

return nodes;
}

// print each node as traversing
void bfs(vector<Node> &graph)
{
queue<int> q;
q.push(graph[0].nodeId);

while (!q.empty())
{
int currentNodeId = q.front();
q.pop();

// mark it processed first to prevent enqueuing itself
// for self-loop case
graph[currentNodeId].status = PROCESSED;

for_each(
graph[currentNodeId].edges.begin(),
graph[currentNodeId].edges.end(),
[&graph, &q, currentNodeId](int destNodeId)
{
if (graph[destNodeId].status == UNDISCOVERED)
{
q.push(destNodeId);
graph[destNodeId].status = DISCOVERED;
graph[destNodeId].parentNodeId = currentNodeId;
}

// you can do any edge processing you want here
});

cout << "processed node[" << currentNodeId << "]" << endl;
}
}

void printShortestPath(int currentNodeId, const vector<Node> &graph)
{
if (currentNodeId == 0)
{
cout << "root node[" << currentNodeId << "]" << endl;
}
else
{
cout << "node[" << currentNodeId << "]" << endl;
printShortestPath(graph[currentNodeId].parentNodeId, graph);
}
}

int main()
{
vector<Node> graph = initializeGraph();

bfs(graph);

// print shortest path from node 0 -> node 4
printShortestPath(graph.size() - 1, graph);
return 0;
}```

### Conclusion

Breadth first search is a popular algorithm that works very well for certain applications, such as finding the shortest path in an unweighted graph.
There is another graph traversal algorithm called depth first search, which I will discuss in another post.