1. Introduction
Algorithms are at the heart of computer science. They are a set of instructions that help computers solve problems or perform specific tasks. Algorithms have been around since the time of the ancient Greeks, who developed methods for solving mathematical problems. Over the centuries, algorithms have evolved into complex mathematical formulas and computer programs that make our lives easier.
In the modern world, algorithms are everywhere. They are used in search engines, social media, e-commerce, healthcare, and more. Algorithms can help us find information, make decisions, and solve problems faster and more efficiently than ever before.
The importance of algorithms in computer science cannot be overstated. They enable computers to process and analyze large amounts of data, solve complex problems, and automate tasks. With the rise of big data, the Internet of Things, and cloud computing, algorithms have become an essential part of our daily lives. They allow us to make sense of the massive amounts of information available to us and make informed decisions based on the data.
In this post, we will discuss common types of algorithms, such as sorting and searching algorithms, and explain how they work. Additionally, we will examine the importance of algorithm analysis in computer science and discuss how to measure the efficiency of an algorithm. Finally, we will look at real-world applications of algorithms in various industries.
All the algorithm implementations used in this blog post can be found in the algorithms repository.
2. Types of Algorithms
There are many different types of algorithms, each designed to solve a specific class of problems. To understand the breadth and depth of the field, it’s helpful to categorize algorithms into different types based on their purpose, structure, and underlying techniques. Here, we’ll explore some of the most common types of algorithms and their uses.
2.1 Sorting algorithms
Sorting algorithms are techniques used to arrange data elements in ascending or descending order. The main goal of sorting is to make searching easier, more efficient, and faster. There are many sorting algorithms available, and they can be broadly categorized into two types: comparison-based sorting and non-comparison-based sorting.
The main difference between comparison-based and non-comparison-based sorting is the way they evaluate and compare data elements. In comparison-based sorting, elements are compared with each other using operators like greater-than and less-than to determine the order in which they should appear. Non-comparison-based sorting, on the other hand, does not rely on these comparison operators. Instead, it sorts data elements based on their inherent properties, such as their numeric value or frequency of appearance. Non-comparison-based algorithms are often faster than comparison-based algorithms, but they can only be used for certain types of data and may require more memory.
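To make the distinction concrete, here is a small illustrative snippet (not from the post’s repository) that orders the same list both ways: Python’s built-in sorted() is comparison-based, while the counting approach tallies occurrences of small non-negative integers and never compares two elements directly.

```python
# The same data sorted two ways.
data = [3, 1, 4, 1, 5, 9, 2, 6]

# Comparison-based: Python's built-in sorted() (Timsort) compares elements.
comparison_sorted = sorted(data)

# Non-comparison-based: tally occurrences, then emit each value count times.
counts = [0] * (max(data) + 1)
for value in data:
    counts[value] += 1
counting_sorted = [value for value, c in enumerate(counts) for _ in range(c)]

print(comparison_sorted)  # [1, 1, 2, 3, 4, 5, 6, 9]
print(counting_sorted)    # [1, 1, 2, 3, 4, 5, 6, 9]
```

Note that the counting approach only works here because the values are small non-negative integers; the comparison-based sort works on any orderable data.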
The comparison-based algorithms include bubble sort, selection sort, insertion sort, quicksort, merge sort, heap sort, and shell sort, while the non-comparison-based algorithms include counting sort, radix sort, and bucket sort. Each algorithm has its own strengths and weaknesses that make it suitable for specific use cases.
2.2 Algorithm Implementations
2.2.1 Bubble sort
Bubble sort is a simple comparison-based sorting algorithm that repeatedly steps through a list, compares adjacent elements, and swaps them if they are in the wrong order. The algorithm gets its name from the way smaller elements “bubble” to the top of the list while larger elements “sink” to the bottom.
In the example below, we define a function called bubble_sort that takes an unsorted array as input and returns a sorted array using the bubble sort algorithm.
Here’s a pseudocode implementation of the bubble sort algorithm:
```
for i from 0 to n-1:
    for j from 0 to n-i-2:
        if arr[j] > arr[j+1]:
            # swap elements
            temp = arr[j]
            arr[j] = arr[j+1]
            arr[j+1] = temp
```
Here’s a Python implementation of the bubble sort algorithm, tested on an unsorted array [3, 5, 1, 4, 2]:
```python
def bubble_sort(arr):
    """
    The bubble_sort function takes an array of integers as input and returns a sorted version of the array.
    The function uses the bubble sort algorithm to sort the elements in ascending order.

    :param arr: Pass in the array to be sorted
    :return: A sorted array
    :doc-author: Rabii Elbeji
    """
    n = len(arr)
    # Traverse through all array elements
    for i in range(n):
        # Keep a flag to identify if any swaps were made during this iteration.
        # If no swaps were made, the array is already sorted and we can exit the loop.
        swap_made = False
        # Last i elements are already in place, so we only need to iterate through unsorted elements
        for j in range(0, n - i - 1):
            # Swap if the element found is greater than the next element, and set swap_made flag to True
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swap_made = True
        # If no swaps were made during this iteration, the array is already sorted and we can exit the loop
        if not swap_made:
            break
    return arr


# Test the function
arr = [3, 5, 1, 4, 2]
sorted_arr = bubble_sort(arr)
print(sorted_arr)
```
Let’s walk through the bubble sort algorithm step by step with the array [3, 5, 1, 4, 2]:
Initialize the array: [3, 5, 1, 4, 2]
- First iteration:
- Compare 3 and 5. Since 3 is smaller than 5, no swap is made. Array is still [3, 5, 1, 4, 2]
- Compare 5 and 1. Since 5 is greater than 1, swap is made. Array is now [3, 1, 5, 4, 2]
- Compare 5 and 4. Since 5 is greater than 4, swap is made. Array is now [3, 1, 4, 5, 2]
- Compare 5 and 2. Since 5 is greater than 2, swap is made. Array is now [3, 1, 4, 2, 5]
- Second iteration:
- Compare 3 and 1. Since 3 is greater than 1, swap is made. Array is now [1, 3, 4, 2, 5]
- Compare 3 and 4. Since 3 is smaller than 4, no swap is made. Array is still [1, 3, 4, 2, 5]
- Compare 4 and 2. Since 4 is greater than 2, swap is made. Array is now [1, 3, 2, 4, 5]
- (The last element, 5, is already in its final position, so it is not compared again.)
- Third iteration:
- Compare 1 and 3. Since 1 is smaller than 3, no swap is made. Array is still [1, 3, 2, 4, 5]
- Compare 3 and 2. Since 3 is greater than 2, swap is made. Array is now [1, 2, 3, 4, 5]
- Fourth iteration:
- Compare 1 and 2. No swap is made, so the swap_made flag stays False and the loop exits early. The array [1, 2, 3, 4, 5] is sorted.
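To make the early-exit optimization visible, here is a small instrumented variant (the name bubble_sort_with_passes is ours, not from the repository) that also reports how many passes ran: three passes perform swaps, and one final clean pass detects that the array is sorted and stops.

```python
# Same algorithm as bubble_sort, but also counts the number of outer-loop passes.
def bubble_sort_with_passes(arr):
    n = len(arr)
    passes = 0
    for i in range(n):
        passes += 1
        swap_made = False
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swap_made = True
        # A pass with no swaps means the array is sorted: stop early.
        if not swap_made:
            break
    return arr, passes

print(bubble_sort_with_passes([3, 5, 1, 4, 2]))  # ([1, 2, 3, 4, 5], 4)
print(bubble_sort_with_passes([1, 2, 3]))        # ([1, 2, 3], 1)
```

On an already-sorted input, the very first pass makes no swaps, so the function finishes after a single pass.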
2.2.2 Selection sort
Selection sort is a simple sorting algorithm that sorts an array or list by repeatedly finding the minimum element in the unsorted part of the list and putting it at the beginning. The algorithm divides the input list into two parts: the sorted part, which is at the beginning of the list, and the unsorted part, which is at the end of the list.
The algorithm works as follows:
1. Find the minimum element in the unsorted part of the list.
2. Swap the minimum element with the first element of the unsorted part of the list.
3. Move the boundary of the sorted part one element to the right.
Repeat steps 1 to 3 until the entire list is sorted.
Here’s a pseudocode implementation of the selection sort algorithm:
```
for i = 0 to n-2
    // Find the minimum element in the unsorted part of the list
    min_index = i
    for j = i+1 to n-1
        if array[j] < array[min_index]
            min_index = j
    // Swap the minimum element with the first element of the unsorted part of the list
    temp = array[i]
    array[i] = array[min_index]
    array[min_index] = temp
```
Here’s a Python implementation of the selection sort algorithm, tested on an unsorted array [3, 5, 1, 4, 2]:
```python
def selection_sort(array):
    """
    The selection_sort function sorts an array of integers in ascending order.

    :param array: Pass the array to be sorted into the function
    :return: The sorted array
    :doc-author: Rabii Elbeji
    """
    n = len(array)  # Get the length of the input array
    for i in range(n - 1):  # Iterate through the unsorted part of the list
        # Find the minimum element in the unsorted part of the list
        min_index = i  # Set the minimum element index to the current element
        for j in range(i + 1, n):  # Iterate through the rest of the unsorted part of the list
            if array[j] < array[min_index]:  # If the current element is smaller than the minimum element
                min_index = j  # Update the minimum element index
        # Swap the minimum element with the first element of the unsorted part of the list
        temp = array[i]  # Store the current element in a temporary variable
        array[i] = array[min_index]  # Replace the current element with the minimum element
        array[min_index] = temp  # Replace the minimum element with the current element
    return array  # Return the sorted array


# Test the implementation with an unsorted array
array = [3, 5, 1, 4, 2]
sorted_array = selection_sort(array)
print(sorted_array)  # Output: [1, 2, 3, 4, 5]
Here’s a step-by-step explanation of how the selection sort algorithm works on the unsorted array [3, 5, 1, 4, 2]:
- First iteration of the outer loop:
- The first element of the unsorted part of the list is 3.
- The inner loop finds the minimum element in the unsorted part of the list, which is 1.
- The minimum element is swapped with the first element of the unsorted part of the list, so the array becomes [1, 5, 3, 4, 2].
- Second iteration of the outer loop:
- The first element of the unsorted part of the list is now 5.
- The inner loop finds the minimum element in the unsorted part of the list, which is 2.
- The minimum element is swapped with the first element of the unsorted part of the list, so the array becomes [1, 2, 3, 4, 5].
- In the remaining iterations, the inner loop finds that each minimum is already in its correct position, so no further swaps change the array and it is fully sorted: [1, 2, 3, 4, 5].
2.2.3 Insertion sort
Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time by comparisons.
The basic idea behind insertion sort is to divide the input array into two parts: a sorted section and an unsorted section. At the beginning of the algorithm, the sorted section is empty and the unsorted section contains all the elements of the array.
The algorithm then iterates through the unsorted section, taking each element in turn and inserting it into the correct position in the sorted section. To do this, it compares the current element with the elements in the sorted section, moving them up one position to make room for the current element. Once it has found the correct position for the element, it places it there and moves on to the next unsorted element.
This process is repeated until the whole array has been sorted. The resulting sorted array is contained entirely within the sorted section of the original array.
Here’s a pseudocode implementation of the insertion sort algorithm:
```
for i from 1 to n-1:
    key = array[i]
    j = i - 1
    while j >= 0 and array[j] > key:
        array[j+1] = array[j]
        j = j - 1
    array[j+1] = key
```
Here’s a Python implementation of the insertion sort algorithm, tested on an unsorted array [3, 5, 1, 4, 2]:
```python
def insertion_sort(arr):
    """
    The insertion_sort function takes an array as input and sorts it in ascending order.
    It does this by iterating through the array starting from the second element,
    and inserting each element into its correct position in a sorted subarray to its left.

    :param arr: Pass in the array to be sorted
    :return: In-place sorted array
    :doc-author: Rabii Elbeji
    """
    # iterate through the array starting from 2nd element
    # assume 1st element is already sorted
    for i in range(1, len(arr)):
        # store the current value
        value = arr[i]
        # iterate backwards through the sorted portion of the array
        # and shift elements to the right if they are greater than value
        j = i - 1
        while j >= 0 and value < arr[j]:
            arr[j + 1] = arr[j]
            j = j - 1
        # insert the value in its correct position
        arr[j + 1] = value


# Example usage:
arr = [3, 5, 1, 4, 2]
insertion_sort(arr)
print(arr)
```
Here’s a step by step explanation of how the insertion sort algorithm works on an unsorted array [3, 5, 1, 4, 2]:
- Starting with the second element, compare it with the first element. Since 5 is greater than 3, they are already in the correct order. Array: [3, 5, 1, 4, 2]
- Move to the third element, 1. Compare it with the elements to its left, 5 and 3, and move them one position to the right to make room for the 1. Insert the 1 in the first position. Array: [1, 3, 5, 4, 2]
- Move to the fourth element, 4. Compare it with the elements to its left, 5 and 3. Since 4 is greater than 3 but less than 5, move 5 one position to the right to make room for the 4. Insert the 4 in the second position. Array: [1, 3, 4, 5, 2]
- Move to the fifth and final element, 2. Compare it with the elements to its left: 5, 4, and 3 are each greater than 2, so move them one position to the right; 1 is smaller than 2, so insert the 2 in the second position. Array: [1, 2, 3, 4, 5]
The array is now sorted in ascending order.
2.2.4 Merge sort
Merge sort is a sorting algorithm that uses the divide-and-conquer paradigm to sort an array of elements. It works by recursively dividing the input array into smaller subarrays until the subarrays have only one element. Then, it merges the subarrays back together by comparing and combining the elements in sorted order until the entire array is sorted.
The merge sort algorithm can be implemented recursively or iteratively. The recursive implementation is intuitive and easy to understand, while the iterative implementation is more space-efficient.
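As a sketch of that iterative approach (the name merge_sort_bottom_up is ours), a bottom-up merge sort repeatedly merges adjacent runs of width 1, 2, 4, and so on, and needs no recursion stack:

```python
# Bottom-up (iterative) merge sort: merge adjacent sorted runs of growing width.
def merge_sort_bottom_up(array):
    result = list(array)
    n = len(result)
    width = 1
    while width < n:
        # Merge each pair of adjacent runs [lo:mid) and [mid:hi).
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            left, right = result[lo:mid], result[mid:hi]
            # Standard two-pointer merge of the two sorted runs.
            merged, i, j = [], 0, 0
            while i < len(left) and j < len(right):
                if left[i] < right[j]:
                    merged.append(left[i])
                    i += 1
                else:
                    merged.append(right[j])
                    j += 1
            result[lo:hi] = merged + left[i:] + right[j:]
        width *= 2
    return result

print(merge_sort_bottom_up([3, 5, 1, 4, 2]))  # [1, 2, 3, 4, 5]
```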
The merge process is the heart of the merge sort algorithm. It works by creating a new array and iterating over the two subarrays in parallel, selecting the smaller of the two front elements and appending it to the new array. When one of the subarrays has been fully iterated over, the remaining elements in the other subarray are added to the end of the new array.
Here’s a pseudocode implementation of the merge sort algorithm:
```
function merge_sort(array)
    if length(array) <= 1
        return array

    middle = floor(length(array) / 2)
    left_half = array[0:middle]
    right_half = array[middle:]

    left_sorted = merge_sort(left_half)
    right_sorted = merge_sort(right_half)

    return merge(left_sorted, right_sorted)

function merge(left_array, right_array)
    merged = []
    left_index = 0
    right_index = 0

    while left_index < length(left_array) and right_index < length(right_array)
        if left_array[left_index] < right_array[right_index]
            merged.append(left_array[left_index])
            left_index += 1
        else
            merged.append(right_array[right_index])
            right_index += 1

    merged.extend(left_array[left_index:])
    merged.extend(right_array[right_index:])

    return merged
```
Here’s a Python implementation of the merge sort algorithm, tested on an unsorted array [3, 5, 1, 4, 2]:
```python
def merge_sort(array):
    """
    The merge_sort function takes an array as input and returns a sorted version of that array.
    The function uses the merge sort algorithm to accomplish this task.

    :param array: Pass the array to be sorted into the function
    :return: A sorted array
    :doc-author: Rabii Elbeji
    """
    if len(array) <= 1:
        return array

    # Divide the input array into two halves
    middle = len(array) // 2
    left = merge_sort(array[:middle])
    right = merge_sort(array[middle:])

    # Merge the two sorted halves back together using the merge function
    return merge(left, right)


def merge(left, right):
    """
    The merge function takes two sorted lists and merges them into a single sorted list.

    :param left: The left subarray
    :param right: The right subarray
    :return: A merged and sorted array
    :doc-author: Rabii Elbeji
    """
    result = []  # Initialize an empty list to hold the sorted elements
    i, j = 0, 0  # Initialize two index variables to track the position in each subarray

    # Iterate over the two subarrays in parallel and append the smaller element to the result list
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1

    # Add any remaining elements from the left or right subarray to the result list
    result += left[i:]
    result += right[j:]

    # Return the sorted result list
    return result


# Test the merge sort algorithm on an unsorted array
array = [3, 5, 1, 4, 2]
sorted_array = merge_sort(array)
print(sorted_array)
```
Here’s a step by step explanation of how the merge sort algorithm works on an unsorted array [3, 5, 1, 4, 2]:
- The array is recursively divided into smaller subarrays until each subarray has only one element.
- The subarrays are then merged together in sorted order using the merge function.
- The merge function iterates through both subarrays, comparing the first element of each and appending the smaller one to the merged array.
- The merge function continues iterating through both subarrays, comparing the next element of each and appending the smaller one to the merged array until both subarrays are fully merged.
- The merge_sort function then recursively merges the sorted subarrays until the entire original array is fully sorted.
2.2.5 Quick sort
Quick sort is a popular and efficient sorting algorithm used to sort elements in a list or array. It is a divide-and-conquer algorithm that partitions the list into two smaller sub-lists: one sub-list containing elements less than a pivot value and the other sub-list containing elements greater than or equal to the pivot value.
The basic idea behind quick sort is to repeatedly choose a pivot element from the list, partition the list around the pivot, and then recursively apply the same process to the two sub-lists created by the partitioning.
Quick sort has several variations, including the Lomuto partition scheme and the Hoare partition scheme. The choice of pivot element and partitioning scheme can affect the performance of the algorithm. In addition, care must be taken to handle edge cases such as duplicate elements and very small input sizes.
Here’s a pseudocode implementation of the quick sort algorithm:
```
function quickSort(arr, left, right)
    if left < right
        pivotIndex = partition(arr, left, right)
        quickSort(arr, left, pivotIndex - 1)
        quickSort(arr, pivotIndex + 1, right)

function partition(arr, left, right)
    pivotValue = arr[right]
    pivotIndex = left
    for i = left to right - 1
        if arr[i] < pivotValue
            swap arr[i], arr[pivotIndex]
            pivotIndex++
    swap arr[pivotIndex], arr[right]
    return pivotIndex
```
Here’s a Python implementation of the quick sort algorithm, tested on an unsorted array [3, 5, 1, 4, 2]:
```python
def quick_sort(arr):
    """
    The quick_sort function takes an array of integers as input and returns a sorted version of the array.
    It uses the quick sort algorithm, a divide-and-conquer algorithm that works by partitioning an unsorted
    list into two sublists, one with elements less than a pivot element and another with elements greater
    than or equal to the pivot element, then recursively sorting each sublist until all elements are sorted.

    :param arr: Pass the input array to the function
    :return: A sorted array
    :doc-author: Rabii Elbeji
    """
    if len(arr) <= 1:
        return arr
    else:
        # Select the pivot element as the first element of the input array
        pivot = arr[0]
        left = []
        right = []
        # Partition the input array into two sub-arrays based on whether the
        # element is less than, or greater than or equal to, the pivot
        for i in range(1, len(arr)):
            if arr[i] < pivot:
                left.append(arr[i])
            else:
                right.append(arr[i])
        # Recursively sort the left and right sub-arrays and combine them with
        # the pivot to produce the final sorted array
        return quick_sort(left) + [pivot] + quick_sort(right)


# Test the implementation on an unsorted array
unsorted_array = [3, 5, 1, 4, 2]
sorted_array = quick_sort(unsorted_array)
print(sorted_array)
```
Here’s a step by step explanation of how the quick sort algorithm works on an unsorted array [3, 5, 1, 4, 2]:
- Select the first element of the input array as the pivot.
- Partition the input array into two sub-arrays based on whether the element is less than or greater than the pivot.
- Recursively sort the left and right sub-arrays.
- Combine the sorted sub-arrays with the pivot to produce the final sorted array.
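The pseudocode above uses the Lomuto scheme, while the Python version builds new lists. As an illustrative sketch (not the post’s implementation), here is an in-place quick sort using the Hoare partition scheme mentioned earlier, where two indices walk inward from the ends and swap out-of-place pairs:

```python
# In-place quick sort with the Hoare partition scheme (illustrative sketch).
def quick_sort_hoare(arr, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return arr
    # Pick the middle element as the pivot value.
    pivot = arr[(lo + hi) // 2]
    i, j = lo - 1, hi + 1
    while True:
        # Move i right past elements smaller than the pivot,
        # and j left past elements larger than the pivot.
        i += 1
        while arr[i] < pivot:
            i += 1
        j -= 1
        while arr[j] > pivot:
            j -= 1
        if i >= j:
            break
        arr[i], arr[j] = arr[j], arr[i]
    # j is the boundary between the two partitions.
    quick_sort_hoare(arr, lo, j)
    quick_sort_hoare(arr, j + 1, hi)
    return arr

print(quick_sort_hoare([3, 5, 1, 4, 2]))  # [1, 2, 3, 4, 5]
```

The Hoare scheme typically performs fewer swaps than Lomuto and handles runs of duplicate elements more gracefully.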
2.2.6 Heap sort
Heap sort is a comparison-based sorting algorithm that works by first creating a binary heap from the array to be sorted, and then sorting the heap. The binary heap used here is a max-heap: a tree-based data structure in which the parent node of any element has a value greater than or equal to the values of its child nodes. Once the binary heap is built, the maximum value is extracted from the root node and swapped with the last element in the heap. This element is then removed from the heap and placed in its sorted position at the end of the array. The binary heap is then restructured so that the root node contains the next maximum value, and the process is repeated until all elements are sorted.
The basic steps of heap sort are as follows:
1. Convert the array to a max-heap using the max-heapify algorithm.
2. Swap the root node with the last unsorted node.
3. Decrement the size of the heap by 1.
4. Re-heapify the remaining nodes.
5. Repeat steps 2-4 until the heap is empty.
Here is the pseudocode for the heap sort algorithm:
```
# Note: this pseudocode uses 1-based array indexing; the Python version below is 0-based.
heap_sort(array)
    n = length(array)

    # Convert the array to a max-heap
    for i = n/2 down to 1
        heapify(array, i, n)

    # Sort the heap
    for i = n down to 2
        swap(array[1], array[i])
        heapify(array, 1, i-1)

# Helper function to maintain the max-heap property
heapify(array, i, n)
    largest = i
    left = 2*i
    right = 2*i + 1

    if left <= n and array[left] > array[largest]
        largest = left
    if right <= n and array[right] > array[largest]
        largest = right

    if largest != i
        swap(array[i], array[largest])
        heapify(array, largest, n)
```
Here’s a Python implementation of the heap sort algorithm, tested on an unsorted array [3, 5, 1, 4, 2]:
```python
def heapify(array, n, i):
    """
    The heapify function takes an array, the length of the heap, and a starting index. It compares
    the node at that index with its children (left and right) to see if either of them is larger.
    If so, it swaps with the larger child and repeats on the affected subtree, until there are no
    more children or the parent node is greater than both children.

    :param array: Pass in the array that we want to heapify
    :param n: Specify the size of the heap
    :param i: Specify the index of the root node
    :return: In-place array
    :doc-author: Rabii Elbeji
    """
    largest = i
    left = 2 * i + 1
    right = 2 * i + 2

    if left < n and array[left] > array[largest]:
        largest = left
    if right < n and array[right] > array[largest]:
        largest = right

    if largest != i:
        array[i], array[largest] = array[largest], array[i]
        # Call heapify recursively on the affected subtree.
        heapify(array, n, largest)


def heap_sort(array):
    """
    The heap_sort function takes an array as input and returns a sorted version of the array.
    The function uses the heapify function to build a max heap from the given array, then extracts
    the elements one by one and places them in order at the end of the list. The remaining elements are
    then re-heapified until all elements have been extracted.

    :param array: Pass in the array to be sorted
    :return: The array sorted in ascending order
    :doc-author: Rabii Elbeji
    """
    n = len(array)
    # Build max heap
    for i in range(n // 2 - 1, -1, -1):
        heapify(array, n, i)
    # Extract elements one by one
    for i in range(n - 1, 0, -1):
        array[0], array[i] = array[i], array[0]
        # Call heapify on the remaining heap.
        heapify(array, i, 0)
    return array


# Test the implementation with an unsorted array
array = [3, 5, 1, 4, 2]
sorted_array = heap_sort(array)
print(sorted_array)  # Output: [1, 2, 3, 4, 5]
```
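For comparison, Python’s standard library heapq module maintains a min-heap (rather than the max-heap built above), so repeatedly popping the smallest element yields the array in ascending order directly:

```python
import heapq

array = [3, 5, 1, 4, 2]
heapq.heapify(array)  # O(n) in-place build of a min-heap

# Pop the smallest remaining element until the heap is empty.
result = []
while array:
    result.append(heapq.heappop(array))

print(result)  # [1, 2, 3, 4, 5]
```

This is a convenient alternative when you need heap operations anyway (e.g., a priority queue), though for plain sorting the built-in sorted() is the idiomatic choice.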
2.2.7 Counting sort
Counting sort is a stable sorting algorithm that sorts an array of integers by counting the occurrences of each unique element in the array and using this information to determine the relative order of the elements. It is a non-comparative sorting algorithm with linear time complexity, O(n + k) where k is the range of input values, which makes it very efficient when that range is not much larger than the number of elements.
The counting sort algorithm works by first determining the range of values in the input array, which allows it to create an array of counts for each value in the range. Then, it iterates over the input array and increments the corresponding count for each value it encounters. Next, it modifies the count array by summing up the counts of the previous values to determine the index at which each value should be placed in the output array. Finally, it iterates over the input array again, placing each value in its correct position in the output array based on the modified count array.
Here’s the pseudocode for the counting sort algorithm:
```
counting_sort(array):
    # find range of elements
    max_element = maximum in array
    min_element = minimum in array
    range_of_elements = max_element - min_element + 1

    # create count array
    count_array = array of zeros of length range_of_elements

    # fill count array with frequencies of values in input array
    for i = 0 to length of array - 1:
        count_array[array[i] - min_element] += 1

    # modify count array to contain actual position of elements in output array
    for i = 1 to length of count_array - 1:
        count_array[i] += count_array[i-1]

    # create the output array
    output_array = array of zeros of length of input array

    # iterate over input array, place element in its sorted position in output array, and decrement count array
    for i = length of array - 1 downto 0:
        output_array[count_array[array[i] - min_element] - 1] = array[i]
        count_array[array[i] - min_element] -= 1

    return output_array
```
Here’s an example Python implementation of the counting sort algorithm:
```python
def counting_sort(arr):
    """
    The counting_sort function takes an array of integers as input and returns a sorted version of the array.
    The function uses counting sort to accomplish this task.

    :param arr: Pass the array to be sorted
    :return: The sorted array
    :doc-author: Rabii Elbeji
    """
    # Determine the maximum and minimum elements in the array
    max_element = int(max(arr))
    min_element = int(min(arr))

    # Determine the range of elements in the array
    range_of_elements = max_element - min_element + 1

    # Create a count array and initialize it to all 0's
    count_arr = [0 for _ in range(range_of_elements)]

    # Create an output array and initialize it to all 0's
    output_arr = [0 for _ in range(len(arr))]

    # Traverse the input array and increment the value in the count array at the index of the element.
    for i in range(0, len(arr)):
        count_arr[arr[i] - min_element] += 1

    # Modify the count array such that each element at each index stores the sum of previous counts.
    for i in range(1, len(count_arr)):
        count_arr[i] += count_arr[i - 1]

    # Traverse the input array from right to left, and for each element, use the value in
    # the count array at that index to determine the index of the element in the output array.
    for i in range(len(arr) - 1, -1, -1):
        output_arr[count_arr[arr[i] - min_element] - 1] = arr[i]
        count_arr[arr[i] - min_element] -= 1

    # Return the sorted array
    return output_arr


# Test the implementation with an unsorted array
array = [3, 5, 1, 4, 2]
sorted_array = counting_sort(array)
print(sorted_array)  # Output: [1, 2, 3, 4, 5]
```
Here’s a step by step explanation of how counting sort works on the given unsorted array [3, 5, 1, 4, 2]:
- Determine the maximum and minimum elements in the array.
- Calculate the range of elements in the array.
- Create a count array initialized to all 0’s, and an output array of the same length as the input, also initialized to all 0’s.
- Traverse the input array and increment the value in the count array at the index of each element.
- Modify the count array so that each index stores the sum of the previous counts.
- Traverse the input array from right to left, and for each element, use the value in the count array at its index to determine its position in the output array.
- Return the sorted output array.
2.2.8 Radix sort
Radix sort is a type of sorting algorithm that sorts integers based on their digits. Instead of comparing each integer to another integer to determine its position in the sorted list, radix sort groups integers based on their least significant digit (i.e., the rightmost digit) and puts them in buckets. It then repeats this process, moving from the rightmost digit to the leftmost digit, until all the digits have been considered.
Radix sort can be implemented using either a stable or an unstable sorting algorithm for each per-digit pass. The stable approach ensures that the order of the input elements with equal digits is preserved after each pass, while the unstable approach does not make any such guarantees.
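A compact way to see why stability matters is to express radix sort with Python’s built-in list.sort, which is guaranteed stable: each per-digit pass then preserves the order established by earlier passes. This sketch (the name radix_sort_stable is ours) assumes a non-empty array of non-negative integers:

```python
# Radix sort built on Python's stable list.sort: one stable pass per digit,
# from least significant to most significant.
def radix_sort_stable(arr):
    result = list(arr)
    digit = 1
    while max(result) // digit > 0:
        # Stable sort by the current digit keeps earlier-pass order
        # among elements whose current digits are equal.
        result.sort(key=lambda x: (x // digit) % 10)
        digit *= 10
    return result

print(radix_sort_stable([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```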
Here’s the pseudocode for Radix Sort:
```
function radixSort(arr):
    maxNum = maximum element in arr
    digit = 1
    while maxNum // digit > 0:
        countingSort(arr, digit)
        digit *= 10

function countingSort(arr, digit):
    countArray = [0] * 10
    outputArray = [0] * len(arr)

    for i in range(len(arr)):
        digitValue = (arr[i] // digit) % 10
        countArray[digitValue] += 1

    for i in range(1, 10):
        countArray[i] += countArray[i-1]

    for i in range(len(arr)-1, -1, -1):
        digitValue = (arr[i] // digit) % 10
        outputArray[countArray[digitValue]-1] = arr[i]
        countArray[digitValue] -= 1

    for i in range(len(arr)):
        arr[i] = outputArray[i]
```
The countingSort function is a helper function that performs a counting sort on a particular digit of the input array.
Here’s a Python implementation of radix sort, along with a test example:
```python
def radix_sort(arr):
    """
    The radix_sort function takes an array of integers and sorts them in ascending order.
    It does this by using the counting sort algorithm to sort each digit from least significant
    to most significant. The function first finds the maximum number in the array, then uses that
    number to determine how many digits are needed for sorting (i.e., if max_num = 123, then 3 digits
    are needed). Then it calls counting_sort on each digit starting with the 1's place and ending
    with the 10^n's place.

    :param arr: Pass in the array that is to be sorted
    :return: In-place sorted array
    :doc-author: Rabii Elbeji
    """
    max_num = max(arr)
    digit = 1
    while max_num // digit > 0:
        counting_sort(arr, digit)
        digit *= 10


def counting_sort(arr, digit):
    """
    The counting_sort function takes in an array and a digit, and sorts the array based on that digit.
    The function first creates two arrays: count_array, which is an array of size 10 with all values
    initialized to 0, and output_array, which is an empty copy of arr. It then loops through the input
    arr and counts how many times each value appears in that specific place (e.g., the ones place),
    and adds up all previous values to get a cumulative sum for each index in count_array. Finally it
    loops through arr backwards, so as not to disturb the relative order of elements with equal digits
    (keeping the sort stable), and writes each element to its position in output_array.

    :param arr: Pass in the array to be sorted
    :param digit: Determine which digit of the number we are sorting by
    :return: None; arr is sorted in place by the given digit
    :doc-author: Rabii Elbeji
    """
    count_array = [0] * 10
    output_array = [0] * len(arr)

    for i in range(len(arr)):
        digit_value = (arr[i] // digit) % 10
        count_array[digit_value] += 1

    for i in range(1, 10):
        count_array[i] += count_array[i - 1]

    for i in range(len(arr) - 1, -1, -1):
        digit_value = (arr[i] // digit) % 10
        output_array[count_array[digit_value] - 1] = arr[i]
        count_array[digit_value] -= 1

    for i in range(len(arr)):
        arr[i] = output_array[i]


# Test the implementation with an unsorted array
arr = [170, 45, 75, 90, 802, 24, 2, 66]
radix_sort(arr)
print(arr)
```
2.1.9 Bucket sort
Bucket sort, also known as bin sort, is a sorting algorithm that works by distributing the elements of an input array into a number of buckets, and then sorting each bucket individually using another sorting algorithm or by recursively applying the bucket sort algorithm.
The basic idea of bucket sort is to divide the input array into a set of equally sized buckets, each of which represents a range of values in the input domain. The elements in the input array are then distributed into the appropriate bucket based on their value. Finally, the elements in each bucket are sorted, either using another sorting algorithm or by recursively applying bucket sort.
Here are the basic steps of the bucket sort algorithm:
- Create a set of empty buckets.
- For each element in the input array, compute its bucket index by dividing its value by the range of values in the input domain and multiplying the result by the number of buckets.
- Add the element to the appropriate bucket based on its bucket index.
- Sort each non-empty bucket.
- Concatenate the sorted elements from each bucket into the output array.
Here’s the pseudocode for Bucket Sort:
```
function bucketSort(arr):
    num_buckets = length(arr)
    buckets = create empty list of num_buckets empty lists
    for i in arr:
        bucket_index = floor(i * num_buckets)
        buckets[bucket_index].append(i)
    for bucket in buckets:
        sort(bucket)
    sorted_array = concatenate all non-empty buckets
    return sorted_array
```
Here’s a Python implementation of bucket sort, along with a test example:
```python
def bucket_sort(arr):
    """
    The bucket_sort function takes an array of floating point numbers in the range [0, 1)
    and sorts them in ascending order.
    The function first determines the number of buckets to use, which is equal to the length of the input array.
    It then creates a list containing that many empty lists, which will serve as our buckets. The elements from
    the input array are distributed into these buckets based on their value, and each non-empty bucket is
    sorted using Python's built-in sort method for lists.

    :param arr: Pass in the array to be sorted
    :return: A sorted array
    :doc-author: Rabii Elbeji
    """
    num_buckets = len(arr)
    # Create a list of empty lists to serve as the buckets
    buckets = [[] for _ in range(num_buckets)]
    # Distribute the elements of the input array into the appropriate bucket;
    # min() guards against an out-of-range index when a value equals 1.0
    for i in arr:
        bucket_index = min(int(i * num_buckets), num_buckets - 1)
        buckets[bucket_index].append(i)
    # Sort each non-empty bucket
    for bucket in buckets:
        bucket.sort()
    # Concatenate the sorted elements from each bucket into the output array
    sorted_array = [item for bucket in buckets for item in bucket]
    return sorted_array


# Test the implementation with an unsorted array
arr = [0.12, 0.23, 0.01, 0.34, 0.45, 0.56, 0.67, 0.78, 0.89, 0.90]
sorted_array = bucket_sort(arr)
print(sorted_array)
```
2.2 Searching algorithms
Searching algorithms, also known as search algorithms, are computer algorithms designed to retrieve or find a specific piece of data or information from a large data set. They are widely used in computer science and data engineering to quickly find relevant information in databases, search engines, and other data sources.
There are several different types of searching algorithms, including linear search, binary search, interpolation search, and hash search. Each of these algorithms has its own advantages and limitations, and the choice of algorithm depends on the specific needs of the application.
2.2.1 Linear search
Linear search is a simple searching algorithm that checks each element of a data structure in sequence until it finds the target element. It starts from the beginning of the data structure and iterates through each element until the target is found or the end of the data structure is reached. Linear search can be performed on any data structure, regardless of whether it is sorted or not.
The basic steps of a linear search algorithm are as follows:
- Read the target element to be searched for.
- Initialize a loop to iterate over each element in the data structure.
- Compare the current element with the target element.
- If the elements match, return the position/index of the element.
- If the end of the data structure is reached and the target element is not found, return a "not found" message or null.
Here is the pseudo code for the linear search algorithm:
```
linear_search(list, target):
    for i from 0 to length(list) - 1 do
        if list[i] equals target then
            return i
        end if
    end for
    return "not found"
```
Here is an example Python implementation of the linear search algorithm along with a few test cases:
```python
def linear_search(arr, target):
    """
    The linear_search function takes in an array and a target value.
    It then iterates through the array, checking if each element is equal to the target.
    If it finds a match, it returns that index. If not, it returns -1.

    :param arr: Pass in the array that is being searched
    :param target: Find the index of the target value in an array
    :return: The index of the target element if it is present in the array
    :doc-author: Rabii Elbeji
    """
    for i in range(len(arr)):
        if arr[i] == target:
            return i
    return -1


# Test cases
arr1 = [1, 2, 3, 4, 5]
target1 = 4
result1 = linear_search(arr1, target1)
print(result1)  # Output: 3

arr2 = ['apple', 'banana', 'cherry', 'date']
target2 = 'banana'
result2 = linear_search(arr2, target2)
print(result2)  # Output: 1

arr3 = [10, 20, 30, 40, 50]
target3 = 60
result3 = linear_search(arr3, target3)
print(result3)  # Output: -1
```
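For everyday code, it's worth knowing that Python's built-in `in` operator and `list.index` method perform exactly this kind of sequential scan (the `fruits` list here is just an illustrative example):

```python
fruits = ['apple', 'banana', 'cherry', 'date']

# The `in` operator scans the list from the start until it finds a match
print('banana' in fruits)       # Output: True

# index() returns the position of the first match (raises ValueError if absent)
print(fruits.index('banana'))   # Output: 1
```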
2.2.2 Binary search
Binary search is a search algorithm that is used to find the position of a target value in a sorted array or list. This algorithm works by repeatedly dividing the search interval in half to narrow down the possible location of the target value.
The basic steps of the binary search algorithm are as follows:
- Read the target value to be searched for.
- Define the initial search interval for the array, typically the entire range.
- Compute the midpoint of the search interval.
- Compare the target value with the value at the midpoint of the array.
- If the target value matches the midpoint value, return the position of the midpoint.
- If the target value is less than the midpoint value, discard the second half of the search interval and repeat the process with the first half.
- If the target value is greater than the midpoint value, discard the first half of the search interval and repeat the process with the second half.
- If the target value is not found after exhausting all possible intervals, return a "not found" message or null.
Here is the pseudocode for the binary search algorithm:
```
BinarySearch(arr, target):
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = floor((low + high) / 2)
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```
Here is an example Python implementation of the binary search algorithm, along with some test cases:
```python
def binary_search(arr, target):
    """
    The binary_search function takes in a sorted array and a target value.
    It returns the index of the target value if it is found, otherwise -1.

    :param arr: Pass in the array that we want to search through
    :param target: Find the index of a given value in an array
    :return: The index of the target element if it is found in the array
    :doc-author: Rabii Elbeji
    """
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1


# Test cases
arr1 = [2, 4, 6, 8, 10]
target1 = 6
result1 = binary_search(arr1, target1)
print(result1)  # Output: 2

arr2 = ['apple', 'banana', 'cherry', 'date']
target2 = 'banana'
result2 = binary_search(arr2, target2)
print(result2)  # Output: 1

arr3 = [10, 20, 30, 40, 50]
target3 = 60
result3 = binary_search(arr3, target3)
print(result3)  # Output: -1
```
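In practice, Python's standard library `bisect` module implements the same interval-halving strategy; a minimal sketch (the wrapper name `binary_search_bisect` is ours, not a standard API):

```python
from bisect import bisect_left


def binary_search_bisect(arr, target):
    # bisect_left returns the leftmost index where target could be
    # inserted while keeping arr sorted
    i = bisect_left(arr, target)
    if i < len(arr) and arr[i] == target:
        return i
    return -1


print(binary_search_bisect([2, 4, 6, 8, 10], 6))       # Output: 2
print(binary_search_bisect([10, 20, 30, 40, 50], 60))  # Output: -1
```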
2.2.3 Interpolation search
Interpolation search is an algorithm similar to binary search, but it makes use of the distribution of values in the data set to improve the search efficiency. In interpolation search, we first determine an interpolation formula that can be used to estimate the position of the target element in the data set. We then use this estimate to narrow down the search interval in the same way as we do in binary search.
The basic steps of the interpolation search algorithm are as follows:
- Read the target value to be searched for.
- Define the initial search interval for the array, typically the entire range.
- Compute the position estimate of the target element using an interpolation formula that takes into account the distribution of values in the data set.
- Compare the target value with the value at the estimated position.
- If the target value matches the estimated value, return the position of the target element.
- If the target value is less than the estimated value, discard the second half of the search interval and repeat the process with the first half.
- If the target value is greater than the estimated value, discard the first half of the search interval and repeat the process with the second half.
- If the target value is not found after exhausting all possible intervals, return a "not found" message or null.
Here is a simple pseudo-code implementation of the interpolation search algorithm:
```
InterpolationSearch(arr, target):
    low = 0
    high = len(arr) - 1
    while low <= high and target >= arr[low] and target <= arr[high]:
        pos = low + int((target - arr[low]) * (high - low) / (arr[high] - arr[low]))
        if arr[pos] == target:
            return pos
        elif arr[pos] < target:
            low = pos + 1
        else:
            high = pos - 1
    return -1
```
Here’s a Python implementation of the Interpolation search algorithm:
```python
def interpolation_search(arr, x):
    """
    The interpolation_search function takes in a sorted array and a value to search for.
    It returns the index of the value if it is present in the array, otherwise -1.
    The function uses the interpolation search algorithm to find an item.

    :param arr: Pass in the array to be searched
    :param x: Find the value that we are looking for in the array
    :return: The index of the element x in the array arr
    :doc-author: Rabii Elbeji
    """
    low = 0
    high = len(arr) - 1
    while low <= high and x >= arr[low] and x <= arr[high]:
        # Guard against division by zero when all remaining values are equal
        if arr[high] == arr[low]:
            return low if arr[low] == x else -1
        pos = low + int((x - arr[low]) * (high - low) / (arr[high] - arr[low]))
        if arr[pos] == x:
            return pos
        elif arr[pos] < x:
            low = pos + 1
        else:
            high = pos - 1
    return -1


# Test cases
arr = [10, 20, 30, 40, 50]
x = 30
result = interpolation_search(arr, x)
print(result)  # Output: 2

arr = [-2, 0, 3, 7, 11, 15, 18, 22, 30]
x = 15
result = interpolation_search(arr, x)
print(result)  # Output: 5

arr = [1, 2, 3, 4, 5]
x = 3
result = interpolation_search(arr, x)
print(result)  # Output: 2

arr = [1, 3, 5, 7, 9]
x = 2
result = interpolation_search(arr, x)
print(result)  # Output: -1
```
2.2.4 Hash search
Hashing is a widely used search technique that uses a hash function to map input data to a representative integer value, which can be used to narrow down search queries. The hash function in a hash table or hash map is used to compute an index, also called a hash code, into an array of buckets, from which the desired value can be found. A well-designed hash table attempts to minimize the number of hash collisions, where the hash function generates the same index for more than one key.
During lookup, the key is hashed and the resulting hash indicates where the corresponding value is stored. Ideally, each key is assigned a unique bucket in the hash table, which helps reduce the time complexity of searching for a specific key.
The key idea behind a hash search algorithm is to map keys to positions in the hash table so that the average time complexity of each lookup is independent of the number of elements stored in the table. This makes hash tables a popular choice for real-time lookups on large datasets.
Here’s a pseudocode for the basic Hash Search Algorithm:
```
create_hash_table()

function insert(hash_table, key, value):
    hash_value = hash(key)
    hash_table[hash_value] = value

function search(hash_table, key):
    hash_value = hash(key)
    if hash_table[hash_value] is not None:
        return hash_table[hash_value]
    else:
        return None

function hash(key):
    hash_value = 0
    for char in key:
        hash_value += ord(char)
    return hash_value % hash_table_size
```
Here’s a Python implementation of the Hash search algorithm:
```python
class HashTable:
    def __init__(self):
        """
        The __init__ function initializes the hash table with a size of 10.
        It also creates an empty list for each index in the hash table.

        :param self: Represent the instance of the class
        :return: A hash table of size 10
        :doc-author: Rabii Elbeji
        """
        self.size = 10
        self.hash_table = [[] for _ in range(self.size)]

    def hash_function(self, key):
        """
        The hash_function function takes in a key and returns the index of that key within the hash table.
        The function uses modulo to determine where to place each item.

        :param self: Represent the instance of the class
        :param key: Determine the index of the hash table where an item will be stored
        :return: The remainder of the key divided by the size
        :doc-author: Rabii Elbeji
        """
        return key % self.size

    def insert(self, key, value):
        """
        The insert function takes in a key and value, then hashes the key to find the index of the bucket it belongs in.
        It then checks if that bucket already contains an item with that key. If so, it replaces its value with the new one.
        If not, it appends a tuple containing both items to that bucket.

        :param self: Refer to the object that is calling the method
        :param key: Create the hash_key
        :param value: Store the value of the key
        :return: None
        :doc-author: Rabii Elbeji
        """
        hash_key = self.hash_function(key)
        bucket = self.hash_table[hash_key]
        found = False
        for i, (k, v) in enumerate(bucket):
            if k == key:
                bucket[i] = (k, value)
                found = True
                break
        if not found:
            bucket.append((key, value))

    def search(self, key):
        """
        The search function takes a key as an argument and returns the value associated with that key.
        If no value is found, it returns None.

        :param self: Represent the instance of the class
        :param key: Find the hash key
        :return: The value of the key that is passed in
        :doc-author: Rabii Elbeji
        """
        hash_key = self.hash_function(key)
        bucket = self.hash_table[hash_key]
        for k, v in bucket:
            if k == key:
                return v
        return None


ht = HashTable()
ht.insert(5, "apple")
ht.insert(20, "banana")
ht.insert(10, "orange")
ht.insert(15, "peach")

assert ht.search(10) == "orange"
assert ht.search(5) == "apple"
assert ht.search(15) == "peach"
assert ht.search(30) is None
```
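It's worth noting that Python's built-in `dict` is itself a highly optimized hash table, so in everyday code the hand-rolled class above is usually replaced by a plain dictionary (the `fruit_table` name here is just illustrative):

```python
# A dict maps keys to values via hashing, with O(1) average-case lookup
fruit_table = {5: "apple", 20: "banana", 10: "orange", 15: "peach"}

print(fruit_table[10])      # Output: orange
print(fruit_table.get(30))  # Output: None (get() avoids a KeyError for missing keys)
```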
2.3 Searching a graph
Graph search algorithms are a set of techniques used to explore and traverse a graph data structure in order to find specific nodes, paths, or other information. Graphs are a collection of vertices (also known as nodes) connected by edges. They can be directed or undirected, weighted or unweighted, and cyclic or acyclic. Graph search algorithms can be broadly classified into two categories: Breadth-First Search (BFS) and Depth-First Search (DFS).
2.3.1 Breadth-First Search
Breadth-First Search (BFS) is a popular graph traversal algorithm used to explore all the nodes of a graph or tree. The algorithm starts at the root node (or any arbitrary node) and explores all the neighboring nodes at the current depth before moving on to the nodes at the next depth level. In other words, BFS expands the nodes closest to the starting node before moving on to nodes farther away.
Here is an overview of the BFS algorithm:
- Start at the root node or any arbitrary node.
- Add the node to a queue and mark it as visited.
- While the queue is not empty:
  - Remove the first node from the queue.
  - Explore all the neighboring nodes of the removed node.
  - Add the unvisited neighboring nodes to the queue and mark them as visited.
Here’s a pseudocode implementation of the BFS algorithm:
```
BFS(G, s):
    // G is the graph and s is the starting node
    queue = Queue()
    visited = set()
    level = {}
    parent = {}
    queue.enqueue(s)
    visited.add(s)
    level[s] = 0
    parent[s] = None
    while not queue.isEmpty():
        u = queue.dequeue()
        for v in G.adjacent(u):
            if v not in visited:
                visited.add(v)
                level[v] = level[u] + 1
                parent[v] = u
                queue.enqueue(v)
    return visited, level, parent
```
Here’s an implementation of the BFS algorithm in Python using collections.deque as the queue:
```python
from collections import deque


def bfs(graph, start, goal):
    # keep track of visited nodes; marking them when they are enqueued
    # prevents the same node from being queued twice
    visited = {start}
    # keep track of nodes to visit, along with the path taken to reach them
    queue = deque()
    queue.append((start, [start]))
    while queue:
        (current, path) = queue.popleft()
        # check if current node is goal node
        if current == goal:
            return path
        # queue up all unvisited neighbours of current node
        for neighbour in graph[current]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, path + [neighbour]))
    # return None if there is no path
    return None


# test code
graph = {'A': ['B', 'C', 'E'],
         'B': ['A', 'D', 'E'],
         'C': ['A', 'F', 'G'],
         'D': ['B'],
         'E': ['A', 'B', 'D'],
         'F': ['C'],
         'G': ['C']}
start = 'A'
goal = 'G'
path = bfs(graph, start, goal)
if path:
    print(f"Shortest path from {start} to {goal}: {' -> '.join(path)}")
else:
    print(f"No path found from {start} to {goal}")
```
2.3.2 Depth-First Search
Depth-first search (DFS) is a common algorithm used for traversing or searching tree or graph data structures. The algorithm starts at the root node, or an arbitrary node in the case of a graph, and explores as far as possible along each branch before backtracking. This means that it will follow a path as deep as possible before backing up and exploring a different branch.
DFS is commonly implemented using recursion or a stack, where nodes are pushed onto the stack to represent the current path being explored. The backtracking process occurs when a node is popped off the stack to continue exploring other branches of the graph. DFS can be used for multiple purposes such as finding connected components, topological sorting, finding strongly connected components, and cycle detection, among others.
DFS is generally not guaranteed to find the shortest path, but it is more memory-efficient than breadth-first search (BFS) because it only needs to remember the path from the root node to the current node. BFS, on the other hand, requires the use of a data structure such as a queue to keep track of all the nodes that have been visited but not yet explored.
Here’s the pseudocode for an iterative implementation of Depth-First Search:
```
function DFS_iterative(graph, start):
    let visited be a set initialized to empty
    let stack be a stack initialized with the start node
    while stack is not empty:
        node = stack.pop()
        if node is not in visited:
            visited.add(node)
            for neighbor in graph[node]:
                if neighbor not in visited:
                    stack.push(neighbor)
    return visited
```
Here’s the pseudocode for a recursive implementation of Depth-First Search:
```
function DFS_recursive(graph, node, visited):
    visited.add(node)
    for neighbor in graph[node]:
        if neighbor not in visited:
            DFS_recursive(graph, neighbor, visited)
```
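The recursive pseudocode translates almost directly into Python; a minimal sketch (the `dfs_recursive` name and the default-argument handling of `visited` are our choices):

```python
def dfs_recursive(graph, node, visited=None):
    # Create the visited set on the first call
    if visited is None:
        visited = set()
    # Visit the current node, then recurse into each unvisited neighbour
    visited.add(node)
    for neighbour in graph[node]:
        if neighbour not in visited:
            dfs_recursive(graph, neighbour, visited)
    return visited


graph = {'A': ['B', 'C', 'E'],
         'B': ['A', 'D', 'E'],
         'C': ['A', 'F', 'G'],
         'D': ['B'],
         'E': ['A', 'B', 'D'],
         'F': ['C'],
         'G': ['C']}
print(dfs_recursive(graph, 'A'))  # visits all seven nodes
```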
Here’s an iterative implementation of the DFS algorithm in Python, using a list as the stack:
```python
def dfs(graph, start, goal):
    """
    The dfs function takes a graph, start node and goal node as input.
    It returns a path from the start to the goal node if there is one.
    If not, it returns None.

    :param graph: Pass in the graph that we want to search
    :param start: Specify the starting node
    :param goal: Specify the goal node
    :return: A list of nodes forming a path from start to goal, or None
    :doc-author: Rabii Elbeji
    """
    visited = set()
    stack = [(start, [start])]
    while stack:
        (vertex, path) = stack.pop()
        if vertex not in visited:
            if vertex == goal:
                return path
            visited.add(vertex)
            for neighbor in graph[vertex]:
                stack.append((neighbor, path + [neighbor]))
    return None


# test code
graph = {'A': ['B', 'C', 'E'],
         'B': ['A', 'D', 'E'],
         'C': ['A', 'F', 'G'],
         'D': ['B'],
         'E': ['A', 'B', 'D'],
         'F': ['C'],
         'G': ['C']}
start = 'A'
goal = 'G'
path = dfs(graph, start, goal)
# Note: unlike BFS, DFS does not guarantee the shortest path
if path:
    print(f"Path from {start} to {goal}: {' -> '.join(path)}")
else:
    print(f"No path found from {start} to {goal}")
```
3. Time Complexity
Time complexity is a measure of the amount of time it takes for an algorithm to run as a function of the size of the input. In other words, time complexity describes how an algorithm’s performance scales with the amount of data it’s processing.
The most common way to express time complexity is Big O notation, which provides an upper bound on the growth rate of the algorithm. For example, an algorithm that takes constant time (O(1)) will take the same amount of time to run regardless of the size of the input, while an algorithm that takes linear time (O(n)) will take time proportional to the size of the input.
Some common examples of algorithms with different time complexities include:
- Constant Time Complexity (O(1)): Algorithms whose running time is the same regardless of input size. Examples include retrieving the first element in an array or returning a constant value.
- Linear Time Complexity (O(n)): Algorithms whose running time increases linearly with input size. Examples include searching an unsorted array, accessing every element in an array, or iterating through a linked list.
- Quadratic Time Complexity (O(n^2)): Algorithms whose running time increases quadratically with input size. Examples include nested loops to compare all elements in a matrix or sorting algorithms like selection sort and bubble sort.
- Logarithmic Time Complexity (O(log n)): Algorithms whose running time increases logarithmically with input size. Examples include binary search and searching in balanced trees.
- Exponential Time Complexity (e.g. O(2^n)): Algorithms whose running time increases exponentially with input size. Examples include brute force algorithms like generating all possible permutations or subsets.
It’s important to consider the time complexity of an algorithm when choosing which one to use for a specific problem, particularly when working with large datasets. An algorithm with lower time complexity will generally be faster than an algorithm with higher time complexity, although there are other factors that can also affect performance, such as memory usage and processing power.
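To make these classes concrete, here is a small illustrative sketch (the function names are ours) of constant-, linear-, and quadratic-time functions:

```python
def first_element(arr):
    # O(1): a single index access, regardless of input size
    return arr[0]


def sum_elements(arr):
    # O(n): touches every element exactly once
    total = 0
    for x in arr:
        total += x
    return total


def count_pairs(arr):
    # O(n^2): the nested loops pair every element with every other
    pairs = 0
    for x in arr:
        for y in arr:
            pairs += 1
    return pairs


data = [3, 1, 4, 1, 5]
print(first_element(data))  # Output: 3
print(sum_elements(data))   # Output: 14
print(count_pairs(data))    # Output: 25
```

Doubling the length of `data` leaves `first_element` unchanged, doubles the work in `sum_elements`, and quadruples the work in `count_pairs`.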
4. Space Complexity
Space complexity is a measure of the amount of memory or storage space required by an algorithm as a function of the size of the input. It describes how much additional memory an algorithm needs to allocate to perform its operations.
Similar to time complexity, space complexity is often expressed using Big O notation. Some common examples of algorithms with different space complexities include:
- Constant Space Complexity (O(1)): Algorithms that require a fixed amount of additional memory regardless of input size. Examples include swapping two variables or accessing a single element in an array.
- Linear Space Complexity (O(n)): Algorithms that require additional memory proportional to the size of the input. Examples include merging two sorted arrays or copying the elements of an array.
- Quadratic Space Complexity (O(n^2)): Algorithms that require additional memory proportional to the square of the input size. Examples include generating all possible pairs of elements in a matrix.
- Logarithmic Space Complexity (O(log n)): Algorithms that require additional memory proportional to the logarithm of the input size. Examples include recursive algorithms like binary search.
- Exponential Space Complexity (e.g. O(2^n)): Algorithms that require additional memory proportional to some exponential function of the input size. Examples include generating all possible subsets of a set.
It’s important to consider the space complexity of an algorithm when working with large datasets, as an algorithm that requires too much memory can cause the program to crash or slow down significantly. An algorithm with lower space complexity will generally require less memory and be more efficient than an algorithm with higher space complexity.
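As a concrete illustration (the function names are ours), compare an O(n)-space reversal that allocates a copy of the list with an O(1)-space reversal that swaps elements in place:

```python
def reversed_copy(arr):
    # O(n) extra space: allocates a brand-new list of the same length
    return arr[::-1]


def reverse_in_place(arr):
    # O(1) extra space: only two index variables; swaps happen inside arr
    left, right = 0, len(arr) - 1
    while left < right:
        arr[left], arr[right] = arr[right], arr[left]
        left += 1
        right -= 1
    return arr


print(reversed_copy([1, 2, 3, 4]))     # Output: [4, 3, 2, 1]
print(reverse_in_place([1, 2, 3, 4]))  # Output: [4, 3, 2, 1]
```

Both produce the same result, but on a very large list the in-place version needs no additional memory proportional to the input.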
5. Conclusion
In this post, we discussed algorithms, which are step-by-step procedures for solving problems or performing tasks. We looked at some examples of algorithms, including sorting algorithms and search algorithms, and discussed how they are used in computer science.
Algorithms are essential in computer science because they allow us to solve complex problems efficiently and effectively. Without algorithms, computers would not be able to perform many of the tasks that we take for granted today.
By understanding algorithms and how they work, computer scientists can design more efficient and effective algorithms, which can lead to faster and more powerful computing systems.